Variational Bounds for Mixed-Data Factor Analysis

Mohammad Emtiyaz Khan, University of British Columbia, Vancouver, BC, Canada V6T 1Z4, emtiyaz@cs.ubc.ca
Guillaume Bouchard, Xerox Research Center Europe, 38240 Meylan, France, guillaume.bouchard@xerox.com
Benjamin M. Marlin, University of British Columbia, Vancouver, BC, Canada V6T 1Z4, bmarlin@cs.ubc.ca
Kevin P. Murphy, University of British Columbia, Vancouver, BC, Canada V6T 1Z4, murphyk@cs.ubc.ca

Abstract

We propose a new variational EM algorithm for fitting factor analysis models with mixed continuous and categorical observations. The algorithm is based on a simple quadratic bound to the log-sum-exp function. In the special case of fully observed binary data, the bound we propose is significantly faster than previous variational methods. We show that EM is significantly more robust in the presence of missing data compared to treating the latent factors as parameters, which is the approach used by exponential family PCA and other related matrix-factorization methods. A further benefit of the variational approach is that it can easily be extended to the case of mixtures of factor analyzers, as we show. We present results on synthetic and real data sets demonstrating several desirable properties of our proposed method.

1 Introduction

Continuous latent factor models, such as factor analysis (FA) and probabilistic principal components analysis (PPCA), are very commonly used density models for continuous-valued data. They have many applications, including latent factor discovery, dimensionality reduction, and missing data imputation. The factor analysis model asserts that a low-dimensional continuous latent factor z_n ∈ R^L underlies each high-dimensional observed data vector y_n ∈ R^D. Standard factor analysis models assume the prior on the latent factor has the form p(z_n) = N(z_n | 0, I), while the likelihood has the form p(y_n | z_n) = N(y_n | W z_n + µ, Σ).
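This generative model is easy to sample from. The sketch below (our own illustration; the sizes, seed, and variable names are not from the paper) draws data from a standard FA model and checks that the implied marginal covariance of y_n is W W^T + Σ:

```python
import numpy as np

rng = np.random.default_rng(0)
D, L, N = 6, 2, 50000                       # illustrative sizes
W = rng.standard_normal((D, L))             # factor loading matrix
mu = rng.standard_normal(D)                 # offset term
Sigma = np.diag(rng.uniform(0.1, 0.5, D))   # diagonal noise covariance

Z = rng.standard_normal((N, L))             # z_n ~ N(0, I)
Y = Z @ W.T + mu + rng.standard_normal((N, D)) @ np.sqrt(Sigma)

# marginalizing out z gives y ~ N(mu, W W^T + Sigma)
emp_cov = np.cov(Y, rowvar=False)
```

With a large sample, the empirical mean and covariance match µ and W W^T + Σ up to Monte Carlo error.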
W is the D × L factor loading matrix, µ is an offset term, and Σ is a D × D diagonal matrix specifying the marginal noise variances. If we set Σ = σ²I and require W to be orthogonal, we recover probabilistic principal components analysis (PPCA). Such models can be easily fit using the expectation-maximization (EM) algorithm [Row97, TB99]. The FA model can be extended to other members of the exponential family by requiring that the natural (canonical) parameters have the form W z_n + µ [WK01, CDS02, MHG08, LT10]. This is the unsupervised version of a generalized linear model (GLM), and is extremely useful since it allows for non-trivial dependencies between data variables with mixed types. The principal difficulty with the general FA model is computational tractability, both at training and test time. A problem arises because the Gaussian prior p(z_n) is not conjugate to the likelihood except when y_n also has a Gaussian distribution (the standard FA model). There are several approaches one can take to this problem. The simplest is to approximate the posterior p(z_n | y_n) using a point estimate, which is equivalent to viewing the latent variables as parameters and estimating them by maximum likelihood. This approach is known as exponential family PCA (ePCA) [CDS02]. We refer to it as the "MM" approach to fitting the general FA model since we maximize over z_n in the E-step, as well as W in the M-step.

[Figure 1: The generalized mixture of factor analyzers model for discrete and continuous data. Notation: q_n mixture indicator variable; z_n latent factor vector; y^C_n continuous data vector; y^D_nd discrete data variable; W^C_k, W^D_dk factor loading matrices; µ^C_k, µ^D_dk offset vectors; Σ^C_k continuous noise covariance; π mixture prior parameter; N = # data cases; L = # latent dimensions; K = # mixture components; D_c = # continuous variables; D_d = # discrete variables; M_d + 1 = # classes per discrete variable.]
The main drawback of the MM approach is that it ignores posterior uncertainty in z_n, which can result in over-fitting unless the model is carefully regularized [WCS08]. This is a particular concern when we have missing data. The opposite end of the model estimation spectrum is to integrate out both z_n and W using Markov chain Monte Carlo methods. This approach has recently been studied under the name "Bayesian exponential family PCA" [MHG08] using a Hamiltonian Monte Carlo (HMC) sampling approach. We refer to this as the "SS" approach, to indicate that we integrate out both z_n and W by sampling. The SS approach preserves posterior uncertainty about z_n (unlike the MM approach) and is robust to missing data, but can have a significantly higher computational cost. In this work, we study a variational EM model-fitting approach that preserves posterior uncertainty about z_n, is robust to missing data, and is more computationally efficient than SS. We refer to this as the "VM" approach, to indicate that we integrate over z_n in the E-step after applying a variational bound, and maximize over W in the M-step. We focus on the case of continuous (Gaussian) and categorical data. Our main contribution is the development of variational EM algorithms for factor analysis and mixtures of factor analyzers based on a simple quadratic lower bound on the multinomial likelihood (which subsumes the Bernoulli case) [Boh92]. This bound results in an EM iteration that is computationally more efficient than the bound previously proposed by Jaakkola for binary PCA when the training data is fully observed [JJ96], but is less tight. The proposed bound has further advantages relative to other previously introduced bounds, as we discuss in the following sections.

2 The Generalized Mixture of Factor Analyzers Model

In this section, we describe a model for mixed continuous and discrete data that we call the generalized mixture of factor analyzers model.
This model has two important special cases: mixture models and factor analysis, both for mixed continuous and discrete data. We use the general model as well as both special cases in subsequent experiments. In this work, we focus on Gaussian distributed continuous data and multinomially distributed discrete data. The graphical model is given in Figure 1, while the probabilistic model is given in Equations 1 to 5. We begin with a description of the general model and then highlight the two special cases. We let n ∈ {1 ... N} index data cases, d ∈ {1 ... D_d} index discrete data dimensions, and k ∈ {1 ... K} index mixture components. Superscripts C and D indicate variables associated with continuous and discrete data, respectively. We let y^C_n ∈ R^{D_c} denote the continuous data vector and y^D_nd ∈ {1 ... M+1} denote the d-th discrete data variable.¹ We use a one-of-(M+1) encoding for the discrete variables, where a variable y^D_nd = m is represented by an (M+1)-dimensional vector y^D_nd in which the m-th element is set to 1 and all remaining elements equal 0. We denote the complete data vector by y_n = [y^C_n, y^D_n1, ..., y^D_nD_d]. The generative process begins by sampling a state of the mixture indicator variable q_n for each data case n from a K-state multinomial distribution with parameters π. Simultaneously, a length-L latent factor vector z_n ∈ R^L is sampled from a zero-mean Gaussian distribution with precision parameter λ_z. Both steps are given in Equation 1. The natural parameters of the distribution over the data variables are obtained by passing the latent factor vector z_n through a linear function defined by a factor loading matrix and an offset term, both of which depend on the setting of the mixture indicator variable q_n.
$$p(z_n, q_n \mid \theta) = \mathcal{N}(z_n \mid 0, \lambda_z^{-1} I_L)\,\mathcal{M}(q_n \mid \pi) \quad (1)$$
$$p(y_n \mid z_n, q_n = k, \theta) = \mathcal{N}(y^C_n \mid W^C_k z_n + \mu^C_k, \Sigma^C_k) \prod_{d=1}^{D_d} \mathcal{M}(y^D_{nd} \mid S(\eta_{ndk})) \quad (2)$$
$$\eta_{ndk} = W^D_{dk} z_n + \mu^D_{dk} \quad (3)$$
$$S_m(\eta) = \exp[\eta_m - \mathrm{lse}(\eta)] \quad (4)$$
$$\mathrm{lse}(\eta) = \log\Big[\sum_{m=1}^{M+1} \exp(\eta_m)\Big] \quad (5)$$

Assuming that q_n = k, the continuous data vector y^C_n is Gaussian distributed with mean W^C_k z_n + µ^C_k and covariance Σ^C_k, and each discrete data variable y^D_nd is multinomially distributed with natural parameters η_ndk = W^D_dk z_n + µ^D_dk, as seen in Equation 2. Here, N(· | m, V) denotes a Gaussian distribution with mean m and covariance V, while M(· | α) denotes a multinomial distribution with parameter vector α such that Σ_i α_i = 1 and α_i ≥ 0. For the discrete data variables, the natural parameter vector is converted into the standard mean parameter vector through the softmax function S(η) = [S_1(η), ..., S_{M+1}(η)], where S_m(η) is defined in Equation 4. The softmax function is itself defined in terms of the log-sum-exp (LSE) function, which we give in Equation 5. We note that the factor loading matrices for the k-th mixture component are W^C_k ∈ R^{D_c × L} and W^D_dk ∈ R^{(M+1) × L}, while the offsets are µ^C_k ∈ R^{D_c} and µ^D_dk ∈ R^{M+1}. We define the ensemble of factor loading matrices and offsets to be W_k = [W^C_k, W^D_1k, ..., W^D_{D_d}k] and µ_k = [µ^C_k, µ^D_1k, ..., µ^D_{D_d}k], respectively. The complete set of parameters for this model is thus θ = {W_{1:K}, µ_{1:K}, Σ^C_{1:K}, π, λ_z}. To complete the model specification, we must specify the prior on these parameters. For each row of each factor loading matrix W_k, we use a Gaussian prior of the form N(0, λ_w^{-1} I). We use vague conjugate priors for the remaining parameters. As mentioned at the start of this section, this general model has two important special cases: generalized factor analysis and mixture models for mixed continuous and discrete data. The factor analysis model is obtained by using one mixture component and at least one latent factor (K = 1, L > 1).
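The softmax and log-sum-exp functions of Equations 4 and 5 can be sketched numerically as follows (our own illustration; the max-shift is a standard numerical-stability trick, not something the paper discusses):

```python
import numpy as np

def lse(eta):
    """Log-sum-exp of Eq. 5, shifted by the max for numerical stability."""
    m = np.max(eta)
    return m + np.log(np.sum(np.exp(eta - m)))

def softmax(eta):
    """Mean parameters S(eta), with S_m(eta) = exp(eta_m - lse(eta)) (Eq. 4)."""
    return np.exp(eta - lse(eta))

eta = np.array([2.0, 1.0, 0.0])   # M + 1 = 3 classes
p = softmax(eta)                  # a valid probability vector, ordered like eta
```

By construction the entries of `p` are positive and sum to one, so S maps natural parameters to mean parameters.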
The mixture model is obtained by using no latent factors and at least one mixture component (K > 1, L = 0). In the mixture model case where L = 0, the distribution is modeled through the offset parameters µ_k only. We will compare these three models in Section 5. Before concluding this section, we point out one key difference between the current model and other latent factor models for discrete data, such as multinomial PCA [BJ04] and latent Dirichlet allocation (LDA) [BNJ03]. In our model, the natural parameters for discrete data are defined on a low-dimensional linear subspace and are mapped to the mean parameters via the softmax function. In multinomial PCA and LDA, the mean parameters are instead directly defined on a low-dimensional linear subspace. The latter approach can also be extended to the mixed-data case [BDdF+03]. However, model fitting is even more computationally challenging than in our approach. In fact, the bounds we propose can be used in this alternative setting, but we leave this to future work.

[Footnote 1] Note that we assume all the discrete data variables have the same number of states, namely M + 1, for notational simplicity only. In the general case, the d-th discrete variable has M_d + 1 states.

3 Variational Bounds for Model Fitting

In the standard expectation-maximization (EM) algorithm for mixtures of factor analyzers, the E-step consists of taking the expectation of the complete-data log likelihood with respect to the posterior over the mixture indicator variable q_n and latent factors z_n. The M-step consists of maximizing the expected complete log likelihood with respect to the parameters θ. In the case of Gaussian observations, this posterior is available in closed form because of conjugacy. Introducing discrete observations, however, makes the posterior intractable to compute, because the likelihood for these observations is not conjugate to the Gaussian prior on the latent factors.
To overcome these problems, we propose to use a quadratic bound on the LSE function. This allows us to obtain closed-form updates for both the E and M steps. We use the quadratic bound described in [Boh92]; in the rest of the paper, we will refer to it as the "Bohning bound". For simplicity, we describe the bound for a single discrete measurement with K = 1 and µ_k = 0, which lets us suppress the n, k and d subscripts. To ensure identifiability, we assume that the last element of η is zero (this can be enforced by setting the last row of W to zero). The key idea behind the Bohning bound is to take a second-order Taylor series expansion of the LSE function around a point ψ. An upper bound to the LSE function is found by replacing the Hessian matrix H(ψ), which appears in the second-order term, with a fixed matrix A such that A − H(ψ) is positive definite for all ψ [Boh92]. Bohning gives one such matrix A, which we define below. The expansion point ψ is a free variational parameter that must be optimized.

$$\mathrm{lse}(\eta) \le \tfrac{1}{2}\eta^T A \eta - b_\psi^T \eta + c_\psi \quad (6)$$
$$A = \tfrac{1}{2}\left[I_M - \mathbf{1}_M \mathbf{1}_M^T/(M+1)\right] \quad (7)$$
$$b_\psi = A\psi - S(\psi) \quad (8)$$
$$c_\psi = \tfrac{1}{2}\psi^T A \psi - S(\psi)^T \psi + \mathrm{lse}(\psi) \quad (9)$$

Here ψ ∈ R^M is the vector of variational parameters, I_M is the M × M identity matrix, and 1_M is a vector of ones of length M. By substituting this bound into the log-likelihood, completing the square, and exponentiating, we obtain the Gaussian lower bound described below, with a Gaussian-like "pseudo" observation ỹ_ψ corresponding to the discrete observation y^D.

$$p(y^D \mid z, W) \ge h(\psi)\,\mathcal{N}(\tilde{y}_\psi \mid W z, A^{-1}) \quad (10)$$
$$\tilde{y}_\psi = A^{-1}(b_\psi + y^D) \quad (11)$$
$$h(\psi) = |2\pi A^{-1}|^{1/2} \exp\left(\tfrac{1}{2}\tilde{y}_\psi^T A \tilde{y}_\psi - c_\psi\right) \quad (12)$$

We use this result to obtain a lower bound for each mixed data vector y_n. For clarity, we suppress the ψ subscripts, which differ for each data point n and each discrete variable d. Let ỹ_n = [y^C_n, ỹ_{1,n}, ..., ỹ_{D_d,n}] be the data vector for a given n and ψ.
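A numerical sketch of the bound in Equations 6-9 (our own code). We follow the identifiability convention above: the last natural parameter is fixed to zero, so `lse` here sums over M + 1 terms with the last set to 0, and S(ψ) is the first M components of the corresponding softmax:

```python
import numpy as np

def lse(eta):
    # log-sum-exp over M + 1 classes, with the last natural parameter fixed to 0
    z = np.append(eta, 0.0)
    m = np.max(z)
    return m + np.log(np.sum(np.exp(z - m)))

def S(eta):
    # first M components of the softmax of [eta; 0]
    z = np.append(eta, 0.0)
    return np.exp(z - lse(eta))[:-1]

def bohning_upper(eta, psi):
    """Quadratic upper bound on lse(eta), Eqs. 6-9; psi is the expansion point."""
    M = len(eta)
    A = 0.5 * (np.eye(M) - np.ones((M, M)) / (M + 1))   # Eq. 7: fixed curvature
    b = A @ psi - S(psi)                                 # Eq. 8
    c = 0.5 * psi @ A @ psi - S(psi) @ psi + lse(psi)    # Eq. 9
    return 0.5 * eta @ A @ eta - b @ eta + c             # Eq. 6

eta = np.array([1.0, -0.5, 0.2])
slack = bohning_upper(eta, np.zeros(3)) - lse(eta)   # nonnegative gap of the bound
```

The bound upper-bounds lse(η) for any ψ and is tight when ψ = η, which is the property the E-step exploits.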
It is straightforward to show that this pseudo observation gives the following lower bound on the joint likelihood:

$$p(\tilde{y}_n \mid z_n) = \mathcal{N}(\tilde{y}_n \mid \tilde{W} z_n, \tilde{\Sigma}), \qquad \tilde{W} = \left[W^C, W^D_1, \ldots, W^D_{D_d}\right], \qquad \tilde{\Sigma} = \mathrm{diag}(\Sigma^C, A_1^{-1}, \ldots, A_{D_d}^{-1})$$

Given this pseudo observation, the computation of the posterior means m_n and covariances V_n is similar to the Gaussian FA model, as seen below. This result can be generalized to the mixture case in a straightforward way. The M-step is the same as in mixtures of Gaussian factor analyzers [GH96].

$$V_n = (\tilde{W}^T \tilde{\Sigma}^{-1} \tilde{W} + \lambda_z I_L)^{-1}, \qquad m_n = V_n \tilde{W}^T \tilde{\Sigma}^{-1} \tilde{y}_n \quad (13)$$

The only question remaining is how to obtain the value of ψ. By maximizing the lower bound, one can show that the optimal value is ψ_n = W̃ m_n. This follows from the fact that the Bohning bound is tight for lse(η) when ψ = η, and that the curvature is independent of η [Boh92]. We iterate this update until convergence; in practice, we find that the method usually converges in five or fewer iterations. The most attractive feature of the bound described above is its computational efficiency. To see this, note that the posterior covariance V_n does not in fact depend on n if the data vector is fully observed, since A is a constant matrix. Consequently, we need only invert V_n once outside the EM loop instead of N times, once for each data point. We will see in the next section that the other existing quadratic bounds do not have this property. To derive the overall computational cost of our EM algorithm, let us define the total dimension of ỹ_n to be D and assume K = 1. Computing V_n takes O(L³ + L²D) time, and computing each m_n takes O(L² + LD) time. So the total cost of one E-step is O(L³ + L²D + NI(L² + LD)), where I is the number of variational updates. If there is missing data, V_n will change across data cases, so the total cost will be O(NI(L³ + L²D)).
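Equation 13 can be sketched directly (illustrative sizes of our own choosing; for brevity we let Σ̃^{-1} be the identity rather than the true block-diagonal matrix built from Σ_C^{-1} and the A_d matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 3, 5                              # made-up latent and observed dimensions
lam_z = 1.0
W_tilde = rng.standard_normal((D, L))    # stacked loading matrix W~
Sigma_tilde_inv = np.eye(D)              # stand-in for block-diag(Sigma_C^-1, A_1, ..., A_Dd)
y_tilde = rng.standard_normal(D)         # pseudo data vector y~_n

# Eq. 13: posterior covariance and mean of z_n given the pseudo observation
V = np.linalg.inv(W_tilde.T @ Sigma_tilde_inv @ W_tilde + lam_z * np.eye(L))
m = V @ W_tilde.T @ Sigma_tilde_inv @ y_tilde
psi_next = W_tilde @ m                   # the variational update psi = W~ m_n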
3.1 Comparison with Other Bounding Methods

In the binary case, the Bohning bound reduces to

$$\log(1 + e^\eta) \le \tfrac{1}{2} A \eta^2 - b_\psi \eta + c_\psi,$$

where A = 1/4, b_ψ = Aψ − (1 + e^{−ψ})^{−1}, and c_ψ = ½Aψ² − (1 + e^{−ψ})^{−1}ψ + log(1 + e^ψ). It is interesting to compare this bound to Jaakkola's bound [JJ96], used in [Tip98, YT04], which can also be written in quadratic form:

$$\log(1 + e^\eta) \le \tfrac{1}{2} \tilde{A}_\xi \eta^2 - \tilde{b}_\xi \eta + \tilde{c}_\xi,$$

where Ã_ξ = 2λ_ξ, b̃_ξ = −1/2, c̃_ξ = −λ_ξ ξ² − ½ξ + log(1 + e^ξ), and λ_ξ = (1/(2ξ))(1/(1 + e^{−ξ}) − 1/2).

Although the Jaakkola bound is tighter than the Bohning bound, it has higher computational complexity. The reason is that the Ã_ξ parameter depends on ξ and hence on n, which means we need to compute a different posterior covariance matrix for each n. Consequently, the cost of an E-step is O(NI(L³ + L²D)) even if there is no missing data (note the L³ term inside the NI loop). To explore the speed vs. accuracy trade-off, we use the synthetic binary data described in [MHG08] with N = 600, D = 16, and 10% missing data. We learn a binary FA model with L = 10, λ_z = 1, and λ_w = 0. We learn on the observed entries in the data matrix and compute the mean squared error (MSE) on the held-out missing entries as in [MHG08]. We average the results over 20 repetitions of the experiment. We see in Figure 2 (top left) that the Jaakkola bound gives a lower MSE than Bohning's bound in less time on this data. Next, we consider the case where the training data is fully observed, using a modified version of the data-generating procedure described in [MHG08]. We vary D from 16 to 128 while setting L = 0.25D and N = 10D. We sample L different binary prototypes at random, assign each data case to a prototype, and add 10% random binary noise. We measure the average time per iteration over 40 iterations of each method. Figure 2 (bottom left) shows that the Bohning bound exhibits much better scalability per iteration than the Jaakkola bound in this regime.
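The two binary bounds can be compared numerically (our own sketch; function names are ours). Both are tight at their expansion point, and at a shared expansion point the Jaakkola bound is never looser, consistent with the tightness claim above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))   # log(1 + e^x), the binary LSE

def bohning_binary(eta, psi):
    # fixed curvature A = 1/4; tight at eta = psi
    A = 0.25
    b = A * psi - sigmoid(psi)
    c = 0.5 * A * psi ** 2 - sigmoid(psi) * psi + softplus(psi)
    return 0.5 * A * eta ** 2 - b * eta + c

def jaakkola_binary(eta, xi):
    # curvature 2*lambda_xi varies with xi (assumes xi != 0); tight at eta = xi
    lam = (sigmoid(xi) - 0.5) / (2.0 * xi)
    c = -lam * xi ** 2 - 0.5 * xi + softplus(xi)
    return lam * eta ** 2 + 0.5 * eta + c
```

Both quadratics match the value and slope of log(1 + e^η) at the expansion point; since the Jaakkola curvature never exceeds 1/4, its bound sits between the true function and the Bohning bound.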
The speed issue becomes more serious when combining binary variables with categorical variables. First, there is no direct extension of the Jaakkola bound to the general categorical case. Hence, to combine categorical variables with binary variables, we could use the Jaakkola bound for the binary variables and the Bohning bound for the rest. However, this is not computationally efficient, since the Jaakkola bound forces us to compute the posterior covariance for each data point. For computational simplicity, we therefore use Bohning's bound for both binary and categorical data. Various other bounds and approximations to the multinomial likelihood also exist; however, they are all more computationally intensive and do not give an efficient variational algorithm. To the best of our knowledge, these methods have not been applied to the FA model, but we describe them briefly for completeness. An extension of the Jaakkola bound to the multinomial case was given in [Bou07], but it tends to be less accurate than the Bohning bound. Another approach [BL06] is to use the concavity of the log function to write lse(η) ≤ ν(1 + Σ_{j=1}^M exp(η_j)) − log ν − 1, where ν is a variational parameter. This bound does not give closed-form updates for the E and M steps, so a numerical optimizer must be used (see [BL06] for details). Instead of using a bound, an alternative approach is to apply a quadratic approximation derived from a Taylor series expansion of the LSE function [AX07]. This gives a tighter approximation that could perform better than a bound, but one cannot make convergence guarantees when using it inside EM. In practice, we found this alternative approach to be very slow on the datasets that we consider. In view of its speed and simplicity, we will only consider the Bohning method for the remainder of the paper.
[Figure 2: Top left: accuracy vs. speed of variational EM with the Bohning bound (FA-VM), the Jaakkola bound (FA-VJM) and HMC (FA-SS) on synthetic binary data. Bottom left: time per iteration of EM with the Bohning and Jaakkola bounds as we vary D. Right: MSE vs. λ_w for FA-MM, FA-VM, and FA-SS on synthetic Gaussian data. We show results on the test and training sets, for 10% and 50% missing data.]

4 Alternative Estimation Approaches

In this section, we discuss several alternative methods for fitting the generalized FA model in the case K = 1, which we compare to the VM method. We defer comparisons of FA to mixture models to Section 5.

4.1 Maximize-Maximize (MM) Method

The simplest approach to fitting the FA model is to maximize log p(Y, Z, W | λ_w, λ_z) with respect to Z and W, the matrix of latent factor values and the factor loading matrix. It is straightforward to compute the gradient of the log posterior and apply a generic optimizer (we use a limited-memory quasi-Newton method). Alternatively, one can use coordinate descent [CDS02]. We set the hyperparameters λ_w and λ_z by cross-validation. To handle missing data, we simply evaluate the gradients by summing only over the observed entries of Y. At test time, consider a data vector consisting of missing and observed components, y* = [y*_m, y*_o]. To fill in the missing entries, we compute ẑ* = arg max p(z*, y*_o | Ŵ) and use it with θ̂ to predict y*_m. The MM approach is simple and widely applicable, but these benefits come at the expense of ignoring the posterior variance of Z [WCS08].
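For the Gaussian special case, each MM block update has a closed form (a ridge regression), which makes the alternating scheme easy to sketch. The paper uses a generic quasi-Newton optimizer instead, so this is a simplified illustration of ours, with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, L = 200, 10, 5                 # made-up sizes
lam_w, lam_z, sigma2 = 1.0, 1.0, 0.01

Z_true = rng.standard_normal((N, L))
W_true = rng.standard_normal((D, L))
Y = Z_true @ W_true.T + np.sqrt(sigma2) * rng.standard_normal((N, D))

W = 0.1 * rng.standard_normal((D, L))
mse0 = np.mean(Y ** 2)               # error of predicting zero everywhere
for _ in range(100):
    # each block update maximizes log p(Y, Z, W) over one block: a ridge regression
    Z = Y @ W @ np.linalg.inv(W.T @ W + sigma2 * lam_z * np.eye(L))
    W = Y.T @ Z @ np.linalg.inv(Z.T @ Z + sigma2 * lam_w * np.eye(L))
mse = np.mean((Y - Z @ W.T) ** 2)    # reconstruction error after fitting
```

Each update increases the joint log posterior, so the reconstruction error drops toward the noise floor; the over-fitting risk discussed above comes from treating the point estimate Z as if it carried no uncertainty.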
This has negative consequences for the method in terms of sensitivity to the parameters λ_w and λ_z. To illustrate this effect, we generate a continuous dataset with D = 10, L = 5, and N = 200 data cases by sampling from the FA model. We set λ_w = 1, λ_z = 1, and σ_c = 0.1. We standardize each data dimension to have zero mean and unit variance. We consider the cases of 10% and 50% missing data. We evaluate the sensitivity of the methods to the setting of the prior precision parameter λ_w by varying it over the range 10^{-2} to 10^2. We fix λ_z = 1, since this is the standard assumption when fitting FA models. We run the methods on a random 50/50 train/test split. We train on the observed entries in the training set, and then compute MSE on the missing entries in the training and test sets. We average the results over 20 repetitions of the experiment. Figure 2 (top right) shows that the test MSE of the MM method is extremely sensitive to the prior precision λ_w, and that this sensitivity increases as a function of the missing-data rate. We hypothesize that this is a result of the MM method ignoring the posterior uncertainty in Z. This is supported by looking at the MSE on the training set, Figure 2 (bottom right): the MM method overfits when λ_w is small. Consequently, MM requires a careful discrete search over the values of λ_w, which is slow, since the quality of each such value must be estimated by cross-validation. By contrast, the VM method takes the posterior uncertainty about Z into account, resulting in almost no sensitivity to λ_w over this range. Henceforth we set λ_w = 0 for VM, meaning we perform (approximate) maximum likelihood parameter estimation.

4.2 Sample-Sample (SS) Method

An alternative to the MM approach is to sample both Z and W from their posteriors using Hamiltonian Monte Carlo (HMC) [MHG08]. We call this the "SS" method, since we sample both Z and W.
HMC leverages the fact that we can compute the gradient of the log posterior in closed form. However, it has several important parameters that must be set, including the step size, the momentum distribution, and the number of leapfrog steps. To handle missing data, we can simply evaluate the gradients by summing only over the observed entries of Y; we do not need to impute the missing entries in the training set. At test time, we have a collection of samples of W. For each sample of W and each test case, we sample a set of z values and compute an averaged prediction for y_m. In Figure 2 (right), we see that SS is insensitive to λ_w, just like VM, since it also models posterior uncertainty in Z (note that the absolute MSE values are higher for SS than VM, since for continuous data VM corresponds to EM with an exact posterior). However, in Figure 2 (top left), we see that SS can be much slower than VM. In the remainder of the paper we focus on deterministic fitting methods only.

5 Experiments on Real Data

In this section, we evaluate the performance of our model on real data with mixed continuous and discrete variables. We consider the following three cases of our model: (1) a model with latent factors but no mixtures (FA), (2) a model with mixtures but no latent factors (Mix), and (3) the general mixture of factor analyzers model (MixFA). To learn the FA model, we consider the FA-MM and FA-VM approaches. For the Mix model, we use the standard EM algorithm; continuous variables can be modeled with either a diagonal or a full covariance matrix, and we refer to these two variants as Mix-Diag and Mix-Full. For the MixFA model, we use the VM approach. This gives us five methods: FA-MM, FA-VM, MixFA, Mix-Full and Mix-Diag. We consider three real datasets of different sizes (see the table in Figure 3).² For each dataset, we use 70% for training, 10% for validation and 20% for testing. We consider 20 splits for each dataset.
We use the validation set to determine the number of latent factors and the number of mixture components (ranges shown in the table), with imputation error (described below) as our performance objective. For the FA-MM method, we set the values of the regularization parameters λ_z and λ_w by cross-validation, using the range {0.01, 0.1, 1, 10, 100} for both. As VM is robust to the setting of these parameters, we set λ_z = 1 and λ_w = 0. One way to assess the performance of a generative model is to see how well it can impute missing data. We do this by randomly introducing missing values into the test data with a missing-data rate of 0.3. For continuous variables, we compute the imputation MSE averaged over all the missing values (these variables are standardized beforehand). For discrete variables, we report the cross-entropy −y^T log p̂ (averaged over missing values), where p̂_m is the estimated probability that y = m and y uses the one-of-(M+1) encoding. These errors are shown in Figure 3, along with the running time for the ASES dataset in the bottom right subfigure. We see that FA-VM consistently performs better than FA-MM on all the datasets. Moreover, because of the need for cross-validation, FA-MM takes more time than FA-VM. We also see that the Mix model, although faster, performs worse than FA-VM. Finally, as expected, MixFA generally performs slightly better than FA, but takes longer to run.
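The two imputation metrics can be sketched as follows (helper names are ours; the cross-entropy carries the conventional negative sign, so lower is better):

```python
import numpy as np

def imputation_mse(y_true, y_pred):
    """MSE over missing continuous entries (variables standardized beforehand)."""
    return np.mean((y_true - y_pred) ** 2)

def imputation_cross_entropy(Y_onehot, P_hat, eps=1e-12):
    """Average -y^T log p_hat over missing discrete entries (one-of-(M+1) rows)."""
    return -np.mean(np.sum(Y_onehot * np.log(P_hat + eps), axis=1))

# a uniform predictor over M + 1 = 4 classes scores log(4) per missing entry
Y_miss = np.eye(4)[[0, 2, 3]]
P_unif = np.full((3, 4), 0.25)
ce = imputation_cross_entropy(Y_miss, P_unif)
```

The uniform-predictor value log(M + 1) gives a useful baseline against which the reported discrete errors can be read.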
[Footnote 2] Adult and Auto are available in the UCI repository; the ASES dataset is a subset of the Asia-Europe Survey from www.icpsr.umich.edu.

Dataset details (Figure 3, left):

                 Auto          Adult         ASES
  N              392           45222         16815
  D_d            3             5             42
  Σ M_d          21            27            156
  D_c            5             4             0
  D              26            31            156
  L              5, 13, 26     4, 15, 31     20, 40, 60, 80
  K              1, 5, 10, 20  1, 5, 10, 20  1, 10, 20, 30, 40

[Figure 3: Left: the table shows the details of each dataset used. Here D = D_c + Σ M_d is the total size of the data vector. L and K are the ranges of the number of latent factors and mixture components used for cross-validation. Note that the maximum value of L is D, as required by the FA model. Right: the figure shows the imputation error for each dataset for continuous and discrete variables. The bottom right subfigure shows the timing comparison for the ASES dataset.]

6 Discussion and Future Work

In this work we have proposed a new variational EM algorithm for fitting factor analysis models with mixed data. The algorithm is based on the Bohning bound, a simple quadratic bound on the log-sum-exp function. In the special case of fully observed binary data, the Bohning bound iteration is theoretically faster than Jaakkola's bound iteration, and we have demonstrated this advantage empirically. More importantly, the Bohning bound also easily extends to the categorical case. This enables, for the first time, an efficient variational method for fitting a factor analysis model to mixed continuous, binary, and categorical observations. In comparison to the maximize-maximize (MM) method, which forms the basis of ePCA and other matrix factorization methods, our variational EM method accounts for posterior uncertainty in the latent factors, leading to reduced sensitivity to hyperparameters.
This has important practical consequences, as the MM method requires extensive cross-validation while our approach does not. We have compared a range of models and algorithms in terms of imputation performance on real data. This analysis shows that the cost of the cross-validation search for MM is higher than the cost of fitting the FA model using our method. It also shows that standard alternatives to FA, such as finite mixture models, do not perform as well as FA. Finally, we show that the MixFA model can yield a performance improvement over a single FA model, although at a higher computational cost. We note that the quadratic bound that we study can be used in a variety of other models, such as linear-Gaussian state-space models with categorical observations [SH03]. It might be an interesting alternative to the Laplace approximation to the posterior used in [KPBSK10, RMC09]. The bound might also be useful in the context of the correlated topic model [BL06, AX07], where similar variational EM methods have been applied. In the Bayesian statistics literature, it is common to use latent factor models combined with a probit observation model; this allows one to perform inference for the latent states using efficient auxiliary-variable MCMC techniques (see e.g., [HSC09, Dun07]). Additionally, the recently proposed Riemannian manifold Hamiltonian Monte Carlo sampler [GCC09] may significantly speed up sampling-based approaches for mixed-data factor analysis models. We leave a comparison to these approaches to future work.

Acknowledgments

We would like to thank the reviewers for their helpful comments. This work was completed in part at the Xerox Research Center Europe and was supported by the Pacific Institute for the Mathematical Sciences and the Killam Trusts at the University of British Columbia.

References

[AX07] A. Ahmed and E. Xing. On tight approximate inference of the logistic-normal topic admixture model. In AI/Statistics, 2007.
[BDdF+03] K. Barnard, P. Duygulu, N. de Freitas, D. Forsyth, D. Blei, and M. I. Jordan. Matching words and pictures. J. of Machine Learning Research, 3:1107–1135, 2003.
[BJ04] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In UAI, 2004.
[BL06] D. Blei and J. Lafferty. Correlated topic models. In NIPS, 2006.
[BNJ03] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. J. of Machine Learning Research, 3:993–1022, 2003.
[Boh92] D. Bohning. Multinomial logistic regression algorithm. Annals of the Inst. of Statistical Math., 44:197–200, 1992.
[Bou07] G. Bouchard. Efficient bounds for the softmax and applications to approximate inference in hybrid models. In NIPS 2007 Workshop on Approximate Inference in Hybrid Models, 2007.
[CDS02] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal components analysis to the exponential family. In NIPS-14, 2002.
[Dun07] D. Dunson. Bayesian methods for latent trait modelling of longitudinal data. Stat. Methods Med. Res., 16(5):399–415, Oct 2007.
[GCC09] M. Girolami, B. Calderhead, and S. A. Chin. Riemannian manifold Hamiltonian Monte Carlo. Arxiv preprint arXiv:0907.1100, 2009.
[GH96] Z. Ghahramani and G. Hinton. The EM algorithm for mixtures of factor analyzers. Technical report, Dept. of Comp. Sci., Univ. of Toronto, 1996.
[HSC09] P. R. Hahn, J. Scott, and C. Carvalho. Sparse factor-analytic probit models. Technical report, Duke, 2009.
[JJ96] T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In AI/Statistics, 1996.
[KPBSK10] S. Koyama, L. Perez-Bolde, C. Shalizi, and R. Kass. Approximate methods for state-space models. Technical report, CMU, 2010.
[LT10] J. Li and D. Tao. Simple exponential family PCA. In AI/Statistics, 2010.
[MHG08] S. Mohamed, K. Heller, and Z. Ghahramani. Bayesian exponential family PCA. In NIPS, 2008.
[RMC09] H. Rue, S. Martino, and N. Chopin.
Approximate Bayesian Inference for Latent Gaussian Models Using Integrated Nested Laplace Approximations. J. of Royal Stat. Soc. Series B, 71:319–392, 2009.
[Row97] S. Roweis. EM algorithms for PCA and SPCA. In NIPS, 1997.
[SH03] V. Siivola and A. Honkela. A state-space method for language modeling. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 548–553, 2003.
[TB99] M. Tipping and C. Bishop. Probabilistic principal component analysis. J. of Royal Stat. Soc. Series B, 61(3):611–622, 1999.
[Tip98] M. Tipping. Probabilistic visualization of high-dimensional binary data. In NIPS, 1998.
[WCS08] Max Welling, Chaitanya Chemudugunta, and Nathan Sutter. Deterministic latent variable models and their pitfalls. In Intl. Conf. on Data Mining, 2008.
[WK01] Michel Wedel and Wagner Kamakura. Factor analysis with (mixed) observed and latent variables in the exponential family. Psychometrika, 66(4):515–530, December 2001.
[YT04] K. Yu and V. Tresp. Heterogenous data fusion via a probabilistic latent-variable model. In Organic and Pervasive Computing (ARCS 2004), 2004.
2010
On Herding and the Perceptron Cycling Theorem

Andrew E. Gelfand, Yutian Chen, Max Welling
Department of Computer Science, University of California, Irvine
{agelfand,yutianc,welling}@ics.uci.edu

Laurens van der Maaten
Department of CSE, UC San Diego; PRB Lab, Delft University of Tech.
lvdmaaten@gmail.com

Abstract

The paper develops a connection between traditional perceptron algorithms and recently introduced herding algorithms. It is shown that both algorithms can be viewed as an application of the perceptron cycling theorem. This connection strengthens some herding results and suggests new (supervised) herding algorithms that, like CRFs or discriminative RBMs, make predictions by conditioning on the input attributes. We develop and investigate variants of conditional herding, and show that conditional herding leads to practical algorithms that perform better than or on par with related classifiers such as the voted perceptron and the discriminative RBM.

1 Introduction

The invention of the perceptron [12] goes back to the very beginning of AI more than half a century ago. Rosenblatt's very simple, neurally plausible learning rule made it an attractive algorithm for learning relations in data: for every input x_i, make a linear prediction about its label, y*_i = w^T x_i, and update the weights as

w ← w + x_i (y_i − y*_i).    (1)

A critical evaluation by Minsky and Papert [11] revealed the perceptron's limited representational power. This fact is reflected in the behavior of Rosenblatt's learning rule: if the data is linearly separable, then the learning rule converges to the correct solution in a number of iterations that can be bounded by (R/γ)^2, where R represents the norm of the largest input vector and γ represents the margin between the decision boundary and the closest data-case. However, 'for data sets that are not linearly separable, the perceptron learning algorithm will never converge' (quoted from [1]).
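As a concrete reference point, Rosenblatt's rule can be written in a few lines. The sketch below is our own illustration, not code from the paper: it uses ±1 labels and takes the prediction to be y* = sign(w^T x), so the update in Eqn. 1 is nonzero only on mistakes.

```python
import numpy as np

def perceptron_epoch(w, X, y):
    """One pass of Rosenblatt's rule: w <- w + x_i (y_i - y_i*).

    With +/-1 labels and y_i* = sign(w^T x_i), the update fires
    only when the current prediction is wrong.
    """
    for x_i, y_i in zip(X, y):
        y_pred = 1.0 if w @ x_i >= 0 else -1.0
        w = w + x_i * (y_i - y_pred)
    return w

# A linearly separable toy problem: the rule converges to a separator.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.zeros(2)
for _ in range(10):
    w = perceptron_epoch(w, X, y)
assert all(np.sign(X @ w) == y)  # all training cases classified correctly
```

On separable data such as this, the number of mistakes before convergence is bounded by (R/γ)^2, as stated above.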
While the above result is true, the theorem in question has something much more powerful to say. The 'perceptron cycling theorem' (PCT) [2, 11] states that for the inseparable case the weights remain bounded and do not diverge to infinity. In this paper, we show that the implication of this theorem is that certain moments are conserved on average. Denoting the data-case selected at iteration t by i_t (note that the same data-case can be picked multiple times), the corresponding attribute vector and label by (x_{i_t}, y_{i_t}) with x_i ∈ X, and the label predicted by the perceptron at iteration t for data-case i_t by y*_{i_t}, we obtain the following result:

|| (1/T) Σ_{t=1}^T x_{i_t} y_{i_t} − (1/T) Σ_{t=1}^T x_{i_t} y*_{i_t} || ∼ O(1/T)    (2)

This result implies that, even though the perceptron learning algorithm does not converge in the inseparable case, it generates predictions that correlate with the attributes in the same way as the true labels do. More importantly, the correlations converge to the sample mean at a rate 1/T, which is much faster than sampling-based algorithms that converge at a rate 1/√T. By using general features φ(x), the above result can be extended to the matching of arbitrarily complicated statistics between data and predictions.

In the inseparable case, we can interpret the perceptron as a bagging procedure and average predictions instead of picking the single best (or last) weights found during training. Although not directly motivated by the PCT and Eqn. 2, this is exactly what the voted perceptron (VP) [5] does. Interesting generalization bounds for the voted perceptron have been derived in [5]. Extensions of VP to chain models have been explored in, e.g., [4].

Herding is a seemingly unrelated family of algorithms for unsupervised learning [15, 14, 16, 3]. In traditional methods for learning Markov Random Field (MRF) models, the goal is to converge to a single parameter estimate and then perform (approximate) inference in the resulting model.
In contrast, herding combines the learning and inference phases by treating the weights as dynamic quantities and defining a deterministic set of updates such that averaging predictions preserves certain moments of the training data. The herding algorithm generates a weakly chaotic sequence of weights and a sequence of states of both hidden and visible variables of the MRF model. The intermediate states produced by herding are really ‘representative points’ of an implicit model that interpolates between data cases. We can view these states as pseudo-samples, which analogously to Eqn. 2, satisfy certain constraints on their average sufficient statistics. However, unlike in perceptron learning, the non-convergence of the weights is needed to generate long, non-periodic trajectories of states that can be averaged over. In this paper, we show that supervised perceptron algorithms and unsupervised herding algorithms can all be derived from the PCT. This connection allows us to strengthen existing herding results. For instance, we prove fast convergence rates of sample averages when we use small mini-batches for making updates, or when we use incomplete optimization algorithms to run herding. Moreover, the connection suggests new algorithms that lie between supervised perceptron and unsupervised herding algorithms. We refer to these algorithms as “conditional herding” (CH) because, like conditional random fields, they condition on the input features. From the perceptron perspective, conditional herding can be understood as “voted perceptrons with hidden units”. Conditional herding can also be interpreted as the zero temperature limit of discriminative RBMs (dRBMs) [10]. 2 Perceptrons, Herding and the Perceptron Cycling Theorem We first review the perceptron cycling theorem that was initially introduced in [11] with a gap in the proof that was fixed in [2]. A sequence of vectors {wt}, wt ∈RD, t = 0, 1, . . . 
is generated by the following iterative procedure: w_{t+1} = w_t + v_t, where v_t is an element of a finite set V and the norm of v_t is bounded: max_i ||v_i|| = R < ∞.

Perceptron Cycling Theorem (PCT). ∀t ≥ 0: if w_t^T v_t ≤ 0, then there exists a constant M > 0 such that ||w_t − w_0|| < M.

The theorem still holds when V is a finite set in a Hilbert space. The PCT immediately leads to the following result:

Convergence Theorem. If the PCT holds, then || (1/T) Σ_{t=1}^T v_t || ∼ O(1/T).

This result is easily shown by observing that ||w_{T+1} − w_0|| = || Σ_{t=1}^T Δw_t || = || Σ_{t=1}^T v_t || < M, and dividing all terms by T.

2.1 Voted Perceptron and Moment Matching

The voted perceptron (VP) algorithm [5] repeatedly applies the update rule in Eqn. 1. Predictions of test labels are made after each update and final label predictions are taken as an average of all intermediate predictions. The PCT convergence theorem leads to the result of Eqn. 2, where we identify V = {x_i (y_i − y*_i) | y_i = ±1, y*_i = ±1, i = 1, ..., N}. For the VP algorithm, the PCT thus guarantees that the moments ⟨xy⟩ under the empirical distribution p̃(x, y) are matched with ⟨xy*⟩ under p(y*|x) p̃(x), where p(y*|x) is the model distribution implied by how VP generates y*. In maximum entropy models, one seeks a model that satisfies a set of expectation constraints (moments) from the training data, while maximizing the entropy of the remaining degrees of freedom [9]. In contrast, a single perceptron strives to learn a deterministic mapping p(y*|x) = δ[y* − arg max_y (y w^T x)] that has zero entropy and gets every prediction on every training case correct (where δ is the delta function). Entropy is created in p(y*|x) only when the weights w_t do not converge (i.e., for inseparable data sets). Thus, VP and maximum entropy methods are related, but differ in how they handle the degrees of freedom that are unconstrained by moment matching.

2.2 Herding

A new class of unsupervised learning algorithms, known as "herding", was introduced in [15].
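The moment-matching guarantee of Section 2.1 is easy to check empirically. The following sketch, our own toy example rather than code from the paper, runs perceptron updates on a deliberately inseparable 1-D problem (with a bias feature) and verifies that the gap between data moments and prediction moments shrinks like 1/T, as Eqn. 2 predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inseparable 1-D data (with a bias feature): no w classifies all four
# points correctly, so the weights cycle forever without converging.
X = np.array([[1.0, 1.0], [-1.0, 1.0], [2.0, 1.0], [-2.0, 1.0]])
y = np.array([1.0, -1.0, -1.0, 1.0])

w = np.zeros(2)
pos = np.zeros(2)  # running sum of x_{i_t} y_{i_t}
neg = np.zeros(2)  # running sum of x_{i_t} y*_{i_t}
T = 10000
for t in range(T):
    i = rng.integers(len(X))
    y_pred = 1.0 if w @ X[i] >= 0 else -1.0
    w += X[i] * (y[i] - y_pred)   # perceptron update, Eqn. 1
    pos += X[i] * y[i]
    neg += X[i] * y_pred

# The gap equals ||w_T - w_0|| / T, which the PCT bounds by M / T.
gap = np.linalg.norm(pos / T - neg / T)
assert gap < 0.05
```

Note that the gap is exactly ||w_T − w_0||/T here, since the summed updates telescope; boundedness of the weights (the PCT) is what makes it vanish.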
Rather than learning a single 'best' MRF model that can be sampled from to estimate quantities of interest, herding combines learning and inference into a single process. In particular, herding produces a trajectory of weights and states that reproduce the moments of the training data. Consider a fully observed MRF with features φ(x), x ∈ X = [1, ..., K]^m, with K the number of states for each variable x_j (j = 1, ..., m), and with an energy function E(x) given by:

E(x) = −w^T φ(x).    (3)

In herding [15], the parameters w are updated as:

w_{t+1} = w_t + φ̄ − φ(x*_t),    (4)

where φ̄ = (1/N) Σ_i φ(x_i) and x*_t = arg max_x w_t^T φ(x). Eqn. 4 looks like a maximum likelihood (ML) gradient update, with constant learning rate and maximization in place of expectation in the right-hand side. This follows from taking the zero temperature limit of the ML objective (see Section 2.5). The maximization prevents the herding sequence from converging to a single point estimate on this alternative objective.

Let {w_t} denote the sequence of weights and {x*_t} denote the sequence of states (pseudo-samples) produced by herding. We can apply the PCT to herding by identifying V = {φ̄ − φ(x*) | x* ∈ X}. It is now easy to see that, in general, herding does not converge, because under very mild conditions we can always find an x*_t such that w_t^T v_t < 0. From the PCT convergence theorem, we also see that ||φ̄ − (1/T) Σ_{t=1}^T φ(x*_t)|| ∼ O(1/T), i.e., the pseudo-sample averages of the features converge to the data averages φ̄ at a rate 1/T (see footnote 1). This is considerably faster than i.i.d. sampling from the corresponding MRF model, which would converge at a rate of 1/√T. Since the cardinality of the set V is exponentially large (i.e., |V| = K^m), finding the maximizing state x*_t at each update may be hard. However, the PCT only requires us to find some state x*_t such that w_t^T v_t ≤ 0, and in most cases this can easily be verified.
Hence, the PCT provides a theoretical justification for using a local search algorithm that performs partial energy maximization. For example, we may start the local search from the state we ended up in during the previous iteration (a so-called persistent chain [13, 17]). Or, one may consider contrastive divergence-like algorithms [8], in which the sampling or mean field approximation is replaced by a maximization. In this case, maximizations are initialized on all data-cases and the weights are updated by the difference between the average over the data-cases and the average over the {x*_i} found after (partial) maximization. The set V is then given by: V = {φ̄ − (1/N) Σ_i φ(x*_i) | x*_i ∈ X ∀i}. For obvious reasons, it is now guaranteed that w_t^T v_t ≤ 0.

In practice, we often use mini-batches of size n < N instead of the full data set. In this case, the cardinality of the set V is enlarged to |V| = C(n, N) K^m, with C(n, N) representing the 'N choose n' ways to compute the sample mean φ̄_(n) based on a subset of n data-cases. The negative term remains unaltered. Since the PCT still applies: || (1/T) Σ_{t=1}^T φ̄_(n),t − (1/T) Σ_{t=1}^T φ(x*_t) || ∼ O(1/T). Depending on how the mini-batches are picked, convergence onto the overall mean φ̄ can be either O(1/√T) (random sampling with replacement) or O(1/T) (sampling without replacement, which has picked all data-cases after ⌈N/n⌉ rounds).

2.3 Hidden Variables

The discussion so far has considered only constant features: φ(x, y) = xy for VP and φ(x) for herding. However, the PCT allows us to consider more general features that depend on the weights w, as long as the image of this feature mapping (and therefore the update vector v) is a set of finite cardinality.

Footnote 1: Similar convergence could also be achieved (without concern for generalization performance) by sampling directly from the training data. However, herding converges with rate 1/T and is regularized by the weights to prevent overfitting.
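The fully observed herding dynamics of Eqn. 4 can be simulated in a few lines. In the sketch below (our own toy example, not the paper's code), we take φ(x) = x with x ∈ {−1, +1}^m, so the maximization arg max_x w^T x reduces to sign(w); the pseudo-sample average then tracks the data mean at rate 1/T:

```python
import numpy as np

# Toy training data in {-1,+1}^3; the moments to match are the columnwise means.
X = np.array([[1, 1, -1], [1, -1, -1], [-1, 1, 1], [1, 1, 1]], dtype=float)
phi_bar = X.mean(axis=0)

w = np.array([0.3, -0.2, 0.1])   # arbitrary finite initialization
samples = np.zeros(3)            # running sum of pseudo-samples phi(x*_t)
T = 5000
for t in range(T):
    x_star = np.where(w >= 0, 1.0, -1.0)  # argmax_x w^T x for phi(x) = x
    w += phi_bar - x_star                 # herding update, Eqn. 4
    samples += x_star

# Pseudo-sample moments match data moments at rate O(1/T).
assert np.linalg.norm(samples / T - phi_bar) < 0.01
```

Since the updates telescope, the moment discrepancy equals ||w_0 − w_T||/T, and the weights stay bounded because φ̄ lies inside the convex hull of the states.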
In [14], such features took the form of 'hidden units':

φ(x, z),   z(x, w) = arg max_{z′} w^T φ(x, z′).    (5)

In this case, we identify the vector v as v = φ̄(x, z) − φ(x*, z*). In the left-hand term of this expression, x is clamped to the data-cases and z is found as in Eqn. 5 by maximizing every data-case separately; in the right-hand (or negative) term, x*, z* are found by jointly maximizing w^T φ(x, z). The quantity φ̄(x, z) denotes a sample average over the training cases. We note that φ̄(x, z) indeed maps to a finite domain because it depends on the real parameter w only through the discrete state z. We also notice again that w^T v ≤ 0 because of the definition of (x*, z*). From the convergence theorem we find that || (1/T) Σ_{t=1}^T φ̄(x, z_t) − (1/T) Σ_{t=1}^T φ(x*_t, z*_t) || ∼ O(1/T). This result can be extended to mini-batches as well.

2.4 Conditional Herding

We are now ready to propose our new algorithm: conditional herding (CH). Like the VP algorithm, CH is concerned with discriminative learning and, therefore, it conditions on the input attributes {x_i}. CH differs from VP in that it uses hidden variables, similar to the herder described in the previous subsection. In the most general setting, CH uses features:

φ(x, y, z),   z(x, y, w) = arg max_{z′} w^T φ(x, y, z′).    (6)

In the experiments in Section 3, we use the explicit form:

w^T φ(x, y, z) = x^T W z + y^T B z + θ^T z + α^T y,    (7)

where W, B, θ and α are the weights, z is a binary vector and y is a binary vector in a 1-of-K scheme (see Figure 1). At each iteration t, CH randomly samples a subset of the data-cases and their labels, D_t = {x_{i_t}, y_{i_t}} ⊆ D. For every member of this mini-batch it computes a hidden variable z_{i_t} using Eqn. 6. The parameters are then updated as:

w_{t+1} = w_t + (η/|D_t|) Σ_{i_t ∈ D_t} ( φ(x_{i_t}, y_{i_t}, z_{i_t}) − φ(x_{i_t}, y*_{i_t}, z*_{i_t}) ).    (8)

In the positive term, z_{i_t} is found as in Eqn. 6.
The negative term is obtained (similar to the perceptron) by making a prediction for the labels, keeping the input attributes fixed:

(y*_{i_t}, z*_{i_t}) = arg max_{y′, z′} w^T φ(x_{i_t}, y′, z′),   ∀ i_t ∈ D_t.    (9)

For the PCT to apply to CH, the set V of update vectors must be finite. The inputs x can be real-valued because we condition on the inputs and there will be at most N distinct values (one for each data-case). However, since we maximize over y and z, these states must be discrete for the PCT to apply.

Eqn. 8 includes a potentially vector-valued stepsize η. Notice, however, that scaling w ← λw will have no effect on the values of z, z* or y*, and hence on v. Therefore, if we also scale η ← λη, then the sequence of discrete states z_t, z*_t, y*_t will not be affected either. Since w_t = η Σ_{t′=0}^{t−1} v_{t′} + w_0, the only scale that matters is the relative scale between w_0 and η. If there were just a single attractor set for the dynamics of w, the initialization w_0 would only represent a transient effect. However, in practice the scale of w_0 relative to that of η does play an important role, indicating that many different attractor sets exist for this system. Irrespective of the attractor we end up in, the PCT guarantees that:

|| (1/T) Σ_{t=1}^T (1/|D_t|) Σ_{i_t} φ(x_{i_t}, y_{i_t}, z_{i_t}) − (1/T) Σ_{t=1}^T (1/|D_t|) Σ_{i_t} φ(x_{i_t}, y*_{i_t}, z*_{i_t}) || ∼ O(1/T).    (10)

In general, herding systems perform better when we use normalized features: ||φ(x, z, y)|| = R, ∀(x, z, y). The reason is that herding selects states by maximizing the inner product w^T φ, and features with large norms will therefore become more likely to be selected. In fact, one can show that states inside the convex hull of the φ(x, y, z) are never selected. For binary (±1) variables all states live on the convex hull, but this need not be true in general, especially when we use continuous attributes x.
To remedy this, one can either normalize features or add one additional feature (see footnote 2), φ_0(x, y, z) = sqrt( R_max^2 − ||φ(x, y, z)||^2 ), where R_max = max_{x,y,z} ||φ(x, y, z)|| and x is only allowed to vary over the data-cases. Finally, predictions on unseen test data are made by:

(y*_{tst,t}, z*_{tst,t}) = arg max_{y′, z′} w_t^T φ(x_tst, y′, z′).    (11)

The algorithm is summarized in the box below.

Conditional Herding (CH)
1. Initialize w_0 (with finite norm) and y_avg,j = 0 for all test cases j.
2. For t ≥ 0:
   (a) Choose a subset {x_{i_t}, y_{i_t}} = D_t ⊆ D. For each (x_{i_t}, y_{i_t}), choose a hidden state z_{i_t}.
   (b) Choose a set of "negative states" {(x*_{i_t} = x_{i_t}, y*_{i_t}, z*_{i_t})} such that:
       (1/|D_t|) Σ_{i_t} w_{t−1}^T φ(x_{i_t}, y_{i_t}, z_{i_t}) ≤ (1/|D_t|) Σ_{i_t} w_{t−1}^T φ(x_{i_t}, y*_{i_t}, z*_{i_t}).    (12)
3. Update w_t according to Eqn. 8.
4. Predict on test data as follows:
   (a) For every test case x_{tst,j} at every iteration, choose negative states (y*_{tst,jt}, z*_{tst,jt}) in the same way as for training data.
   (b) Update the online average over predictions, y_avg,j, for all test cases j.

2.5 Zero Temperature Limit of Discriminative MRF Learning

Regular herding can be understood as gradient descent on the zero temperature limit of an MRF model. In this limit, gradient updates with constant step size never lead to convergence, irrespective of how small the step size is. Analogously, CH can be viewed as constant step size gradient updates on the zero temperature limit of discriminative MRFs (see [10] for the corresponding RBM model). The finite temperature model is given by:

p(y|x) = Σ_z exp( w^T φ(y, z, x) ) / Σ_{z′, y′} exp( w^T φ(y′, z′, x) ).    (13)

Similar to herding [14], conditional herding introduces a temperature by replacing w with w/T and takes the limit T → 0 of ℓ_T ≜ Tℓ, where ℓ = Σ_i log p(y_i|x_i).

3 Experiments

We studied the behavior of conditional herding on two artificial and four real-world data sets, comparing its performance to that of the voted perceptron [5] and that of discriminative RBMs [10].
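To make the algorithm box concrete, here is a minimal sketch of one CH update for the energy of Eqn. 7. This is our own illustration, not the paper's code; the dimensions, step size, and single-case "mini-batch" are arbitrary choices. With z ∈ {−1, +1}^K the inner maximization over z decomposes coordinate-wise, and the negative phase simply enumerates the K one-of-K labels:

```python
import numpy as np

def best_z(x, y, W, B, theta):
    # For z in {-1,+1}^K, the maximization in Eqn. 6 decomposes per coordinate.
    return np.where(x @ W + y @ B + theta >= 0, 1.0, -1.0)

def energy(x, y, z, W, B, theta, alpha):
    # w^T phi(x, y, z) = x^T W z + y^T B z + theta^T z + alpha^T y  (Eqn. 7)
    return x @ W @ z + y @ B @ z + theta @ z + alpha @ y

def ch_update(params, x, y_true, eta=0.1):
    """One conditional herding step (Eqn. 8) on a single data case."""
    W, B, theta, alpha = params
    C = alpha.shape[0]
    z_pos = best_z(x, y_true, W, B, theta)   # positive phase: y clamped
    # Negative phase: maximize jointly over one-of-K labels y and z (Eqn. 9).
    best = None
    for k in range(C):
        y = -np.ones(C)
        y[k] = 1.0
        z = best_z(x, y, W, B, theta)
        e = energy(x, y, z, W, B, theta, alpha)
        if best is None or e > best[0]:
            best = (e, y, z)
    _, y_neg, z_neg = best
    # Gradient-free "update by difference of features", Eqn. 8.
    W += eta * (np.outer(x, z_pos) - np.outer(x, z_neg))
    B += eta * (np.outer(y_true, z_pos) - np.outer(y_neg, z_neg))
    theta += eta * (z_pos - z_neg)
    alpha += eta * (y_true - y_neg)
    return (W, B, theta, alpha), y_neg

# Hypothetical sizes: 4 input dims, 3 classes, 5 hidden units.
rng = np.random.default_rng(0)
params = (0.01 * rng.normal(size=(4, 5)), 0.01 * rng.normal(size=(3, 5)),
          np.zeros(5), np.zeros(3))
x = rng.normal(size=4)
y = np.array([1.0, -1.0, -1.0])   # one-of-K label in +/-1 coding
params, y_neg = ch_update(params, x, y)
```

Because the negative state maximizes the energy while the positive state clamps y, the condition of Eqn. 12 holds by construction, so the PCT applies.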
The experiments on artificial and real-world data are discussed separately in Sections 3.1 and 3.2. We studied conditional herding in the discriminative RBM architecture illustrated in Figure 1 (i.e., we use the energy function in Eqn. 7). Per the discussion in Section 2.4, we added an additional feature φ_0(x) = sqrt( R_max^2 − ||x||^2 ), with R_max = max_i ||x_i||, in all experiments.

Footnote 2: If in test data this extra feature becomes imaginary we simply set it to zero.

[Figure 1: Discriminative Restricted Boltzmann Machine model of distribution p(y, z|x).]

[Figure 2: Decision boundaries of VP, CH, and dRBMs on two artificial data sets: (a) the banana data set; (b) the Lithuanian data set.]

3.1 Artificial Data

To investigate the characteristics of VP, dRBMs and CH, we used the techniques to construct decision boundaries on two artificial data sets: (1) the banana data set; and (2) the Lithuanian data set. We ran VP and CH for 1,000 epochs using mini-batches of size 100. The decision boundary for VP and CH is located where the sign of the prediction y*_tst changes. We used conditional herders with 20 hidden units. The dRBMs also had 20 hidden units and were trained by running conjugate gradients until convergence. The weights of the dRBMs were initialized by sampling from a Gaussian distribution with a variance of 10^-4. The decision boundary for the dRBMs is located at the point where both class posteriors are equal, i.e., where p(y*_tst = −1 | x̃_tst) = p(y*_tst = +1 | x̃_tst) = 0.5.

Plots of the decision boundaries for the artificial data sets are shown in Figure 2. The results on the banana data set illustrate the representational advantages of hidden units. Since VP selects data points at random to update the weights, on the banana data set the weight vector of VP tends to oscillate back and forth, yielding a nearly linear decision boundary (see footnote 3).
This happens because VP can regress on only 2 + 1 = 3 fixed features. In contrast, for CH the simple predictor in the top layer can regress onto M = 20 hidden features. This prevents the same oscillatory behavior from occurring.

Footnote 3: On the Lithuanian data set, VP constructs a good boundary by exploiting the added 'normalizing' feature.

3.2 Real-World Data

In addition to the experiments on synthetic data, we also performed experiments on four real-world data sets, namely: (1) the USPS data set, (2) the MNIST data set, (3) the UCI Pendigits data set, and (4) the 20-Newsgroups data set. The USPS data set consists of 11,000 16 × 16 grayscale images of handwritten digits (1,100 images of each digit 0 through 9) with no fixed division. The MNIST data set contains 70,000 28 × 28 grayscale images of digits, with a fixed division into 60,000 training and 10,000 test instances. The UCI Pendigits data set consists of 16 (integer-valued) features extracted from the movement of a stylus. It contains 10,992 instances, with a fixed division into 7,494 training and 3,498 test instances. The 20-Newsgroups data set contains bag-of-words representations of 18,774 documents gathered from 20 different newsgroups. Since the bag-of-words representation comprises over 60,000 words, we identified the 5,000 most frequently occurring words. From this set, we created a data set of 4,900 binary word-presence features by binarizing the word counts and removing the 100 most frequently occurring words. The 20-Newsgroups data has a fixed division into 11,269 training and 7,505 test instances.

On all data sets with real-valued input attributes we used the 'normalizing' feature described above. The data sets used in the experiments are multi-class. We adopted a 1-of-K encoding, where if y_i is the label for data point x_i, then y_i = {y_{i,1}, ..., y_{i,K}} is a binary vector such that y_{i,k} = 1 if the label of the ith data point is k, and y_{i,k} = −1 otherwise. Performing the maximization in Eqn.
9 is difficult when K > 2. We investigated two different procedures for doing so. In the first procedure, we reduce the multi-class problem to a series of binary decision problems using a one-versus-all scheme. The prediction on a test point is taken as the label with the largest online average. In the second procedure, we make predictions on all K labels jointly. To perform the maximization in Eqn. 9, we explore all states of y in a one-of-K encoding, i.e., one unit is activated and all others are inactive. This partial maximization is not a problem as long as the ensuing configuration satisfies w_t^T v_t ≤ 0 (see footnote 4). The main difference between the two procedures is that in the second procedure the weights W are shared amongst the K classifiers. The primary advantage of the latter procedure is that it is less computationally demanding than the one-versus-all scheme.

We trained the dRBMs by performing iterations of conjugate gradients (using 3 line searches) on mini-batches of size 100 until the error on a small held-out validation set started increasing (i.e., we employed early stopping) or until the negative conditional log-likelihood on the training data stopped decreasing. Following [10], we use L2-regularization on the weights of the dRBMs; the regularization parameter was determined based on the generalization error on the same held-out validation set. The weights of the dRBMs were initialized from a Gaussian distribution with variance of 10^-4.

CH used mini-batches of size 100. For the USPS and Pendigits data sets CH used a burn-in period of 1,000 updates; on MNIST it was 5,000 updates; and on 20-Newsgroups it was 20,000 updates. Herding was stopped when the error on the training set became zero (see footnote 5). The parameters of the conditional herders were initialized by sampling from a Gaussian distribution. Ideally, we would like each of the terms in the energy function in Eqn. 7 to contribute equally during updating.
However, since the dimension of the data is typically much greater than the number of classes, the dynamics of the conditional herding system will be largely driven by W. To negate this effect, we rescaled the standard deviation of the Gaussian by a factor 1/M, with M the total number of elements of the parameter involved (e.g., σ_W = σ/(dim(x) dim(z)), etc.). We also scale the step sizes η by the same factor so that the updates retain this scale during herding. The relative scale between η and σ was chosen by cross-validation. Recall that the absolute scale is unimportant (see Section 2.4 for details).

In addition, during the early stages of herding, we adapted the parameter update for the bias on the hidden units θ in such a way that the marginal distribution over the hidden units was nearly uniform. This has the advantage that it encourages high entropy in the hidden units, leading to more useful dynamics of the system. In practice, we update θ as θ_{t+1} = θ_t + (η/|D_t|) Σ_{i_t} ( (1 − λ) ⟨z_{i_t}⟩ − z*_{i_t} ), where ⟨z_{i_t}⟩ is the batch mean. λ is initialized to 1 and we gradually halve its value every 500 updates, slowly moving from an entropy-encouraging update to the standard update for the biases of the hidden units.

VP was also run on mini-batches of size 100 (with a step size of 1). VP was run until the predictor started overfitting on a validation set. No burn-in was considered for VP.

The results of our experiments are shown in Table 1. In the table, the best performance on each data set using each procedure is typeset in boldface. The results reveal that the addition of hidden units to the voted perceptron leads to significant improvements in terms of generalization error. Furthermore, the results of our experiments indicate that conditional herding performs on par with discriminative RBMs on the MNIST and USPS data sets and better on the 20-Newsgroups data set.
The 20-Newsgroups data is high dimensional and sparse, and both VP and CH appear to perform quite well in this regime. Techniques to promote sparsity in the hidden layer when training dRBMs exist (see [10]), but we did not investigate them here. It is also worth noting that CH is rather resilient to overfitting. This is particularly evident in the low-dimensional UCI Pendigits data set, where the dRBMs start to badly overfit with 500 hidden units, while the test error for CH remains level. This phenomenon is the benefit of averaging over many different predictors.

Footnote 4: Local maxima can also be found by iterating over y*,k_tst and z*,k_tst,j, but the proposed procedure is more efficient.
Footnote 5: We use a fixed order of the mini-batches, so that if there are N data cases and the batch size is K, if the training error is 0 for ⌈N/K⌉ iterations, the error for the whole training set is 0.

Table 1: Generalization errors of VP, dRBMs, and CH on 4 real-world data sets. dRBMs and CH results are shown for various numbers of hidden units. The best performance on each data set is typeset in boldface; missing values are shown as '-'. The std. dev. of the error on the 10-fold cross validation of the USPS data set is reported in parentheses.

One-Versus-All Procedure
Data Set        VP              dRBM-100        dRBM-200        CH-100          CH-200
MNIST           7.69%           3.57%           3.58%           3.97%           3.99%
USPS            5.03% (0.40%)   3.97% (0.38%)   4.02% (0.68%)   3.49% (0.45%)   3.35% (0.48%)
UCI Pendigits   10.92%          5.32%           5.00%           3.37%           3.00%
20 Newsgroups   27.75%          34.78%          34.36%          29.78%          25.96%

Joint Procedure
Data Set        VP              dRBM-50         dRBM-100        dRBM-500        CH-50           CH-100          CH-500
MNIST           8.84%           3.88%           2.93%           1.98%           2.89%           2.09%           2.09%
USPS            4.86% (0.52%)   3.13% (0.73%)   2.84% (0.59%)   4.06% (1.09%)   3.36% (0.48%)   3.07% (0.52%)   2.81% (0.50%)
UCI Pendigits   6.78%           3.80%           3.23%           8.89%           3.14%           2.57%           2.86%
20 Newsgroups   24.89%          -               30.57%          30.07%          -               25.76%          24.93%
4 Concluding Remarks The main contribution of this paper is to expose a relationship between the PCT and herding algorithms. This has allowed us to strengthen certain results for herding - namely, theoretically validating herding with mini-batches and partial optimization. It also directly leads to the insight that non-convergent VPs and herding match moments between data and generated predictions at a rate much faster than random sampling (O(1/T) vs. O(1/ √ T)). From these insights, we have proposed a new conditional herding algorithm that is the zero-temperature limit of dRBMs [10]. The herding perspective provides a new way of looking at learning as a dynamical system. In fact, the PCT precisely specifies the conditions that need to hold for a herding system (in batch mode) to be a piecewise isometry [7]. A piecewise isometry is a weakly chaotic dynamical system that divides parameter space into cells and applies a different isometry in each cell. For herding, the isometry is given by a translation and the cells are labeled by the states {x∗, y∗, z, z∗}, whichever combination applies. Therefore, the requirement of the PCT that the space V must be of finite cardinality translates into the division of parameter space in a finite number of cells, each with its own isometry. Many interesting results about piecewise isometries have been proven in the mathematics literature such as the fact that the sequence of sampled states grows algebraically with T and not exponentially as in systems with random or chaotic components [6]. We envision a fruitful cross-fertilization between the relevant research areas in mathematics and learning theory. Acknowledgments This work is supported by NSF grants 0447903, 0914783, 0928427 and 1018433 as well as ONR/MURI grant 00014-06-1-073. LvdM acknowledges support by the Netherlands Organisation for Scientific Research (grant no. 680.50.0908) and by EU-FP7 NoE on Social Signal Processing (SSPNet). References [1] C.M. Bishop. 
Pattern Recognition and Machine Learning. Springer, 2006.
[2] H.D. Block and S.A. Levin. On the boundedness of an iterative procedure for solving a system of linear inequalities. Proceedings of the American Mathematical Society, 26(2):229–235, 1970.
[3] Y. Chen and M. Welling. Parametric herding. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[4] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, Volume 10, page 8. Association for Computational Linguistics, 2002.
[5] Y. Freund and R.E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
[6] A. Goetz. Perturbations of 8-attractors and births of satellite systems. Internat. J. Bifur. Chaos, Appl. Sci. Engrg., 8(10):1937–1956, 1998.
[7] A. Goetz. Global properties of a family of piecewise isometries. Ergodic Theory Dynam. Systems, 29(2):545–568, 2009.
[8] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[9] E.T. Jaynes. Information theory and statistical mechanics. Physical Review Series II, 106(4):620–663, 1957.
[10] H. Larochelle and Y. Bengio. Classification using discriminative Restricted Boltzmann Machines. In Proceedings of the 25th International Conference on Machine Learning, pages 536–543. ACM, 2008.
[11] M.L. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge, Mass.: MIT Press, 1969.
[12] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
[13] T. Tieleman. Training Restricted Boltzmann Machines using approximations to the likelihood gradient.
In Proceedings of the 25th International Conference on Machine learning, volume 25, pages 1064–1071, 2008. [14] M. Welling. Herding dynamic weights for partially observed random field models. In Proc. of the Conf. on Uncertainty in Artificial Intelligence, Montreal, Quebec, CAN, 2009. [15] M. Welling. Herding dynamical weights to learn. In Proceedings of the 21st International Conference on Machine Learning, Montreal, Quebec, CAN, 2009. [16] M. Welling and Y. Chen. Statistical inference using weak chaos and infinite memory. In Proceedings of the Int’l Workshop on Statistical-Mechanical Informatics (IW-SMI 2010), pages 185–199, 2010. [17] L. Younes. Parametric inference for imperfectly observed Gibbsian fields. Probability Theory and Related Fields, 82:625–645, 1989. 9
Copula Bayesian Networks

Gal Elidan
Department of Statistics, Hebrew University
Jerusalem, 91905, Israel
galel@huji.ac.il

Abstract

We present the Copula Bayesian Network model for representing multivariate continuous distributions, while taking advantage of the relative ease of estimating univariate distributions. Using a novel copula-based reparameterization of a conditional density, joined with a graph that encodes independencies, our model offers great flexibility in modeling high-dimensional densities, while maintaining control over the form of the univariate marginals. We demonstrate the advantage of our framework for generalization over standard Bayesian networks as well as tree-structured copula models for varied real-life domains that are of substantially higher dimension than those typically considered in the copula literature.

1 Introduction

Multivariate real-valued distributions are of paramount importance in a variety of fields ranging from computational biology and neuroscience to economics and climatology. Choosing and estimating a useful form for the marginal distribution of each variable in the domain is often a straightforward task. In contrast, aside from the normal representation, few univariate distributions have a convenient multivariate generalization. Indeed, modeling and estimation of flexible (skewed, multi-modal, heavy-tailed) high-dimensional distributions is still a formidable challenge.

Copulas [23] offer a general framework for constructing multivariate distributions using any given (or estimated) univariate marginals and a copula function C that links these marginals. The importance of copulas is rooted in Sklar's theorem [29], which states that any multivariate distribution can be represented as a copula function of its marginals. The constructive converse is important from a modeling perspective as it allows us to separate the choice of the marginals and that of the dependence structure which is expressed in C.
We can, for example, robustly estimate marginals using a non-parametric approach, and then use only a few parameters to capture the dependence structure. This can result in a model that is easier to estimate and less prone to over-fitting than a fully non-parametric one, while at the same time avoiding the limitations of a fully parameterized distribution. In practice, copula constructions often lead to significant improvements in density estimation. Accordingly, there has been a dramatic growth of academic and practical interest in copulas in recent years, with applications ranging from mainstream financial risk assessment and actuarial analysis (e.g., Embrechts et al. [7]) to off-shore engineering (e.g., Accioly and Chiyoshi [2]).

Despite the generality of the framework, constructing high-dimensional copulas is difficult, and much of the research involves only the bivariate case. Several works have attempted to overcome this difficulty by suggesting innovative ways in which bivariate copulas can be combined to form workable copulas of higher dimensions. These attempts, however, are either limited to hierarchical [26] or mixture-of-trees [14] compositions, or rely on a recursive construction of conditional bivariate copulas [1, 3, 17] that is somewhat elaborate for high dimensions. In practice, applications are almost always limited to a modest (< 10) number of variables (see Section 6 for further discussion).

Bayesian networks (BNs) [25] offer a markedly different approach for representing multivariate distributions. In this widely used framework, a graph structure encodes independencies which imply a decomposition of the joint density into local terms (the density of each variable conditioned on its parents). This decomposition in turn facilitates efficient probabilistic computation and estimation, making the framework amenable to high-dimensional domains.
However, the expressiveness of these models is hampered by practical considerations that almost always lead to the reliance on simple parametric forms. Specifically, non-parametric variants of BNs (e.g., [9, 27]) typically involve elaborate training setups with a running time that grows unfavorably with the number of samples and local graph connectivity. Furthermore, aside from the case of the normal distribution, the form of the univariate marginal is neither under control nor is it typically known.

Our goal is to construct flexible multivariate continuous distributions that maintain desired marginals while accommodating tens and hundreds of variables, or more. We present Copula Bayesian Networks (CBNs), an elegant marriage between the copula and the Bayesian network frameworks.1 As in BNs, we make use of a graph to encode independencies that are assumed to hold. Differently, we rely on local copula functions and an explicit globally shared parameterization of the univariate densities. This allows us to retain the flexibility of BNs, while offering control over the form of the marginals, resulting in substantially improved multivariate densities (see Section 7 for a discussion of the related works of Kirshner [14] and Liu et al. [20]).

At the heart of our approach is a novel reparameterization of a conditional density using a copula quotient. With this construction, we prove a parallel to the BN factorization theorem: a decomposition of the joint density according to the structure of the graph implies a decomposition of the joint copula. Conversely, a product of local copula-based quotient terms is a valid multivariate copula. This result provides us with a flexible modeling tool where joint densities are constructed via a composition of local copulas and marginal densities. Importantly, the construction also allows us to use standard BN machinery for estimation and structure learning.
Thus, our model opens the door for flexible explorative learning of high-dimensional models that retain desired marginal characteristics. We learn the structure and parameters of a CBN for three varied real-life domains that are of a significantly higher dimension than typically reported in the copula literature. Using standard copula functions, we show that in all cases our approach leads to consistent and significant improvement in generalization when compared to standard BN models as well as a tree-structured copula model.

2 Copulas

Let X = {X_1, ..., X_N} be a finite set of real-valued random variables and let F_X(x) ≡ P(X_1 ≤ x_1, ..., X_N ≤ x_N) be a (cumulative) distribution function over X, with lower-case letters denoting assignment to variables. By slight abuse of notation, we use F(x_i) ≡ F(X_i ≤ x_i, X_{X\X_i} = ∞) and f(x_i) ≡ f_{X_i}(x_i), and similarly for sets of variables f(y) ≡ f_Y(y). A copula function [23, 29] links marginal distributions to form a multivariate one. Formally,

Definition 2.1: Let U_1, ..., U_N be real random variables marginally uniformly distributed on [0, 1]. A copula function C : [0, 1]^N → [0, 1] is a joint distribution function

C(u_1, \ldots, u_N) = P(U_1 \leq u_1, \ldots, U_N \leq u_N).

Copulas are important because of the following seminal result:

Theorem 2.2 [Sklar 1959]: Let F(x_1, ..., x_N) be any multivariate distribution over real-valued random variables. Then there exists a copula function such that F(x_1, ..., x_N) = C(F(x_1), ..., F(x_N)). Furthermore, if each F(x_i) is continuous, then C is unique.

The constructive converse, which is of central interest from a modeling perspective, is also true: since for any random variable the cumulative distribution F(x_i) is uniformly distributed on [0, 1], any copula function taking the marginal distributions {F(x_i)} as its arguments defines a valid joint distribution with marginals F(x_i).
Thus, copulas are "distribution-generating" functions that allow us to separate the choice of the univariate marginals and that of the dependence structure expressed in the copula function C, often resulting in an effective real-valued construction.2

[Footnote 1: A preliminary draft of this paper appeared as a technical report. A companion paper [6] addresses the question of performing approximate inference in Copula Bayesian networks.]
[Footnote 2: Copulas can also be defined given non-continuous marginals and for ordinal random variables. These extensions are orthogonal to our work and to maintain clarity we focus here on the continuous case.]

Figure 1: Samples from the 2-dimensional normal copula density using a correlation matrix with a unit diagonal and an off-diagonal coefficient of 0.25. (Left) with zero-mean, unit-variance normal marginals; (right) with a mixture of two Gaussians marginals.

To derive the joint density f(x) = \frac{\partial^N F(x)}{\partial x_1 \cdots \partial x_N} from the copula construction, assuming F has N-order partial derivatives (true almost everywhere when F is continuous), and using the chain rule, we have

f(x) = \frac{\partial^N C(F(x_1), \ldots, F(x_N))}{\partial F(x_1) \cdots \partial F(x_N)} \prod_i f(x_i) = c(F(x_1), \ldots, F(x_N)) \prod_i f(x_i),   (1)

where c(F(x_1), ..., F(x_N)) is called the copula density function. Eq. (1) will be of central use in this paper as we will directly model joint densities.

Example 2.3: A simple copula widely explored in the financial community is the Gaussian copula, constructed directly by inverting Sklar's theorem [7]:

C(\{F(x_i)\}) = \Phi_\Sigma\big(\Phi^{-1}(F(x_1)), \ldots, \Phi^{-1}(F(x_N))\big),   (2)

where Φ is the standard normal distribution and Φ_Σ is the zero-mean normal distribution with correlation matrix Σ. To get a sense of the power of copulas, Figure 1 shows samples generated from this copula using two different families of univariate marginals.
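The construction behind Figure 1 can be sketched in a few lines. This is not the author's code, only a minimal illustration assuming numpy and scipy: draw correlated normals, map each coordinate to [0, 1] with Φ as in Eq. (2), and push the resulting uniforms through any desired inverse CDF.

```python
import numpy as np
from scipy import stats

def gaussian_copula_samples(n, corr, marginal_ppfs, seed=0):
    """Sample from the Gaussian copula of Eq. (2) with given inverse-CDF marginals."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    # Correlated standard normals set the dependence structure.
    z = rng.multivariate_normal(np.zeros(d), corr, size=n)
    # Phi maps each coordinate to a uniform on [0, 1]; the copula is unchanged.
    u = stats.norm.cdf(z)
    # Any marginals can now be "mixed and matched" via their inverse CDFs.
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

corr = np.array([[1.0, 0.25], [0.25, 1.0]])
# Left panel of Figure 1: Normal(1, 1) marginals for both coordinates.
x = gaussian_copula_samples(5000, corr, [stats.norm(1, 1).ppf] * 2)
```

Swapping one entry of `marginal_ppfs` for, say, a mixture-of-Gaussians inverse CDF reproduces the right panel while leaving the dependence untouched.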
More generally, and without added computational difficulty, we can also mix and match marginals of different forms.

3 Copula Bayesian Networks (CBNs)

As in the copula framework, our goal is to model real-valued multivariate distributions while taking advantage of the relative ease of one-dimensional estimation. To cope with high-dimensional domains, as in BNs, we would also like to utilize independence assumptions encoded by a graph. To achieve this goal, we will construct multivariate copulas that are a composition of local copulas that follow the structure of the graph. We start with the building block of our construction.

3.1 Copula Parameterization of The Conditional Density

As in the BN framework, the building block of our model will be a local conditional density. We start with a parameterization of such a density using copulas:

Lemma 3.1: Let f(x | y), with y = {y_1, ..., y_K}, be a conditional density function and let f(x) be the marginal density of X. Then there exists a copula density function c(F(x), F(y_1), ..., F(y_K)) such that

f(x \mid y) = R_c(F(x), F(y_1), \ldots, F(y_K)) f(x),

where R_c is the ratio

R_c(F(x), F(y_1), \ldots, F(y_K)) \equiv \frac{c(F(x), F(y_1), \ldots, F(y_K))}{\int c(F(x), F(y_1), \ldots, F(y_K)) f(x) dx} = \frac{c(F(x), F(y_1), \ldots, F(y_K))}{\frac{\partial^K C(1, F(y_1), \ldots, F(y_K))}{\partial F(y_1) \cdots \partial F(y_K)}},

and where R_c is defined to be 1 when y = ∅. The converse is also true: for any copula density function c, R_c(F(x), F(y_1), ..., F(y_K)) f(x) defines a valid conditional density function.

Before proving this result, it is important to understand why the derivative form of the denominator (right-most term) is more useful than the standard normalization integral ∫ c(F(x), F(y_1), ..., F(y_K)) f(x) dx. Recall that c() is itself an N-order derivative of the copula function, so computing our denominator is no more difficult than computing c(). Indeed, for the majority of existing copula functions, both have an explicit form.
In contrast, the integral term depends both on the copula form and the univariate marginal, and is generally difficult to compute.

Proof: From the basic properties of cumulative distribution functions, we have that for any copula function C(1, F(y_1), ..., F(y_K)) = F(y_1, ..., y_K), and thus, using the derivative chain rule,

f(y) = \frac{\partial^K C(1, F(y_1), \ldots, F(y_K))}{\partial y_1 \cdots \partial y_K} = \frac{\partial^K C(1, F(y_1), \ldots, F(y_K))}{\partial F(y_1) \cdots \partial F(y_K)} \prod_k f(y_k).

From Eq. (1) we have that there exists a copula density for which f(x, y_1, ..., y_K) = c(F(x), F(y_1), ..., F(y_K)) f(x) ∏_k f(y_k). It follows that there exists a copula for which

f(x \mid y) = \frac{f(x, y_1, \ldots, y_K)}{f(y)} = \frac{c(F(x), F(y_1), \ldots, F(y_K)) f(x) \prod_k f(y_k)}{\frac{\partial^K C(1, F(y_1), \ldots, F(y_K))}{\partial F(y_1) \cdots \partial F(y_K)} \prod_k f(y_k)} = \frac{c(F(x), F(y_1), \ldots, F(y_K)) f(x)}{\frac{\partial^K C(1, F(y_1), \ldots, F(y_K))}{\partial F(y_1) \cdots \partial F(y_K)}} \equiv R_c(F(x), F(y_1), \ldots, F(y_K)) f(x).

As in Sklar's theorem and Eq. (1), the converse follows easily by reversing the arguments.

The implications of this result will underlie our construction: any copula density function c(F(x), F(y_1), ..., F(y_K)), together with f(x), can be used to parameterize a conditional density f(x | y).

3.2 Decomposition of The Joint Copula

Let G be a directed acyclic graph whose nodes correspond to the random variables X, and let Pa_i = {Pa_{i1}, ..., Pa_{ik_i}} be the parents of X_i in G. G encodes the independence statements I(G) = {(X_i ⊥ NonDescendants_i | Pa_i)}, where NonDescendants_i are nodes that are non-descendants of X_i in G. We say that f_X(x) decomposes according to G if it can be written as a product of conditional densities f_X(x) = ∏_i f(X_i | Pa_i). It can be shown that if f decomposes according to G then I(G) hold in f_X(x). The converse is also true: if I(G) hold in f_X(x) then the density decomposes according to G (see [16], Theorems 3.1 and 3.2). These results form the basis for the BN model [25], where a joint density is constructed via a composition of local conditional densities.
We now show that similar results hold for a multivariate copula. This in turn will provide the basis for our construction of the CBN model.

Theorem 3.2 (Decomposition): Let G be a directed acyclic graph over X, and let f_X(x) be parameterized via a joint copula density f_X(x) = c(F(x_1), ..., F(x_N)) ∏_i f(x_i), with f_X(x) strictly positive for all values of X. If f_X(x) decomposes according to G, then the copula density c(F(x_1), ..., F(x_N)) also decomposes according to G:

c(F(x_1), \ldots, F(x_N)) = \prod_i R_{c_i}(F(x_i), \{F(pa_{ik})\}),

where c_i is a local copula that depends only on the value of X_i and its parents in G.

Proof: Using the positivity assumption, we can rearrange Eq. (1) to get c(F(x_1), ..., F(x_N)) = f(x) / ∏_i f(x_i). From Lemma 3.1 and the decomposition of f(x) we have

c(F(x_1), \ldots, F(x_N)) = \frac{f(x)}{\prod_i f(x_i)} = \frac{\prod_i f(x_i \mid pa_i)}{\prod_i f(x_i)} = \frac{\prod_i R_{c_i}(F(x_i), \{F(pa_{ik})\}) f(x_i)}{\prod_i f(x_i)} = \prod_i R_{c_i}(F(x_i), \{F(pa_{ik})\}).

The constructive converse that is of central interest here is also true:

Theorem 3.3 (Composition): Let G be a directed acyclic graph over X. In addition, let {c_i(F(x_i), F(pa_{i1}), ..., F(pa_{ik_i}))} be a set of strictly positive copula densities associated with the nodes of G that have at least one parent. If I(G) hold, then the function

g(F(x_1), \ldots, F(x_N)) = \prod_i R_{c_i}(F(x_i), \{F(pa_{ik})\})

is a valid copula density c(F(x_1), ..., F(x_N)) over X.

The above theorem can be proved directly via induction or using our reparameterization lemma and standard BN results. It is important to note that the local copulas do not need to agree on the non-univariate marginals of overlapping variables. This is a result of the fact that each copula c_i only appears as part of a quotient term which is used to parameterize a conditional density. This gives us the freedom to mix and match local copulas of different types.
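The two forms of the denominator in Lemma 3.1's quotient can also be compared numerically. The following is a small sanity check, not from the paper, assuming a Gaussian local copula and scipy: for a node X with two parents, the derivative form ∂²C(1, v1, v2)/∂v1∂v2 is just the parents' marginal copula density (here a Gaussian copula with the corresponding sub-matrix), and it should match the normalization integral over x. The correlation matrix and parent values below are arbitrary illustrations.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def gaussian_copula_density(u, corr):
    """Gaussian copula density |corr|^(-1/2) exp(-z'(corr^-1 - I)z / 2), z = Phi^-1(u)."""
    z = stats.norm.ppf(np.asarray(u, dtype=float))
    inv = np.linalg.inv(corr)
    return np.linalg.det(corr) ** -0.5 * np.exp(-0.5 * z @ (inv - np.eye(len(z))) @ z)

# X with two parents Y1, Y2; fix the parents' CDF values F(y1), F(y2).
corr = np.array([[1.0, 0.3, 0.5],
                 [0.3, 1.0, 0.2],
                 [0.5, 0.2, 1.0]])
v1, v2 = 0.4, 0.7
fx, Fx = stats.norm.pdf, stats.norm.cdf  # any continuous marginal for X works

# The "difficult" form: integrate c(F(x), v1, v2) f(x) over x.
integral, _ = quad(lambda x: gaussian_copula_density([Fx(x), v1, v2], corr) * fx(x),
                   -8, 8)
# The explicit form: the parents' marginal copula density.
denominator = gaussian_copula_density([v1, v2], corr[1:, 1:])
```

The two quantities agree to numerical integration accuracy, which is exactly why the derivative form makes the quotient R_c cheap to evaluate.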
Equally important is the fact that, aside from the univariate densities, we do not need to concern ourselves with any marginal constraints when estimating the parameters of these local copula functions.

3.3 A Multivariate Copula Model

We are now ready to construct a joint density given univariate marginals by properly composing local terms and without worrying about global coherence:

Definition 3.4: A Copula Bayesian Network (CBN) is a triplet C = (G, Θ_C, Θ_f) that encodes the joint density f_X(x). Θ_C is a set of local copula density functions c_i(F(x_i), {F(pa_{ik})}) that are associated with the nodes of G that have at least one parent. Θ_f is the set of parameters representing the marginal densities f(x_i). f_X(x) is parameterized as

f_X(x) = \prod_i R_{c_i}(F(x_i), \{F(pa_{ik})\}) f(x_i).

Using our previous developments and applying Eq. (1) to f_X(x), we have:

Corollary 3.5: A Copula Bayesian Network defines a valid joint density f_X(x) whose marginal distributions are parameterized by Θ_f and where the independence statements I(G) hold.

The main difference between the CBN model and a regular BN, aside from a novel choice for the local conditional parameterization, is in the shared global component that has the explicit semantics of the univariate marginals. Concretely, the CBN model allows us to decompose the problem of representing a multivariate distribution with given (or estimated) univariate marginals into many local problems that, depending on the structure of G, can be substantially smaller in dimension. For each family of X_i and its parents we are still faced with the problem of choosing an appropriate local copula. In this work we simply limit ourselves to copulas that have a convenient multivariate form, but any of the recently suggested methods for constructing multivariate copula functions (see Section 6) can also be used.
In either case, limiting ourselves to a smaller number of variables (a node and its parents) makes the construction of the local copula substantially easier than the construction of the full copula over X. Importantly, as in the case of BNs, our construction of a joint copula density that decomposes over the graph structure G also facilitates efficient parameter estimation and model selection (structure learning), as we briefly discuss in the next section.

4 Learning

As in the case of BNs, the product form of our CBN facilitates relatively efficient estimation and model selection. The machinery is standard and only briefly described below.

Parameter Estimation. Given a complete dataset D of M instances where all of the variables X are observed in each instance, the log-likelihood of the data given a CBN model C is

\ell(D : C) = \sum_{m=1}^M \sum_i \log f(x_i[m]) + \sum_{m=1}^M \sum_i \log R_{c_i}(F(x_i[m]), F(pa_{i1}[m]), \ldots, F(pa_{ik_i}[m])).

While this objective appears to fully decompose according to the structure of G, each marginal distribution F(x_i) actually appears in several local copula terms (of X_i and its children in G). To facilitate efficient estimation, we adopt the common approach where the marginals are estimated first [13]. Given F(x_i), we can then estimate the parameters of each local copula independently of the others. We estimate the univariate densities using a standard normal kernel-based approach [24].

Figure 2: Train and test set performance for the 12-variable Wine, 28-variable Dow Jones and 100-variable Crime datasets (train panels on top, test panels on bottom). Models compared: Sigmoid BN; CBN with a uniform correlation normal copula (single parameter); CBN with a full normal copula (0.5·d(d−1) parameters); CBN with Frank's single-parameter copula. Shown is the 10-fold average log-probability per instance (y-axis) vs. the maximal number of parents allowed in the network (x-axis). Error bars (slightly shifted for readability) show the 10–90% range. The structure for all models was learned with the same search procedure using the BIC model selection score.

In this work we consider two of the simplest and most commonly used copula functions. For Frank's Archimedean copula

C(u_1, \ldots, u_N) = -\frac{1}{\theta} \log\left( 1 + \frac{\prod_i (e^{-\theta u_i} - 1)}{(e^{-\theta} - 1)^{N-1}} \right),

and for the Gaussian copula (see Section 2) with a uniform correlation parameter, we find the maximum likelihood parameters using a standard conjugate gradient algorithm. For the Gaussian copula with a full covariance matrix, a reasonably effective and substantially more efficient method is based on the relationship between the copula function and Kendall's Tau dependence measure [19].
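The Kendall's-Tau route can be sketched as follows; this is an illustration under stated assumptions, not the paper's code. For elliptical copulas the copula correlation relates to Kendall's tau by ρ = sin(πτ/2) [19], so a rank-based estimate of τ yields ρ in closed form, without any likelihood iterations and independently of the marginals.

```python
import numpy as np
from scipy import stats

def normal_copula_corr_from_tau(x, y):
    """Estimate a Gaussian-copula correlation via rho = sin(pi * tau / 2).
    Kendall's tau depends only on ranks, so any choice of marginals is fine."""
    tau, _ = stats.kendalltau(x, y)
    return np.sin(np.pi * tau / 2.0)

# Synthetic check: bivariate normal data with correlation 0.6. Applying a
# monotone transform (exp) to one coordinate changes the marginal but leaves
# the copula, and hence tau, unchanged.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=4000)
rho_hat = normal_copula_corr_from_tau(np.exp(z[:, 0]), z[:, 1])
```

For a full correlation matrix, the same map can be applied entrywise to the matrix of pairwise tau estimates.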
For lack of space, further details for both of these copulas are provided in the supplementary material.

Model Selection. Very briefly, to learn the structure of G, we use a standard score-based approach that starts with the empty network and greedily advances via local modifications to the current structure (add/delete/reverse edge). The search is guided by the Bayesian information criterion [28] that balances the likelihood of the model and its complexity:

score(G : D) = \ell(D : \hat{\theta}, G) - \frac{1}{2} \log(M) |\Theta_G|,

where θ̂ are the maximum-likelihood parameters, and |Θ_G| is the number of free parameters associated with the graph structure G. During the search, we also use a TABU list and random restarts [10] to mitigate the problem of local maxima. See Koller and Friedman [16] for more details.

5 Experimental Evaluation

We assess the effectiveness of our approach for density estimation by comparing CBNs and BNs learned from training data in terms of log-probability performance on test data. For BNs, we use a linear Gaussian conditional density and a non-linear Sigmoid one (see Koller and Friedman [16]). For CBNs, to demonstrate the flexibility of our framework, we consider the three local copula functions discussed in Section 4: a fully parameterized normal copula; the same copula with a single correlation parameter and unit diagonal (UnifCorr); and Frank's single-parameter Archimedean copula. We use standard normal kernel density estimation for the univariate densities. The structure of both the BN and CBN models was learned using the same greedy structure search procedure described in Section 4. We consider three datasets of a markedly different nature and dimensionality:

• Wine Quality (UCI repository). 11 physiochemical properties and a sensory quality variable for the red Portuguese "Vinho Verde" wine [4]. Included are measurements from 1599 tastings.
• Dow Jones. 2001-2005 (1508 trading days) daily adjusted changes of the 30 index stocks. To avoid arbitrary imputation, two stocks not traded in all of these days were excluded (KFT, TRV).
• Crime (UCI repository). 100 observed variables relating to crime, ranging from household size to the fraction of children born outside of a marriage, for 1994 communities across the U.S.

Figure 3: Comparison of the number of edges learned in the different random runs for different models (y-axis) vs. the Sigmoid BN model (x-axis), when the maximal number of parents in the network was limited to 4. (Left panel: Wine dataset; right panel: Crime dataset. Models shown: Kernel-UnifCorr CBN, Gaussian BN, Normal-UnifCorr CBN.)

Figure 2 compares average log-probability (y-axis) for 10 random equal train/test splits as a function of the maximal number of parents allowed in the network (x-axis). Results for the linear Gaussian BN were almost identical to those of the Sigmoid BN for the Wine and Dow Jones datasets and inferior for the Crime dataset, and are omitted for clarity. For all datasets, the copula-based models offer a clear gain in training performance as well as in generalization on unseen test instances. Remarkably, the single-parameter (for each local density) UnifCorr model is superior to the BN model even when the latter utilizes up to 8 local parameters (with 4 parents). In fact, even Frank's single-parameter Archimedean copula, which is constrained by the fact that all of its K-marginals are equal [23], is superior to the BN model. Importantly, the advantage of the CBN model is significant, as the units of improvement are in bits/instance. That is, an improvement of 2 bits/instance translates into each test instance being, on average, four times as likely.3 It is also important to note the benefit that comes with structures that are richer than a tree.
As the number of allowed parents (x-axis) is increased, gains are relatively small when the dimensionality of the domain is limited (12 variables); the gains are, however, quite substantial for the more complex domains. To understand the role of the univariate marginals, we start with the no-dependency network (0 on the x-axis), where the advantage of CBNs is solely due to the use of flexible univariate marginals. Surprisingly, even with single-parameter copulas, although much simpler than the Sigmoid form used for the BN model, we are able to maintain much of that advantage as the model becomes more complex. As expected, this is not the case when we constrain the CBN model to have normal marginals (Normal-UnifCorr) and when the domain is sufficiently complex (Crime). To get a sense of the overall dependency structure, Figure 3 shows the number of edges learned for the different models. For the Wine dataset, the linear BN attempts to compensate for its constrained form by using substantially more edges than the non-linear Sigmoid BN. The Kernel-UnifCorr CBN, in contrast, tends to use fewer edges while achieving higher test performance. Finally, the Normal-UnifCorr CBN model, despite the forced normal marginals, does not lead to overly complex structures as it is constrained by the simplicity of the copula function (single parameter). For the challenging Crime dataset, the differences are more pronounced: both the linear and non-linear BN models almost saturate the limit of 4 parents per variable, while the Kernel-UnifCorr copula model requires, on average, less than half the number of parents to achieve superior performance. Finally, in Figure 4, we demonstrate the qualitative advantage of CBNs by comparing empirical values from the test data (left) with samples generated from the different models. For the 'physical density' and 'alcohol' variables (top), the CBN samples (middle) are better than the BN ones (right), but not dramatically so.
However, for the 'residual sugar' and 'physical density' pair (bottom), where the empirical dependence is far from normal, the advantage of the CBN representation is clear. We recall that the CBN model uses a simple normal copula, so that the advantage is solely rooted in the distortion of the input to the copula created by the kernel-based univariate representation. With more expressive copulas we can expect further qualitative and quantitative advantages.

[Footnote 3: Note that the performance for the Crime domain is on an unusually high scale since some of the variables are closely correlated, leading to peaked densities. We emphasize that this does not affect the relative merit of a method - an advantage of a bit/instance still translates to each instance being, on average, twice as likely.]

Figure 4: Demonstration of the dependency learned for the Wine dataset for two variable pairs. Compared is the empirical distribution in the test data (left) with samples generated from the learned CBN (middle) and BN (right) models. To eliminate the effect of differences in structure, the CBN model was forced to use the structure learned for the BN model, which contains the network fragment 'residual sugar' → 'physical density' → 'alcohol level'.

6 Related Work

For lack of space we do not discuss direct multivariate copula constructions (e.g., [8, 15, 18, 22]) that are typically effective only for a few dimensions, and focus on composite constructions that build on smaller (bivariate) copulas.
The Vine model [3] relies on a recursive construction of bivariate copulas to parameterize a multivariate one. Although it uses a graphical representation, the framework is inherently different from ours: conditional independence is replaced with a conditional dependence whose parameters depend on the conditioning variable(s). Kurwicka and Cooke [17] reveal a direct connection between vines and belief networks, but one that is limited to the scenario of elliptical bivariate copulas. Relying on the same representation, Aas et al. [1] suggest an alternative construction methodology. While the vine representation is certainly general, the need to condition on many variables using a somewhat elaborate construction limits practical applications to a modest number of variables. Aas et al. [1] do note the simplification that can result from making independence assumptions, but do not provide a general framework for doing so. Savu and Trede [26] suggest an alternative model that is limited to a hierarchical tree structure of bivariate Archimedean copulas. Kirshner [14] uses the copula product operator of Darsow et al. [5] to suggest a mixture-of-trees model that is directly motivated by the field of graphical models. The relationship of our model to theirs is the same as that of a general BN to a mixture-of-trees model [21]. Most recently, Liu et al. [20] consider a general sparse undirected copula-based model that is focused on the semi- and non-parametric aspect of modeling, and is specific to the case of the normal copula. Finally, it is important to put the dimension of the domains we consider in this work (up to 100 variables) in perspective. Copula applications are numerous, yet most are limited to a relatively small number (< 10) of variables. Heinen and Alfonso [11] are unique in that they consider 95 variables, but using an approach that is tailored to the specific details of the GARCH model.
7 Discussion and Future Work We presented Copula Bayesian Networks, a marriage between the Bayesian network and copula frameworks. Building on a novel reparameterization of the conditional density, our model offers great flexibility in modeling high-dimensional continuous distribution while offering control over the form of the univariate marginals. We applied our approach to three markedly different real-life datasets and, in all cases, demonstrated a consistent and significant generalization advantage. Our contribution is threefold. First, our framework allows us to flexibly “mix and match” local copulas and univariate densities of any form. Second, like BNs, we allow for independence assumptions that are more expressive than those possible with tree-based constructions, leading to generalization advantages. Third, we leverage on existing machinery to perform model selection in significantly higher dimensions than typically considered in the copula literature. Thus, our work opens the door for numerous applications where the flexibility of copulas is needed but could not be previously utilized. In a companion paper [6], we also show that CBNs give rise to an efficient inference procedure. The gap between train and test performance for CBNs motivates the development of model selection scores tailored to the copula framework (e.g., based on rank correlation). It would also be interesting to see if our framework can be adapted to the cumulative scenario, while allowing for independencies quite different from the recently introduced cumulative network model [12]. 8 Acknowledgements I am grateful to Ariel Jaimovich, Amir Globerson, Nir Friedman and Fabio Spizzichino for their comments on earlier drafts of this manuscript. G. Elidan was supported by the Alon fellowship. References [1] K. Aas, C. Czado, A. Frigessi, and H. Bakken. Pair-copula constructions of multiple dependencies. Insurance: Mathematics and Economics, 44:182–198, 2009. [2] R. Accioly and F. Chiyoshi. 
Modeling dependence with copulas: a useful tool for field development decision process. Journal of Petroleum Science and Engineering, 44:83–91, 2004.
[3] T. Bedford and R. Cooke. Vines - a new graphical model for dependent random variables. Annals of Statistics, 30(4):1031–1068, 2002.
[4] P. Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Modeling wine preferences by data mining from physicochemical properties. Decision Support Systems, 47(4):547–553, 2009.
[5] W. Darsow, B. Nguyen, and E. Olsen. Copulas and Markov processes. Illinois Journal of Mathematics, 36:600–642, 1992.
[6] G. Elidan. Inference-less density estimation using Copula Bayesian Networks. In Uncertainty in Artificial Intelligence (UAI), 2010.
[7] P. Embrechts, F. Lindskog, and A. McNeil. Modeling dependence with copulas and applications to risk management. Handbook of Heavy Tailed Distributions in Finance, 2003.
[8] M. Fischer and C. Kock. Constructing and generalizing given multivariate copulas. Working paper, University of Erlangen-Nurnberg, 2007.
[9] N. Friedman and I. Nachman. Gaussian Process Networks. In Uncertainty in Artificial Intelligence (UAI), 2000.
[10] F. Glover and M. Laguna. Tabu search. In C. Reeves, editor, Modern Heuristic Techniques for Combinatorial Problems. Blackwell Scientific Publishing, Oxford, England, 1993.
[11] A. Heinen and A. Alfonso. Asymmetric CAPM dependence for large dimensions: The canonical vine autoregressive copula model. ECORE Discussion Paper, 2008.
[12] J. Huang and B. Frey. Cumulative distribution networks and the derivative-sum-product algorithm. In Uncertainty in Artificial Intelligence (UAI), 2008.
[13] H. Joe and J. Xu. The estimation method of inference functions for margins for multivariate models. Technical Report 166, Department of Statistics, University of British Columbia, 1996.
[14] S. Kirshner. Learning with tree-averaged densities and distributions. In Neural Information Processing Systems (NIPS), 2007.
[15] K. Koehler and J. Symanowski.
Constructing multivariate distributions with specific marginal distributions. Journal of Multivariate Distributions, 55:261–282, 1995.
[16] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[17] D. Kurwicka and R. Cooke. The vine copula method for representing high dimensional dependent distributions: Applications to continuous belief nets. In The Winter Simulation Conference, 2002.
[18] E. Liebscher. Modelling and estimation of multivariate copulas. Working paper, University of Applied Sciences, Merseburg, 2006.
[19] F. Lindskog, A. McNeil, and U. Schmock. Kendall's tau for elliptical distributions. Credit Risk - measurement, evaluation and management, pages 149–156, 2003.
[20] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295–2328, 2010.
[21] M. Meila and M. Jordan. Estimating dependency structure as a hidden variable. In Neural Information Processing Systems (NIPS), 1998.
[22] P. Morillas. A method to obtain new copulas from a given one. Metrika, 61:169–184, 2005.
[23] R. Nelsen. An Introduction to Copulas. Springer, 2007.
[24] E. Parzen. On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065–1076, 1962.
[25] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[26] C. Savu and M. Trede. Hierarchical Archimedean copulas. In the Conference on High Frequency Finance, 2006.
[27] A. Schwaighofer, M. Dejori, V. Tresp, and M. Stetter. Structure learning with nonparametric decomposable models. In the International Conference on Artificial Neural Networks, 2007.
[28] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
[29] A. Sklar. Fonctions de repartition a n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Universite de Paris, 8:229–231, 1959.
Evidence-Specific Structures for Rich Tractable CRFs

Anton Chechetka, Carnegie Mellon University, antonc@cs.cmu.edu
Carlos Guestrin, Carnegie Mellon University, guestrin@cs.cmu.edu

Abstract

We present a simple and effective approach to learning tractable conditional random fields with structure that depends on the evidence. Our approach retains the advantages of tractable discriminative models, namely efficient exact inference and arbitrarily accurate parameter learning in polynomial time. At the same time, our algorithm does not suffer the large expressive-power penalty inherent in fixed tractable structures. On real-life relational datasets, our approach matches or exceeds the state-of-the-art accuracy of dense models, and at the same time provides an order of magnitude speedup.

1 Introduction

Conditional random fields (CRFs, [1]) have been successful in modeling complex systems, with applications from speech tagging [1] to heart motion abnormality detection [2]. A key advantage of CRFs over other probabilistic graphical models (PGMs, [3]) stems from the observation that in almost all applications, some variables are unknown at test time (we will denote such variables X), while others, called the evidence E, are known at test time. While other PGM formulations model the joint distribution P(X, E), CRFs directly model the conditional distribution P(X | E). The discriminative approach adopted by CRFs allows for better approximation quality of the learned conditional distribution P(X | E), because the representational power of the model is not "wasted" on modeling P(E). However, the better approximation comes at the cost of increased computational complexity for both structure [4] and parameter learning [1], as compared to generative models.
In particular, unlike Bayesian networks or junction trees [3], (a) the likelihood of a CRF structure does not decompose into a combination of small subcomponent scores, making many existing approaches to structure learning inapplicable, and (b) instead of computing optimal parameters in closed form, with CRFs one has to resort to gradient-based methods. Moreover, computing the gradient of the log-likelihood with respect to the CRF parameters requires inference in the current model for every training datapoint. For high-treewidth models, even approximate inference is NP-hard [5]. To overcome the extra computational challenges posed by conditional random fields, practitioners usually resort to several of the following approximations:

• The CRF structure is specified by hand, leading to suboptimal structures.
• Approximate inference during parameter learning results in suboptimal parameters.
• Approximate inference at test time results in suboptimal results [5].
• Replacing the CRF conditional likelihood objective with a more tractable one (e.g. [6]) results in suboptimal models (both in terms of learned structure and parameters).

Not only do all of the above approximation techniques lack quality guarantees, but combining several of them in the same system serves to further compound the errors. A well-known way to avoid approximations in CRF parameter learning is to restrict the models to have low treewidth, so that the dependencies between the variables X have a tree-like structure. For such models, parameter learning and inference can be done exactly¹; only structure learning involves approximations. The important dependencies between the variables X, however, usually cannot all be captured by a single tree-like structure, so low-treewidth CRFs are rarely used in practice.
In this paper, we argue that it is the commitment to a single CRF structure irrespective of the evidence E that makes tree-like CRFs an inferior option. We show that tree CRFs with evidence-dependent structure, learned by a generalization of the Chow-Liu algorithm [7], (a) yield results equal to or significantly better than densely-connected CRFs on real-life datasets, and (b) are an order of magnitude faster than the dense models. More specifically, our contributions are as follows:

• We formally define CRFs with evidence-specific (ES) structure.
• We observe that, given the ES structures, CRF feature weights can be learned exactly.
• We generalize the Chow-Liu algorithm [7] to learn evidence-specific structures for tree CRFs.
• We generalize tree CRFs with evidence-specific structure (ESS-CRFs) to the relational setting.
• We demonstrate empirically the superior performance of ESS-CRFs over densely connected models in terms of both accuracy and runtime on real-life relational models.

2 Conditional random fields

A conditional random field with pairwise features² defines a conditional distribution P(X | E) as

P(X | E) = Z^{-1}(E) exp( \sum_{(i,j) \in T} \sum_k w_{ijk} f_{ijk}(X_i, X_j, E) ),   (1)

where the functions f are called features, w are feature weights, Z(E) is the normalization constant (which depends on the evidence), and T is the set of edges of the model. To reflect the fact that P(X | E) depends on the weights w, we will write P(X | E, w). To apply a CRF model, one first defines the set of features f. A typical feature may mean that two pixels i and j in the same image segment tend to have similar colors: f(X_i, X_j, E) ≡ I(X_i = X_j, |color_i − color_j| < δ), where I(·) is an indicator function. Given the features f and training data D that consists of fully observed assignments to X and E, the optimal feature weights w* maximize the conditional log-likelihood (CLLH) of the data:

w* = argmax_w \sum_{(X,E) \in D} log P(X | E, w) = argmax_w \sum_{(X,E) \in D} ( \sum_{(i,j) \in T} \sum_k w_{ijk} f_{ijk}(X_i, X_j, E) − log Z(E, w) ).
(2)

The problem (2) does not have a closed-form solution, but it has a unique global optimum that can be found using any gradient-based optimization technique because of the following fact [1]:

Fact 1. The conditional log-likelihood (2), abbreviated CLLH, is concave in w. Moreover,

∂ log P(X | E, w) / ∂w_{ijk} = f_{ijk}(X_i, X_j, E) − E_{P(X_i, X_j | E, w)}[ f_{ijk}(X_i, X_j, E) ],   (3)

where E_P denotes expectation with respect to a distribution P.

Convexity of the negative CLLH objective and the closed-form expression for the gradient let us use convex optimization techniques such as L-BFGS [9] to find the unique optimum w*. However, the gradient (3) contains the conditional distribution over X_i, X_j, so computing (3) requires inference in the model for every datapoint. The time complexity of exact inference is exponential in the treewidth of the graph defined by the edges T [5]. Therefore, exact evaluation of the CLLH objective (2) and gradient (3), and exact inference at test time, are all only feasible for models with low-treewidth T. Unfortunately, restricting the space of models to only those with low treewidth severely decreases the expressive power of CRFs. Complex dependencies of real-life distributions usually cannot be adequately captured by a single tree-like structure, so most of the models used in practice have high treewidth, making exact inference infeasible. Instead, approximate inference techniques, such as belief propagation [10, 11] or sampling [12], are used for parameter learning and at test time. Approximate inference is NP-hard [5], so approximate inference algorithms come with very few result-quality guarantees. The greater expressive power of the models is thus obtained at the expense of worse quality of the estimated parameters and inference. Here, we show an alternative way to increase the expressive power of tree-structured CRFs without sacrificing optimal weight learning and exact inference at test time. In practice, our approach is much better suited for relational than for propositional settings, because of the much higher parameter dimensionality in the propositional case. However, we first present the propositional theory in detail, to better convey the key high-level ideas.

Footnote 1: Here and in the rest of the paper, by "exact parameter learning" we will mean "with arbitrary accuracy in polynomial time" using standard convex optimization techniques. This is in contrast to the closed-form exact parameter learning possible for generative low-treewidth models representing the joint distribution P(X, E).

Footnote 2: In this paper, we only consider the case of pairwise dependencies, that is, features f that depend on at most two variables from X (but may depend on arbitrarily many variables from E). Our approach can in principle be extended to CRFs with higher-order dependencies, but the Chow-Liu algorithm for structure learning would have to be replaced with an algorithm that learns low-treewidth junction trees, such as [8].

3 Evidence-specific structure for CRFs

Observe that, given a particular evidence value E = e, the set of edges T in the CRF formulation (1) can actually be viewed as a supergraph of the conditional model over X. An edge (r, s) ∈ T can be "disabled" in the following sense: if for E = e the edge features are identically zero, f_{rsk}(X_r, X_s, e) ≡ 0 regardless of the values of X_r and X_s, then

\sum_{(i,j) \in T} \sum_k w_{ijk} f_{ijk}(X_i, X_j, e) ≡ \sum_{(i,j) \in T \setminus (r,s)} \sum_k w_{ijk} f_{ijk}(X_i, X_j, e),

and so for evidence value e, the model (1) with edges T is equivalent to (1) with (r, s) removed from T. The following notion of effective CRF structure captures the extra sparsity:

Definition 2. Given the CRF model (1) and evidence value E = e, the effective conditional model structure T(E = e) is the set of edges corresponding to features that are not identically zero:

T(E = e) = { (i, j) | (i, j) ∈ T, ∃ k, x_i, x_j s.t. f_{ijk}(x_i, x_j, e) ≠ 0 }.
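Definition 2 is straightforward to operationalize for small discrete models. The sketch below (all names are illustrative; it assumes features are given as plain callables and variable domains are small and finite) enumerates each edge's features over the joint domain and keeps the edge only if some feature can take a nonzero value under the given evidence:

```python
from itertools import product

def effective_structure(T, features, domains, E):
    """Effective edge set T(E) from Definition 2: keep an edge iff at least
    one of its features is not identically zero once the evidence E is fixed.

    T        -- iterable of candidate edges (i, j)
    features -- dict mapping edge (i, j) to a list of callables f(x_i, x_j, E)
    domains  -- dict mapping variable index to its (small, finite) domain
    E        -- the observed evidence value
    """
    active = []
    for (i, j) in T:
        # The edge survives if some feature can fire for some joint value.
        if any(f(xi, xj, E) != 0
               for f in features[(i, j)]
               for xi, xj in product(domains[i], domains[j])):
            active.append((i, j))
    return active
```

For example, an edge whose only feature is scaled by an evidence-dependent similarity weight drops out of the effective structure whenever that weight is zero.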
If T(E) has low treewidth for all values of E, inference and parameter learning using the effective structure are tractable, even if the a priori structure T has high treewidth. Unfortunately, in practice the treewidth of T(E) is usually not much smaller than the treewidth of T. Low-treewidth effective structures are rarely used, because treewidth is a global property of the graph (even computing treewidth is NP-complete [13]), while feature design is a local process. In fact, it is the ability to learn optimal weights for a set of mutually correlated features without first understanding the inter-feature dependencies that is the key advantage of CRFs over other PGM formulations. Achieving low treewidth for the effective structures requires elaborate feature design, making model construction very difficult. Instead, in this work we separate the construction of low-treewidth effective structures from feature design and weight learning, to combine the advantages of exact inference and discriminative weight learning, the high expressive power of high-treewidth models, and local feature design. Observe that the CRF definition (1) can be written equivalently as

P(X | E, w) = Z^{-1}(E, w) exp( \sum_{ij} \sum_k w_{ijk} ( I((i, j) ∈ T) · f_{ijk}(X_i, X_j, E) ) ).   (4)

Even though (1) and (4) are equivalent, in (4) the structure of the model is explicitly encoded as a multiplicative component of the features. In addition to the feature values f, the effective structure of the model is now controlled by the indicator functions I(·). These indicator functions provide us with a way to control the treewidth of the effective structures independently of the features. Traditionally, it has been assumed that the a priori structure T of a CRF model is fixed. However, such an assumption is not necessary. In this work, we assume that the structure is determined by the evidence E and some parameters u: T = T(E, u).
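To make the indicator reparameterization of Eq. (4) concrete, here is a minimal sketch (illustrative names; normalization is done by brute-force enumeration, so it is only viable for toy models, whereas a real implementation would exploit the tree structure) in which the evidence-specific edge set simply masks the features:

```python
import math
from itertools import product

def ess_log_score(x, E, w, features, active_edges):
    """Unnormalized log-score of Eq. (4)/(5): a feature contributes only
    when its edge is selected, i.e. I((i, j) in T(E, u)) = 1."""
    s = 0.0
    for (i, j), fs in features.items():
        if (i, j) not in active_edges:   # the indicator zeroes this edge out
            continue
        for k, f in enumerate(fs):
            s += w[(i, j)][k] * f(x[i], x[j], E)
    return s

def ess_log_prob(x, E, w, features, active_edges, domains):
    """Exact log P(x | E) via brute-force computation of log Z (toy models)."""
    log_z = math.log(sum(
        math.exp(ess_log_score(y, E, w, features, active_edges))
        for y in product(*[domains[i] for i in sorted(domains)])))
    return ess_log_score(x, E, w, features, active_edges) - log_z
```

In the paper's setting, `active_edges` would be produced by the structure algorithm T(E, u) rather than supplied by hand, and inference over the resulting tree replaces the brute-force sum.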
The resulting model, which we call a CRF with evidence-specific structure (ESS-CRF), defines a conditional distribution P(X | E, w, u) as follows:

P(X | E, w, u) = Z^{-1}(E, w, u) exp( \sum_{ij} \sum_k w_{ijk} ( I((i, j) ∈ T(E, u)) · f_{ijk}(X_i, X_j, E) ) ).   (5)

The dependence of the structure T on E and u can take different forms. We will provide one example of an algorithm for constructing evidence-specific CRF structures shortly. ESS-CRFs have an important advantage over the traditional parametrization: in (5), the parameters u that determine the model structure are decoupled from the feature weights w. As a result, the problem of structure learning (i.e., optimizing u) can be decoupled from feature selection (choosing f) and feature weight learning (optimizing w). Such a decoupling makes it much easier to guarantee that the effective structure of the model has low treewidth, by relegating all the necessary global computation to the structure construction algorithm T = T(E, u). For any fixed choice of a structure construction algorithm T(·, ·) and structure parameters u, as long as T(·, ·) is guaranteed to return low-treewidth structures, learning the optimal feature weights w* and inference at test time can be done exactly, because Fact 1 directly extends to the feature weights w in ESS-CRFs:

Algorithm 1: Standard CRF approach
1. Define features f_{ijk}(X_i, X_j, E), implicitly defining the high-treewidth CRF structure T.
2. Optimize the weights w to maximize the conditional LLH (2) of the training data, using approximate inference to compute the CLLH objective (2) and gradient (3).
3. For each E in the test data, use the conditional model (1) to define the conditional distribution P(X | E, w).
4. Use approximate inference to compute the marginals or the most likely assignment to X.

Algorithm 2: CRF with evidence-specific structures approach
1. Define features f_{ijk}(X_i, X_j, E). Choose a structure learning algorithm T(E, u) that is guaranteed to return low-treewidth structures.
2. Define, or learn from data, the parameters u for the structure construction algorithm T(·, ·).
3. Optimize the weights w to maximize the conditional LLH log P(X | E, u, w) of the training data, using exact inference to compute the CLLH objective (2) and gradient (3).
4. For each E in the test data, use the conditional model (5) to define the conditional distribution P(X | E, w, u), and use exact inference to compute the marginals or the most likely assignment to X.

Observation 3. The conditional log-likelihood log P(X | E, w, u) of ESS-CRFs (5) is concave in w. Also,

∂ log P(X | E, w, u) / ∂w_{ijk} = I((i, j) ∈ T(E, u)) ( f_{ijk}(X_i, X_j, E) − E_{P(X_i, X_j | E, w, u)}[ f_{ijk}(X_i, X_j, E) ] ).   (6)

To summarize, instead of the standard CRF workflow (Alg. 1), we propose ESS-CRFs (Alg. 2). The standard approach involves approximations (with little, if any, guarantee on the result quality) at every stage, while in our ESS-CRF approach only structure selection involves an approximation. Next, we present a simple but effective algorithm for learning evidence-specific tree structures, based on an existing algorithm for generative models. Many other existing structure learning algorithms can be similarly adapted to learn evidence-specific models of higher treewidth.

4 Conditional Chow-Liu algorithm for tractable evidence-specific structures

Learning the most likely PGM structure from data is in most cases intractable. Even for Markov random fields (MRFs), which are a special case of CRFs with no evidence, learning the most likely structure is NP-hard (cf. [8]). However, for one very simple class of MRFs, namely tree-structured models, an efficient algorithm exists [7] that finds the most likely structure. In this section, we adapt this algorithm (the Chow-Liu algorithm) to learning evidence-specific structures for CRFs.
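As a reference point, the generative Chow-Liu procedure that the conditional variant builds on can be sketched as follows (a minimal sketch for discrete data given as tuples of values; function names are illustrative): empirical pairwise mutual informations serve as edge weights, and a maximum spanning tree is extracted with Kruskal's algorithm.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(data, i, j):
    """Empirical mutual information I(X_i; X_j) from rows of discrete data."""
    n = len(data)
    pij = Counter((row[i], row[j]) for row in data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    mi = 0.0
    for (a, b), c in pij.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts c, pi[a], pj[b]
        mi += (c / n) * math.log(c * n / (pi[a] * pj[b]))
    return mi

def chow_liu_tree(data, n_vars):
    """Maximum spanning tree over pairwise mutual informations (Eq. (7))."""
    edges = sorted(((mutual_information(data, i, j), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))           # union-find for Kruskal's algorithm
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The conditional variant of Alg. 3 keeps exactly this skeleton, but at test time replaces the empirical mutual informations with mutual informations computed under the per-edge estimates P̂(X_i, X_j | E).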
Pairwise Markov random fields are graphical models that define a distribution over X as a normalized product of low-dimensional potentials:

P(X) ≡ Z^{-1} \prod_{(i,j) \in T} ψ(X_i, X_j).

Notice that pairwise MRFs are a special case of CRFs with f_{ij} = log ψ_{ij}, w_{ij} = 1 and E = ∅. Unlike for tree CRFs, however, the likelihood of tree MRF structures decomposes into contributions of individual edges:

LLH(T) = \sum_{(i,j) \in T} I(X_i, X_j) − \sum_{X_i \in X} H(X_i),   (7)

where I(·, ·) is mutual information and H(·) is entropy. Therefore, as shown in [7], the most likely structure can be obtained by taking the maximum spanning tree of a fully connected graph, where the weight of an edge (i, j) is I(X_i, X_j). Pairwise marginals have relatively low dimensionality, so the marginals and the corresponding mutual informations can be estimated from data accurately, which makes the Chow-Liu algorithm a useful one for learning tree-structured models.

Given a concrete value E of the evidence E, one can write down the conditional version of the tree structure likelihood (7) for that particular value of evidence:

LLH(T | E) = \sum_{(i,j) \in T} I_{P(·|E)}(X_i, X_j) − \sum_{X_i \in X} H_{P(·|E)}(X_i).   (8)

If exact conditional distributions P(X_i, X_j | E) were available, then the same Chow-Liu algorithm would find the optimal conditional structure. Unfortunately, estimating conditional distributions P(X_i, X_j | E) to fixed accuracy in general requires an amount of data exponential in the dimensionality of E [14]. However, we can still plug in approximate conditionals P̂(· | E) learned from data using any standard density estimation technique³. In particular, with the same features f_{ijk} that are used in the CRF model, one can train a logistic regression model for P̂(· | E):

P̂(X_i, X_j | E, u_{ij}) = Z^{-1}_{ij}(E, u_{ij}) exp( \sum_k u_{ijk} f_{ijk}(X_i, X_j, E) ).   (9)

Essentially, a logistic regression model is a small CRF over only two variables. Exact optimal weights u* can be found efficiently using standard convex optimization techniques. The resulting evidence-specific structure learning algorithm T(E, u) is summarized in Alg. 3. Alg. 3 always returns a tree, and the better the quality of the estimators (9), the better the quality of the resulting structures. Importantly, Alg. 3 is by no means the only choice for the ESS-CRF approach. Other edge scores, e.g. from [4], and edge selection procedures, e.g. [8, 15] for higher-treewidth junction trees, can be used as components in the same way as the Chow-Liu algorithm is used in Alg. 3.

Algorithm 3: Conditional Chow-Liu algorithm for learning evidence-specific tree structures
// Parameter learning stage; u* is found e.g. using L-BFGS with P̂(·) as in (9)
1. For each X_i, X_j ∈ X: u*_{ij} ← argmax_{u_{ij}} \sum_{(X,E) \in D_train} log P̂(X_i, X_j | E, u_{ij})
// Constructing structures at test time
2. For each E ∈ D_test:
3.   For each X_i, X_j ∈ X: set edge weight r_{ij}(E, u*_{ij}) ← I_{P̂(X_i, X_j | E, u*_{ij})}(X_i, X_j)
4.   T(E, u*) ← maximum_spanning_tree(r(E, u*))

Algorithm 4: Relational ESS-CRF algorithm (parameter learning stage)
1. Learn structure parameters u* using the conditional Chow-Liu algorithm (Alg. 3)
2. Let P(X | E, R, w, u) be defined as in (11)
3. w* ← argmax_w log P(X | E, R, w, u*) // found e.g. with L-BFGS using the gradient (12)

5 Relational CRFs with evidence-specific structure

Traditional (also called propositional) PGMs are not well suited for dealing with relational data, where every variable is an entity of some type, and entities are related to each other via different types of links. Usually, there are relatively few entity types and link types. For example, webpages on the internet are linked via hyperlinks, and social networks link people via friendship relationships.
Relational data violates the i.i.d. assumption of traditional PGMs, and the huge dimensionality of relational datasets precludes learning meaningful propositional models. Instead, several formulations of relational PGMs have been proposed [16] to work with relational data, including relational CRFs. The key property of all these formulations is that the model is defined using a few template potentials, defined at the abstract level of variable types and replicated as necessary for concrete entities. More concretely, in relational CRFs every variable X_i is assigned a type m_i out of the set M of possible types. A binary relation R ∈ R, corresponding to a specific type of link between two variables, specifies the types of its input arguments, a set of features f^R_k(·, ·, E), and feature weights w^R_k. We will write X_i, X_j ∈ inst(R, X) if the types of X_i and X_j match the input types specified by the relation R and there is a link of type R between X_i and X_j in the data (for example, a hyperlink between two webpages). The conditional distribution P(X | E) is then generalized from the propositional CRF (1) by copying the template potentials for every instance of a relation:

P(X | E, R, w) = Z^{-1}(E, w) exp( \sum_{R \in R} \sum_{X_i, X_j \in inst(R, X)} \sum_k w^R_k f^R_k(X_i, X_j, E) ).   (10)

Observe that the only meaningful difference of the relational CRF (10) from the propositional formulation (1) is that the former shares the same parameters between different edges. By accounting for parameter sharing, it is straightforward to adapt our ESS-CRF formulation to the relational setting. We define the relational ESS-CRF conditional distribution as

P(X | E, R, w, u) ∝ exp( \sum_{R \in R} \sum_{X_i, X_j \in inst(R, X)} I((i, j) ∈ T(E, u)) \sum_k w^R_k f^R_k(X_i, X_j, E) ).   (11)

Footnote 3: Notice that the approximation error from P̂(·) is the only source of approximation in our whole approach.
Figure 1: Left: test LLH for TEMPERATURE. Middle: test LLH for TRAFFIC. Right: classification errors for WebKB.

Given a structure learning algorithm T(·, ·) that is guaranteed to return low-treewidth structures, one can learn the optimal feature weights w* and perform inference at test time exactly:

Observation 4. The relational ESS-CRF log-likelihood is concave with respect to w. Moreover,

∂ log P(X | E, R, w, u) / ∂w^R_k = \sum_{X_i, X_j \in inst(R, X)} I((i, j) ∈ T(E, u)) ( f^R_k(X_i, X_j, E) − E_{P(· | E, R, w, u)}[ f^R_k(X_i, X_j, E) ] ).   (12)

The conditional Chow-Liu algorithm (Alg. 3) can also be extended to the relational setting, by using templated logistic regression weights for estimating the edge conditionals. The resulting algorithm is shown as Alg. 4. Observe that the test phase of Alg. 4 is exactly the same as for Alg. 3. In the relational setting, one only needs to learn O(|R|) parameters, regardless of the dataset size, for both structure selection and feature weights, as opposed to O(|X|²) parameters in the propositional case. Thus, relational ESS-CRFs are typically much less prone to overfitting than propositional ones.

6 Experiments

We have tested the ESS-CRF approach on both propositional and relational data. With the large number of parameters needed in the propositional case (O(|X|²)), our approach is only practical when data is abundant, so our experiments with propositional data serve only as a proof of concept, verifying that ESS-CRF can successfully learn a model better than a single-tree baseline. In contrast, in the relational case the relatively low parameter space dimensionality (O(|R|)) almost eliminates the overfitting problem. As a result, on relational datasets ESS-CRF is a very attractive approach in practice.
Our experiments show ESS-CRFs comfortably outperforming state-of-the-art high-treewidth discriminative models on several real-life relational datasets.

6.1 Propositional models

We compare ESS-CRFs with fixed tree CRFs, where the tree structure is learned by the Chow-Liu algorithm using P(X). We used the TEMPERATURE sensor network data [17] (52 discretized variables) and San Francisco TRAFFIC data [18] (we selected 32 variables). In both cases, 5 variables were used as evidence E and the rest as unknowns X. The results are in Fig. 1. We have found it useful to regularize the conditional Chow-Liu algorithm (Alg. 3) by only choosing at test time from the edges that were selected often enough during training. In Fig. 1 we plot results for both the regularized (red) and unregularized (blue) versions. One can see that in the limit of plentiful data, ESS-CRF does indeed outperform the fixed-tree baseline. However, because the space of available models is much larger for ESS-CRF, overfitting becomes an important issue and regularization is important.

6.2 Relational models

Face recognition. We evaluate ESS-CRFs on two relational models. The first model, called FACES, aims to improve face recognition in collections of related images using information about similarity between different faces, in addition to the standard single-face features. The key idea is that whenever two people in different images look similar, they are more likely to be the same person. Our model has a variable X_i, denoting the label, for every face blob. Pairwise features f(X_i, X_j, E), based on blob color similarity, indicate how close two faces are in appearance. Single-variable features f(X_i, E) encode information such as the output of an off-the-shelf standalone face classifier or the face location within the image (see [19] for details).
The model is used in a semi-supervised way: at test time, a PGM is instantiated jointly over the train and test entities, the values of the train entities are fixed to the ground truth, and inference finds the (approximately) most likely labels for the test entities.

Figure 2: Results for FACES datasets. Top: evolution of classification accuracy as inference progresses over time. Stars show the moment when ESS-CRF finishes running. The horizontal dashed line indicates the resulting accuracy. For FACES 3, sum-product and max-product gave the same accuracy. Bottom: time to convergence.

We compare ESS-CRFs with a dense relational PGM encoded by a Markov logic network (MLN, [20]) using the same features. We used a state-of-the-art MLN implementation in the Alchemy package [21], with the MC-SAT sampling algorithm for discriminative parameter learning and belief propagation [22] for inference. For the MLN, we had to threshold the pairwise features indicating the likelihood of label agreement, setting those under the threshold to 0, to prevent (a) oversmoothing and (b) very long inference times.
Also, to prevent oversmoothing by the MLN, we have found it useful to scale down the pairwise feature weights learned during training, thus weakening the smoothing effect of any single edge in the model⁴. We denote models with weights adjusted in this way as MLN+. No thresholding or weight adjustment was done for ESS-CRFs. Figure 2 shows the results on three separate datasets: FACES 1 with 1720 images, 4 unique people and 100 training images in every fold; FACES 2 with 245 images, 9 unique people and 50 training images; and FACES 3 with 352 images, 24 unique people and 70 training images. We tried both sum-product and max-product BP for inference, denoted sum and max respectively in Fig. 2. For ESS-CRF the choice made no difference. One can see that (a) the ESS-CRF model provides accuracy superior (FACES 2 and 3) or equal (FACES 1) to the dense MLN model, even with the extra heuristic weight tweaking for the MLN, and (b) ESS-CRF is more than an order of magnitude faster. For the FACES model, ESS-CRF is thus clearly superior to the high-treewidth alternative.

Hypertext data. For the WebKB data (see [23] for details), the task is to label webpages from four computer science departments as course, faculty, student, project, or other, given their text and link structure. We compare ESS-CRFs to high-treewidth relational Markov networks (RMNs, [23]), max-margin Markov networks (M3Ns, [24]) and a standalone SVM classifier. All the relational PGMs use the same single-variable features encoding the webpage text, and pairwise features encoding the link structure. The baseline SVM classifier only uses the single-variable features. RMNs and ESS-CRFs are trained to maximize the conditional likelihood of the labels, while M3Ns maximize the margin in likelihood between the correct assignment and all incorrect ones, explicitly targeting classification. The results are in Fig. 1.
Observe that ESS-CRF matches the accuracy of the high-treewidth RMNs, again showing that the smaller expressive power of tree models can be fully compensated for by exact parameter learning and inference. ESS-CRF is much faster than the RMN, taking only 50 sec. to train and 0.3 sec. to test on a single core of a 2.7GHz Opteron CPU. The RMN and M3N models take about 1500 sec. each to train on a 700MHz Pentium III. Even accounting for the CPU speed difference, the speedup is significant. ESS-CRF does not achieve the accuracy of M3Ns, which use a different objective more directly related to the classification problem, as opposed to density estimation. Still, the RMN results indicate that it may be possible to match the M3N accuracy with much faster tractable ESS models by replacing the CRF conditional likelihood objective with the max-margin objective, which is an important direction of future work.

[Footnote 4] Because the number of pairwise relations in the model grows quadratically with the number of variables, the "per-variable force of smoothing" grows with the dataset size, hence the need to adjust.

7 Related work and conclusions

Related work. Two cornerstones of our ESS-CRF approach, namely using models that become more sparse when evidence is instantiated, and using multiple tractable models to avoid the restrictions on expressive power inherent to low-treewidth models, have been discussed in the existing literature. First, context-specific independence (CSI, [25]) has long been used both for speeding up inference [25] and for regularizing the model parameters [26]. However, so far CSI has been treated as a local property of the model, which made reasoning about the resulting treewidth of evidence-specific models impossible. Thus, the full potential of exact inference for models with CSI remained unused. Our work is a step towards fully exploiting that potential. Multiple tractable models, such as trees, are widely used as components of mixtures (e.g.
[27]), including mixtures of all possible trees [28], to approximate distributions with rich inherent structure. Unlike mixture models, our approach of selecting a single structure for any given evidence value has the advantage of allowing efficient exact decoding of the most probable assignment to the unknowns X using the Viterbi algorithm [29]. Both for mixture models and for our approach, joint optimization of the structure and weights (u and w in our notation) is infeasible due to the many local optima of the objective. Our one-shot structure learning algorithm, as we have empirically demonstrated, works well in practice. It is also much faster than expectation maximization [30], the standard way to train mixture models. Learning CRF structure in general is NP-hard, which follows from the hardness results for generative models (c.f. [8]). Moreover, CRF structure learning is further complicated by the fact that the CRF structure likelihood does not decompose into scores of local graph components, as do the scores for some generative models [3]. Existing work on CRF structure learning thus provides only local guarantees. In practice, the hardness of CRF structure learning leads to the high popularity of heuristics: chain and skip-chain [32] structures are often used, as well as grid-like structures. The approaches that do learn structure from data can be broadly divided into three categories. First, the CRF structure can be defined via the sparsity pattern of the feature weights, so one can use an L1 regularization penalty to achieve sparsity during weight learning [2]. The second type of approach greedily adds features to the CRF model so as to maximize the immediate improvement in the (approximate) model likelihood (e.g. [31]). Finally, one can approximate the CRF structure score as a combination of local scores [15, 4] and use an algorithm for learning generative structures (where the score actually decomposes).
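The exact decoding mentioned above (the Viterbi algorithm generalized from chains to trees) can be sketched as follows. This is a minimal illustration of max-product message passing on a tree-structured model with log-space potentials; the data structures and the toy potentials are assumptions of this sketch, not the paper's implementation.

```python
# Minimal sketch: exact MAP decoding on a tree-structured model via
# max-product message passing (Viterbi generalized to trees).
# Potentials are in log-space; `parent`, `node_pot`, `edge_pot` are
# illustrative data structures assumed for this sketch.

def _depth(parent, v):
    d = 0
    while parent[v] is not None:
        v = parent[v]
        d += 1
    return d

def tree_map_decode(n_states, parent, node_pot, edge_pot):
    """parent[v]: parent index of node v (None for the root);
    node_pot[v][s]: log-potential of state s at node v;
    edge_pot[v][sp][sv]: log-potential of (parent state sp, child state sv)."""
    n = len(parent)
    order = sorted(range(n), key=lambda v: -_depth(parent, v))  # leaves first
    score = [list(node_pot[v]) for v in range(n)]  # node potential + child messages
    back = [dict() for _ in range(n)]              # best child state per parent state
    for v in order:                                # upward (message) pass
        p = parent[v]
        if p is None:
            continue
        for sp in range(n_states):
            best, arg = max((score[v][sv] + edge_pot[v][sp][sv], sv)
                            for sv in range(n_states))
            score[p][sp] += best
            back[v][sp] = arg
    assign = [None] * n                            # downward (backtracking) pass
    for v in reversed(order):                      # root first
        p = parent[v]
        if p is None:
            assign[v] = max(range(n_states), key=lambda s: score[v][s])
        else:
            assign[v] = back[v][assign[p]]
    return assign
```

On a chain this reduces to the classical Viterbi algorithm; on a tree with n nodes and k states it runs in O(n · k²) time.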
ESS-CRF also falls into this last category. Although there are some negative theoretical results about the learnability of even the simplest CRF structures using local scores [4], such approaches often work well in practice [15]. Learning the weights is straightforward for tractable CRFs, because the log-likelihood is concave [1] and the gradient (3) can be used with mature convex optimization techniques. So far, exact weight learning has mostly been used for special hand-crafted structures, such as chains [1, 32], but in this work we use arbitrary trees. For dense structures, computing the gradient (3) exactly is intractable, as even approximate inference in general models is NP-hard [5]. As a result, approximate inference techniques such as belief propagation [10, 11] or Gibbs sampling [12] are employed, without guarantees on the quality of the result. Alternatively, an approximation of the objective (e.g. [6]) is used, also yielding suboptimal weights. Our experiments showed that exact weight learning for tractable models gives an advantage in approximation quality and efficiency over dense structures. Conclusions and future work. To summarize, we have shown that in both propositional and relational settings, tractable CRFs with evidence-specific structures, a class of models with expressive power greater than any single tree-structured model, can be constructed by relying only on the globally optimal results of efficient algorithms (logistic regression, the Chow-Liu algorithm, exact inference in tree-structured models, L-BFGS for convex differentiable functions). Whereas the traditional CRF workflow (Alg. 1) involves approximations without any quality guarantees at multiple stages of the process, our approach, ESS-CRF (Alg. 2), has just one source of approximation, namely the conditional structure scores.
We have demonstrated on real-life relational datasets that our approach matches or exceeds the accuracy of state-of-the-art dense discriminative models, while at the same time providing more than an order of magnitude speedup. Important future work directions are generalizing ESS-CRF to larger treewidths and max-margin weight learning for better classification. Acknowledgements. This work is supported by NSF Career IIS-0644225 and ARO MURI W911NF0710287 and W911NF0810242. We thank Ben Taskar for sharing the WebKB data. The FACES model and data were developed jointly with Denver Dash and Matthai Philipose. References [1] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001. [2] M. Schmidt, K. Murphy, G. Fung, and R. Rosales. Structure learning in random fields for heart motion abnormality detection. In CVPR, 2008. [3] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. 2009. [4] J. K. Bradley and C. Guestrin. Learning tree conditional random fields. In ICML, 2010. [5] D. Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1-2), 1996. [6] C. Sutton and A. McCallum. Piecewise pseudolikelihood for efficient CRF training. In ICML, 2007. [7] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. on Inf. Theory, 14(3), 1968. [8] D. Karger and N. Srebro. Learning Markov networks: Maximum bounded tree-width graphs. In SODA, 2001. [9] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3), 1989. [10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. 1988. [11] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS, 2000. [12] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images.
Pattern Analysis and Machine Intelligence, IEEE Transactions on, PAMI-6(6), 1984. [13] S. Arnborg, D. G. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2), 1987. [14] W. Härdle, M. Müller, S. Sperlich, and A. Werwatz. Nonparametric and Semiparametric Models. 2004. [15] D. Shahaf, A. Chechetka, and C. Guestrin. Learning thin junction trees via graph cuts. In AISTATS, 2009. [16] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning. The MIT Press, 2007. [17] A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong. Model-driven data acquisition in sensor networks. In VLDB, 2004. [18] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In UAI, 2005. [19] A. Chechetka, D. Dash, and M. Philipose. Relational learning for collective classification of entities in images. In AAAI Workshop on Statistical Relational AI, 2010. [20] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1-2), 2006. [21] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, University of Washington, Seattle, WA, 2009. [22] J. Gonzalez, Y. Low, and C. Guestrin. Residual splash for optimally parallelizing belief propagation. In AISTATS, 2009. [23] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In UAI, 2002. [24] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003. [25] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In UAI, 1996. [26] M. desJardins, P. Rathod, and L. Getoor. Bayesian network learning with abstraction hierarchies and context-specific independence. In ECML, 2005. [27] B. Thiesson, C. Meek, D. Chickering, and D. Heckerman. Learning mixtures of DAG models. In UAI, 1997. [28] M. Meilă and M. I.
Jordan. Learning with mixtures of trees. JMLR, 1, 2001. [29] A. J. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, IT-13, 1967. [30] S. L. Lauritzen. The EM algorithm for graphical association models with missing data. Computational Statistics & Data Analysis, 19(2), 1995. [31] A. Torralba, K. P. Murphy, and W. T. Freeman. Contextual models for object detection using boosted random fields. In NIPS, 2004. [32] C. Sutton and A. McCallum. Collective segmentation and labeling of distant entities in information extraction. In ICML Workshop on Statistical Relational Learning and Its Connections, 2004.
Beyond Actions: Discriminative Models for Contextual Group Activities

Tian Lan, School of Computing Science, Simon Fraser University, tla58@sfu.ca
Yang Wang, Department of Computer Science, University of Illinois at Urbana-Champaign, yangwang@uiuc.edu
Weilong Yang, School of Computing Science, Simon Fraser University, wya16@sfu.ca
Greg Mori, School of Computing Science, Simon Fraser University, mori@cs.sfu.ca

Abstract

We propose a discriminative model for recognizing group activities. Our model jointly captures the group activity, the individual person actions, and the interactions among them. Two new types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. Unlike most previous latent structured models, which assume a predefined structure for the hidden layer (e.g. a tree structure), we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. Our experimental results demonstrate that by inferring this contextual information together with adaptive structures, the proposed model can significantly improve activity recognition performance.

1 Introduction

Look at the two persons in Fig. 1(a): can you tell that they are performing two different actions? Once the entire contexts of these two images are revealed (Fig. 1(b)) and we observe the interaction of each person with the other persons in the group, it is immediately clear that the first person is queuing, while the second person is talking. In this paper, we argue that the actions of individual humans often cannot be inferred alone. We instead focus on developing methods for recognizing group activities by modeling the collective behavior of the individuals in the group. Before we proceed, we first clarify some terminology used throughout the rest of the paper. We use action to denote a simple, atomic movement performed by a single person.
We use activity to refer to a more complex scenario that involves a group of people. Consider the examples in Fig. 1(b): each frame depicts a group activity (queuing or talking), while each person in a frame performs a lower-level action (talking and facing right, talking and facing left, etc.). Our proposed approach is based on exploiting two types of contextual information in group activities. First, the activity of a group and the collective actions of all the individuals serve as context for each other (we call this the group-person interaction), and hence should be modeled jointly in a unified framework. As shown in Fig. 1, knowing the group activity (queuing or talking) helps disambiguate individual human actions which are otherwise hard to recognize. Similarly, knowing that most of the persons in the scene are talking (whether facing right or left) allows us to infer the overall group activity (i.e. talking). Second, the action of an individual can also benefit from knowing the actions of other surrounding persons (which we call the person-person interaction). For example, consider Fig. 1(c). The fact that the first two persons are facing the same direction provides a strong cue that both of them are queuing. Similarly, the fact that the last two persons are facing each other indicates that they are more likely to be talking.

Figure 1: Role of context in group activities. It is often hard to distinguish actions from each individual person alone (a). However, if we look at the whole scene (b), we can easily recognize the activity of the group and the action of each individual. In this paper, we operationalize this intuition and introduce a model for recognizing group activities by jointly considering the group activity, the action of each individual, and the interactions among certain pairs of individual actions (c).

Related work: Using context to aid visual recognition has received much attention recently.
Most of the work on context is in scene and object recognition. For example, work has been done on exploiting contextual information between scenes and objects [13], objects and objects [5, 16], objects and so-called "stuff" (amorphous spatial extent, e.g. trees, sky) [11], etc. Most of the previous work in human action recognition focuses on recognizing actions performed by a single person in a video (e.g. [2, 17]). In this setting, there has been work on exploiting contexts provided by scenes [12] or objects [10] to help action recognition. In still-image action recognition, object-action context [6, 9, 23, 24] is a popular type of context used for human-object interaction. The work in [3] is the closest to ours. In that work, person-person context is exploited via a new feature descriptor extracted from a person and its surrounding area. Our model is directly inspired by recent work on learning discriminative models that allow the use of latent variables [1, 6, 15, 19, 25], particularly when the latent variables have complex structures. These models have been successfully applied in many applications in computer vision, e.g. object detection [8, 18], action recognition [14, 19], human-object interaction [6], objects and attributes [21], human poses and actions [22], image region and tag correspondence [20], etc. So far, only applications where the structures of the latent variables are fixed have been considered, e.g. a tree structure in [8, 19]. However, in our application, the structures of the latent variables are not fixed and have to be inferred automatically. Our contributions: In this paper, we develop a discriminative model for recognizing group activities. We highlight the main contributions of our model. (1) Group activity: most of the work in human activity understanding focuses on single-person action recognition. Instead, we present a model for group activities that dynamically decides on interactions among group members.
(2) Group-person and person-person interaction: although contextual information has been exploited for visual recognition problems, we introduce two new types of contextual information that have not been explored before. (3) Adaptive structures: the person-person interaction poses a challenging problem for both learning and inference. If we naively consider the interaction between every pair of persons, the model might try to enforce two persons to take certain pairs of labels even though these two persons have nothing to do with each other. In addition, selecting a subset of connections allows one to remove "clutter" in the form of people performing irrelevant actions. Ideally, we would like to consider only those person-person interactions that are strong. To this end, we propose to use adaptive structures that automatically decide whether the interaction of two persons should be considered. Our experimental results show that our adaptive structures significantly outperform the alternatives.

2 Contextual Representation of Group Activities

Our goal is to learn a model that jointly captures the group activity, the individual person actions, and the interactions among them. We introduce two new types of contextual information, group-person interaction and person-person interaction. Group-person interaction represents the co-occurrence between the activity of a group and the actions of all the individuals. Person-person interaction indicates that the action of an individual can benefit from knowing the actions of other people in the same scene. We present a graphical model representing all the information in a unified framework.

Figure 2: Graphical illustration of the model in (a). The edges represented by dashed lines indicate that the connections are latent. Different types of potentials are denoted by lines with different colors in the example shown in (b).

One important difference between our model and previous work is that in addition to learning the parameters of the graphical model, we also automatically infer the graph structure (see Sec. 3). We assume an image has been pre-processed (i.e. by running a person detector) so that the persons in the image have been found. In the training data, each image is associated with a group activity label, and each person in the image is associated with an action label.

2.1 Model Formulation

A graphical representation of the model is shown in Fig. 2. We now describe how we model an image I. Let I_1, I_2, ..., I_m be the set of persons found in the image I. We extract features x from the image I in the form x = (x_0, x_1, ..., x_m), where x_0 is the aggregation of the feature descriptors of all the persons in the image (we call it the root feature vector), and x_i (i = 1, 2, ..., m) is the feature vector extracted from the person I_i. We denote the collective actions of all the persons in the image as h = (h_1, h_2, ..., h_m), where h_i ∈ H is the action label of the person I_i and H is the set of all possible action labels. The image I is associated with a group activity label y ∈ Y, where Y is the set of all possible activity labels. We assume there are connections between some pairs of action labels (h_j, h_k). Intuitively speaking, this allows the model to capture important correlations between action labels. We use an undirected graph G = (V, E) to represent (h_1, h_2, ..., h_m), where a vertex v_i ∈ V corresponds to the action label h_i, and an edge (v_j, v_k) ∈ E corresponds to the interaction between h_j and h_k. We use f_w(x, h, y; G) to denote the compatibility of the image feature x, the collective action labels h, the group activity label y, and the graph G = (V, E).
We assume f_w(x, h, y; G) is parameterized by w and is defined as follows:

$f_w(x, h, y; G) = w^\top \Psi(y, h, x; G)$  (1a)

$= w_0^\top \phi_0(y, x_0) + \sum_{j \in V} w_1^\top \phi_1(x_j, h_j) + \sum_{j \in V} w_2^\top \phi_2(y, h_j) + \sum_{(j,k) \in E} w_3^\top \phi_3(y, h_j, h_k)$  (1b)

The model parameters w are simply the combination of four parts, w = {w_0, w_1, w_2, w_3}. The details of the potential functions in Eq. 1 are described in the following.

Image-Action Potential $w_1^\top \phi_1(x_j, h_j)$: This potential function models the compatibility between the j-th person's action label h_j and its image feature x_j. It is parameterized as:

$w_1^\top \phi_1(x_j, h_j) = \sum_{b \in \mathcal{H}} w_{1b}^\top \, \mathbb{1}(h_j = b) \cdot x_j$  (2)

where x_j is the feature vector extracted from the j-th person and $\mathbb{1}(\cdot)$ denotes the indicator function. The parameter w_1 is simply the concatenation of w_{1b} for all b ∈ H.

Action-Activity Potential $w_2^\top \phi_2(y, h_j)$: This potential function models the compatibility between the group activity label y and the j-th person's action label h_j. It is parameterized as:

$w_2^\top \phi_2(y, h_j) = \sum_{a \in \mathcal{Y}} \sum_{b \in \mathcal{H}} w_{2ab} \cdot \mathbb{1}(y = a) \cdot \mathbb{1}(h_j = b)$  (3)

Action-Action Potential $w_3^\top \phi_3(y, h_j, h_k)$: This potential function models the compatibility between a pair of individuals' action labels (h_j, h_k) under the group activity label y, where (j, k) ∈ E corresponds to an edge in the graph. It is parameterized as:

$w_3^\top \phi_3(y, h_j, h_k) = \sum_{a \in \mathcal{Y}} \sum_{b \in \mathcal{H}} \sum_{c \in \mathcal{H}} w_{3abc} \cdot \mathbb{1}(y = a) \cdot \mathbb{1}(h_j = b) \cdot \mathbb{1}(h_k = c)$  (4)

Image-Activity Potential $w_0^\top \phi_0(y, x_0)$: This potential function is a root model which measures the compatibility between the activity label y and the root feature vector x_0 of the whole image. It is parameterized as:

$w_0^\top \phi_0(y, x_0) = \sum_{a \in \mathcal{Y}} w_{0a}^\top \, \mathbb{1}(y = a) \cdot x_0$  (5)

The parameter w_{0a} can be interpreted as a root filter that measures the compatibility of the class label a and the root feature vector x_0.

3 Learning and Inference

We now describe how to infer the label given the model parameters (Sec. 3.1), and how to learn the model parameters from a set of training data (Sec. 3.2).
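As a concrete illustration, the score of Eqs. 1-5 can be evaluated directly from small lookup tables, since every potential is an indicator-weighted term. The sketch below assumes toy dimensions and table-based parameters; it is not the paper's feature pipeline.

```python
# Minimal sketch of evaluating the model score of Eq. 1 with
# table-based parameters.  All dimensions and values used with it are
# toy assumptions for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def model_score(w0, w1, w2, w3, x0, x, h, y, edges):
    """w0[a]: root filter for activity a; w1[b]: filter for action b;
    w2[a][b], w3[a][b][c]: scalar compatibility tables;
    x0: root feature; x[j]: per-person features; h[j]: action labels;
    y: activity label; edges: list of (j, k) pairs in the graph G."""
    s = dot(w0[y], x0)                 # image-activity potential (Eq. 5)
    for j, hj in enumerate(h):
        s += dot(w1[hj], x[j])         # image-action potential (Eq. 2)
        s += w2[y][hj]                 # action-activity potential (Eq. 3)
    for j, k in edges:
        s += w3[y][h[j]][h[k]]         # action-action potential (Eq. 4)
    return s
```

Note that the indicator sums of Eqs. 2-5 reduce to simple table lookups once y and h are fixed, which is what the code exploits.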
If the graph structure G is known and fixed, we can apply standard learning and inference techniques for latent SVMs. For our application, a good graph structure turns out to be crucial, since it determines which person interacts with (i.e. provides action context for) which other person. The interaction of individuals turns out to be important for group activity recognition, and fixing the interaction (i.e. graph structure) using heuristics does not work well. We will demonstrate this experimentally in Sec. 4. We instead develop our own inference and learning algorithms that automatically infer the best graph structure from a particular candidate set.

3.1 Inference

Given the model parameters w, the inference problem is to find the best group activity label y* for a new image x. Inspired by the latent SVM [8], we define the following function to score an image x and a group activity label y:

$F_w(x, y) = \max_{G_y} \max_{h_y} f_w(x, h_y, y; G_y) = \max_{G_y} \max_{h_y} w^\top \Psi(x, h_y, y; G_y)$  (6)

We use the subscript y in the notations h_y and G_y to emphasize that we are fixing a particular activity label y. The group activity label of the image x can then be inferred as $y^* = \arg\max_y F_w(x, y)$. Since we can enumerate all the possible y ∈ Y and predict the activity label y* of x, the main difficulty of solving the inference problem is the maximization over G_y and h_y in Eq. 6. Note that in Eq. 6, we explicitly maximize over the graph G. This is very different from previous work, which typically assumes the graph structure is fixed. The optimization problem in Eq. 6 is in general NP-hard since it involves a combinatorial search. We instead use a coordinate ascent style algorithm to approximately solve Eq. 6 by iterating the following two steps:

1. Holding the graph structure G_y fixed, optimize the action labels h_y for the ⟨x, y⟩ pair:

$h_y = \arg\max_{h'} w^\top \Psi(x, h', y; G_y)$  (7)

2. Holding h_y fixed, optimize the graph structure G_y for the ⟨x, y⟩ pair:

$G_y = \arg\max_{G'} w^\top \Psi(x, h_y, y; G')$  (8)

The problem in Eq.
7 is a standard max-inference problem in an undirected graphical model. Here we use loopy belief propagation to approximately solve it. The problem in Eq. 8 is still NP-hard since it involves enumerating all possible graph structures. Even if we could enumerate all the graph structures, we might want to restrict ourselves to a subset of graph structures that leads to efficient inference (e.g. when using loopy BP in Eq. 7). One obvious choice is to restrict G' to be a tree-structured graph, since loopy BP is exact and tractable for tree-structured models. However, as we will demonstrate in Sec. 4, tree-structured graphs built from simple heuristics (e.g. the minimum spanning tree) do not work that well. Another choice is to choose graph structures that are "sparse", since sparse graphs tend to have fewer cycles, and loopy BP tends to be efficient in graphs with fewer cycles. In this paper, we enforce graph sparsity by setting a threshold d on the maximum degree of any vertex in the graph. When h_y is fixed, we can formulate an integer linear program (ILP) to find the optimal graph structure (Eq. 8) with the additional constraint that the maximum vertex degree is at most d. Let z_{jk} = 1 indicate that the edge (j, k) is included in the graph, and 0 otherwise. The ILP can be written as:

$\max_z \sum_{j \in V} \sum_{k \in V} z_{jk} \psi_{jk}, \quad \text{s.t.} \;\; \sum_{j \in V} z_{jk} \le d, \;\; \sum_{k \in V} z_{jk} \le d, \;\; z_{jk} = z_{kj}, \;\; z_{jk} \in \{0, 1\}, \;\; \forall j, k$  (9)

where we use $\psi_{jk}$ to collectively represent the summation of all the pairwise potential functions in Eq. 1 for the pair of vertices (j, k). Of course, the optimization problem in Eq. 9 is still hard due to the integrality constraint $z_{jk} \in \{0, 1\}$, but we can relax the value of z_{jk} to a real value in the range [0, 1]. The solution of the LP relaxation might contain fractional values; to get integral solutions, we simply round them to the closest integers.

3.2 Learning

Given a set of N training examples ⟨x_n, h_n, y_n⟩ (n = 1, 2, . . .
, N), we would like to train the model parameters w so as to produce the correct group activity y for a new test image x. Note that the action labels h are observed in the training data, but the graph structure G (or equivalently the variables z) is unobserved and will be automatically inferred. A natural way of learning the model is to adopt the latent SVM formulation [8, 25] as follows:

$\min_{w, \xi \ge 0, G_y} \; \frac{1}{2} \|w\|^2 + C \sum_{n=1}^{N} \xi_n$  (10a)

$\text{s.t.} \;\; \max_{G_{y_n}} f_w(x_n, h_n, y_n; G_{y_n}) - \max_{G_y} \max_{h_y} f_w(x_n, h_y, y; G_y) \ge \Delta(y, y_n) - \xi_n, \;\; \forall n, \forall y$  (10b)

where $\Delta(y, y_n)$ is a loss function measuring the cost incurred by predicting y when the ground-truth label is y_n. In standard multi-class classification problems, we typically use the 0-1 loss $\Delta_{0/1}$ defined as:

$\Delta_{0/1}(y, y_n) = 1$ if $y \ne y_n$, and $0$ otherwise  (11)

The constrained optimization problem in Eq. 10 can be equivalently written as an unconstrained problem:

$\min_{w, \xi} \; \frac{1}{2} \|w\|^2 + C \sum_{n=1}^{N} (L_n - R_n)$  (12a)

where

$L_n = \max_y \max_{h_y} \max_{G_y} \left( \Delta(y, y_n) + f_w(x_n, h_y, y; G_y) \right), \quad R_n = \max_{G_{y_n}} f_w(x_n, h_n, y_n; G_{y_n})$  (12b)

We use the non-convex bundle optimization in [7] to solve Eq. 12. In a nutshell, the algorithm iteratively builds an increasingly accurate piecewise quadratic approximation to the objective function. During each iteration, a new linear cutting plane is found via a subgradient of the objective function and added to the piecewise quadratic approximation. The key issue is then to compute the two subgradients $\partial_w L_n$ and $\partial_w R_n$ for a particular w, which we describe in detail below. First we describe how to compute $\partial_w L_n$. Let $(y^*, h^*, G^*)$ be the solution to the following optimization problem:

$\max_y \max_h \max_G \; \Delta(y, y_n) + f_w(x_n, h, y; G)$  (13)

Figure 3: Different structures of person-person interaction. Each node here represents a person in a frame. Solid lines represent connections that can be obtained from heuristics. Dashed lines represent latent connections that will be inferred by our algorithm.
(a) No connection between any pair of nodes; (b) nodes are connected by a minimum spanning tree; (c) any two nodes within a Euclidean distance ε are connected (which we call the ε-neighborhood graph); (d) connections are obtained by adaptive structures. Note that (d) is the structure of person-person interaction in the proposed model.

Then it is easy to show that the subgradient $\partial_w L_n$ can be calculated as $\partial_w L_n = \Psi(x_n, y^*, h^*; G^*)$. The inference problem in Eq. 13 is similar to the inference problem in Eq. 6, except for the additional term $\Delta(y, y_n)$. Since the number of possible choices of y is small in our case (e.g. |Y| = 5), we can enumerate all possible y ∈ Y and solve the inference problem in Eq. 6 for each fixed y. Now we describe how to compute $\partial_w R_n$. Let $\hat{G}$ be the solution to the following optimization problem:

$\max_{G'} f_w(x_n, h_n, y_n; G')$  (14)

Then we can show that the subgradient $\partial_w R_n$ can be calculated as $\partial_w R_n = \Psi(x_n, y_n, h_n; \hat{G})$. The problem in Eq. 14 can be approximately solved using the LP relaxation of Eq. 9. Using the two subgradients $\partial_w L_n$ and $\partial_w R_n$, we can optimize Eq. 10 using the algorithm in [7].

4 Experiments

We demonstrate our model on the collective activity dataset introduced in [3]. This dataset contains 44 video clips acquired using low-resolution hand-held cameras. In the original dataset, all the persons in every tenth frame of the videos are assigned one of the following five categories: crossing, waiting, queuing, walking and talking, and one of the following eight pose categories: right, front-right, front, front-left, left, back-left, back and back-right. Based on the original dataset, we define five activity categories: crossing, waiting, queuing, walking and talking. We define forty action labels by combining the pose and activity information, i.e. the action labels include crossing and facing right, crossing and facing front-right, etc.
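Returning briefly to the graph-selection step (Eqs. 8, 9 and 14): the maximum-degree constraint d can be illustrated with a simple greedy heuristic that keeps high-scoring edges while no vertex exceeds degree d. This is a hedged sketch for illustration only, not the paper's method (which solves the LP relaxation of Eq. 9 and rounds); `psi` plays the role of the pairwise scores ψ_jk.

```python
# Hedged sketch of degree-constrained graph selection (cf. Eqs. 8-9, 14).
# A greedy heuristic stands in for the paper's LP relaxation: keep
# positive-scoring edges in decreasing score order while respecting the
# maximum vertex degree d.

def greedy_degree_bounded_edges(psi, n, d):
    """psi: dict (j, k) -> pairwise score; n: number of vertices;
    d: maximum degree allowed for any vertex."""
    degree = [0] * n
    chosen = []
    # visit candidate edges from highest score to lowest
    for (j, k), s in sorted(psi.items(), key=lambda item: -item[1]):
        if s <= 0:
            break  # remaining edges are non-positive and cannot help the objective
        if degree[j] < d and degree[k] < d:
            chosen.append((j, k))
            degree[j] += 1
            degree[k] += 1
    return chosen
```

Swapping this heuristic for the rounded LP relaxation changes only how the z variables of Eq. 9 are chosen; the degree constraint it enforces is the same.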
We assign each frame to one of the five activity categories by taking the majority of the actions of the persons (ignoring their pose categories) in that frame. We select one fourth of the video clips from each activity category to form the test set, and the rest of the video clips are used for training. Rather than directly using certain raw features (e.g. the HOG descriptor [4]) as the feature vector x_i in our framework, we train a 40-class SVM classifier based on the HOG descriptor of each individual and their associated action labels. In the end, each feature vector x_i is represented as a 40-dimensional vector, where the k-th entry of this vector is the score of classifying this instance to the k-th class returned by the SVM classifier. The root feature vector x_0 of an image is also represented as a 40-dimensional vector, obtained by taking an average over all the feature vectors x_i (i = 1, 2, ..., m) in the same image.

Results and Analysis: In order to comprehensively evaluate the performance of the proposed model, we compare it with several baseline methods. The first baseline (which we call global bag-of-words) is an SVM model with a linear kernel based on the global feature vector x_0 with a bag-of-words style representation. The other baselines are within our proposed framework, with various ways of setting the structure of the person-person interaction. The structures we have considered are illustrated in Fig. 3(a)-(c), including (a) no pairwise connections; (b) a minimum spanning tree; (c) a graph obtained by connecting any two vertices within a Euclidean distance ε (the ε-neighborhood graph) with ε = 100, 200, 300. Note that in our proposed model the person-person interactions are latent (shown in Fig. 3(d)) and learned automatically. The performance of the different structures of person-person interaction is evaluated and compared. We summarize the comparison in Table 1. Since the test set is imbalanced, e.g. the number of crossing examples is more than twice that of the queuing or talking examples, we report both overall and mean per-class accuracies. As we can see, for both overall and mean per-class accuracies, our method achieves the best performance. The proposed model significantly outperforms global bag-of-words. The confusion matrices of our method and the baseline global bag-of-words are shown in Fig. 4.

Figure 4: Confusion matrices for activity classification: (a) global bag-of-words, (b) our approach. Rows are ground-truths, and columns are predictions. Each row is normalized to sum to 1.

Table 1: Comparison of activity classification accuracies of different methods. We report both the overall and mean per-class accuracies due to the class imbalance. The first result (global bag-of-words) is tested in the multi-class SVM framework, while the other results are in the framework of our proposed model but with different structures of person-person interaction. The structures are visualized in Fig. 3.

Method                          | Overall | Mean per-class
global bag-of-words             | 70.9    | 68.6
no connection                   | 75.9    | 73.7
minimum spanning tree           | 73.6    | 70.0
ε-neighborhood graph, ε = 100   | 74.3    | 72.9
ε-neighborhood graph, ε = 200   | 70.4    | 66.2
ε-neighborhood graph, ε = 300   | 62.2    | 62.5
Our Approach                    | 79.1    | 77.5

There are several important conclusions we can draw from these experimental results.

Importance of group-person interaction: The best result among the baselines comes from no connection between any pair of nodes, which clearly outperforms global bag-of-words. This demonstrates the effectiveness of modeling the group-person interaction, i.e. the connection between y and h in our model.

Importance of adaptive structures of person-person interaction: In Table 1, the pre-defined structures such as the minimum spanning tree and the ε-neighborhood graph do not perform as well as the model without person-person interaction. We believe this is because those pre-defined structures are all based on heuristics and are not properly integrated with the learning algorithm.
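The per-person and root feature construction described above can be sketched as follows: each person is represented by the per-class scores of a pre-trained classifier, and the root feature x_0 is the element-wise average over all persons. The score values in the test are made-up numbers, and a toy dimension stands in for the paper's 40 classes.

```python
# Minimal sketch of the root feature construction: x0 is the
# element-wise average of the per-person classifier-score vectors.
# Feature dimensions and values are illustrative assumptions.

def root_feature(person_feats):
    """person_feats: list of equal-length score vectors, one per person
    (in the paper, 40-dimensional SVM score vectors)."""
    m = len(person_feats)
    dim = len(person_feats[0])
    return [sum(f[i] for f in person_feats) / m for i in range(dim)]
```

In the paper's setup each input vector would hold the 40 per-class SVM scores for one detected person, so x_0 summarizes the whole frame in the same 40-dimensional space.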
As a result, they can create interactions that do not help (and sometimes even hurt) performance. However, if we treat the graph structure as part of our model and directly infer it with our learning algorithm, we can ensure that the obtained structures are those useful for differentiating the various activities. Evidence for this is provided by the large jump in performance achieved by our approach. We visualize the classification results and the learned structure of person-person interaction of our model in Fig. 6.

5 Conclusion

We have presented a discriminative model for group activity recognition which jointly captures the group activity, the individual person actions, and the interactions among them. We have exploited two new types of contextual information: group-person interaction and person-person interaction. We also introduced an adaptive structures algorithm that automatically infers the optimal structure of person-person interaction in a latent SVM framework. Our experimental results demonstrate that our proposed model outperforms other baseline methods.

Figure 5: Visualization of the weights across pairs of action classes for each of the five activity classes. Light cells indicate large values of weights. Consider example (a): under the activity label crossing, the model favors seeing actions of crossing with different poses together (indicated by the area bounded by the red box). We can also take a closer look at the weights within actions of crossing, as shown in (f): within the crossing category, the model favors seeing the same pose together, indicated by the light regions along the diagonal. It also favors some opposite poses, e.g. back-right with front-left. These make sense since people always cross the street in either the same or the opposite direction.
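The distinction between the overall and mean per-class accuracies reported in Table 1 can be made concrete with a short sketch; the 3-class confusion matrix below is hypothetical, not the one from Fig. 4:

```python
import numpy as np

def overall_accuracy(conf):
    # Fraction of all examples classified correctly.
    return np.trace(conf) / conf.sum()

def mean_per_class_accuracy(conf):
    # Average of per-class recalls: every class counts equally,
    # regardless of how many test examples it has.
    return np.mean(np.diag(conf) / conf.sum(axis=1))

# Hypothetical confusion matrix (rows: ground truth, columns: predictions);
# class 0 is heavily over-represented, mimicking the test-set imbalance.
conf = np.array([[90,  5,  5],
                 [10, 10,  0],
                 [ 5,  0,  5]])

print(overall_accuracy(conf))         # 105/130, dominated by the big class
print(mean_per_class_accuracy(conf))  # (0.9 + 0.5 + 0.5)/3, much lower
```

Under imbalance the overall number looks healthy (about 0.81 here) even though two of the three classes are recognized only half the time, which is why Table 1 reports both metrics.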
Figure 6: (Best viewed in color) Visualization of the classification results and the learned structure of person-person interaction for the five activities (crossing, waiting, queuing, walking, talking). The top row shows correct classification examples and the bottom row shows incorrect examples. The labels C, S, Q, W, T indicate crossing, waiting, queuing, walking and talking respectively. The labels R, FR, F, FL, L, BL, B, BR indicate right, front-right, front, front-left, left, back-left, back and back-right respectively. The yellow lines represent the learned structure of person-person interaction, from which some important interactions for each activity can be obtained, e.g. a chain structure which connects persons facing the same direction is "important" for the queuing activity.

References
[1] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In Advances in Neural Information Processing Systems, 2003.
[2] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In IEEE International Conference on Computer Vision, 2005.
[3] W. Choi, K. Shahid, and S. Savarese. What are they doing?: Collective activity classification using spatio-temporal relationship among people. In 9th International Workshop on Visual Surveillance, 2009.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[5] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In IEEE International Conference on Computer Vision, 2009.
[6] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for static human-object interactions. In Workshop on Structured Models in Computer Vision, 2010.
[7] T.-M.-T. Do and T. Artieres. Large margin training for hidden Markov models with partially observed states. In International Conference on Machine Learning, 2009.
[8] P. Felzenszwalb, D. McAllester, and D. Ramanan.
A discriminatively trained, multiscale, deformable part model. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[9] A. Gupta, A. Kembhavi, and L. S. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(10):1775–1789, 2009.
[10] D. Han, L. Bo, and C. Sminchisescu. Selection and context for action recognition. In IEEE International Conference on Computer Vision, 2009.
[11] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In European Conference on Computer Vision, 2008.
[12] M. Marszalek, I. Laptev, and C. Schmid. Actions in context. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[13] K. P. Murphy, A. Torralba, and W. T. Freeman. Using the forest to see the trees: A graphical model relating features, objects, and scenes. In Advances in Neural Information Processing Systems, volume 16. MIT Press, 2004.
[14] J. C. Niebles, C.-W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. In European Conference on Computer Vision, 2010.
[15] A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10):1848–1852, June 2007.
[16] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In IEEE International Conference on Computer Vision, 2007.
[17] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In 17th International Conference on Pattern Recognition, 2004.
[18] A. Vedaldi and A. Zisserman. Structured output regression for detection with partial truncation. In Advances in Neural Information Processing Systems. MIT Press, 2009.
[19] Y. Wang and G. Mori.
Max-margin hidden conditional random fields for human action recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[20] Y. Wang and G. Mori. A discriminative latent model of image region and object tag correspondence. In Advances in Neural Information Processing Systems, 2010.
[21] Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In European Conference on Computer Vision, 2010.
[22] W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[23] B. Yao and L. Fei-Fei. Grouplet: A structured image representation for recognizing human and object interactions. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[24] B. Yao and L. Fei-Fei. Modeling mutual context of object and human pose in human-object interaction activities. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[25] C.-N. Yu and T. Joachims. Learning structural SVMs with latent variables. In International Conference on Machine Learning, 2009.
Decoding Ipsilateral Finger Movements from ECoG Signals in Humans

Yuzong Liu1, Mohit Sharma2, Charles M. Gaona2, Jonathan D. Breshears3, Jarod Roland3, Zachary V. Freudenburg1, Kilian Q. Weinberger1, and Eric C. Leuthardt2,3
1Department of Computer Science and Engineering, Washington University in St. Louis
2Department of Biomedical Engineering, Washington University in St. Louis
3Department of Neurosurgery, Washington University School of Medicine

Abstract

Several motor-related Brain Computer Interfaces (BCIs) have been developed over the years that use activity decoded from the contralateral hemisphere to operate devices. Contralateral primary motor cortex is also the region most severely affected by hemispheric stroke. Recent studies have identified ipsilateral cortical activity in the planning of motor movements and its potential implications for a stroke-relevant BCI. The most fundamental functional loss after a hemispheric stroke is the loss of fine motor control of the hand. Thus, whether ipsilateral cortex encodes finger movements is critical to the potential feasibility of BCI approaches in the future. This study uses ipsilateral cortical signals from humans (recorded with ECoG) to decode finger movements. We demonstrate, for the first time, successful finger movement detection using machine learning algorithms. Our results show high decoding accuracies in all cases, always above chance. We also show that significant accuracies can be achieved using only a fraction of all the features recorded, and that these core features are consistent with previous physiological findings. The results of this study have substantial implications for advancing neuroprosthetic approaches to stroke populations not currently amenable to existing BCI techniques.

1 Introduction

Note by authors after publication: The results in Figure 3 could not be reproduced in subsequent experiments and should be considered invalid. We apologize for this mishap.
Other results in this paper are not affected.

The evolving understanding of motor function in the brain has led to novel Brain Computer Interface (BCI) platforms that can potentially assist patients with severe motor disabilities. A BCI is a device that can decode human intent from brain activity alone in order to create an alternate communication and control channel for people with severe motor impairments [39]. This brain-derived control depends on the emerging understanding of cortical physiology as it pertains to motor function. Examples are seen in the seminal discoveries by Georgopoulos and Schwartz that neurons in motor cortex show directional tuning and, when taken as a population, can predict direction and speed of arm movements in monkey models [12, 19]. In the subsequent two decades, these findings were translated to substantial levels of brain-derived control in monkey models and preliminary human clinical trials [14, 34]. Another example is seen in Pfurtscheller's work in analyzing electroencephalography (EEG). His group was one of the first to describe the changes in amplitudes of sensorimotor rhythms associated with motor movement [24]. As a result, both Pfurtscheller and Wolpaw have used these signals to achieve basic levels of control in humans with amyotrophic lateral sclerosis (ALS) and spinal cord injury [25, 40]. All these methods are based on a functioning motor cortex capable of controlling the contralateral limb. This is the exact situation that does not exist in unilateral stroke. Hence, these systems to date offer little hope for patients suffering from hemispheric stroke. For a BCI to assist a hemiparetic patient, the implant will likely need to utilize unaffected cortex ipsilateral to the affected limb (opposite the side of the stroke). To do so, an expanded understanding of how and to what degree of complexity motor and motor-associated cortex encodes ipsilateral hand movements is essential.
Electrocorticography (ECoG), or signal recorded from the surface of the brain, offers an excellent opportunity to further define what level of motor information can be deciphered from human ipsilateral cortex related to movements (e.g. gross motor movements versus fine motor kinematics of individual finger movements). The ECoG signal is more robust than the EEG signal: its magnitude is typically five times larger, its spatial resolution as it relates to independent signals is much greater (0.125 versus 3.0 cm for EEG), and its frequency bandwidth is significantly higher (0-550 Hz versus 0-40 Hz for EEG) [11, 30]. When analyzed on a functional level, many studies have revealed that different frequency bandwidths carry highly specific and anatomically focal information about cortical processing. Thus far, however, no studies have utilized these ECoG spectral features to definitively analyze and decode cortical processing of the specific kinematics of ipsilateral finger movements. In the past year, the first demonstrations of the concept of utilizing ipsilateral motor signals for simple device control have been published, both with ECoG (in healthy subjects) and MEG (in stroke patients) [4, 38]. In this study we set out to further explore the decoding of individual finger movements of the ipsilateral hand that could potentially be utilized for more sophisticated BCIs in the future. We studied 3 subjects who required invasive monitoring for seizure localization. Each had electrode arrays placed over the frontal lobe and a portion of sensorimotor cortex for approximately a week. Each subject performed individual finger tasks and the concurrent ECoG signal was recorded and analyzed. The principal results show that individual ipsilateral finger movements can be decoded with high accuracy. Through machine learning techniques, our group was able to determine the intent to flex and extend individual fingers of the ipsilateral hand.
These results indicate that an ECoG-based BCI platform could potentially operate a hand orthotic based on ipsilateral motor signals. This could provide a neuroprosthetic alternative for patients with hemispheric stroke who have otherwise failed non-invasive and medical rehabilitative techniques.

2 Data Collection

The subjects in this study were three patients (females; 8, 36, 48 years of age) with intractable epilepsy who underwent temporary placement of intracranial electrode arrays to localize seizure foci prior to surgical resection. All had normal levels of cognitive function and all were right-handed. Subject 1 had a right hemispheric 8×8 grid while subjects 2 and 3 had left hemispheric 8×8 grids. All gave informed consent. The study was approved by the Washington University Human Research Protection Office. Each subject sat in their hospital bed 75 cm from a 17-inch LCD video screen. In this study, the subject wore a data glove on each hand to precisely monitor finger movements. Each hand rested on a table in front of the screen. The screen randomly cued the patient to flex and extend a given finger (e.g., left index finger, right ring finger, etc.). A cue came up on the monitor and, as long as it was present, subjects would, at a self-paced speed, move the indicated finger from the flexed to the extended position until the cue disappeared. They were instructed on the method prior to participation. Each cued task period would last 2 seconds with a randomized rest period between 1.5 and 2.5 seconds (i.e., a trial). There were on average 30 trials per finger for a given subject. For subject 1, the thumb data recording was found to be noisy and hence was eliminated from any further analysis. Visual cues were presented using the BCI2000 program [27]. All motor hand kinematics were monitored by the patient wearing a USB-linked 5DT Data Glove 5 Ultra (Fifth Dimension, Irvine, CA) on each hand.
These data gloves are designed to measure finger flexure with one sensor per finger at up to 8-bit flexure resolution. The implanted platinum electrode arrays were 8×8 electrode arrays (Ad-Tech, Racine, WI and PMT, Chanhassen, MN). The grid and system setup details are described elsewhere [38]. ECoG signals were acquired using BCI2000, stored, and converted to MATLAB files for further processing and analysis. All electrodes were referenced to an inactive intracranial electrode. The sampling frequency was 1200 Hz and the acquired data were band-pass filtered from 0.15 to 500 Hz.

2.1 Data Preprocessing

Gabor Filter Analysis: All ECoG data sets were visually inspected and re-referenced with respect to the common average to account for any unwanted environmental noise. For these analyses, the time-series ECoG data was converted into the frequency domain using a Gabor filter bank [17]. Spectral amplitudes between 0 and 550 Hz were analyzed on a logarithmic scale. The finger positions from the data glove were converted into velocities. These frequency responses and velocities were then used as input to the machine learning algorithms described below. Inherent in this is the estimation of the lag between the ECoG signal and the actual finger movement. As part of the modeling process, the value of this variable which resulted in the best decoding accuracy was chosen for further analysis. Average time lags were then used to align the ECoG signal to the finger movement signal. The features optimized for predicting individual finger movement were then reviewed in light of anatomic location and spectral association in each subject.

Dimensionality Reduction: Due to the high dimensionality of the spectral data (#channels (N) × #frequencies (F)), it is important to reduce the number of dimensions in order to make the machine learning problem more tractable. Principal component analysis, or PCA, is among the most popular dimensionality reduction algorithms.
PCA projects the original high-dimensional feature space onto a much lower-dimensional principal subspace, such that the variance of the low-dimensional data is maximized. In the real-time decoding task, we use PCA to reduce the input data. However, in the weight analysis, we preserve all N × F features because we want to study the effect of using all the features.

Electrode Co-Registration: Radiographs were used to identify the stereotactic coordinates of each grid electrode [10], and cortical areas were defined using the GetLOC package for ECoG electrode localization [18]. Stereotactically defined electrodes were mapped to the standardized brain model. The experimental results were then collated with these anatomical mapping data.

3 Algorithms

In this section, we describe the machine learning algorithms used for the finger movement decoding tasks. We focus on three different settings: 1. binary classification, 2. multiclass classification, and 3. multitask classification. All the data is split into a training and a testing dataset. We chose our parameters based on a validation dataset split from the training dataset.

Binary Classification: We treat the finger movement detection problem as a binary classification setting. The data is presented as a time series with feature vector x_t and velocity label y_t at time t. The goal is to predict whether at time t a finger is moving (y_t = 1) or not (y_t = -1). For this purpose, we adapted logistic regression (LR) [26] and binary support vector machines (SVM) [7]. Both classifiers learn parameters (w, b) ∈ R^d × R. The prediction at time t is computed as ŷ_t = sign(w^⊤ x_t + b). The vector w is learned with the following optimization problem:

    min_{(w,b)} Σ_{t=1}^{T} L(w^⊤ x_t + b, y_t) + λ |w|_q.    (1)

Here, λ ≥ 0 is the regularization constant that trades off weight sparsity with complexity. The norm of the regularization can be the ℓ1 norm (q = 1) or the ℓ2 norm (q = 2).
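To make Eq. (1) concrete, the following sketch minimizes the ℓ1-regularized logistic-regression objective by plain subgradient descent on a synthetic two-class dataset; the data, step size, and iteration count are illustrative choices of ours, not values from the paper:

```python
import numpy as np

def logistic_loss(z, y):
    # L_lr(z, y) = log(1 + exp(-y z)), computed stably.
    return np.logaddexp(0.0, -y * z)

def objective(w, b, X, Y, lam, q):
    # Eq. (1): summed loss plus an l1 (q=1) or squared-l2 (q=2) penalty.
    z = X @ w + b
    reg = lam * (np.abs(w).sum() if q == 1 else np.square(w).sum())
    return logistic_loss(z, Y).sum() + reg

# Synthetic movement-detection data: 2-d features, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 0.5, (50, 2)), rng.normal(-1.0, 0.5, (50, 2))])
Y = np.concatenate([np.ones(50), -np.ones(50)])

w, b, lam = np.zeros(2), 0.0, 0.1
for _ in range(500):                           # plain subgradient descent
    z = X @ w + b
    g = -Y / (1.0 + np.exp(Y * z))             # dL_lr/dz
    w -= 0.01 * (X.T @ g + lam * np.sign(w))   # subgradient of the l1 term
    b -= 0.01 * g.sum()

acc = np.mean(np.sign(X @ w + b) == Y)
print(acc)   # the two well-separated clusters should be classified accurately
```

The q=2 branch uses the squared ℓ2 penalty for convenience; either form trades data fit against weight magnitude in the same way as Eq. (1).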
The ℓ1 norm has the tendency to result in sparse classifiers which assign non-zero weights to only a small subset of the available features. This allows us to infer which brain regions and frequencies are most important for accurate predictions. The ℓ2 norm tends to yield slightly better classification results (and is easier to optimize) but is not as interpretable, as it typically assigns small weights to many features. The loss functions L differ for the two algorithms mentioned above. We denote the loss function for logistic regression as L_lr and for SVMs as L_svm. The exact definitions are:

    L_lr(z, y) = log(1 + exp(-yz)),    L_svm(z, y) = max(1 - yz, 0).    (2)

Multiclass Classification: A second setting of interest is the differentiation of fingers. Here we do not want to predict if a finger is moving but which one. Consequently, at any time point t we could have one of K possible labels, such as "Index Finger" (y_t = 1), "Ring Finger" (y_t = 2), etc. We adopt the Crammer and Singer multi-class adaptation of support vector machines (MCSVM) [8]. For each class k ∈ {1, ..., K}, we learn class-specific parameters w_k, b_k. The loss focuses only on pairwise comparisons between the different classes and ensures that w_k^⊤ x_t + b_k ≥ w_r^⊤ x_t + b_r + 1 if y_t = k, for any r ≠ k. For completeness, we re-state the optimization problem:

    min_{(w_1,b_1),…,(w_K,b_K)} Σ_{t=1}^{T} Σ_{r ≠ y_t} max(1 + w_r^⊤ x_t + b_r − (w_{y_t}^⊤ x_t + b_{y_t}), 0) + λ Σ_{k=1}^{K} |w_k|_q.    (3)

Similar to the binary classification scenario, the constant λ ≥ 0 regulates the trade-off between complexity and sparseness.

Multitask Learning: In the movement detection setting, each finger is learned as an independent classification problem. In the finger discrimination setting, we actively discriminate between the individual fingers. Multitask learning (MTL) is a way to combine the binary finger movement detection problems by learning them jointly [5].
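As an illustration of the loss inside Eq. (3), the sketch below evaluates the pairwise margin violations for a single example; the per-class scores s_k = w_k^⊤ x_t + b_k are hypothetical numbers:

```python
import numpy as np

def multiclass_hinge_loss(scores, y):
    # Sum over r != y of max(1 + s_r - s_y, 0), as in Eq. (3).
    margins = 1.0 + scores - scores[y]
    margins[y] = 0.0                   # exclude the r == y term
    return np.maximum(margins, 0.0).sum()

scores = np.array([2.0, 0.5, -1.0])    # hypothetical scores for K = 3 fingers

# If the correct class is 0, it beats every other class by more than the
# unit margin, so the loss is zero.
print(multiclass_hinge_loss(scores, 0))   # 0.0

# If the correct class is 2, both margins are violated.
print(multiclass_hinge_loss(scores, 2))   # 4.0 + 2.5 = 6.5
```

The loss is zero exactly when the correct class outscores every other class by at least 1, which is the constraint stated above.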
In the setting of brain decoding, it seems reasonable to assume that there are certain features associated with the general cortical processing of finger movements. This is analogous to the notion of language processing and articulation in cortical areas: functional magnetic resonance imaging (fMRI) studies have shown that although speech is represented in general cortical areas, individual features specific to different kinds of words can be found [16, 23]. We adopt the MTL adaptation for SVMs of [9], and an analogous framework for logistic regression, which leverages the commonalities across learning tasks by modeling them explicitly with an additional shared weight vector w_0. The prediction at time t for finger k is defined as ŷ_t = (w_0 + w_k)^⊤ x_t. The corresponding optimization problem becomes:

    min_{w_0,w_1,…,w_K} λ_0 |w_0| + Σ_{k=1}^{K} [ Σ_{t=1}^{T} L((w_0 + w_k)^⊤ x_t, y_t) + λ_k |w_k|_q ].    (4)

The parameter λ_0 regulates how much of the learning is shared. If λ_0 → +∞, then w_0 = 0 and we recover the original binary classification setting mentioned above. On the other hand, setting λ_0 = 0 and λ_k ≫ 0 for all k > 0 will force the task-specific weight vectors w_k to zero; as a result, one would learn only a single classifier with weight vector w_0 for generic finger movement.

4 Results

In this section we evaluate our algorithms for ipsilateral decoding on three subjects. First, we approximate the time lag between the ECoG signal and finger movement; then we present decoding results on finger movement detection, discrimination, and joint decoding of all fingers in one hand.

Figure 1: Decoding time lag for ipsilateral finger movement in Subject 1. The x-axis is the presumed time lag δT (ms) between input feature vectors and target labels, and the y-axis is the area under the ROC curve computed from the ℓ1-regularized logistic regression model.
The bold black line is the average AUC, and the best decoding time lag is indicated by the black dotted line.

Time Lag: We first study the effect of the decoding time lag between cortical signal and movement. The decoding accuracy is computed by shifting the feature dataset x_t and the target dataset y_t by a presumed number of sample points (i.e., we evaluate the performance of the decoder h(x_t) = y_{t+δT} while increasing the value of δT). The best time lag is selected as the value of δT which leads to the best decoding accuracy. Figure 1 shows the decoding accuracy as a function of time lag for four individual finger movements in Subject 1. Offsets between 0 and 800 ms are tested for all fingers and an average offset time is computed. The average time lag for ipsilateral finger movement in Subject 1 is observed to be around 158 ms. This is in accordance with previous studies by our group which show similar time lags between cortical activity and actual movements [38]. All further analysis is based on cortical activity (features) shifted relative to movement by the average time lag reported here.

Figure 2: ROC curves for the ipsilateral finger movement decoder for (a) Subject 1, (b) Subject 2, and (c) Subject 3. The horizontal axis shows the false positive rate, and the vertical axis shows the true positive rate. The dotted line is the accuracy of a random classifier. Classifiers with a higher area under the ROC curve (AUC) have better classification performance.

Detecting Finger Movement: We characterize the movement detection task as a binary classification.
We first set a threshold thresh and label the target y_t as 1 if the velocity v_t at time t satisfies v_t ≥ thresh, and as -1 otherwise. Then, we use ℓ1-regularized logistic regression for the binary classification. We use receiver operating characteristic (ROC) curves to evaluate the performance of the binary classification. The ROC curve is widely used in signal estimation and detection theory; it is a graphical plot of the true positive rate versus the false positive rate. ROC analysis allows the user to pick the optimal discrimination threshold for the binary classifier. We pick the regularizer λ using the validation dataset. Figure 2 shows the ROC curves for the three subjects. This demonstrates that ℓ1-regularized logistic regression is a powerful tool for detecting finger movement.

Finger Discrimination: In this section, we study how to discriminate which finger has made the movement. We first extract from the time series the sample points at which a finger is moving. We then apply the multiclass SVM to do the classification. The results are shown as confusion matrices in Figure 3, where the colorbar shows the accuracy. Each row of the matrix represents the finger that actually moved and each column represents the predicted finger. Each element of the matrix shows the percentage of all movements of a particular finger that have been classified as a particular predicted finger. Note that the accuracy of a random multiclass classifier is 1/(number of fingers). It can be concluded that the ECoG signal contains useful information to discriminate individual finger movements.

Figure 3: Note from authors after publication: Results in this figure are invalid (see note in introduction). Confusion matrix of finger movement multiclass classification. The rows are the actual movement, and the columns are the predicted movement.
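The threshold-then-evaluate procedure used for movement detection can be sketched as follows; the velocities, threshold, and decoder scores are made-up illustrative values, and AUC is computed via its pairwise-ranking interpretation:

```python
import numpy as np

def roc_auc(scores, labels):
    # AUC equals the probability that a randomly chosen positive example
    # receives a higher score than a randomly chosen negative one.
    pos, neg = scores[labels == 1], scores[labels == -1]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Step 1: threshold glove velocities into movement (+1) / rest (-1) labels.
velocities = np.array([0.0, 0.9, 0.1, 1.4, 0.05, 0.8])
labels = np.where(velocities >= 0.5, 1, -1)

# Step 2: score each time point with a (hypothetical) decoder and compute AUC.
scores = np.array([-0.2, 0.7, 0.3, 1.1, -0.5, 0.4])
print(roc_auc(scores, labels))   # 1.0: every movement outranks every rest point
```

An AUC of 0.5 corresponds to the random classifier shown as the dotted line in Figure 2; 1.0 means movement and rest are perfectly separable by some threshold.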
4.1 Learning Commonality from the Brain Activity

In this section, we show how multitask learning improves the performance of the classifier. Although multitask learning has been employed in the context of brain signal decoding [2], we are the first to apply it to ECoG signals in humans. We group all the individual finger movement tasks together, such that each task has similarity with the others. First, we evaluate the performance of single-task learning using the SVM. Then, we study SVM-based multitask learning. As shown in Equation 4, we trade off between modeling the joint component and modeling the class-specific components by adjusting the parameters λ_0 and λ. We search over a number of regularization constants (λ_0, λ) and pick the parameters that lead to the highest average AUC over all tasks. Table 1 shows the comparison of SVM-based single-task learning and multitask learning. Here we evaluate the multitask learning algorithm based on the improvement in (1-AUC), the area above the curve. The average improvement of the decoder for the three patients is 25.53%, 5.60%, and 18.57%, respectively. This supports our assumption that there exists brain activity that controls finger movement irrespective of the particular finger. By carefully searching for the parameters that best regulate the trade-off between learning the commonality among all finger movements and the specificity of each exact finger movement, the classification algorithm can be significantly improved. We also compare ℓ1/ℓ2-regularized logistic-regression-based multitask learning with SVM-based multitask learning. There is an improvement in (1-AUC) for logistic-regression-based multitask learning as well, which again illustrates that multitask learning is particularly helpful for learning similar tasks that are controlled by the brain. However, we prefer SVM-based multitask learning because of its larger improvement.
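A minimal sketch of the shared-plus-specific parameterization behind Eq. (4); the toy tasks, weights, and regularization constants are our own illustrative choices, not values from the experiments:

```python
import numpy as np

def mtl_objective(w0, W, X_tasks, Y_tasks, lam0, lam, q=2):
    # Eq. (4): penalty on the shared vector w0 plus, per task k,
    # the SVM hinge loss of (w0 + wk) and a penalty on wk.
    obj = lam0 * np.linalg.norm(w0)
    for wk, X, Y in zip(W, X_tasks, Y_tasks):
        z = X @ (w0 + wk)
        obj += np.maximum(1.0 - Y * z, 0.0).sum()
        obj += lam * (np.abs(wk).sum() if q == 1 else np.square(wk).sum())
    return obj

# Two toy "finger" tasks generated from one common direction w_true,
# mimicking brain activity shared across all finger movements.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -1.0, 0.5])
X_tasks = [rng.normal(0.0, 1.0, (20, 3)) for _ in range(2)]
Y_tasks = [np.sign(X @ w_true) for X in X_tasks]

# With large lambda_k the task-specific parts collapse to zero, leaving a
# single shared classifier w0 to explain both tasks.
W_zero = [np.zeros(3), np.zeros(3)]
obj_shared = mtl_objective(w_true, W_zero, X_tasks, Y_tasks, lam0=0.1, lam=1.0)
obj_null = mtl_objective(np.zeros(3), W_zero, X_tasks, Y_tasks, lam0=0.1, lam=1.0)
print(obj_shared, obj_null)   # the shared direction fits far better than w0 = 0
```

Varying lam0 and lam here mirrors the (λ_0, λ) grid search described above: large λ_0 suppresses sharing, while large λ_k suppresses the task-specific deviations.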
Table 1: Comparison of SVM-based single-task learning (STL) and SVM-based multi-task learning (MTL). The parameters are chosen on the validation dataset: λ_0 = 10^-2 and λ = 10^4 for Subject 1, λ_0 = 1 and λ = 10^2 for Subject 2, and λ_0 = 10^2 and λ = 10^-2 for Subject 3. The best decoding performance is indicated in bold.

           Subject 1          Subject 2          Subject 3
AUC        STL      MTL       STL      MTL       STL      MTL
Thumb      N/A      N/A       0.7710   0.7845    0.7680   0.8611
Index      0.8477   0.8494    0.9061   0.8948    0.7454   0.8242
Middle     0.8393   0.8569    0.9021   0.8990    0.9459   0.9481
Ring       0.8000   0.8561    0.8888   0.8894    0.7404   0.7479
Little     0.7425   0.7865    0.7124   0.7586    0.7705   0.7801

5 Weight Analysis

An important part of decoding finger movements from cortical activity is mapping the features back to the cortical domain. Physiologically, it is important to understand the features which contribute most to the decoding algorithms, i.e. the features with the highest weights. As shown in Table 2, the decoding accuracy, indicated by the AUC, does not change much as we increase the number of features used for classification. This signifies that, from the large feature set used for decoding, a few core features are the most important. To visualize these core features, we mapped the top 30 features back to the brain. Figure 4 shows the normalized weights of the features used to classify finger movements from non-movements. It is apparent from the figure that the features with the highest weights fall in the DLPFC and premotor areas. This is what we would expect, since these two areas are the ones most involved in the planning of motor movements. As previously reported, the frequency range with the highest weights falls in the lower frequencies for ipsilateral movements [38]. In our case, the frequencies fall in the delta-alpha range. As noted by Tallon-Baudry, attention networks of the brain affect oscillatory synchrony at frequencies as low as the theta-alpha range [31].
Table 2: The area under the curve (AUC) as a function of the number of features used for classification. Features were selected in decreasing order of their respective absolute weights from logistic regression with ℓ1 regularization.

# features   1      2      4      8      16     32     64     256    4096
AUC          0.681  0.717  0.755  0.787  0.803  0.807  0.807  0.807  0.808

Figure 4: Brain maps representing the weights of the top 30 features for each of the three subjects. They show the variability in cortical processing of ipsilateral finger movements. It can also be seen that cortical processing occurs as a network involving dorsolateral prefrontal cortex, pre-motor and motor areas. The frequency range for these features is in the delta and alpha range, i.e. the low frequency range.

6 Discussion

The notion that motor cortex plays a role in ipsilateral body movements was first supported by the finding of Nyberg-Hansen et al. that 15% of corticospinal neurons do not decussate in cats [22]. Originally this was felt to represent more axial motor control. Further studies using single-neuron recordings in monkey models extended this observation to include ipsilateral hand and finger function. Tanji et al. demonstrated that a small percentage of primary motor cortical neurons showed increased activity with ipsilateral hand movements [32]. This site was found to be anatomically distinct from contralateral hand sites and, when stimulated, produced ipsilateral hand movements [1]. Additionally, a larger subset of premotor neurons was found to demonstrate more robust activations with cues to initiate movement during both ipsilateral and contralateral movements than primary motor sites [3, 6]. These findings in animal models support the conclusion that a small percentage of motor cortex and a larger percentage of premotor cortex participate in control of ipsilateral limb and hand movements.
In humans, there appears to be a dichotomy in how motor regions contribute, depending on whether the primary or non-primary motor cortex is examined. Using fMRI, Newton et al. demonstrated a negative change from baseline in the BOLD signal in M1 associated with ipsilateral movements, and postulated that this represents increased inhibition [21]. Verstynen et al., however, recently published contrasting results. Their group showed that anatomically distinct primary motor sites demonstrated increased activation that became more pronounced during the execution of complex movements [36]. The role that premotor cortex plays appears to be distinct from that of primary motor cortex. In normal subjects, fMRI shows more robust bilateral activation of the dorsal premotor cortex with either contralateral or ipsilateral hand movements [15]. Huang et al. (2004) demonstrated that ipsilateral premotor areas have magnetoencephalography (MEG) dipole peak latencies that significantly precede those of contralateral M1 sensorimotor cortex during unilateral finger movements. Using electroencephalography (EEG), ipsilateral hand movements have been shown to induce alterations in cortical potentials prior to movement; this is referred to as premotor positivity [33, 29]. Spectral analyses of EEG signals have shown bihemispheric low-frequency responses with various finger and hand movements. Utilizing electrocorticography (ECoG), Wisneski et al. demonstrated more definitively that the cortical physiology associated with ipsilateral hand movements shows lower-frequency spectral changes, earlier timing, and predominantly premotor localization compared with the cortical physiology associated with contralateral hand movements [38]. Taken together, these findings support a motor-planning role, rather than an execution role, for these areas in ipsilateral hand actions.
Decoding the information present in the ECoG signal with regard to ipsilateral finger movements is important in defining the potential use of BCI methodologies for patients with hemispheric dysfunction due to stroke or trauma. If high-resolution motor kinematics can be decoded from the ECoG signal (e.g., individual finger flexion and extension), a BCI platform could potentially be created to restore function to a stroke-induced paretic hand. Since up to one-half of hemispheric stroke patients are chronically left with permanent loss of function in their affected hand, this could have substantial clinical impact [20]. Functional imaging has shown that these severely affected patients have increased activity in the premotor regions of their unaffected hemispheres [28, 37]. The exact role this activity plays is still unclear. It may simply be an indicator of a more severe outcome [35] or an adaptive mechanism to optimize an already poor situation [13]. Thus, incomplete recovery and its association with heightened ipsilateral activation may reflect the up-regulation of motor planning together with an inability to execute the selected motor choice. In this situation, a BCI may provide a unique opportunity to aid in actuating the nascent premotor commands. By decoding the brain signals associated with a given motor intention, the BCI could convert these signals into commands controlling a robotic assist device that would allow for improved hand function (i.e., a robotic glove that opens and closes the hand, or a functional electrical stimulator that activates the nerves and muscles of the hand). The BCI would allow the ipsilateral premotor cortex to bypass the physiological bottleneck created by the injured and dysfunctional contralateral primary cortex (due to stroke) and the small and variable percentage of uncrossed motor fibers from ipsilateral M1.
This new methodology would allow for restoration of function in chronically and severely affected subjects for whom existing methods of rehabilitation have not achieved sufficient recovery.

7 Conclusion

To our knowledge, this work describes the first instance of successful detection of individual finger movements from human ipsilateral ECoG signals. In this paper, we present a general decoding framework using the following algorithms: (1) $\ell_1$-regularized logistic regression for detecting finger movement; (2) multiclass support vector machines to discriminate between fingers; and (3) multitask learning, applied to the ECoG signal for the first time, to improve decoding accuracy. The results presented here suggest that information about the moving fingers exists in the cortex ipsilateral to them and can be decoded with high accuracy using machine learning algorithms. These results hold great potential for neuroprosthetics and BCI. For patients suffering from stroke and hemiparesis, decoding finger movements from the unaffected hemisphere could be of tremendous help. Our future goals involve simultaneous decoding of finger and arm movements (using a standard center-out joystick task) from both the ipsilateral and contralateral hemispheres. Another important goal is to use these decoding results in real time and demonstrate their utility for BCI.

References

[1] H. Aizawa, H. Mushiake, M. Inase, and J. Tanji. An output zone of the monkey primary motor cortex specialized for bilateral hand movement. Experimental Brain Research, 82(1):219–221, 1990. [2] M. Alamgir, M. Grosse-Wentrup, and Y. Altun. Multitask learning for brain-computer interfaces. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 9:17–24, 2010. [3] C. Brinkman and R. Porter. Supplementary motor area in the monkey: activity of neurons during performance of a learned motor task. Journal of Neurophysiology, 42(3):681, 1979. [4] E. Buch, C. Weber, L.
Cohen, C. Braun, M. Dimyan, T. Ard, J. Mellinger, A. Caria, S. Soekadar, A. Fourkas, et al. Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke, 39(3):910, 2008. [5] R. Caruana. Multitask learning. Machine learning, 28:41–75, 1997. [6] P. Cisek, D. Crammond, and J. Kalaska. Neural activity in primary motor and dorsal premotor cortex in reaching tasks with the contralateral versus ipsilateral arm. Journal of neurophysiology, 89(2):922, 2003. [7] C. Cortes and V. Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995. [8] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292, 2002. [9] T. Evgeniou and M. Pontil. Regularized multi–task learning. In KDD, pages 109–117, 2004. [10] P. Fox, J. Perlmutter, and M. Raichle. A stereotactic method of anatomical localization for positron emission tomography. Journal of Computer Assisted Tomography, 9(1):141, 1985. [11] W. Freeman, M. Holmes, B. Burke, and S. Vanhatalo. Spatial spectra of scalp eeg and emg from awake humans. Clinical Neurophysiology, 114(6):1053–1068, 2003. [12] A. Georgopoulos, J. Kalaska, R. Caminiti, and J. Massey. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. Journal of Neuroscience, 2(11):1527, 1982. [13] C. Gerloff, K. Bushara, A. Sailer, E. Wassermann, R. Chen, T. Matsuoka, D. Waldvogel, G. Wittenberg, K. Ishii, L. Cohen, et al. Multimodal imaging of brain reorganization in motor areas of the contralesional hemisphere of well recovered patients after capsular stroke. Brain, 129(3):791, 2006. [14] L. Hochberg, M. Serruya, G. Friehs, J. Mukand, M. Saleh, A. Caplan, A. Branner, D. Chen, R. Penn, and J. Donoghue. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099):164–171, 2006. 8 [15] H. Johansen-Berg, M. Rushworth, M. Bogdanovic, U. 
Kischka, S. Wimalaratna, and P. Matthews. The role of ipsilateral premotor cortex in hand movement after stroke. Proceedings of the National Academy of Sciences, 99(22):14518, 2002. [16] M. Just, V. Cherkassky, S. Aryal, and T. Mitchell. A neurosemantic theory of concrete noun representation based on the underlying brain codes. 2010. [17] E. Leuthardt, Z. Freudenberg, D. Bundy, and J. Roland. Microscale recording from human motor cortex: implications for minimally invasive electrocorticographic brain-computer interfaces. Journal of Neurosurgery: Pediatrics, 27(1), 2009. [18] K. Miller, S. Makeig, A. Hebb, R. Rao, M. Dennijs, and J. Ojemann. Cortical electrode localization from X-rays and simple mapping for electrocorticographic research. Journal of Neuroscience Methods, 162(1-2):303–308, 2007. [19] D. Moran and A. Schwartz. Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology, 82(5):2676, 1999. [20] H. Nakayama, H. Jørgensen, H. Raaschou, and T. Olsen. Recovery of upper extremity function in stroke patients: the Copenhagen Stroke Study. Archives of Physical Medicine and Rehabilitation, 75(4):394, 1994. [21] J. Newton, A. Sunderland, and P. Gowland. fMRI signal decreases in ipsilateral primary motor cortex during unilateral hand movements are related to duration and side of movement. Neuroimage, 24(4):1080–1087, 2005. [22] R. Nyberg-Hansen and A. Brodal. Sites of termination of corticospinal fibers in the cat. An experimental study with silver impregnation methods. The Journal of Comparative Neurology, 120(3):369–391, 2004. [23] S. Petersen, P. Fox, M. Posner, M. Mintum, and M. Raichle. Positron emission tomographic studies of the cortical anatomy of single-word processing. Cognitive Psychology: Key Readings, page 109, 2004. [24] G. Pfurtscheller and A. Aranibar. Event-related cortical desynchronization detected by power measurements of scalp EEG.
Electroencephalography and Clinical Neurophysiology, 42(6):817–826, 1977. [25] G. Pfurtscheller, C. Guger, G. Muller, G. Krausz, and C. Neuper. Brain oscillations control hand orthosis in a tetraplegic. Neuroscience Letters, 292(3):211–214, 2000. [26] S. Ryali and V. Menon. Feature selection and classification of fMRI data using logistic regression with $\ell_1$ norm regularization. NeuroImage, 47:S57, 2009. [27] G. Schalk, D. McFarland, T. Hinterberger, N. Birbaumer, and J. Wolpaw. BCI2000: a general-purpose brain-computer interface system. IEEE Transactions on Biomedical Engineering, 51(6):1034–1043, 2004. [28] R. Seitz, P. Hoflich, F. Binkofski, L. Tellmann, H. Herzog, and H. Freund. Role of the premotor cortex in recovery from middle cerebral artery infarction. Archives of Neurology, 55(8):1081, 1998. [29] H. Shibasaki and M. Kato. Movement-associated cortical potentials with unilateral and bilateral simultaneous hand movement. Journal of Neurology, 208(3):191–199, 1975. [30] R. Srinivasan, P. Nunez, and R. Silberstein. Spatial filtering and neocortical dynamics: estimates of EEG coherence. IEEE Transactions on Biomedical Engineering, 45(7):814–826, 1998. [31] C. Tallon-Baudry. Oscillatory synchrony and human visual cognition. Journal of Physiology-Paris, 97(2-3):355–363, 2003. [32] J. Tanji, K. Okano, and K. Sato. Neuronal activity in cortical motor areas related to ipsilateral, contralateral, and bilateral digit movements of the monkey. Journal of Neurophysiology, 60(1):325, 1988. [33] I. Tarkka and M. Hallett. Cortical topography of premotor and motor potentials preceding self-paced, voluntary movement of dominant and non-dominant hands. Electroencephalography and Clinical Neurophysiology, 75(1-2):36–43, 1990. [34] D. Taylor and A. Schwartz. Direct cortical control of 3D neuroprosthetic devices. US Patent App. 10/495,207, Aug. 17, 2004. [35] A. Turton, S. Wroe, N. Trepte, C. Fraser, and R. Lemon.
Contralateral and ipsilateral emg responses to transcranial magnetic stimulation during recovery of arm and hand function after stroke. Electroencephalography and Clinical Neurophysiology/Electromyography and Motor Control, 101(4):316–328, 1996. [36] T. Verstynen, J. Diedrichsen, N. Albert, P. Aparicio, and R. Ivry. Ipsilateral motor cortex activity during unimanual hand movements relates to task complexity. Journal of Neurophysiology, 93(3):1209, 2005. [37] C. Weiller, F. Chollet, K. Friston, R. Wise, and R. Frackowiak. Functional reorganization of the brain in recovery from striatocapsular infarction in man. Annals of Neurology, 31(5):463–472, 2004. [38] K. Wisneski, N. Anderson, G. Schalk, M. Smyth, D. Moran, and E. Leuthardt. Unique cortical physiology associated with ipsilateral hand movements and neuroprosthetic implications. Stroke, 39(12):3351, 2008. [39] J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan. Brain-computer interfaces for communication and control. Clinical neurophysiology, 113(6):767–791, 2002. [40] J. Wolpaw and D. McFarland. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proceedings of the National Academy of Sciences of the United States of America, 101(51):17849, 2004. 9
New Adaptive Algorithms for Online Classification

Francesco Orabona, DSI, Università degli Studi di Milano, Milano, 20135 Italy, orabona@dsi.unimi.it
Koby Crammer, Department of Electrical Engineering, The Technion, Haifa, 32000 Israel, koby@ee.technion.ac.il

Abstract

We propose a general framework for online learning in classification problems with time-varying potential functions in the adversarial setting. This framework allows us to design and prove relative mistake bounds for any generic loss function. The mistake bounds can be specialized to the hinge loss, allowing us to recover and improve the bounds of known online classification algorithms. By optimizing the general bound we derive a new online classification algorithm, called NAROW, that combines adaptive and fixed second-order information. We analyze the properties of the algorithm and illustrate its performance using synthetic datasets.

1 Introduction

Linear discriminative online algorithms have been shown to perform very well on binary and multiclass labeling problems [10, 6, 14, 3]. These algorithms work in rounds: at each round a new instance is given and the algorithm makes a prediction. After the true class of the instance is revealed, the learning algorithm updates its internal hypothesis. Often, such an update takes place only on rounds where the online algorithm makes a prediction mistake or when the confidence in the prediction is not sufficient. The aim of the classifier is to minimize the cumulative loss it suffers due to its predictions, such as the total number of mistakes. Until a few years ago, most of these algorithms used only first-order information of the input features. Recently [1, 8, 4, 12, 5, 9], researchers proposed to improve online learning algorithms by incorporating second-order information. Specifically, the Second-Order Perceptron (SOP) proposed by Cesa-Bianchi et al.
[1] builds on the famous Perceptron algorithm with an additional data-dependent, time-varying "whitening" step. Confidence-weighted learning (CW) [8, 4] and the adaptive regularization of weights algorithm (AROW) [5] are motivated by an alternative view: maintaining confidence in the weights of the linear models maintained by the algorithm. Both CW and AROW use the input data to modify both the weights and the confidence in them. CW and AROW are motivated by specific properties of natural-language-processing (NLP) data and indeed were shown to perform very well in practice, on NLP problems in particular. However, the theoretical foundations of this empirical success were not known, especially when using only the diagonal elements of the second-order information matrix. Filling this gap is one contribution of this paper. In this paper we extend and generalize the framework for deriving algorithms and analyzing them through a potential function [2]. Our framework contains as special cases the Second-Order Perceptron and a (variant of) AROW, and it can also be used to derive new algorithms based on other loss functions. For carefully designed algorithms, it is possible to bound the cumulative loss on any sequence of samples, even adversarially chosen ones [2]. In particular, many of the recent analyses are based on the online convex optimization framework, which focuses on minimizing a sum of convex functions. Two common viewpoints for online convex optimization are regularization [15] and primal-dual progress [16, 17, 13]. Recently, new bounds have been proposed for time-varying regularizations in [18, 9], focusing on the general case of regression problems. The proof technique derived from our framework extends the work of Kakade et al. [13] to support time-varying potential functions. We also show how the use of widely used classification losses, such as the hinge loss, allows us to derive new, more powerful mistake bounds superior to existing bounds.
Moreover, the framework supports the design of aggressive algorithms, i.e., algorithms that update their hypothesis not only when they make a prediction mistake. Finally, current second-order algorithms suffer from a common problem. All these algorithms maintain the cumulative second moment of the input features, and its inverse, qualitatively speaking, is used as a learning rate. Thus, if there is a single feature with a large second moment in the prefix of the input sequence, its effective learning rate drops to a relatively low value, and the learning algorithm takes more time to update its value. When the instances are ordered such that the value of this feature seems to be correlated with the target label, such algorithms will set the weight corresponding to this feature to a wrong value and will decrease its associated learning rate to a low value. This combination makes it hard to recover from the wrong value set to this weight. Our final contribution is a new algorithm that adapts the way the second-order information is used. We call this algorithm Narrow Adaptive Regularization Of Weights (NAROW). Intuitively, it interpolates its update rule between adaptive and fixed second-order information, to obtain a narrower decrease of the learning rate for commonly appearing features. We derive a bound for this algorithm and illustrate its properties using synthetic data simulations.

2 Online Learning for Classification

We work in the online binary classification scenario, where learning proceeds in rounds. At each round $t$, an instance $x_t \in \mathbb{R}^d$ is presented to the algorithm, which then predicts a label $\hat{y}_t \in \{-1, +1\}$. Then, the correct label $y_t$ is revealed, and the algorithm may modify its hypothesis. The aim of the online learning algorithm is to make as few mistakes as possible on any sequence of samples/labels $\{(x_t, y_t)\}_{t=1}^T$.
In this paper we focus on linear prediction functions of the form $\hat{y}_t = \mathrm{sign}(w_t^\top x_t)$. We strive to design online learning algorithms for which it is possible to prove a relative mistake bound or a loss bound. A typical such analysis bounds the cumulative loss the algorithm suffers, $\sum_{t=1}^T \ell(w_t, x_t, y_t)$, by the cumulative loss of any classifier $u$ plus an additional penalty called the regret: $R(u) + \sum_{t=1}^T \ell(u, x_t, y_t)$. Given that we focus on classification, we are more interested in a relative mistake bound, where we bound the number of mistakes of the learner by $R(u) + \sum_{t=1}^T \ell(u, x_t, y_t)$. Since the classifier $u$ is arbitrary, we can choose, in particular, the best classifier that can be found in hindsight given all the samples. Often $R(\cdot)$ depends on a function measuring the complexity of $u$ and on the number of samples $T$, and $\ell$ is a non-negative loss function. Usually $\ell$ is chosen to be a convex upper bound of the 0/1 loss. We will also write $\ell_t(u) = \ell(u, x_t, y_t)$. In the following we denote by $\mathcal{M}$ the set of round indexes on which the algorithm made a mistake; we assume that the algorithm always updates on such rounds. Similarly, we denote by $\mathcal{U}$ the set of margin-error rounds, that is, rounds on which the algorithm updates its hypothesis and the prediction is correct, but the loss $\ell_t(w_t)$ is different from zero. Their cardinalities will be denoted by $M$ and $U$, respectively. Formally, $\mathcal{M} = \{t : \mathrm{sign}(w_t^\top x_t) \neq y_t \ \&\ w_t \neq w_{t+1}\}$ and $\mathcal{U} = \{t : \mathrm{sign}(w_t^\top x_t) = y_t \ \&\ w_t \neq w_{t+1}\}$. An algorithm that updates its hypothesis only on mistake rounds is called conservative (e.g., [3]). Following previous naming conventions [3], we call aggressive an algorithm that updates its rule on rounds where the loss $\ell_t(w_t)$ is different from zero, even if its prediction was correct. We now define a few basic concepts from convex analysis that will be used in the paper.
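The sets $\mathcal{M}$ and $\mathcal{U}$ can be made concrete with a plain Perceptron-style loop; the aggressive variant below simply reuses the Perceptron update on margin-error rounds (an illustrative choice, not one of the algorithms analyzed later), and the toy data are ours:

```python
import numpy as np

def online_perceptron(X, Y, aggressive=False):
    """Online loop recording the mistake rounds M and, for the aggressive
    variant, the margin-error rounds U (correct sign but hinge loss > 0)."""
    w = np.zeros(X.shape[1])
    M, U = [], []
    for t, (x, y) in enumerate(zip(X, Y)):
        margin = y * w.dot(x)
        if np.sign(w.dot(x)) != y:        # prediction mistake
            M.append(t)
            w = w + y * x
        elif aggressive and margin < 1:    # margin error: correct but loss > 0
            U.append(t)
            w = w + y * x
    return w, M, U

# Separable toy stream: the label is the sign of the first coordinate,
# pushed away from zero to create a margin.
rng = np.random.RandomState(0)
X = rng.randn(300, 5)
X[:, 0] = np.where(X[:, 0] >= 0, X[:, 0] + 0.5, X[:, 0] - 0.5)
Y = np.sign(X[:, 0])
w_c, M_c, U_c = online_perceptron(X, Y, aggressive=False)
w_a, M_a, U_a = online_perceptron(X, Y, aggressive=True)
```

The conservative run has an empty $\mathcal{U}$ by construction; the aggressive run also updates on low-margin correct rounds.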
Given a convex function $f : X \to \mathbb{R}$, its subgradient $\partial f(v)$ at $v$ satisfies $\forall u \in X,\ f(u) - f(v) \ge (u - v) \cdot \partial f(v)$. The Fenchel conjugate of $f$, $f^* : S \to \mathbb{R}$, is defined by $f^*(u) = \sup_{v \in S}\big(v \cdot u - f(v)\big)$. A differentiable function $f : X \to \mathbb{R}$ is $\beta$-strongly convex w.r.t. a norm $\|\cdot\|$ if for any $u, v \in S$ and $\alpha \in (0, 1)$, $f(\alpha u + (1-\alpha)v) \le \alpha f(u) + (1-\alpha) f(v) - \frac{\beta}{2}\alpha(1-\alpha)\|u - v\|^2$. Strong convexity turns out to be a key property in the design of online learning algorithms.

3 General Algorithm and Analysis

We now introduce a general framework for designing online learning algorithms and a general lemma which serves as a tool to prove their relative regret bounds. Our algorithm builds on previous algorithms for online convex programming, with one significant difference. Instead of using a fixed link function, as first-order algorithms do, we allow a sequence of link functions $f_t(\cdot)$, one for each time $t$. In a nutshell, the algorithm maintains a weight vector $\theta_t$. Given a new example, it uses the current link function $f_t$ to compute a prediction weight vector $w_t$. After the target label is received, it sets the new weight $\theta_{t+1}$ to $\theta_t$ minus the (scaled) subgradient of the loss at $w_t$. The algorithm is summarized in Fig. 1. The following lemma is a generalization of Corollary 7 in [13] and Corollary 3 in [9] to online learning. All the proofs can be found in the Appendix.

Lemma 1. Let $f_t$, $t = 1, \dots, T$, be $\beta_t$-strongly convex functions with respect to the norms $\|\cdot\|_{f_1}, \dots, \|\cdot\|_{f_T}$ over a set $S$, and let $\|\cdot\|_{f_t^*}$ be the respective dual norms. Let $f_0(0) = 0$, and let $x_1, \dots, x_T$ be an arbitrary sequence of vectors in $\mathbb{R}^d$. Assume that the algorithm in Fig. 1 is run on this sequence with the functions $f_t$. Then, for any $u \in S$ and any $\lambda > 0$, we have
$$\sum_{t=1}^T \eta_t z_t^\top\Big(\frac{1}{\lambda}w_t - u\Big) \le \frac{f_T(\lambda u)}{\lambda} + \sum_{t=1}^T \left(\frac{\eta_t^2 \|z_t\|_{f_t^*}^2}{2\lambda\beta_t} + \frac{1}{\lambda}\big(f_t^*(\theta_t) - f_{t-1}^*(\theta_t)\big)\right).$$
This lemma can appear difficult to interpret, but we now show that it is straightforward to use it to recover known bounds for different online learning algorithms. In particular, we can state the following corollary, which holds for any convex loss $\ell$ that upper bounds the 0/1 loss.

1: Input: A series of strongly convex functions $f_1, \dots, f_T$.
2: Initialize: $\theta_1 = 0$
3: for $t = 1, 2, \dots, T$ do
4:   Receive $x_t$
5:   Set $w_t = \nabla f_t^*(\theta_t)$
6:   Predict $\hat{y}_t = \mathrm{sign}(w_t^\top x_t)$
7:   Receive $y_t$
8:   if $\ell_t(w_t) > 0$ then
9:     $z_t = \partial \ell_t(w_t)$
10:    $\theta_{t+1} = \theta_t - \eta_t z_t$
11:  else
12:    $\theta_{t+1} = \theta_t$
13:  end if
14: end for

Figure 1: Prediction algorithm.

Corollary 1. Define $B = \sum_{t=1}^T \big(f_t^*(\theta_t) - f_{t-1}^*(\theta_t)\big)$. Under the hypothesis of Lemma 1, if $\ell$ is convex and upper bounds the 0/1 loss, and $\eta_t = \eta$, then for any $u \in S$ the algorithm in Fig. 1 has the following bound on the maximum number of mistakes $M$:
$$M \le \sum_{t=1}^T \ell_t(u) + \frac{f_T(u)}{\eta} + \eta \sum_{t=1}^T \frac{\|z_t\|_{f_t^*}^2}{2\beta_t} + \frac{B}{\eta}. \qquad (1)$$
Moreover, if $f_t(x) \le f_{t+1}(x)$, $\forall x \in S$, $t = 0, \dots, T-1$, then $B \le 0$.

A similar bound has recently been presented in [9] as a regret bound. Yet there are two differences. First, our analysis bounds the number of mistakes, a more natural quantity in the classification setting, rather than a general loss function. Second, we retain the additional term $B$, which may be negative and thus can give a better bound. Moreover, to choose the optimal tuning of $\eta$ we would need to know quantities that are unknown to the learner. We could use adaptive regularization methods, such as those proposed in [16, 18], but this way we would lose the possibility of proving mistake bounds for second-order algorithms, like the ones in [1, 5]. In the next section we show how to obtain bounds with automatic tuning, using additional assumptions on the loss function.

3.1 Better bounds for linear losses

The hinge loss, $\ell(u, x_t, y_t) = \max(1 - y_t u^\top x_t, 0)$, is a very popular evaluation metric in classification.
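The procedure in Fig. 1 can be instantiated directly. With the simplest link $f_t(x) = \frac{1}{2}\|x\|^2$ we have $\nabla f_t^*(\theta) = \theta$, and with the hinge loss and $\eta_t = 1$ the loop reduces to an aggressive Perceptron. A sketch under those assumptions (the identity link and the toy data are one admissible choice, not the only instantiation):

```python
import numpy as np

def figure1_algorithm(X, Y, grad_f_star, eta=1.0):
    """Generic loop of Fig. 1 with the hinge loss. grad_f_star plays the
    role of the link function grad f_t^*; the identity gives an aggressive
    Perceptron, other links give second-order variants."""
    theta = np.zeros(X.shape[1])
    mistakes = 0
    for x, y in zip(X, Y):
        w = grad_f_star(theta)           # line 5: w_t = grad f_t^*(theta_t)
        if np.sign(w.dot(x)) != y:       # line 6: predict sign(w.x)
            mistakes += 1
        if 1.0 - y * w.dot(x) > 0:       # line 8: hinge loss positive
            z = -y * x                   # line 9: subgradient of the hinge
            theta = theta - eta * z      # line 10: theta update
    return theta, mistakes

rng = np.random.RandomState(0)
X = rng.randn(300, 5)
X[:, 0] = np.where(X[:, 0] >= 0, X[:, 0] + 0.5, X[:, 0] - 0.5)
Y = np.sign(X[:, 0])
theta, m = figure1_algorithm(X, Y, grad_f_star=lambda t: t)
```

Swapping `grad_f_star` for $\theta \mapsto A^{-1}\theta$ with a data-dependent matrix $A$ yields the second-order algorithms discussed in Section 4.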
It has been used, for example, in support vector machines [7] as well as in many online learning algorithms [3], and it has been extended to the multiclass case [3]. Mistake bounds are often expressed in terms of the hinge loss. One reason is that it is a tighter upper bound on the 0/1 loss than other losses, such as the squared hinge loss. This loss is particularly interesting for us, however, because it allows an automatic tuning of the bound in (1). In particular, it is easy to verify that it satisfies the following condition:
$$\ell(u, x_t, y_t) \ge 1 + u^\top \partial\ell_t(w_t), \quad \forall u \in S,\ \forall w_t : \ell_t(w_t) > 0. \qquad (2)$$
Thanks to this condition we can state the following corollary for any loss satisfying (2).

Corollary 2. Under the hypothesis of Lemma 1, if $f_T(\lambda u) \le \lambda^2 f_T(u)$ and $\ell$ satisfies (2), then for any $u \in S$ and any $\lambda > 0$ we have
$$\sum_{t\in\mathcal{M}\cup\mathcal{U}} \eta_t \le L + \lambda f_T(u) + \frac{1}{\lambda}\left(B + \sum_{t\in\mathcal{M}\cup\mathcal{U}}\Big(\frac{\eta_t^2}{2\beta_t}\|z_t\|_{f_t^*}^2 - \eta_t w_t^\top z_t\Big)\right),$$
where $L = \sum_{t\in\mathcal{M}\cup\mathcal{U}} \eta_t \ell_t(u)$ and $B = \sum_{t=1}^T \big(f_t^*(\theta_t) - f_{t-1}^*(\theta_t)\big)$. In particular, choosing the optimal $\lambda$, we obtain
$$\sum_{t\in\mathcal{M}\cup\mathcal{U}} \eta_t \le L + \sqrt{2 f_T(u)}\sqrt{2B + \sum_{t\in\mathcal{M}\cup\mathcal{U}}\Big(\frac{\eta_t^2}{\beta_t}\|z_t\|_{f_t^*}^2 - 2\eta_t w_t^\top z_t\Big)}. \qquad (3)$$

The intuition behind this corollary is that a classification algorithm should be independent of the particular scaling of the hyperplane. In other words, $w_t$ and $\alpha w_t$ (with $\alpha > 0$) make exactly the same predictions, because only the sign of the prediction matters. Exactly this independence of a scale factor allows us to improve the mistake bound (1) to the bound in (3). Hence, when (2) holds, the update of the algorithm becomes essentially independent of the scale factor, and we obtain the better bound. Finally, note that when the hinge loss is used, the vector $\theta_t$ is updated as in an aggressive version of the Perceptron algorithm, with a possibly variable learning rate.

4 New Bounds for Existing Algorithms

We now show the versatility of our framework by proving better bounds for some known first-order and second-order algorithms.
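Before specializing the framework, condition (2) above is easy to check numerically for the hinge loss: whenever $\ell_t(w_t) > 0$ its subgradient is $z_t = -y_t x_t$, and $\max(1 - y_t u^\top x_t, 0) \ge 1 - y_t u^\top x_t = 1 + u^\top z_t$ for every $u$, because $\max(a, 0) \ge a$. A randomized sanity check (toy dimensions and sample count are arbitrary):

```python
import numpy as np

def hinge(u, x, y):
    """Hinge loss l(u, x, y) = max(1 - y * u.x, 0)."""
    return max(1.0 - y * u.dot(x), 0.0)

rng = np.random.RandomState(0)
holds = True
for _ in range(1000):
    u, w, x = rng.randn(4), rng.randn(4), rng.randn(4)
    y = float(rng.choice([-1, 1]))
    if hinge(w, x, y) > 0:               # condition (2) only required here
        z = -y * x                       # subgradient of the hinge at w
        holds = holds and (hinge(u, x, y) >= 1 + u.dot(z) - 1e-12)
```

The squared hinge loss, by contrast, does not satisfy (2), which is why the automatic tuning argument is stated for the (linear) hinge loss.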
4.1 An Aggressive p-norm Algorithm

We can use the algorithm in Fig. 1 to obtain an aggressive version of the p-norm algorithm [11]. Set $f_t(u) = \frac{1}{2(q-1)}\|u\|_q^2$, which is 1-strongly convex w.r.t. the norm $\|\cdot\|_q$. The dual norm of $\|\cdot\|_q$ is $\|\cdot\|_p$, where $1/p + 1/q = 1$. Moreover, set $\eta_t = 1$ on mistake rounds. Using the second bound of Corollary 2, and defining $R$ such that $\|x_t\|_p^2 \le R^2$, we have
$$M \le L + \sqrt{\frac{\|u\|_q^2}{q-1}}\sqrt{\sum_{t\in\mathcal{M}\cup\mathcal{U}}\big(\eta_t^2\|x_t\|_p^2 + 2\eta_t y_t w_t^\top x_t\big)} - \sum_{t\in\mathcal{U}}\eta_t \le L + \sqrt{\frac{\|u\|_q^2}{q-1}}\sqrt{MR^2 + \sum_{t\in\mathcal{U}}\big(\eta_t^2\|x_t\|_p^2 + 2\eta_t y_t w_t^\top x_t\big)} - \sum_{t\in\mathcal{U}}\eta_t.$$
Solving for $M$ we have
$$M \le L + \frac{\|u\|_q^2 R^2}{2(q-1)} + \frac{R\,\|u\|_q}{\sqrt{q-1}}\sqrt{\frac{\|u\|_q^2 R^2}{4(q-1)} + L + D} - \sum_{t\in\mathcal{U}}\eta_t, \qquad (4)$$
where $L = \sum_{t\in\mathcal{M}\cup\mathcal{U}} \eta_t\ell_t(u)$ and $D = \sum_{t\in\mathcal{U}}\Big(\frac{\eta_t^2\|x_t\|_p^2 + 2\eta_t y_t w_t^\top x_t}{R^2} - \eta_t\Big)$. We still have the freedom to set $\eta_t$ on margin-error rounds. If we set $\eta_t = 0$, the algorithm of Fig. 1 becomes the p-norm algorithm and we recover its best known bound [11]. However, if $0 \le \eta_t \le \min\Big(\frac{R^2 - 2y_t w_t^\top x_t}{\|x_t\|_p^2}, 1\Big)$, then $D$ is negative and $L \le \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u)$. Hence the aggressive updates give a better bound, thanks to the last term, which is subtracted from the bound. In the particular case $p = q = 2$ we recover the Perceptron algorithm. In particular, the minimum of $D$ under the constraint $\eta_t \le 1$ is attained at $\eta_t = \min\Big(\frac{R^2/2 - y_t w_t^\top x_t}{\|x_t\|^2}, 1\Big)$. If $R$ equals $\sqrt{2}$, we recover the PA-I update rule with $C = 1$. Note, however, that the mistake bound in (4) is better than the one proved for PA-I in [3] and those in [16]. Hence the bound (4) provides the first theoretical justification for the good performance of PA-I, and it can be seen as general evidence supporting aggressive updates over conservative ones.

4.2 Second Order Algorithms

Figure 2: NLP data: the number of words vs. the word rank on two sentiment data sets.

We now show how to derive, in a simple way, the bound of the SOP [1] and that of AROW [5]. Set $f_t(x) = \frac{1}{2}x^\top A_t x$, where $A_t = A_{t-1} + \frac{x_t x_t^\top}{r}$, $r > 0$, and $A_0 = I$. The functions $f_t$ are 1-strongly convex w.r.t.
the norms $\|x\|_{f_t}^2 = x^\top A_t x$. The dual functions of $f_t(x)$ are $f_t^*(x) = \frac{1}{2}x^\top A_t^{-1} x$, while $\|x\|_{f_t^*}^2 = x^\top A_t^{-1} x$. Denote $\chi_t = x_t^\top A_{t-1}^{-1} x_t$ and $m_t = y_t x_t^\top A_{t-1}^{-1}\theta_t$. With these definitions it is easy to see that the conservative version of the algorithm corresponds directly to SOP. The aggressive version corresponds to AROW, with a minor difference. In fact, the prediction of the algorithm in Fig. 1 specialized to this case is $y_t w_t^\top x_t = m_t \frac{r}{r+\chi_t}$; AROW, on the other hand, predicts with $m_t$. The sign of the predictions is the same, but here the aggressive version updates when $m_t \frac{r}{r+\chi_t} \le 1$, while AROW updates if $m_t \le 1$. To derive the bound, observe that by the Woodbury matrix identity we have $f_t^*(\theta_t) - f_{t-1}^*(\theta_t) = -\frac{(x_t^\top A_{t-1}^{-1}\theta_t)^2}{2(r + x_t^\top A_{t-1}^{-1}x_t)} = -\frac{m_t^2}{2(r+\chi_t)}$. Using the second bound in Corollary 2, and setting $\eta_t = 1$, we have
$$M + U \le L + \sqrt{u^\top A_T u}\,\sqrt{\sum_{t\in\mathcal{M}\cup\mathcal{U}}\Big(x_t^\top A_t^{-1} x_t + 2y_t w_t^\top x_t - \frac{m_t^2}{r+\chi_t}\Big)}$$
$$\le L + \sqrt{\|u\|^2 + \frac{1}{r}\sum_{t\in\mathcal{M}\cup\mathcal{U}}(u^\top x_t)^2}\,\sqrt{r\log(\det(A_T)) + \sum_{t\in\mathcal{M}\cup\mathcal{U}}\Big(2y_t w_t^\top x_t - \frac{m_t^2}{r+\chi_t}\Big)}$$
$$\le L + \sqrt{r\|u\|^2 + \sum_{t\in\mathcal{M}\cup\mathcal{U}}(u^\top x_t)^2}\,\sqrt{\log(\det(A_T)) + \sum_{t\in\mathcal{M}\cup\mathcal{U}}\frac{m_t(2r - m_t)}{r(r+\chi_t)}}.$$
This bound recovers the SOP bound in the conservative case, and slightly improves that of AROW in the aggressive case. It would be possible to improve the AROW bound even more by setting $\eta_t$ to a value different from 1 on margin-error rounds. We leave the details for a longer version of this paper.

4.3 Diagonal updates for AROW

Both CW and AROW have efficient versions that use diagonal matrices instead of full ones. In this case the complexity of the algorithm becomes linear in the dimension. Here we prove a mistake bound for the diagonal version of AROW, using Corollary 2. We denote $D_t = \mathrm{diag}\{A_t\}$, where $A_t$ is defined as in SOP and AROW, and $f_t(x) = \frac{1}{2}x^\top D_t x$. Setting $\eta_t = 1$, and using the second bound in Corollary 2 and Lemma 12 in [9], we have¹
$$M + U \le \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) + \sqrt{u^\top D_T u}\,\sqrt{r\sum_{i=1}^d \log\Big(\frac{\sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2}{r} + 1\Big) + 2U}$$
$$= \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) + \sqrt{\|u\|^2 + \frac{1}{r}\sum_{i=1}^d u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2}\,\sqrt{r\sum_{i=1}^d \log\Big(\frac{\sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2}{r} + 1\Big) + 2U}.$$
The presence of a mistake bound allows us to analyze theoretically the cases where this algorithm is advantageous with respect to a simple Perceptron. In particular, for NLP data the features are binary and it is often the case that most of the features are zero most of the time. On the other hand, these "rare" features are usually the most informative ones (e.g., [8]). Fig. 2 shows the number of times each feature (word) appears in two sentiment datasets vs. the word rank. Clearly there are a few very frequent words and many rare words. These exact properties were used to originally derive the CW algorithm; our analysis justifies that derivation. Concretely, the above considerations lead us to think that the optimal hyperplane $u$ will be such that
$$\sum_{i=1}^d u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2 \approx \sum_{i\in I} u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2 \le s\sum_{i\in I} u_i^2 \approx s\|u\|^2,$$
where $I$ is the set of informative and rare features and $s$ is the maximum number of times these features appear in the sequence. In general, whenever $\sum_{i=1}^d u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2 \le s\|u\|^2$ with $s$ small enough, it is possible to show that, with an optimal tuning of $r$, this bound is better than the Perceptron's. In particular, using a proof similar to the one in [1], for the conservative version of this algorithm it is enough to have $s < \frac{MR^2}{2d}$ and to set $r = \frac{sMR^2}{MR^2 - 2sd}$.

¹We did not optimize the constant multiplying $U$ in the bound.

5 A New Adaptive Second Order Algorithm

We now introduce a new algorithm with an update rule that interpolates between adaptive and fixed second-order information. We start from the first bound in Corollary 2. We set $f_t(x) = \frac{1}{2}x^\top A_t x$, where $A_t = A_{t-1} + \frac{x_t x_t^\top}{r_t}$ and $A_0 = I$. This is similar to the regularization used in AROW and SOP, but here $r_t > 0$ changes over time. Again, denote $\chi_t = x_t^\top A_{t-1}^{-1} x_t$, and set $\eta_t = 1$.
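Both the fixed-$r$ update of Sec. 4.2 and the time-varying $r_t$ update used here add a rank-one term $x_t x_t^\top / r$ to $A_t$, so $A_t^{-1}$ can be maintained incrementally with the Sherman-Morrison identity (the rank-one case of the Woodbury identity invoked above) instead of re-inverting a $d \times d$ matrix each round. A sketch with an explicit-inverse cross-check; the unconditional update loop exists only for the check, not as an exact rendition of AROW/SOP:

```python
import numpy as np

def second_order_update(Ainv, theta, x, y, r):
    """One second-order step: A <- A + x x^T / r and theta <- theta + y*x,
    with A^{-1} maintained via Sherman-Morrison in O(d^2)."""
    chi = x.dot(Ainv).dot(x)             # chi_t = x^T A^{-1} x
    Ax = Ainv.dot(x)
    Ainv_new = Ainv - np.outer(Ax, Ax) / (r + chi)
    return Ainv_new, theta + y * x

# Sanity check: the incremental inverse matches a direct inversion.
rng = np.random.RandomState(0)
d, r = 6, 2.0
A = np.eye(d)                            # explicit A_t, for the check only
Ainv = np.eye(d)
theta = np.zeros(d)
for _ in range(50):
    x = rng.randn(d)
    y = float(rng.choice([-1, 1]))
    A = A + np.outer(x, x) / r
    Ainv, theta = second_order_update(Ainv, theta, x, y, r)
inv_err = np.abs(Ainv - np.linalg.inv(A)).max()
```

This is exactly why the full-matrix algorithms cost $O(d^2)$ per round, and why the diagonal variants of Sec. 4.3 drop to $O(d)$.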
With these choices, we obtain the bound
$$M + U \le \sum_{t \in \mathcal{M} \cup \mathcal{U}} \ell_t(u) + \frac{\lambda \|u\|^2}{2} + \sum_{t \in \mathcal{M} \cup \mathcal{U}} \Big( \frac{\lambda (u^\top x_t)^2}{2 r_t} + \frac{\chi_t r_t}{2 \lambda (r_t + \chi_t)} - \frac{m_t (2 r_t - m_t)}{2 \lambda (r_t + \chi_t)} \Big),$$
which holds for any $\lambda > 0$ and any choice of $r_t > 0$. We would like to choose $r_t$ at each step to minimize the bound, in particular to keep the sum $\frac{\lambda (u^\top x_t)^2}{r_t} + \frac{\chi_t r_t}{\lambda (r_t + \chi_t)}$ small. Although we do not know the values of $(u^\top x_t)^2$ and $\lambda$, we can still achieve a good trade-off by setting $r_t = \frac{\chi_t}{b \chi_t - 1}$ when $\chi_t \ge \frac{1}{b}$, and $r_t = +\infty$ otherwise. Here $b$ is a parameter. With this choice we have $\frac{\chi_t r_t}{r_t + \chi_t} = \frac{1}{b}$ and $\frac{(u^\top x_t)^2}{r_t} = \frac{b \chi_t (u^\top x_t)^2}{r_t + \chi_t}$ when $\chi_t \ge \frac{1}{b}$. Hence we have
$$M + U - \frac{\lambda \|u\|^2}{2} - \sum_{t \in \mathcal{M} \cup \mathcal{U}} \ell_t(u) \le \sum_{t: b\chi_t > 1} \Big( \frac{\lambda b \chi_t (u^\top x_t)^2}{2 (r_t + \chi_t)} + \frac{1}{2 \lambda b} \Big) + \frac{1}{2\lambda} \sum_{t: b\chi_t \le 1} \chi_t - \sum_{t \in \mathcal{M} \cup \mathcal{U}} \frac{m_t (2 r_t - m_t)}{2 \lambda (r_t + \chi_t)}$$
$$\le \lambda b \sum_{t: b\chi_t > 1} \frac{\chi_t \|u\|^2 R^2}{2 (r_t + \chi_t)} + \frac{1}{2\lambda} \sum_{t \in \mathcal{M} \cup \mathcal{U}} \min\Big( \frac{1}{b}, \chi_t \Big) - \sum_{t \in \mathcal{M} \cup \mathcal{U}} \frac{m_t (2 r_t - m_t)}{2 \lambda (r_t + \chi_t)}$$
$$\le \frac{1}{2} \lambda b R^2 \|u\|^2 \log\det(A_T) + \frac{1}{2\lambda} \sum_{t \in \mathcal{M} \cup \mathcal{U}} \min\Big( \frac{1}{b}, \chi_t \Big) - \sum_{t \in \mathcal{M} \cup \mathcal{U}} \frac{m_t (2 r_t - m_t)}{2 \lambda (r_t + \chi_t)},$$
where in the last inequality we used an extension of Lemma 4 in [5] to varying values of $r_t$. Tuning $\lambda$ we have
$$M + U \le \sum_{t \in \mathcal{M} \cup \mathcal{U}} \ell_t(u) + \|u\| R \sqrt{\frac{1}{b R^2} + \log\det(A_T)}\, \sqrt{\sum_{t \in \mathcal{M} \cup \mathcal{U}} \Big( \min(1, b\chi_t) - \frac{b\, m_t (2 r_t - m_t)}{r_t + \chi_t} \Big)}.$$
This algorithm interpolates between a second order algorithm with adaptive second order information, like AROW, and one with fixed second order information; even the bound lies between these two worlds. In particular, the matrix $A_t$ is updated only if $\chi_t \ge \frac{1}{b}$, preventing its eigenvalues from growing too much, as they can in AROW/SOP. We thus call this algorithm NAROW, since it is a new adaptive algorithm which narrows the range of possible eigenvalues of the matrix $A_t$. We illustrate its properties empirically in the next section.
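The claimed effect of NAROW's choice of $r_t$ can be checked numerically: the term $\frac{\chi_t r_t}{r_t + \chi_t}$ is capped at $\frac{1}{b}$ whenever $\chi_t \ge \frac{1}{b}$, and equals $\chi_t$ when $r_t = +\infty$, so the effective quantity is $\min(\frac{1}{b}, \chi_t)$. A small sketch (our own code, with arbitrary test values):

```python
import numpy as np

def narow_r(chi, b):
    """NAROW's per-round regularization r_t, as described in the text:
    r_t = chi / (b*chi - 1) when chi >= 1/b, else +infinity (no matrix update)."""
    return chi / (b * chi - 1.0) if chi >= 1.0 / b else np.inf

b = 4.0
for chi in [0.1, 0.5, 3.0, 10.0]:
    r = narow_r(chi, b)
    # chi * r / (r + chi), taking the limit r -> infinity when r is infinite
    effective = chi * r / (r + chi) if np.isfinite(r) else chi
    assert np.isclose(effective, min(1.0 / b, chi))
```

This is exactly the eigenvalue-narrowing behavior the name NAROW refers to: large $\chi_t$ can no longer inflate the update of $A_t$ beyond $1/b$.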
Figure 3: Top: Four sequences used for training; the colors represent the ordering in the sequence from blue via yellow to red. Middle: cumulative number of mistakes of four algorithms on data with no label noise. Bottom: results when training on data with 10% label noise. (The middle and bottom rows plot the cumulative number of mistakes against the number of examples for PA, AROW, NAROW and AdaGrad.)

6 Experiments

We illustrate the characteristics of our algorithm NAROW using synthetic data generated in a manner similar to previous work [4]. We repeat its properties here for completeness. We generated 5,000 points in $\mathbb{R}^{20}$, where the first two coordinates were drawn from a 45° rotated Gaussian distribution with standard deviations 1 and 10.
The remaining 18 coordinates were drawn from independent Gaussian distributions $\mathcal{N}(0, 8.5)$. Each point's label depended on the first two coordinates using a separator parallel to the long axis of the ellipsoid, yielding a linearly separable set. Finally, we ordered the training set in four different ways: from easy examples to hard examples (measured by the signed distance to the separating hyperplane), from hard examples to easy examples, by the signed value of the first feature, and by the signed value of the third (noisy) feature, that is by $x_i \times y$ for $i = 1$ and $i = 3$, respectively. An illustration of these orderings appears in the top row of Fig. 3, where the colors code the ordering of the points from blue via yellow to red (last points). We evaluated four algorithms: version I of the passive-aggressive (PA-I) algorithm [3], AROW [5], AdaGrad [9] and NAROW. All algorithms except AdaGrad have one parameter to be tuned, while AdaGrad has two. These parameters were chosen on a single random set, and the plots summarize the results averaged over 100 repetitions. The second row of Fig. 3 shows the cumulative number of mistakes, and the third row shows the same quantity when 10% artificial label noise was added. (Mistakes are counted using the noise-free labels.) Focusing on the left plot, we observe that all the second order algorithms outperform the single first order algorithm, PA-I. All algorithms make few mistakes when receiving the first half of the data, the easy examples. Then all algorithms start to make more mistakes: PA-I the most, then AdaGrad with NAROW closely following, and AROW the least. In other words, AROW was able to converge fastest to the target separating hyperplane using just "easy" examples far from the separating hyperplane, followed by NAROW and AdaGrad, with PA-I being the worst in this respect.
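The data generation above can be sketched as follows. This is our reconstruction under stated assumptions: the function name is ours, the rotation and labeling follow the description, and since the text writes $\mathcal{N}(0, 8.5)$ without saying whether 8.5 is the variance or the standard deviation, we treat it as the standard deviation here:

```python
import numpy as np

def make_ellipse_data(n=5000, seed=0):
    """Synthetic set in the spirit of Sec. 6 (assumed details, hypothetical helper):
    first two coordinates from a 45-degree rotated Gaussian with std 10 and 1,
    label given by a separator parallel to the long axis, plus 18 noisy coords."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 2)) * np.array([10.0, 1.0])   # long axis first
    c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
    R = np.array([[c, -s], [s, c]])                           # 45-degree rotation
    xy = z @ R.T
    noise = rng.normal(0.0, 8.5, size=(n, 18))                # 8.5 taken as std
    X = np.hstack([xy, noise])
    # separator parallel to the long axis: sign of the short-axis coordinate
    y = np.sign(z[:, 1])
    y[y == 0] = 1.0
    return X, y
```

By construction the set is linearly separable by the rotated short-axis direction, while the 18 extra coordinates carry no label information.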
The second plot from the left shows the results when the examples are ordered from hard to easy. All algorithms follow a general trend of making mistakes at a linear rate and then stop making mistakes once the data becomes easy and there are many possible classifiers that can predict correctly. Clearly, AROW and NAROW stop making mistakes first, then AdaGrad, and PA-I last. A similar trend can be found in the noisy dataset, with each algorithm making relatively more mistakes. The third and fourth columns tell a similar story: the plots in the third column summarize results when the instances are ordered using the first feature (which is informative together with the second), and the plots in the fourth column summarize results when the instances are ordered using the third, uninformative feature. In both cases, no algorithm makes many mistakes in the beginning; then at some point, close to the middle of the input sequence, they all start making many mistakes for a while, and then they converge. In terms of total performance, PA-I makes the most mistakes, then AdaGrad, AROW and NAROW. However, NAROW starts to make many mistakes before the other algorithms and takes more examples to converge before it stops making mistakes. This phenomenon is further shown in the bottom plots, where label noise is injected. We hypothesize that this behavior is due to the fact that NAROW does not let the eigenvalues of the matrix $A$ grow unbounded. Since its inverse is proportional to the effective learning rate, this means NAROW does not allow the learning rate to drop too low, as opposed to AROW and, to some extent, AdaGrad.

7 Conclusion

We presented a framework for online convex classification and specialized it for particular losses, such as the hinge loss. This general tool allows us to design theoretically motivated online classification algorithms and to prove their relative mistake bounds. In particular, it supports the analysis of aggressive updates.
Our framework also provided a previously missing bound for AROW with diagonal matrices. We have shown its utility by proving better bounds for known online algorithms and by proposing a new algorithm, called NAROW. This is a hybrid between adaptive second order algorithms, like AROW and SOP, and a static second order one. We have validated it on synthetic datasets, showing its robustness to malicious orderings of the sample, and compared it with other state-of-the-art algorithms. Future work will focus on exploring the new possibilities offered by our framework and on testing NAROW on real world data.

Acknowledgments

We thank Nicolò Cesa-Bianchi for his helpful comments. Francesco Orabona was sponsored by the PASCAL2 NoE under EC grant no. 216886. Koby Crammer is a Horev Fellow, supported by the Taub Foundations. This work was also supported by the German-Israeli Foundation grant GIF-2209-1912.

A Appendix

Proof of Lemma 1. Denote by $f_t^*$ the Fenchel dual of $f_t$, and let $\Delta_t = f_t^*(\theta_{t+1}) - f_{t-1}^*(\theta_t)$. We have $\sum_{t=1}^T \Delta_t = f_T^*(\theta_{T+1}) - f_0^*(\theta_1) = f_T^*(\theta_{T+1})$. Moreover,
$$\Delta_t = f_t^*(\theta_{t+1}) - f_t^*(\theta_t) + f_t^*(\theta_t) - f_{t-1}^*(\theta_t) \le f_t^*(\theta_t) - f_{t-1}^*(\theta_t) - \eta_t z_t^\top \nabla f_t^*(\theta_t) + \frac{\eta_t^2}{2\beta_t} \|z_t\|_{f_t^*}^2,$$
where we used Theorem 6 in [13]. Moreover, using the Fenchel-Young inequality, we have
$$\frac{1}{\lambda} \sum_{t=1}^T \Delta_t = \frac{1}{\lambda} f_T^*(\theta_{T+1}) \ge u^\top \theta_{T+1} - \frac{1}{\lambda} f_T(\lambda u) = -\sum_{t=1}^T \eta_t u^\top z_t - \frac{1}{\lambda} f_T(\lambda u).$$
Hence, putting everything together, we have
$$-\sum_{t=1}^T \eta_t u^\top z_t - \frac{1}{\lambda} f_T(\lambda u) \le \frac{1}{\lambda} \sum_{t=1}^T \Delta_t \le \frac{1}{\lambda} \sum_{t=1}^T \Big( f_t^*(\theta_t) - f_{t-1}^*(\theta_t) - \eta_t w_t^\top z_t + \frac{\eta_t^2}{2\beta_t} \|z_t\|_{f_t^*}^2 \Big),$$
where we used the definition of $w_t$ in Algorithm 1.

Proof of Corollary 1. By convexity, $\ell(w_t, x_t, y_t) - \ell(u, x_t, y_t) \le z_t^\top (w_t - u)$, so setting $\lambda = 1$ in Lemma 1 gives the stated bound. The additional statement on $B$ is proved using Lemma 12 in [16].
Using it, we have that $f_t(x) \le f_{t+1}(x)$ implies $f_t^*(x) \ge f_{t+1}^*(x)$, so $B \le 0$.

Proof of Corollary 2. Lemma 1, the condition on the loss (2), and the hypothesis on $f_T$ give us
$$\sum_{t=1}^T \eta_t (1 - \ell_t(u)) \le -\sum_{t=1}^T \eta_t u^\top z_t \le \lambda f_T(u) + \frac{1}{\lambda} \sum_{t=1}^T \Big( \frac{\eta_t^2 \|z_t\|_{f_t^*}^2}{2\beta_t} + B - \eta_t z_t^\top w_t \Big).$$
Note that $\lambda$ is free, so choosing its optimal value we get the second bound.

References

[1] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order Perceptron algorithm. SIAM Journal on Computing, 34(3):640–668, 2005.
[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585, 2006.
[4] K. Crammer, M. Dredze, and F. Pereira. Exact convex confidence-weighted learning. Advances in Neural Information Processing Systems, 22, 2008.
[5] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. Advances in Neural Information Processing Systems, 23, 2009.
[6] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[8] M. Dredze, K. Crammer, and F. Pereira. Online confidence-weighted learning. Proceedings of the 25th International Conference on Machine Learning, 2008.
[9] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Technical Report 2010-24, UC Berkeley Electrical Engineering and Computer Science, 2010. Available at http://cs.berkeley.edu/~jduchi/projects/DuchiHaSi10.pdf.
[10] Y. Freund and R. E. Schapire. Large margin classification using the Perceptron algorithm. Machine Learning, pages 277–296, 1999.
[11] C. Gentile.
The robustness of the p-norm algorithms. Machine Learning, 53(3):265–299, 2003.
[12] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In Proc. of the 21st Conference on Learning Theory, 2008.
[13] S. Kakade, S. Shalev-Shwartz, and A. Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization. Technical report, TTI, 2009. Available at http://www.cs.huji.ac.il/~shais/papers/KakadeShalevTewari09.pdf.
[14] J. Kivinen, A. Smola, and R. Williamson. Online learning with kernels. IEEE Trans. on Signal Processing, 52(8):2165–2176, 2004.
[15] A. Rakhlin and A. Tewari. Lecture notes on online learning. Technical report, 2008. Available at http://www-stat.wharton.upenn.edu/~rakhlin/papers/online_learning.pdf.
[16] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University, 2007.
[17] S. Shalev-Shwartz and Y. Singer. A primal-dual perspective of online learning algorithms. Machine Learning Journal, 2007.
[18] L. Xiao. Dual averaging method for regularized stochastic learning and online optimization. In Advances in Neural Information Processing Systems 22, pages 2116–2124, 2009.
Phoneme Recognition with Large Hierarchical Reservoirs Fabian Triefenbach Azarakhsh Jalalvand Benjamin Schrauwen Jean-Pierre Martens Department of Electronics and Information Systems Ghent University Sint-Pietersnieuwstraat 41, 9000 Gent, Belgium fabian.triefenbach@elis.ugent.be Abstract Automatic speech recognition has gradually improved over the years, but the reliable recognition of unconstrained speech is still not within reach. In order to achieve a breakthrough, many research groups are now investigating new methodologies that have potential to outperform the Hidden Markov Model technology that is at the core of all present commercial systems. In this paper, it is shown that the recently introduced concept of Reservoir Computing might form the basis of such a methodology. In a limited amount of time, a reservoir system that can recognize the elementary sounds of continuous speech has been built. The system already achieves a state-of-the-art performance, and there is evidence that the margin for further improvements is still significant. 1 Introduction Thanks to a sustained world-wide effort, modern automatic speech recognition technology has now reached a level of performance that makes it suitable as an enabling technology for novel applications such as automated dictation, speech based car navigation, multimedia information retrieval, etc. Basically all state-of-the-art systems utilize Hidden Markov Models (HMMs) to compose an acoustic model that captures the relations between the acoustic signal and the phonemes, defined as the basic contrastive units of the sound system of a spoken language. The HMM theory has not changed that much over the years, and the performance growth is slow and for a large part owed to the availability of more training data and computing resources. Many researchers advocate the need for alternative learning methodologies that can supplement or even totally replace the present HMM methodology. 
In the nineties for instance, very promising results were obtained with Recurrent Neural Networks (RNNs) [1] and with hybrid systems comprising both neural networks and HMMs [2], but these systems have been more or less abandoned since then. More recently, there has been renewed interest in applying new results originating from the Machine Learning community. Two techniques, namely Deep Belief Networks (DBNs) [3, 4] and Long Short-Term Memory (LSTM) recurrent neural networks [5], have already been used with great success for phoneme recognition. In this paper we present the first (to our knowledge) phoneme recognizer that employs Reservoir Computing (RC) [6, 7, 8] as its core technology. The basic idea of Reservoir Computing is that complex classifications can be performed by means of a set of simple linear units that 'read out' the outputs of a pool of fixed (not trained) nonlinear interacting neurons. The RC concept has already been successfully applied to time series generation [6], robot navigation [9], signal classification [8], audio prediction [10] and isolated spoken digit recognition [11, 12, 13]. In this contribution we envisage an RC system that can recognize the English phonemes in continuous speech. In a short period (a couple of months) we have been able to design a hierarchical system of large reservoirs that can already compete with many state-of-the-art HMMs that emerged only after several decades of research. The rest of this paper is organized as follows: in Section 2 we describe the speech corpus we work with, in Section 3 we recall the basic principles of Reservoir Computing, in Section 4 we discuss the architecture of the reservoir system which we propose for performing Large Vocabulary Continuous Speech Recognition (LVCSR), and in Section 5 we demonstrate the potential of this architecture for phoneme recognition.
2 The speech corpus Since the main aim of this paper is to demonstrate that reservoir computing can yield a good acoustic model, we will conduct experiments on TIMIT, an internationally renowned corpus [14] that was specifically designed to support the development and evaluation of such a model. The TIMIT corpus contains 5040 English sentences spoken by 630 different speakers representing eight dialect groups. About 70% of the speakers are male, the others are female. The corpus documentation defines a training set of 462 speakers and a test set of 168 different speakers: a main test set of 144 speakers and a core test set of 24 speakers. Each speaker has uttered 10 sentences: two SA sentences which are the same for all speakers, 5 SX-sentences from a list of 450 sentences (each one thus appearing 7 times in the corpus) and 3 SI-sentences from a set of 1890 sentences (each one thus appearing only once in the corpus). To avoid a biased result, the SA sentences will be excluded from training and testing. For each utterance there is a manual acoustic-phonetic segmentation. It indicates where the phones, defined as the atomic units of the acoustic realizations of the phonemes, begin and end. There are 61 distinct phones, which, for evaluation purposes, are usually reduced to an inventory of 39 symbols, as proposed by [15]. Two types of error rates can be reported for the TIMIT corpus. One is the Classification Error Rate (CER), defined as the percentage of the time the top hypothesis of the tested acoustic model is correct. The second one is the Recognition Error Rate (RER), defined as the ratio between the number of edit operations needed to convert the recognized symbol sequence into the reference sequence, and the number of symbols in that reference sequence. The edit operations are symbol deletions, insertions and substitutions. Both classification and recognition can be performed at the phone and the phoneme level. 
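The Recognition Error Rate defined above is a normalized edit (Levenshtein) distance between the recognized and the reference symbol sequences. A small reference implementation of this metric (our own sketch, not tied to any TIMIT tooling):

```python
def recognition_error_rate(ref, hyp):
    """RER: minimum number of deletions, insertions and substitutions needed
    to turn the recognized sequence `hyp` into the reference `ref`, divided
    by the length of the reference."""
    m, n = len(ref), len(hyp)
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n] / m
```

With identical sequences the RER is 0; note that, unlike the CER, it can exceed 100% when the hypothesis contains many insertions.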
3 The basics of Reservoir Computing

In this paper, a Reservoir Computing network (see Figure 1) is an Echo State Network [6, 7, 8] consisting of a fixed dynamical system (the reservoir) composed of nonlinear, recurrently connected neurons which are left untrained, and a set of linear output nodes (read-out nodes). Each output node is trained to recognize one class (one-vs-all classification). The number of connections between and within layers can be varied from sparsely connected to fully connected. The reservoir neurons have an activation function $f(x) = \mathrm{logistic}(x)$.

Figure 1: A reservoir computing network consists of a reservoir of fixed, recurrently connected nonlinear neurons which are stimulated by the inputs, and an output layer of trainable linear units. (The figure labels the input nodes, the reservoir, the output nodes, the random input and recurrent connections, and the trained output connections.)

The RC approach avoids the back-propagation-through-time learning which can be very time consuming and which suffers from the problem of vanishing gradients [6]. Instead, it employs a simple and efficient linear regression learning of the output weights. The latter tries to minimize the mean squared error between the computed and the desired outputs at all time steps. Thanks to its recurrent connections, the reservoir can capture the long-term dynamics of the human articulatory system to perform speech sound classification. This property should give it an advantage over HMMs, which rely on the assumption that subsequent acoustical input vectors are conditionally independent. Besides the 'memory' introduced through the recurrent connections, the neurons themselves can also integrate information over time. Typical neurons that can accomplish this are Leaky Integrator Neurons (LINs) [16]. With such neurons the reservoir state at time $k+1$ can be computed as follows:
$$x[k+1] = (1-\lambda)\, x[k] + \lambda f(W_{res}\, x[k] + W_{in}\, u[k]) \qquad (1)$$
with $u[k]$ and $x[k]$ representing the inputs and the reservoir state at time $k$.
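Equation (1) translates directly into code. A minimal sketch of one leaky-integrator update step (function name and shapes are ours):

```python
import numpy as np

def reservoir_step(x, u, W_res, W_in, leak):
    """One LIN update, Eq. (1):
    x[k+1] = (1 - leak) * x[k] + leak * logistic(W_res x[k] + W_in u[k])."""
    pre = W_res @ x + W_in @ u
    return (1.0 - leak) * x + leak * (1.0 / (1.0 + np.exp(-pre)))
```

With `leak = 1` this reduces to a standard (memoryless) sigmoid network update, while `leak < 1` blends the new activation with the previous state, giving the fading memory discussed next.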
The $W$ matrices contain the input and recurrent connection weights. It is common to include a constant bias in $u[k]$. As long as the leak rate $\lambda < 1$, the integration function provides an additional fading memory of the reservoir state. To perform a classification task, the RC network computes the outputs at time $k$ by means of the following linear equation:
$$y[k] = W_{out}\, x[k] \qquad (2)$$
The reservoir state in this equation is augmented with a constant bias. If the reservoir states at the different time instants form the columns of a large state matrix $X$ and the corresponding desired outputs form the columns of a matrix $D$, the optimal $W_{out}$ emerges from the following equations:
$$W_{out} = \arg\min_W \frac{1}{N}\big( \|X W - D\|^2 + \epsilon \|W\|^2 \big) \qquad (3)$$
$$W_{out} = (X^T X + \epsilon I)^{-1} (X^T D) \qquad (4)$$
with $N$ being the number of frames. The regularization constant $\epsilon$ aims to limit the norm of the output weights (this is the so-called Tikhonov or ridge regression). For large training sets, as is common in speech processing, the matrices $X^T X$ and $X^T D$ are updated on-line in order to suppress the need for huge storage capacity. In this paper, the regularization parameter $\epsilon$ was fixed to $10^{-8}$. This regularization is equivalent to adding Gaussian noise with a variance of $10^{-8}$ to the reservoir state variables.

4 System architecture

The main objective of our research is to build an RC-based LVCSR system that can retrieve the words from a spoken utterance. The general architecture we propose for such a system is depicted in Figure 2. The preprocessing stage converts the speech waveform into a sequence of acoustic feature vectors representing the acoustic properties in subsequent speech frames. This sequence is supplied to a hierarchical system of RC networks. Each reservoir is composed of LINs which are fully connected to the inputs and to the 41 outputs. The latter represent the distinct phonemes of the language.

Figure 2: Hierarchical reservoir architecture with multiple layers.
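The read-out training of Equations (3)-(4), including the on-line accumulation of $X^T X$ and $X^T D$ mentioned above, can be sketched as follows; the class name and interface are ours, not the OGER toolkit's:

```python
import numpy as np

class RidgeReadout:
    """On-line accumulation of X^T X and X^T D for the read-out weights,
    solved with Eq. (4): W_out = (X^T X + eps I)^{-1} X^T D.
    A sketch with a constant bias appended to the state, as in the text."""

    def __init__(self, n_state, n_out, eps=1e-8):
        self.XtX = np.zeros((n_state + 1, n_state + 1))  # +1 for the bias
        self.XtD = np.zeros((n_state + 1, n_out))
        self.eps = eps

    def accumulate(self, x, d):
        """Add one frame: reservoir state x and desired output vector d."""
        xb = np.append(x, 1.0)            # augment state with constant bias
        self.XtX += np.outer(xb, xb)
        self.XtD += np.outer(xb, d)

    def solve(self):
        n = self.XtX.shape[0]
        return np.linalg.solve(self.XtX + self.eps * np.eye(n), self.XtD)
```

Only the two accumulator matrices are kept in memory, so the storage cost is independent of the number of training frames, which is what makes this practical for speech-sized corpora.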
The outputs of the last RC network are supplied to a decoder which retrieves the most likely linguistic interpretation of the speech input, given the information computed by the RC networks and some prior knowledge of the spoken language. In this paper, the decoder is a phoneme recognizer accommodating just a bigram phoneme language model. In a later stage it will be extended with other components: (1) a phonetic dictionary comprising all the words of the system's vocabulary and their common pronunciations, expressed as phoneme sequences, and (2) an n-gram language model describing the probability of each word given the preceding (n-1) words. We conjecture that the integration time of the LINs in the first reservoir should ideally be long enough to capture the co-articulations between successive phonemes emerging from the dynamical constraints of the articulatory system. On the other hand, it has to remain short enough to avoid that information pointing to the presence of a short phoneme is blurred too much by the left phonetic context. Furthermore, we argue that additional reservoirs can correct some of the errors made by the first reservoir. Indeed, such an error-correcting reservoir can guess the correct labels from its inputs and take the past phonetic context into account in an implicit way to refine the decision. This is in contrast to an HMM system, which adopts an explicit approach involving separate models for several thousands of context-dependent phonemes. In the next subsections we provide more details about the different parts of our recognizer, and we also discuss the tuning of some of its control parameters.

4.1 Preprocessing

The preprocessor utilizes the standard Mel Frequency Cepstral Coefficient (MFCC) analysis [17] encountered in most state-of-the-art LVCSR systems. The analysis is performed on 25 ms Hamming-windowed speech frames, and subsequent speech frames are shifted over 10 ms with respect to each other.
Every 10 ms a 39-dimensional feature vector is generated. It consists of 13 static parameters, namely the log-energy and the first 12 MFCC coefficients, their first order derivatives (the velocity or ∆ parameters), and their second order derivatives (the acceleration or ∆∆ parameters). In HMM systems, the training is insensitive to a linear rescaling of the individual features. In RC systems however, the input and recurrent weights are not trained but drawn from predefined statistical distributions. Consequently, rescaling the features also changes the impact of the inputs on the activations of the reservoir neurons, which makes it compulsory to employ an appropriate input scaling [8]. To establish a proper input scaling, the acoustic feature vector is split into six sub-vectors according to the dimensions (energy, cepstrum) and (static, velocity, acceleration). Then, each feature $u_i$ $(i = 1, \dots, 39)$ is normalized to $z_i = \alpha_s (u_i - \bar{u}_i)$, with $\bar{u}_i$ being the mean of $u_i$ and $s$ $(s = 1, \dots, 6)$ referring to the sub-vector (group) the feature belongs to. The aim of $\alpha_s$ is to ensure that the norm of each sub-vector is one. If the $z_i$ were supplied to the reservoir, each sub-vector would on average have the same impact on the reservoir neuron activations. Therefore, in a second stage, each $z_i$ is rescaled by $\beta_s$ to yield the reservoir inputs, with $\beta_s$ representing the relative importance of sub-vector $s$ in the reservoir neuron activations. The normalization constants $\alpha_s$ follow directly from a statistical analysis of the acoustic feature vectors.

Table 1: Different types of acoustic information in the input features and their optimal scale factors.

                        Energy features                  Cepstral features
group name         log(E)   ∆log(E)   ∆∆log(E)     c1..12   ∆c1..12   ∆∆c1..12
norm factor α       0.27     1.77      4.97         0.10     0.61      1.75
scale factor β      1.75     1.25      1.00         1.25     0.50      0.25
The factors $\beta_s$ are free parameters that were selected such that the phoneme classification error of a single reservoir system of 1000 neurons is minimized on the validation set. The obtained factors (see Table 1) confirm that the static features are more important than the velocity and the acceleration features. The proposed rescaling has the following advantages: it preserves the relative importance of the individual features within a sub-vector, it is fully defined by six scaling parameter pairs $\alpha_s \beta_s$, it takes only a minimal computational effort, and it can actually be expected to work well for any speech corpus.

4.2 Sequence decoding

The decoder in our present system performs a Viterbi search for the most likely phonemic sequence given the acoustic inputs and a bigram phoneme language model. The search is driven by a simple model for the conditional likelihood $p(y|m)$ that the reservoir output vector $y$ is observed during the acoustical realization of phoneme $m$. The model is based on the cosine similarity between $y + 1$ and a template vector $t_m = [0, \dots, 0, 1, 0, \dots, 0]$, with its nonzero element appearing at position $m$. Since the template vector is a unit vector, we compute $p(y|m)$ as
$$p(y|m) = \left( \frac{\max[0,\, \langle y+1, t_m \rangle]}{\sqrt{\langle y+1, y+1 \rangle}} \right)^{\kappa}, \qquad (5)$$
with $\langle x, y \rangle$ denoting the dot product of vectors $x$ and $y$. Due to the offset, we can ensure that the components of $y + 1$ are between 0 and 1 most of the time. The maximum operator prevents the likelihoods from occasionally becoming negative. The exponent $\kappa$ is a free parameter that will be tuned experimentally. It controls the relative importance of the acoustic model and the bigram phoneme language model.

4.3 Reservoir optimization

The training of the reservoir output nodes is based on Equations (3) and (4), and the desired phoneme labels emerge from a time-synchronized phonemic transcription. The latter was derived from the available acoustic-phonetic segmentation of TIMIT.
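Equation (5) is a one-liner in code. A direct sketch (the function name is ours), where `m` indexes the phoneme template:

```python
import numpy as np

def phoneme_likelihood(y, m, kappa=0.7):
    """Eq. (5): p(y|m) = (max[0, <y+1, t_m>] / sqrt(<y+1, y+1>))^kappa,
    where t_m is the unit template vector with a 1 at position m, so the
    inner product <y+1, t_m> is simply component m of y + 1."""
    v = y + 1.0
    num = max(0.0, v[m])
    return (num / np.sqrt(v @ v)) ** kappa
```

When the reservoir output already looks like a template (component $m$ near 0, all others near $-1$), the likelihood approaches 1; the exponent $\kappa$ then flattens or sharpens the score relative to the bigram language model.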
For all experiments reported in this paper, we have used the modular RC toolkit OGER¹ developed at Ghent University. The recurrent weights of the reservoir are not trained but randomly drawn from statistical distributions. The input weights emerge from a uniform distribution between $-U$ and $+U$, the recurrent weights from a zero-mean Gaussian distribution with variance $V$. The value of $U$ controls the relative importance of the inputs in the activation of the reservoir neurons and is often called the input scale factor (ISF). The variance $V$ directly determines the spectral radius (SR), defined as the largest absolute eigenvalue of the recurrent weight matrix. The SR describes the dynamical excitability of the reservoir [6, 8]. The SR and the ISF must be jointly optimized. To do so, we used 1000-neuron reservoirs, supplied with inputs that were normalized according to the procedure reviewed in the previous section. We found that SR = 0.4 and ISF = 0.4 yield the best performance, but for SR ∈ (0.3...0.8) and for ISF ∈ (0.2...1.0) the performance is quite stable. Another parameter that must be optimized is the leak rate, denoted $\lambda$. It determines the integration time of the neurons. If the nonlinear function is ignored and the time between frames is $T_f$, the reservoir neurons represent a first-order leaky integrator with a time constant $\tau$ that is related to $\lambda$ by $\lambda = 1 - e^{-T_f/\tau}$. As stated before, the integration time should be long enough to capture the relevant co-articulation effects and short enough to constrain the information blurring over subsequent phonemes. This is confirmed by Figure 3, which shows how the phoneme CER of a single reservoir system changes as a function of the integrator time constant.
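The weight initialization and the leak-rate relation above can be sketched as follows. This is our own reconstruction, not OGER code: the ISF is implemented as the half-width of the uniform input distribution, the recurrent matrix is rescaled after sampling so that its spectral radius equals the target SR (equivalent to choosing the variance $V$ appropriately), and the fan-in limit is a parameter:

```python
import numpy as np

def init_reservoir_weights(n, n_in, sr=0.4, isf=0.4, fan_in=50, seed=0):
    """Random reservoir weights as described in the text (a sketch):
    inputs uniform in [-ISF, ISF]; recurrent weights Gaussian, then rescaled
    so the largest absolute eigenvalue equals SR; at most `fan_in` incoming
    recurrent connections per neuron."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-isf, isf, size=(n, n_in))
    W_res = rng.standard_normal((n, n))
    for i in range(n):                       # sparsify each row to `fan_in` entries
        off = rng.choice(n, size=n - fan_in, replace=False)
        W_res[i, off] = 0.0
    radius = np.max(np.abs(np.linalg.eigvals(W_res)))
    W_res *= sr / radius                     # enforce the target spectral radius
    return W_in, W_res

def leak_rate(Tf_ms, tau_ms):
    """Leak rate of a first-order leaky integrator: lambda = 1 - exp(-Tf/tau)."""
    return 1.0 - np.exp(-Tf_ms / tau_ms)

def time_constant(Tf_ms, leak):
    """Inverse mapping: tau = -Tf / ln(1 - lambda)."""
    return -Tf_ms / np.log(1.0 - leak)
```

For the 10 ms frame shift used here and the optimal 40 ms time constant reported below, `leak_rate(10, 40)` gives a leak rate of about 0.22.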
The optimal value is 40 ms, completely in line with psychophysical data concerning the post- and pre-masking properties of the human auditory system. In [18] for instance, it is shown that these properties can be explained by means of a second order low-pass filter with real poles corresponding to time constants of 8 and 40 ms respectively (it is the largest constant that determines the integration time here).

Figure 3: The phoneme Classification Error Rate (CER), in %, as a function of the integration time (in ms).

¹http://reservoir-computing.org/organic/engine

It has been reported [19] that one can easily reduce the number of recurrent connections in an RC network without much affecting its performance. We have found that limiting the number of connections to 50 per neuron does not harm the performance while it dramatically reduces the required computational resources (memory and computation time).

5 Experiments

Since our ultimate goal is to perform LVCSR, and since LVCSR systems work with a dictionary of phonemic transcriptions, we have worked with phonemes rather than with phones. As in [20] we consider the 41 phoneme symbols one encounters in a typical phonetic dictionary like COMLEX [21]. The 41 symbols are very similar to the 39 symbols of the reduced phone set proposed by [15], but with one major difference, namely that a phoneme string does not contain any silences referring to closures of plosive sounds (e.g. the closure /kcl/ of phoneme /k/). By ignoring confusions between /sh/ and /zh/ and between /ao/ and /aa/ we finally measure phoneme error rates for 39 classes, in order to make them more comparable with the phone error rates for 39 classes reported in other papers. Nevertheless, we will see later that phoneme recognition is harder to accomplish than phone recognition. This is because the closures are easy to recognize and contribute to a low phone error rate.
In phoneme recognition there are no closures anymore. In what follows, all parameter tuning is performed on the TIMIT training set (divided into independent training and development sets), and all error rates are measured on the main test set. The bigram phoneme language model used for the sequence decoding step is created from the phonemic transcriptions of the training utterances.

5.1 Single reservoir systems

In a first experiment we assess the performance of a single reservoir system as a function of the reservoir size, defined as the number of neurons in the reservoir. The phoneme targets during training are derived from the manual acoustic-phonetic segmentation, as explained in Section 4.3. We increase the number of neurons from 500 to 20000. The corresponding number of trainable parameters then grows from 20K to 800K. The latter figure corresponds to the number of trainable parameters in an HMM system comprising 1200 independent Gaussian mixture distributions of 8 mixtures each. Figure 4 shows that the phoneme CER on the training set drops by about 4% every time the reservoir size is doubled. The phoneme CER on the test set shows a similar trend, but the slope decreases from 4% at small reservoir sizes to 2% at 20000 neurons (nodes). At that point the CER on the test set is 30.6% and the corresponding RER (not shown) is 31.4%. The difference between the test and the training error is about 8%. Although the figures show that an even larger reservoir would perform better, we stopped at 20000 nodes because the storage and the inversion of the large matrix $X^T X$ become problematic. Before investigating even larger reservoirs, we first want to verify our hypothesis that adding a second (equally large) layer can lead to a better performance.

Figure 4: The Classification Error Rate (CER) at the phoneme level for the training and test set as a function of the reservoir size.
5.2 Multilayer reservoir systems

Usually, a single reservoir system produces a number of competing outputs at all time steps, and this hampers the identification of the correct phoneme sequence. The left panel of Figure 5 shows the outputs of a reservoir of 8000 nodes in a time interval of 350 ms. Our hypothesis was that the observed confusions are not arbitrary, and that a second reservoir operating on the outputs of the first reservoir system may be able to discover regularities in the error patterns. And indeed, the outputs of this second reservoir happen to exhibit a larger margin between the winner and the competition, as illustrated in the right panel of Figure 5.

Figure 5: The outputs of the first (left) and the second (right) layer of a two-layer system composed of two 8000 node reservoirs. The shown interval is 350 ms long.

In Figure 6, we have plotted the phoneme CER and RER as a function of the number of reservoirs (layers) and the size of these reservoirs. We have thus far only tested systems with equally large reservoirs at every layer. For the exponent κ, we have just tried κ = 0.5, 0.7 and 1, and we have selected the value yielding the best balance between insertions and deletions.

Figure 6: The phoneme CERs and RERs for different combinations of numbers of nodes and layers.

For all reservoir sizes, the second layer induces a significant improvement of the CER by 3-4% absolute. The corresponding improvements of the recognition error rates are slightly smaller but still significant. The best RER obtained with a two-layer system comprising reservoirs of 20000 nodes is 29.1%. Both plots demonstrate that a third layer does not cause any additional gain when the reservoir size is large enough. However, this might also be caused by the fact that we did not systematically optimize the parameters (SR, leak rate, regularization parameter, etc.) for each large system configuration we investigated.
We just chose sensible values retrieved from tests with smaller systems.

5.3 Comparison with the state-of-the-art

In Table 2 we have listed some published results obtained on TIMIT with state-of-the-art HMM systems and other recently proposed research systems. We have also included the results of our own experiments conducted with SPRAAK [22] (http://www.spraak.org), a recently launched HMM-based speech recognition toolkit. In order to provide an easier comparison, we also built a phone recognition system based on the same design parameters that were optimized for phoneme recognition. All phone RERs are calculated on the core test set, while the phoneme RERs were measured on the main test set. We do this because most figures in speech community papers apply to these experimental settings. Our final results were obtained with systems that were trained on the full training data (including the development set). Before discussing our figures in detail we emphasize that the two figures for SPRAAK confirm our earlier statement that phoneme recognition is harder than phone recognition.

Table 2: Phoneme and Phone Recognition Error Rates (in %) obtained with state-of-the-art systems. Phone RERs use the core test set; phoneme RERs use the main test set.

  System description                 | Phone RER | Phoneme RER
  Reservoir Computing (this paper)   |   26.8    |    29.1
  CD-HMM (SPRAAK Toolkit)            |   25.6    |    28.1
  CD-HMM [20]                        |   28.7    |     -
  Recurrent Neural Networks [1]      |   26.1    |     -
  LSTM+CTC [5]                       |  (24.6)   |     -
  Bayesian Triphone HMM [23]         |   24.4    |     -
  Deep Belief Networks [4]           |   23.0    |     -
  Hierarchical HMM + MLPs [20]       |  (23.4)   |     -

Given the fact that SPRAAK seems to achieve state-of-the-art performance, it is fair to conclude from the figures in Table 2 that our present system is already competitive with other modern HMM systems. It is also fair to say that better systems do exist, like the Deep Belief Network system [4] and the hierarchical HMM system with multiple Multi-Layer Perceptrons (MLPs) on top of an HMM system [20].
Note however that the latter system also employs complex temporal patterns (TRAPs) as input features. These patterns are much more powerful than the simple MFCC vectors used in all the other systems we cite. Furthermore, the LSTM+CTC [5] results too must be considered with some care, since they were obtained with a bidirectional system. Such a system is impractical in many applications since it has to wait until the end of a speech utterance to start the recognition. We therefore put the results of the latter two systems between brackets in Table 2. To conclude this discussion, we also want to mention some training and execution times. The training of our two-layer 20K reservoir systems takes about 100 hours on a single core 3.0 GHz PC, while recognition takes about two seconds of decoding per second of speech.

6 Conclusion and future work

In this paper we showed for the first time that good phoneme recognition on TIMIT can be achieved with a system based on Reservoir Computing. We demonstrated that in order to achieve this, we need large reservoirs (at least 20000 nodes) which are configured in a hierarchical way. By stacking two reservoir layers, we were able to achieve error rates that are competitive with what is attainable using state-of-the-art HMM technology. Our results support the idea that reservoirs can exploit long-term dynamic properties of the articulatory system in continuous speech recognition. It is acknowledged though that other techniques such as Deep Belief Networks still outperform our present system, but the plots and the discussions presented in the course of this paper clearly show a significant margin for further improvement of our system in the near future. To achieve this improvement we will investigate even larger reservoirs with 50000 and more nodes and we will more thoroughly optimize the parameters of the different reservoirs.
Furthermore, we will explore the use of sparsely connected outputs and multi-frame inputs in combination with PCA-based dimensionality reduction. Finally, we will develop an embedded training scheme that permits the training of reservoirs on much larger speech corpora for which only orthographic representations are distributed together with the speech data.

Acknowledgement

The work presented in this paper is funded by the EC FP7 project ORGANIC (FP7-231267).

References

[1] A. Robinson. An application of recurrent neural nets to phone probability estimation. IEEE Trans. on Neural Networks, 5:298–305, 1994.
[2] H. Bourlard and N. Morgan. Continuous speech recognition by connectionist statistical methods. IEEE Trans. on Neural Networks, 4:893–909, 1993.
[3] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.
[4] A. Mohamed, G. Dahl, and G. Hinton. Deep belief networks for phone recognition. In NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, 2009.
[5] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18:602–610, 2005.
[6] H. Jaeger. Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the echo state network approach (48 pp). Technical report, German National Research Center for Information Technology, 2002.
[7] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002.
[8] D. Verstraeten, B. Schrauwen, M. D'Haene, and D. Stroobandt. An experimental unification of reservoir computing methods. Neural Networks, 20:391–403, 2007.
[9] E. Antonelo, B. Schrauwen, and J. Van Campenhout. Generative modeling of autonomous robots and their environments using reservoir computing. Neural Processing Letters, 26(3):233–249, 2007.
[10] G. Holzmann and H. Hauser. Echo state networks with filter neurons and a delay & sum readout. Neural Networks, 23:244–256, 2010.
[11] D. Verstraeten, B. Schrauwen, and D. Stroobandt. Isolated word recognition using a liquid state machine. In Proceedings of the 13th European Symposium on Artificial Neural Networks (ESANN), pages 435–440, 2005.
[12] M. Skowronski and J. Harris. Automatic speech recognition using a predictive echo state network classifier. Neural Networks, 20(3):414–423, 2007.
[13] B. Schrauwen. A hierarchy of recurrent networks for speech recognition. In NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, 2009.
[14] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, and N. Dahlgren. The DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. Technical report, National Institute of Standards and Technology, 1993.
[15] K.F. Lee and H-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Trans. on Acoustics, Speech and Signal Processing, 37:1641–1648, 1989.
[16] H. Jaeger, M. Lukosevicius, D. Popovici, and U. Siewert. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks, 20:335–352, 2007.
[17] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. on Acoustics, Speech and Signal Processing, 28:357–366, 1980.
[18] L. Van Immerseel and J.P. Martens. Pitch and voiced/unvoiced determination with an auditory model. Journal of the Acoustical Society of America, 91(6):3511–3526, June 1992.
[19] B. Schrauwen, L. Buesing, and R. Legenstein. Computational power and the order-chaos phase transition in reservoir computing. In Proc. Advances in Neural Information Processing Systems (NIPS), volume 21, pages 1425–1432, 2008.
[20] P. Schwarz, P. Matejka, and J. Cernocky. Hierarchical structures of neural networks for phoneme recognition. In Proc.
International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 325–328, 2006.
[21] Linguistic Data Consortium. COMLEX English pronunciation lexicon, 2009.
[22] K. Demuynck, J. Roelens, D. Van Compernolle, and P. Wambacq. SPRAAK: An open source speech recognition and automatic annotation kit. In Proc. Interspeech 2008, page 495, 2008.
[23] J. Ming and F.J. Smith. Improved phone recognition using Bayesian triphone models. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), volume 1, pages 409–412, 1998.
2010
Learning Multiple Tasks using Manifold Regularization

Arvind Agarwal*  Hal Daumé III*
Department of Computer Science
University of Maryland
College Park, MD 20740
arvinda@cs.umd.edu  hal@umiacs.umd.edu

Samuel Gerber
Scientific Computing and Imaging Institute
University of Utah
Salt Lake City, Utah 84112
sgerber@cs.utah.edu

Abstract

We present a novel method for multitask learning (MTL) based on manifold regularization: we assume that all task parameters lie on a manifold. This generalizes a common assumption made in the existing literature: that task parameters share a common linear subspace. The proposed method uses the projection distance from the manifold to regularize the task parameters. The manifold structure and the task parameters are learned using an alternating optimization framework. When the manifold structure is fixed, our method decomposes across tasks, which can then be learnt independently. An approximation of the manifold regularization scheme is presented that preserves the convexity of the single task learning problem, and makes the proposed MTL framework efficient and easy to implement. We show the efficacy of our method on several datasets.

1 Introduction

Recently, it has been shown that learning multiple tasks together helps learning [8, 19, 9] when the tasks are related and one is able to use an appropriate notion of task relatedness. There are many ways by which one can enforce the relatedness of tasks. One way to do so is to assume that two tasks are related if their parameters are "close". This notion of relatedness is usually incorporated in the form of a regularizer [4, 16, 13] or a prior [15, 22, 21]. In this work we present a novel approach for multitask learning (MTL) that considers a notion of relatedness based on ideas from manifold regularization^1. Our approach is based on the assumption that the parameters of related tasks cannot vary arbitrarily but rather lie on a low-dimensional manifold.
A similar idea underlies standard manifold learning problems: the data does not change arbitrarily, but instead follows a manifold structure. Our assumption is also a generalization of the assumption made in [1], which assumes that all tasks share a linear subspace, and whose learning framework consists of learning this linear subspace and the task parameters simultaneously. We remove the linearity constraint from this problem, and assume that the tasks instead share a non-linear subspace. In our proposed approach we learn the task parameters and the task-manifold alternately, learning one while keeping the other fixed, similar to [4]. First, we learn all task parameters using a single task learning (STL) method, and then use these task parameters to learn the initial task-manifold. The task-manifold is then used to relearn the task parameters using manifold regularization. Learning of the manifold and the task parameters is repeated until convergence. We emphasize that when we learn the task parameters (keeping the manifold structure fixed), the MTL framework decomposes across the tasks, which can be learned independently using a standard method such as SVMs. Note that unlike most manifold learning algorithms, our framework learns an explicit representation of the manifold and naturally extends to new tasks: whenever a new task arrives, one can simply use the existing manifold to learn the parameters of the new task. For a new task, our MTL model is therefore very efficient, as it does not require relearning all tasks. As shown later in the examples, our method is simple, and can be implemented with only a small change to existing STL algorithms.

* This work was done at the School of Computing, University of Utah, Salt Lake City, Utah.
1 This is not to be confused with the manifold regularization presented in [7]: we use the projection distance for regularization, while Belkin et al. use the graph structure (graph Laplacian).
Given a black box for manifold learning, STL algorithms can be adapted to the proposed MTL setting. To make the proposed framework even simpler, we provide an approximation which preserves the convexity of the STL problem. We emphasize that this approximation works very well in practice; all the experimental results use this approximation.

2 Related Work

In MTL, task relatedness is a fundamental question, and models differ in the ways they answer it. Like our method, most of the existing methods first assume a structure that defines the task relatedness, and then incorporate this structure into the MTL framework in the form of a regularizer [4, 16, 13]. One plausible approach is to assume that all task parameters lie in a subspace [1]. The tasks are learned by forcing the parameters to lie in a common linear subspace, thereby exploiting the assumed relatedness in the model. Argyriou et al. [4] later generalized this work by using a function F to model the shared structure. In this work, the relatedness structure is enforced by applying a function F to a covariance matrix D, which yields a regularization of the form tr(F(D) W W^T) on the parameters W. Here, the function F can model different kinds of relatedness structures among tasks, including the linear subspace structure [1]. Given a function F, this framework learns both the relatedness matrix D and the task parameters W. One of the limitations of this approach is the dependency on F, which has to be provided externally. Informally, F introduces the non-linearity, and it is not clear what the right choice of F is. Our framework generalizes the linear framework by introducing the non-linearity through the manifold structure learned automatically from the data, and thus avoids the need for any external function. Argyriou et al.
extend their work [4] in [2, 3], where non-linearity is introduced by considering a kernel function on the input data and then learning the linear subspace in the Hilbert space. This method is in spirit very similar to ours, except that we learn an explicit manifold; our method is therefore naturally extensible to new tasks. Another work that models task relatedness in the form of proximity of the parameters is [16], which assumes that the task parameter wt for each task is close to some common parameter w0 with some variance vt. These vt and w0 are learned by minimizing the Euclidean norm, which is again equivalent to working in a linear space. This idea is later generalized by [13], where tasks are clustered and regularized with respect to the cluster they belong to; the task parameters are learned under this cluster assumption by minimizing a combination of different penalty functions. There is another line of work [10] where task relatedness is modeled in terms of a matrix B, which needs to be provided externally. There is also a large body of work on multitask learning that finds the shared structure in the tasks using Bayesian inference [23, 24, 9], which is in spirit similar to the above approaches, but done in a Bayesian way. It is to be noted that all of the above methods either work in a linear setting or require an external function/matrix to enforce the non-linearity. In our method, we work in the non-linear setting without using any external function.

3 Multitask Learning using Manifold

In this section we describe the proposed MTL framework. As mentioned earlier, our framework assumes that the task parameters lie on a manifold, which goes a step beyond the assumption made in [1], i.e., that the task parameters lie on a linear subspace or share a common set of features.
Similar to the linear subspace algorithm [1], which learns the task parameters (and the shared subspace) by regularizing the STL framework with the orthogonal projections of the task parameters onto the subspace, we propose to learn the task parameters (and the non-linear subspace, i.e., the task-manifold) by regularizing the STL with the projection distance of the task parameters from this task-manifold (see Figure 1).

Figure 1: Projection of the estimated parameter w of the task at hand onto the manifold learned from all task parameters. w^* is the optimal parameter.

We begin with some notation. Let T be the total number of tasks, and for each task t, let X_t = \{x_1, \ldots, x_{n_t}\} be the set of examples and Y_t = \{y_1, \ldots, y_{n_t}\} be the corresponding labels. Each example x_i \in R^d is a d-dimensional vector, and y_i is a label: y_i \in \{+1, -1\} in the case of a classification problem, and a real value y_i \in R in the case of a regression problem. Here n_t is the number of examples in task t. For simplicity of notation, we assume that all tasks have the same number of examples, i.e., n_1 = \ldots = n_T = n, though in practice they may vary. Now for each task t, let \theta_t be the parameter vector, referred to as the task parameter.

Given the example-label pair set (X_t, Y_t) for task t, a learning problem is to find a function f_t that, for any future example x, predicts the correct value of y, i.e., y = f_t(x). A standard way to learn this function is to minimize the loss between the value predicted by the function and the true value. Let L be such a loss function, let k be a kernel defined on the input examples, k : R^d \times R^d \to R, and let H_k be the reproducing kernel Hilbert space (RKHS) associated with the kernel k. Restricting f_t to functions in the RKHS and denoting it by f(x; \theta_t) = \langle \theta_t, \phi(x) \rangle, single task learning solves the following optimization problem:

\theta_t^* = \arg\min_{\theta_t} \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{H_k}^2,    (1)

where \lambda is a regularization parameter.
Note that the kernel is assumed to be common to all tasks and hence does not carry the subscript t. This is equivalent to saying that all tasks belong to the same RKHS. One can now extend the above STL framework to the multitask setting. In MTL, tasks are related, and this notion of relatedness is incorporated through a regularizer. Let u be such a regularizer; then MTL solves:

(\theta_1^*, \ldots, \theta_T^*) = \arg\min_{(\theta_1, \ldots, \theta_T)} \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{H_k}^2 \Big) + \gamma \, u(\theta_1, \ldots, \theta_T),    (2)

where \gamma is a trade-off parameter, similar to \lambda, that trades off the amount of MTL regularization. As mentioned in Section 2, there are many ways in which this regularizer can be implemented. For example, under the assumption that the task parameters are close to a common task \theta_0, the regularizer would just be \|\theta_t - \theta_0\|^2. In our approach, we split the regularizer u(\theta_1, \ldots, \theta_T) into T different regularizers u(\theta_t, M) such that u(\theta_t, M) regularizes the parameter of task t while considering the effect of the other tasks through the manifold M. The optimization problem under such a regularizer can be written as:

(\theta_1^*, \ldots, \theta_T^*) = \arg\min_{(\theta_1, \ldots, \theta_T), M} \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{H_k}^2 + \gamma \, u(\theta_t, M) \Big).    (3)

Note that the optimization is now performed over both the task parameters and the manifold. If the manifold structure M is fixed, then the above optimization problem decomposes into T independent optimization problems. In our approach, the regularizer depends on the structure of the manifold constructed from the task parameters \{\theta_1, \ldots, \theta_T\}. Let M be such a manifold, and let P_M(\theta_t) be the projection distance of \theta_t from the manifold. One can use this projection distance as the regularizer u(\theta_t, M) in the cost function, since all task parameters are assumed to lie on the task-manifold M. The cost function is now given by:

C_P = \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|f_t\|_{H_k}^2 + \gamma \, P_M(\theta_t) \Big).    (4)

Since the manifold structure is not known, the cost function (4) needs to be optimized simultaneously for the task parameters (\theta_1, \ldots, \theta_T) and for the task-manifold M. Optimizing for \theta and M jointly is a hard optimization problem, therefore we resort to alternating optimization. We first fix the task parameters and learn the manifold. Next, we fix the manifold M and learn the task parameters by minimizing (4). In order to minimize (4) for the task parameters, we need an expression for P_M, i.e., an expression for computing the projection distance of the task parameters from the manifold. More precisely, we only need the gradient of P_M, not the function itself, since we will solve this problem using gradient descent.

3.1 Manifold Regularization

Our approach relies heavily on the capability to learn a manifold and to compute the gradient of the projection distance onto the manifold. Much recent work in manifold learning has focused on uncovering low-dimensional representations [18, 6, 17, 20] of the data. These approaches do not provide the tools crucial to this work, i.e., the gradient of the projection distance. Recent work [11] addresses this issue and proposes a manifold learning algorithm based on the idea of principal surfaces [12]. It explicitly represents the manifold in the ambient space as a parametric surface, which can be used to compute the projection distance and its gradient. For the sake of completeness, we briefly describe this method (for details refer to [11]). The method is based on minimizing the expected reconstruction error E[\|g(h(\theta)) - \theta\|^2] of the task parameter \theta onto the manifold M. Here h is the mapping from the manifold to the lower-dimensional Euclidean space and g is the mapping from the lower-dimensional Euclidean space back to the manifold. Thus, the composition g \circ h maps a point belonging to the manifold back onto the manifold, using the mapping to the Euclidean space as an intermediate step. Note that \theta and g(h(\theta)) are usually not the same.
These mappings g and h can be formulated in terms of kernel regressions over the data points:

h(\theta) = \sum_{j=1}^{T} \frac{K_\theta(\theta - \theta_j)}{\sum_{l=1}^{T} K_\theta(\theta - \theta_l)} \, z_j    (5)

with K_\theta a kernel function and z_j a set of parameters to be estimated in the manifold learning process. Similarly,

g(r) = \sum_{j=1}^{T} \frac{K_r(r - h(\theta_j))}{\sum_{l=1}^{T} K_r(r - h(\theta_l))} \, \theta_j    (6)

again with K_r a kernel function. Note that in the limit, the kernel regression converges to the conditional expectation g(r) = E[(\theta_1, \ldots, \theta_T) \mid r], where the expectation is taken with respect to the probability distribution p(\theta) from which the parameters are assumed to be sampled. If h is an orthogonal projection, this yields a principal surface [12], i.e., informally, g passes through the middle of the density. In [11] it is shown that in the limit, as the number of samples to learn from increases, h indeed yields an orthogonal projection onto g. Under this orthogonal projection, the estimation of the parameters z_i, i.e., the manifold learning, can be done through gradient descent on the sample mean of the projection distance, \frac{1}{T} \sum_{i=1}^{T} \|g(h(\theta_i)) - \theta_i\|^2, using a global manifold learning approach for initialization. Once h is estimated, the projection distance is immediate:

P_M = \|\theta - g(h(\theta))\|^2 = \|\theta - \theta^M\|^2.    (7)

For the optimization of (4) we need the gradient of the projection distance, which is

\frac{dP_M(\theta)}{d\theta} = 2\,(\theta - g(h(\theta))) \Big( I - \frac{dg(r)}{dr}\Big|_{r=h(\theta)} \frac{dh(\theta)}{d\theta} \Big).    (8)

Computing the projection distance for a single task's parameters is O(n) due to the definition of h and g as kernel regressions, which show up in the projection distance gradient through \frac{dg(r)}{dr}|_{r=h(\theta)} and \frac{dh(\theta)}{d\theta}. This is fairly expensive; we therefore propose an approximation, justified by the convergence of h to an orthogonal projection, to the exact projection gradient.
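To make the kernel-regression construction concrete, the following is a small numerical sketch of the mappings h and g of eqs. (5)-(6) and the projection distance of eq. (7). It is an illustration under assumptions: Gaussian kernels are chosen for K_\theta and K_r, the bandwidths and toy data are made up, and the coordinates z_j are taken as given rather than learned by gradient descent as in [11].

```python
import numpy as np

def gauss(d2, bw):
    """Gaussian kernel evaluated on squared distances (an assumed choice of K)."""
    return np.exp(-d2 / (2.0 * bw ** 2))

def make_maps(thetas, zs, bw_theta=1.0, bw_r=1.0):
    """Kernel-regression maps h (eq. 5) and g (eq. 6).

    thetas : (T, d) task parameters; zs : (T, m) low-dimensional
    coordinates z_j, here fixed instead of estimated.
    """
    def h(theta):  # parameter space -> low-dimensional coordinates
        d2 = np.sum((thetas - theta) ** 2, axis=1)
        w = gauss(d2, bw_theta)
        return (w / w.sum()) @ zs

    r_train = np.array([h(t) for t in thetas])  # h(theta_j) for all tasks

    def g(r):  # low-dimensional coordinates -> point on the manifold
        d2 = np.sum((r_train - r) ** 2, axis=1)
        w = gauss(d2, bw_r)
        return (w / w.sum()) @ thetas

    return h, g

def proj_dist(theta, h, g):
    """Squared projection distance P_M(theta) = ||theta - g(h(theta))||^2 (eq. 7)."""
    diff = theta - g(h(theta))
    return float(diff @ diff)

# Toy usage: task parameters on a line in R^3. A point pushed far off the
# line has a much larger projection distance than an on-manifold point.
thetas = np.linspace(0, 1, 10)[:, None] * np.array([[1.0, 2.0, 0.0]])
zs = np.linspace(0, 1, 10)[:, None]
h, g = make_maps(thetas, zs, bw_theta=0.5, bw_r=0.5)
on_d = proj_dist(thetas[5], h, g)
off_d = proj_dist(thetas[5] + np.array([0.0, 0.0, 5.0]), h, g)
```

Because g returns a convex combination of the stored task parameters, g(h(θ)) always lies near the data; the residual θ − g(h(θ)) therefore measures how far θ sits from the learned surface.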
For an orthogonal projection the term \frac{dg(r)}{dr}|_{r=h(\theta)} \frac{dh(\theta)}{d\theta} vanishes (\frac{dh(\theta)}{d\theta} is orthogonal to the tangent plane \frac{dg(r)}{dr}|_{r=h(\theta)} of the projected point) and the gradient simplifies to

\frac{dP_M(\theta)}{d\theta} = 2\,(\theta - g(h(\theta))),    (9)

which is exactly the gradient of (7) assuming that the projection of \theta onto the manifold is fixed. A further advantage of this approximation, besides a computational speedup, is that no non-convexities are introduced by the regularization.

Algorithm 1: MTL using Manifold Regularization
  Input: \{x_i, y_i\}_{i=1}^{n} for t = 1, \ldots, T.
  Output: \theta_1, \ldots, \theta_T.
  Initialize: learn \theta_1, \ldots, \theta_T independently.
  Learn the task-manifold using \theta_1, \ldots, \theta_T.
  while it < numIter do
    for t = 1 to T do
      Learn \theta_t using (4) with (7) or (10).
    end for
    Relearn the task-manifold using \theta_1, \ldots, \theta_T.
  end while

The proposed manifold regularization approximation allows one to use any STL method without much change to the optimization of the STL problem. The proposed method for MTL pipelines manifold learning with the STL. Using (7), one can write (4) as:

C_P = \sum_{t=1}^{T} \Big( \sum_{x \in X_t} L(f(x; \theta_t), y) + \lambda \|\theta_t\|^2 + \gamma \|\theta_t - \tilde{\theta}_t^M\|^2 \Big),    (10)

where \tilde{\theta}_t^M is the fixed projection of \theta_t on the manifold. Note that in the proposed approximation, \tilde{\theta}_t^M is held fixed while computing the gradient, i.e., one does not have to account for the projection of the point moving on the manifold during the gradient step. Although in the following example we solve (10) for the linear kernel, the extension to non-linear kernels is straightforward under the proposed approximation. The approximation allows one to treat the manifold regularizer similarly to the RKHS regularizer \|\theta_t\|^2 and to solve the generalized learning problem (4) with non-linear kernels. Note that \|\theta_t - \tilde{\theta}_t^M\|^2 is a monotonic function of \theta, so it does not violate the representer theorem.

3.2 Example: Linear Regression

In this section, we solve the optimization problem (4) for the linear regression model.
This is the model we have used in all of our experiments. In the learning framework (4), the loss function is L(x, y, w_t) = (y - \langle w_t, x \rangle)^2 with the linear kernel k(x, y) = \langle x, y \rangle. We have changed the notation for the parameters from \theta to w to distinguish linear regression from the general framework. The cost function for linear regression can now be written as:

C_P = \sum_{t=1}^{T} \Big( \sum_{x \in X_t} (y - \langle w_t, x \rangle)^2 + \frac{\lambda}{2} \|w_t\|^2 + \gamma \, P_M(w_t) \Big).    (11)

This cost function may be convex or non-convex depending on the manifold term P_M(w_t); the first two terms are convex. If one uses the approximation (10), the problem becomes convex and has a form similar to STL. The solution under this approximation is given by:

w_t = \big( (\lambda + \gamma) I + X_t X_t^T \big)^{-1} \big( X_t Y_t^T + \gamma \tilde{w}_t^M \big),    (12)

where I is the d \times d identity matrix, X_t is the d \times n example matrix, and Y_t is a row vector of the corresponding labels. \tilde{w}_t^M is the orthogonal projection of w_t on the manifold.

3.3 Algorithm Description

The algorithm for MTL with manifold regularization is straightforward and is shown in Algorithm 1. The algorithm begins in the STL setting, i.e., each task parameter is learned independently. These learned task parameters are then used to estimate the task-manifold. Keeping the manifold structure fixed, we relearn all task parameters using manifold regularization. Equation (9) is used to compute the gradient of the projection distance used in relearning the parameters. This step gives us an explicit representation of the projection in the case of a linear kernel, and a set of weights in the case of a non-linear kernel. The code currently available for computing the projection [11] only handles points in Euclidean space (an RKHS with linear kernel), not in a general RKHS, though in theory it is possible to extend it to a general RKHS. Once the parameters for all tasks are learned, the manifold is re-estimated based on the updated task parameters.
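The alternating loop of Algorithm 1 for the linear-regression case can be sketched as follows. This is a hedged illustration, not the authors' implementation: in place of the nonlinear manifold learner of [11], the shared structure is re-estimated here as a PCA subspace of the current task parameters (effectively a linear stand-in for the task-manifold), while the per-task update follows the closed form of eq. (12). All names and the toy data are our assumptions.

```python
import numpy as np

def mtl_alternate(Xs, Ys, lam=0.1, gam=0.1, dim=1, iters=5):
    """Alternating optimization in the spirit of Algorithm 1 (linear regression).

    Xs[t] : (d, n) example matrix of task t; Ys[t] : (n,) labels.
    Stand-in manifold step: a `dim`-dimensional PCA subspace of the current
    task parameters replaces the nonlinear manifold learner of [11].
    """
    d = Xs[0].shape[0]

    def ridge_update(X, y, w_tilde):
        # eq. (12): w_t = ((lam + gam) I + X X^T)^(-1) (X y + gam * w_tilde)
        A = (lam + gam) * np.eye(d) + X @ X.T
        return np.linalg.solve(A, X @ y + gam * w_tilde)

    # Initialization: independent single-task ridge regression (no coupling).
    W = np.stack([np.linalg.solve(lam * np.eye(d) + X @ X.T, X @ y)
                  for X, y in zip(Xs, Ys)])
    for _ in range(iters):
        mu = W.mean(axis=0)                   # re-fit the shared structure
        _, _, Vt = np.linalg.svd(W - mu, full_matrices=False)
        B = Vt[:dim]                          # basis of the stand-in subspace
        W_tilde = mu + (W - mu) @ B.T @ B     # projections w_tilde_t
        W = np.stack([ridge_update(X, y, wt)  # relearn each task independently
                      for X, y, wt in zip(Xs, Ys, W_tilde)])
    return W

# Toy usage: 8 tasks whose true parameters lie on a line in R^3, 4 examples each.
rng = np.random.default_rng(1)
Xs, Ys = [], []
for t in range(8):
    X = rng.standard_normal((3, 4))
    w_true = (t / 7.0) * np.array([1.0, 1.0, 0.0])
    Xs.append(X)
    Ys.append(w_true @ X + 0.01 * rng.standard_normal(4))
W = mtl_alternate(Xs, Ys)
```

Note how, with the manifold fixed, the inner loop decomposes across tasks exactly as stated in the text: each w_t is updated in isolation given its projection w_tilde_t.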
This process is repeated for a fixed number of iterations (in our experiments we use 5 iterations).

4 Experiments

In this section, we consider the regression task and show the experimental results of our method. We evaluate our method on both synthetic and real datasets.

4.1 Synthetic Dataset

First, we evaluate our method on synthetic data, generated from task parameters sampled from a known manifold (a swiss roll). The data is generated by first sampling points from the 3-dimensional swiss roll, and then using these points as the task parameters to generate the examples using the linear regression model. We sample 100 tasks, and for each task we generate 2 examples. The number of examples per task is kept low for two reasons. First, the task at hand is linear and therefore relatively easy; with more examples the STL method itself gives a nearly perfect regression model, leaving almost no room for improvement. Second, MTL in the real world makes sense only when the number of examples per task is low. In all of our experiments, we compare our approach with the approach presented in [4] for two reasons: first, it is the approach most closely related to ours (it makes a linear assumption while we make a non-linear one), and second, code is available online. In all our experiments we report the root mean square error (RMSE) [4]. For a set of 100 tasks, taskwise results for the synthetic data are shown in Figure 2(a). In this figure, the x-axis represents the RMSE of the STL model while the y-axis is the RMSE of the MTL model, so the figure shows the performance of the MTL model relative to the STL model. Each point (x, y) in the figure represents an (STL, MTL) pair. Blue dots denote the MTL performance of our method while green crosses denote the performance of the baseline method [4]. The red line denotes the points where MTL and STL performed equally.
Any point above the red line indicates that the RMSE of MTL is higher (bad case) while points below indicate that the RMSE of MTL is lower (good case). It is clear from Figure 2(a) that our method is able to use the manifold information and therefore outperforms both the STL and the MTL-baseline methods. We improve the performance of almost all tasks with respect to STL, while the MTL-baseline improves the performance of only a few tasks. Note the mean performance improvement (reduction in RMSE, i.e., RMSE(STL) − RMSE(MTL)) averaged over all tasks for our method and for the baseline MTL: we get an improvement of +0.0131 while the baseline has a negative performance improvement of −0.0204. For statistical significance, reported numbers are averaged over 10 runs. Hyperparameters of both models (the baseline's, and our λ and γ) were tuned on a small dataset chosen randomly.

4.2 Real Regression Datasets

We now evaluate our method on two real datasets, the school dataset and the computer survey dataset [14], the same datasets as used in the baseline model [4]. Moreover, they have also been used in previous MTL studies, for example, the school dataset in [5, 10] and the computer dataset in [14].

Computer. This dataset is a survey of 190 students who rated the likelihood of purchasing one of 20 different personal computers. Here students correspond to tasks and computers correspond to examples. Each student rated all of the 20 computers on a scale of 0-10, therefore giving 20 labeled examples per task. Each computer (input example) is represented by 13 different computer characteristics (RAM, cache, CPU, price, etc.). Training and test sets were obtained by splitting the dataset into 75% and 25%, thus giving 15 examples for training and 5 examples for testing.

School. This dataset is from the Inner London Education Authority and consists of the examination scores of 15362 students from 139 schools in London. Here, each school corresponds to a task, thus a total of 139 tasks.
The input consists of the year of the examination, 4 school-specific and 3 student-specific attributes. Following [5, 4], each categorical feature is replaced with binary features, giving a total of 26 features. We again split the dataset into 75% training and 25% testing. As for the synthetic dataset, the hyperparameters of the baseline method and of the manifold method (γ and λ) were tuned on a small validation set picked randomly from the training set. Whenever an experiment requires fewer examples, the examples were chosen randomly, and reported numbers were averaged over 10 runs for statistical significance. 2For a fair comparison, we use the code provided by the author, available at http://ttic.uchicago.edu/˜argyriou/code/mtl_feat/mtl_feat.tar. 3Available at http://www.cmm.bristol.ac.uk/learning-training/multilevel-m-support/datasets.shtml Figure 2: Taskwise performance on the synthetic dataset. The red line marks where STL and MTL perform equally. Any points above it represent tasks whose RMSE increased under the MTL framework, while those below show a performance improvement (reduced RMSE). Green crosses are the baseline method and blue dots are the manifold method. Avg{Manifold,Baseline} in the title is the mean performance improvement of all tasks over STL. (b) Average RMSE vs. number of examples for the school dataset. Figure 3: Taskwise performance on (a) computer and (b) school datasets.
Note that the fewer the examples, the higher the variance due to randomness. To test whether learning the tasks simultaneously helps, we excluded the zero value while tuning the MTL hyperparameters, to avoid the MTL methods reducing to STL. Figures 3(a) and 3(b) show the taskwise performance on the computer and school datasets respectively. On the computer dataset we perform significantly better than both the STL and baseline methods. The baseline method performs worse than the STL method, giving a negative average performance improvement of −0.9121; we believe this is because the tasks are related non-linearly. On the school dataset we perform better than both STL and the baseline, though the relative improvement is not as large as on the computer dataset. On the school dataset the baseline method shows mixed behavior relative to STL, performing well on some tasks and worse on others. On both datasets we observe that our method does not cause negative transfer, i.e., it does not cause any task to perform worse than under STL. Although nothing in our problem formulation explicitly guards against negative transfer, this observation is interesting; note that almost all existing MTL methods suffer from the negative-transfer phenomenon. We emphasize that the baseline method has two very important parameters, the regularization parameter and P, and in our experiments we found the baseline to be very sensitive to both. For a fair and competitive comparison, we used the best values of these parameters, tuned on a small validation set picked randomly from the training set.
Figure 4: RMSE vs. number of tasks for (a) the computer dataset and (b) the school dataset. We now show how performance varies with the number of training examples. Figure 2(b) shows the relative performance of STL, MTL-baseline and MTL-Manifold on the school dataset. We outperform the STL method significantly, while performing comparably to the baseline. When the number of examples is relatively low, the baseline outperforms our method because there are not enough examples to estimate the task parameters used for the manifold construction; as the number of examples increases, we get better parameter estimates and hence better manifold regularization. For n > 100 we outperform the baseline method by a small margin. We do not show performance as a function of n for the computer dataset, since it has only 20 examples per task. Performance as a function of the number of tasks for the school and computer datasets is shown in Figure 4. We outperform both the STL and baseline methods on the computer dataset, and perform better than or equal to them on the school dataset. These two plots indicate how the tasks are related in the two datasets: tasks in the school dataset appear to be related linearly (the manifold and baseline methods have the same performance4), while tasks in the computer dataset are related non-linearly, which is why the baseline performs poorly compared to the STL method. The two datasets behave differently as the number of tasks increases, though the behavior relative to STL remains constant. This suggests that beyond a certain number of tasks, performance is not affected by adding more tasks.
This is especially true for the computer dataset, which has only 13 features, so only a few tasks are required to learn the task-relatedness structure. In summary, our method improves over STL on all of these datasets (no negative transfer), while the baseline method performs comparably on the school dataset and worse on the computer dataset. 5 Conclusion We have presented a novel method for multitask learning based on a natural and intuitive assumption about task relatedness. We use the manifold assumption to enforce task relatedness, which generalizes previous notions of relatedness. Unlike many previous approaches, our method does not require any external information (e.g., a function or matrix) beyond the manifold assumption. We have performed experiments on synthetic and real datasets, compared our results with a state-of-the-art method, and shown that we outperform the baseline in nearly all cases. We emphasize that, unlike the baseline method, we improve over single task learning in almost all cases and do not encounter negative transfer. 4In the ideal case, the non-linear method should be able to discover a linear structure, but in practice the two may differ, especially when there are few tasks. This is why we only match the baseline on the school dataset when the number of tasks is high. References [1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, 2006. [2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, in press, 2007. [3] A. Argyriou, C. A. Micchelli, and M. Pontil. When is there a representer theorem? Vector versus matrix regularizers. J. Mach. Learn. Res., 10:2507–2529, 2009. [4] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In NIPS, 2008. [5] B.
Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. JMLR, 4, 2003. [6] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2002. [7] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res., 7:2399–2434, 2006. [8] R. Caruana. Multitask learning. Machine Learning, pages 41–75, 1997. [9] H. Daumé III. Bayesian multitask learning with latent hierarchies. In Conference on Uncertainty in Artificial Intelligence, Montreal, Canada, 2009. [10] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. JMLR, 6:615–637, 2005. [11] S. Gerber, T. Tasdizen, and R. Whitaker. Dimensionality reduction and principal surfaces via kernel map manifolds. In Proceedings of the 2009 International Conference on Computer Vision (ICCV), 2009. [12] T. Hastie. Principal curves and surfaces. PhD thesis, Stanford University, 1984. [13] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: A convex formulation. In NIPS, 2008. [14] P. J. Lenk, W. S. DeSarbo, P. E. Green, and M. R. Young. Hierarchical Bayes conjoint analysis: Recovery of partworth heterogeneity from reduced experimental designs. Marketing Science, 1996. [15] Q. Liu, X. Liao, H. L. Carin, J. R. Stack, and L. Carin. Semisupervised multitask learning. IEEE, 2009. [16] C. A. Micchelli and M. Pontil. Regularized multi-task learning. In KDD, pages 109–117, 2004. [17] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000. [18] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000. [19] S. Thrun and L. Pratt, editors. Learning to learn.
Kluwer Academic Publishers, Norwell, MA, USA, 1998. [20] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. In ICML, pages 839–846. ACM Press, 2004. [21] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with Dirichlet process priors. J. Mach. Learn. Res., 8:35–63, 2007. [22] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In ICML, 2005. [23] J. Zhang, Z. Ghahramani, and Y. Yang. Flexible latent variable models for multi-task learning. Mach. Learn., 73(3):221–242, 2008. [24] J. Zhang, Z. Ghahramani, and Y. Yang. Learning multiple related tasks using latent independent component analysis. In NIPS, 2005.
Group Sparse Coding with a Laplacian Scale Mixture Prior Pierre J. Garrigues IQ Engines, Inc. Berkeley, CA 94704 pierre.garrigues@gmail.com Bruno A. Olshausen Helen Wills Neuroscience Institute School of Optometry University of California, Berkeley Berkeley, CA 94720 baolshausen@berkeley.edu Abstract We propose a class of sparse coding models that utilizes a Laplacian Scale Mixture (LSM) prior to model dependencies among coefficients. Each coefficient is modeled as a Laplacian distribution with a variable scale parameter, with a Gamma distribution prior over the scale parameter. We show that, due to the conjugacy of the Gamma prior, it is possible to derive efficient inference procedures for both the coefficients and the scale parameter. When the scale parameters of a group of coefficients are combined into a single variable, it is possible to describe the dependencies that occur due to common amplitude fluctuations among coefficients, which have been shown to constitute a large fraction of the redundancy in natural images [1]. We show that, as a consequence of this group sparse coding, the resulting inference of the coefficients follows a divisive normalization rule, and that this may be efficiently implemented in a network architecture similar to that which has been proposed to occur in primary visual cortex. We also demonstrate improvements in image coding and compressive sensing recovery using the LSM model. 1 Introduction The concept of sparsity is widely used in the signal processing, machine learning and statistics communities for model fitting and solving inverse problems. It is also important in neuroscience as it is thought to underlie the neural representations used by the brain. 
The operation to compute the sparse representation of a signal $x \in \mathbb{R}^n$ with respect to a dictionary of basis functions $\Phi \in \mathbb{R}^{n \times m}$ can be implemented via an $\ell_1$-penalized least-squares problem commonly referred to as Basis Pursuit Denoising (BPDN) [2] or the Lasso [3]:
$$\min_s \; \tfrac{1}{2}\|x - \Phi s\|_2^2 + \mu \|s\|_1, \qquad (1)$$
where $\mu$ is a regularization parameter that controls the tradeoff between the quality of the reconstruction and the sparsity. This approach has been applied to problems such as image coding, compressive sensing [4], and classification [5]. The $\ell_1$ penalty leads to solutions where typically a large number of coefficients are exactly zero, which is a desirable property for model selection, data compression, or obtaining interpretable results. The cost function of BPDN is convex, and many efficient algorithms have recently been developed to solve this problem [6, 7, 8, 9]. Minimizing the cost function of BPDN corresponds to MAP inference in a probabilistic model where the coefficients are independent and have Laplacian priors $p(s_i) = \frac{\lambda}{2} e^{-\lambda |s_i|}$. Hence, the signal model assumed by BPDN is linear and generative, and the basis function coefficients are independent. In the context of analysis-based models of natural images (for a review of analysis-based and synthesis-based, or generative, models see [10]), it has been shown that the linear responses of natural images to Gabor-like filters have kurtotic histograms, and that there can be strong dependencies among these responses in the form of common amplitude fluctuations [11, 12, 13, 14]. It has also been observed in the context of generative image models that the inferred sparse coefficients exhibit pronounced statistical dependencies [15, 16], and therefore the independence assumption is violated. Block-$\ell_1$ methods have been proposed to account for dependencies among the coefficients by dividing them into subspaces such that dependencies are allowed within the subspaces, but not across them [17].
This approach can produce blocking artifacts and has recently been generalized to overlapping subspaces in [18]. Another approach is to allow only certain configurations of active coefficients [19]. We propose in this paper a new class of priors on the basis function coefficients that makes it possible to model their statistical dependencies in a probabilistic generative model, whose inferred representations are sparser than those obtained with the factorial Laplacian prior, and for which we have efficient inference algorithms. Our approach consists of introducing, for each coefficient, a hyperprior on the inverse scale parameter $\lambda_i$ of the Laplacian distribution. The coefficient prior is thus a mixture of Laplacian distributions, which we denote a "Laplacian Scale Mixture" (LSM) in analogy to the Gaussian scale mixture (GSM) [12]. Higher-order dependencies of feedforward responses of wavelet coefficients [12] or basis functions learned using independent component analysis [14] have been captured using GSMs, and we extend this approach to a generative sparse coding model using LSMs. We define the Laplacian scale mixture in Section 2, and we describe the inference algorithms in the resulting sparse coding models with an LSM prior on the coefficients in Section 3. We present an example of a factorial LSM model in Section 4, and of a non-factorial LSM model in Section 5 that is particularly well suited to signals having the "group sparsity" property. We show that the non-factorial LSM results in a divisive normalization rule for inferring the coefficients. When the groups are organized topographically and the basis is trained on natural images, the resulting model resembles the neighborhood divisive normalization that has been hypothesized to occur in visual cortex. We also demonstrate that the proposed LSM inference algorithm provides superior performance in image coding and compressive sensing recovery.
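For concreteness, the BPDN problem (1) can be minimized with a simple proximal-gradient (ISTA-style) loop. This is a minimal sketch under a fixed step size, not one of the specialized solvers of [6, 7, 8, 9]:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bpdn_ista(x, Phi, mu, n_iter=500):
    """Minimize 0.5 * ||x - Phi s||_2^2 + mu * ||s||_1 by iterative
    soft thresholding with step 1/L, where L = ||Phi||_2^2 is the
    Lipschitz constant of the gradient of the quadratic term."""
    L = np.linalg.norm(Phi, 2) ** 2
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ s - x)             # gradient of the quadratic term
        s = soft_threshold(s - grad / L, mu / L)  # gradient step + l1 prox
    return s
```

With a moderate $\mu$, the returned coefficient vector typically has many exactly-zero entries, which is the sparsity behavior the text describes.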
2 The Laplacian Scale Mixture distribution A random variable $s_i$ is a Laplacian scale mixture if it can be written $s_i = \lambda_i^{-1} u_i$, where $u_i$ has a Laplacian distribution with scale 1, i.e. $p(u_i) = \frac{1}{2} e^{-|u_i|}$, and the multiplier variable $\lambda_i$ is a positive random variable with probability $p(\lambda_i)$. We also suppose that $\lambda_i$ and $u_i$ are independent. Conditioned on the parameter $\lambda_i$, the coefficient $s_i$ has a Laplacian distribution with inverse scale $\lambda_i$, i.e. $p(s_i \mid \lambda_i) = \frac{\lambda_i}{2} e^{-\lambda_i |s_i|}$. The distribution over $s_i$ is therefore a continuous mixture of Laplacian distributions with different inverse scales, and it can be computed by integrating out $\lambda_i$:
$$p(s_i) = \int_0^\infty p(s_i \mid \lambda_i)\, p(\lambda_i)\, d\lambda_i = \int_0^\infty \frac{\lambda_i}{2} e^{-\lambda_i |s_i|}\, p(\lambda_i)\, d\lambda_i.$$
Note that for most choices of $p(\lambda_i)$ we do not have an analytical expression for $p(s_i)$. We denote such a distribution a Laplacian Scale Mixture (LSM). It is a special case of the Gaussian Scale Mixture (GSM) [12], as the Laplacian distribution can itself be written as a GSM. 3 Inference in a sparse coding model with LSM prior We propose the linear generative model
$$x = \Phi s + \nu = \sum_{i=1}^m s_i \phi_i + \nu, \qquad (2)$$
where $x \in \mathbb{R}^n$, $\Phi = [\phi_1, \ldots, \phi_m] \in \mathbb{R}^{n \times m}$ is an overcomplete transform or basis set whose columns $\phi_i$ are its basis functions, and $\nu \sim \mathcal{N}(0, \sigma^2 I_n)$ is small Gaussian noise. The coefficients are endowed with LSM distributions. They can be used to reconstruct $x$ and are called the synthesis coefficients. Given a signal $x$, we wish to infer its sparse representation $s$ in the dictionary $\Phi$. We consider in this section the computation of the maximum a posteriori (MAP) estimate of the coefficients $s$ given the input signal $x$. Using Bayes' rule we have $p(s \mid x) \propto p(x \mid s)\, p(s)$, and therefore the MAP estimate $\hat{s}$ is given by
$$\hat{s} = \arg\min_s \{-\log p(s \mid x)\} = \arg\min_s \{-\log p(x \mid s) - \log p(s)\}. \qquad (3)$$
In general it is difficult to compute the MAP estimate with an LSM prior on $s$, since we do not necessarily have an analytical expression for the log-likelihood $\log p(s)$.
However, we can compute the complete log-likelihood $\log p(s, \lambda)$ analytically:
$$\log p(s, \lambda) = \log p(s \mid \lambda) + \log p(\lambda) = \sum_{i=1}^m \left( \log \frac{\lambda_i}{2} - \lambda_i |s_i| \right) + \log p(\lambda).$$
Hence, if we also observed the latent variable $\lambda$, we would have an objective function that can be maximized with respect to $s$. The standard approach in machine learning when confronted with such a problem is the Expectation-Maximization (EM) algorithm, and we derive in this section an EM algorithm for the MAP estimation of the coefficients. We use Jensen's inequality and obtain the following upper bound on the negative posterior log-likelihood:
$$-\log p(s \mid x) \le -\log p(x \mid s) - \int q(\lambda) \log \frac{p(s, \lambda)}{q(\lambda)}\, d\lambda := \mathcal{L}(q, s), \qquad (4)$$
which holds for any probability distribution $q(\lambda)$. Performing coordinate descent on the auxiliary function $\mathcal{L}(q, s)$ leads to the following updates, usually called the E step and the M step:
$$\text{E step:}\quad q^{(t+1)} = \arg\min_q \mathcal{L}(q, s^{(t)}), \qquad (5)$$
$$\text{M step:}\quad s^{(t+1)} = \arg\min_s \mathcal{L}(q^{(t+1)}, s). \qquad (6)$$
Let $\langle \cdot \rangle_q$ denote the expectation with respect to $q(\lambda)$. The M step (6) simplifies to
$$s^{(t+1)} = \arg\min_s \frac{1}{2\sigma^2} \|x - \Phi s\|_2^2 + \sum_{i=1}^m \langle \lambda_i \rangle_{q^{(t+1)}} |s_i|, \qquad (7)$$
which is a least-squares problem regularized by a weighted sum of the absolute values of the coefficients. It is a quadratic program very similar to BPDN, and we can therefore use efficient algorithms developed for BPDN that take advantage of the sparsity of the solution. This presents a significant computational advantage over the GSM prior, for which the inferred coefficients are not exactly sparse. We have equality in Jensen's inequality if $q(\lambda) = p(\lambda \mid s)$. The bound (4) is therefore tight for this particular choice of $q$, which implies that the E step reduces to $q^{(t+1)}(\lambda) = p(\lambda \mid s^{(t)})$. Note that in the M step we only need the expectation of $\lambda_i$ with respect to the maximizing distribution from the E step; hence we only need to compute the sufficient statistics
$$\langle \lambda_i \rangle_{p(\lambda \mid s^{(t)})} = \int \lambda_i\, p(\lambda \mid s^{(t)})\, d\lambda. \qquad (8)$$
Note that the posterior of the multiplier given the coefficients, $p(\lambda \mid s)$, might be hard to compute. We will see in Section 4.1 that it is tractable if the prior on $\lambda$ is factorial and each $\lambda_i$ has a Gamma distribution, as the Laplacian distribution and the Gamma distribution are conjugate. We can apply the efficient algorithms developed for BPDN to solve (7). Furthermore, warm-start-capable algorithms are particularly interesting in this context: we can initialize the algorithm with $s^{(t)}$, and we do not expect the solution to change much after a few iterations of EM. 4 Sparse coding with a factorial LSM prior We propose in this section a sparse coding model where the distribution of the multipliers is factorial and each multiplier has a Gamma distribution, i.e. $p(\lambda_i) = \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda_i^{\alpha - 1} e^{-\beta \lambda_i}$, where $\alpha$ is the shape parameter and $\beta$ is the inverse scale parameter. With this particular choice of prior on the multiplier, we can compute the probability distribution of $s_i$ analytically:
$$p(s_i) = \frac{\alpha \beta^\alpha}{2 (\beta + |s_i|)^{\alpha + 1}}.$$
This distribution has heavier tails than the Laplacian distribution. The graphical model corresponding to this generative model is shown in Figure 1. 4.1 Conjugacy The Gamma distribution and the Laplacian distribution are conjugate, i.e. the posterior probability of $\lambda_i$ given $s_i$ is also a Gamma distribution when the prior over $\lambda_i$ is a Gamma distribution and the conditional probability of $s_i$ given $\lambda_i$ is a Laplacian distribution with inverse scale $\lambda_i$. Hence, the posterior of $\lambda_i$ given $s_i$ is a Gamma distribution with parameters $\alpha + 1$ and $\beta + |s_i|$. Conjugacy is a key property that we use in the EM algorithm proposed in Section 3: the solution of the E step is given by $q^{(t+1)}(\lambda) = p(\lambda \mid s^{(t)})$, and in the factorial model we have $p(\lambda \mid s^{(t)}) = \prod_i p(\lambda_i \mid s_i^{(t)})$.
The solution of the E step is therefore a product of Gamma distributions with parameters $\alpha + 1$ and $\beta + |s_i^{(t)}|$, and the sufficient statistics (8) are given by
$$\langle \lambda_i \rangle_{p(\lambda_i \mid s_i^{(t)})} = \frac{\alpha + 1}{\beta + |s_i^{(t)}|}. \qquad (9)$$
A coefficient that has a small but nonzero value after $t$ iterations will receive a large reweighting factor $\lambda_i^{(t+1)}$ in the next iteration, which increases the chance that it will be set to zero, resulting in a sparser representation. On the other hand, a coefficient with a large value after $t$ iterations corresponds to a feature that is very salient in the signal $x$; it is therefore beneficial to reduce its inverse scale $\lambda_i^{(t+1)}$ so that it is not penalized and can account for as much information as possible. Since with the Gamma prior we can compute the distribution of $s_i$ analytically, we can also compute the gradient of $\log p(s \mid x)$ with respect to $s$. Hence another inference algorithm is to descend the cost function in (3) directly using a method such as conjugate gradient, or the method proposed in [20], where the authors also exploit the conjugacy of the Laplacian and Gamma priors. We argue here that the EM algorithm is in fact more efficient: the solution of (7) typically has few nonzero elements, and the computational complexity scales with the number of nonzero coefficients [6, 7]. A gradient-based method, on the other hand, has a harder time identifying the support of the solution, so the required computations involve all the coefficients, which is computationally expensive. The update formula (9) is coincidentally equivalent to the reweighted $\ell_1$ minimization scheme proposed by Candès et al. [21]. They solve the following sequence of problems:
$$s^{(t+1)} = \arg\min_s \sum_{i=1}^m \lambda_i^{(t)} |s_i| \quad \text{subject to} \quad \|x - \Phi s\|_2 \le \delta, \qquad (10)$$
with the update $\lambda_i^{(t+1)} = 1 / (\beta + |s_i^{(t)}|)$ (which is identical to our rule when $\alpha = 0$).
The authors show that the solutions achieved by their algorithm are sparser than the solution with $\lambda_i = 1$ for all $i$. Whereas they derive this rule from mathematical intuitions regarding the $\ell_1$ ball, we show that the update rule follows from Bayesian inference assuming a Gamma prior over $\lambda$. It has also been shown that evidence maximization in a sparse coding model with an automatic relevance determination prior can be solved via a sequence of reweighted $\ell_1$ optimization problems [22]. 4.2 Application to image coding It has been shown that the convex relaxation consisting of replacing the $\ell_0$ norm with the $\ell_1$ norm is able to identify the sparsest solution under some conditions on the dictionary of basis functions [23]. However, these conditions are typically not satisfied by dictionaries learned from the statistics of natural images [24]. For instance, it was observed in [16] that it is possible to infer sparser representations with a prior over the coefficients that is a mixture of a delta function at zero and a Gaussian distribution than with the Laplacian prior. We show that our proposed inference algorithm also leads to sparser representations, as the LSM prior with a Gamma hyperprior has heavier tails than the Laplacian distribution. We selected 1000 16 × 16 image patches at random and computed their sparse representations in a dictionary of 256 basis functions using both the conventional Laplacian prior and our LSM prior. The dictionary is learned from the statistics of natural images [24] using a Laplacian prior over the coefficients. To ensure that the reconstruction error is the same in both cases, we solve the constrained version of the problem as in [21], requiring that the signal-to-noise ratio of the reconstruction equal 10. We choose β = 0.01 and 5 EM iterations. We can see in Figure 2 that the representations using the LSM prior are indeed sparser, by approximately a factor of 2.
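Putting the pieces together, the factorial-LSM inference of Section 4, with the M step (7) solved by a soft-thresholding loop and the E step given by the closed-form update (9), can be sketched as follows. The inner solver and the default constants are illustrative, not the authors' implementation:

```python
import numpy as np

def lsm_map_inference(x, Phi, sigma2=0.01, alpha=0.0, beta=0.01,
                      n_em=5, n_inner=200):
    """MAP inference of the coefficients under a factorial LSM prior via EM.

    M step: the weighted-l1 problem (7), solved here with ISTA-style updates.
    E step: <lambda_i> = (alpha + 1) / (beta + |s_i|), Eq. (9).
    """
    m = Phi.shape[1]
    w = np.ones(m)                               # current weights <lambda_i>
    s = np.zeros(m)
    L = np.linalg.norm(Phi, 2) ** 2 / sigma2     # Lipschitz constant of the data term
    for _ in range(n_em):
        for _ in range(n_inner):                 # M step (weighted BPDN)
            grad = Phi.T @ (Phi @ s - x) / sigma2
            z = s - grad / L
            s = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
        w = (alpha + 1.0) / (beta + np.abs(s))   # E step, Eq. (9)
    return s
```

As the text notes, warm-starting helps here: each M step is initialized at the previous iterate `s`, which changes little after the first few EM iterations.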
Note that the computational complexity of computing these sparse representations is much lower than that of [16]. Figure 1: Graphical model representation of our proposed generative model where the multiplier distribution is factorial. Figure 2: Sparsity comparison. On the x-axis (resp. y-axis) is the $\ell_0$ norm of the representation inferred with the Laplacian prior (resp. LSM prior). 5 Sparse coding with a non-factorial model It has been shown that many natural signals such as sound or images have a particular type of higher-order, sparse structure in which active coefficients occur in groups corresponding to basis functions with similar properties (position, orientation, or frequency tuning) [25, 1]. We focus in this section on a class of signals with this kind of higher-order structure, where the active coefficients occur in groups. We show that the LSM prior can be used to capture this group structure in natural images, and we propose an efficient inference algorithm for this case. 5.1 Group sparsity We consider a dictionary $\Phi$ such that the basis functions can be divided into a set of disjoint groups or neighborhoods indexed by $N_k$, i.e. $\{1, \ldots, m\} = \bigcup_{k \in \Lambda} N_k$ with $N_i \cap N_j = \emptyset$ if $i \ne j$. A signal has the group sparsity property if its sparse coefficients occur in groups, i.e. the indices of the nonzero coefficients are given by $\bigcup_{k \in \Gamma} N_k$, where $\Gamma$ is a subset of $\Lambda$. The group sparsity structure can be captured with the LSM prior by having all the coefficients in a group share the same inverse scale parameter, i.e. $\lambda_i = \lambda_{(k)}$ for all $i \in N_k$. The corresponding graphical model is shown in Figure 3. This addresses the case where dependencies are allowed within groups, but not across groups, as in the block-$\ell_1$ method [17].
Note that for some types of dictionaries it is more natural to consider overlapping groups, to avoid blocking artifacts. We propose in Section 5.2 inference algorithms for both the overlapping and non-overlapping cases. Figure 3: The two groups $N_{(k)} = \{i-2, i-1, i\}$ and $N_{(l)} = \{i+1, i+2, i+3\}$ are non-overlapping. Figure 4: The basis function coefficients in the neighborhood defined by $N(i) = \{i-1, i, i+1\}$ share the same multiplier $\lambda_i$. 5.2 Inference In the EM algorithm proposed in Section 3, the sufficient statistics computed in the E step are $\langle \lambda_i \rangle_{p(\lambda_i \mid s^{(t)})}$ for all $i$. We suppose, as in Section 4.1, that the prior on $\lambda_{(k)}$ is a Gamma distribution with
We propose to estimate the scale parameter λi by only considering the coefficients in N(i), and suppose that they all share the same multiplier λi. In this case the EM update is given by λ(t+1) i = α + |N(i)| β + P j∈N (i) |s(t) j | . (13) Note that we have not derived this rule from a proper probabilistic model. A coefficient is indeed a member of many neighborhoods as shown in Figure 4, and the structure of the dependencies implies p(λi | s) ̸= p(λi | sN(i)). However, we show experimentally that estimating the multiplier using (13) gives good performance. A similar approximation is used in the GSM analysis-based model [26]. Note that the noise shaping algorithm, which bears similarities with the iterative thresholding algorithm developed for BPDN [7], is modified in [27] using an update that is essentially inversely proportional to ours. The authors show improved coding efficiency in the context of natural images. 5.3 Compressive sensing recovery In compressed sensing, we observe a number n of random projections of a signal s0 ∈Rm, and it is in principle impossible to recover s0 if n < m. However, if s0 has p non-zero coefficients, it has been shown in [28] that it is sufficient to use n ∝p log m such measurements. We denote by W ∈Rn×m the measurement matrix and let y = Ws0 be the observations. A standard method to obtain the reconstruction is to use the solution of the Basis Pursuit (BP) problem ˆs = arg min s ∥s∥1 subject to Ws = y. (14) Note that the solution of BP is the solution of BPDN as µ converges to zero in (1), or δ = 0 in (10). If the signal has structure beyond sparsity, one can in principle recover the signal with even fewer measurements using an algorithm that exploits this structure [19, 29]. We therefore compare the performance of BP with the performance of our proposed LSM inference algorithms s(t+1) = arg min s m X i=1 λ(t) i |si| subject to Ws = y. (15) We denote by RWBP the algorithm with the factorial update (9), and RW3BP (resp. 
RW5BP) the algorithm with our proposed divisive normalization update (13) with group size 3 (resp. 5). We consider 50-dimensional signals that are sparse in the canonical basis and where the neighborhood size is 3. To sample such a signal s ∈R50, we draw a number d of “centroids” i, and we sample three values for si−1, si and si+1 using a normal distribution of variance 1. The groups are thus allowed to overlap. A compressive sensing recovery problem is parameterized by (m, n, d). To explore the problem space we display the results using phase plots as in [30], which plots performance as a function of different parameter settings. We fix m = 50 and parameterize the phase plots using the indeterminacy of the system indexed by δ = n/m, and the approximate sparsity of the system 6 indexed by ρ = 3d/m. We vary δ and ρ in the range [.1, .9] using a 30 by 30 grid. For a given value (δ, ρ) on the grid, we sample 10 sparse signals using the corresponding (m, n, d) parameters. The underlying sparse signal is recovered using the three algorithms and we average the recovery error ∥ˆs −s0∥2/∥s0∥2 for each of them. We show in Figure 5 that RW3BP clearly outperforms RWBP. There is a slight improvement by going from BP to RWBP (see supplementary material), but this improvement is rather small as compared with going from RWBP to RW3BP and RW5BP. This illustrates the importance of using the higher-order structure of the signals in the inference algorithm. The performance of RW3BP and RW5BP is comparable (see supplementary material), which shows that our algorithm is not very sensitive to the choice of the neighborhood size. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 ρ 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 δ RWBP 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 ρ 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 δ RW3BP 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Figure 5: Compressive sensing recovery results using synthetic data. 
Shown are the phase plots for a sequence of BP problems with the factorial update (RWBP), and a sequence of BP problems with the divisive normalization update with neighborhood size 3 (RW3BP). On the x-axis is the sparsity of the system indexed by ρ = 3d/m, and on the y-axis is the indeterminacy of the system indexed by δ = n/m. At each point (ρ, δ) in the phase plot we display the average recovery error.

5.4 Application to natural images

It has been shown that adapting a dictionary of basis functions to the statistics of natural images so as to maximize sparsity in the coefficients results in a set of dictionary elements whose spatial properties match those of V1 (primary visual cortex) receptive fields [24]. However, the basis functions are learned under a probabilistic model where the probability density over the basis function coefficients is factorial, whereas the sparse coefficients exhibit statistical dependencies [15, 16]. Hence, a generative model with a factorial LSM prior is not rich enough to capture the complex statistics of natural images. We propose here to model these dependencies using a non-factorial LSM model. We fix a topography where the basis function coefficients are arranged on a 2D grid, with overlapping neighborhoods of fixed size 3 × 3. The corresponding inference algorithm uses the divisive normalization update (13). We learn the optimal dictionary of basis functions Φ using the learning rule ΔΦ = η (x − Φŝ)ŝᵀ as in [24], where η is the learning rate, ŝ are the basis function coefficients inferred under the model (13), and the average is taken over a batch of size 100. We fix n = m = 256, and sample 16 × 16 image patches from a set of whitened images, using a total of 100000 batches. The learned basis functions are shown in Figure 6. We see here that the neighborhoods of size 3 × 3 group basis functions at a similar position, scale and orientation.
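As a minimal sketch of the divisive-normalization update (13) with overlapping 3 × 3 neighborhoods on a 2D coefficient grid, as used for the topography above; border handling (windows clipped at the grid edges) and parameter values are our assumptions, not the paper's:

```python
import numpy as np

def reweight_2d(S, alpha, beta):
    """Multiplier update (13) with overlapping 3x3 neighborhoods on a 2D
    coefficient grid S (windows clipped at the borders):
    lambda_ij = (alpha + |N(ij)|) / (beta + sum over the window of |S|)."""
    h, w = S.shape
    A = np.abs(S)
    lam = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = A[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            lam[i, j] = (alpha + win.size) / (beta + win.sum())
    return lam

# Coefficients near active neighbors get small multipliers (weak penalty
# in the next reweighted l1 solve); isolated zero regions get large ones.
S = np.zeros((4, 4)); S[1, 1] = 2.0
lam = reweight_2d(S, alpha=1.0, beta=0.1)
```

Because the neighborhoods overlap, each coefficient's penalty is lowered whenever any of its 3 × 3 neighbors is active, which is how the update encourages group-structured supports.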
The topography is similar to how neurons are arranged in the visual cortex, and is reminiscent of the results obtained in topographic ICA [13] and topographic mixture of experts models [31]. An important difference is that our model is based on a generative sparse coding model in which both inference and learning can be implemented via local network interactions [7]. Because of the topographic organization, we also obtain a neighborhood-based divisive normalization rule. Does the proposed non-factorial model represent image structure more efficiently than those with factorial priors? To answer this question we measured the models’ ability to recover sparse structure in the compressed sensing setting. We note that the basis functions are learned such that they represent the sparse structure in images, as opposed to representing the images exactly (there is a noise term in the generative model (2)). Hence, we design our experiment such that we measure the recovery of this sparse structure. Using the basis functions shown in Figure 6, we first infer the sparse coefficients s0 of an image patch x such that ‖x − Φs0‖_2 < δ using the inference algorithm corresponding to the model. We fix δ such that the SNR is 10, and thus the three sparse approximations for the three models contain the same amount of signal power. We then compute random projections y = ˜WΦs0, where ˜W is the random measurement matrix. We attempt to recover the sparse coefficients as in Section 5.3 by substituting W := ˜WΦ and y := ˜WΦs0. We compare the recovery performance ‖Φŝ − Φs0‖_2/‖Φs0‖_2 for 100 16 × 16 image patches selected at random, and we use 110 random projections. We can see in Figure 7 that the model with the non-factorial LSM prior outperforms the other models, as it is able to capture the group sparsity structure in natural images.

Figure 6: Basis functions learned in a non-factorial LSM model with overlapping groups of size 3 × 3.

Figure 7: Compressive sensing recovery.
On the x-axis is the recovery performance for the factorial LSM model (RWBP), and on the y-axis the recovery performance for the non-factorial LSM model with 3 × 3 overlapping groups (RW3×3BP). RW3×3BP outperforms RWBP. See supplementary material for the comparison between RW3×3BP and BP as well as between RWBP and BP.

6 Conclusion

We introduced a new class of probability densities that can be used as a prior for the coefficients in a generative sparse coding model of images. By exploiting the conjugacy of the Gamma and Laplacian priors, we were able to derive an efficient inference algorithm that consists of solving a sequence of reweighted ℓ1 least-squares problems, thus leveraging the multitude of algorithms already developed for BPDN. Our framework also makes it possible to capture higher-order dependencies through group sparsity. When applied to natural images, the learned basis functions of the model may be topographically organized according to the specified group structure. We also showed that exploiting the group sparsity results in performance gains for compressive sensing recovery on natural images. An open question is the learning of group structure, which is a topic of ongoing work. We wish to acknowledge support from NSF grant IIS-0705939.

References

[1] S. Lyu and E. P. Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, 2006.
[2] S.S. Chen, D.L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1999.
[3] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[4] Y. Tsaig and D.L. Donoho. Extensions of compressed sensing. Signal Processing, 86(3):549–571, 2006.
[5] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: Transfer learning from unlabeled data.
Proceedings of the Twenty-fourth International Conference on Machine Learning, 2007.
[6] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[7] C.J. Rozell, D.H. Johnson, R.G. Baraniuk, and B.A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20(10):2526–2563, October 2008.
[8] J. Friedman, T. Hastie, H. Hoefling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[9] M. Figueiredo, R. Nowak, and S. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586–597, 2007.
[10] M. Elad, P. Milanfar, and R. Rubinstein. Analysis versus synthesis in signal priors. Inverse Problems, 23(3):947–968, June 2007.
[11] C. Zetzsche, G. Krieger, and B. Wegmann. The atoms of vision: Cartesian or polar? Journal of the Optical Society of America A, 16(7):1554–1565, 1999.
[12] M.J. Wainwright, E.P. Simoncelli, and A.S. Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Applied and Computational Harmonic Analysis, 11(1), July 2001.
[13] A. Hyvärinen, P.O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[14] Y. Karklin and M.S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17(2):397–423, February 2005.
[15] P. Hoyer and A. Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42:1593–1605, 2002.
[16] P.J. Garrigues and B.A. Olshausen. Learning horizontal connections in a sparse coding model of natural images. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, 2007.
[17] M. Yuan and Y. Lin.
Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, February 2006.
[18] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In International Conference on Machine Learning (ICML), 2009.
[19] R.G. Baraniuk, V. Cevher, M.F. Duarte, and C. Hegde. Model-based compressive sensing. Preprint, August 2008.
[20] I. Ramirez, F. Lecumberry, and G. Sapiro. Universal priors for sparse modeling. CAMSAP, December 2009.
[21] E.J. Candès, M.B. Wakin, and S.P. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, to appear, 2008.
[22] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In Advances in Neural Information Processing Systems 20, 2008.
[23] J.A. Tropp. Just relax: convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory, 52(3):1030–1051, 2006.
[24] B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, June 1996.
[25] M.J. Wainwright, O. Schwartz, and E.P. Simoncelli. Natural image statistics and divisive normalization: Modeling nonlinearity and adaptation in cortical neurons. In R. Rao, B.A. Olshausen, and M.S. Lewicki, editors, Statistical Theories of the Brain. MIT Press, 2001.
[26] J. Portilla, V. Strela, M.J. Wainwright, and E.P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11):1338–1351, 2003.
[27] R.M. Figueras and E.P. Simoncelli. Statistically driven sparse image representation. In Proceedings of the 14th IEEE International Conference on Image Processing, volume 6, pages 29–32, September 2007.
[28] E. Candès. Compressive sampling. Proceedings of the International Congress of Mathematicians, 2006.
[29] V. Cevher, M.F. Duarte, C. Hegde, and R.G. Baraniuk.
Sparse signal recovery using Markov random fields. In Advances in Neural Information Processing Systems (NIPS), Vancouver, B.C., Canada, 2008.
[30] D. Donoho and Y. Tsaig. Fast solution of ℓ1-norm minimization problems when the solution may be sparse. Preprint, 2006.
[31] S. Osindero, M. Welling, and G.E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18(2):381–414, 2006.
Fractionally Predictive Spiking Neurons

Sander M. Bohte, CWI, Life Sciences, Amsterdam, The Netherlands, S.M.Bohte@cwi.nl
Jaldert O. Rombouts, CWI, Life Sciences, Amsterdam, The Netherlands, J.O.Rombouts@cwi.nl

Abstract

Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.

1 Introduction

A key issue in computational neuroscience is the interpretation of neural signaling, as expressed by a neuron’s sequence of action potentials. An emerging notion is that neurons may in fact encode information at multiple timescales simultaneously [1, 2, 3, 4]: the precise timing of spikes may convey high-frequency information, and slower measures, like the rate of spiking, may convey low-frequency information.
Such multi-timescale encoding comes naturally, at least for sensory neurons, as the statistics of the outside world often exhibit self-similar multi-timescale features [5] and the magnitude of natural signals can extend over several orders of magnitude. Since neurons are limited in the rate and resolution with which they can emit spikes, the mapping of large dynamic-range signals into spike-trains is an integral part of attempts at understanding neural coding. Experiments have extensively demonstrated that neurons adapt their response when facing persistent changes in signal magnitude. Typically, adaptation changes the relation between the magnitude of the signal and the neuron’s discharge rate. Since adaptation thus naturally relates to neural coding, it has been extensively scrutinized [6, 7, 8]. Importantly, adaptation is found to additionally exhibit features like dynamic gain control, when the standard deviation but not the mean of the signal changes [1], and long-range time-dependent changes in the spike-rate response are found in response to large-magnitude signal steps, with the changes following a power-law decay (e.g. [9]). Tying the notions of self-similar multi-scale natural signals and adaptive neural coding together, it has recently been suggested that neuronal adaptation allows neuronal spiking to communicate a fractional derivative of the actual computed signal [10, 4]. Fractional derivatives are a generalization of standard ‘integer’ derivatives (‘first order’, ‘second order’) to real-valued derivatives (e.g. ‘0.5th order’). A key feature of such derivatives is that they are non-local, and rather convey information over essentially a large part of the signal spectrum [10]. Here, we show how neural spikes can encode temporal signals when the spike-train itself is taken as the fractional derivative of the signal.
We show that this is the case for a signal approximated by a sum of shifted power-law kernels, each starting at a respective time t_i and decaying proportionally to 1/(t − t_i)^β. The fractional derivative of this approximated signal then corresponds to a sum of spikes at times t_i, provided that the order of fractional differentiation α is equal to 1 − β: a spike-train is the α = 0.2 fractional derivative of a signal approximated by a sum of power-law kernels with exponent β = 0.8. Such signal encoding with power-law kernels can be carried out, for example, by simple standard thresholding spiking neurons with a refractory reset following a power-law. As fractional derivatives contain information over many time-ranges, they are naturally suited for predicting signals. This links to notions of predictive coding, where neurons communicate deviations from expected signals rather than the signal itself. Predictive coding has been suggested as a key feature of neuronal processing, e.g. in the retina [11]. For self-similar scale-free signals, future signals may be influenced by past signals over very extended time-ranges: so-called long memory. For example, fractional Brownian motion (fBm) can exhibit long memory, depending on its Hurst parameter H: for H > 0.5, fBm models exhibit long-range dependence (long memory), where the autocorrelation function follows a power-law decay [12]. The long-memory nature of signals approximated with sums of power-law kernels naturally extends this signal approximation into the future along the autocorrelation of the signal, at least for self-similar 1/f^γ-like signals. The key “predictive” assumption we make is that a neuron’s spike-train up to time t contains all the information that the past signal contributes to the future signal t′ > t.
The correspondence between a spike-train as a fractional derivative and a signal approximated as a sum of power-law kernels is only exact when spike-trains are taken as a sum of Dirac-δ functions and the power-law kernels as 1/t^β. As both responses are singular, neurons can only approximate this. We show empirically how sums of (approximated) 1/t^β power-law kernels can accurately approximate long-memory fBm signals via simple difference thresholding, in an online greedy fashion. Thus encoding signals, we show that the power-law kernels approximate synthesized signals with about half the number of spikes needed to obtain the same signal-to-noise ratio when using the same encoding method with similar but exponentially decaying kernels. We further demonstrate the approximation of sine-wave modulated white-noise signals with sums of power-law kernels. The resulting spike-trains, expressed as “instantaneous spike-rate”, exhibit the phase-precession as in [4], with suppression of activity on the “back” of the sine-wave modulation, and stronger suppression for lower values of the power-law exponent (corresponding to a higher order of our fractional derivative). We find the effect is stronger when encoding the actual sine-wave envelope, mimicking the difference between thalamic and cortical neurons reported in [4]. This may suggest that these cortical neurons are more concerned with encoding the sine-wave envelope. The power-law approximation also allows for a transparent and straightforward implementation of temporal signal filtering by a post-synaptic, receiving neuron. Since neural decoding by a receiving neuron corresponds to adding a power-law kernel for each received spike, modifying this receiving power-law kernel then corresponds to a temporal filtering operation, effectively exploiting the wide-spectrum nature of power-law kernels.
This is particularly relevant since, as has been amply noted [9, 14], power-law dynamics can be closely approximated by a weighted sum or cascade of exponential kernels. Temporal filtering would then correspond to simply tuning the weights for this sum or cascade. We illustrate this notion with an encoding/decoding example for both a high-pass and a low-pass filter.

2 Power-law Signal Encoding

Neural processing can often be reduced to a Linear-Non-Linear (LNL) filtering operation on incoming signals [15] (figure 1), where inputs are linearly weighted and then passed through a non-linearity to yield the neural activation. As this computation yields analog activations, and neurons communicate through spikes, the additional problem faced by spiking neurons is to decode the incoming signal and then encode the computed LNL filter again into a spike-train. The standard spiking neuron model is that of Linear-Nonlinear-Poisson spiking, where spikes have a stochastic relationship to the computed activation [16]. Here, we interpret the spike encoding and decoding in the light of processing and communicating signals with fractional derivatives [10].

Figure 1: Linear-Non-Linear filter, with spike-decoding front-end and spike-encoding back-end.

At least for signals with mainly (relatively) high-frequency components, it has been well established that a neural signal can be decoded with high fidelity by associating a fixed kernel with each spike and summing these kernels [17]; keeping track of doublet and triplet spikes allows for even greater fidelity. This approach, however, only worked for signals with a frequency response lacking low frequencies [17]. Low-frequency changes lead to “adaptation”, where the kernel is adapted to fit the signal again [18].
For long-range predictive coding, the absence of low frequencies leaves little to predict, as the effective correlation time of the signals is then typically very short as well [17]. Using the notion of predictive coding in the context of (possible) long-range dependencies, we define the goal of signal encoding as follows: let a signal x_j(t) be the result of the continuous-time computation in neuron j up to time t, and let neuron j have emitted spikes t_j up to time t. These spikes should be emitted such that the signal x_j(t′) for t′ < t is decoded up to some signal-to-noise ratio, and these spikes should be predictive for x_j(t′) for t′ > t in the sense that no additional spikes are needed at times t′ > t to convey the predictive information up to time t. Taking kernels as a signal filter of fixed width, as in the general approach in [17], has the important drawback that the signal reconstruction incurs a delay for the duration of the filter: its detection cannot be communicated until the filter is actually matched to the signal. This is inherent to any backward-looking filter-matching solution. Alternatively, a predictive coding approach could rely on only a very short backward-looking filter, minimizing the delay in the system, and continuously compute a forward predictive signal. At any time in the future, then, only deviations of the actual signal from this expectation are communicated.

2.1 Spike-trains as fractional derivative

As recent work has highlighted the possibility that neurons encode fractional derivatives, it is noteworthy that the non-local nature of fractional calculus offers a natural framework for predictive coding.
In particular, as we will show, when we assume that the predictive information about the future signal is fully contained in the current set of spikes, a signal approximated as a sum of power-law kernels corresponds to a fractional derivative in the form of a sum of Dirac-δ functions, which the neuron can obviously communicate through timed spikes. The fractional derivative r(t) of a signal x(t) is denoted D^α x(t), and intuitively expresses r(t) = (d^α/dt^α) x(t), where α is the fractional order, e.g. 0.5. This is most conveniently computed through the Fourier transform in the frequency domain, as a simple multiplication: R(ω) = H(ω)X(ω), where the Fourier-transformed fractional derivative operator H(ω) is by definition (iω)^α [10], and X(ω) and R(ω) are the Fourier transforms of x(t) and r(t), respectively. We assume that neurons carry out predictive coding by emitting spikes such that all predictive information is contained in the current spikes, and no more spikes will be fired if the signal follows this prediction.

Figure 2: a) Signal x(t) and corresponding fractional derivative r(t): 1/t^β power-laws and delta-functions; b) power-law approximation, timed to spikes, compared to a sum of α-functions (black dashed line); c) approximated 1/t^β power-law kernel for different values of k from eq. (2); d) the approximated 1/t^β power-law kernel (blue line) can be decomposed as a weighted sum of α-functions with various decay time-constants (dashed lines).

Approximating spikes by Dirac-δ functions, we take the spike-train up to some time t0 to be the fractional derivative of the past signal and to be fully predictive for the expected influence the
past signal has on the future signal:

r(t) = Σ_{t_i < t0} δ(t − t_i).

The task is to find a signal x̂(t) that corresponds to an approximation of the actual signal x(t) up to t0, and where the predicted signal contribution x(t) for t > t0 due to x(t < t0) does not require additional future spikes. We note that a sum of power-law decaying kernels with power-law t^{−β} for β = 1 − α corresponds to such a fractional derivative: the Fourier transform of a power-law decaying kernel of the form t^{−β} is proportional to (iω)^{β−1}, hence for a signal that just experienced a single step from 0 to 1 at time t we get R(ω) = (iω)^α (iω)^{β−1}, and setting β = 1 − α yields a constant in Fourier space, which of course is the Fourier transform of δ(t). It is easy to check that shifted power-law decaying kernels, e.g. (t − t_a)^{−β}, correspond to a shifted fractional derivative δ(t − t_a), and the fractional derivative of a sum of shifted power-law decaying kernels corresponds to a sum of shifted delta-functions. Note that for decaying power-laws we need β > 0, and for fractional derivatives we require α > 0. Thus, with the reverse reasoning, a signal approximated as the sum of power-law decaying kernels corresponds to a spike-train with spikes positioned at the start of the kernels, and, beyond a current time t, this sum of decaying kernels is interpreted as a prediction of the extent to which the future signal can be predicted by the past signal. Obviously, both the Dirac-δ function and the 1/t^β kernels are singular (figure 2a) and can only be approximated. For real applications, only some part of the 1/t^β curve can be considered, effectively leaving the magnitude of the kernel and the high-frequency component (the extent to which the initial 1/t^β peak is approximated) as free parameters. Figure 2b illustrates the signal approximated by a random spike train; compared to a sum of exponentially decaying α-kernels, the long-memory effects of power-law decay kernels are evident.
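The Fourier-domain definition above can be checked numerically. The following sketch (our own, assuming a periodic signal on a uniform grid and α > 0) implements D^α by multiplication with (iω)^α, and verifies that α = 1 recovers the ordinary derivative and that two half-derivatives compose to one full derivative:

```python
import numpy as np

def frac_deriv(x, alpha, dt=1.0):
    """Order-alpha fractional derivative via the Fourier-domain operator
    H(w) = (i*w)^alpha (assumes a periodic signal on a uniform grid)."""
    n = len(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    H = (1j * w) ** alpha          # principal branch; H(0) = 0 for alpha > 0
    return np.fft.ifft(H * np.fft.fft(x)).real

# Sanity checks on a periodic test signal.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dt = t[1] - t[0]
x = np.sin(t)
d1 = frac_deriv(x, 1.0, dt)                              # d/dt sin = cos
d_half_twice = frac_deriv(frac_deriv(x, 0.5, dt), 0.5, dt)
```

Because the principal branch gives H(−ω) = conj(H(ω)), the output of `frac_deriv` is real for real input, and the operators compose multiplicatively in the frequency domain, which is the property the α = 1 − β argument in the text relies on.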
2.2 Practical encoding

To explore the efficacy of the power-law kernel approach to signal encoding/decoding, we take a standard thresholding online approximation approach, where neurons communicate only deviations between the current computed signal x(t) and the emitted approximated signal x̂(t) exceeding some threshold θ. The emitted signal x̂(t) is constructed as the (delayed) sum of filter kernels κ, each starting at the time of an emitted spike:

x̂(t) = Σ_{t_j < t} κ(t − (t_j + Δ)),

where the delay Δ corresponds to the time-window over which the neuron considers the difference between the computed and emitted signal. In a spiking neuron, such computation would be implemented simply by, for instance, a refractory current following a power-law. Allowing for both positive and negative spikes (corresponding to tightly coupled neurons with reversed threshold polarity [17]), this expands to:

x̂(t) = Σ_{t_j⁺ < t} κ(t − (t_j⁺ + Δ)) − Σ_{t_j⁻ < t} κ(t − (t_j⁻ + Δ)).

Considering just the fixed time-window thresholding approach, a spike is emitted each time adding (or subtracting) the kernel κ reduces the difference between the computed signal x(t) and the emitted signal x̂(t), summed over the time-window, by more than the threshold θ:

r(t0) = δ(t0)   if  Σ_{τ=t0−Δ}^{t0} ( |x(τ) − x̂(τ)| − |x(τ) − (x̂(τ) + κ(τ))| ) > θ,
r(t0) = −δ(t0)  if  Σ_{τ=t0−Δ}^{t0} ( |x(τ) − x̂(τ)| − |x(τ) − (x̂(τ) − κ(τ))| ) > θ,   (1)

where the signal-approximation improvement is computed as the reduction in absolute error when a kernel is added (or subtracted). As an approximation of 1/t^β power-law kernels, we let the kernel first quickly rise, and then decay according to the power-law.
For a practical implementation, we use a 1/t^β signal multiplied by a modified version of the logistic sigmoid function logsig(t) = 1/(1 + exp(−t)): v(t, k) = 2 logsig(kt) − 1, such that the kernel becomes

κ(t) = λ v(t, k) / t^β,   (2)

where κ(t) is zero for t < 0, and the parameter k determines the angle of the initial increasing part of the kernel. The resulting kernel is further scaled by a factor λ to achieve a certain signal-approximation precision (kernels for power-law exponent β = 0.5 and several values of k are shown in figure 2c). As an aside, the resulting (normalized) power-law kernel can very accurately be approximated over multiple orders of magnitude by a sum of just 11 α-function exponentials (figure 2d). Next, we compare the efficiency of signal approximation with power-law predictive kernels against the same approximation using standard fixed kernels. For this, we synthesize self-similar signals with long-range dependencies. We first remark on some properties of self-similar signals with power-law statistics, and on how to synthesize them.

2.3 Self-similar signals with power-law statistics

There is extensive literature on the synthesis of statistically self-similar signals with 1/f-like statistics, going back at least to Kolmogorov [19] and Mandelbrot [20]. Self-similar signals exhibit slowly decaying variances, long-range dependencies and a spectral density following a power law. Importantly, for wide-sense self-similar signals, the autocorrelation function also decays following a power-law. Although various distinct classes of self-similar signals with 1/f-like statistics exist [12], fractional Brownian motion (fBm) is a popular model for many natural signals. Fractional Brownian motion is characterized by its Hurst parameter H, where H = 0.5 corresponds to regular Brownian motion, and fBm models with H > 0.5 exhibit long-range (positive) dependence. The spectral density of an fBm signal is proportional to a power-law, 1/f^γ, where γ = 2H + 1.
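Given this spectral characterization, a crude way to synthesize test signals with an approximate 1/f^γ spectrum is to shape white Gaussian noise in the frequency domain. This sketch is our own and is only a rough stand-in for proper fBm synthesis (the experiments reported here use Matlab's wfbm instead); the function name and unit-variance normalization are our choices:

```python
import numpy as np

def synth_1f(n, H, rng):
    """Spectral synthesis of a signal with ~1/f^gamma power spectrum,
    gamma = 2H + 1: shape complex white noise by f^(-gamma/2) and
    invert the transform.  A rough approximation to fBm, not wfbm."""
    gamma = 2.0 * H + 1.0
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-gamma / 2.0)      # zero the DC term to keep it finite
    noise = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    x = np.fft.irfft(amp * noise, n)
    return x / x.std()                     # normalize to unit variance

x = synth_1f(4096, H=0.75, rng=np.random.default_rng(0))
```

Larger H concentrates more power at low frequencies, producing the slowly varying, long-memory signals used in the encoding experiments below.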
We used fractional Brownian motion to generate self-similar signals for various H values, using the wfbm function from the Matlab wavelet toolbox.

Figure 3: Left: example of encoding of an fBm signal with power-law kernels. Using an exponentially decaying kernel (inset) required 1398 spikes vs. 618 for the power-law kernel (k = 50), for the same SNR. Right: SNR for various β power-law exponents using a fixed number of spikes (48Hz), with curves for different H-parameters, each curve averaged over five 16s signals. The dashed blue curve plots the H = 0.6 curve using fewer spikes (36Hz); the flat bottom dotted line shows the average performance of the non-power-law exponentially decaying kernel, also for H = 0.6.

3 Signal encoding/decoding

3.1 Encoding long-memory self-similar signals

We applied the thresholded kernel approximation outlined above to synthesized fBm signals with H > 0.5, to ensure long-term dependence in the signal. An example of such an encoding is given in figure 3, left panel, using both positive and negative spikes (inset, red line: the power-law kernel used). When encoding the same signal with kernels without the power-law tail (inset, blue line), the approximation required more than twice as many spikes for the same signal-to-noise ratio (SNR). In figure 3, right panel, we compared the encoding efficacy for signals with different H-parameters, as a function of the power-law exponent, using the same number of spikes for each signal (achieved by changing the λ parameter and the threshold θ).
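The encoder used in these experiments can be sketched by combining the kernel of eq. (2) with the greedy thresholded rule (1). This is our own minimal illustration, not the authors' implementation: the discretization, border handling, parameter values, and the toy step input are all our choices:

```python
import numpy as np

def powerlaw_kernel(length, beta=0.5, k=50.0, lam=1.0, dt=0.01):
    """Sampled eq. (2): kappa(t) = lam * v(t,k) * t^-beta with
    v(t,k) = 2*logsig(k*t) - 1, sampled at t = dt, 2dt, ... (avoids t=0)."""
    t = dt * np.arange(1, length + 1)
    v = 2.0 / (1.0 + np.exp(-k * t)) - 1.0
    return lam * v * t ** -beta

def encode(x, kernel, theta, delta):
    """Greedy online thresholded encoding in the spirit of eq. (1):
    emit a +/- spike whenever adding (or subtracting) the kernel reduces
    the absolute error over the next `delta` samples by more than theta."""
    n = len(x)
    xhat = np.zeros(n)
    spikes = []
    seg = kernel[:delta + 1]
    for t0 in range(n - delta):
        win = slice(t0, t0 + delta + 1)
        err = np.abs(x[win] - xhat[win]).sum()
        gain_pos = err - np.abs(x[win] - (xhat[win] + seg)).sum()
        gain_neg = err - np.abs(x[win] - (xhat[win] - seg)).sum()
        if max(gain_pos, gain_neg) > theta:
            sign = 1.0 if gain_pos >= gain_neg else -1.0
            L = min(len(kernel), n - t0)
            xhat[t0:t0 + L] += sign * kernel[:L]
            spikes.append((t0, sign))
    return spikes, xhat

kernel = powerlaw_kernel(2000, beta=0.5, k=50.0, lam=0.5)
x = np.ones(2000)                       # a step signal as a toy input
spikes, xhat = encode(x, kernel, theta=0.5, delta=20)
```

On a step input the first emitted spike is positive at the step onset, and the slowly decaying kernel tail then accounts for much of the subsequent signal, which is the mechanism behind the spike savings reported above.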
We find that more slowly varying signals, corresponding to higher H-parameters, are better encoded by the power-law kernels. More surprisingly, we find that signals are consistently best encoded for low β-values, on the order of 0.1−0.3. Similar results were obtained for different values of k in equation (2). We should remark that without negative spikes, there is no longer a clear performance advantage for power-law kernels (even for large β): where power-law kernels are beneficial on the rising part of a signal, they lose on downslopes, where their slow decay cannot follow the signal.

3.2 Sine-wave modulated white-noise

Fractional derivatives as an interpretation of the neuronal firing rate have been put forward in a series of recent papers [10, 21, 4], where experimental evidence was presented to support such an interpretation. A key finding in [4] was that the instantaneous firing rate of neurons along various processing stages of a rat’s whisker movement exhibits a phase-lead relative to the amplitude of the movement modulation. The phase-lead was found to be greater for cortical neurons as compared to thalamic neurons. When the firing rate corresponds to the α-order fractional derivative, the phase-lead would correspond to a greater fractional order α in the cortical neurons [10]. We used the sum-of-power-laws to approximate both the sine-wave-modulated white noise and the actual sine-wave itself, and found similar results (figure 4): smaller power-law exponents, in our interpretation also corresponding to larger fractional derivative orders, lead to increasingly fewer spikes at the back of the sine-wave (both in the case where we encode the signal with both positive and negative spikes – then counting only the positive spikes – and when the signal is approximated with only positive spikes – not shown).
We find an increased phase-lead when approximating the actual sine-wave kernel as opposed to the white-noise modulation, suggesting that perhaps cortical neurons more closely encode the former as compared to thalamic neurons.

Figure 4: Sine-wave phase-lead. Left: when encoding sine-wave modulated white noise (inset); right: encoding the sine-wave signal itself (inset). Average firing rate is computed over 100ms, and normalized to match the sine-wave kernel.

Figure 5: Illustration of frequency filtering with modified decoding kernels. The square boxes show the respective kernels in both time and frequency space. See text for further explanation.

3.3 Signal Frequency Filtering

For a receiving neuron i to properly interpret a spike-train rj(t) from neuron j, both neurons would need to keep track of past events over extended periods of time: current spikes have to be added to or subtracted from the future expectation signal that was already communicated through past spikes. The required power-law processes can be implemented in various manners, for instance as a weighted sum or a cascade of exponential processes [9, 10].
A natural benefit of implementing power-law kernels as a weighted sum or cascade of exponentials is that a receiving neuron can carry out temporal signal filtering simply by tuning the respective weight parameters for the kernel with which it decodes spikes into a signal approximation. In figure 5, we illustrate this with power-law kernels that are transformed into high-pass and low-pass filters. We first approximated our power-law kernel (2) with a sum of 11 exponentials (depicted in the left-center inset). Using this approximation, we encoded the signal (figure 5, center). The signal was then reconstructed from the resultant spikes using the power-law kernel approximation, but with some exponentials zeroed out (respectively the slowly decaying exponentials for the high-pass filter, and the fast-decaying exponentials for the low-pass filter). Figure 5, rightmost panel, shows the resulting filtered signal approximations. Obviously, more elaborate tuning of the decoding kernel with a larger sum of kernels can approximate a vast variety of signal filters.

4 Discussion

Taking advantage of the relationship between power-laws and fractional derivatives, we outlined the peculiar fact that a sum of Dirac-δ functions, when taken as a fractional derivative, corresponds to a signal in the form of a sum of power-law kernels. Exploiting the obvious link to spiking neural coding, we showed how a simple thresholding spiking neuron can compute a signal approximation as a sum of power-law kernels; importantly, such a simple thresholding spiking neuron closely fits standard biological spiking neuron models, when the refractory response follows a power-law decay (e.g. [22]). We demonstrated the usefulness of such an approximation when encoding slowly varying signals, finding that encoding with power-law kernels significantly outperformed similar but exponentially decaying kernels that do not take long-range signal dependencies into account.
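The sum-of-exponentials construction of Section 3.3 can be sketched as below. This is a hedged illustration, not the paper's code: the 11 log-spaced time constants, the target exponent β = 0.2, and the 30 ms cutoff between "fast" and "slow" components are assumptions, and the weights come from unconstrained least squares rather than any biologically constrained fit.

```python
import numpy as np

t = np.arange(1, 501, dtype=float)            # time axis (ms); start at 1 to avoid the singularity
target = t ** (-0.2)                          # power-law kernel to approximate (beta = 0.2 assumed)

taus = np.logspace(0, 3, 11)                  # 11 log-spaced time constants, 1..1000 ms (a guess)
basis = np.exp(-t[:, None] / taus[None, :])   # each column: one decaying exponential
w, *_ = np.linalg.lstsq(basis, target, rcond=None)

approx = basis @ w
rel_err = np.linalg.norm(approx - target) / np.linalg.norm(target)

# "Filtering" by zeroing components of the decoding kernel:
w_low = w.copy();  w_low[taus < 30] = 0.0     # drop fast exponentials -> low-pass-like kernel
w_high = w.copy(); w_high[taus >= 30] = 0.0   # drop slow exponentials -> high-pass-like kernel
low_pass_kernel = basis @ w_low
high_pass_kernel = basis @ w_high
```

The point of the sketch is the second half: once the decoding kernel is a weighted sum of exponentials, zeroing subsets of the weights yields the low-pass and high-pass decoders of figure 5 without touching the spike train itself.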
Compared to the work where the firing rate is considered as a fractional derivative, e.g. [10], the present formulation extends the notion of neural coding with fractional derivatives to individual spikes, and hence to finer temporal variations: each spike effectively encodes very local signal variations, while also keeping track of long-range variations. The interpretation in [10] of the fractional derivative r(t) as a rate leads to a 1:1 relation between the fractional derivative order and the power-law decay exponent of adaptation of about 0.2 [10, 13, 9]. For such a fractional derivative α, our derivation implies a power-law exponent for the power-law kernels of β = 1 − α ≈ 0.8, consistent with our sine-wave reconstruction, as well as with recent adapting spiking neuron models [22]. We find that when signals are approximated with non-coupled positive and negative neurons (i.e. one neuron encodes the positive part of the signal, the other the negative), such faster-decaying power-law kernels encode more efficiently than slower decaying ones. Non-coupled signal encoding obviously fares badly when signals rapidly change polarity; this however seems consistent with human illusory experiences [23]. As noted, the singularity of 1/tβ power-law kernels means that the initial part of the kernel can only be approximated. Here, we initially focused our simulation on the use of long-range power-law kernels for encoding slowly varying signals. A more detailed approximation of this initial part of the kernel may be needed to incorporate effects like gain modulation [24, 8], and to determine to what extent the power-law kernels already account for this phenomenon. This would also provide a natural link to existing neural models of spike-frequency adaptation, e.g. [25], as they are primarily concerned with modeling the spiking neuron behavior rather than the computational aspects.
We used a greedy online thresholding process to determine when a neuron would spike to approximate a signal, in contrast to offline optimization methods that place spikes at optimal times, like Smith & Lewicki [26]. The key difference of course is that the latter work is concerned with decoding a signal, and in effect attempts to determine the effective neural (temporal) filter. As we aimed to illustrate in the signal filtering example, these notions are not mutually exclusive: a receiving neuron could very well filter the incoming signal with a carefully shaped weighted sum of kernels, and then, when the filter is activated, signal the magnitude of the match through fractional spiking. Predictive coding seeks to find a careful balance between encoding known information as well as future, derived expectations [27]. It does not seem unreasonable to formulate this balance as a no-going-back problem, where current computations are projected forward in time, and corrected where needed. In terms of spikes, this would correspond to our assumption that, absent new information, no additional spikes need to be fired by a neuron to transmit this forward information. The kernels we find are somewhat in contrast to the kernels found by Bialek et al. [17], where the optimal filter exhibited both a negative and a positive part and no long-range “tail”. Several practical issues may contribute to this difference, not least the relative absence of low frequency variations, as well as the fact that the signal considered is derived from the fly’s H1 neurons. These two neurons have only partially overlapping receptive fields, and the separation into positive and negative spikes is thus slightly more intricate. We should remark, though, that we see no impediment for the presented signal approximation to be adapted to such situations, or situations where more than two neurons encode fractions of a signal, as in population coding, e.g. [28].
The issue of long-range temporal dependencies as discussed here seems to be relatively unappreciated. Long-range power-law dynamics potentially offer a variety of “hooks” for computation through time [9], like temporal difference learning and relative temporal computations (and possibly exploiting spatial and temporal statistical correspondences [29]).

Acknowledgement: JOR supported by NWO Grant 612.066.826, SMB partly by NWO Grant 639.021.203.

References

[1] A.L. Fairhall, G.D. Lewen, W. Bialek, and R.R.R. van Steveninck. Multiple timescales of adaptation in a neural code. In NIPS, volume 13. The MIT Press, 2001.
[2] B. Wark, A. Fairhall, and F. Rieke. Timescales of inference in visual adaptation. Neuron, 61(5):750–761, 2009.
[3] S. Panzeri, N. Brunel, N.K. Logothetis, and C. Kayser. Sensory neural codes using multiplexed temporal scales. Trends in Neurosciences, page in press, 2010.
[4] B.N. Lundstrom, A.L. Fairhall, and M. Maravall. Multiple Timescale Encoding of Slowly Varying Whisker Stimulus Envelope in Cortical and Thalamic Neurons In Vivo. J. of Neurosci, 30(14):50–71, 2010.
[5] J.H. Van Hateren. Processing of natural time series of intensities by the visual system of the blowfly. Vision Research, 37(23):3407–3416, 1997.
[6] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695–702, 2000.
[7] B. Wark, B.N. Lundstrom, and A. Fairhall. Sensory adaptation. Current Opinion in Neurobiology, 17(4):423–429, 2007.
[8] M. Famulare and A.L. Fairhall. Feature selection in simple neurons: how coding depends on spiking dynamics. Neural Computation, 22:1–18, 2009.
[9] P.J. Drew and L.F. Abbott. Models and properties of power-law adaptation in neural systems. Journal of Neurophysiology, 96(2):826, 2006.
[10] B.N. Lundstrom, M.H. Higgs, W.J. Spain, and A.L. Fairhall. Fractional differentiation by neocortical pyramidal neurons. Nature Neuroscience, 11(11):1335–1342, 2008.
[11] T. Hosoya, S.A.
Baccus, and M. Meister. Dynamic predictive coding by the retina. Nature, 436:71–77, 2005.
[12] G.W. Wornell. Signal Processing with Fractals: A Wavelet Based Approach. Prentice Hall, NJ, 1999.
[13] Z. Xu, J.R. Payne, and M.E. Nelson. Logarithmic time course of sensory adaptation in electrosensory afferent nerve fibers in a weakly electric fish. Journal of Neurophysiology, 76(3):2020, 1996.
[14] S. Fusi, P.J. Drew, and L.F. Abbott. Cascade models of synaptically stored memories. Neuron, 45:1–14, 2005.
[15] C.M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, USA, 1995.
[16] E.J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199–213, 2001.
[17] F. Rieke, D. Warland, and W. Bialek. Spikes: Exploring the Neural Code. The MIT Press, 1999.
[18] A.L. Fairhall, G.D. Lewen, W. Bialek, and R.R.R. van Steveninck. Efficiency and ambiguity in an adaptive neural code. Nature, 412(6849):787–792, 2001.
[19] A. Kolmogorov. Wienersche Spiralen und einige andere interessante Kurven im Hilbertschen Raum. Comptes Rendus (Doklady) Academic Sciences USSR (NS), 26:115–118, 1940.
[20] B.B. Mandelbrot and J.W. Van Ness. Fractional Brownian motions, fractional noises and applications. SIAM Review, 10(4):422–437, 1968.
[21] B.N. Lundstrom, M. Famulare, L.B. Sorensen, W.J. Spain, and A.L. Fairhall. Sensitivity of firing rate to input fluctuations depends on time scale separation between fast and slow variables in single neurons. Journal of Computational Neuroscience, 27(2):277–290, 2009.
[22] C. Pozzorini, R. Naud, S. Mensi, and W. Gerstner. Multiple timescales of adaptation in single neuron models. In Front. Comput. Neurosci.: Bernstein Conference on Computational Neuroscience, 2010.
[23] A.A. Stocker and E.P. Simoncelli. Visual motion aftereffects arise from a cascade of two isomorphic adaptation mechanisms. J. Vision, 9(9):1–14, 2009.
[24] S. Hong, B.N. Lundstrom, and A.L. Fairhall.
Intrinsic gain modulation and adaptive neural coding. PLoS Computational Biology, 4(7), 2008.
[25] R. Jolivet, A. Rauch, H.R. Luescher, and W. Gerstner. Integrate-and-fire models with adaptation are good enough: predicting spike times under random current injection. NIPS, 18:595–602, 2006.
[26] E. Smith and M.S. Lewicki. Efficient coding of time-relative structure using spikes. Neural Computation, 17(1):19–45, 2005.
[27] N. Tishby, F.C. Pereira, and W. Bialek. The information bottleneck method. Arxiv physics/0004057, 2000.
[28] Q.J.M. Huys, R.S. Zemel, R. Natarajan, and P. Dayan. Fast population coding. Neural Computation, 19(2):404–441, 2007.
[29] O. Schwartz, A. Hsu, and P. Dayan. Space and time in visual context. Nature Rev. Neurosci., 8(11), 2007.
Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models

Han Liu, Kathryn Roeder, Larry Wasserman
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

A challenging problem in estimating high-dimensional graphical models is to choose the regularization parameter in a data-dependent way. The standard techniques include K-fold cross-validation (K-CV), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Though these methods work well for low-dimensional problems, they are not suitable in high dimensional settings. In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs. The method has a clear interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. This interpretation requires essentially no conditions. Under mild conditions, we show that StARS is partially sparsistent in terms of graph estimation: i.e. with high probability, all the true edges will be included in the selected model even when the graph size diverges with the sample size. Empirically, the performance of StARS is compared with the state-of-the-art model selection procedures, including K-CV, AIC, and BIC, on both synthetic data and a real microarray dataset. StARS outperforms all these competing procedures.

1 Introduction

Undirected graphical models have emerged as a useful tool because they allow for a stochastic description of complex associations in high-dimensional data. For example, biological processes in a cell lead to complex interactions among gene products. It is of interest to determine which features of the system are conditionally independent. Such problems require us to infer an undirected graph from i.i.d. observations.
Each node in this graph corresponds to a random variable, and the absence of an edge between a pair of nodes indicates that the corresponding variables are conditionally independent. Gaussian graphical models [4, 23, 5, 9] are by far the most popular approach for learning high dimensional undirected graph structures. Under the Gaussian assumption, the graph can be estimated using the sparsity pattern of the inverse covariance matrix. If two variables are conditionally independent, the corresponding element of the inverse covariance matrix is zero. In many applications, estimating the inverse covariance matrix is statistically challenging because the number of features measured may be much larger than the number of collected samples. To handle this challenge, the graphical lasso or glasso [7, 24, 2] is rapidly becoming a popular method for estimating sparse undirected graphs. To use this method, however, the user must specify a regularization parameter λ that controls the sparsity of the graph. The choice of λ is critical since different λ’s may lead to different scientific conclusions from the statistical inference. Other methods for estimating high dimensional graphs include [11, 14, 10]. They also require the user to specify a regularization parameter. The standard methods for choosing the regularization parameter are AIC [1], BIC [19] and cross validation [6]. Though these methods have good theoretical properties in low dimensions, they are not suitable for high dimensional problems. In regression, cross-validation has been shown to overfit the data [22]. Likewise, AIC and BIC tend to perform poorly when the dimension is large relative to the sample size. Our simulations confirm that these methods perform poorly when used with glasso.

A new approach to model selection, based on model stability, has recently generated some interest in the literature [8]. The idea, as we develop it, is based on subsampling [15] and builds on the approach of Meinshausen and Bühlmann [12].
We draw many random subsamples and construct a graph from each subsample (unlike K-fold cross-validation, these subsamples are overlapping). We choose the regularization parameter so that the obtained graph is sparse and there is not too much variability across subsamples. More precisely, we start with a large regularization which corresponds to an empty, and hence highly stable, graph. We gradually reduce the amount of regularization until there is a small but acceptable amount of variability of the graph across subsamples. In other words, we regularize to the point that we control the dissonance between graphs. The procedure is named StARS: Stability Approach to Regularization Selection. We study the performance of StARS by simulations and theoretical analysis in Sections 4 and 5. Although we focus here on graphical models, StARS is quite general and can be adapted to other settings including regression, classification, clustering, and dimensionality reduction. In the context of clustering, results of stability methods have been mixed. Weaknesses of stability have been shown in [3]. However, the approach was successful for density-based clustering [17]. For graph selection, Meinshausen and Bühlmann [12] also used a stability criterion; however, their approach differs from StARS in its fundamental conception. They use subsampling to produce a new and more stable regularization path, and then select a regularization parameter from this newly created path, whereas we propose to use subsampling to directly select one regularization parameter from the original path. Our aim is to ensure that the selected graph is sparse, but inclusive, while they aim to control the familywise type I errors. As a consequence, their goal is contrary to ours: instead of selecting a larger graph that contains the true graph, they try to select a smaller graph that is contained in the true graph.
As we will discuss in Section 3, in specific application domains like gene regulatory network analysis, our goal for graph selection is more natural.

2 Estimating a High-dimensional Undirected Graph

Let X = (X(1), . . . , X(p))T be a random vector with distribution P. The undirected graph G = (V, E) associated with P has vertices V = {X(1), . . . , X(p)} and a set of edges E corresponding to pairs of vertices. In this paper, we also interchangeably use E to denote the adjacency matrix of the graph G. The edge corresponding to X(j) and X(k) is absent if X(j) and X(k) are conditionally independent given the other coordinates of X. The graph estimation problem is to infer E from i.i.d. observed data X1, . . . , Xn where Xi = (Xi(1), . . . , Xi(p))T.

Suppose now that P is Gaussian with mean vector µ and covariance matrix Σ. Then the edge corresponding to X(j) and X(k) is absent if and only if Ωjk = 0 where Ω = Σ−1. Hence, to estimate the graph we only need to estimate the sparsity pattern of Ω. When p could diverge with n, estimating Ω is difficult. A popular approach is the graphical lasso or glasso [7, 24, 2]. Using glasso, we estimate Ω as follows. Ignoring constants, the log-likelihood (after maximizing over µ) can be written as ℓ(Ω) = log |Ω| − trace(bΣΩ) where bΣ is the sample covariance matrix. With a positive regularization parameter λ, the glasso estimator bΩ(λ) is obtained by minimizing the regularized negative log-likelihood

bΩ(λ) = arg min over Ω ≻ 0 of { −ℓ(Ω) + λ||Ω||1 }   (1)

where ||Ω||1 = ∑ j,k |Ωjk| is the elementwise ℓ1-norm of Ω. The estimated graph bG(λ) = (V, bE(λ)) is then easily obtained from bΩ(λ): for i ≠ j, an edge (i, j) ∈ bE(λ) if and only if the corresponding entry in bΩ(λ) is nonzero. Friedman et al. [7] give a fast algorithm for calculating bΩ(λ) over a grid of λs ranging from small to large.
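A sketch of this estimator using scikit-learn's GraphicalLasso, which reimplements the glasso of Friedman et al. (this is not the authors' code, and the data and value of λ here are arbitrary):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))  # n = 200 samples, p = 10 features

model = GraphicalLasso(alpha=0.2)   # alpha plays the role of lambda in (1)
model.fit(X)
Omega = model.precision_            # estimated inverse covariance matrix

# Edge set: nonzero off-diagonal entries of the precision matrix
# (symmetrized explicitly to guard against numerical asymmetry).
A = np.abs(Omega) > 1e-8
E = (A | A.T) & ~np.eye(10, dtype=bool)
```

On independent Gaussian noise like this, a moderate alpha should yield a very sparse (often empty) edge set, consistent with the role of λ described in the text.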
By taking advantage of the fact that the objective function in (1) is convex, their algorithm estimates a single row (and column) of Ω in each iteration by solving a lasso regression [21]. The resulting regularization path bΩ(λ) for all λs has been shown to have excellent theoretical properties [18, 16]. For example, Ravikumar et al. [16] show that, if the regularization parameter λ satisfies a certain rate, the corresponding estimator bΩ(λ) can recover the true graph with high probability. However, these types of results are either asymptotic or non-asymptotic but with very large constants. They are not practical enough to guide the choice of the regularization parameter λ in finite-sample settings.

3 Regularization Selection

In Equation (1), the choice of λ is critical because λ controls the sparsity level of bG(λ). Larger values of λ tend to yield sparser graphs and smaller values of λ yield denser graphs. It is convenient to define Λ = 1/λ so that small Λ corresponds to a more sparse graph. In particular, Λ = 0 corresponds to the empty graph with no edges. Given a grid of regularization parameters Gn = {Λ1, . . . , ΛK}, our goal of graph regularization parameter selection is to choose one bΛ ∈ Gn, such that the true graph E is contained in bE(bΛ) with high probability. In other words, we want to “overselect” instead of “underselect”. Such a choice is motivated by application problems like gene regulatory network reconstruction, in which we aim to study the interactions of many genes. For these types of studies, we tolerate some false positives but not false negatives. Specifically, it is acceptable for an edge to be present even though the two genes corresponding to this edge do not really interact with each other. Such false positives can generally be screened out by more fine-tuned downstream biological experiments. However, if one important interaction edge is omitted at the beginning, it is very difficult to re-discover it by follow-up analysis.
There is also a tradeoff: we want to select a denser graph which contains the true graph with high probability. At the same time, we want the graph to be as sparse as possible so that important information will not be buried by massive false positives. Based on this rationale, an “underselect” method, like the approach of Meinshausen and Bühlmann [12], does not really fit our goal. In the following, we start with an overview of several state-of-the-art regularization parameter selection methods for graphs. We then introduce our new StARS approach.

3.1 Existing Methods

The regularization parameter is often chosen using AIC or BIC. Let bΩ(Λ) denote the estimator corresponding to Λ. Let d(Λ) denote the degrees of freedom (or the effective number of free parameters) of the corresponding Gaussian model. AIC chooses Λ to minimize −2ℓ(bΩ(Λ)) + 2d(Λ) and BIC chooses Λ to minimize −2ℓ(bΩ(Λ)) + d(Λ) · log n. The usual theoretical justification for these methods assumes that the dimension p is fixed as n increases; however, in the case where p > n this justification is not applicable. In fact, it is not even straightforward to estimate the degrees of freedom d(Λ) when p is larger than n. A common practice is to calculate d(Λ) as d(Λ) = m(Λ)(m(Λ) − 1)/2 + p where m(Λ) denotes the number of nonzero elements of bΩ(Λ). As we will see in our experiments, AIC and BIC tend to select overly dense graphs in high dimensions.

Another popular method is K-fold cross-validation (K-CV). For this procedure the data is partitioned into K subsets. Of the K subsets, one is retained as the validation data, and the remaining K − 1 are used as training data. For each Λ ∈ Gn, we estimate a graph on the K − 1 training sets and evaluate the negative log-likelihood on the retained validation set. The results are averaged over all K folds to obtain a single CV score. We then choose Λ to minimize the CV score over the whole grid Gn. In regression, cross-validation has been shown to overfit [22].
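The K-CV score just described can be sketched as follows. This is an illustrative implementation under assumptions: scikit-learn's GraphicalLasso stands in for the graph estimator, and the held-out score is the Gaussian negative log-likelihood tr(SΩ) − log|Ω|, up to constants.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def cv_score(X, lam, K=10, seed=0):
    """Average held-out negative Gaussian log-likelihood (up to constants)
    of the glasso estimate, over K folds."""
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(seed).permutation(n), K)
    scores = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        Om = GraphicalLasso(alpha=lam).fit(X[train]).precision_
        S = np.cov(X[test], rowvar=False)          # held-out sample covariance
        _, logdet = np.linalg.slogdet(Om)
        scores.append(np.trace(S @ Om) - logdet)   # -loglik = tr(S Om) - log|Om|
    return float(np.mean(scores))

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
score = cv_score(X, 0.5)
```

One would evaluate `cv_score` over the whole grid of λ values and pick the minimizer; the text's point is that in high dimensions this minimizer tends to be too small (too dense a graph).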
Our experiments will confirm this is true for graph estimation as well.

3.2 StARS: Stability Approach to Regularization Selection

The StARS approach is to choose Λ based on stability. When Λ is 0, the graph is empty and two datasets from P would both yield the same graph. As we increase Λ, the variability of the graph increases and hence the stability decreases. We increase Λ just until the point where the graph becomes variable as measured by the stability. StARS leads to a concrete rule for choosing Λ.

Let b = b(n) be such that 1 < b(n) < n. We draw N random subsamples S1, . . . , SN from X1, . . . , Xn, each of size b. There are (n choose b) such subsamples. Theoretically one uses all (n choose b) subsamples. However, Politis et al. [15] show that it suffices in practice to choose a large number N of subsamples at random. Note that, unlike bootstrapping [6], each subsample is drawn without replacement. For each Λ ∈ Gn, we construct a graph using the glasso for each subsample. This results in N estimated edge matrices bEb 1(Λ), . . . , bEb N(Λ). Focus for now on one edge (s, t) and one value of Λ. Let ψΛ(·) denote the glasso algorithm with the regularization parameter Λ. For any subsample Sj, let ψΛ st(Sj) = 1 if the algorithm puts an edge and ψΛ st(Sj) = 0 if the algorithm does not put an edge between (s, t). Define θb st(Λ) = P(ψΛ st(X1, . . . , Xb) = 1). To estimate θb st(Λ), we use a U-statistic of order b, namely,

bθb st(Λ) = (1/N) ∑_{j=1}^{N} ψΛ st(Sj).

Now define the parameter ξb st(Λ) = 2θb st(Λ)(1 − θb st(Λ)) and let bξb st(Λ) = 2bθb st(Λ)(1 − bθb st(Λ)) be its estimate. Then ξb st(Λ), in addition to being twice the variance of the Bernoulli indicator of the edge (s, t), has the following nice interpretation: for each pair of graphs, we can ask how often they disagree on the presence of the edge; ξb st(Λ) is the fraction of times they disagree. For Λ ∈ Gn, we regard ξb st(Λ) as a measure of instability of the edge across subsamples, with 0 ≤ ξb st(Λ) ≤ 1/2.
Define the total instability by averaging over all edges: bDb(Λ) = ∑_{s<t} bξb st(Λ) / (p choose 2). Clearly bDb(0) = 0 on the boundary, and bDb(Λ) generally will increase as Λ increases. However, when Λ gets very large, all the graphs will become dense and bDb(Λ) will begin to decrease. Subsample stability for large Λ is essentially an artifact: we are interested in stability for sparse graphs, not dense graphs. For this reason we monotonize bDb(Λ) by defining Db(Λ) = sup_{0≤t≤Λ} bDb(t). Finally, our StARS approach chooses Λ by defining bΛs = sup{Λ : Db(Λ) ≤ β} for a specified cut point value β. It may seem that we have merely replaced the problem of choosing Λ with the problem of choosing β, but β is an interpretable quantity and we always set a default value β = 0.05.

One thing to note is that all quantities bE, bθ, bξ, bD depend on the subsampling block size b. Since StARS is based on subsampling, the effective sample size for estimating the selected graph is b instead of n. Compared with methods like BIC and AIC, which fully utilize all n data points, StARS has some efficiency loss in low dimensions. However, in high dimensional settings, the gain of StARS in better graph selection significantly dominates this efficiency loss. This fact is confirmed by our experiments.

4 Theoretical Properties

The StARS procedure is quite general and can be applied with any graph estimation algorithm. Here, we provide its theoretical properties. We start with a key theorem which establishes the rates of convergence of the estimated stability quantities to their population means. We then discuss the implication of this theorem for general graph regularization selection problems.

Let Λ be an element in the grid Gn = {Λ1, . . . , ΛK} where K is a polynomial of n. We denote Db(Λ) = E(bDb(Λ)). The quantity bξb st(Λ) is an estimate of ξb st(Λ) and bDb(Λ) is an estimate of Db(Λ).
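Putting the pieces of Section 3.2 together, the StARS selection rule can be sketched as below. This is a simplified illustration, not the authors' code: it uses scikit-learn's GraphicalLasso as the graph estimator ψ, a small number of subsamples N, and an arbitrary grid; `lams` is ordered from strong to weak regularization so that position along the array corresponds to increasing Λ = 1/λ.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def estimate_edges(X, lam):
    # Edge matrix from one glasso fit (nonzero off-diagonal precision entries).
    p = X.shape[1]
    Om = GraphicalLasso(alpha=lam, max_iter=200).fit(X).precision_
    A = np.abs(Om) > 1e-8
    return (A | A.T) & ~np.eye(p, dtype=bool)

def stars(X, lams, N=20, beta=0.05, seed=0):
    """lams must be sorted from large (sparse) to small (dense) alpha,
    i.e. increasing Lambda = 1/lambda along the array."""
    n, p = X.shape
    b = min(n - 1, int(10 * np.sqrt(n)))         # suggested block size b = floor(10*sqrt(n))
    rng = np.random.default_rng(seed)
    theta = np.zeros((len(lams), p, p))
    for _ in range(N):
        idx = rng.choice(n, size=b, replace=False)   # subsample WITHOUT replacement
        for i, lam in enumerate(lams):
            theta[i] += estimate_edges(X[idx], lam)
    theta /= N                                    # edge selection frequencies (theta-hat)
    xi = 2 * theta * (1 - theta)                  # per-edge instability (xi-hat)
    D = xi.sum(axis=(1, 2)) / (p * (p - 1))       # average over s<t (xi is symmetric)
    Dbar = np.maximum.accumulate(D)               # monotonize along increasing Lambda
    ok = np.flatnonzero(Dbar <= beta)
    return float(lams[ok[-1]]) if ok.size else float(lams[0])

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 5))
lams = np.array([1.0, 0.5, 0.2, 0.1, 0.05])       # arbitrary grid for illustration
lam_star = stars(X, lams, N=10)
```

The returned value is the weakest regularization (largest Λ) whose monotonized instability stays below the cut point β = 0.05, matching the rule bΛs = sup{Λ : Db(Λ) ≤ β}.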
Standard U-statistic theory guarantees that these estimates have good uniform convergence properties to their population quantities:

Theorem 1. (Uniform Concentration) The following statements hold with no assumptions on P. For any δ ∈ (0, 1), with probability at least 1 − δ, we have

∀Λ ∈ Gn, max s<t |bξb st(Λ) − ξb st(Λ)| ≤ √( 18b (2 log p + log(2/δ)) / n ),   (2)

max Λ∈Gn |bDb(Λ) − Db(Λ)| ≤ √( 18b (log K + 4 log p + log(1/δ)) / n ).   (3)

Proof. Note that bθb st(Λ) is a U-statistic of order b. Hence, by Hoeffding's inequality for U-statistics [20], we have, for any ϵ > 0,

P(|bθb st(Λ) − θb st(Λ)| > ϵ) ≤ 2 exp(−2nϵ2/b).   (4)

Now bξb st(Λ) is just a function of the U-statistic bθb st(Λ). Note that

|bξb st(Λ) − ξb st(Λ)| = 2|bθb st(Λ)(1 − bθb st(Λ)) − θb st(Λ)(1 − θb st(Λ))|   (5)
= 2|bθb st(Λ) − (bθb st(Λ))2 − θb st(Λ) + (θb st(Λ))2|   (6)
≤ 2|bθb st(Λ) − θb st(Λ)| + 2|(bθb st(Λ))2 − (θb st(Λ))2|   (7)
≤ 2|bθb st(Λ) − θb st(Λ)| + 2|(bθb st(Λ) − θb st(Λ))(bθb st(Λ) + θb st(Λ))|   (8)
≤ 2|bθb st(Λ) − θb st(Λ)| + 4|bθb st(Λ) − θb st(Λ)|   (9)
= 6|bθb st(Λ) − θb st(Λ)|,   (10)

so |bξb st(Λ) − ξb st(Λ)| ≤ 6|bθb st(Λ) − θb st(Λ)|. Using (4) and the union bound over all the edges, we obtain, for each Λ ∈ Gn,

P(max s<t |bξb st(Λ) − ξb st(Λ)| > 6ϵ) ≤ 2p2 exp(−2nϵ2/b).   (11)

Using two union bound arguments over the K values of Λ and all the p(p − 1)/2 edges, we have:

P(max Λ∈Gn |bDb(Λ) − Db(Λ)| ≥ ϵ) ≤ |Gn| · (p(p − 1)/2) · P(max s<t |bξb st(Λ) − ξb st(Λ)| > ϵ)   (12)
≤ K · p4 · exp(−nϵ2/(18b)).   (13)

Equations (2) and (3) follow directly from (11) and the above exponential probability inequality.

Theorem 1 allows us to explicitly characterize the high-dimensional scaling of the sample size n, dimensionality p, subsampling block size b, and the grid size K. More specifically, we get

n / (b log(np4K)) → ∞ =⇒ max Λ∈Gn |bDb(Λ) − Db(Λ)| P→ 0   (14)

by setting δ = 1/n in Equation (3).
From (14), if b = c1 √n and K = nc2 for arbitrary positive constants c1, c2, and if p ≤ exp(nγ) for some γ < 1/2, then the estimated total stability bDb(Λ) still converges to its mean Db(Λ) uniformly over the whole grid Gn.

We now discuss the implication of Theorem 1 for graph regularization selection problems. Due to the generality of StARS, we provide theoretical justifications for a whole family of graph estimation procedures satisfying certain conditions. Let ψ be a graph estimation procedure. We denote bEb(Λ) as the estimated edge set using the regularization parameter Λ by applying ψ on a subsampled dataset with block size b. To establish the graph selection result, we start with two technical assumptions:

(A1) ∃Λo ∈ Gn, such that max Λ≤Λo∧Λ∈Gn Db(Λ) ≤ β/2 for large enough n.
(A2) For any Λ ∈ Gn and Λ ≥ Λo, P(E ⊂ bEb(Λ)) → 1 as n → ∞.

Note that Λo here depends on the sample size n and does not have to be unique. To understand the above conditions: (A1) assumes that there exists a threshold Λo ∈ Gn such that the population quantity Db(Λ) is small for all Λ ≤ Λo, and (A2) requires that all estimated graphs using regularization parameters Λ ≥ Λo contain the true graph with high probability. Both assumptions are mild and should be satisfied by most graph estimation algorithms with reasonable behaviors. More detailed analysis on how glasso satisfies (A1) and (A2) will be provided in the full version of this paper. There is a tradeoff in the design of the subsampling block size b. To make (A2) hold, we require b to be large. However, to make bDb(Λ) concentrate to Db(Λ) fast, we require b to be small. Our suggested value is b = ⌊10√n⌋, which balances both the theoretical and empirical performance well. The next theorem provides the graph selection performance of StARS:

Theorem 2. (Partial Sparsistency): Let ψ be a graph estimation algorithm. We assume (A1) and (A2) hold for ψ using b = ⌊10√n⌋ and |Gn| = K = nc1 for some constant c1 > 0.
Let bΛs ∈ Gn be the selected regularization parameter using the StARS procedure with a constant cut point β. Then, if p ≤ exp(nγ) for some γ < 1/2, we have

P(E ⊂ bEb(bΛs)) → 1 as n → ∞.   (15)

Proof. We define An to be the event that max Λ∈Gn |bDb(Λ) − Db(Λ)| ≤ β/2. The scaling of n, K, b, p in the theorem satisfies the L.H.S. of (14), which implies that P(An) → 1 as n → ∞. Using (A1), we know that, on An,

max Λ≤Λo∧Λ∈Gn bDb(Λ) ≤ max Λ∈Gn |bDb(Λ) − Db(Λ)| + max Λ≤Λo∧Λ∈Gn Db(Λ) ≤ β.   (16)

This implies that, on An, bΛs ≥ Λo. The result follows by applying (A2) and a union bound.

5 Experimental Results

We now provide empirical evidence to illustrate the usefulness of StARS and compare it with several state-of-the-art competitors, including 10-fold cross-validation (K-CV), BIC, and AIC. For StARS we always use subsampling block size b(n) = ⌊10 · √n⌋ and set the cut point β = 0.05. We first quantitatively evaluate these methods on two types of synthetic datasets, where the true graphs are known. We then illustrate StARS on a microarray dataset that records the gene expression levels from immortalized B cells of human subjects. On all high dimensional synthetic datasets, StARS significantly outperforms its competitors. On the microarray dataset, StARS obtains a remarkably simple graph while all competing methods select what appear to be overly dense graphs.

5.1 Synthetic Data

To quantitatively evaluate the graph estimation performance, we adopt the criteria of precision, recall, and F1-score from the information retrieval literature. Let G = (V, E) be a p-dimensional graph and let bG = (V, bE) be an estimated graph. We define precision = |bE ∩ E|/|bE|, recall = |bE ∩ E|/|E|, and F1-score = 2 · precision · recall/(precision + recall).
In other words, precision is the number of correctly estimated edges divided by the total number of edges in the estimated graph; recall is the number of correctly estimated edges divided by the total number of edges in the true graph; and the F1-score can be viewed as a weighted average of the precision and recall, reaching its best value at 1 and its worst at 0. On the synthetic data, where we know the true graphs, we also compare the previous methods with an oracle procedure that selects the optimal regularization parameter by minimizing the total number of edges that differ between the estimated and true graphs along the full regularization path. Since this oracle procedure requires knowledge of the true graph, it is not a practical method; we present it here only to calibrate the inherent challenge of each simulated scenario. To make the comparison fair, once the regularization parameters are selected, we estimate the oracle and StARS graphs only on a subsampled dataset with size b(n) = ⌊10√n⌋. In contrast, the K-CV, BIC, and AIC graphs are estimated using the full dataset. More details about this issue were discussed in Section 3. We generate data from sparse Gaussian graphs, neighborhood graphs and hub graphs, which mimic characteristics of real-world biological networks. The mean is set to zero and the covariance matrix is Σ = Ω^{-1}. For both graphs, the diagonal elements of Ω are set to one. More specifically:

1. Neighborhood graph: We first uniformly sample y_1, . . . , y_p from a unit square. We then set Ω_ij = Ω_ji = ρ with probability (√(2π))^{-1} exp(−4‖y_i − y_j‖²); all remaining off-diagonal entries Ω_ij are set to zero. The number of nonzero off-diagonal elements of each row or column is restricted to be smaller than ⌊1/ρ⌋. In this paper, ρ is set to 0.245.

2. Hub graph: The rows/columns are partitioned into J equally-sized disjoint groups, V_1 ∪ V_2 ∪ . . . ∪ V_J = {1, . . . , p}, and each group is associated with a "pivotal" row k. Let |V_1| = s.
We set Ω_ik = Ω_ki = ρ for i ∈ V_k and Ω_ik = Ω_ki = 0 otherwise. In our experiments, J = ⌊p/s⌋, k = 1, s + 1, 2s + 1, . . ., and we always set ρ = 1/(s + 1) with s = 20.

We generate synthetic datasets in both a low-dimensional (n = 800, p = 40) and a high-dimensional (n = 400, p = 100) setting. Table 1 compares all methods; we repeat the experiments 100 times and report the averaged precision, recall, and F1-score with their standard errors.

Table 1: Quantitative comparison of different methods on the datasets from the neighborhood and hub graphs.

Neighborhood graph (n = 800, p = 40):
Methods   Precision       Recall          F1-score
Oracle    0.9222 (0.05)   0.9070 (0.07)   0.9119 (0.04)
StARS     0.7204 (0.08)   0.9530 (0.05)   0.8171 (0.05)
K-CV      0.1394 (0.02)   1.0000 (0.00)   0.2440 (0.04)
BIC       0.9738 (0.03)   0.9948 (0.02)   0.9839 (0.01)
AIC       0.8696 (0.11)   0.9996 (0.01)   0.9236 (0.07)

Neighborhood graph (n = 400, p = 100):
Methods   Precision       Recall          F1-score
Oracle    0.7473 (0.09)   0.8001 (0.06)   0.7672 (0.07)
StARS     0.6366 (0.07)   0.8718 (0.06)   0.7352 (0.07)
K-CV      0.1383 (0.01)   1.0000 (0.00)   0.2428 (0.01)
BIC       0.1796 (0.11)   1.0000 (0.00)   0.2933 (0.13)
AIC       0.1279 (0.00)   1.0000 (0.00)   0.2268 (0.01)

Hub graph (n = 800, p = 40):
Methods   Precision       Recall          F1-score
Oracle    0.9793 (0.01)   1.0000 (0.00)   0.9895 (0.01)
StARS     0.4377 (0.02)   1.0000 (0.00)   0.6086 (0.02)
K-CV      0.2383 (0.09)   1.0000 (0.00)   0.3769 (0.01)
BIC       0.4879 (0.05)   1.0000 (0.00)   0.6542 (0.05)
AIC       0.2522 (0.09)   1.0000 (0.00)   0.3951 (0.00)

Hub graph (n = 400, p = 100):
Methods   Precision       Recall          F1-score
Oracle    0.8976 (0.02)   1.0000 (0.00)   0.9459 (0.01)
StARS     0.4572 (0.01)   1.0000 (0.00)   0.6274 (0.01)
K-CV      0.1574 (0.01)   1.0000 (0.00)   0.2719 (0.00)
BIC       0.2155 (0.00)   1.0000 (0.00)   0.3545 (0.01)
AIC       0.1676 (0.00)   1.0000 (0.00)   0.2871 (0.00)

For the low-dimensional settings where n ≫ p, the BIC criterion is very competitive and performs the best among all the methods. In the high-dimensional settings, however, StARS clearly outperforms all the competing methods for both neighborhood and hub graphs. This is consistent with our theory.
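The hub-graph construction above can be sketched as follows. This is a hedged sketch (the helper name is hypothetical, and 0-based indexing is used in place of the paper's 1-based pivotal rows):

```python
import numpy as np

def hub_precision_matrix(p, s=20):
    """Build a hub-graph precision matrix Omega (a sketch).

    Rows/columns are split into J = floor(p/s) groups of size s; the
    first row of each group acts as the pivotal row k, with
    Omega_ik = Omega_ki = rho = 1/(s+1) for the other group members i,
    unit diagonal, and zeros elsewhere.
    """
    rho = 1.0 / (s + 1)
    omega = np.eye(p)
    J = p // s
    for g in range(J):
        k = g * s                       # pivotal row of group g
        for i in range(k + 1, k + s):   # remaining s-1 members
            omega[i, k] = omega[k, i] = rho
    return omega
```

With ρ = 1/(s + 1) each row is diagonally dominant, so Ω is positive definite and Σ = Ω⁻¹ is a valid covariance matrix.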
At first sight, it might be surprising that for data from low-dimensional neighborhood graphs, BIC and AIC even outperform the oracle procedure! This is because both the BIC and AIC graphs are estimated using all n = 800 data points, while the oracle graph is estimated using only a subsampled dataset of size b(n) = ⌊10·√n⌋ = 282. Direct usage of the full sample is an advantage of model selection methods that take the general form of BIC and AIC. In high dimensions, however, we see that even with this advantage, StARS clearly outperforms BIC and AIC. The estimated graphs for the different methods in the setting n = 400, p = 100 are provided in Figures 1 and 2, from which we see that the StARS graph is almost as good as the oracle, while the K-CV, BIC, and AIC graphs are overly dense.

Figure 1: Comparison of different methods on the data from the neighborhood graphs (n = 400, p = 100). Panels: (a) true graph, (b) oracle graph, (c) StARS graph, (d) K-CV graph, (e) BIC graph, (f) AIC graph.

5.2 Microarray Data

We apply StARS to a dataset based on Affymetrix GeneChip microarrays measuring gene expression levels from immortalized B cells of human subjects. The sample size is n = 294. The expression levels for each array are pre-processed by log-transformation and standardization as in [13]. Using a sub-pathway subset of 324 correlated genes, we study the estimated graphs obtained from each method under investigation. The StARS and BIC graphs are provided in Figure 3. We see that the StARS graph is remarkably simple and informative, exhibiting some cliques and hub genes. In contrast, the BIC graph is very dense, and possibly useful association information is buried in the large number of estimated edges. The graphs selected by AIC and K-CV are even denser than the BIC graph and will be reported elsewhere. A full treatment of the biological implications of these two graphs, validated by enrichment analysis, will be provided in the full version of this paper.
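For reference, the subsampling-and-stability procedure evaluated throughout this section can be sketched in code. This is a simplified reading of the method, not the authors' implementation: `estimate_graph` is a hypothetical black-box estimator returning a boolean adjacency matrix, edge-wise instabilities are averaged as 2·θ·(1−θ), and the grid is assumed ordered from most to least regularized:

```python
import numpy as np

def stars_select(X, lambdas, estimate_graph, N=100, beta=0.05, rng=None):
    """Pick a regularization parameter by edge stability (a sketch).

    For each lambda we draw N subsamples of size b = floor(10*sqrt(n)),
    record how often each edge appears (theta), average the edge-wise
    instabilities 2*theta*(1-theta), monotonize with a running supremum,
    and return the densest lambda whose supremum stays below beta.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    b = int(10 * np.sqrt(n))
    pairs = p * (p - 1) / 2
    instab = []
    for lam in lambdas:                      # sparsest (largest lam) first
        theta = np.zeros((p, p))
        for _ in range(N):
            idx = rng.choice(n, size=b, replace=False)
            theta += estimate_graph(X[idx], lam)
        theta /= N
        xi = 2 * theta * (1 - theta)         # edge-wise instability
        instab.append(np.triu(xi, 1).sum() / pairs)
    sup = np.maximum.accumulate(instab)      # monotonized total instability
    ok = [i for i, d in enumerate(sup) if d <= beta]
    return lambdas[ok[-1]] if ok else lambdas[0]
```

Note that, as in the experiments above, the final graph would then be re-estimated at the selected parameter on a size-b subsample.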
6 Conclusions

The problem of estimating structure in high dimensions is very challenging. Casting the problem as regularized optimization has led to some success, but the choice of the regularization parameter is critical. We present a new method, StARS, for choosing this parameter in high-dimensional inference for undirected graphs. Like Meinshausen and Bühlmann's stability selection approach [12], our method makes use of subsampling, but it differs substantially from their approach in both implementation and goals. For graphical models, we choose the regularization parameter directly based on edge stability. Under mild conditions, StARS is partially sparsistent. However, even without these conditions, StARS has a simple interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. Empirically, we show that StARS works significantly better than existing techniques on both synthetic and microarray datasets. Although we focus here on graphical models, our new method is generally applicable to many problems that involve estimating structure, including regression, classification, density estimation, clustering, and dimensionality reduction.

Figure 2: Comparison of different methods on the data from the hub graphs (n = 400, p = 100). Panels: (a) true graph, (b) oracle graph, (c) StARS graph, (d) K-CV graph, (e) BIC graph, (f) AIC graph.

Figure 3: Microarray data example. The StARS graph is more informative than the BIC graph. Panels: (a) StARS graph, (b) BIC graph.

References

[1] Hirotsugu Akaike. Information theory and an extension of the maximum likelihood principle. Second International Symposium on Information Theory, (2):267–281, 1973.
[2] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation. Journal of Machine Learning Research, 9:485–516, March 2008.
[3] Shai Ben-David, Ulrike von Luxburg, and David Pal.
A sober look at clustering stability. In Proceedings of the Conference on Learning Theory, pages 5–19. Springer, 2006.
[4] Arthur P. Dempster. Covariance selection. Biometrics, 28:157–175, 1972.
[5] David Edwards. Introduction to Graphical Modelling. Springer-Verlag Inc, 1995.
[6] Bradley Efron. The Jackknife, the Bootstrap and Other Resampling Plans. SIAM [Society for Industrial and Applied Mathematics], 1982.
[7] Jerome H. Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2007.
[8] Tilman Lange, Volker Roth, Mikio L. Braun, and Joachim M. Buhmann. Stability-based validation of clustering solutions. Neural Computation, 16(6):1299–1323, 2004.
[9] Steffen L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[10] Han Liu, John Lafferty, and Larry Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295–2328, 2009.
[11] Nicolai Meinshausen and Peter Bühlmann. High dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 34:1436–1462, 2006.
[12] Nicolai Meinshausen and Peter Bühlmann. Stability selection. To appear in Journal of the Royal Statistical Society, Series B, Methodological, 2010.
[13] Renuka R. Nayak, Michael Kearns, Richard S. Spielman, and Vivian G. Cheung. Coexpression network based on natural variation in human gene expression reveals gene interactions and functions. Genome Research, 19(11):1953–1962, November 2009.
[14] Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, 2009.
[15] Dimitris N. Politis, Joseph P. Romano, and Michael Wolf. Subsampling (Springer Series in Statistics). Springer, 1st edition, August 1999.
[16] Pradeep Ravikumar, Martin Wainwright, Garvesh Raskutti, and Bin Yu.
Model selection in Gaussian graphical models: High-dimensional consistency of ℓ1-regularized MLE. In Advances in Neural Information Processing Systems 22, Cambridge, MA, 2009. MIT Press.
[17] Alessandro Rinaldo and Larry Wasserman. Generalized density clustering. arXiv:0907.3454, 2009.
[18] Adam J. Rothman, Peter J. Bickel, Elizaveta Levina, and Ji Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.
[19] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6:461–464, 1978.
[20] Robert J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley and Sons, 1980.
[21] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, Methodological, 58:267–288, 1996.
[22] Larry Wasserman and Kathryn Roeder. High dimensional variable selection. The Annals of Statistics, 37(5A):2178–2201, January 2009.
[23] Joe Whittaker. Graphical Models in Applied Multivariate Statistics. Wiley, 1990.
[24] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
4,112
Tiled convolutional neural networks Quoc V. Le, Jiquan Ngiam, Zhenghao Chen, Daniel Chia, Pang Wei Koh, Andrew Y. Ng Computer Science Department, Stanford University {quocle,jngiam,zhenghao,danchia,pangwei,ang}@cs.stanford.edu Abstract Convolutional neural networks (CNNs) have been successfully applied to many tasks such as digit and object recognition. Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on hardcoding. We propose tiled convolution neural networks (Tiled CNNs), which use a regular “tiled” pattern of tied weights that does not require that adjacent hidden units share identical weights, but instead requires only that hidden units k steps away from each other to have tied weights. By pooling over neighboring units, this architecture is able to learn complex invariances (such as scale and rotational invariance) beyond translational invariance. Further, it also enjoys much of CNNs’ advantage of having a relatively small number of learned parameters (such as ease of learning and greater scalability). We provide an efficient learning algorithm for Tiled CNNs based on Topographic ICA, and show that learning complex invariant features allows us to achieve highly competitive results for both the NORB and CIFAR-10 datasets. 1 Introduction Convolutional neural networks (CNNs) [1] have been successfully applied to many recognition tasks. These tasks include digit recognition (MNIST dataset [2]), object recognition (NORB dataset [3]), and natural language processing [4]. CNNs take translated versions of the same basis function, and “pool” over them to build translational invariant features. 
By sharing the same basis function across different image locations (weight-tying), CNNs have significantly fewer learnable parameters which makes it possible to train them with fewer examples than if entirely different basis functions were learned at different locations (untied weights). Furthermore, CNNs naturally enjoy translational invariance, since this is hard-coded into the network architecture. However, one disadvantage of this hard-coding approach is that the pooling architecture captures only translational invariance; the network does not, for example, pool across units that are rotations of each other or capture more complex invariances, such as out-of-plane rotations. Is it better to hard-code translational invariance – since this is a useful form of prior knowledge – or let the network learn its own invariances from unlabeled data? In this paper, we show that the latter is superior and describe an algorithm that can do so, outperforming convolutional methods. In particular, we present tiled convolutional networks (Tiled CNNs), which use a novel weight-tying scheme (“tiling”) that simultaneously enjoys the benefit of significantly reducing the number of learnable parameters while giving the algorithm flexibility to learn other invariances. Our method is based on only constraining weights/basis functions k steps away from each other to be equal (with the special case of k = 1 corresponding to convolutional networks). In order to learn these invariances from unlabeled data, we employ unsupervised pretraining, which has been shown to help performance [5, 6, 7]. In particular, we use a modification of Topographic ICA (TICA) [8], which learns to organize features in a topographical map by pooling together groups 1 Figure 1: Left: Convolutional Neural Networks with local receptive fields and tied weights. Right: Partially untied local receptive field networks – Tiled CNNs. 
Units with the same color belong to the same map; within each map, units with the same fill texture have tied weights. (Network diagrams in the paper are shown in 1D for clarity.) of related features. By pooling together local groups of features, it produces representations that are robust to local transformations [9]. We show in this paper how TICA can be efficiently used to pretrain Tiled CNNs through the use of local orthogonality. The resulting Tiled CNNs pretrained with TICA are indeed able to learn invariant representations, with pooling units that are robust to both scaling and rotation. We find that this improves classification performance, enabling Tiled CNNs to be competitive with previously published results on the NORB [3] and CIFAR-10 [10] datasets. 2 Tiled CNNs CNNs [1, 11] are based on two key concepts: local receptive fields, and weight-tying. Using local receptive fields means that each unit in the network only “looks” at a small, localized region of the input image. This is more computationally efficient than having full receptive fields, and allows CNNs to scale up well. Weight-tying additionally enforces that each first-layer (simple) unit shares the same weights (see Figure 1-Left). This reduces the number of learnable parameters, and (by pooling over neighboring units) further hard-codes translational invariance into the model. Even though weight-tying allows one to hard-code translational invariance, it also prevents the pooling units from capturing more complex invariances, such as scale and rotation invariance. This is because the second layer units are constrained to pool over translations of identical bases. In this paper, rather than tying all of the weights in the network together, we instead develop a method that leaves nearby bases untied, but far-apart bases tied. This lets second-layer units pool over simple units that have different basis functions, and hence learn a more complex range of invariances. 
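One simple way to realize this tying pattern in code (a sketch, not the authors' implementation) is to index weight banks modulo the tile size k, so that units exactly k steps apart share parameters:

```python
def weight_bank(t, k):
    """Index of the weight bank used by the simple unit at position t.

    Units exactly k steps apart share a bank: k = 1 recovers an
    ordinary CNN (every unit tied), while k at least the number of
    units leaves all weights untied.
    """
    return t % k

# k = 1: a conventional CNN -- every unit uses the same bank.
assert [weight_bank(t, 1) for t in range(5)] == [0, 0, 0, 0, 0]
# k = 2: units 0, 2, 4 are tied together, and units 1, 3 are tied.
assert [weight_bank(t, 2) for t in range(5)] == [0, 1, 0, 1, 0]
```

With l maps, each position then draws from one of l independent sets of k banks, which is what keeps the parameter count small.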
We call this local untying of weights "tiling." Tiled CNNs are parametrized by a tile size k: we constrain only units that are k steps away from each other to be tied. By varying k, we obtain a spectrum of models which trade off between being able to learn complex invariances and having few learnable parameters. At one end of the spectrum we have traditional CNNs (k = 1), and at the other, we have fully untied simple units.

Next, we allow our model to use multiple "maps," so as to learn highly overcomplete representations. A map is a set of pooling units and simple units that collectively cover the entire image (see Figure 1-Right). When varying the tiling size, we change the degree of weight-tying within each map; for example, if k = 1, the simple units within each map will have the same weights. In our model, simple units in different maps are never tied. By having units in different maps learn different features, our model can learn a rich and diverse set of features. Tiled CNNs with multiple maps enjoy the twin benefits of (i) being able to represent complex invariances by pooling over (partially) untied weights, and (ii) having a relatively small number of learnable parameters.

Figure 2: Left: TICA network architecture. Right: TICA first layer filters (2D topography, 25 rows of W).

Unfortunately, existing methods for pretraining CNNs [11, 12] are not suitable for untied weights; for example, the CDBN algorithm [11] breaks down without the weight-tying constraints. In the following sections, we discuss a pretraining method for Tiled CNNs based on the TICA algorithm.

3 Unsupervised feature learning via TICA

TICA is an unsupervised learning algorithm that learns features from unlabeled image patches. A TICA network [9] can be described as a two-layered network (Figure 2-Left), with square and square-root nonlinearities in the first and second layers respectively.
The weights W in the first layer are learned, while the weights V in the second layer are fixed and hard-coded to represent the neighborhood/topographical structure of the neurons in the first layer. Specifically, each second-layer hidden unit p_i pools over a small neighborhood of adjacent first-layer units h_i. We call the h_i and p_i simple and pooling units, respectively. More precisely, given an input pattern x^(t), the activation of each second-layer unit is

p_i(x^(t); W, V) = sqrt( Σ_{k=1}^{m} V_ik · ( Σ_{j=1}^{n} W_kj · x_j^(t) )² ).

TICA learns the parameters W through finding sparse feature representations in the second layer, by solving:

minimize_W  Σ_{t=1}^{T} Σ_{i=1}^{m} p_i(x^(t); W, V),  subject to W·Wᵀ = I   (1)

where the input patterns {x^(t)}_{t=1}^{T} are whitened.¹ Here, W ∈ R^{m×n} and V ∈ R^{m×m}, where n is the size of the input and m is the number of hidden units in a layer. V is a fixed matrix (V_ij = 1 or 0) that encodes the 2D topography of the hidden units h_i. Specifically, the h_i units lie on a 2D grid, with each p_i connected to a contiguous 3x3 (or other size) block of h_i units.² The case of each p_i being connected to exactly one h_i corresponds to standard ICA. The orthogonality constraint W·Wᵀ = I provides competitiveness and ensures that the learned features are diverse.

One important property of TICA is that it can learn invariances even when trained only on unlabeled data, as demonstrated in [8, 9]. This is due both to the pooling architecture, which gives rise to pooling units that are robust to local transformations of their inputs, and the learning algorithm, which promotes selectivity by optimizing for sparsity. This combination of robustness and selectivity is central to feature invariance, which is in turn essential for recognition tasks [13].
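The square and square-root activations above can be written out directly. A minimal NumPy sketch of the formula for a single whitened input pattern x (function name hypothetical):

```python
import numpy as np

def tica_activations(x, W, V):
    """Simple and pooling unit activations of a TICA network.

    h_k = (sum_j W_kj x_j)^2 is the square nonlinearity, and
    p_i = sqrt(sum_k V_ik h_k) is the square-root pooling, matching
    the activation p_i(x; W, V) given above.
    """
    h = (W @ x) ** 2          # m squared simple-unit responses
    p = np.sqrt(V @ h)        # pooling over the topographic neighborhood
    return h, p
```

Objective (1) then simply sums the p_i over all training patterns, subject to the orthogonality constraint on W.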
If we choose square and square-root activations for the simple and pooling units in the Tiled CNN, we can view the Tiled CNN as a special case of a TICA network, with the topography of the pooling units specifying the matrix V.³ Crucially, Tiled CNNs incorporate local receptive fields, which play an important role in speeding up TICA. We discuss this next.

¹ Whitening means that they have been linearly transformed to have zero mean and identity covariance.
² For illustration, however, the figures in this paper depict x_i, h_i and p_i in 1D and show a 1D topography.
³ The locality constraint, in addition to being biologically motivated by the receptive field organization patterns in V1, is also a natural approximation to the original TICA algorithm, as the original learned receptive

4 Local receptive fields in TICA

Tiled CNNs typically perform much better at object recognition when the learned representation consists of multiple feature maps (Figure 1-Right). This corresponds to training TICA with an overcomplete representation (m > n). When learning overcomplete representations [14], the orthogonality constraint cannot be satisfied exactly, and we instead try to satisfy an approximate orthogonality constraint [15]. Unfortunately, these approximate orthogonality constraints are computationally expensive and have hyperparameters which need to be extensively tuned. Much of this tuning can be avoided by using score matching [16], but this is computationally even more expensive; and while orthogonalization can be avoided altogether with topographic sparse coding, those models are also expensive, as they require further work either for inference at prediction time [9, 14] or for learning a decoder unit at training time [17].

We can avoid approximate orthogonalization by using local receptive fields, which are inherently built into Tiled CNNs. With these, the weight matrix W for each simple unit is constrained to be 0 outside a small local region. This locality constraint automatically ensures that the weights of any two simple units with non-overlapping receptive fields are orthogonal, without the need for an explicit orthogonality constraint. Empirically, we find that orthogonalizing partially overlapping receptive fields is not necessary for learning distinct, informative features either. However, orthogonalization is still needed to decorrelate units that occupy the same position in their respective maps, since they look at the same region of the image. Fortunately, this local orthogonalization is cheap: for example, if there are l maps and each receptive field is restricted to look at an input patch that contains s pixels, we only need to orthogonalize the rows of an l-by-s matrix to ensure that the l features over these s pixels are orthogonal. Specifically, so long as l ≤ s, we can demand that these l units that share an input patch be orthogonal. Using this method, we can learn networks that are overcomplete by a factor of about s (i.e., by learning l = s maps), while having to orthogonalize only matrices that are l-by-s. This is significantly cheaper than standard TICA: for l maps, our computational cost is O(l·s²·n), compared to standard TICA's O(l²·n³). In general, we will have l × k × s learnable parameters for an input of size n.
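The claim that non-overlapping receptive fields are automatically orthogonal is easy to verify numerically (a toy check, with hypothetical sizes n = 12 and s = 4):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simple units over a length-12 input, each with a 4-pixel receptive field.
w1 = np.zeros(12)
w1[0:4] = rng.standard_normal(4)    # RF covers pixels 0..3
w2 = np.zeros(12)
w2[6:10] = rng.standard_normal(4)   # RF covers pixels 6..9 (disjoint)

# Disjoint support implies an exactly zero inner product: orthogonality
# holds without any explicit constraint, whatever the filter values are.
assert w1 @ w2 == 0.0
```

Only units whose receptive fields coincide (same position, different maps) still need the explicit l-by-s orthogonalization described above.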
We note that setting k to its maximum value of n − s + 1 gives exactly the untied local TICA model outlined in the previous section.⁴

5 Pretraining Tiled CNNs with local TICA

Algorithm 1: Unsupervised pretraining of Tiled CNNs with TICA (line search)

Input: {x^(t)}_{t=1}^{T}, W, V, k, s   // k is the tile size, s is the receptive field size
Output: W
repeat
  f_old ← Σ_t Σ_i sqrt( Σ_k V_ik (Σ_j W_kj x_j^(t))² )
  g ← ∂[ Σ_t Σ_i sqrt( Σ_k V_ik (Σ_j W_kj x_j^(t))² ) ] / ∂W
  f_new ← +∞, α ← 1
  while f_new ≥ f_old do
    W_new ← W − α·g
    W_new ← localize(W_new, s)
    W_new ← tie_weights(W_new, k)
    W_new ← orthogonalize_local_RF(W_new)
    f_new ← Σ_t Σ_i sqrt( Σ_k V_ik (Σ_j W_new,kj x_j^(t))² )
    α ← 0.5·α
  end while
  W ← W_new
until convergence

Our pretraining algorithm, based on gradient descent on the TICA objective function (1), is shown in Algorithm 1. The innermost loop is a simple implementation of backtracking line search. In orthogonalize_local_RF(W_new), we only orthogonalize the weights that have completely overlapping receptive fields. In tie_weights, we enforce weight-tying by averaging each set of tied weights. The algorithm is trained by batch projected gradient descent and usually requires little tuning of optimization parameters, because TICA's tractable objective function allows us to monitor convergence easily. In contrast, other unsupervised feature learning algorithms such as RBMs [6] and autoencoders [18] require much more parameter tuning, especially during optimization.

⁴ For a 2D input image of size n×n and a local RF of size s×s, the maximum value of k is (n − s + 1)².
³ (continued) fields tend to be very localized, even without any explicit locality constraint. For example, when trained on natural images, TICA's first layer weights usually resemble localized Gabor filters (Figure 2-Right).

6 Experiments

6.1 Speed-up

Figure 3: Speed-up of Tiled CNNs compared to standard TICA.
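Before turning to the experiments, Algorithm 1 can be rendered as a runnable sketch. This is a simplification, not the authors' MATLAB code: the gradient is derived analytically for a smoothed objective, `mask`/`groups` stand in for the localize and tie_weights projections, and the local orthogonalization step is omitted:

```python
import numpy as np

EPS = 1e-9  # smoothing keeps the square root differentiable at zero

def tica_objective(W, V, X):
    """f(W) = sum_t sum_i sqrt(sum_k V_ik (W x^(t))_k^2), objective (1)."""
    A = X @ W.T                         # T x m simple-unit pre-activations
    return np.sqrt(A**2 @ V.T + EPS).sum()

def localize(W, mask):
    """Zero every weight outside each unit's local receptive field."""
    return W * mask

def tie_weights(W, groups):
    """Average each set of tied rows (the tie_weights step, tile size k)."""
    W = W.copy()
    for g in groups:
        W[g] = W[g].mean(axis=0)
    return W

def pretrain_step(W, V, X, mask, groups):
    """One outer iteration of Algorithm 1 with backtracking line search."""
    A = X @ W.T
    P = np.sqrt(A**2 @ V.T + EPS)       # T x m pooling activations
    f_old = P.sum()
    G = (A * ((1.0 / P) @ V)).T @ X     # analytic gradient of f w.r.t. W
    f_new, alpha = np.inf, 1.0
    while f_new >= f_old:
        W_new = tie_weights(localize(W - alpha * G, mask), groups)
        f_new = tica_objective(W_new, V, X)
        alpha *= 0.5
        if alpha < 1e-12:               # projection stalled the search
            return W, f_old
    return W_new, f_new
```

Calling `pretrain_step` repeatedly until the objective stops decreasing corresponds to the outer `repeat ... until convergence` loop.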
We first establish that the local receptive fields intrinsic to Tiled CNNs allow us to implement TICA learning for overcomplete representations much more efficiently. Figure 3 shows the relative speed-up of pretraining Tiled CNNs over standard TICA using approximate fixed-point orthogonalization (W ← (3/2)·W − (1/2)·W·Wᵀ·W) [15]. These experiments were run on 10000 images of size 32x32 or 50x50, with s = 8. We note that the weights in this experiment were left fully untied, i.e., k = n − s + 1. Hence, the speed-up observed here is not from an efficient convolutional implementation, but purely due to the local receptive fields. Overcoming this computational challenge is the key that allows Tiled CNNs to successfully use TICA to learn features from unlabeled data.⁵

6.2 Classification on NORB

Next, we show that TICA pretraining for Tiled CNNs performs well on object recognition. We start with the normalized-uniform set for NORB, which consists of 24300 training examples and 24300 test examples drawn from 5 categories. In our case, each example is a preprocessed pair of 32x32 images.⁶

In our classification experiments, we fix the size of each local receptive field to 8x8, and set V such that each pooling unit p_i in the second layer pools over a block of 3x3 simple units in the first layer, without wraparound at the borders. The number of pooling units in each map is exactly the same as the number of simple units. We densely tile the input images with overlapping 8x8 local receptive fields, with a step size (or "stride") of 1. This gives us 25 × 25 = 625 simple units and 625 pooling units per map in our experiments on 32x32 images. A summary of results is reported in Table 1.

6.2.1 Unsupervised pretraining

We first consider the case in which the features are learned purely from unsupervised data.
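The fixed-point orthogonalization rule quoted above can be sketched directly. The update itself follows [15]; the initial spectral-norm scaling is an added assumption here to keep the iteration inside its basin of convergence:

```python
import numpy as np

def approx_orthogonalize(W, iters=50):
    """Approximate fixed-point orthogonalization of the rows of W.

    Repeats W <- 1.5*W - 0.5*W @ W.T @ W, which drives every singular
    value of W toward 1 (so W @ W.T -> I) once all singular values
    start in (0, 1]. The initial division by the spectral norm is an
    assumption of this sketch, not part of the quoted rule.
    """
    W = W / np.linalg.norm(W, 2)        # largest singular value -> 1
    for _ in range(iters):
        W = 1.5 * W - 0.5 * W @ W.T @ W
    return W
```

Each singular value σ maps to 1.5σ − 0.5σ³, whose only attracting fixed point in (0, 1] is 1, so the rows become orthonormal without an explicit decomposition; this is exactly why the step is cheap compared to exact orthogonalization.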
In particular, we use the NORB training set itself (without the labels) as a source of unsupervised data with which to learn the weights W of the Tiled CNN. We call this initial phase the unsupervised pretraining phase. After learning a feature representation from the unlabeled data, we train a linear classifier on the output of the Tiled CNN network (i.e., the activations of the pooling units) on the labeled training set. During this supervised training phase, only the weights of the linear classifier were learned, while the lower weights of the Tiled CNN model remained fixed.

We train a range of models to investigate the role of the tile size k and the number of maps l.⁷ The test set accuracy results of these models are shown in Figure 4-Left. Using a randomly sampled hold-out validation set of 2430 examples (10%) taken from the training set, we selected a convolutional model with 48 maps that achieved an accuracy of 94.5% on the test set, indicating that Tiled CNNs learned purely on unsupervised data compare favorably to many state-of-the-art algorithms on NORB.

Table 1: Test set accuracy on NORB
Algorithm                                           Accuracy
Tiled CNNs (with finetuning) (Section 6.2.2)        96.1%
Tiled CNNs (without finetuning) (Section 6.2.1)     94.5%
Standard TICA (10x overcomplete)                    89.6%
Convolutional Neural Networks [19], [12]            94.1%, 94.4%
3D Deep Belief Networks [19]                        93.5%
Support Vector Machines [20]                        88.4%
Deep Boltzmann Machines [21]                        92.8%

⁵ All algorithms are implemented in MATLAB and executed on a computer with a 3.0GHz CPU and 9GB of RAM. While orthogonalization alone is 10⁴ times faster in Tiled CNNs, other computations such as gradient calculations reduce the overall speed-up factor to 10x-250x.
⁶ Each NORB example is a binocular pair of 96x96 images. To reduce processing time, we downsampled each 96x96 image to 32x32 pixels. Hence, each simple unit sees 128 pixels from an 8x8 patch from each of the two binocular images. The input was whitened using ZCA (Zero-Phase Components Analysis).
6.2.2 Supervised finetuning of W

Next, we study the effects of supervised finetuning [23] on the models produced by the unsupervised pretraining phase. Supervised finetuning takes place after unsupervised pretraining, but before the supervised training of the classifier. Using softmax regression to calculate the gradients, we backpropagated the error signal from the output back to the learned features in order to update W, the weights of the simple units in the Tiled CNN model. During the finetuning step, the weights W were adjusted without orthogonalization.

The results of supervised finetuning on our models are shown in Figure 4-Right. As above, we used a validation set comprising 10% of the training data for model selection. Models with larger numbers of maps tended to overfit and hence performed poorly on the validation set. The best performing fine-tuned model on the validation set was the model with 16 maps and k = 2, which achieved a test-set accuracy of 96.1%. This substantially outperforms standard TICA, as well as the best published results on NORB to date (see Table 1).

Figure 4: Left: NORB test set accuracy across various tile sizes and numbers of maps, without finetuning. Right: NORB test set accuracy, with finetuning.

6.2.3 Limited training data

To test the ability of our pretrained features to generalize across rotations and lighting conditions given only a weak supervised signal, we limited the labeled training set to comprise only examples with a particular set of viewing angles and lighting conditions. Specifically, NORB contains images spanning 9 elevations, 18 azimuths and 6 lighting conditions, and we trained our linear classifier only on data with elevations {2, 4, 6}, azimuths {10, 18, 24} and lighting conditions {1, 3, 5}. Thus, for each object instance, the linear classifier sees only 27 training images, making for a total of 675 out of the possible 24300 training examples.

Using the pretrained network in Section 6.2.1, we trained a linear classifier on these 675 labeled examples. We obtained an accuracy of 72.2% on the full test set using the model with k = 2 and 22 maps. A smaller, approximately 2.5x overcomplete model with k = 2 and 4 maps obtained an accuracy of 64.9%. In stark contrast, raw pixel performance dropped sharply from 80.2% with a full supervised training set to a near-chance level of 20.8% on this limited training set (Figure 5). These results demonstrate that Tiled CNNs perform well even with limited labeled data, most likely because the partial weight-tying results in a relatively small number of learnable parameters, reducing the need for large amounts of labeled data.

Figure 5: Test set accuracy on full and limited training sets.

⁷ We used an SVM [22] as the linear classifier and determined C by cross-validation over {10⁻⁴, 10⁻³, . . . , 10⁴}. Models were trained with various untied map sizes k ∈ {1, 2, 9, 16, 25} and numbers of maps l ∈ {4, 6, 10, 16}. When k = 1, we were able to use an efficient convolutional implementation to scale up the number of maps, allowing us to train additional models with l ∈ {22, 36, 48}.
6.3 Classification on CIFAR-10

The CIFAR-10 dataset contains 50000 training images and 10000 test images drawn from 10 categories.8 A summary of results is reported in Table 2.

Table 2: Test set accuracy on CIFAR-10

Algorithm | Accuracy
Deep Tiled CNNs (s=4, with finetuning) (Section 6.3.2) | 73.1%
Tiled CNNs (s=8, without finetuning) (Section 6.3.1) | 66.1%
Standard TICA (10x, fixed-point orthogonalization) | 56.1%
Raw pixels [10] | 41.1%
RBM (one layer, 10000 units, finetuning) [10] | 64.8%
RBM (two layers, 10000 units, finetuning both layers) [10] | 60.3%
RBM (two layers, 10000 units, finetuning top layer) [10] | 62.2%
mcRBM (convolutional, trained on two million tiny images) [24] | 71.0%
Local Coordinate Coding (LCC) [25] | 72.3%
Improved Local Coordinate Coding (Improved LCC) [25] | 74.5%

6.3.1 Unsupervised pretraining and supervised finetuning

As before, models were trained with tile size k ∈ {1, 2, 25} and number of maps l ∈ {4, 10, 16, 22, 32}. The convolutional model (k = 1) was also trained with l = 48 maps. This 48-map convolutional model performed the best on our 10% hold-out validation set, and achieved a test set accuracy of 66.1%. We find that supervised finetuning of these models on CIFAR-10 causes overfitting and generally reduces test-set accuracy; the top model on the validation set, with 32 maps and k = 1, achieves only 65.1%.

8 Each CIFAR-10 example is a 32x32 RGB image, also whitened using ZCA. Hence, each simple unit sees three patches from the three channels of the color image input (RGB).

6.3.2 Deep Tiled CNNs

We additionally investigate the possibility of training a deep Tiled CNN in a greedy layer-wise fashion, similar to models such as DBNs [6] and stacked autoencoders [26, 18]. We constructed this network by stacking two Tiled CNNs, each with 10 maps and k = 2.
The resulting four-layer network has the structure W1 → V1 → W2 → V2, where the weights W1 are local receptive fields of size 4x4, and W2 is of size 3x3, i.e., each unit in the third layer "looks" at a 3x3 window of each of the 10 maps in the first layer. These parameters were chosen by an efficient architecture search [27] on the hold-out validation set. The number of maps in the third and fourth layers is also 10. After finetuning, we found that the deep model outperformed all previous models on the validation set, and achieved a test set accuracy of 73.1%. This demonstrates the potential of deep Tiled CNNs to learn more complex representations.

6.4 Effects of optimizing the pooling units

When the tile size is 1 (i.e., a fully tied model), a naïve approach to learning the filter weights is to directly train the first-layer filters on small patches (e.g., 8x8) randomly sampled from the dataset, with a method such as ICA. This method is computationally more attractive and probably easier to implement. Here, we investigate whether such benefits come at the expense of classification accuracy. We use ICA to learn the first-layer weights on CIFAR-10 with 16 filters. These weights are then used in a Tiled CNN with a tile size of 1 and 16 maps. This method is compared to pretraining a model of the same architecture with TICA. For both methods, we do not use finetuning. Interestingly, results on the test set show that the naïve approach yields significantly reduced classification accuracy: it obtains 51.54%, while pretraining with TICA achieves 58.66%. These results confirm that optimizing for sparsity of the pooling units results in better features than naïvely approximating the first-layer weights.

7 Discussion and Conclusion

Our results show that untying weights is beneficial for classification performance.
Specifically, we find that selecting a tile size of k = 2 achieves the best results for both the NORB and CIFAR-10 datasets, even with deep networks. More importantly, untying weights allows the networks to learn more complex invariances from unlabeled data. By visualizing [28, 29] the optimal stimuli that activate each pooling unit in a Tiled CNN, we found units that were scale and rotationally invariant.9 We note that a standard CNN is unlikely to be invariant to these transformations.

A natural choice of the tile size k would be to set it to the size of the pooling region p, which in this case is 3. In this case, each pooling unit always combines simple units that are not tied. However, increasing the tile size leads to a higher degree of freedom in the models, making them susceptible to overfitting (learning unwanted non-stationary statistics of the dataset). Fortunately, the Tiled CNN only requires unlabeled data for training, which can be obtained cheaply. Our preliminary results on networks pretrained using 250000 unlabeled images from the Tiny Images dataset [30] show that performance increases as k goes from 1 to 3, flattening out at k = 4. This suggests that when there is sufficient data to avoid overfitting, setting k = p can be a very good choice.

In this paper, we introduced Tiled CNNs as an extension of CNNs that supports both unsupervised pretraining and weight tiling. The idea of tiling, or partial untying of filter weights, is a parametrization of a spectrum of models which includes both fully-convolutional and fully-untied weight schemes as natural special cases. Furthermore, the use of local receptive fields enables our models to scale up well, producing massively overcomplete representations that perform well on classification tasks. These principles allow Tiled CNNs to achieve competitive results on the NORB and CIFAR-10 object recognition datasets.
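The tiling parametrization discussed above, with k = 1 recovering a fully convolutional layer and larger k progressively untying the weights, can be sketched in one dimension. This illustrates only the weight-sharing pattern, not the authors' implementation:

```python
import numpy as np

def tiled_conv_1d(x, weights):
    """Partially untied 1-D 'convolution': weights has shape (k, r),
    holding k untied filter banks of receptive-field size r. The unit at
    position i uses bank i % k, so units exactly k apart share weights."""
    k, r = weights.shape
    n_units = len(x) - r + 1
    out = np.empty(n_units)
    for i in range(n_units):
        out[i] = weights[i % k] @ x[i:i + r]
    return out

x = np.arange(8.0)
tied = tiled_conv_1d(x, np.ones((1, 3)))   # k = 1: a plain convolution
tiled = tiled_conv_1d(x, np.ones((2, 3)))  # k = 2 with identical banks
# With identical banks, the k = 2 output reduces to the k = 1 case,
# showing that convolution is the fully tied special case of tiling.
```

In a 2-D Tiled CNN the same pattern applies along both image axes, and pooling units then combine simple units drawn from different, untied banks.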
Importantly, tiling is directly applicable and can potentially benefit a wide range of other feature learning models.

Acknowledgements: We thank Adam Coates, David Kamm, Andrew Maas, Andrew Saxe, Serena Yeung and Chenguang Zhu for insightful discussions. This work was supported by the DARPA Deep Learning program under contract number FA8650-10-C-7020.

9 These visualizations are available at http://ai.stanford.edu/~quocle/.

References
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[2] P. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, 2003.
[3] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In CVPR, 2004.
[4] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, 2008.
[5] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: Transfer learning from unlabeled data. In ICML, 2007.
[6] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
[7] D. Erhan, A. Courville, Y. Bengio, and P. Vincent. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 2010.
[8] A. Hyvarinen and P. Hoyer. Topographic independent component analysis as a model of V1 organization and receptive fields. Neural Computation, 2001.
[9] A. Hyvarinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer, 2009.
[10] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, U. Toronto, 2009.
[11] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[12] K. Jarrett, K. Kavukcuoglu, M.A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
[13] I. Goodfellow, Q.V. Le, A. Saxe, H. Lee, and A.Y. Ng. Measuring invariances in deep networks. In NIPS, 2010.
[14] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 1996.
[15] A. Hyvarinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley Interscience, 2001.
[16] A. Hyvarinen. Estimation of non-normalized statistical models using score matching. JMLR, 2005.
[17] K. Kavukcuoglu, M.A. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In CVPR, 2009.
[18] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2007.
[19] V. Nair and G. Hinton. 3D object recognition with deep belief nets. In NIPS, 2009.
[20] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. In Large-Scale Kernel Machines, 2007.
[21] R. Salakhutdinov and H. Larochelle. Efficient learning of Deep Boltzmann Machines. In AISTATS, 2010.
[22] R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871-1874, 2008.
[23] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
[24] M. Ranzato and G. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR, 2010.
[25] K. Yu and T. Zhang. Improved local coordinate coding using local tangents. In ICML, 2010.
[26] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009.
[27] A. Saxe, M. Bhand, Z. Chen, P.W. Koh, B. Suresh, and A.Y. Ng. On random weights and unsupervised feature learning. In Workshop: Deep Learning and Unsupervised Feature Learning (NIPS), 2010.
[28] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network. Technical report, University of Montreal, 2009.
[29] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 2005.
[30] A. Torralba, R. Fergus, and W.T. Freeman. 80 million tiny images: a large dataset for non-parametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008.
More data means less inference: A pseudo-max approach to structured learning

David Sontag, Microsoft Research
Ofer Meshi, Hebrew University
Tommi Jaakkola, CSAIL, MIT
Amir Globerson, Hebrew University

Abstract

The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.

Many prediction problems in machine learning applications are structured prediction tasks. For example, in protein folding we are given a protein sequence and the goal is to predict the protein's native structure [14]. In parsing for natural language processing, we are given a sentence and the goal is to predict the most likely parse tree [2]. In these and many other applications, we can formalize the structured prediction problem as taking an input x (e.g., primary sequence, sentence) and predicting y (e.g., structure, parse) according to y = arg max_{ŷ∈Y} θ·φ(x, ŷ), where φ(x, y) is a function that maps any input and a candidate assignment to a feature vector, Y denotes the space of all possible assignments to the vector y, and θ is a weight vector to be learned.

This paper addresses the problem of learning structured prediction models from data. In particular, given a set of labeled examples {(x^m, y^m)}_{m=1}^M, our goal is to find a vector θ such that for each example m, y^m = arg max_{y∈Y} θ·φ(x^m, y), i.e. one which separates the training data. For many structured prediction models, maximization over Y is computationally intractable.
This makes it difficult to apply previous algorithms for learning structured prediction models, such as structured perceptron [2], stochastic subgradient [10], and cutting-plane algorithms [5], which require making a prediction at every iteration (equivalent to repeatedly solving an integer linear program).

Given training data, we can consider the space of parameters Θ that separate the data. This space can be defined by the intersection of a large number of linear inequalities. A recent approach to getting around the hardness of prediction is to use linear programming (LP) relaxations to approximate the maximization over Y [4, 6, 9]. However, separation with respect to a relaxation places stronger constraints on the parameters. The target solution, an integral vertex in the LP, must now distinguish itself also from possible fractional vertices that arise due to the relaxation. The relaxations can therefore be understood as optimizing over an inner bound of Θ. This set may be empty even if the training data is separable with exact inference [6]. Another obstacle to using LP relaxations for learning is that solving the LPs can be very slow.

In this paper we ask whether it is possible to learn while avoiding inference altogether. We propose a new learning algorithm, inspired by pseudo-likelihood [1], that optimizes over an outer bound of Θ. Learning involves optimizing over only a small number of constraints per data point, and thus can be performed quickly, even for complex structured prediction models. We show that, if the data available for learning is "nice", this algorithm is consistent, i.e. it will find some θ ∈ Θ. This is an example of how having the right data can circumvent the hardness of learning for structured prediction. We also investigate the limitations of the proposed method. We show that the problem of even deciding whether a given data set is separable is NP-hard, and thus learning in a strict sense is no easier than prediction.
Thus, we should not expect our algorithm, or any other polynomial-time algorithm, to always succeed at learning from an arbitrary finite data set. To our knowledge, this is the first result characterizing the hardness of exact learning for structured prediction. Finally, we show empirically that our algorithm allows us to successfully learn the parameters for both multi-label prediction and protein side-chain placement. The performance of the algorithm improves as more data becomes available, as our theoretical results anticipate.

1 Pseudo-Max method

We consider the general structured prediction problem. The input space is denoted by X and the set of all possible assignments by Y. Each y ∈ Y corresponds to n variables y_1, ..., y_n, each with k possible states. The classifier uses a (given) function φ(x, y) : X × Y → R^d and (learned) weights θ ∈ R^d, and is defined as y(x; θ) = arg max_{ŷ∈Y} f(ŷ; x, θ), where f is the discriminant function f(y; x, θ) = θ·φ(x, y). Our analysis will focus on functions φ whose scope is limited to small sets of the y_i variables, but for now we keep the discussion general.

Given a set of labeled examples {(x^m, y^m)}_{m=1}^M, the goal of the typical learning problem is to find weights θ that correctly classify the training examples. Consider first the separable case. Define the set of separating weight vectors,

Θ = { θ | ∀m, y ∈ Y: f(y^m; x^m, θ) ≥ f(y; x^m, θ) + e(y, y^m) }.

Here e is a loss function (e.g., zero-one or Hamming) such that e(y^m, y^m) = 0 and e(y, y^m) > 0 for y ≠ y^m, which serves to rule out the trivial solution θ = 0.1 The space Θ is defined by exponentially many constraints per example, one for each competing assignment.

In this work we consider a much simpler set of constraints where, for each example, we only consider the competing assignments obtained by modifying a single label y_i, while fixing the other labels to their value at y^m. The pseudo-max set, which is an outer bound on Θ, is given by

Θ_ps = { θ | ∀m, i, y_i: f(y^m; x^m, θ) ≥ f(y^m_{-i}, y_i; x^m, θ) + e(y_i, y^m_i) }.   (1)

Here y^m_{-i} denotes the label y^m without the assignment to y_i. When the data is not separable, Θ will be the empty set. Instead, we may choose to minimize the hinge loss,

ℓ(θ) = Σ_m max_y [ f(y; x^m, θ) − f(y^m; x^m, θ) + e(y, y^m) ],

which can be shown to be an upper bound on the training error [13]. When the data is separable, min_θ ℓ(θ) = 0. Note that regularization may be added to this objective. The corresponding pseudo-max objective replaces the maximization over all of y with maximization over a single variable y_i while fixing the other labels to their value at y^m:2,3

ℓ_ps(θ) = Σ_{m=1}^M Σ_{i=1}^n max_{y_i} [ f(y^m_{-i}, y_i; x^m, θ) − f(y^m; x^m, θ) + e(y_i, y^m_i) ].   (2)

Analogous to before, we have min_θ ℓ_ps(θ) = 0 if and only if θ ∈ Θ_ps.

The objective in Eq. 2 is similar in spirit to pseudo-likelihood objectives used for maximum likelihood estimation of parameters of Markov random fields (MRFs) [1]. The pseudo-likelihood estimate is provably consistent when the data generating distribution is an MRF of the same structure as used in the pseudo-likelihood objective. However, our setting is different, since we only get to view the maximizing assignment of the MRF rather than samples from it. Thus, a particular x will always be paired with the same y rather than samples y drawn from the conditional distribution p(y|x; θ).

The pseudo-max constraints in Eq. 1 are also related to cutting plane approaches to inference [4, 5]. In the latter, the learning problem is solved by repeatedly looking for assignments that violate the separability constraint (or its hinge version). Our constraints can be viewed as using a very small subset of assignments for the set of candidate constraint violators. We also note that when exact maximization over the discriminant function f(y; x, θ) is hard, the standard cutting plane algorithm cannot be employed since it is infeasible to find a violated constraint. For the pseudo-max objective, finding a constraint violation is simple and linear in the number of variables.4

It is easy to see (as will be elaborated on next) that the pseudo-max method does not in general yield a consistent estimate of θ, even in the separable case. However, as we show, consistency can be achieved under particular assumptions on the data generating distribution p(x).

1 An alternative formulation, which we use in the next section, is to break the symmetry by having part of the input not be multiplied by any weight. This also rules out the trivial solution θ = 0.
2 It is possible to use max_i instead of Σ_i, and some of our consistency results will still hold.
3 The pseudo-max approach is markedly different from a learning method which predicts each label y_i independently, since the objective considers all i simultaneously (both at learning and test time).

Figure 1: Illustrations for a model with two variables. Left: Partitioning of X induced by configurations y(x) for some J* > 0. Blue lines carve out the exact regions. Red lines denote the pseudo-max constraints that hold with equality. Pseudo-max does not obtain the diagonal constraint coming from comparing configurations y = (1, 1) and (0, 0), since these differ by more than one coordinate. Right: One strictly-convex component of the ℓ_ps(J) function (see Eq. 9). The function is shown for different values of c_1, the mean of the x_1 variable.

2 Consistency of the Pseudo-Max method

In this section we show that if the feature generating distribution p(x) satisfies particular assumptions, then the pseudo-max approach yields a consistent estimate.
In other words, if the training data is of the form {(x^m, y(x^m; θ*))}_{m=1}^M for some true parameter vector θ*, then as M → ∞ the minimum of the pseudo-max objective will converge to θ* (up to equivalence transformations). The section is organized as follows. First, we provide intuition for the consistency results by considering a model with only two variables. Then, in Sec. 2.1, we show that any parameter θ* can be identified to within arbitrary accuracy by choosing a particular training set (i.e., choice of x^m). This in itself proves consistency, as long as there is a non-zero probability of sampling this set. In Sec. 2.2 we give a more direct proof of consistency by using strict convexity arguments.

For ease of presentation, we shall work with a simplified instance of the structured learning setting. We focus on binary variables, y_i ∈ {0, 1}, and consider discriminant functions corresponding to Ising models, a special case of pairwise MRFs (J denotes the vector of "interaction" parameters):

f(y; x, J) = Σ_{ij∈E} J_ij y_i y_j + Σ_i y_i x_i   (3)

The singleton potential for variable y_i is y_i x_i and is not dependent on the model parameters. We could have instead used J_i y_i x_i, which would be more standard. However, this would make the parameter vector J invariant to scaling, complicating the identifiability analysis. In the consistency analysis we will assume that the data is generated using a true parameter vector J*. We will show that as the data size goes to infinity, minimization of ℓ_ps(J) yields J*.

We begin with an illustrative analysis of the pseudo-max constraints for a model with only two variables, i.e. f(y; x, J) = J y_1 y_2 + y_1 x_1 + y_2 x_2. The purpose of the analysis is to demonstrate general principles for when pseudo-max constraints may succeed or fail. Assume that training samples are generated via y(x) = arg max_y f(y; x, J*). We can partition the input space X into four regions, {x ∈ X : y(x) = ŷ} for each of the four configurations ŷ, shown in Fig. 1 (left).
The blue lines outline the exact decision boundaries of f(y; x, J*), with the lines being given by the constraints in Θ that hold with equality. The red lines denote the pseudo-max constraints in Θ_ps that hold with equality. For x such that y(x) = (1, 0) or (0, 1), the pseudo-max and exact constraints are identical. We can identify J* by obtaining samples x = (x_1, x_2) that explore both sides of one of the decision boundaries that depends on J*. The pseudo-max constraints will fail to identify J* if the samples do not sufficiently explore the transitions between y = (0, 1) and y = (1, 1) or between y = (1, 0) and y = (1, 1). This can happen, for example, when the input samples are dependent, giving rise only to the configurations y = (0, 0) and y = (1, 1). For points labeled (1, 1) around the decision line J* + x_1 + x_2 = 0, pseudo-max can only tell that they respect J* + x_1 ≥ 0 and J* + x_2 ≥ 0 (dashed red lines), or x_1 ≤ 0 and x_2 ≤ 0 for points labeled (0, 0). Only constraints that depend on the parameter are effective for learning. For pseudo-max to be able to identify J*, the input samples must be continuous, densely populating the two parameter-dependent decision lines that pseudo-max can use. The two point sets in the figure illustrate good and bad input distributions for pseudo-max. The diagonal set would work well with the exact constraints but badly with pseudo-max, and the difference can be arbitrarily large. However, the input distribution on the right, populating the J* + x_2 = 0 decision line, would permit pseudo-max to identify J*.

4 The methods differ substantially in the non-separable setting, where we minimize ℓ_ps(θ) using a slack variable for every node and example, rather than just one slack variable per example as in ℓ(θ).
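To make the preceding discussion concrete, the following small sketch (our illustration, not the authors' code) evaluates the pseudo-max objective of Eq. 2 for the Ising discriminant of Eq. 3, with labels generated by exhaustive maximization. The margin term e is omitted here, so on data generated from J* every single-variable flip is non-improving and ℓ_ps(J*) = 0 exactly:

```python
import itertools
import numpy as np

rng = np.random.RandomState(0)
n = 5
J_true = rng.uniform(-1, 1, size=(n, n))
J_true = (J_true + J_true.T) / 2.0          # symmetric "interactions"
np.fill_diagonal(J_true, 0.0)

def f(y, x, J):
    # Ising discriminant of Eq. 3: sum_{ij in E} J_ij y_i y_j + sum_i y_i x_i
    return 0.5 * y @ J @ y + y @ x

def argmax_y(x, J):
    # Exhaustive maximization over {0,1}^n (feasible only for tiny n).
    return np.array(max(itertools.product((0, 1), repeat=n),
                        key=lambda y: f(np.array(y), x, J)))

def pseudo_max_loss(data, J):
    # Eq. 2 without the margin e: only single-variable flips compete
    # with the observed label, so each inner max is over two values.
    total = 0.0
    for x, ym in data:
        fm = f(ym, x, J)
        for i in range(n):
            best = 0.0                      # y_i = y_i^m contributes 0
            for yi in (0, 1):
                y_alt = ym.copy()
                y_alt[i] = yi
                best = max(best, f(y_alt, x, J) - fm)
            total += best
    return total

data = [(x, argmax_y(x, J_true)) for x in rng.uniform(-5, 5, size=(30, n))]
# Labels maximize f under J_true, so no single flip can improve on them,
# while an incorrect parameter vector is penalized by violated flips.
```

Evaluating each example takes only 2n inner comparisons, in contrast with the exponential maximization needed for the exact hinge loss ℓ(θ).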
2.1 Identifiability of True Parameters

In this section, we show that it is possible to approximately identify the true model parameters, up to model equivalence, using the pseudo-max constraints and a carefully chosen linear number of data points. Consider the learning problem for structured prediction defined on a fixed graph G = (V, E), where the parameters to be learned are pairwise potential functions θ_ij(y_i, y_j) for ij ∈ E and single node fields θ_i(y_i) for i ∈ V. We consider discriminant functions of the form

f(y; x, θ) = Σ_{ij∈E} θ_ij(y_i, y_j) + Σ_i θ_i(y_i) + Σ_i x_i(y_i),   (4)

where the input space X = R^{|V|k} specifies the single node potentials. Without loss of generality, we remove the additional degrees of freedom in θ by restricting it to be in a canonical form: θ ∈ Θ_can if for all edges θ_ij(y_i, y_j) = 0 whenever y_i = 0 or y_j = 0, and if for all nodes θ_i(y_i) = 0 when y_i = 0. As a result, assuming the training set comes from a model in this class, and the input fields x_i(y_i) exercise the discriminant function appropriately, we can hope to identify θ* ∈ Θ_can. Indeed, we show that, for some data sets, the pseudo-max constraints are sufficient to identify θ*. Let Θ_ps({y^m, x^m}) be the set of parameters that satisfy the pseudo-max classification constraints

Θ_ps({y^m, x^m}) = { θ | ∀m, i, y_i ≠ y^m_i: f(y^m; x^m, θ) ≥ f(y^m_{-i}, y_i; x^m, θ) }.   (5)

For simplicity we omit the margin losses e(y^m_i, y_i), since the input fields x_i(y_i) already suffice to rule out the trivial solution θ = 0.

Proposition 2.1. For any θ* ∈ Θ_can, there is a set of 2|V|(k − 1) + 2|E|(k − 1)^2 examples, {x^m, y(x^m; θ*)}, such that any pseudo-max consistent θ ∈ Θ_ps({y^m, x^m}) ∩ Θ_can is arbitrarily close to θ*.

The proof is given in the supplementary material. To illustrate the key ideas, we consider the simpler binary discriminant function discussed in Eq. 3. Note that the binary model is already in the canonical form, since J_ij y_i y_j = 0 whenever y_i = 0 or y_j = 0.
For any ij ∈ E, we show how to choose two input examples x^1 and x^2 such that any J consistent with the pseudo-max constraints for these two examples will have J_ij ∈ [J*_ij − ε, J*_ij + ε]. Repeating this for all of the edge parameters then gives the complete set of examples. The input examples we need for this will depend on J*. For the first example, we set the input fields for all neighbors of i (except j) in such a way that we force the corresponding labels to be zero. More formally, we set x^1_k < −|N(k)| max_l |J*_kl| for k ∈ N(i)\j, resulting in y^1_k = 0, where y^1 = y(x^1). In contrast, we set x^1_j to a large value, e.g. x^1_j > |N(j)| max_l |J*_jl|, so that y^1_j = 1. Finally, for node i, we set x^1_i = −J*_ij + ε so as to obtain a slight preference for y^1_i = 1. All other input fields can be set arbitrarily. As a result, the pseudo-max constraints pertaining to node i are f(y^1; x^1, J) ≥ f(y^1_{-i}, y_i; x^1, J) for y_i = 0, 1. By taking into account the label assignments for y^1_i and its neighbors, and by removing terms that are the same on both sides of the inequality, we get J_ij + x^1_i + x^1_j ≥ J_ij y_i + y_i x^1_i + x^1_j, which, for y_i = 0, implies that J_ij + x^1_i ≥ 0, or J_ij − J*_ij + ε ≥ 0. The second example x^2 differs only in terms of the input field for i. In particular, we set x^2_i = −J*_ij − ε so that y^2_i = 0. This gives J_ij ≤ J*_ij + ε, as desired.

2.2 Consistency via Strict Convexity

In this section we prove the consistency of the pseudo-max approach by showing that it corresponds to minimizing a strictly convex function. Our proof only requires that p(x) be non-zero for all x ∈ R^n (a simple example being a multivariate Gaussian) and that J* is finite. We use a discriminant function as in Eq. 3. Now, assume the input points x^m are distributed according to p(x) and that y^m are obtained via y^m = arg max_y f(y; x^m, J*).
We can write the ℓ_ps(J) objective for finite data, and its limit when M → ∞, compactly as:

ℓ_ps(J) = (1/M) Σ_m Σ_i max_{y_i} [ (y_i − y^m_i)( x^m_i + Σ_{k∈N(i)} J_ki y^m_k ) ]
        → Σ_i ∫ p(x) max_{y_i} [ (y_i − y_i(x))( x_i + Σ_{k∈N(i)} J_ki y_k(x) ) ] dx   (6)

where y_i(x) is the label of i for input x when using parameters J*. Starting from the above, consider the terms separately for each i. We partition the integral over x ∈ R^n into exclusive regions according to the predicted labels of the neighbors of i (given x). Define S_ij = {x : y_j(x) = 1 and y_k(x) = 0 for k ∈ N(i)\j}. Eq. 6 can then be written as

ℓ_ps(J) = Σ_i [ ĝ_i({J_ik}_{k∈N(i)}) + Σ_{k∈N(i)} g_ik(J_ik) ],   (7)

where g_ik(J_ik) = ∫_{x∈S_ik} p(x) max_{y_i} [ (y_i − y_i(x))(x_i + J_ik) ] dx, and ĝ_i({J_ik}_{k∈N(i)}) contains all of the remaining terms, i.e. where either zero or more than one neighbor is set to one. The function ĝ_i is convex in J since it is a sum of integrals over convex functions. We proceed to show that g_ik(J_ik) is strictly convex for all choices of i and k ∈ N(i). This will show that ℓ_ps(J) is strictly convex, since it is a sum over functions strictly convex in each one of the variables in J.

For all values x_i ∈ (−∞, ∞) there is some x in S_ij. This is because for any finite x_i and finite J*, the other x_j's can be chosen so as to give the y configuration corresponding to S_ij. Now, since p(x) has full support, we have P(S_ij) > 0 and p(x) > 0 for any x in S_ij. As a result, this also holds for the marginal p_i(x_i | S_ij) over x_i within S_ij. After some algebra, we obtain:

g_ij(J_ij) = P(S_ij) ∫_{−∞}^{∞} p_i(x_i | S_ij) max[0, x_i + J_ij] dx_i − ∫_{x∈S_ij} p(x) y_i(x)(x_i + J_ij) dx

The integral over the y_i(x)(x_i + J_ij) expression just adds a linear term to g_ij(J_ij). The relevant remaining term is (for brevity we drop P(S_ij), a strictly positive constant, and the ij index):

h(J) = ∫_{−∞}^{∞} p_i(x_i | S_ij) max[0, x_i + J] dx_i = ∫_{−∞}^{∞} p_i(x_i | S_ij) ĥ(x_i, J) dx_i   (8)

where we define ĥ(x_i, J) = max[0, x_i + J]. Note that h(J) is convex since ĥ(x_i, J) is convex in J for all x_i.
We want to show that h(J) is strictly convex. Consider J' < J and α ∈ (0, 1), and define the interval I = [−J, −αJ − (1 − α)J']. For x_i ∈ I it holds that αĥ(x_i, J) + (1 − α)ĥ(x_i, J') > ĥ(x_i, αJ + (1 − α)J'), since the first term is strictly positive and the rest are zero. For all other x_i, this inequality holds but is not necessarily strict (since ĥ is always convex in J). We thus have, after integrating over x, that αh(J) + (1 − α)h(J') > h(αJ + (1 − α)J'), implying h is strictly convex, as required. Note that we used the fact that p(x) has full support when integrating over I. The function ℓ_ps(J) is thus a sum of strictly convex functions in all its variables (namely the g_ik(J_ik)) plus other convex functions of J, hence strictly convex.

We can now proceed to show consistency. By strict convexity, the pseudo-max objective is minimized at a unique point J. Since we know that ℓ_ps(J*) = 0 and zero is a lower bound on the value of ℓ_ps(J), it follows that J* is the unique minimizer. Thus we have that as M → ∞, the minimizer of the pseudo-max objective is the true parameter vector, and thus we have consistency.

As an example, consider the case of two variables y_1, y_2, with x_1 and x_2 distributed according to N(c_1, 1) and N(0, 1), respectively. Furthermore, assume J*_12 = 0. Then simple direct calculation yields:

g(J_12) = ((c_1 + J_12)/√(2π)) ∫_{−J_12−c_1}^{−c_1} e^{−x²/2} dx − (1/√(2π)) e^{−c_1²/2} + (1/√(2π)) e^{−(J_12+c_1)²/2}   (9)

which is indeed a strictly convex function that is minimized at J_12 = 0 (see Fig. 1 for an illustration).

3 Hardness of Structured Learning

Most structured prediction learning algorithms use some form of inference as a subroutine. However, the corresponding prediction task is generally NP-hard. For example, maximizing the discriminant function defined in Eq. 3 is equivalent to solving Max-Cut, which is known to be NP-hard. This raises the question of whether it is possible to bypass prediction during learning.
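The closed form in Eq. 9 above can be checked numerically (our illustration, not part of the paper's proof). Writing the Gaussian integral via the standard normal CDF Φ and density φ gives g(J_12) = (c_1 + J_12)[Φ(−c_1) − Φ(−J_12 − c_1)] − φ(c_1) + φ(J_12 + c_1), which should be strictly convex with its minimum at J_12 = 0 for any c_1:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def Phi(t):  # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def phi(t):  # standard normal pdf
    return exp(-t * t / 2.0) / sqrt(2.0 * pi)

def g(J, c1):
    # Eq. 9 with the Gaussian integral expressed through Phi.
    return (c1 + J) * (Phi(-c1) - Phi(-J - c1)) - phi(c1) + phi(J + c1)

Js = np.linspace(-1.0, 1.0, 201)            # grid containing J = 0
for c1 in (-1.0, 0.0, 1.0):
    vals = np.array([g(J, c1) for J in Js])
    # Discrete second differences approximate g''(J) h^2 and should all
    # be positive (strict convexity), with the minimum at the J = 0 point.
    d2 = vals[:-2] - 2.0 * vals[1:-1] + vals[2:]
```

The second derivative works out to φ(J_12 + c_1) > 0, matching the strictly convex curves shown in Fig. 1 (right).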
Although prediction may be intractable for arbitrary MRFs, what does this say about the difficulty of learning with a polynomial number of data points? In this section, we show that the problem of deciding whether there exists a parameter vector that separates the training data is NP-hard. Put in the context of the positive results in this paper, these hardness results show that, although in some cases the pseudo-max constraints yield a consistent estimate, we cannot hope for a certificate of optimality. Put differently, although the pseudo-max constraints in the separable case always give an outer bound on Θ (and may even be a single point), Θ could be the empty set – and we would never know the difference.

Theorem 3.1. Given labeled examples {(x^m, y^m)}_{m=1}^M for a fixed but arbitrary graph G, it is NP-hard to decide whether there exist parameters θ such that ∀m, y^m = arg max_y f(y; x^m, θ).

Proof. Any parameters θ have an equivalent parameterization in canonical form (see Sec. 2.1 and the supplementary material). Thus, the examples will be separable if and only if they are separable by some θ ∈ Θ_can. We reduce from unweighted Max-Cut. The Max-Cut problem is to decide, given an undirected graph G, whether there exists a cut of at least K edges. We use the same graph as G for the model, with k = 3 states per variable. We construct a small set of examples where a parameter vector will exist that separates the data if and only if there is no cut of K or more edges in G. Let θ be parameters in canonical form equivalent to θ'_ij(y_i, y_j) = 1 if (y_i, y_j) ∈ {(1, 2), (2, 1)}, 0 if y_i = y_j, and −n² if (y_i, y_j) ∈ {(1, 3), (2, 3), (3, 1), (3, 2)}. We first construct 4n + 8|E| examples, using the technique described in Sec. 2.1 (see also the supplementary material), which, when restricted to the space Θ_can, constrain the parameters to equal θ. We then use one more example (x^m, y^m) where y^m = (3, ..., 3) (every node is in state 3) and, for all i, x^m_i(3) = (K − 1)/n and x^m_i(1) = x^m_i(2) = 0.
The first two states encode the original Max-Cut instance, while the third state is used to construct a labeling y^m that has value equal to K − 1, and is otherwise not used. Let K* be the value of the maximum cut in G. If, in any assignment to the last example, some variable takes state 3 while another takes state 1 or 2, then the assignment's value is at most K* − n², which is less than zero. By construction, the all-3 assignment has value K − 1. Thus, the optimal assignment must either be the all-3 assignment with value K − 1, or some combination of states 1 and 2, which has value at most K*. If K* > K − 1 then the all-3 assignment is not optimal and the examples are not separable. If K* ≤ K − 1, the examples are separable.

This result illustrates the potential difficulty of learning in worst-case graphs. Nonetheless, many problems have a more restricted dependence on the input. For example, in computer vision, edge potentials may depend only on the difference in color between two adjacent pixels. Our results do not preclude positive results of learnability in such restricted settings. By establishing hardness of learning, we also close the open problem of relating hardness of inference and learning in structured prediction: if inference problems can be solved in polynomial time, then so can learning (using, e.g., the structured perceptron). Thus, when learning is hard, inference must be hard as well.

4 Experiments

To evaluate our learning algorithm, we test its performance on both synthetic and real-world datasets. We show that, as the number of training samples grows, the accuracy of the pseudo-max method improves and its speed-up gain over competing algorithms increases. Our learning algorithm corresponds to solving the following problem, where we add L2 regularization and use a scaled 0-1 loss e(y_i, y^m_i) = 1{y_i ≠ y^m_i}/n_m (n_m is the number of labels in example m):

  min_θ  (C / Σ_m n_m) Σ_{m=1}^M Σ_{i=1}^{n_m} max_{y_i} [ f(y^m_{−i}, y_i; x^m, θ) − f(y^m; x^m, θ) + e(y_i, y^m_i) ] + ∥θ∥²
(10)

We will compare the pseudo-max method with learning using structural SVMs, both with exact inference and LP relaxations [see, e.g., 4]. We use exact inference for prediction at test time.

Figure 2: Test error as a function of train size for various algorithms (exact, LP-relaxation, pseudo-max). Subfigure (a) shows results for a synthetic setting, while (b) shows performance on the Reuters data.

In the synthetic setting we use the discriminant function f(y; x, θ) = Σ_{ij∈E} θ_ij(y_i, y_j) + Σ_i x_i θ_i(y_i), which is similar to Eq. 4. We take a fully connected graph over n = 10 binary labels. For a weight vector θ* (sampled once, uniformly in the range [−1, 1], and used for all train/test sets) we generate train and test instances by sampling x^m uniformly in the range [−5, 5] and then computing the optimal labels y^m = arg max_{y∈Y} f(y; x^m, θ*). We generate train sets of increasing size (M = {10, 50, 100, 500, 1000, 5000}), run the learning algorithms, and measure the test error for the learned weights (with 1000 test samples). For each train size we average the test error over 10 repeats of sampling and training. Fig. 2(a) shows a comparison of the test error for the three learning algorithms. For small numbers of training examples, the test error of pseudo-max is larger than that of the other algorithms. However, as the train size grows, the error converges to that of exact learning, as our consistency results predict. We also test the performance of our algorithm on a multi-label document classification task from the Reuters dataset [7]. The data consists of M = 23149 training samples, and we use a reduction of the dataset to the 5 most frequent labels. The 5 label variables form a fully connected pairwise graph structure (see [4] for a similar setting).
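The per-example term of Eq. (10) is cheap to evaluate because the inner maximization ranges over a single label at a time. A minimal sketch for the synthetic pairwise setting above (hypothetical data layout: `theta` holds unary and pairwise tables for binary labels; not the authors' implementation):

```python
def f(y, x, theta):
    # Discriminant of the synthetic setting:
    # f(y; x, theta) = sum_{i<j} theta_pair[i][j][y_i][y_j] + sum_i x_i * theta_unary[i][y_i]
    n = len(y)
    val = sum(theta["pair"][i][j][y[i]][y[j]]
              for i in range(n) for j in range(i + 1, n))
    val += sum(x[i] * theta["unary"][i][y[i]] for i in range(n))
    return val

def pseudo_max_loss(x, ym, theta):
    # Per-example term of Eq. (10):
    # sum_i max_{y_i} [ f(y^m_{-i}, y_i) - f(y^m) + 1{y_i != y^m_i}/n ]
    n = len(ym)
    base = f(ym, x, theta)
    total = 0.0
    for i in range(n):
        best = float("-inf")
        for yi in (0, 1):  # binary labels
            alt = list(ym)
            alt[i] = yi
            best = max(best, f(alt, x, theta) - base + (yi != ym[i]) / n)
        total += best
    return total
```

Because y_i = y^m_i is always a candidate, each inner maximum is at least zero, and the loss is exactly zero when y^m beats every single-variable perturbation by a margin of 1/n.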
We use random subsamples of increasing size from the train set to learn the parameters, and then measure the test error using 20000 additional samples. For each sample size and learning algorithm, we optimize the trade-off parameter C using 30% of the training data as a hold-out set. Fig. 2(b) shows that for the large data regime the performance of pseudo-max learning gets close to that of the other methods. However, unlike the synthetic setting there is still a small gap, even after seeing the entire train set. This could be because the full dataset is not yet large enough to be in the consistent regime (note that exact learning has not flattened either), or because the consistency conditions are not fully satisfied: the data might be non-separable or the support of the input distribution p(x) may be partial. We next apply our method to the problem of learning the energy function for protein side-chain placement, mirroring the learning setup of [14], where the authors train a conditional random field (CRF) using tree-reweighted belief propagation to maximize a lower bound on the likelihood.5 The prediction problem for side-chain placement corresponds to finding the most likely assignment in a pairwise MRF, and fits naturally into our learning framework. There are only 8 parameters to be learned, corresponding to a reweighting of known energy terms. The dataset consists of 275 proteins, where each MRF has several hundred variables (one per residue of the protein) and each variable has on average 20 states. For prediction we use CPLEX’s ILP solver. Fig. 
3 shows a comparison of the pseudo-max method and a cutting-plane algorithm which uses an LP relaxation, solved with CPLEX, for finding violated constraints.6 We generate training sets of increasing size (M = {10, 50, 100, 274}), and measure the test error for the learned weights on the remaining examples.7 For M = 10, 50, 100 we average the test error over 3 random train/test splits, whereas for M = 274 we do 1-fold cross validation. We use C = 1 for both algorithms.

5 The authors' data and results are available from: http://cyanover.fhcrc.org/recomb-2007/
6 We significantly optimized the cutting-plane algorithm, e.g. including a large number of initial cutting-planes and restricting the weight vector to be positive (which we know to hold at optimality).
7 Specifically, for each protein we compute the fraction of correctly predicted χ1 and χ2 angles for all residues (except when trivial, e.g. just 1 state). Then, we compute the median of this value across all proteins.

Figure 3: Training time (for one train/test split) and test error (χ1 and χ2) as a function of train size for both the pseudo-max method and a cutting-plane algorithm which uses an LP relaxation for inference, applied to the problem of learning the energy function for protein side-chain placement. The pseudo-max method obtains better accuracy than both the LP relaxation and HCRF (given roughly five times more data) for a fraction of the training time.

The original weights ("Soft rep" [3]) used for this energy function have 26.7% error across all 275 proteins. The best previously reported parameters, learned in [14] using a Hidden CRF, obtain 25.6% error (their training set included 55 of these 275 proteins, so this is an optimistic estimate).
To get a sense of the difficulty of this learning task, we also tried a random positive weight vector, uniformly sampled from the range [0, 1], obtaining an error of 34.9% (results would be much worse if we allowed the weights to be negative). Training using pseudo-max with 50 examples, we learn parameters in under a minute that give better accuracy than the HCRF. The speed-up of training with pseudo-max (using CPLEX's QP solver) versus cutting-plane is striking. For example, for M = 10, pseudo-max takes only 3 seconds, a 1000-fold speedup. Unfortunately, the cutting-plane algorithm took a prohibitive amount of time to run on the larger training sets. Since the data used in learning for protein side-chain placement is both highly non-separable and relatively scarce, these positive results illustrate the potential widespread applicability of the pseudo-max method.

5 Discussion

The key idea of our method is to find parameters that prefer the true assignment y^m over assignments that differ from it in only one variable, in contrast to all other assignments. Perhaps surprisingly, this weak requirement is sufficient to achieve consistency given a rich enough input distribution. One extension of our approach is to add constraints for assignments that differ from y^m in more than one variable. This would tighten the outer bound on Θ and possibly result in improved performance, but would also increase computational complexity. We could also add such competing assignments via a cutting-plane scheme so that optimization is performed only over a subset of these constraints. Our work raises a number of important open problems: it would be interesting to derive generalization bounds to understand the convergence rate of our method, as well as to understand the effect of the distribution p(x) on these rates. The distribution p(x) needs to have two key properties.
On the one hand, it needs to explore the space Y in the sense that a sufficient number of labels need to be obtained as the correct label for the true parameters (this is indeed used in our consistency proofs). On the other hand, p(x) needs to be sufficiently sensitive close to the decision boundaries so that the true parameters can be inferred. We expect that generalization analysis will depend on these two properties of p(x). Note that [11] studied active learning schemes for structured data and may be relevant in the current context. How should one apply this learning algorithm to non-separable data sets? We suggested one approach, based on using a hinge loss for each of the pseudo constraints. One question in this context is: how resilient is this learning algorithm to label noise? Recent work has analyzed the sensitivity of pseudo-likelihood methods to model mis-specification [8], and it would be interesting to perform a similar analysis here. Also, is it possible to give any guarantees for the empirical and expected risks (with respect to exact inference) obtained by outer-bound learning versus exact learning? Finally, our algorithm demonstrates a phenomenon where more data can make computation easier. Such a scenario was recently analyzed in the context of supervised learning [12], and it would be interesting to combine the approaches.

Acknowledgments: We thank Chen Yanover for his assistance with the protein data. This work was supported by BSF grant 2008303 and a Google Research Grant. D.S. was supported by a Google PhD Fellowship.

References
[1] J. Besag. The analysis of non-lattice data. The Statistician, 24:179–195, 1975.
[2] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, 2002.
[3] G. Dantas, C. Corrent, S. L. Reichow, J. J. Havranek, Z. M. Eletr, N. G. Isern, B. Kuhlman, G. Varani, E. A. Merritt, and D. Baker.
High-resolution structural and thermodynamic analysis of extreme stabilization of human procarboxypeptidase by computational protein design. Journal of Molecular Biology, 366(4):1209–1221, 2007.
[4] T. Finley and T. Joachims. Training structural SVMs when exact inference is intractable. In Proceedings of the 25th International Conference on Machine Learning, pages 304–311. ACM, 2008.
[5] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[6] A. Kulesza and F. Pereira. Structured learning with approximate inference. In Advances in Neural Information Processing Systems 20, pages 785–792. 2008.
[7] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: a new benchmark collection for text categorization research. JMLR, 5:361–397, 2004.
[8] P. Liang and M. I. Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In Proceedings of the 25th International Conference on Machine Learning, pages 584–591, New York, NY, USA, 2008. ACM Press.
[9] A. F. T. Martins, N. A. Smith, and E. P. Xing. Polyhedral outer approximations with application to natural language parsing. In ICML 26, pages 713–720, 2009.
[10] N. Ratliff, J. A. D. Bagnell, and M. Zinkevich. (Online) subgradient methods for structured prediction. In AISTATS, 2007.
[11] D. Roth and K. Small. Margin-based active learning for structured output spaces. In Proc. of the European Conference on Machine Learning (ECML). Springer, September 2006.
[12] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928–935. ACM, 2008.
[13] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. In Advances in Neural Information Processing Systems 16, pages 25–32. 2004.
[14] C. Yanover, O. Schueler-Furman, and Y. Weiss. Minimizing and learning energy functions for side-chain prediction.
Journal of Computational Biology, 15(7):899–911, 2008.
Lifted Inference Seen from the Other Side: The Tractable Features

Abhay Jha, Vibhav Gogate, Alexandra Meliou, Dan Suciu
Computer Science & Engineering, University of Washington, Seattle, WA 98195
{abhaykj,vgogate,ameli,suciu}@cs.washington.edu

Abstract

Lifted inference algorithms for representations that combine first-order logic and graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard probabilistic inference algorithm (e.g., variable elimination, belief propagation, etc.) and improve its efficiency by exploiting repeated structure in the first-order model. In this paper, we propose an approach from the other side, in that we use techniques from logic for probabilistic inference. In particular, we define a set of rules that look only at the logical representation to identify models for which exact efficient inference is possible. Our rules yield new tractable classes that could not be solved efficiently by any of the existing techniques.

1 Introduction

Recently, there has been a push towards combining logical and probabilistic approaches in Artificial Intelligence. It is motivated in large part by the representation and reasoning challenges in real-world applications: many domains such as natural language processing, entity resolution, target tracking and bioinformatics contain both rich relational structure, and uncertain and incomplete information. Logic is good at handling the former but lacks the representation power to model the latter. On the other hand, probability theory is good at modeling uncertainty but inadequate at handling relational structure. Many representations that combine logic with graphical models, a popular probabilistic representation [1, 2], have been proposed over the last few years. Among them, Markov logic networks (MLNs) [2, 3] are arguably the most popular.
In its simplest form, an MLN is a set of weighted first-order logic formulas, and can be viewed as a template for generating a Markov network. Specifically, given a set of constants that model objects in the domain, it represents a ground Markov network that has one (propositional) feature for each grounding of each (first-order) formula with constants in the domain. Until recently, most inference schemes for MLNs were propositional: inference was carried out by first constructing a ground Markov network and then running a standard probabilistic inference algorithm over it. Unfortunately, the ground Markov network is typically quite large, containing millions and sometimes even billions of inter-related variables. This precludes the use of existing probabilistic inference algorithms, as they are unable to handle networks at this scale. Fortunately, in some cases, one can perform lifted inference in MLNs without grounding out the domain. Lifted inference treats sets of indistinguishable objects as one, and can yield exponential speed-ups over propositional inference. Many lifted inference algorithms have been proposed over the last few years (c.f. [4, 5, 6, 7]). All of them are based on the same principle: take an existing probabilistic inference algorithm and try to lift it by carrying out inference over groups of random variables that behave similarly during the algorithm's execution. In other words, these algorithms are basically lifted versions of standard probabilistic inference algorithms.

Table 1: An example MLN (modified from [10]).

  Interpretation in English            | Feature                                   | Weight
  Most people don't smoke              | ¬Smokes(X)                                | 1.4
  Most people don't have asthma        | ¬Asthma(X)                                | 2.3
  Most people aren't friends           | ¬Friends(X,Y)                             | 4.6
  People who have asthma don't smoke   | Asthma(X) ⇒ ¬Smokes(X)                    | 1.5
  Asthmatics don't have smoker friends | Asthma(X) ∧ Friends(X,Y) ⇒ ¬Smokes(Y)     | 1.1
For example, first-order variable elimination [4, 5, 7] lifts the standard variable elimination algorithm [8, 9], while lifted belief propagation [10] lifts Pearl's belief propagation [11, 12]. In this paper, we depart from existing approaches, and present a new approach to lifted inference from the other, logical side. In particular, we propose a set of rewriting rules that exploit the structure of the logical formulas for inference. Each rule takes an MLN as input and expresses its partition function as a combination of partition functions of simpler MLNs (if the preconditions of the rule are satisfied). Inference is tractable if we can evaluate an MLN using this set of rules. We analyze the time complexity of our algorithm and identify new tractable classes of MLNs, which have not been previously identified. Our work derives heavily from the database literature, in which inference techniques based on manipulating logical formulas (queries) have been investigated rigorously [13, 14]. However, the techniques that they propose are not lifted. Our algorithm extends their techniques to lifted inference, and thus can be applied to a strictly larger class of probabilistic models. To summarize, our algorithm is truly lifted, namely we never ground the model, and it offers guarantees on the running time. This comes at the cost that we do not allow arbitrary MLNs. However, the set of tractable MLNs is quite large, and includes MLNs that cannot be solved in PTIME by any of the existing lifted approaches. The small toy MLN given in Table 1 is one such example. This MLN is also out of reach of state-of-the-art propositional inference approaches such as variable elimination [8, 9], which are exponential in treewidth. This is because the treewidth of the ground Markov network is polynomial in the number of constants in the domain.

2 Preliminaries

In this section we will cover some preliminaries and notation used in the rest of the paper.
A feature (f_i) is constructed using constants, variables, and predicates. Constants, denoted with lower-case letters (e.g. a), are used to represent a particular object. An upper-case letter (e.g. X) indicates a variable associated with a particular domain (∆X), ranging over all objects in its domain. Predicate symbols (e.g. Friends) are used to represent relationships between the objects. For example, Friends(bob,alice) denotes that Alice (represented by constant alice) and Bob (constant bob) are friends. An atom is a predicate symbol applied to a tuple of variables or constants. For example, Friends(bob,X) and Friends(bob,alice) are atoms. A conjunctive feature is of the form ∀X̄ r_1 ∧ r_2 ∧ ··· ∧ r_k, where each r_i is an atom or the negation of an atom, and X̄ are the variables used in the atoms. Similarly, a disjunctive feature is of the form ∀X̄ r_1 ∨ r_2 ∨ ··· ∨ r_k. For example, f_c: ∀X ¬Smokes(X) ∧ Asthma(X) is a conjunctive feature, while f_d: ∀X ¬Smokes(X) ∨ ¬Friends(bob,X) is a disjunctive feature. The former asserts that everyone in the domain of X has asthma and does not smoke. The latter says that if a person smokes, he/she cannot be friends with Bob. A grounding of a feature is an assignment of the variables to constants from their domain. For example, ¬Smokes(alice) ∨ ¬Friends(bob,alice) is a grounding of the disjunctive feature f_d. We assume that no predicate symbol occurs more than once in a feature, i.e., we don't allow self-joins. In this work we focus on features containing only universal quantifiers (∀), and will from now on drop the quantification symbol ∀ from the notation. Given a set {(w_i, f_i)}_{i=1,...,k}, where each f_i is a conjunctive or disjunctive feature and w_i ∈ R is a weight assigned to that feature, we define the following probability distribution over a possible world ω in accordance with Markov logic networks (MLNs):

  Pr(ω) = (1/Z) exp( Σ_i w_i N(f_i, ω) )
(1)

In Equation (1), a possible world ω can be any subset of tuples from the domains of the predicates, the normalizing constant Z is called the partition function, and N(f_i, ω) is the number of groundings of feature f_i that are true in the world ω. Table 1 gives an example of an MLN that has been modified from [10]. There is an implicit type-safety assumption in MLNs: if a predicate symbol occurs in more than one feature, then the variables used at the same position must have the same domain. In the MLN of Table 1, if ∆X = ∆Y = {alice, bob}, then the predicates Smokes and Asthma each have two tuples, while Friends has four. Hence, the total number of possible worlds is 2^(2+2+4) = 256. Consider the possible world ω below:

  Smokes:  bob
  Asthma:  bob, alice
  Friends: (bob,bob), (bob,alice), (alice,bob), (alice,alice)

Then from Equation (1): Pr(ω) = (1/Z) exp(1.4 · 1 + 2.3 · 0 + 4.6 · 0 + 1.5 · 1 + 1.1 · 2). In this paper we focus on MLNs, but our algorithm is applicable to other first-order probabilistic models as well.

3 Problem Statement

In this paper, we are interested in computing the partition function Z(M) of an MLN M. We formulate the partition function in a parametrized form, using the notion of generating functions of Counting Programs (CPs). A Counting Program is a set of features f̄ along with indeterminates ᾱ, where α_i is the indeterminate for f_i. Given a counting program P = (f_i, α_i)_{i=1...k}, we define its generating function (GF) F_P as follows:

  F_P(ᾱ) = Σ_ω Π_i α_i^{N(f_i,ω)}   (2)

The generating function as expressed in Eq. 2 is in general of exponential size in the domain of objects. We want to characterize cases where we can express it more succinctly, and hence compute the partition function faster. Let n be the size of the object domain, and k be the size of our program. Then we are interested in the cases where F_P can be computed with the following number of arithmetic operations.
  Closed form                  : polynomial in log(n), k
  Polynomial expression        : polynomial in n, k
  Pseudo-polynomial expression : polynomial in n for bounded k

Computing F_P refers to evaluating it for one instantiation of the parameters ᾱ. To illustrate the above cases, let k = 1; then the pseudo-polynomial and polynomial expressions are equivalent. The program (R(X, Y), α) has GF (1 + α)^{|∆X||∆Y|}, which is in closed form, while the program (R(X) ∧ S(X, Y) ∧ T(Y), α) has GF

  2^{|∆X||∆Y|} Σ_{i=0}^{|∆X|} C(|∆X|, i) (1 + ((1 + α)/2)^i)^{|∆Y|},

which is a polynomial expression. This polynomial does not have a closed form. In the following section we demonstrate an algorithm that computes the generating function, and allows us to identify cases where the generating function falls under one of the above categories.

4 Computing the Generating Function

Assume a Counting Program P = (f_i, α_i)_{i=1,...,k}. In this section, we present some rules that can be used to compute the GF of a CP from simpler CPs. We can then upper bound the size of F_P by the choice of rules used. The cases which cannot be evaluated by these rules are still open, and we don't know if the GF in those cases can be expressed succinctly. We will require that all CPs are in normal form to simplify our analysis. Note that the normality requirement does not change the class of CPs that can be solved in PTIME by our algorithm, because every CP can be converted to an equivalent normal CP in PTIME.

4.1 Normal Counting Programs

Definition 4.1 A counting program is called normal if it satisfies the following properties:
1. There are no constants in any feature.
2. If two distinct atoms with the same predicate symbol have variables X and Y in the same position, then ∆X = ∆Y.

It is easy to show that:

Proposition 4.2 Computing the partition function of an MLN can be reduced in PTIME to computing the generating function of a normal CP.

The following example demonstrates how to normalize a set of features.
Example 4.3 Consider a CP containing two features Friends(X, Y) and Friends(bob, Y). Clearly, it is not in normal form because the second feature contains a constant. To normalize it, we can replace the two features by: (i) Friends1(Y) ≡ Friends(bob, Y), and (ii) Friends2(Z, Y) ≡ Friends(X, Y), X ≠ bob, where the domain of Z is ∆Z = ∆X \ {bob}. Note that we assume criterion 2 is satisfied in MLNs. During the course of the algorithm, it may be violated when we replace variables with constants, as we will see, but we can use the above transformation whenever that happens. So from now on we assume that our CP is normalized.

4.2 Preliminaries and Operators

We proceed to establish notation and operators used by our algorithm. Given a feature f, we denote by Vars(f) the set of variables used in its atoms. We assume that variables used in different features must be different. Furthermore, without loss of generality, we assume numeric domains for each logical variable, namely ∆X = {1, ..., |∆X|}. We define a substitution f[a/X], where X ∈ Vars(f) and a ∈ ∆X, as the replacement of X with a in every atom of f. P[a/X] applies the substitution f_i[a/X] to every feature f_i in P. Note that after a substitution, the CP is no longer normal and therefore we may have to normalize it. Define a relation U among the variables of a CP as follows: U(X, Y) iff there exist two atoms r_i, r_j with the same predicate, such that X ∈ Vars(r_i), Y ∈ Vars(r_j), and X and Y appear at the same position in r_i and r_j respectively. Let Ū be the transitive closure of U. Note that Ū is an equivalence relation. For a variable X, denote by Unify(X) its equivalence class under Ū. For example, given the two features Smokes(X) ∧ ¬Asthma(X) and ¬Smokes(Y) ∨ ¬Friends(Z,Y), we have Unify(X) = Unify(Y) = {X, Y}. Given a feature, a variable is a root variable iff it appears in every atom of the feature.
For some variable X, the set X̄ = Unify(X) is a separator if, for every Y ∈ X̄, Y ∈ Vars(f_i) implies that Y is a root variable for f_i. In the last example, the set {X, Y} is a separator. Notice that, since the program is normal, we have ∆X = ∆Y whenever Y ∈ Unify(X); thus, if X̄ is a separator, we write ∆X̄ for ∆Y for any Y ∈ Unify(X). Two variables X and Y are called equivalent if there is a bijection from Unify(X) to Unify(Y) such that for any Z1 ∈ Unify(X) and its image Z2 ∈ Unify(Y), Z1 and Z2 always occur together. Next, we define three operators used by our algorithm: splitting, conditioning and Dirichlet convolution. We define a process Split(Y, k) that splits every feature in the CP that contains the variable Y into two features with disjoint domains: one with ∆Y = {k} and the other with ∆Y^c = ∆Y − {k}. Both features retain the same indeterminate. Also, Cond(i, r, k) defines a process that removes an atom r from feature f_i. Denote f′_i = f_i \ {r}; then Cond(i, r, k) replaces f_i with (i) two features (TRUE, α_i^k) and (f′_i, 1) if r ⇒ f_i, (ii) one feature (f′_i, 1) if r ⇒ ¬f_i, and (iii) (f′_i, α_i) otherwise. Given two polynomials P = Σ_{i=0}^n a_i α^i and Q = Σ_{i=0}^m b_i α^i, their Dirichlet convolution, P∗Q, is defined as:

  P∗Q = Σ_{i,j} a_i b_j α^{ij}
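As a minimal sketch (not from the paper), the Dirichlet convolution can be computed directly by representing each polynomial as a map from exponent to coefficient:

```python
def dirichlet_conv(P, Q):
    # P, Q: polynomials as {exponent: coefficient} maps.
    # Returns the map for sum_{i,j} a_i b_j alpha^{ij}: exponents multiply
    # (rather than add, as in ordinary polynomial multiplication).
    out = {}
    for i, a in P.items():
        for j, b in Q.items():
            out[i * j] = out.get(i * j, 0) + a * b
    return out
```

For example, the GF of a single tuple is 1 + α, i.e. {0: 1, 1: 1}; convolving it with itself gives 3 + α, which is indeed the GF of the conjunctive feature R(a) ∧ S(b): of the four worlds, exactly one satisfies the feature.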
The recursion terminates when the generating function of the CP is trivial to compute (SUCCESS) or when none of the rules can be applied (FAILURE). In the case, when algorithm succeeds, we analyze whether the GF is in closed form or is a polynomial expression. Next, we present our algorithm which is essentially a sequence of rules. Given a CP, we go through the rules in order and apply the first applicable rule, which may require us to recursively compute the GF of simpler CPs, for which we continue in the same way. Our first rule uses feature and variable equivalence to reduce the size of the CP. Formally, Rule R1 (Variable and Feature Equivalence Rule) If variables X and Y are equivalent, replace the pair with a single new variable Z in every atom where they occur. Do the same for every pair of variables in Unify(X), Unify(Y ). If two features fi, fj are identical, then we replace them with a single feature fi with indeterminate αiαj that is the product of their individual indeterminates. The correctness of Rule R1 is immediate from the fact that the CP after the transformation is equal to the CP before the transformation. Our second rule specifies some trivial manipulations. Rule R2 (Trivial manipulations) 1. Eliminate FALSE features. 2. If a feature fi is TRUE, then FP = αiFP −fi. 3. If a program P is just a tuple then FP = 1 + α, where α is the indeterminate. 4. If some feature fi has indeterminate αi = 1 (due to R6), then remove all the atoms in fi of a predicate symbol that is present in some other feature. Let N be the product of the domain of the rest of the atoms, then FP = 2NFP −fi. Our third rule utilizes the independence property. Intuitively, given two CPs which are independent, namely they have no atoms in common, the generating function of the joint CP is simply the product of the generating function of the two CPs. 
Formally,

Rule R3 (Independence Rule) If a CP P can be split into two programs P1 and P2 such that the two programs don't have any predicate symbols in common, then F_P = F_{P1} · F_{P2}.

The correctness of Rule R3 follows from the fact that every world ω of P can be written as a concatenation of two disjoint worlds, namely ω = (ω1 ∪ ω2), where ω1 and ω2 are worlds from P1 and P2 respectively. Hence the GF can be written as:

  F_P = Σ_{ω1∪ω2} Π_{f_i∈P1} α_i^{N(f_i,ω1)} Π_{f_i∈P2} α_i^{N(f_i,ω2)} = (Σ_{ω1} Π_{f_i∈P1} α_i^{N(f_i,ω1)}) (Σ_{ω2} Π_{f_i∈P2} α_i^{N(f_i,ω2)}) = F_{P1} · F_{P2}   (3)

The next rule allows us to split a feature if it has a component that is independent of the rest of the program. Note that while the previous rule splits the program into two independent sets of features, this rule enables us to split a single feature.

Rule R4 (Dirichlet Convolution Rule) If the program contains a feature f = f1 ∧ f2, such that f1 doesn't share any variables or symbols with any atom in the program, then F_P = F_{f1} ∗ F_{P−f+f2}. Similarly, if f = f1 ∨ f2, then F_P = F_{f1} ∗_c F_{P−f+f2}.
Thus, we have the following rule:

Rule R5 (Power Rule) Let $\bar{X}$ be a separator. Then $F_P = \left(F_{P[\bar{a}/\bar{X}]}\right)^{|\Delta_{\bar{X}}|}$.

Rule R5 generalizes the inversion and partial inversion operators given in [4, 5]. Its correctness follows in a straightforward manner from the correctness of the independence rule. Our final rule generalizes the counting arguments presented in [5, 7]. Consider a singleton atom $R(X)$. Conditioning over all possible truth assignments to all groundings of $R(X)$ yields $2^{|\Delta_X|}$ independent CPs. Thus, the GF can be written as a sum over the generating functions of $2^{|\Delta_X|}$ independent CPs. However, the resulting GF has exponential complexity. In some cases, though, the sum can be written efficiently by grouping together GFs that are equivalent.

Rule R6 (Generalized Binomial Rule) Let $\text{Pred}(X)$ be a singleton atom in some feature. For every $Y \in \text{Unify}(X)$ apply $\text{Split}(Y, k)$. Then for every feature $f_i$ in the new program containing an atom $r = \text{Pred}(Y)$ apply $(f_i, \alpha_i) \leftarrow \text{Cond}(i, r, k)$, and similarly $(f_i, \alpha_i) \leftarrow \text{Cond}(i, \neg r, \Delta_{Y^c} - k)$ for those containing $r = \text{Pred}(Y^c)$. Let the resulting program be $P_k$. Then
$$F_P = \sum_{k=0}^{\Delta_X} \binom{\Delta_X}{k} F_{P_k}.$$
Note that $P_k$ is just one CP whose GF has a parameter $k$. The proof is somewhat involved and omitted here for lack of space.

Having specified the rules and established their correctness, we now present the main result of this paper:

Theorem 4.4 Let $P$ be a Counting Program (CP).
• If $P$ can be evaluated using only rules R1, R2, R3, and R5, then it has a closed form.
• If $P$ can be evaluated using only rules R1, R2, R3, R4, and R5, then it has a polynomial expression.
• If $P$ can be evaluated using rules R1 to R6, then it admits a pseudo-polynomial expression.

Computing the Dirichlet convolution (Rule R4) requires going through all the coefficients, hence it takes linear time. Thus, we do not have a closed-form solution when we apply Rule R4.
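The rules can be sanity-checked by brute-force enumeration over worlds. For the single-feature program $P = \{(R(X), \alpha)\}$ over domain $[n]$ (the worked example in Sec. 4.4), the power rule R5 predicts $F_P = (1+\alpha)^n$, i.e., binomial coefficients. This small sketch (function name ours) enumerates all $2^n$ worlds and tallies feature satisfactions:

```python
from itertools import product
from math import comb

def gf_brute(n):
    """Enumerate all 2^n worlds of the program P = {(R(X), alpha)} over
    domain [n] and tally how often the feature is satisfied in each world.
    Returns coefficients c[k] = number of worlds where R(X) holds for
    exactly k groundings."""
    coeff = [0] * (n + 1)
    for world in product([0, 1], repeat=n):   # truth value of each R(i)
        coeff[sum(world)] += 1
    return coeff

n = 6
# Rule R5 predicts F_P = (1 + alpha)^n, i.e., binomial coefficients.
assert gf_brute(n) == [comb(n, k) for k in range(n + 1)]
print(gf_brute(n))  # [1, 6, 15, 20, 15, 6, 1]
```

Keeping the GF in the closed form $(1+\alpha)^n$ rather than as an explicit coefficient list is exactly the succinctness the paper exploits.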
Rule R6 implies that we must recurse over more than one program, so its repeated application can force us to solve a number of programs that is exponential in the size of the program. Therefore, we can only guarantee a pseudo-polynomial expression when this rule is used. We can now see the effectiveness of generating functions. When we want to recurse over a set of features, keeping only the partition function of the smaller features is not enough; we need more information than that. In particular, we need all the coefficients of the generating function. For example, we cannot compute the partition function of $R(X) \wedge S(Y)$ from just the partition functions of $R(X)$ and $S(Y)$. However, if we have their GFs, the GF of $f = R(X) \wedge S(Y)$ is just the Dirichlet convolution of the GF of $R(X)$ with the GF of $S(Y)$. One could also compute the GF of $f$ using a dynamic programming algorithm that keeps all the coefficients of the generating function. Generating functions let us store this information in a very succinct way: if the GF is $(1+\alpha)^n$, it is much simpler to use this representation than to keep all $n+1$ binomial coefficients $\binom{n}{k}$, $k = 0, \ldots, n$.

Figure 1: Our approach vs. FOVE for increasing domain sizes. X and Y axes drawn on a log scale. [Plot omitted; legend: Counting Program (evidence 30%), FOVE (evidence 30%), FOVE extrapolation.]

Figure 2: Our approach vs. FOVE as the evidence increases. Y axis drawn on a log scale. [Plot omitted; legend: Counting Program (domain size 13), Counting Program (domain size 100), FOVE (domain size 13).]

4.4 Examples

We illustrate our approach through examples. We use simple predicate symbols such as $R$, $S$, $T$ and assume the domain of all variables is $[n]$. Note that for a single tuple, say $R(a)$ with indeterminate $\alpha$, the GF is $1 + \alpha$ by rule R2. Now suppose we have a simple program $P = \{(R(X), \alpha)\}$ (a single feature $R(X)$ with indeterminate $\alpha$).
Then from rule R5: $F_P = \left(F_{P[a/X]}\right)^n = (1+\alpha)^n$. These are both examples of programs with closed-form GFs. We can evaluate $F_P$ with $O(\log n)$ arithmetic operations, while if we were to write the same GF as $\sum_k \binom{n}{k}\alpha^k$ it would require $O(n \log n)$ operations. The key insight of our approach is representing GFs succinctly. Now assume the following program $P$ with multiple features:

$R(X_1) \wedge S(X_1, Y_1)$ : $\alpha$
$S(X_2, Y_2) \wedge T(X_2)$ : $\beta$

Note that $(X_1, X_2)$ forms a separator. Hence, using R5, $F_P = \left(F_{P[(a,a)/(X_1,X_2)]}\right)^n$. Now consider the program $P' = P[(a,a)/(X_1,X_2)]$:

$R(a) \wedge S(a, Y_1)$ : $\alpha$
$S(a, Y_2) \wedge T(a)$ : $\beta$

Using R4 twice, for $R(a)$ and $T(a)$, along with R2 (to get the GFs of $R(a)$ and $T(a)$), we get $F_{P'} = (1+\alpha) \ast (1+\beta) \ast F_{P''}$, where $P''$ is

$S(a, Y_1)$ : $\alpha$
$S(a, Y_2)$ : $\beta$

which is the same as $(S(a, Y), \alpha\beta)$ using R1. The GF of this program, as shown earlier, is $(1+\alpha\beta)^n$. Putting the values back together, we get $F_{P'} = (1+\alpha) \ast (1+\beta) \ast (1+\alpha\beta)^n = 2^{n+1} + (1+\alpha\beta)^n$. Finally, for the original program, $F_P = (F_{P'})^n = \left(2^{n+1} + (1+\alpha\beta)^n\right)^n$. Note that this is also in closed form.

5 Experiments

The algorithm we described performs lifted inference by computing the generating functions of counting programs, which approaches the problem from a completely different angle than existing techniques. Due to this novelty, we can solve MLNs that are intractable for other existing lifted algorithms such as first-order variable elimination (FOVE) [5, 6, 7]. Specifically, our experiments demonstrate that on some MLNs we indeed outperform FOVE by orders of magnitude. We ran our algorithm on the MLN given in Table 1. The set of features used in this MLN falls into the class of counting programs having a pseudo-polynomial generating function. This is the most general class of features our approach covers, and here our algorithm gives no guarantees as evidence increases. The evidence in our experiments is randomly generated for the two tables Asthma and Smokes.
In our experiments we study the influence of two factors on the runtime:

Size of Domain: Identifying tractable features is particularly important for inference in first-order models, because (i) grounding can produce very large graphical models and (ii) the treewidth of these models can be very high. As the size of the domain increases, our approach should scale better than existing techniques, which cannot do lifted inference on this MLN. All the predicates in this MLN are defined on a single domain, that of persons.

Evidence: Since this MLN falls into the class of features for which we give no guarantees as evidence increases, we want to study the behavior of our algorithm in the presence of increasingly more evidence.

Fig. 1 displays the execution time of our CP algorithm vs. the FOVE approach for domain sizes varying from 5 to 100, in the presence of 30% evidence. All results display average runtimes over 15 repetitions with the same parameter settings. FOVE cannot do lifted inference on this MLN and resorts to grounding. Thus, it could only execute up to a domain size of 18; beyond that it consistently ran out of memory. The figure also displays extrapolated data points for FOVE's behavior at larger domain sizes, showing its runtime growing exponentially. Our approach, on the other hand, dominates FOVE by orders of magnitude on those small domains, and finishes within seconds even for domains of size 100. Note that the complexity of our algorithm for this MLN is quadratic; hence it looks linear on the log scale. Fig. 2 demonstrates the behavior of the algorithms as the amount of evidence is increased from 0 to 100%. We chose a domain size of 13 to run FOVE, since it could not terminate for larger domain sizes. The figure displays the runtime of our algorithm for domain sizes of 13 and 100. Although for this class of features we give no guarantees on the running time for large evidence, our algorithm still performs well as the evidence increases.
In fact, after a point the algorithm gets faster. This is because the main time-consuming rule used in this MLN is R6, which chooses a singleton atom in the last feature, say Asthma, and eliminates it. This involves time complexity proportional to the domain size of the atom and to the running time on the smaller MLN obtained after removing that atom. As evidence increases, the atom corresponding to Asthma may be split into many smaller predicates, but the domain size of each predicate also keeps getting smaller. In particular, with 100% evidence the domain size is just 1, and therefore R6 takes constant time!

6 Conclusion and Future Work

We have presented a novel approach to lifted inference that uses the theory of generating functions to perform efficient inference. We also give guarantees on the theoretical complexity of our approach. This is the first work that addresses the complexity of lifted inference in terms of only the features (formulas). This is beneficial because using a set of tractable features ensures that inference is always efficient and hence scales to large domains. Several avenues remain for future work. For instance, a feature such as transitive closure (e.g., Friends(X,Y) ∧ Friends(Y,Z) ⇒ Friends(X,Z)), which occurs quite often in real-world applications, is intractable for our algorithm. In future work, we would like to address the complexity of such features by characterizing the completeness of our approach. Another avenue for future work is extending other lifted inference approaches [5, 7] with the rules we have developed in this paper. Unlike our algorithm, the aforementioned algorithms are complete: when lifted inference is not possible, they ground the domain and resort to propositional inference. But even in those cases, simply running a propositional algorithm that does not exploit symmetry is not very efficient.
In particular, ground networks generated by logical formulas have repeated structure that is difficult to capture after grounding. Take for example $R(X,Y) \wedge S(Z,Y)$. This feature is in PTIME by our algorithm, but if we create a ground Markov network by grounding this feature, it can have unbounded treewidth (as large as the domain itself). We think our approach can provide insight into how best to construct a graphical model from the groundings of a logical formula. This is another interesting piece of future work that our algorithm motivates.

References

[1] Lise Getoor and Ben Taskar. Introduction to Statistical Relational Learning. The MIT Press, 2007.
[2] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan and Claypool, 2009.
[3] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[4] David Poole. First-order probabilistic inference. In IJCAI'03: Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 985–991, San Francisco, CA, USA, 2003. Morgan Kaufmann Publishers Inc.
[5] Rodrigo De Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 1319–1325, San Francisco, CA, USA, 2005. Morgan Kaufmann Publishers Inc.
[6] Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, and Leslie Pack Kaelbling. Lifted probabilistic inference with counting formulas. In AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence, pages 1062–1068. AAAI Press, 2008.
[7] K. S. Ng, J. W. Lloyd, and W. T. Uther. Probabilistic modelling, inference and learning using logical theories. Annals of Mathematics and Artificial Intelligence, 54(1-3):159–205, 2008.
[8] Nevin Zhang and David Poole. A simple approach to Bayesian network computations.
In Proceedings of the Tenth Canadian Conference on Artificial Intelligence, pages 171–178, 1994.
[9] R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113:41–85, 1999.
[10] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence, pages 1094–1099. AAAI Press, 2008.
[11] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[12] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 467–475, 1999.
[13] Nilesh Dalvi and Dan Suciu. Management of probabilistic data: foundations and challenges. In PODS, pages 1–12, New York, NY, USA, 2007. ACM Press.
[14] Nilesh Dalvi, Karl Schnaitter, and Dan Suciu. Computing query probability with incidence algebras. In PODS, 2010.
2010
Predictive State Temporal Difference Learning Byron Boots Machine Learning Department Carnegie Mellon University Pittsburgh, PA 15213 beb@cs.cmu.edu Geoffrey J. Gordon Machine Learning Department Carnegie Mellon University Pittsburgh, PA 15213 ggordon@cs.cmu.edu Abstract We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem. 1 Introduction We wish to estimate the value function of a policy in an unknown decision process in a high dimensional and partially-observable environment. 
We represent the value function in a linear architecture, as a linear combination of features of (sequences of) observations. A popular family of learning algorithms called temporal difference (TD) methods [1] are designed for this situation. In particular, least-squares TD (LSTD) algorithms [2, 3, 4] exploit the linearity of the value function to estimate its parameters from sampled trajectories, i.e., from sequences of feature vectors of visited states, by solving a set of linear equations. Recently, Parr et al. looked at the problem of value function estimation from the perspective of both model-free and model-based reinforcement learning [5]. The model-free approach (which includes TD methods) estimates a value function directly from sample trajectories. The model-based approach, by contrast, first learns a model of the process and then computes the value function from the learned model. Parr et al. demonstrated that these two approaches compute exactly the same value function [5]. In the current paper, we build on this insight, while simultaneously finding a compact set of features using powerful methods from system identification. First, we look at the problem of improving LSTD from a model-free predictive-bottleneck perspective: given a large set of features of history, we devise a new TD method called Predictive State Temporal Difference (PSTD) learning. PSTD estimates the value function through a bottleneck that 1 preserves only predictive information (Section 3). Second, we look at the problem of value function estimation from a model-based perspective (Section 4). Instead of learning a linear transition model in feature space, as in [5], we use subspace identification [6, 7] to learn a PSR from our samples. Since PSRs are at least as compact as POMDPs, our representation can naturally be viewed as a value-directed compression of a much larger POMDP. Finally, we show that our two improved methods are equivalent. 
This result yields some appealing theoretical benefits: for example, PSTD features can be explicitly interpreted as a statistically consistent estimate of the true underlying system state. And, the feasibility of finding the true value function can be shown to depend on the linear dimension of the dynamical system, or equivalently, the dimensionality of the predictive state representation, not on the cardinality of the POMDP state space. Therefore our representation is naturally "compressed" in the sense of [8], speeding up convergence. We demonstrate the practical benefits of our method with several experiments: we compare PSTD to competing algorithms on a synthetic example and a difficult optimal stopping problem. In the latter problem, a significant amount of prior work has gone into hand-tuning features. We show that, if we add a large number of weakly relevant features to these hand-tuned features, PSTD can find a predictive subspace which performs much better than competing approaches, improving on the best previously reported result for this problem by a substantial margin. The theoretical and empirical results reported here suggest that, for many applications where LSTD is used to compute a value function, PSTD can be simply substituted to produce better results.

2 Value Function Approximation

We start from a discrete-time dynamical system with a set of states $S$, a set of actions $A$, a distribution over initial states $\pi_0$, a transition function $T$, a reward function $R$, and a discount factor $\gamma \in [0,1]$. We seek a policy $\pi$, a mapping from states to actions. For a given policy $\pi$, the value of state $s$ is defined as the expected discounted sum of rewards when starting in state $s$ and following policy $\pi$:
$$J^\pi(s) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t) \;\middle|\; s_0 = s, \pi\right].$$
The value function obeys the Bellman equation
$$J^\pi(s) = R(s) + \gamma \sum_{s'} J^\pi(s') \Pr[s' \mid s, \pi(s)] \quad (1)$$
If we know the transition function $T$, and if the set of states $S$ is sufficiently small, we can find an optimal policy with policy iteration: pick an initial policy $\pi$, use (1) to solve for the value function $J^\pi$, compute the greedy policy for $J^\pi$ (setting the action at each state to maximize the right-hand side of (1)), and repeat. However, we consider instead the harder problem of estimating the value function when $s$ is a partially observable latent variable, and when the transition function $T$ is unknown. In this situation, we receive information about $s$ through observations from a finite set $O$. We can no longer make decisions or predict reward based on $S$, but instead must use a history (an ordered sequence of action-observation pairs $h = a^h_1 o^h_1 \ldots a^h_t o^h_t$ that have been executed and observed prior to time $t$): $R(h)$, $J(h)$, and $\pi(h)$ instead of $R(s)$, $J^\pi(s)$, and $\pi(s)$. Let $\mathcal{H}$ be the set of all possible histories. $\mathcal{H}$ is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories:
$$J^\pi(h) = w^T \phi^H(h) \quad (2)$$
Here $w \in \mathbb{R}^j$ is a parameter vector and $\phi^H(h) \in \mathbb{R}^j$ is a feature vector for a history $h$. So, we can rewrite the Bellman equation as
$$w^T \phi^H(h) = R(h) + \gamma \sum_{o \in O} w^T \phi^H(h\pi o) \Pr[h\pi o \mid h\pi] \quad (3)$$
where $h\pi o$ is history $h$ extended by taking action $\pi(h)$ and observing $o$.

2.1 Least Squares Temporal Difference Learning

In general we do not know the transition probabilities $\Pr[h\pi o \mid h]$, but we do have samples of state features $\phi^H_t = \phi^H(h_t)$, next-state features $\phi^H_{t+1} = \phi^H(h_{t+1})$, and immediate rewards $R_t = R(h_t)$. We can thus estimate the Bellman equation
$$w^T \phi^H_{1:k} \approx R_{1:k} + \gamma w^T \phi^H_{2:k+1} \quad (4)$$
(Here we have used $\phi^H_{1:k}$ to mean the matrix whose columns are $\phi^H_t$ for $t = 1 \ldots k$.)
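For intuition, in the small, fully observed case the Bellman equation (1) can be solved directly by treating it as a linear system $(I - \gamma P)J = R$. A minimal sketch (the chain, rewards, and numbers below are made-up illustrations, not from the paper):

```python
import numpy as np

# A tiny 3-state Markov chain induced by a fixed policy (illustrative numbers).
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])
R = np.array([0.0, 1.0, 2.0])
gamma = 0.9

# Bellman equation J = R + gamma * P J  =>  (I - gamma P) J = R
J = np.linalg.solve(np.eye(3) - gamma * P, R)

# Sanity check: J satisfies the Bellman recursion (1).
assert np.allclose(J, R + gamma * P @ J)
print(J)
```

The rest of the section deals with the harder setting where the state is latent and this system cannot be written down directly.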
We can immediately attempt to estimate the parameter $w$ by solving this linear system in the least squares sense: $\hat{w}^T = R_{1:k}\left(\phi^H_{1:k} - \gamma \phi^H_{2:k+1}\right)^{\dagger}$, where $\dagger$ indicates the pseudo-inverse. However, this solution is biased [3], since the independent variables $\phi^H_t - \gamma \phi^H_{t+1}$ are noisy samples of the expected difference $\mathbb{E}\left[\phi^H(h) - \gamma \sum_{o \in O} \phi^H(h\pi o) \Pr[h\pi o \mid h]\right]$. In other words, estimating the value function parameters $w$ is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm finds a consistent estimate of $w$ by right-multiplying the approximate Bellman equation (Equation 4) by $\phi^{H\,T}_t$:
$$\hat{w}^T = \left(\frac{1}{k}\sum_{t=1}^k R_t \phi^{H\,T}_t\right)\left(\frac{1}{k}\sum_{t=1}^k \phi^H_t \phi^{H\,T}_t - \frac{\gamma}{k}\sum_{t=1}^k \phi^H_{t+1} \phi^{H\,T}_t\right)^{-1} \quad (5)$$
Here, $\phi^{H\,T}_t$ can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables but uncorrelated with the noise in our estimates of these variables. As the amount of data $k$ increases, the empirical covariance matrices $\phi^H_{1:k}\phi^{H\,T}_{1:k}/k$ and $\phi^H_{2:k+1}\phi^{H\,T}_{1:k}/k$ converge with probability 1 to their population values, and so our estimate of the matrix to be inverted in (5) is consistent. So, as long as this matrix is nonsingular, our estimate of the inverse is also consistent, and our estimate of $w$ converges to the true value with probability 1.
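The consistency claim can be checked at the population level: with one-hot state features, replacing the empirical covariances in Eq. (5) by their limits under the stationary distribution recovers the true value function exactly. A sketch under that assumption (the chain and rewards are made-up illustrative numbers):

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])
R = np.array([0.0, 1.0, 2.0])
gamma = 0.9

# Stationary distribution d: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmax(np.real(evals))])
d = d / d.sum()

# Limits of the covariances in Eq. (5) for one-hot features phi(s) = e_s:
#   E[R_t phi_t^T]       = (d * R)
#   E[phi_t phi_t^T]     = diag(d)
#   E[phi_{t+1} phi_t^T] = P^T diag(d)
b = d * R
A = np.diag(d) - gamma * P.T @ np.diag(d)
w = b @ np.linalg.inv(A)

# The LSTD fixed point coincides with the true value function.
J_true = np.linalg.solve(np.eye(3) - gamma * P, R)
assert np.allclose(w, J_true)
print(w)
```

With sampled trajectories, the same computation uses the empirical averages of Eq. (5), and the estimate converges to this fixed point as $k \to \infty$.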
We approach the problem of finding a good set of features from a bottleneck perspective. That is, given a large set of features of history, we would like to find a compression that preserves only the information relevant for predicting the value function $J^\pi$. As we will see in Section 4, this improvement is directly related to spectral identification of PSRs.

3.1 Finding Predictive Features Through a Bottleneck

In order to find a predictive feature compression, we first need to determine what we would like to predict. The most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward. We call these prediction tasks, collectively, features of the future. We write $\phi^T_t$ for the vector of all features of the "future at time $t$," i.e., events starting at time $t+1$ and continuing forward. Now, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future. We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [10]. We define the following empirical covariance matrices between features of the future and features of histories:
$$\hat{\Sigma}_{\mathcal{T},\mathcal{H}} = \frac{1}{k}\sum_{t=1}^k \phi^T_t \phi^{H\,T}_t \qquad \hat{\Sigma}_{\mathcal{H},\mathcal{H}} = \frac{1}{k}\sum_{t=1}^k \phi^H_t \phi^{H\,T}_t \quad (6)$$
Let $L_\mathcal{H}$ be the lower triangular Cholesky factor of $\hat{\Sigma}_{\mathcal{H},\mathcal{H}}$.
Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write $UDV^T \approx \hat{\Sigma}_{\mathcal{T},\mathcal{H}} L_\mathcal{H}^{-T}$ for a truncated SVD [11], where $U$ contains the left singular vectors, $V$ contains the right singular vectors, and $D$ is the diagonal matrix of singular values. (We can tune accuracy by keeping more or fewer singular values, i.e., columns of $U$, $V$, or $D$.) We use the SVD to define a mapping $\hat{U}$ from the compressed space up to the space of features of the future, and we define $\hat{V}$ to be the optimal compression operator given $\hat{U}$ (in a least-squares sense; see [12] for details):
$$\hat{U} = UD^{1/2} \qquad \hat{V} = \hat{U}^T \hat{\Sigma}_{\mathcal{T},\mathcal{H}} (\hat{\Sigma}_{\mathcal{H},\mathcal{H}})^{-1} \quad (7)$$
By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression; but, unlike previous ways to find value-directed compressions [8], we do not need to know a model of our system ahead of time. For another example, let $L_\mathcal{T}$ be the Cholesky factor of the empirical covariance of future features $\hat{\Sigma}_{\mathcal{T},\mathcal{T}}$. Then, if we scale features of the future by $L_\mathcal{T}^{-T}$, the SVD will preserve the largest possible amount of mutual information between history and future, yielding a canonical correlation analysis [13, 14].

3.2 Predictive State Temporal Difference Learning

Now that we have found a predictive compression operator $\hat{V}$ via Equation 7, we can replace the features of history $\phi^H_t$ with the compressed features $\hat{V}\phi^H_t$ in the Bellman recursion, Equation 4:
$$w^T \hat{V} \phi^H_{1:k} \approx R_{1:k} + \gamma w^T \hat{V} \phi^H_{2:k+1} \quad (8)$$
The least squares solution for $w$ is still prone to an error-in-variables problem. The instrumental variable $\phi^H$ is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it to unbias the estimate of $w$.
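Putting Sections 3.1 and 3.2 together, here is a compact numerical sketch of the bottleneck (Eqs. 6-7) followed by the instrumented PSTD solve (the covariances of Eq. 9 and the estimator of Eq. 10, defined next). All sizes, names, and the synthetic data-generating process are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
k, nH, nT, m, gamma = 500, 20, 15, 3, 0.9   # samples, feature dims, bottleneck

# Synthetic features of history/future sharing a low-dimensional signal,
# plus rewards; purely illustrative stand-ins for real trajectory data.
z    = rng.normal(size=(m, k + 1))
phiH = rng.normal(size=(nH, m)) @ z + 0.1 * rng.normal(size=(nH, k + 1))
phiT = rng.normal(size=(nT, m)) @ z[:, :k] + 0.1 * rng.normal(size=(nT, k))
R    = rng.normal(size=k)

H, Hp = phiH[:, :k], phiH[:, 1:]            # phi^H_{1:k} and phi^H_{2:k+1}

# Empirical covariances (Eqs. 6 and 9).
S_TH, S_HH  = phiT @ H.T / k, H @ H.T / k
S_HpH, S_RH = Hp @ H.T / k, R @ H.T / k

# Eq. 7: truncated SVD of the weighted covariance, then U_hat and V_hat.
L = np.linalg.cholesky(S_HH)
U, s, _ = np.linalg.svd(S_TH @ np.linalg.inv(L).T)
U_hat = U[:, :m] * np.sqrt(s[:m])           # U D^{1/2}
V_hat = U_hat.T @ S_TH @ np.linalg.inv(S_HH)

# Eq. 10: PSTD weights via the pseudo-inverse.
M = V_hat @ S_HH - gamma * V_hat @ S_HpH
w = S_RH @ np.linalg.pinv(M)

# w is the least-squares solution of w^T M = Sigma_RH, so it satisfies
# the normal equations M M^T w = M Sigma_RH^T.
assert np.allclose(M @ M.T @ w, M @ S_RH)
print(w.shape)   # (3,)
```

Note that the value function is now parameterized by only $m = 3$ weights, one per compressed feature, rather than one per raw history feature.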
Define the additional covariance matrices:
$$\hat{\Sigma}_{\mathcal{R},\mathcal{H}} = \frac{1}{k}\sum_{t=1}^k R_t \phi^{H\,T}_t \qquad \hat{\Sigma}_{\mathcal{H}^+,\mathcal{H}} = \frac{1}{k}\sum_{t=1}^k \phi^H_{t+1} \phi^{H\,T}_t \quad (9)$$
Then the corrected Bellman equation is $w^T \hat{V} \hat{\Sigma}_{\mathcal{H},\mathcal{H}} = \hat{\Sigma}_{\mathcal{R},\mathcal{H}} + \gamma w^T \hat{V} \hat{\Sigma}_{\mathcal{H}^+,\mathcal{H}}$, and solving for $w$ gives us the Predictive State Temporal Difference (PSTD) learning algorithm:
$$w^T = \hat{\Sigma}_{\mathcal{R},\mathcal{H}} \left(\hat{V}\hat{\Sigma}_{\mathcal{H},\mathcal{H}} - \gamma \hat{V}\hat{\Sigma}_{\mathcal{H}^+,\mathcal{H}}\right)^{\dagger} \quad (10)$$
So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning. Below we will show an additional benefit: the model-free algorithm in Equation 10 is, under some circumstances, equivalent to a model-based method which uses subspace identification to learn Predictive State Representations [6, 7].

4 Predictive State Representations

A predictive state representation (PSR) [15] is a compact and complete description of a dynamical system. Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Just as a history is an ordered sequence of action-observation pairs executed prior to time $t$, we define a test of length $i$ to be an ordered sequence of action-observation pairs $\tau = a_1 o_1 \ldots a_i o_i$ that can be executed and observed after time $t$ [15]. The prediction for a test $\tau$ after a history $h$, written $\tau(h)$, is the probability that we will see the test observations $\tau^O = o_1 \ldots o_i$, given that we intervene [16] to execute the test actions $\tau^A = a_1 \ldots a_i$: $\tau(h) = \Pr[\tau^O \mid h, \text{do}(\tau^A)]$. If $Q = \{\tau_1, \ldots, \tau_n\}$ is a set of tests, we write $Q(h) = (\tau_1(h), \ldots, \tau_n(h))^T$ for the corresponding vector of test predictions. Formally, a PSR consists of five elements $\langle A, O, Q, s_1, F \rangle$. $A$ is a finite set of possible actions, and $O$ is a finite set of possible observations. $Q$ is a core set of tests, i.e., a set whose vector of predictions $Q(h)$ is a sufficient statistic for predicting the success probabilities of all tests. $F$ is the set of functions $f_\tau$ which embody these predictions: $\tau(h) = f_\tau(Q(h))$.
And $m_1 = Q(\epsilon)$ is the initial prediction vector. In this work we restrict ourselves to linear PSRs, in which all prediction functions are linear: $f_\tau(Q(h)) = r_\tau^T Q(h)$ for some vector $r_\tau \in \mathbb{R}^{|Q|}$. Finally, a core set $Q$ is minimal if the tests in $Q$ are linearly independent [17, 18], i.e., no test's prediction is a linear function of the other tests' predictions. Since $Q(h)$ is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just $Q(h)$ instead of $h$ itself. After action $a$ and observation $o$, we can update $Q(h)$ recursively: if we write $M_{ao}$ for the matrix with rows $r_{ao\tau}^T$ for $\tau \in Q$, then we can use Bayes' rule to show:
$$Q(hao) = \frac{M_{ao}Q(h)}{\Pr[o \mid h, \text{do}(a)]} = \frac{M_{ao}Q(h)}{m_\infty^T M_{ao}Q(h)} \quad (11)$$
where $m_\infty$ is a normalizer, defined by $m_\infty^T Q(h) = 1$ for all $h$. In addition to the above PSR parameters, for reinforcement learning we need a reward function $R(h) = \eta^T Q(h)$ mapping predictive states to immediate rewards, a discount factor $\gamma \in [0,1]$ which weights the importance of future rewards vs. present ones, and a policy $\pi(Q(h))$ mapping from predictive states to actions. Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [6, 7]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix $S$, we can transform $m_1 \to Sm_1$, $m_\infty^T \to m_\infty^T S^{-1}$, and $M_{ao} \to SM_{ao}S^{-1}$ without changing the corresponding dynamical system, since pairs $S^{-1}S$ cancel in Eq. 11. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low-dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search.
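Both the state update of Eq. (11) and the similarity invariance just described can be sanity-checked numerically. In this sketch, all parameters and numbers are made-up illustrations (a real PSR's $M_{ao}$, $m_\infty$, $m_1$ come from learning), and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# --- Eq. 11: one PSR state update ---
def psr_update(q, M_ao, m_inf):
    """Apply M_ao and renormalize so that m_inf^T q' = 1 (Eq. 11)."""
    v = M_ao @ q
    return v / (m_inf @ v)

M_ao  = rng.uniform(0.1, 1.0, size=(n, n))
m_inf = rng.uniform(0.1, 1.0, size=n)
q     = rng.uniform(0.1, 1.0, size=n)
q    /= m_inf @ q                      # normalize the initial state
q2    = psr_update(q, M_ao, m_inf)
assert np.isclose(m_inf @ q2, 1.0)     # normalization is preserved

# --- Similarity invariance: S m1, m_inf^T S^-1, S M_ao S^-1 leave every
#     sequence probability m_inf^T M_k ... M_1 m1 unchanged ---
def seq_prob(m1, m_inf, Ms):
    v = m1
    for M in Ms:
        v = M @ v
    return m_inf @ v

m1 = rng.uniform(0.1, 1.0, size=n)
Ms = [rng.uniform(0.1, 1.0, size=(n, n)) for _ in range(4)]
S  = rng.normal(size=(n, n)) + 3 * np.eye(n)   # (almost surely) invertible
S_inv = np.linalg.inv(S)

p1 = seq_prob(m1, m_inf, Ms)
p2 = seq_prob(S @ m1, S_inv.T @ m_inf, [S @ M @ S_inv for M in Ms])
assert np.isclose(p1, p2)              # the S^-1 S pairs cancel
print(p1)
```

The cancellation of the $S^{-1}S$ pairs is exactly why spectral learning only needs to recover TPSR parameters up to a similarity transform.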
In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [19, 20, 14, 21].

4.1 Learning Transformed PSRs

Let $Q$ be a minimal core set of tests, so that $n = |Q|$ is the linear dimension of the system. Then, let $\mathcal{T}$ be a larger core set of tests (not necessarily minimal), and let $\mathcal{H}$ be the set of all possible histories. As before, write $\phi^H_t \in \mathbb{R}^\ell$ for a vector of features of history at time $t$, and write $\phi^T_t \in \mathbb{R}^{\ell'}$ for a vector of features of the future at time $t$. Since $\mathcal{T}$ is a core set of tests, by definition we can compute any test prediction $\tau(h)$ as a linear function of $\mathcal{T}(h)$. And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction $\phi(h)$ as a linear function of $\mathcal{T}(h)$. We define the matrix $\Phi^\mathcal{T} \in \mathbb{R}^{\ell' \times |\mathcal{T}|}$ to embody our predictions of future features: an entry of $\Phi^\mathcal{T}$ is the weight of one of the tests in $\mathcal{T}$ for calculating the prediction of one of the features in $\phi^T$. Below we define several covariance matrices, Equations 12(a-d), in terms of the observable quantities $\phi^T_t$, $\phi^H_t$, $a_t$, and $o_t$, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 14 below. First we define $\Sigma_{\mathcal{H},\mathcal{H}}$, the covariance matrix of features of histories, as $\mathbb{E}[\phi^H_t \phi^{H\,T}_t \mid h_t \sim \omega]$. Given $k$ samples, we can approximate this covariance:
$$\hat{\Sigma}_{\mathcal{H},\mathcal{H}} = \frac{1}{k}\phi^H_{1:k}\phi^{H\,T}_{1:k}. \quad (12a)$$
As $k \to \infty$, $\hat{\Sigma}_{\mathcal{H},\mathcal{H}}$ converges to the true covariance $\Sigma_{\mathcal{H},\mathcal{H}}$ with probability 1. Next we define $\Sigma_{\mathcal{S},\mathcal{H}}$, the cross covariance of states and features of histories. Writing $s_t = Q(h_t)$ for the (unobserved) state at time $t$, let
$$\Sigma_{\mathcal{S},\mathcal{H}} = \mathbb{E}\left[\frac{1}{k}\, s_{1:k}\phi^{H\,T}_{1:k} \;\middle|\; h_t \sim \omega\ (\forall t)\right].$$
We cannot directly estimate $\Sigma_{\mathcal{S},\mathcal{H}}$ from data, but this matrix will appear as a factor in several of the matrices that we define below.
Next we define $\Sigma_{\mathcal{T},\mathcal{H}}$, the cross covariance matrix of the features of tests and histories (see [12] for derivations):
$$\hat{\Sigma}_{\mathcal{T},\mathcal{H}} \equiv \frac{1}{k}\phi^T_{1:k}\phi^{H\,T}_{1:k} \qquad \Sigma_{\mathcal{T},\mathcal{H}} \equiv \mathbb{E}[\phi^T_t \phi^{H\,T}_t \mid h_t \sim \omega, \text{do}(\zeta)] = \Phi^\mathcal{T} R\,\Sigma_{\mathcal{S},\mathcal{H}} \quad (12b)$$
where row $\tau$ of the matrix $R$ is $r_\tau$, the linear function that specifies the prediction of the test $\tau$ given the predictions of tests in the core set $Q$. By $\text{do}(\zeta)$, we mean to approximate the effect of executing all sequences of actions required by all tests or features of the future at once. This is not difficult in our experiments (in which all tests use compatible action sequences); but see [12] for further discussion. Eq. 12b tells us that, because of our assumptions about linear dimension, the matrix $\Sigma_{\mathcal{T},\mathcal{H}}$ has factors $R \in \mathbb{R}^{|\mathcal{T}| \times n}$ and $\Sigma_{\mathcal{S},\mathcal{H}} \in \mathbb{R}^{n \times \ell}$. Therefore, the rank of $\Sigma_{\mathcal{T},\mathcal{H}}$ is no more than $n$, the linear dimension of the system. We can also see that, since the size of $\Sigma_{\mathcal{T},\mathcal{H}}$ is fixed, as the number of samples $k$ increases, $\hat{\Sigma}_{\mathcal{T},\mathcal{H}} \to \Sigma_{\mathcal{T},\mathcal{H}}$ with probability 1. Next we define $\Sigma_{\mathcal{H},ao,\mathcal{H}}$, a set of matrices, one for each action-observation pair, that represent the covariance between features of history before and after taking action $a$ and observing $o$. In the following, $\mathbb{I}_t(o)$ is an indicator variable for whether we see observation $o$ at step $t$:
$$\hat{\Sigma}_{\mathcal{H},ao,\mathcal{H}} \equiv \frac{1}{k}\sum_{t=1}^k \phi^H_{t+1}\,\mathbb{I}_t(o)\,\phi^{H\,T}_t \qquad \Sigma_{\mathcal{H},ao,\mathcal{H}} \equiv \mathbb{E}\left[\hat{\Sigma}_{\mathcal{H},ao,\mathcal{H}} \;\middle|\; h_t \sim \omega\ (\forall t),\ \text{do}(a)\ (\forall t)\right] \quad (12c)$$
Since the dimensions of each $\hat{\Sigma}_{\mathcal{H},ao,\mathcal{H}}$ are fixed, as $k \to \infty$ these empirical covariances converge to the true covariances $\Sigma_{\mathcal{H},ao,\mathcal{H}}$ with probability 1. Finally we define $\Sigma_{\mathcal{R},\mathcal{H}}$, the covariance (in this case a vector) of reward and features of history, together with its empirical approximation:
To do so we need to make a somewhat-restrictive assumption: we assume that our features of history are rich enough to determine the state of the system, i.e., the regression from φH to s is exact: st = ΣS,HΣ−1 H,HφH t . We discuss how to relax this assumption in [12]. We also need a matrix U such that U TΦT R is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, there are reasons to choose U via SVD of a scaled version of ΣT ,H as described in Sec. 3.1. Using our assumptions we can show a useful identity for ΣH,ao,H (for proof details see [12]): ΣS,HΣ−1 H,HΣH,ao,H = MaoΣS,H (13) This identity is at the heart of our learning algorithm: it states that ΣH,ao,H contains a hidden copy of Mao, the main TPSR parameter that we need to learn. We would like to recover Mao via Eq. 13, Mao = ΣS,HΣ−1 H,HΣH,ao,HΣ† S,H; but of course we do not know ΣS,H. Fortunately, it turns out that we can use U TΣT ,H as a stand-in, since this matrix differs only by an invertible transform (Eq. 12b). We now show how to recover a TPSR from the matrices ΣT ,H, ΣH,H, ΣR,H, ΣH,ao,H, and U. Since a TPSR’s predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform [7, 12]. bt ≡U TΣT ,H(ΣH,H)−1φH t = (U TΦT R)st (14a) Bao ≡U TΣT ,H(ΣH,H)−1ΣH,ao,H(U TΣT ,H)† = (U TΦT R)Mao(U TΦT R)−1 (14b) bT η ≡ΣR,H(U TΣT ,H)† = ηT(U TΦT R)−1 (14c) Our PSR learning algorithm is simple: replace each true covariance matrix in Eq. 14 by its empirical estimate. Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent. 4.2 Predictive State Temporal Difference Learning (Revisited) Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.2 is equivalent to a model-based algorithm built around PSR learning. 
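Before moving on, the recovery step in Eq. 14 can be made concrete. The sketch below fabricates the (in practice unobservable) factors Φ^T R, Σ_{S,H} and M_ao, forms the covariances they would induce under Eqs. 12b and 13, and applies Eq. 14b; the recovered B_ao is a similarity transform of M_ao, so similarity invariants such as trace and determinant agree. All dimensions and data here are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell, nT = 3, 6, 8  # state dim, history-feature dim, test-feature dim (arbitrary)

# Fabricated ground-truth factors; a learner never observes these directly.
PhiT_R   = rng.standard_normal((nT, n))   # Phi^T R
Sigma_SH = rng.standard_normal((n, ell))  # full row rank with probability 1
M_ao     = rng.standard_normal((n, n))    # true transformed operator
Sigma_HH = np.eye(ell)                    # whitened history features, for simplicity

# Covariances these factors induce (Eq. 12b, and Eq. 13 rearranged).
Sigma_TH   = PhiT_R @ Sigma_SH
Sigma_HaoH = Sigma_HH @ np.linalg.pinv(Sigma_SH) @ M_ao @ Sigma_SH

# Eq. 14b: B_ao = U^T Sigma_TH (Sigma_HH)^-1 Sigma_HaoH (U^T Sigma_TH)^dagger.
U    = rng.standard_normal((nT, n))       # U^T Phi^T R invertible w.p. 1
UTS  = U.T @ Sigma_TH
B_ao = UTS @ np.linalg.solve(Sigma_HH, Sigma_HaoH) @ np.linalg.pinv(UTS)

# B_ao = (U^T Phi^T R) M_ao (U^T Phi^T R)^-1, so the spectra coincide.
assert np.isclose(np.trace(B_ao), np.trace(M_ao))
```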
For a fixed policy π, a PSR or TPSR's value function is a linear function of state, V(s) = wᵀs, and is the solution of the PSR Bellman equation [22]: for all s,

wᵀs = b_ηᵀ s + γ Σ_{o∈O} wᵀ B_{πo} s,   or equivalently,   wᵀ = b_ηᵀ + γ Σ_{o∈O} wᵀ B_{πo}.

Substituting in our learned PSR parameters from Equations 14(a–c), we get

wᵀ = Σ̂_{R,H} (Uᵀ Σ̂_{T,H})† + γ Σ_{o∈O} wᵀ Uᵀ Σ̂_{T,H} (Σ̂_{H,H})⁻¹ Σ̂_{H,πo,H} (Uᵀ Σ̂_{T,H})†
wᵀ Uᵀ Σ̂_{T,H} = Σ̂_{R,H} + γ wᵀ Uᵀ Σ̂_{T,H} (Σ̂_{H,H})⁻¹ Σ̂_{H+,H}

since, by comparing Eqs. 12c and 9, we can see that Σ_{o∈O} Σ̂_{H,πo,H} = Σ̂_{H+,H}. Now, define Û and V̂ as in Eq. 7, and let U = Û as suggested above in Sec. 4.1. Then Uᵀ Σ̂_{T,H} = V̂ Σ̂_{H,H}, and

wᵀ V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ wᵀ V̂ Σ̂_{H+,H}   ⟹   wᵀ = Σ̂_{R,H} (V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H})†   (15)

Eq. 15 is exactly Eq. 10, the PSTD algorithm. So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation—to our knowledge, the first such result for a TD method. 5 Experimental Results 5.1 Estimating the Value Function of a RR-POMDP We evaluate the PSTD learning algorithm on a synthetic example derived from [23]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions lie in a 3-dimensional subspace of the original belief simplex (see [12] for details).
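Once the hatted covariances and the compression operator V̂ have been estimated, the solve in Eq. 15 is a single pseudoinverse. The sketch below uses random stand-ins for those estimated quantities, purely to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(2)
n, ell, gamma = 3, 6, 0.9  # compressed dim, history-feature dim, discount (arbitrary)

V_hat     = rng.standard_normal((n, ell))    # stand-in for the estimated V-hat (Eq. 7)
Sigma_HH  = np.eye(ell)                      # stand-in for Sigma-hat_{H,H}
Sigma_HpH = rng.standard_normal((ell, ell))  # stand-in for Sigma-hat_{H+,H}
Sigma_RH  = rng.standard_normal(ell)         # stand-in for Sigma-hat_{R,H}

# Eq. 15: weights of the value function on the compressed state b_t.
w = Sigma_RH @ np.linalg.pinv(V_hat @ Sigma_HH - gamma * V_hat @ Sigma_HpH)
```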
Figure 1: Experimental Results. Error bars indicate standard error. (A) Estimating the value function with a small number of informative features. All three approaches do well. (B) Estimating the value function with a small set of informative features and a large set of random features. LARS-TD is designed for this scenario and dramatically outperforms PSTD and LSTD. (C) Estimating the value function with a large set of semi-informative features. PSTD is able to determine a small set of compressed features that retain the maximal amount of information about the value function, outperforming LSTD and LARS-TD. (D) Pricing a high-dimensional derivative via policy iteration. The optimal threshold strategy (sell if price is above a threshold [24]) is in black, LSTD (16 canonical features) is in blue, LSTD (on the full 220 features) is cyan, LARS-TD (feature selection from set of 220) is in green, and PSTD (16 dimensions, compressing 220 features) is in red. We perform 3 experiments, comparing the performance of LSTD, LARS-TD, and PSTD when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by Jπ = R(I − γTπ)⁻¹. In the first experiment we execute the policy π for 1000 time steps. We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Then, using these features, we learn the value function using LSTD, LARS-TD, and PSTD with linear dimension 3 (Figure 1(A)). Each method estimated a reasonable value function. For the second experiment, we added 490 random, uninformative features to the 10 good features and then attempted to learn the value function with each of the 3 algorithms (Figure 1(B)).
In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was large compared to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are at least somewhat related to state. PSTD outperforms LSTD and LARS-TD by summarizing these features and efficiently estimating the value function (Figure 1(C)). 5.2 Pricing A High-dimensional Financial Derivative Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds and commodities. In some derivatives the contract holder has no choices, but in more complex cases, the holder must make decisions, and the value of the contract depends on how the holder acts—e.g., with early exercise the holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions, so deciding when to exercise is an optimal stopping problem. Stopping problems provide an ideal testbed for policy evaluation methods, since we can collect a single data set which lets us evaluate any policy: we just choose the “continue” action forever. (We can then evaluate the “stop” action easily in any of the resulting states.) We consider the financial derivative introduced by Tsitsiklis and Van Roy [24]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of each day, the holder may opt to exercise. At exercise the holder receives a payoff equal to the current price of the stock divided by the price 100 days beforehand. 
We can think of this derivative as a "psychic call": the holder gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility σ = 0.02 and continuously compounded short term growth rate ρ = 0.0004. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of ∼10%. In more detail, if wt is a standard Brownian motion, then the stock price pt evolves as

dpt = ρ pt dt + σ pt dwt,

and we can summarize relevant state at the end of each day as a vector xt ∈ R¹⁰⁰, with

xt = (pt−99/pt−100, pt−98/pt−100, . . . , pt/pt−100)ᵀ.

This process is Markov and ergodic [24, 25]: xt and xt+100 are independent and identically distributed. The immediate reward for exercising the option is G(x) = x(100), and the immediate reward for continuing to hold the option is 0. The discount factor γ = e⁻ρ is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is x, is given by V∗(x) = sup_t E[γᵗ G(xt) | x0 = x]. Our goal is to calculate an approximate value function V(x) = wᵀφH(x), and then use this value function to generate a stopping time min{t | G(xt) ≥ V(xt)}. To do so, we sample a sequence of 1,000,000 states xt ∈ R¹⁰⁰ and calculate features φH of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy "stop if G(xt) ≥ wᵀφH(xt)." Within the above strategy, we have two main choices: which features do we use, and how do we estimate the value function in terms of these features.
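A minimal simulation sketch of the price process and the daily state vector, using the stated σ and ρ; the path length, starting price, and random seed are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, rho = 0.02, 0.0004  # volatility and growth rate from the text
T = 300                    # number of simulated days (arbitrary)

# Discrete-time geometric Brownian motion: dp = rho*p*dt + sigma*p*dw, with dt = 1 day.
p = np.empty(T + 1)
p[0] = 1.0
for t in range(T):
    p[t + 1] = p[t] * (1.0 + rho + sigma * rng.standard_normal())

# State at day t: the last 100 prices, each divided by the price 100 days ago.
t = T
x_t = p[t - 99 : t + 1] / p[t - 100]
G = x_t[-1]            # exercise payoff G(x) = x(100) = p_t / p_{t-100}
gamma = np.exp(-rho)   # discount factor gamma = e^{-rho}
```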
For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the “continue” action, with reward 0 at each step. When the policy executes the “stop” action, the reward is G(x) and the next state’s features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a “good” set of 16 features for this data set through repeated trial and error (see [12] and [24, 25]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison’s sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1D shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [24]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [24, 25] by 1.24 percentage points. 
These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100 day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn’t know the best strategy will consistently undervalue the security, allowing an informed investor to buy it for below its expected value. 6 Conclusion In this paper, we attack the feature selection problem for temporal difference learning. Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [24, 25]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD’s compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. 
We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state. Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results. 8 References [1] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988. [2] Justin A. Boyan. Least-squares temporal difference learning. In Proc. Intl. Conf. Machine Learning, pages 49–56. Morgan Kaufmann, San Francisco, CA, 1999. [3] Steven J. Bradtke and Andrew G. Barto. Linear least-squares algorithms for temporal difference learning. In Machine Learning, pages 22–33, 1996. [4] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. J. Mach. Learn. Res., 4:1107– 1149, 2003. [5] Ronald Parr, Lihong Li, Gavin Taylor, Christopher Painter-Wakefield, and Michael L. Littman. An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning. In ICML ’08: Proceedings of the 25th international conference on Machine learning, pages 752–759, New York, NY, USA, 2008. ACM. [6] Matthew Rosencrantz, Geoffrey J. Gordon, and Sebastian Thrun. Learning low dimensional predictive representations. In Proc. ICML, 2004. [7] Byron Boots, Sajid M. Siddiqi, and Geoffrey J. Gordon. Closing the learning-planning loop with predictive state representations. In Proceedings of Robotics: Science and Systems VI, 2010. [8] Pascal Poupart and Craig Boutilier. Value-directed compression of pomdps. In NIPS, pages 1547–1554, 2002. [9] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. 
In ICML ’09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 521–528, New York, NY, USA, 2009. ACM. [10] Gregory C. Reinsel and Rajabather Palani Velu. Multivariate Reduced-rank Regression: Theory and Applications. Springer, 1998. [11] Gene H. Golub and Charles F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1996. [12] Byron Boots and Geoffrey J. Gordon. Predictive state temporal difference learning. Technical report, arXiv.org. [13] Harold Hotelling. The most predictable criterion. Journal of Educational Psychology, 26:139–142, 1935. [14] S. Soatto and A. Chiuso. Dynamic data factorization. Technical report, UCLA, 2001. [15] Michael Littman, Richard Sutton, and Satinder Singh. Predictive representations of state. In Advances in Neural Information Processing Systems (NIPS), 2002. [16] Judea Pearl. Causality: models, reasoning, and inference. Cambridge University Press, 2000. [17] Herbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 12:1371–1398, 2000. [18] Satinder Singh, Michael James, and Matthew Rudary. Predictive state representations: A new theory for modeling dynamical systems. In Proc. UAI, 2004. [19] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems: Theory, Implementation, Applications. Kluwer, 1996. [20] Tohru Katayama. Subspace Methods for System Identification. Springer-Verlag, 2005. [21] Daniel Hsu, Sham Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. In COLT, 2009. [22] Michael R. James, Ton Wessling, and Nikos A. Vlassis. Improving approximate value iteration using memories and predictive state representations. In AAAI, 2006. [23] Sajid Siddiqi, Byron Boots, and Geoffrey J. Gordon. Reduced-rank hidden Markov models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS-2010), 2010. [24] John N. Tsitsiklis and Benjamin Van Roy. 
Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives. IEEE Transactions on Automatic Control, 44(10):1840–1851, 1999. [25] David Choi and Benjamin Van Roy. A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning. Discrete Event Dynamic Systems, 16(2):207–239, 2006.
Identifying Dendritic Processing Aurel A. Lazar Department of Electrical Engineering Columbia University New York, NY 10027 aurel@ee.columbia.edu Yevgeniy B. Slutskiy∗ Department of Electrical Engineering Columbia University New York, NY 10027 ys2146@columbia.edu Abstract In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1 Introduction The nature of encoding and processing of sensory information in the visual, auditory and olfactory systems has been extensively investigated in the systems neuroscience literature. Many phenomenological [1, 2, 3] as well as mechanistic [4, 5, 6] models have been proposed to characterize and clarify the representation of sensory information on the level of single neurons. Here we investigate a class of phenomenological neural circuit models in which the time-domain linear processing takes place in the dendritic tree and the resulting aggregate dendritic current is encoded in the spike domain by a spiking neuron. In block diagram form, these neural circuit models are of the [Filter]-[Spiking Neuron] type and as such represent a fundamental departure from the standard Linear-Nonlinear-Poisson (LNP) model that has been used to characterize neurons in many sensory systems, including vision [3, 7, 8], audition [2, 9] and olfaction [1, 10]. 
While the LNP model also includes a linear processing stage, it describes spike generation using an inhomogeneous Poisson process. In contrast, the [Filter]-[Spiking Neuron] model incorporates the temporal dynamics of spike generation and allows one to consider more biologically-plausible spike generators. We perform identification of dendritic processing in the [Filter]-[Spiking Neuron] model assuming that input signals belong to the space of bandlimited functions, a class of functions that closely model natural stimuli in sensory systems. Under this assumption, we show that the identification of dendritic processing in the above neural circuit becomes mathematically tractable. Using simulated data, we demonstrate that under certain conditions it is possible to identify the impulse response of the dendritic processing filter with arbitrary precision. Furthermore, we show that the identification results fundamentally depend on the bandwidth of test stimuli. The paper is organized as follows. The phenomenological neural circuit model and the identification problem are formally stated in section 2. The Neuron Identification Machine and its realization as an algorithm for identifying dendritic processing is extensively discussed in section 3. Performance of the identification algorithm is exemplified in section 4. Finally, section 5 concludes our work. ∗The names of the authors are alphabetically ordered. 2 Problem Statement In what follows we assume that the dendritic processing is linear [11] and any nonlinear effects arise as a result of the spike generation mechanism [12]. We use linear BIBO-stable filters (not necessarily causal) to describe the computation performed by the dendritic tree. Furthermore, a spiking neuron model (as opposed to a rate model) is used to model the generation of action potentials or spikes. We investigate a general neural circuit comprised of a filter in cascade with a spiking neuron model (Fig. 1(a)).
This circuit is an instance of a Time Encoding Machine (TEM), a nonlinear asynchronous circuit that encodes analog signals in the time domain [13, 14]. Examples of spiking neuron models considered in this paper include the ideal IAF neuron, the leaky IAF neuron and the threshold-and-feedback (TAF) neuron [15]. However, the methodology developed below can be extended to many other spiking neuron models as well. We break down the full identification of this circuit into two problems: (i) identification of linear operations in the dendritic tree and (ii) identification of spike generator parameters. First, we consider problem (i) and assume that parameters of the spike generator can be obtained through biophysical experiments. Then we show how to address (ii) by exploring the space of input signals. We consider a specific example of a neural circuit in Fig. 1(a) and carry out a full identification of that circuit. Figure 1: Problem setup. (a) The dendritic processing is described by a linear filter and spikes are produced by a (nonlinear) spiking neuron model. (b) An example of a neural circuit in (a) is a linear filter in cascade with the ideal IAF neuron. An input signal u is first passed through a filter with an impulse response h. The output of the filter v(t) = (u ∗ h)(t), t ∈ R, is then encoded into a time sequence (tk)k∈Z by the ideal IAF neuron. 3 Neuron Identification Machines A Neuron Identification Machine (NIM) is the realization of an algorithm for the identification of the dendritic processing filter in cascade with a spiking neuron model. First, we introduce several definitions needed to formally address the problem of identifying dendritic processing. We then consider the [Filter]-[Ideal IAF] neural circuit.
We derive an algorithm for a perfect identification of the impulse response of the filter and provide conditions for the identification with arbitrary precision. Finally, we extend our results to the [Filter]-[Leaky IAF] and [Filter]-[TAF] neural circuits. 3.1 Preliminaries We model signals u = u(t), t ∈R, at the input to a neural circuit as elements of the Paley-Wiener space Ξ =  u ∈L2(R) supp (Fu) ⊆[−Ω, Ω] , i.e., as functions of finite energy having a finite spectral support (F denotes the Fourier transform). Furthermore, we assume that the dendritic processing filters h = h(t), t ∈R, are linear, BIBO-stable and have a finite temporal support, i.e., they belong to the space H =  h ∈L1(R) supp(h) ⊆[T1, T2] . Definition 1. A signal u ∈Ξ at the input to a neural circuit together with the resulting output T = (tk)k∈Z of that circuit is called an input/output (I/O) pair and is denoted by (u, T). Definition 2. Two neural circuits are said to be Ξ-I/O-equivalent if their respective I/O pairs are identical for all u ∈Ξ. Definition 3. Let P : H →Ξ with (Ph)(t) = (h ∗g)(t), where (h ∗g) denotes the convolution of h with the sinc kernel g ≜sin(Ωt)/(πt), t ∈R. We say that Ph is the projection of h onto Ξ. Definition 4. Signals {ui}N i=1 are said to be linearly independent if there do not exist real numbers {αi}N i=1, not all zero, and real numbers {βi}N i=1 such that PN i=1 αiui(t + βi) = 0. 2 3.2 NIM for the [Filter]-[Ideal IAF] Neural Circuit An example of a model circuit in Fig. 1(a) is the [Filter]-[Ideal IAF] circuit shown in Fig. 1(b). In this circuit, an input signal u ∈Ξ is passed through a filter with an impulse response (kernel) h ∈H and then encoded by an ideal IAF neuron with a bias b ∈R+, a capacitance C ∈R+ and a threshold δ ∈R+. The output of the circuit is a sequence of spike times (tk)k∈Z that is available to an observer. 
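A minimal forward-Euler sketch of this encoder; the bias, capacitance, threshold, and test stimulus below are hypothetical values we chose, and the fixed time step is only an approximation of the continuous-time circuit:

```python
import numpy as np

def iaf_encode(v, dt, b=1.5, C=1.0, delta=0.01):
    """Ideal IAF neuron: accumulate (b + v(t))/C; emit a spike and reset the
    membrane voltage to 0 whenever the threshold delta is reached."""
    y, spikes = 0.0, []
    for i, vi in enumerate(v):
        y += (b + vi) * dt / C
        if y >= delta:
            spikes.append((i + 1) * dt)
            y = 0.0  # "voltage reset to 0"
    return np.array(spikes)

# Encode one second of a bandlimited test signal (hypothetical parameters).
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
tk = iaf_encode(0.2 * np.sin(2 * np.pi * 10 * t), dt)
```

Each inter-spike interval [tk, tk+1] then yields one measurement of the filter output v; this is the content of the t-transform discussed next.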
This neural circuit is an instance of a TEM and its operation can be described by a set of equations (formally known as the t-transform [13]): Z tk+1 tk (u ∗h)(s)ds = qk, k ∈Z, (1) where qk ≜Cδ−b(tk+1−tk). Intuitively, at every spike time tk+1 the ideal IAF neuron is providing a measurement qk of the signal v(t) = (u ∗h)(t) on the interval t ∈[tk, tk+1]. Proposition 1. The left-hand side of the t-transform in (1) can be written as a bounded linear functional Lk : Ξ →R with Lk(Ph) = φk, Ph , where φk(t) = 1[tk, tk+1] ∗˜u  (t) and ˜u = u(−t), t ∈R, denotes the involution of u. Proof: Since (u∗h) ∈Ξ, we have (u∗h)(t) = (u∗h∗g)(t), t ∈R, and therefore R tk+1 tk (u∗h)(s)ds = R tk+1 tk (u∗Ph)(s)ds. Now since Ph is bounded, the expression on the right-hand side of the equality is a bounded linear functional Lk : Ξ →R with Lk(Ph) = Z tk+1 tk (u ∗Ph)(s)ds = φk, Ph , (2) where φk ∈Ξ and the last equality follows from the Riesz representation theorem [16]. To find φk, we use the fact that Ξ is a Reproducing Kernel Hilbert Space (RKHS) [17] with a kernel K(s, t) = g(t −s). By the reproducing property of the kernel [17], we have φk(t) = φk, Kt = Lk(Kt). Letting ˜u = u(−t) denote the involution of u and using (2), we obtain φk(t) = 1[tk, tk+1] ∗˜u, Kt = 1[tk, tk+1] ∗˜u  (t). □ Proposition 1 effectively states that the measurements (qk)k∈Z of v(t) = (u ∗h)(t) can be also interpreted as the measurements of (Ph)(t). A natural question then is how to identify Ph from (qk)k∈Z. To that end, we note that an observer can typically record both the input u = u(t), t ∈R and the output T = (tk)k∈Z of a neural circuit. Since (qk)k∈Z can be evaluated from (tk)k∈Z using the definition of qk in (1), the problem is reduced to identifying Ph from an I/O pair (u, T). Theorem 1. Let u be bounded with supp(Fu) = [−Ω, Ω], h ∈H and b/(Cδ) > Ω/π. 
Then given an I/O pair (u, T) of the [Filter]-[Ideal IAF] neural circuit, Ph can be perfectly identified as (Ph)(t) = X k∈Z ckψk(t), where ψk(t) = g(t −tk), t ∈R. Furthermore, c = G+q with G+ denoting the Moore-Penrose pseudoinverse of G, [G]lk = R tl+1 tl u(s −tk)ds for all k, l ∈Z, and [q]l = Cδ −b(tl+1 −tl). Proof: By appropriately bounding the input signal u, the spike density (the average number of spikes over arbitrarily long time intervals) of an ideal IAF neuron is given by D = b/(Cδ) [14]. Therefore, for D > Ω/π the set of the representation functions (ψk)k∈Z, ψk(t) = g(t −tk), is a frame in Ξ [18] and (Ph)(t) = P k∈Z ckψk(t). To find the coefficients ck we note from (2) that ql = φl, Ph = X k∈Z ck φl, ψk = X k∈Z [G]lkck, (3) where [G]lk = φl, ψk = 1[tl, tl+1] ∗˜u, g( · −tk) = R tl+1 tl u(s −tk)ds. Writing (3) in matrix form, we obtain q = Gc with [q]l = ql and [c]k = ck. Finally, the coefficients ck, k ∈Z, can be computed as c = G+q. □ 3 Remark 1. The condition b/(Cδ) > Ω/π in Theorem 1 is a Nyquist-type rate condition. Thus, perfect identification of the projection of h onto Ξ can be achieved for a finite average spike rate. Remark 2. Ideally, we would like to identify the kernel h ∈H of the filter in cascade with the ideal IAF neuron. Note that unlike h, the projection Ph belongs to the space L2(R), i.e., in general Ph is not BIBO-stable and does not have a finite temporal support. Nevertheless, it is easy to show that (Ph)(t) approximates h(t) arbitrarily closely on t ∈[T1, T2], provided that the bandwidth Ωof u is sufficiently large. Remark 3. If the impulse response h(t) = δ(t), i.e., if there is no processing on the (arbitrary) input signal u(t), then ql = R tl+1 tl (u ∗h)(s)ds = R tl+1 tl u(s)ds, l ∈Z. Furthermore, Z tl+1 tl (u ∗Ph)(s)ds = Z tl+1 tl (u ∗h)(s)ds = Z tl+1 tl u(s)ds = Z tl+1 tl (u ∗g)(s)ds, l ∈Z. The above holds if and only if (Ph)(t) = g(t), t ∈R. 
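As a sketch of the linear system behind Theorem 1: given the recorded stimulus u and the spike times, the matrix G and the measurement vector q can be assembled as below, after which c = G⁺q gives the coefficients of the reconstruction. The trapezoid quadrature, the function name, and the default parameter values are our own choices:

```python
import numpy as np

def build_G_q(u, spikes, b=1.0, C=1.0, delta=0.01, quad_pts=200):
    """Assemble [G]_lk = int_{t_l}^{t_{l+1}} u(s - t_k) ds (composite trapezoid
    rule with quad_pts nodes per interval) and [q]_l = C*delta - b*(t_{l+1} - t_l)."""
    t = np.asarray(spikes, dtype=float)
    L = len(t) - 1
    G = np.empty((L, L + 1))
    for l in range(L):
        s = np.linspace(t[l], t[l + 1], quad_pts)
        w = np.full(quad_pts, s[1] - s[0])  # trapezoid weights: half weight at ends
        w[0] *= 0.5
        w[-1] *= 0.5
        for k in range(L + 1):
            G[l, k] = u(s - t[k]) @ w
    q = C * delta - b * np.diff(t)
    return G, q

# Coefficients of the reconstruction (Ph)(t) = sum_k c_k g(t - t_k):
# G, q = build_G_q(u, spikes); c = np.linalg.pinv(G) @ q
```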
In other words, if h(t) = δ(t), then we identify Pδ(t) = sin(Ωt)/(πt), the projection of δ(t) onto Ξ. Corollary 1. Let u be bounded with supp(Fu) = [−Ω, Ω], h ∈ H and b/(Cδ) > Ω/π. Furthermore, let W = (τ1, τ2) so that (τ2 − τ1) > (T2 − T1) and let τ = (τ1 + τ2)/2, T = (T1 + T2)/2. Then given an I/O pair (u, T) of the [Filter]-[Ideal IAF] neural circuit, (Ph)(t) can be approximated arbitrarily closely on t ∈ [T1, T2] by

ĥ(t) = Σ_{k: tk∈W} ck ψk(t),

where ψk(t) = g(t − (tk − τ + T)), c = G⁺q, [G]lk = ∫_{tl}^{tl+1} u(s − (tk − τ + T)) ds and [q]l = Cδ − b(tl+1 − tl) for all k, l ∈ Z, provided that |τ1| and |τ2| are sufficiently large. Proof: Through a change of coordinates t → t′ = (t − τ + T) illustrated in Fig. 2, we obtain W′ = [τ1 − τ + T, τ2 − τ + T] ⊃ [T1, T2] and the set of spike times (tk − τ + T)_{k: tk∈W}. Note that W′ → R as (τ2 − τ1) → ∞. The rest of the proof follows from Theorem 1 and the fact that lim_{t→±∞} g(t) = 0. □ From Corollary 1 we see that if the [Filter]-[Ideal IAF] neural circuit is producing spikes with a spike density above the Nyquist rate, then we can use a set of spike times (tk)_{k: tk∈W} from a single temporal window W to identify (Ph)(t) to an arbitrary precision on [T1, T2]. This result is not surprising. Since the spike density is above the Nyquist rate, we could have also used a canonical time decoding machine (TDM) [13] to first perfectly recover the filter output v(t) and then employ one of the widely available LTI system techniques to estimate (Ph)(t). However, the problem becomes much more difficult if the spike density is below the Nyquist rate. Figure 2: Change of coordinates in Corollary 1. (a) Top: example of a causal impulse response h(t) with supp(h) = [T1, T2], T1 = 0. Middle: projection Ph of h onto some Ξ. Note that Ph is not causal and supp(Ph) = R. Bottom: h(t) and (Ph)(t) are plotted on the same set of axes.
(b) Top: an input signal u(t) with supp(Fu) = [−Ω, Ω]. Middle: only red spikes from a temporal window W = (τ1, τ2) are used to construct ˆh(t). Bottom: Ph is approximated by ˆh(t) on t ∈[T1, T2] using spike times (tk −τ + T)k:tk∈W . 4 Theorem 2. (The Neuron Identification Machine) Let {ui | supp(Fui) = [−Ω, Ω] }N i=1 be a collection of N linearly independent and bounded stimuli at the input to a [Filter]-[Ideal IAF] neural circuit with a dendritic processing filter h ∈H. Furthermore, let Ti = (ti k)k∈Z denote the output of the neural circuit in response to the bounded input signal ui. If PN j=1 b Cδ > Ω π , then (Ph)(t) can be identified perfectly from the collection of I/O pairs {(ui, Ti)}N i=1. Proof: Consider the SIMO TEM [14] depicted in Fig. 3(a). h(t) is the input to a population of N [Filter]-[Ideal IAF] neural circuits. The spikes (ti k)k∈Z at the output of each neural circuit represent distinct measurements qi k = φi k, Ph of (Ph)(t). Thus we can think of the qi k’s as projections of Ph onto (φ1 1, φ1 2, . . . , φ1 k, . . . , φN 1 , φN 2 , . . . , φN k , . . . ). Since the filters are linearly independent [14], it follows that, if {ui}N i=1 are appropriately bounded and PN j=1 b Cδ > Ω π or equivalently if the number of neurons N > ΩCδ πb = Ω πD, the set of functions { (ψj k)k∈Z }N j=1 with ψj k(t) = g(t −tj k), is a frame for Ξ [14], [18]. Hence (Ph)(t) = N X j=1 X k∈Z cj kψj k(t). (4) To find the coefficients ck, we take the inner product of (4) with φ1 l (t), φ2 l (t), ..., φN l (t): φi l, Ph = X k∈Z c1 k φi l, ψ1 k + X k∈Z c2 k φi l, ψ2 k + · · · + X k∈Z cN k φi l, ψN k ≡ qi l, for i = 1, . . . , N, l ∈Z. Letting [Gij]lk = φi l, ψj k , we obtain qi l = X k∈Z  Gi1 lk c1 k + X k∈Z  Gi2 lk c2 k + · · · + X k∈Z  GiN lk cN k , (5) for i = 1, . . . , N, l ∈Z. Writing (5) in matrix form, we have q = Gc, where q = [q1, q2, . . . , qN]T with [qi]l = Cδ −b(ti l+1 −ti l), [Gij]lk = R ti l+1 ti l ui(s −tj k)ds and c = [c1, c2, . . . , cN]T . 
Finally, to find the coefficients c^j_k, k ∈ Z, we compute c = G⁺q. □

Corollary 2. Let {ui}_{i=1}^N be as before, h ∈ H and Σ_{j=1}^N b/(Cδ) > Ω/π. Furthermore, let W = (τ1, τ2) with (τ2 − τ1) > (T2 − T1), and let τ = (τ1 + τ2)/2, T = (T1 + T2)/2. Then, given the I/O pairs {(ui, Ti)}_{i=1}^N of the [Filter]-[Ideal IAF] neural circuit, (Ph)(t) can be approximated arbitrarily closely on t ∈ [T1, T2] by ĥ(t) = Σ_{j=1}^N Σ_{k: t^j_k ∈ W} c^j_k ψ^j_k(t), where ψ^j_k(t) = g(t − (t^j_k − τ + T)), c = G⁺q, with [G^{ij}]_{lk} = ∫_{t^i_l}^{t^i_{l+1}} ui(s − (t^j_k − τ + T)) ds, q = [q^1, q^2, ..., q^N]^T, [q^i]_l = Cδ − b(t^i_{l+1} − t^i_l) for all k, l ∈ Z, provided that |τ1| and |τ2| are sufficiently large.

Proof: Similar to Corollary 1. □

Corollary 3. Let supp(Fu) = [−Ω, Ω], h ∈ H and let {W^i ≜ (τ^i_1, τ^i_2)}_{i=1}^N be a collection of windows of fixed length (τ^i_2 − τ^i_1) > (T2 − T1), i = 1, 2, ..., N. Furthermore, let τ^i = (τ^i_1 + τ^i_2)/2, T = (T1 + T2)/2, and let (t^i_k)_{k∈Z} denote those spikes of the I/O pair (u, T) that belong to W^i. Then Ph can be approximated arbitrarily closely on [T1, T2] by

ĥ(t) = Σ_{j=1}^N Σ_{k: tk ∈ W^j} c^j_k ψ^j_k(t),

where ψ^j_k(t) = g(t − (t^j_k − τ^j + T)), c = G⁺q with [G^{ij}]_{lk} = ∫_{t^i_l}^{t^i_{l+1}} u(s − (t^j_k − τ^j + T)) ds, q = [q^1, q^2, ..., q^N]^T, [q^i]_l = Cδ − b(t^i_{l+1} − t^i_l) for all k, l ∈ Z, provided that the number of non-overlapping windows N is sufficiently large.

Proof: The input signal u restricted to the collection of intervals {W^i ≜ (τ^i_1, τ^i_2)}_{i=1}^N plays the same role here as the test stimuli {ui}_{i=1}^N in Corollary 2. See also Remark 9 in [14]. □

Figure 3: The Neuron Identification Machine. (a) SIMO TEM interpretation of the identification problem with (t^i_k) = (tk)_{k: tk ∈ W^i}, i = 1, 2, ..., N.
(b) Block diagram of the algorithm in Theorem 2.

Remark 4. The methodology presented in Theorem 2 can easily be applied to other spiking neuron models. For example, for the leaky IAF neuron we have

[q^i]_l = Cδ − bRC[1 − exp((t^i_l − t^i_{l+1})/(RC))],   [G^{ij}]_{lk} = ∫_{t^i_l}^{t^i_{l+1}} ui(s − t^j_k) exp((s − t^i_{l+1})/(RC)) ds.

Similarly, for a threshold-and-feedback (TAF) neuron [15] with a bias b ∈ R⁺, a threshold δ ∈ R⁺, and a causal feedback filter with an impulse response f(t), t ∈ R, we obtain

[q^i]_l = δ − b + Σ_{k<l} f(t^i_l − t^i_k),   [G^{ij}]_{lk} = ui(t^i_l − t^j_k).

3.3 Identifying Parameters of the Spiking Neuron Model

If the parameters of the spiking neuron model cannot be obtained through biophysical experiments, we can use additional input stimuli to derive a neural circuit that is Ξ-I/O-equivalent to the original circuit. For example, consider the circuit in Fig. 1(a). Rewriting the t-transform in (1), we obtain

(1/b) ∫_{tk}^{tk+1} (u ∗ h)(s) ds = Cδ/b − (tk+1 − tk)  ⟺  ∫_{tk}^{tk+1} (u ∗ h′)(s) ds = q′_k,

where h′(t) = h(t)/b, t ∈ R, and q′_k = Cδ/b − (tk+1 − tk). Setting u = 0, we can now compute Cδ/b = (tk+1 − tk). Next, we can use the NIM described in Section 3.2 to identify with arbitrary precision the projection Ph′ of h′ onto Ξ. We thus identify a [Filter]-[Ideal IAF] circuit with a filter impulse response Ph′, a bias b′ = 1, a capacitance C′ = 1 and a threshold δ′ = Cδ/b. This neural circuit is Ξ-I/O-equivalent to the circuit in Fig. 1(b).

4 Examples

We now demonstrate the performance of the identification algorithm in Corollary 3. We model the dendritic processing filter using a causal linear kernel h(t) = c·e^{−αt}((αt)³/3! − (αt)⁵/5!) with t ∈ [0, 0.1 s], c = 3 and α = 200. The general form of this kernel was suggested in [19] as a plausible approximation to the temporal structure of a visual receptive field. We use two different bandlimited signals and show that the identification results fundamentally depend on the signal bandwidth Ω. In Fig. 4 the signal is bandlimited to Ω = 2π·25 rad/s, whereas in Fig.
5 it is bandlimited to Ω = 2π·100 rad/s. Although in principle the kernel h has an infinite bandwidth (having a finite temporal support), its effective bandwidth is Ω ≈ 2π·100 rad/s (Fig. 6(b)). Thus in Fig. 4 we reconstruct the projection Ph of the kernel h onto Ξ with Ω = 2π·25 rad/s, whereas in Fig. 5 we reconstruct nearly h itself.

Figure 4: Identifying dendritic processing in the [Filter]-[Ideal IAF] neural circuit, Ω = 2π·25 rad/s. (a) Signal u(t) at the input to the circuit. (b) The output of the circuit is a set of spikes at times (tk)_{k∈Z}; the spike density is D = 40 Hz. Note that only 25 spikes from 5 temporal windows are used to construct ĥ. (c) The RMSE between ĥ and Ph is 2.04 × 10⁻⁴; the RMSE between ĥ and h is 1.53 × 10⁻¹. (d)-(f) Spectral estimates of u, h and v = u ∗ h. Note that supp(Fu) = [−Ω, Ω] = supp(Fv) but supp(Fh) ⊃ [−Ω, Ω]. In other words, both u, v ∈ Ξ but h ∉ Ξ.

Figure 5: Identifying dendritic processing of the [Filter]-[Ideal IAF] neural circuit, Ω = 2π·100 rad/s. (a) Signal u(t) at the input to the circuit. (b) The output of the circuit is a set of spikes at times (tk)_{k∈Z}; the spike density is D = 40 Hz. Note that only 43 spikes from 10 temporal windows are used to construct ĥ. (c) The RMSE between ĥ and Ph is 1.13 × 10⁻³; the RMSE between ĥ and h is 4.58 × 10⁻³. (d)-(f) Spectral estimates of u, h and v = u ∗ h. Note that supp(Fu) = [−Ω, Ω] = supp(Fv) but supp(Fh) ⊃ [−Ω, Ω]. In other words, both u, v ∈ Ξ but h ∉ Ξ.

Next, we evaluate the filter identification error as a function of the number of temporal windows N and the stimulus bandwidth Ω. By increasing N, we can approximate the projection Ph of h with arbitrary precision (Fig. 6(a)). Note that the estimate ĥ converges to Ph faster for a higher average spike rate (spike density D) of the neuron. At the same time, by increasing the stimulus bandwidth Ω, we can approximate h itself with arbitrary precision (Fig. 6(b)).

Figure 6: The Filter Identification Error.
(a) MSE(ˆh, Ph) as a function of the number of temporal windows N. The larger the neuron spike density D, the faster the algorithm converges. The impulse response h is the same as in Fig. 4, 5 and the input signal bandwidth is Ω= 2π ·100 rad/s. (b) MSE(ˆh, h) as a function of the input signal bandwidth Ω. The larger the bandwidth, the better the estimate ˆh approximates h. Note that significant improvement is seen even for Ω> 2π·100 rad/s, which is roughly the effective bandwidth of h. 5 Conclusion Previous work in system identification of neural circuits (see [20] and references therein) calls for parameter identification using white noise input stimuli. The identification process for, e.g., the LNP model entails identification of the linear filter, followed by a ‘best-of-fit’ procedure to find the nonlinearity. The performance of such an identification method has not been analytically characterized. In our work, we presented the methodology for identifying dendritic processing in simple [Filter][Spiking Neuron] models from a single input stimulus. The discussed spiking neurons include the ideal IAF neuron, the leaky IAF neuron and the threshold-and-fire neuron. However, the methods presented in this paper are applicable to many other spiking neuron models as well. The algorithm of the Neuron Identification Machine is based on the natural assumption that the dendritic processing filter has a finite temporal support. Therefore, its action on the input stimulus can be observed in non-overlapping temporal windows. The filter is recovered with arbitrary precision from an input/output pair of a neural circuit, where the input is a single signal assumed to be bandlimited. Remarkably, the algorithm converges for a very small number of spikes. This should be contrasted with the reverse correlation and spike-triggered average methods [20]. Finally, the work presented here will be extended to spiking neurons with random parameters. 
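To make the recovery machinery concrete, here is a minimal numerical sketch of the t-transform measurements q, the matrix G, and the pseudoinverse step c = G⁺q described above. All parameter values and the test stimulus are illustrative assumptions (a single recording with one window; the τ, T re-centering is omitted for simplicity), not the experiments reported in Section 4.

```python
import numpy as np

# Toy sketch of NIM recovery: identify (Ph)(t) from one I/O pair of a
# [Filter]-[Ideal IAF] circuit. Illustrative parameters, not the paper's setup.
Om = 2 * np.pi * 25.0              # stimulus bandwidth [rad/s]
b, C, delta = 3.0, 1.0, 0.03       # bias, capacitance, threshold:
                                   # spike density b/(C*delta) = 100 > Om/pi = 50

def g(t):                          # sin(Om*t)/(pi*t), reproducing kernel of Xi
    return (Om / np.pi) * np.sinc(Om * np.asarray(t) / np.pi)

rng = np.random.default_rng(0)
amps = rng.standard_normal(30)
ctrs = np.linspace(0.0, 0.5, 30)

def u(t):                          # a bandlimited test stimulus
    t = np.atleast_1d(t).astype(float)
    return (amps * np.sinc(Om * (t[:, None] - ctrs) / np.pi)).sum(axis=1)

def h(t):                          # causal filter, supp(h) = [0, 0.1] s
    a = 200.0 * np.clip(t, 0.0, None)
    return np.where((t >= 0) & (t <= 0.1),
                    3.0 * np.exp(-a) * (a ** 3 / 6.0 - a ** 5 / 120.0), 0.0)

dt = 2e-4
tt = np.arange(0.0, 0.5, dt)
v = np.convolve(u(tt), h(tt))[:tt.size] * dt       # filter output v = u * h

acc, tk = 0.0, []                  # ideal IAF: integrate (v + b)/C to threshold
for ti, vi in zip(tt, v):
    acc += (vi + b) * dt / C
    if acc >= delta:
        tk.append(ti)
        acc = 0.0                  # voltage reset to 0
tk = np.array(tk)

# [q]_l = C*delta - b*(t_{l+1} - t_l); [G]_lk = int_{t_l}^{t_{l+1}} u(s - t_k) ds
q = C * delta - b * np.diff(tk)
G = np.empty((tk.size - 1, tk.size))
for l in range(tk.size - 1):
    s = np.arange(tk[l], tk[l + 1], dt)
    for k in range(tk.size):
        G[l, k] = u(s - tk[k]).sum() * dt          # Riemann-sum integral
c = np.linalg.pinv(G) @ q                          # c = G^+ q
h_hat = lambda t: sum(ck * g(np.asarray(t) - k) for ck, k in zip(c, tk))
```

Here `h_hat` plays the role of ĥ, an estimate of the projection (Ph)(t); with a longer recording and a denser spike train the approximation tightens, as in Fig. 6(a).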
Acknowledgement

The work presented here was supported by NIH under grant number R01DC008701-01.

References
[1] Maria N. Geffen, Bede M. Broome, Gilles Laurent, and Markus Meister. Neural encoding of rapidly fluctuating odors. Neuron, 61(4):570–586, 2009.
[2] Sean J. Slee, Matthew H. Higgs, Adrienne L. Fairhall, and William J. Spain. Two-dimensional time coding in the auditory brainstem. The Journal of Neuroscience, 25(43):9978–9988, October 2005.
[3] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46:945–956, 2005.
[4] Daniel P. Dougherty, Geraldine A. Wright, and Alice C. Yew. Computational model of the cAMP-mediated sensory response and calcium-dependent adaptation in vertebrate olfactory receptor neurons. Proceedings of the National Academy of Sciences, 102(30):10415–10420, 2005.
[5] Yuqiao Gu, Philippe Lucas, and Jean-Pierre Rospars. Computational model of the insect pheromone transduction cascade. PLoS Computational Biology, 5(3), 2009.
[6] Zhuoyi Song, Daniel Coca, Stephen Billings, Marten Postma, Roger C. Hardie, and Mikko Juusola. Biophysical modeling of a Drosophila photoreceptor. In Lecture Notes in Computer Science, volume 5863 of Proceedings of the 16th International Conference on Neural Information Processing: Part I, pages 57–71. Springer-Verlag, 2009.
[7] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[8] Jonathan W. Pillow and Eero P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6:414–428, 2006.
[9] J. J. Eggermont, A. M. H. J. Aertsen, and P. I. M. Johannesma. Quantitative characterization procedure for auditory neurons based on the spectro-temporal receptive field. Hearing Research, 10, 1983.
[10] Anmo J. Kim, Aurel A. Lazar, and Yevgeniy B. Slutskiy. System identification of Drosophila olfactory sensory neurons. Journal of Computational Neuroscience, 2010.
[11] Sydney Cash and Rafael Yuste. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron, 22:383–394, 1999.
[12] Jonathan Pillow. Neural coding and the statistical modeling of neuronal responses. PhD thesis, New York University, May 2005.
[13] Aurel A. Lazar and László T. Tóth. Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems-I: Regular Papers, 51(10):2060–2073, October 2004.
[14] Aurel A. Lazar and Eftychios A. Pnevmatikakis. Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation, 20(11):2715–2744, November 2008.
[15] Justin Keat, Pamela Reinagel, R. Clay Reid, and Markus Meister. Predicting every spike: A model for the responses of visual neurons. Neuron, 30:803–817, June 2001.
[16] Michael Reed and Barry Simon. Methods of Modern Mathematical Physics, Vol. 1: Functional Analysis. Academic Press, 1980.
[17] Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[18] Ole Christensen. An Introduction to Frames and Riesz Bases. Applied and Numerical Harmonic Analysis. Birkhäuser, 2003.
[19] Edward H. Adelson and James R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2), February 1985.
[20] Michael C.-K. Wu, Stephen V. David, and Jack L. Gallant. Complete functional characterization of sensory neurons by system identification. Annual Review of Neuroscience, 29:477–505, 2006.
On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient Jie Tang and Pieter Abbeel Department of Electrical Engineering and Computer Science University of California, Berkeley Berkeley, CA 94709 {jietang, pabbeel}@eecs.berkeley.edu Abstract Likelihood ratio policy gradient methods have been some of the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (i) using the past experience to estimate only the gradient of the expected return U(θ) at the current policy parameterization θ, rather than to obtain a more complete estimate of U(θ), and (ii) using past experience under the current policy only rather than using all past experience to improve the estimates. We present a new policy search method, which leverages both of these observations as well as generalized baselines—a new technique which generalizes commonly used baseline techniques for policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds. 1 Introduction Policy gradient methods have been some of the most effective learning algorithms for dynamic control tasks in robotics. They have been applied to a variety of complex real-world reinforcement learning problems, such as hitting a baseball with an articulated arm robot [1], constrained humanoid robotic motion planning [2], and learning gaits for legged robots [3, 4, 5]. For such robotics tasks real-world trials are typically the most time consuming factor in the learning process. Making efficient use of limited experience is crucial for good performance. In this paper we describe a novel connection between likelihood ratio based policy gradient methods and importance sampling. 
Specifically, we show that the likelihood ratio policy gradient estimate is equivalent to the gradient of an importance sampled estimate of the expected return function estimated using only data from the current policy. This insight indicates that likelihood ratio policy gradients are quite naive in terms of data use, and suggests an opportunity for novel algorithms which use all past data more efficiently by working with the importance sampled expected return function directly. Our main contributions are as follows. First, we develop algorithms for global search over the importance sampled expected return function, allowing us to make more progress for a given amount of experience. Our approach uses estimates of the importance sampling variance to constrain the search in a principled way. Second, we derive generalizations of optimal policy gradient baselines which are applicable to the importance sampled expected return function. Section 2 describes preliminaries on Markov decision processes (MDPs), policy gradient methods and importance sampling. Section 3 describes the novel connection between importance sampling and likelihood ratio policy gradients, and Section 4 examines our novel minimum variance baselines. Section 5 outlines our proposed method. Section 6 relates our method to prior work. Section 7 demonstrates the effectiveness of the proposed methods on standard reinforcement learning testbeds. 1 2 Preliminaries Markov Decision Processes. A Markov decision process (MDP) is a tuple (S, A, T, R, D, γ, H), where S is a set of states; A is a set of actions/inputs; T = {P(·|s, u)}s,u is a set of state transition probabilities (P(·|s, u) is the state transition distribution upon taking action u in state s); R : S × A 7→R is the reward function; D is a distribution over states from which the initial state s0 is drawn; 0 < γ < 1 is the discount factor; and H is the horizon time of the MDP, so that the MDP terminates after H steps1. 
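To make the preliminaries concrete, the following sketch samples trajectories from a tiny made-up two-state MDP and computes the discounted return R(τ) = Σ_{t=0}^H γ^t R(st, ut); the chain dynamics, reward function, and horizon are illustrative assumptions, not a testbed from this paper.

```python
import numpy as np

# Toy 2-state MDP (illustrative): actions {0, 1}, reward 1 for taking
# action 1 in state 1, noisy random-walk transitions on {0, 1}.
rng = np.random.default_rng(0)
gamma, H = 0.9, 20

def step(s, u):
    s_next = (s + u + rng.integers(0, 2)) % 2   # transition T(.|s, u)
    r = 1.0 if (s == 1 and u == 1) else 0.0     # reward R(s, u)
    return s_next, r

def rollout(policy):
    s, traj = 0, []                             # s0 ~ D (deterministic here)
    for t in range(H + 1):
        u = policy(s)
        s_next, r = step(s, u)
        traj.append((s, u, r))
        s = s_next
    return traj

def ret(traj):                                  # R(tau) = sum_t gamma^t R(st, ut)
    return sum(gamma ** t * r for t, (_, _, r) in enumerate(traj))

traj = rollout(lambda s: rng.integers(0, 2))    # a uniformly random policy
print(ret(traj))
```

The quantities below (U(θ), its likelihood ratio gradient, and the importance-sampled estimates) are all expectations of exactly this kind of sampled return.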
A policy π is a mapping from states S to a probability distribution over the set of actions A. We will consider policies parameterized by a vector θ ∈ R^n. We denote the expected return of a policy πθ by

U(θ) = E_{P(τ;θ)}[Σ_{t=0}^H γ^t R(st, ut) | πθ] = Σ_τ P(τ; θ)R(τ).   (2.1)

Here P(τ; θ) is the probability distribution induced by the policy πθ over all possible state-action trajectories τ = (s0, u0, s1, u1, ..., sH, uH). We overload notation and let R(τ) = Σ_{t=0}^H γ^t R(st, ut) be the (discounted) sum of rewards accumulated along the state-action trajectory τ.

Likelihood Ratio Policy Gradient. Likelihood ratio policy gradient methods perform (stochastic) gradient ascent over the policy parameter space Θ to find a local optimum of U(θ). One well-known technique, REINFORCE [6, 7], expresses the gradient ∇θU(θ) as follows:

g = ∇θU(θ) = E_{P(τ;θ)}[∇θ log P(τ; θ)R(τ)] ≈ ĝ = (1/m) Σ_{i=1}^m ∇θ log P(τ^(i); θ)R(τ^(i)),

where the rightmost expression provides an unbiased estimate of the policy gradient from m sample paths {τ^(1), ..., τ^(m)} obtained by acting under policy πθ. Using the Markov assumption, we can decompose P(τ; θ) into a product of conditional probabilities and obtain ∇θ log P(τ^(i); θ) = Σ_{t=0}^H ∇θ log πθ(u^(i)_t | s^(i)_t). Hence no access to a dynamics model is required to compute an unbiased estimate of the policy gradient. REINFORCE has been shown to be moderately efficient in terms of the number of samples used [6, 7]. To reduce the variance, it is common to use baselines. Since E_P[∇θ log P(τ; θ)] = ∇θ Σ_τ P(τ; θ) = ∇θ 1 = 0, we can add b^⊤∇θ log P(τ; θ) (where b is a vector that can be optimized to minimize variance) to the REINFORCE gradient estimate without biasing it [8, 9]. Past work often used a scalar b, resulting in:

∇θU(θ) = E_{P(τ;θ)}[∇θ log P(τ; θ)(R(τ) − b)] ≈ ĝ = (1/m) Σ_{i=1}^m ∇θ log P(τ^(i); θ)(R(τ^(i)) − b).

Importance Sampling.
For a general function f and a probability measure P, computing a quantity of the form E_{P(X)}[f(X)] = ∫_x P(x)f(x)dx can be computationally challenging. The expectation is often approximated with a sample-based estimate. However, samples from P can be difficult to obtain, or P might have very low probability where f takes its largest values. Importance sampling provides an alternative that uses samples from a different distribution Q. Given samples from Q, we can estimate the expectation w.r.t. P as:

E_{P(X)}[f(X)] = E_{Q(X)}[(P(X)/Q(X)) f(X)] ≈ (1/m) Σ_{i=1}^m (P(x^(i))/Q(x^(i))) f(x^(i)), with x^(i) ∼ Q.

In the above, we assume Q(x) = 0 ⇒ P(x) = 0. Hence, one can sample from a different distribution Q and simply re-weight the samples to obtain an unbiased estimate. This can be readily leveraged to estimate the expected return of a stochastic policy [10] as follows:

Û(θ) = (1/m) Σ_{i=1}^m (P(τ^(i); θ)/Q(τ^(i))) R(τ^(i)), τ^(i) ∼ Q,   (2.2)

where we assume Q(τ) = 0 ⇒ P(τ; θ) = 0. If we choose Q(τ) = P(τ; θ′), then we are estimating the return of a policy πθ from sample paths obtained by acting according to a policy πθ′. Evaluating the importance weights does not require a dynamics model:

P(τ^(i); θ)/P(τ^(i); θ′) = Π_{t=0}^H πθ(ut|st) / Π_{t=0}^H πθ′(ut|st).

If we have samples from many different distributions P(τ; θ^(j)), a standard technique is to create a fused empirical distribution Q(τ) = (1/m) Σ_{j=1}^m P(τ; θ^(j)) to enable use of all past data [10].

Footnote 1: Any infinite horizon MDP with discounted rewards can be ϵ-approximated by a finite horizon MDP, using a horizon Hϵ = ⌈log_γ(ϵ(1 − γ)/R_max)⌉, where R_max = max_s |R(s)|.

3 Likelihood Ratio Policy Gradient via Importance Sampling

We now outline a novel connection between policy gradients and importance sampling. A set of trajectories {τ^(1), ..., τ^(m)} sampled from policy πθ∗ induces a distribution over paths Q(τ) = P(τ; θ∗). Let Û(θ∗) denote the importance sampled estimate of U(θ) at θ∗.
Using Equation (2.2), we have:

∂Û/∂θj (θ∗) = (1/m) Σ_{i=1}^m (1/Q(τ^(i))) (∂P(τ^(i); θ∗)/∂θj) R(τ^(i))
= (1/m) Σ_{i=1}^m (P(τ^(i); θ∗)/Q(τ^(i))) (∂log P(τ^(i); θ∗)/∂θj) R(τ^(i))
= (1/m) Σ_{i=1}^m (∂log P(τ^(i); θ∗)/∂θj) R(τ^(i))   (using Q(τ) = P(τ; θ∗)).   (3.1)

Equation (3.1) is the j-th entry of the likelihood ratio based estimate of the gradient of U(θ) at θ∗. This analysis shows that the standard likelihood ratio policy gradient can be interpreted as forming an importance sampling based estimate of the expected return from the runs under the current policy πθ∗, and then using this estimate of the expected return function only to estimate a gradient at θ∗. In doing so, it fails to make efficient use of the trials from past policies: (i) it only uses the gradient of the function Û(θ) at the point θ∗, rather than all the information provided by the function Û(θ), and (ii) it only uses the runs under the most recent policy πθ∗, rather than a more informed importance sampling based estimate that uses all past data. Instead of only using local information from a single policy to drive our learning, we can use the global information provided by Û(θ) using trials run under all past policies. Such importance sampling based methods (as have been proposed in [10]) should be able to learn from fewer trial runs than the currently widely popular likelihood ratio based methods.

Generalization to G(PO)MDP / Policy Gradient Theorem formulation. The observation that past rewards do not depend on future states or actions is leveraged by the G(PO)MDP [8] and Policy Gradient Theorem [11] variations on REINFORCE to reduce the variance of their gradient estimates. This same observation can also be leveraged when estimating the expected return function itself. Let τ1:t denote the state-action sequence experienced from time 1 through time t; then we have

U(θ) = Σ_τ P(τ; θ)R(τ) = Σ_τ Σ_{t=0}^H P(τ1:t; θ)R(st, ut).
(3.2)

For simplicity of notation we will continue to describe our approach in terms of the expression for U(θ) given in Equation (2.1), but our generalization of baselines and our policy search algorithm are equally applicable when using the expression for U(θ) in Equation (3.2).

4 Generalized Unbiased Baselines

Previous work has shown that the REINFORCE gradient estimate benefits greatly from the addition of an optimal baseline term [12, 9, 8]. In this section, we show that policy gradient baselines are special cases of a more general variance reduction technique. Our result generalizes policy gradient baselines in three ways: (i) it applies to estimating expectations of any random quantity, not just policy gradients; (ii) it allows for baseline matrices and higher-dimensional tensors, not just vectors; and (iii) it can be applied recursively to yield baseline terms for baselines, since baselines are themselves expectations.

Minimum Variance Unbiased Baselines. Given a random variable X ∼ Pθ(X), where Pθ is a parametric probability distribution with parameter θ, we have that E_{Pθ}[∇θ log Pθ(X)] = 0. Hence for any constant vector b and any scalar function h(X), the quantity (1/m) Σ_{i=1}^m (h(x^(i)) − b^⊤∇θ log Pθ(x^(i))), with x^(i) drawn from Pθ, is an unbiased estimator of the scalar E_{Pθ}[h(X)]. The variance of this estimator is minimized when the variance of the random variable g(X) = h(X) − b^⊤∇θ log Pθ(X) is minimized. This variance is given by:

Var_{Pθ}[h(X) − b^⊤∇θ log Pθ(X)] = E_{Pθ}[(h(X) − b^⊤∇θ log Pθ(X))²] − (E_{Pθ}[h(X) − b^⊤∇θ log Pθ(X)])².

As b^⊤E_{Pθ}[∇θ log Pθ(X)] = 0, the second term is independent of b. Setting the gradient of the first term with respect to b equal to zero yields the minimum variance baseline

b = E_{Pθ}[∇θ log Pθ(X)∇θ log Pθ(X)^⊤]^{−1} E_{Pθ}[∇θ log Pθ(X)h(X)].   (4.1)

The baselines commonly employed with REINFORCE, GPOMDP, and other likelihood ratio policy gradient methods can be derived as special cases of this generalized baseline [12].
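The effect of the minimum variance baseline (4.1) can be checked numerically. The following sketch uses a scalar toy example (X ~ N(θ, 1) with h(X) = X², both illustrative choices, not quantities from the paper): subtracting b·∇θ log Pθ(X) leaves the estimator's mean unchanged while shrinking its variance.

```python
import numpy as np

# Scalar instance of the minimum-variance baseline (4.1): X ~ N(theta, 1),
# h(X) = X^2, score = d/dtheta log N(x | theta, 1) = (x - theta).
rng = np.random.default_rng(3)
theta, m = 1.0, 500_000
x = theta + rng.standard_normal(m)
score = x - theta
h = x ** 2

# Sample version of (4.1): b = E[score^2]^{-1} E[score * h(X)]  (~2 here)
b = np.mean(score * h) / np.mean(score ** 2)

est_plain = h                      # plain Monte Carlo estimator of E[h(X)]
est_base = h - b * score           # same expectation, smaller variance
print(np.mean(est_plain), np.mean(est_base))   # both ~= theta^2 + 1 = 2
print(np.var(est_plain), np.var(est_base))     # ~6 vs ~2 for this example
```

The same recipe applies entrywise when h(X) is a vector or matrix, which is exactly the stacking construction used for the tensor baselines below.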
Minimum Variance Unbiased Baselines with Importance Sampling. When using importance sampling with x^(i) drawn from Q, we have an unbiased estimator of the form (1/m) Σ_{i=1}^m (Pθ(x^(i))/Q(x^(i))) (h(x^(i)) − b^⊤∇θ log Pθ(x^(i))), with the minimum variance baseline vector

b = E_Q[(Pθ(X)/Q(X)) ∇θ log Pθ(X) (Pθ(X)/Q(X)) ∇θ log Pθ(X)^⊤]^{−1} E_Q[(Pθ(X)/Q(X)) ∇θ log Pθ(X) (Pθ(X)/Q(X)) h(X)].   (4.2)

Baselines. The minimum variance technique extends naturally to vector-valued or matrix-valued random variables h(X). For each entry in h(X) we can compute a minimum variance baseline vector b using Equation (4.1) or (4.2). In general, if h(X) is an n-dimensional tensor, we can stack these baseline vectors into an (n + 1)-dimensional tensor. Indeed, in the case of REINFORCE we would obtain a baseline matrix, rather than a baseline scalar (as in the original work [7]) and rather than a vector baseline (as described in later work, such as [12]). The baselines themselves are estimated from sample data. Using standard policy gradient methods, it can be impractical to run enough trials to accurately fit such baselines. By using importance sampling to reuse data, we can use richer baseline terms in our estimators.

Recursive Baselines. The baselines are themselves composed of expectations, and it is possible to recursively insert minimum variance unbiased baseline terms into these expectations in order to reduce the variance of the baseline estimates. However, the number of baseline parameters being estimated increases rapidly in this recursive process. Moreover, if we estimate multiple expectations from the same set of samples, the estimates become correlated and the final result is no longer unbiased. In practice, these baselines can be regularized to match the amount of available data. In Section 8 we empirically investigate the performance of several different baseline schemes.

5 Policy Search Using Û

We propose the algorithm outlined in Figure 1.
It uses importance sampling with optimal generalized baselines to obtain estimates Û(θ) of the expected return function based on the data gathered so far. This estimator allows us to search for a θ that improves the expected return. The algorithm maintains a list of candidate policy parameters from which it searches for improvements. Memory-based search allows backtracking away from unpromising parts of the search space without taking additional, costly trials on the real platform.

Input: domain of policy parameters Θ, initial policy π_θ̂0
for i = 0 to ... do
  1. Run M trials under policy π_θ̂i
  2. Search within the ESS region:
     for j = 1 : i do
       θ_j ← θ̂_j
       while Û(θ_j) is improving do
         g_j ← step_direction(Û(θ_j))
         α_j ← ESS_aware_line_search(Û(θ_j), g_j)
         θ_j ← θ_j + α_j g_j
       end while
     end for
  3. Update policy: θ̂_{i+1} = arg max_{θ_j} Û(θ_j)
end for

Figure 1: Our policy search algorithm.

Estimate of Expected Returns: We use weighted importance sampling, and add a baseline to Equation (2.2):

Û(θ) = (1/Z) Σ_{i=1}^m (P(τ^(i); θ)/Q(τ^(i))) (R(τ^(i)) − b^⊤∇θ log P(τ^(i); θ)),  Z = Σ_{i=1}^m P(τ^(i); θ)/Q(τ^(i)),   (5.1)

where i indexes over all past trials, and Q is the empirical distribution over past trials (see Section 2).

Optimal Baseline: Applying Equation (4.2), we get the following sample-based estimate of the optimal baseline b for the estimate of the expected return function:²

b = [(1/m) Σ_{i=1}^m (P(τ^(i); θ)/Q(τ^(i)))² ∇θ log P(τ^(i); θ) ∇θ log P(τ^(i); θ)^⊤]^{−1} [(1/m) Σ_{i=1}^m (P(τ^(i); θ)/Q(τ^(i)))² ∇θ log P(τ^(i); θ) R(τ^(i))].   (5.2)

ESS Search Region: As our policy search steps away from areas of Θ where we have gathered sample data, the variance of our estimator Û increases and our function estimate becomes unreliable. The effective sample size ESS = M/(1 + Var(w_i)) is commonly used to measure the quality of an importance sampled estimate [13]; here the w_i are the normalized importance weights and M is the number of trials. Our policy search only considers parameter values θ with a sufficiently high ESS.
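The two estimators above can be sketched together on a toy problem. Below, one-step "trajectories" are drawn from a Gaussian policy u ~ N(θ′, 1) with reward R(u) = −(u − 3)²; the problem, its closed-form return, and all numbers are illustrative assumptions (and the baseline term of (5.1) is omitted for brevity):

```python
import numpy as np

# Toy sketch: weighted importance-sampled return estimate and the ESS
# criterion that bounds how far the search may step from the data.
rng = np.random.default_rng(2)
theta_old, m = 0.0, 100_000
u = theta_old + rng.standard_normal(m)        # trials gathered under pi_theta'
R = -(u - 3.0) ** 2
log_p = lambda th: -0.5 * (u - th) ** 2       # log-density up to a constant

def U_hat(theta):                             # weighted form of (5.1), no baseline
    w = np.exp(log_p(theta) - log_p(theta_old))
    return np.sum(w * R) / np.sum(w)          # Z normalizes the weights

def ess(theta):                               # ESS = M / (1 + Var(w_i))
    w = np.exp(log_p(theta) - log_p(theta_old))
    w = w / w.mean()                          # normalized weights, mean 1
    return m / (1.0 + w.var())

U_true = lambda th: -((th - 3.0) ** 2 + 1.0)  # closed form for this toy problem
print(U_hat(0.2), U_true(0.2))                # accurate near theta'
print(ess(0.2) > ess(2.0))                    # ESS shrinks as we step away
```

The search in Figure 1 uses exactly this trade-off: Û(θ) can be queried at any θ without new trials, but only θ values whose ESS stays high are trusted.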
Step Direction: We use the finite-difference gradient of bU as the step direction for the inner loop of the policy search. In theory, since every outer iteration searches for a local optimum within the ESS region, the choice of step direction affects only the amount of computation and not the number of trials required for convergence.3 Line Search: One issue with gradient based optimization methods is the need to choose the right step size. One solution is to use adaptive line search-based step size rules like the Armijo rule [15].4 For traditional likelihood ratio policy search methods this would require additional trials. By contrast, no new trials are required when using importance sampling.5 6 Prior Work Various past approaches use the idea of constructing a model of the system from sample data, which can be used to search for the optimal policy, e.g., [16], [10], [17]. In contrast to Sutton’s DYNA, our method attempts to directly optimize the expected return function by varying policy parameters rather than building a model for the environment. Cao [17] also uses importance sampling to reuse past data for estimating policy gradients, but focuses on estimating local gradient information rather than global surface information. The work of Peshkin and Shelton [10] is most similar in spirit to our policy search method. They use importance sampling to construct a “proxy” environment from sampled data which can be used to evaluate the expected return at arbitrary policies. They apply a hill-climbing policy search to this “proxy” surface. This technique does not use estimates of the importance sampling variance to restrict the search, does not use generalized minimum variance baselines, and does not use memory. Our experiments show that these improvements are necessary to outperform standard policy gradient methods across our test domains. 
Our general approach of estimating and optimizing the expected return function, rather than its gradient, allows for non-local policy steps. Recent EM-based policy search methods [18, 14] are able to make larger steps by optimizing a local lower bound on the expected return function, and they can use importance sampling to make better use of data. This lower-bound objective function and update step could be used in our memory-based approach in place of the finite-difference gradient step. We explained throughout the paper the relationship with earlier methods such as REINFORCE [7, 6] and GPOMDP [8, 9]. PEGASUS [19] is an efficient alternative policy search method, but it can only be used if a simulation model is available.

Recent work has suggested following the natural gradient direction [20, 21, 22]. The natural gradient approach is a parameterization-invariant second-order method which finds the direction that maximizes the ratio of the improvement of the objective function over the change in the distribution over trajectories. Our approach exploits a similar intuition through consideration of variance via the effective sample size (ESS), preferring regions for which the past experience gives a good estimate. Natural actor-critic (NAC) approaches have enjoyed substantial success on real-life robotics tasks [1, 23]. In the episodic setting, which we consider in this paper, the only difference between episodic NAC and natural gradient is in the estimate of the baseline: episodic NAC computes a scalar baseline by solving an LSTD-Q type regression rather than, e.g., using a minimum-variance baseline criterion.6

2 Estimating the baseline from the same data as the other terms in Equation (5.1) results in a biased estimator. This is often done in policy gradient methods and we do so in our experiments. It is, however, possible to retain an unbiased estimate by data splitting, which could include averaging over resamplings.
3 In practice, since we cannot always find the true optimum of Û within the ESS region, differences in step direction do affect which policies are sampled. Other step directions or policy improvement rules may be substituted for the finite-difference gradient step. For example, we could follow the natural gradient direction, or use an EM-based policy update [14].
4 Though the Armijo rule has its own free parameters to choose, performance is much less sensitive to these hyper-parameters. We use the same Armijo rule parameters for all of our experiments.
5 We can extend standard likelihood ratio policy gradient methods to use the importance-sampled expected return estimate. In our experience this approach yields results comparable to the best fixed hand-tuned step size for each problem, hence alleviating the need of these methods for tuning the step size.

Figure 2: (a) Performance of various choices for the higher-level baselines in our approach: a matrix baseline (MAT) and a recursive baseline (REC). For reference, we also plot our approach without an optimal baseline (GLO), GPOMDP (GP), and IS GPOMDP (ISGP). (b), (c) Performance evaluation on LQR and cartpole. The algorithms considered are the GPOMDP likelihood ratio policy gradient method (GP), GPOMDP with importance sampling (ISGP), Peshkin and Shelton's algorithm (PS), and our approach (OUR).

7 Experimental Setup

We present experiments on four testbeds: LQR, cartpole, mountaincar, and acrobot. The details of each experimental testbed can be found in the appendix. Though the systems are simulated, the learning algorithms cannot make use of the simulation dynamics except by gathering trials. For each testbed we randomly generated a pool of initial policies until one was found that does not achieve the worst-case return. We then used our policy gradient algorithms to optimize performance. The same set of initial policies is used across learning algorithms.
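The two quantities driving the method, the importance-sampled return estimate Û and the effective sample size that restricts the search region [13], can be sketched together. The function name and the per-trajectory log-probability inputs are our choices, not the paper's code:

```python
import numpy as np

def is_return_and_ess(returns, logp_new, logp_old):
    """Importance-sampled estimate of the expected return under a candidate
    policy, using trajectories sampled under an old policy, together with
    the effective sample size (ESS) of the importance weights [13]."""
    w = np.exp(logp_new - logp_old)        # per-trajectory importance weights
    u_hat = np.mean(w * returns)           # unnormalized IS estimate of U
    ess = np.sum(w) ** 2 / np.sum(w ** 2)  # ESS lies in [1, n]
    return u_hat, ess

# When the candidate and sampling policies coincide, all weights are 1,
# the estimate is the empirical mean return, and the ESS equals n.
u, ess = is_return_and_ess(np.array([1.0, 2.0, 3.0]), np.zeros(3), np.zeros(3))
# u == 2.0, ess == 3.0
```

As the candidate policy moves away from the policies that generated the data, the weights become uneven and the ESS drops, which is exactly the signal used to bound the search region.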
We focus on an analysis of performance when only a small number of trials is allowed: in each of the following experiments we run 50 iterations of policy search, running M trials for each policy at each iteration.

8 Experimental Results

In our experimental results, we first evaluate several generalized baselines in the context of our policy search algorithm. We then break down the effectiveness of each component of our algorithm: memory-based search, optimal baselines, and the ESS search region. Our policy search outperforms likelihood ratio methods on two of the testbeds and performs equally well on the two remaining ones. Performance is reported as the expected return (plotted on the y-axis) versus the number of sampled trials (plotted on the x-axis). Error bars are based on running each instance with 10 initial policies.

Generalized Baseline Experiments: There are a variety of choices in our generalized baseline technique: we can vary the dimensionality of the baseline terms to add, the depth of the recursive baseline, and what (if any) regularization to use. We implemented our policy search using three different baseline techniques: a vector baseline, a matrix baseline, and a recursive tensor baseline on top of the matrix baseline. Figure 2 (a) shows the average reward received plotted against the number of trials run for the matrix (MAT) and recursive tensor (REC) baselines. The vector baseline was not able to improve the initial policies. The matrix baseline outperforms the other baselines and we use it going forward.

Components of Our Approach: Figure 3 examines each of the central contributions of our algorithm: memory-based search, baselines, and the ESS search region. We tested our approach without any of the three components, which is equivalent to Peshkin and Shelton's algorithm [10] and which we label PS. We then added each of the three components individually (labeled PS+M, PS+B, PS+E), tested the performance with two out of three components (labeled OUR-M, OUR-B, and OUR-E, respectively), and finally tested our approach with all three components. The results indicate that each of the three components improves performance, with the ESS region and memory-based search being the most important. Without any one of the components, our approach has difficulty outperforming importance-sampled GPOMDP.

6 The difference in performance due to different estimation procedures for the scalar baseline has been observed to be so small that only one plot is shown rather than both in [1].

Figure 3: This figure demonstrates the effect of (a) memory-based search, (b) optimal baselines, and (c) the ESS search region on cartpole performance. In each figure, we show the performance of Peshkin and Shelton's approach (PS) and our approach (OUR). In addition, we show the performance with memory only (PS+M), baselines only (PS+B), and ESS only (PS+E), and our approach with memory (OUR-M), baselines (OUR-B), and ESS (OUR-E) removed. GPOMDP (GP) and IS GPOMDP (ISGP) are also plotted for reference purposes.

Comparison With Likelihood Ratio Policy Gradients: We have compared several episodic likelihood ratio algorithms against our global policy search algorithm. We run M = 10 trials per iteration and repeat each experiment 10 times. For the likelihood ratio algorithms, we use the appropriate optimal baselines [12] and hand-tune the step size. As a comparison, we have also implemented policy gradient algorithms which use importance sampling to estimate the gradient of Û. Figure 2 plots the reward received as a function of the number of real trials sampled from the system.
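The "appropriate optimal baselines" for likelihood-ratio gradients referenced above take the familiar minimum-variance form b = E[g²R]/E[g²], computed per parameter dimension; the matrix and recursive tensor baselines generalize this. A sketch of the scalar/vector case only, with function names of our own choosing:

```python
import numpy as np

def optimal_baseline(scores, returns):
    """Variance-minimizing baseline b_i = E[g_i^2 R] / E[g_i^2] for each
    parameter dimension i of a likelihood-ratio gradient estimator.
    scores: (n, d) per-trajectory score functions; returns: (n,) returns."""
    g2 = scores ** 2
    return (g2 * returns[:, None]).sum(axis=0) / g2.sum(axis=0)

def lr_gradient(scores, returns):
    """Baseline-corrected likelihood-ratio gradient estimate."""
    b = optimal_baseline(scores, returns)
    return (scores * (returns[:, None] - b)).mean(axis=0)

scores = np.array([[1.0], [2.0]])
returns = np.array([3.0, 6.0])
b = optimal_baseline(scores, returns)   # (1*3 + 4*6) / (1 + 4) = 5.4
g = lr_gradient(scores, returns)
```

Note that, as footnote 2 points out, estimating the baseline and the gradient from the same trajectories (as done here) yields a biased estimator unless the data is split.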
We plot our global search approach against GPOMDP, importance-sampled GPOMDP (IS GPOMDP), and an implementation of Peshkin and Shelton's global search.7 Our approach is consistently able to improve its initial policy, outperforming likelihood ratio policy gradient methods on both the cartpole and LQR testbeds. In general, importance-sampling-based methods outperform non-importance-sampling-based algorithms, which work poorly when given few trials. All algorithms under consideration performed poorly on the mountaincar and acrobot testbeds, with none of them showing significant improvement in performance through learning.

7 We do not plot REINFORCE as our experiments indicate that GPOMDP outperforms REINFORCE on these testbeds, a fact consistent with the existing literature [1].

9 Conclusion

We have shown that policy gradient methods are a special case of gradient descent over the importance-sampled expected return function Û. Since our approach provides a full approximation of the expected return function, we can use global information in addition to gradient information to achieve faster learning. We have also shown that optimal baselines for standard policy gradient methods can be seen as special cases of a more general variance reduction technique. Our importance sampling approach allows us to leverage more data to fit generalized baseline terms in our estimators. Our experiments show that our algorithm requires fewer trials than current policy gradient methods on several testbeds and no more trials on the remaining testbeds, making it appealing for robotic learning tasks for which trials are expensive.

Acknowledgments

The authors thank Jan Peters and Hamid Reza Maei for insightful discussions and the anonymous reviewers for their feedback. This work was supported in part by NSF under award IIS-0931463. Jie Tang is supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
Appendix

(i) LQR: We use the formulation given in [21]. We use a linearly parameterized policy with parameters K ∈ R², given by u(t) ∼ N(Lx(t), σ), with L = −1.999 + 1.998/(1 + e^{K1}) and σ = 0.001 + 1/(1 + e^{K2}).8 The initial state is drawn from x(0) ∼ N(0.3, 0.1), and the dynamics are given by x(t+1) = 0.7·x(t) + u(t) + N(0, 0.01). The system incurs a penalty of −(x(t)² + u(t)²) at each time step. Each episode was 20 time steps.

(ii) Cartpole: This task consists of a cart moving along a track while balancing a pole. The goal is to move the cartpole back to the origin as quickly as possible while keeping the pole upright. Following the formulation given in [24], our control input is drawn from the policy u ∼ N(K⊤x, σ), with state x = [x, ẋ, θ, θ̇] and policy parameters K = [K1, K2, K3, K4, σ]. The dynamics are given by

ẍ = (F − m_p l (θ̈ cos θ − θ̇² sin θ)) / (m_c + m_p),
θ̈ = (g sin θ (m_c + m_p) − (u_t + m_p l θ̇² sin θ) cos θ) / ((4/3) l (m_c + m_p) − m_p l cos² θ).

Here m_p = 0.1, m_c = 1.0, l = 0.5, g = 9.81. The control interval was 0.02 s. We solve the dynamics using a fourth-order Runge-Kutta method. We run each episode for 200 time steps, though the episode terminates once the cartpole has failed (defined as |x| > 2.4 m or |θ| > 0.7 rad). The reward function is −2 for every time step after a failure occurs, 0 if the cartpole is balanced and satisfies |x| < 0.05, and −1 otherwise.

(iii) Mountain Car: The mountain car testbed [25] models a simulated car which starts in a valley and must climb the hill to its right as quickly as possible. The task involves two states [x, ẋ] and three policy parameters [K1, K2, σ]. Our control inputs for this problem are restricted to {−1, 1}. Our parameterized policy is given by π(u_t = 1 | x_t, ẋ_t) = P(K1 sign(ẋ_t) ẋ_t² + K2 + ε_t < x_t), where ε_t ∼ N(0, σ). The initial acceleration is f_0 = +1, with f_{t+1} = u_t f_t. The dynamics are given by ẋ_{t+1} = ẋ_t + 0.001 f_t − 0.0025 cos(3(x_t − 0.5)) and x_{t+1} = x_t + ẋ_t. We run for 200 time steps, though the episode terminates once the mountain car reaches its target at x = 1.0. The reward function is 0 if the car is at its target and −1 otherwise.

(iv) Acrobot: The acrobot [25] is a robot with two rotational links connected by an actuated motor. It has four states [θ1, θ̇1, θ2, θ̇2] and parameters K = [K1, . . . , K8, σ]. The acrobot is initialized close to [π, 0, 0, 0] (pointing straight up), and the goal is to keep the acrobot balanced upright for as long as possible. Our control input is drawn from the policy u ∼ N(Lx + K⊤φ(x), σ). Here L is the optimal LQR controller for the acrobot linearized around the stationary point, and φ(x) = [(π − θ1)θ2, θ̇1θ̇2, (π − θ1)θ̇1, θ2θ̇2, (π − θ1)|π − θ1|, θ̇1|θ̇1|, θ2|θ2|, θ̇2|θ̇2|]. The dynamics are given by

θ̈1 = −(d2 θ̈2 + φ1)/d1,
θ̈2 = (u + (d2/d1) φ1 − φ2) / (m2 l_{c2}² + I2 − d2²/d1),
d1 = m1 l_{c1}² + m2 (l1² + l_{c2}² + 2 l1 l_{c2} cos θ2) + I1 + I2,
d2 = m2 (l_{c2}² + l1 l_{c2} cos θ2) + I2,
φ1 = −m2 l1 l_{c2} θ̇2² sin θ2 − 2 m2 l1 l_{c2} θ̇2 θ̇1 sin θ2 + (m1 l_{c1} + m2 l1) g cos(θ1 − π/2) + φ2,
φ2 = m2 l_{c2} g cos(θ1 + θ2 − π/2).

Here m1 = 1, m2 = 1, l1 = 1, l2 = 2, l_{c1} = 0.5, l_{c2} = 1, I1 = 0.0833, I2 = 0.33, g = 9.81. The control interval was 0.02 s. We solve the dynamics using a fourth-order Runge-Kutta method. Each episode is run for 400 time steps, though the episode terminates once the acrobot has failed (defined as the height of the second link, −cos(θ1) − cos(θ1 + θ2), dropping below 0.5). The reward function is −2 for every time step after a failure occurs, and −(1 − (−cos(θ1) − cos(θ1 + θ2))/2)² otherwise.

8 We followed standard formulations of the control policy for LQR and cartpole. All policies are designed as functions of a linear combination of the policy parameters and hand-selected features.

References

[1] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proceedings of the European Conference on Machine Learning (ECML), 2005.
[2] T. Mori, Y. Nakamura, M. Sato, and S. Ishii. Reinforcement learning for a CPG-driven biped robot. In AAAI, 2004.
[3] R. Tedrake, T. W. Zhang, and H. S. Seung. Learning to walk in 20 minutes. In Proceedings of the Fourteenth Yale Workshop on Adaptive and Learning Systems, 2005.
[4] N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2004.
[5] J. Zico Kolter and Andrew Y. Ng. Learning omnidirectional path following using dimensionality reduction. In RSS, 2007.
[6] P. Glynn. Likelihood ratio gradient estimation: an overview. In Proceedings of the 1987 Winter Simulation Conference, Atlanta, GA, 1987.
[7] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
[8] J. Baxter and P. Bartlett. Direct gradient-based reinforcement learning. Journal of Artificial Intelligence Research, 1999.
[9] E. Greensmith, P. Bartlett, and J. Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 2004.
[10] Leonid Peshkin and Christian R. Shelton. Learning from scarce experience. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002.
[11] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS 13, 2000.
[12] J. Peters and S. Schaal. Policy gradient methods for robotics. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 2006.
[13] A. Kong, J. S. Liu, and W. H. Wong. Sequential imputations and Bayesian missing data problems. Journal of the American Statistical Association, 89:278–288, 1994.
[14] Jens Kober and Jan Peters. Policy search for motor primitives in robotics. In NIPS, 2008.
[15] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 2004.
[16] Richard S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4), 1991.
[17] Xi-Ren Cao. A basic formula for on-line policy-gradient algorithms. IEEE Transactions on Automatic Control, 50:696–699, 2005.
[18] Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the International Conference on Machine Learning (ICML), pages 745–750, 2007.
[19] Andrew Ng and Michael Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 406–415, 2000.
[20] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10, 1998.
[21] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14, 2001.
[22] Nicolas Le Roux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. In NIPS, 2007.
[23] Jan Peters. Machine Learning of Motor Skills for Robotics. PhD thesis, University of Southern California, 2007.
[24] M. Riedmiller, J. Peters, and S. Schaal. Evaluation of policy gradient methods and variants on the cart-pole benchmark. In IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, 2007.
[25] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Functional Geometry Alignment and Localization of Brain Areas

Georg Langs, Polina Golland
Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
langs@csail.mit.edu, polina@csail.mit.edu

Yanmei Tie, Laura Rigolo, Alexandra J. Golby
Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
ytie@bwh.harvard.edu, lrigolo@bwh.harvard.edu, agolby@bwh.harvard.edu

Abstract

Matching functional brain regions across individuals is a challenging task, largely due to the variability in their location and extent. It is particularly difficult, but highly relevant, for patients with pathologies such as brain tumors, which can cause substantial reorganization of functional systems. In such cases spatial registration based on anatomical data is only of limited value if the goal is to establish correspondences of functional areas among different individuals, or to localize potentially displaced active regions. Rather than rely on spatial alignment, we propose to perform registration in an alternative space whose geometry is governed by the functional interaction patterns in the brain. We first embed each brain into a functional map that reflects connectivity patterns during a fMRI experiment. The resulting functional maps are then registered, and the obtained correspondences are propagated back to the two brains. In application to a language fMRI experiment, our preliminary results suggest that the proposed method yields improved functional correspondences across subjects. This advantage is pronounced for subjects with tumors that affect the language areas and thus cause spatial reorganization of the functional regions.

1 Introduction

Alignment of functional neuroanatomy across individuals forms the basis for the study of the functional organization of the brain.
It is important for localization of specific functional regions and characterization of functional systems in a population. Furthermore, the characterization of variability in the location of specific functional areas is itself informative of the mechanisms of brain formation and reorganization. In this paper we propose to align neuroanatomy based on the functional geometry of fMRI signals during specific cognitive processes. For each subject, we construct a map based on spectral embedding of the functional connectivity of the fMRI signals, and we register those maps to establish correspondence between functional areas in different subjects. Standard registration methods that match brain anatomy, such as the Talairach normalization [21] or non-rigid registration techniques like [10, 20], accurately match the anatomical structures across individuals. However, the variability of the functional locations relative to anatomy can be substantial [8, 22, 23], which limits the usefulness of such alignment in functional experiments. The relationship between anatomy and function becomes even less consistent in the presence of pathological changes in the brain caused by brain tumors, epilepsy, or other diseases [2, 3].

Figure 1: Standard anatomical registration and the proposed functional geometry alignment. Functional geometry alignment matches the diffusion maps of fMRI signals of two subjects.

Integrating functional features into the registration process promises to alleviate this challenge. Recently proposed methods match the centers of activated cortical areas [22, 26], or estimate dense correspondences of cortical surfaces [18].
The fMRI signals at the surface points serve as a feature vector, and registration is performed by maximizing the inter-subject fMRI correlation of matched points, while at the same time regularizing the surface warp to preserve cortical topology and penalizing cortex folding and metric distortion, similar to [9]. In [7], registration of a population of subjects is accomplished by using the functional connectivity pattern of cortical points as a descriptor of the cortical surface. The surface is warped so that the Frobenius norm of the difference between the connectivity matrices of the reference subject and the matched subject is minimized, while at the same time a topology-preserving deformation of the cortex is enforced. All the methods described above rely on a spatial reference frame for registration, and use the functional characteristics as a feature vector of individual cortical surface points or of the entire surface. This might have limitations in the case of severe pathological changes that cause a substantial reorganization of the functional structures. Examples include migration to the other hemisphere, changes in the topology of the functional maps, or substitution of the functional role played by a damaged region with another area. In contrast, our approach to functional registration does not rely on spatial consistency. Spectral embedding [27] represents data points in a map that reflects a large set of measured pairwise affinity values in a Euclidean space. Previously we used spectral methods to map voxels of a fMRI sequence into a space that captures joint functional characteristics of brain regions [14]. This approach represents the level of interaction by the density in the embedding. In [24], different embedding methods were compared in a study of parceled resting-state fMRI data. Functionally homogeneous units formed clusters in the embedding.
In [11], multidimensional scaling was employed to retrieve a low-dimensional representation of positron emission tomography (PET) signals after selecting sets of voxels by the standard activation detection technique [12]. Here we propose and demonstrate a functional registration method that operates in a space that reflects the functional connectivity patterns of the brain. In this space, the connectivity structure is captured by a structured distribution of points, or functional geometry. Each point in the distribution represents a location in the brain and the relation of its fMRI signal to signals at other locations. Fig. 1 illustrates the method. To register functional regions of two individuals, we first embed both fMRI volumes independently, and then obtain correspondences by matching the two point distributions in the functional geometry. We argue that such a representation offers a more natural view of the co-activation patterns than the spatial structure augmented with functional feature vectors. The functional geometry can handle long-range reorganizations and topological variability in the functional organization of different individuals. Furthermore, by translating connectivity strength into distances we are able to regularize the registration effectively: strong connections are preserved during registration by penalizing high frequencies in the map deformation field. The clinical goal of our work is to reliably localize language areas in tumor patients. The functional connectivity pattern for a specific area provides a refined representation of its activity that can augment the individual activation pattern. Our approach is to utilize connectivity information to improve localization of the functional areas in tumor patients. Our method transfers the connectivity patterns from healthy subjects to tumor patients. The transferred patterns then serve as a patient-specific prior for functional localization, improving the accuracy of detection.
The functional geometry we use is largely independent of the underlying anatomical organization. As a consequence, our method handles substantial changes in the spatial arrangement of the functional areas that typically present significant challenges for anatomical registration methods. Such functional priors promise to improve detection accuracy in the challenging case of language area localization. The mapping of healthy networks to patients provides additional evidence for the location of the language areas, promising to enhance the accuracy and robustness of localization. In addition to localization, studies of reconfiguration mechanisms in the presence of lesions aim to understand how specific sub-areas are redistributed (e.g., do they migrate to a compact area, or to other intact language areas?). While standard detection identifies the regions whose activation is correlated with the experimental protocol, we seek a more detailed description of the functional roles of the detected regions, based on the functional connectivity patterns. We evaluate the method on healthy control subjects and brain tumor patients who perform language mapping tasks. The language system is highly distributed across the cortex. Tumor growth sometimes causes a reorganization that sustains the language ability of the patient, even though the anatomy is severely changed. Our initial experimental results indicate that the proposed functional alignment outperforms anatomical registration in predicting activation in target subjects (both healthy controls and patients). Furthermore, functional alignment can handle substantial reorganizations and is much less affected by tumor presence than anatomical registration.

2 Embedding the brain in a functional geometry

We first review the representation of the functional geometry that captures the co-activation patterns in a diffusion map defined on the fMRI voxels [6, 14].
Given a fMRI sequence I ∈ R^{T×N} that contains N voxels, each characterized by an fMRI signal over T time points, we calculate a matrix C ∈ R^{N×N} that assigns each pair of voxels ⟨k, l⟩ with corresponding time courses I_k and I_l a non-negative symmetric weight

c(k, l) = e^{corr(I_k, I_l)/ε},   (1)

where corr is the correlation coefficient of the two signals I_k and I_l, and ε is the speed of weight decay. We define a graph whose vertices correspond to voxels and whose edge weights are determined by C. In practice, we discard all edges that have a weight below a chosen threshold if they connect nodes with a large distance in the anatomical space. This construction yields a sparse graph which is then transformed into a Markov chain. Note that, in contrast to methods like multidimensional scaling, this sparsity reflects the intuition that the meaningful information about the connectivity structure is encoded by the high correlation values. We transform the graph into a Markov chain on the set of nodes by the normalized graph Laplacian construction [5]. The degree of each node, g(k) = Σ_l c(k, l), is used to define the directed edge weights of the Markov chain as

p(k, l) = c(k, l) / g(k),   (2)

which can be interpreted as transition probabilities along the graph edges. This set of probabilities defines a diffusion operator Pf(x) = Σ_y p(x, y) f(y) on the graph vertices (voxels). The diffusion operator integrates all pairwise relations in the graph and defines a geometry on the entire set of fMRI signals. We embed the graph in a Euclidean space via an eigenvalue decomposition of P [6]. The eigenvalue decomposition of the operator P results in a sequence of decreasing eigenvalues λ_1, λ_2, . . . and corresponding eigenvectors Ψ_1, Ψ_2, . . . that satisfy PΨ_i = λ_i Ψ_i and constitute the so-called diffusion map:

Ψ_t ≜ ⟨λ_1^t Ψ_1, . . . , λ_w^t Ψ_w⟩,   (3)

where w ≤ T is the dimensionality of the representation, and t is a parameter that controls the scaling of the axes in this newly defined space.
Ψ_t(k) ∈ R^w is the representation of voxel k in the functional geometry; it comprises the kth components of the first w eigenvectors. We will refer to R^w as the functional space. The global structure of the functional connectivity is reflected in the point distribution Ψ_t. The axes of the eigenspace are the directions that capture the highest amount of structure in the connectivity landscape of the graph. This functional geometry is governed by the diffusion distance D_t on the graph: D_t(k, l) is defined through the probability of traveling between two vertices k and l by taking all paths of at most t steps into account. The transition probabilities are based on the functional connectivity of pairs of nodes. Thus the diffusion distance integrates the connectivity values over all possible paths that connect two points and defines a geometry that captures the entirety of the connectivity structure. It corresponds to the operator P^t parameterized by the diffusion time t:

D_t(k, l)² = Σ_{i=1,...,N} (p_t(k, i) − p_t(l, i))² / π(i),  where π(i) = g(i) / Σ_u g(u).   (4)

The distance D_t is low if there is a large number of paths of length t with high transition probabilities between the nodes k and l. The diffusion distance corresponds to the Euclidean distance in the embedding space: ∥Ψ_t(k) − Ψ_t(l)∥ = D_t(k, l). The functional relations between fMRI signals are thus translated into spatial distances in the functional geometry [14].

Figure 2: Maps of two subjects in the process of registration. (a) Left and right: the axial and sagittal views of the points in the two brains. The two central columns show plots of the first three dimensions of the embedding in the functional geometry after coarse rotational alignment. (b) During alignment, a map is represented as a Gaussian mixture model. The colors in both plots indicate clusters which are used only for visualization.
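Equations (1)–(3) translate directly into code; a dense minimal sketch follows. The function name and parameter defaults are ours; a real implementation would sparsify the graph and use sparse eigensolvers at fMRI scale, as the construction above notes:

```python
import numpy as np

def diffusion_map(signals, eps=0.5, w=3, t=1):
    """Embed voxels into the functional geometry of Eqs. (1)-(3).
    signals: (T, N) array of fMRI time courses, one column per voxel."""
    C = np.exp(np.corrcoef(signals.T) / eps)  # Eq. (1): c(k,l) = exp(corr/eps)
    P = C / C.sum(axis=1, keepdims=True)      # Eq. (2): p(k,l) = c(k,l)/g(k)
    vals, vecs = np.linalg.eig(P)             # P Psi_i = lambda_i Psi_i
    order = np.argsort(-vals.real)            # decreasing eigenvalues
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial eigenpair (lambda = 1, constant eigenvector) and
    # scale each remaining coordinate by lambda^t, as in Eq. (3).
    return vecs[:, 1:w + 1] * vals[1:w + 1] ** t

rng = np.random.default_rng(0)
emb = diffusion_map(rng.normal(size=(30, 10)))  # 10 voxels, 30 time points
```

Euclidean distances between rows of `emb` then approximate the diffusion distances of Eq. (4), truncated to the leading w non-trivial eigenpairs.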
This particular embedding method is closely related to other spectral embedding approaches [17]; the parameter t controls the range of graph nodes that influence a certain local configuration. To simplify notation, we assume the diffusion time t is fixed in the remainder of the paper and omit it from the equations. The resulting maps are the basis for the functional registration of the fMRI volumes.

3 Functional geometry alignment

Let Ψ0 and Ψ1 be the functional maps of two subjects. Ψ0 and Ψ1 are point clouds embedded in a w-dimensional Euclidean space. The points in the maps correspond to voxels, and registration of the maps establishes correspondences between brain regions of the subjects. Our goal is to estimate correspondences of points in the two maps based on the structure in the two distributions determined by the functional connectivity structure in the data. We perform registration in the functional space by non-rigidly deforming the distributions until their overlap is maximized. At the same time, we regularize the deformation so that high-frequency displacements of individual points, which would correspond to a change in the relations with strong connectivity, are penalized.

We note that the embedding is defined only up to rotation, order, and sign of the individual coordinate axes. For successful alignment it is essential that the embedding is consistent between the subjects, and we have to match these nuisance parameters of the embedding during alignment. In [19], a greedy method for sign matching was proposed. In our data the following procedure produces satisfactory results: when computing the embedding, we set the sign of each individual coordinate axis j so that mean({Ψ_j(k)}_k) − median({Ψ_j(k)}_k) > 0, for all j = 1, . . . , w. Since the distributions typically have a long tail and are centered at the origin, this step disambiguates the coordinate axis directions well. Fig. 2 illustrates the level of consistency of the maps across two subjects.
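The sign-disambiguation rule just described (flip each axis whose mean minus median is negative) is a one-liner; a sketch with our own function name:

```python
import numpy as np

def fix_axis_signs(psi):
    """Flip each embedding axis so that mean - median > 0 along it, as in
    the sign-disambiguation step described above. psi: (N, w) coordinates."""
    flip = (psi.mean(axis=0) - np.median(psi, axis=0)) < 0
    return psi * np.where(flip, -1.0, 1.0)

# A long-tailed axis and its negation are mapped to the same orientation.
col = np.array([0.0, 0.0, 0.0, 10.0])
fixed = fix_axis_signs(np.stack([col, -col], axis=1))
```

This relies on the long-tailed, origin-centered shape of the coordinate distributions noted above; for symmetric distributions the criterion would be unstable.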
Fig. 2 shows the first three dimensions of the maps for two different control subjects. The colors illustrate clusters in the map and their corresponding positions in the brain; for illustration purposes, the colors are matched based on the spatial location of the clusters. The two maps indicate that there is some degree of consistency of the mappings across subjects. Eigenvectors may switch if the corresponding eigenvalues are similar [13]. We initialize the registration using Procrustes analysis [4] so that the distance between a randomly chosen subset of vertices from the same anatomical part of the brain is minimized in functional space. This typically resolves the ambiguity of the embedding with respect to rotation and the order of eigenvectors in the functional space. We employ the Coherent Point Drift algorithm for the subsequent non-linear registration of the functional maps [16]. We consider the points in Ψ0 to be centroids of a Gaussian mixture model that is fitted to the points in Ψ1 to minimize the energy

E(χ) = −Σ_{k=1}^{N0} log Σ_{l=1}^{N1} exp(−∥x_0^k − χ(x_1^l)∥² / (2σ²)) + (λ/2) φ(χ),   (5)

where x_0^k and x_1^l are the points in the maps Ψ0 and Ψ1 during matching, and φ is a function that regularizes the deformation χ of the point set. The minimization of E(χ) involves a trade-off between its two terms, controlled by λ. The first term is a Gaussian kernel that generates a continuous distribution for the entire map Ψ0 in the functional space R^w. By deforming x_0^k we increase the likelihood of the points in Ψ1 with respect to the distribution defined by x_0^k. At the same time, φ(χ) encourages a smooth deformation field by penalizing high-frequency local deformations via, e.g., a radial basis function [15]. The first term in Eq. 5 moves the two point distributions so that their overlap is maximized: regions that exhibit similar global connectivity characteristics are moved closer to each other.
The regularization term induces a high penalty on changing strong functional connectivity relationships among voxels (which correspond to small distances or clusters in the map). At the same time, the regularization allows more changes between regions with weak connectivity (which correspond to large distances). In other words, it preserves the connectivity structure of strong networks, while being flexible with respect to weak connectivity between distant clusters. Once the registration of the two distributions in the functional geometry is completed, we assign correspondences between points in Ψ0 and Ψ1 by a simple matching algorithm that for any point in one map chooses the closest point in the other map. 4 Validation of Alignment To validate the functional registration quantitatively we align pairs of subjects via (i) the proposed functional geometry alignment, and (ii) the anatomical non-rigid demons registration [25, 28]. We restrict the functional evaluation to the grey matter. Functional geometry embedding is performed on a random sampling of 8000 points excluding those that exhibit no activation (with a liberal threshold of p = 0.15 in the General Linear Model (GLM) analysis [12]). After alignment we evaluate the quality of the fit by (1) the accuracy of predicting the location of the active areas in the target subject and (2) the inter-subject correlation of BOLD signals after alignment. The first criterion is directly related to the clinical aim of localization of active areas. A. Predictive power: We evaluate if it is possible to establish correspondences, so that the activation in one subject lets us predict the activation in another subject after alignment. 
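The final correspondence step, picking for each point in one map the closest point in the other, is a brute-force nearest-neighbor match. A sketch (for maps with thousands of points a KD-tree would be preferable; function and variable names are our own):

```python
import numpy as np

def match_correspondences(psi0, psi1):
    """For every point in psi0 pick the index of the closest point in psi1,
    and vice versa, as in the simple matching step after registration."""
    d2 = ((psi0[:, None, :] - psi1[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1), d2.argmin(axis=0)
```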
That is, we examine whether the correspondences identify regions that exhibit a relationship with the task (in our experiment, a language task) even if they are ambiguous or not detected based on the standard single-subject GLM analysis. That is, can we transfer evidence for the activation of specific regions between subjects, for example between healthy controls and tumor patients? Figure 3: Mapping a region by functional geometry alignment: a reference subject (first column) aligned to a tumor patient (second and third columns, the tumor is shown in blue). The green region in the healthy subject is mapped to the red region by the proposed functional registration and to the yellow region by anatomical registration. Note that the functional alignment places the region narrowly around the tumor location, while the anatomical registration result intersects with the tumor. The slice of the anatomical scan with the tumor and the zoomed visualization of the registration results (fourth column) are also shown. In the following we refer to regions detected by the standard single-subject fMRI analysis with an activation threshold of p = 0.05 (false discovery rate (FDR) corrected [1]) as above-threshold regions. We validate the accuracy of localizing activated regions in a target volume by measuring the average correlation of the t-maps (based on the standard GLM) between the source and the corresponding target regions after registration. A t-map indicates activation, i.e., a significant correlation with the task the subject is performing during fMRI acquisition, for each voxel in the fMRI volume.
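This first criterion reduces to a Pearson correlation between source t-values and the t-values at their matched target locations. A minimal sketch (the function and array names are our own, not the paper's):

```python
import numpy as np

def tmap_correlation(t_source, t_target, correspondences):
    """Pearson correlation between source-region t-values and the t-values
    at the corresponding target locations after registration.
    `correspondences[k]` is the target index matched to source voxel k."""
    matched = t_target[correspondences]
    return float(np.corrcoef(t_source, matched)[0, 1])
```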
A high inter-subject correlation of the t-maps indicates that the aligned source t-maps are highly predictive of the t-map in the target fMRI data. Additionally, we measure the overlap between regions in the target image to which the above-threshold source regions are mapped, and the above-threshold regions in the target image. Note that for the registration itself neither the inter-subject correlation of fMRI signals nor the correlation of t-maps is used. In other words, although we enforce homology in the pattern of correlations between two subjects, the correlations across subjects per se are not matched. B. Correlation of BOLD signal across subjects: To assess the relationship between the source and registered target regions relative to the fMRI activation, we measure the correlation between the fMRI signals in the above-threshold regions of the source volume and the fMRI signals at the corresponding locations in the target volume. Across-subject correlation of the fMRI signals indicates a relationship between the underlying functional processes. We are interested in two specific scenarios: (i) above-threshold regions in the target image that were matched to above-threshold regions in the source image, and (ii) below-threshold regions in the target image that were matched to above-threshold regions in the source image. This second group includes candidates for activation, even though they do not pass the detection threshold in the particular volume. We do not expect correlation of signals for non-activated regions. 5 Experimental Results We demonstrate the method on a set of 6 control subjects and 3 patients with low-grade tumors in one of the regions associated with language processing. For all 9 subjects fMRI data was acquired
using a 3T GE Signa system (TR = 2 s, TE = 40 ms, flip angle = 90°, slice gap = 0 mm, FOV = 25.6 cm, dimension 128 × 128 × 27 voxels, voxel size of 2 × 2 × 4 mm³). The language task (antonym generation) block design was 5 min 10 s, starting with a 10 s pre-stimulus period. Eight task and seven rest blocks, each 20 s long, alternated in the design. For each subject, anatomical T1 MRI data was acquired and registered to the functional data. We perform pair-wise registration in all 36 image pairs, 21 of which include at least one patient. Fig. 3 illustrates the effect of a tumor in a language region, and the corresponding registration results. An area of the brain associated with language is registered from a control subject to a tumor patient. The location of the tumor is shown in blue; the regions resulting from functional and anatomical registration are indicated in red (FGA) and yellow (AR), respectively. While anatomical registration creates a large overlap between the mapped region and the tumor, functional geometry alignment maps the region to a plausible area narrowly surrounding the tumor. Figure 4: Validation: A. Correlation distribution of corresponding t-values after functional geometry alignment (FGA) and anatomical registration (AR) for control-control and control-tumor matches. B. Correlation of the BOLD signals for activated regions mapped to activated regions (left) and activated regions mapped to sub-threshold regions (right). Fig. 4 reports quantitative comparison of functional alignment vs. anatomical registration for the entire set of subjects. Functional geometry alignment achieves significantly higher correlation of t-values than anatomical registration (0.14 vs.
0.07, $p < 10^{-17}$, paired t-test, all image pairs). Anatomical registration performance drops significantly when registering a control subject and a tumor patient, compared to a pair of control subjects (0.08 vs. 0.06, p = 0.007). For functional geometry alignment this drop is not significant (0.15 vs. 0.14, p = 0.17). Functional geometry alignment predicts 50% of the above-threshold regions in the target brain, while anatomical registration predicts 29% (Fig. 4 (A)). These findings indicate that the functional alignment of language regions among source and target subjects is less affected by the presence of a tumor and the associated reorganization than the matching of functional regions by anatomical registration. Furthermore, the functional alignment has better predictive power for the activated regions in the target subject for both control-control and control-patient pairs. In our experiments this predictive power is affected only to a small degree by the presence of a tumor in the target. In contrast, and as expected, the matching of functional regions by anatomical alignment is affected by the tumor. Activated source regions mapped to a target subject exhibit the following characteristics. If both the source region and the corresponding target region are above-threshold, the average correlation between the source and target signals is significantly higher for functional geometry alignment (0.108 vs. 0.097, p = 0.004, paired t-test). For above-threshold regions mapped to below-threshold regions the same significant difference exists (0.020 vs. 0.016, p = 0.003), but the correlations are significantly lower. This significant difference between functional geometry alignment and anatomical registration vanishes for regions mapped from below-threshold regions in the source subject. The baseline of below-threshold region pairs exhibits very low correlation (∼0.003) and no difference between the two methods.
The fMRI signal correlation in the source and the target region is higher for functional alignment if the source region is activated. This suggests that even if the target region does not exhibit task-specific behavior detectable by standard analysis, its fMRI signal still correlates with the activated source fMRI signal to a higher degree than non-activated region pairs. The functional connectivity structure is sufficiently consistent to support an alignment of the functional geometry between subjects. It identifies correspondences between regions even if their individual relationship to the task is ambiguous. We demonstrate that our alignment improves inter-subject correlation for activated source regions and their target regions, but not for the non-active source regions. This suggests that we enable localization of regions that would not be detected by standard analysis, but whose activations are similar to the source regions in the normal subjects. 6 Conclusion In this paper we propose and demonstrate a method for registering neuroanatomy based on the functional geometry of fMRI signals. The method offers an alternative to anatomical registration; it relies on matching a spectral embedding of the functional connectivity patterns of two fMRI volumes. Initial results indicate that the structure in the diffusion map that reflects functional connectivity enables accurate matching of functional regions. When used to predict the activation in a target fMRI volume, the proposed functional registration exhibits higher predictive power than the anatomical registration. Moreover, it is more robust to pathologies and the associated changes in the spatial organization of functional areas. The method offers advantages for the localization of activated but displaced regions in cases where tumor-induced changes of the hemodynamics make direct localization difficult. Functional alignment contributes evidence from healthy control subjects.
Further research is necessary to evaluate the predictive power of the method for localization of specific functional areas. Acknowledgements This work was funded in part by the NSF IIS/CRCNS 0904625 grant, the NSF CAREER 0642971 grant, the NIH NCRR NAC P41-RR13218, NIH NIBIB NAMIC U54EB005149, NIH U41RR019703, and NIH P01CA067165 grants, the Brain Science Foundation, and the Klarman Family Foundation.
References
[1] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), pages 289–300, 1995.
[2] S.B. Bonelli, R.H.W. Powell, M. Yogarajah, R.S. Samson, M.R. Symms, P.J. Thompson, M.J. Koepp, and J.S. Duncan. Imaging memory in temporal lobe epilepsy: predicting the effects of temporal lobe resection. Brain, 2010.
[3] S. Bookheimer. Pre-surgical language mapping with functional magnetic resonance imaging. Neuropsychology Review, 17(2):145–155, 2007.
[4] F.L. Bookstein. Two shape metrics for biomedical outline data: Bending energy, Procrustes distance, and the biometrical modeling of shape phenomena. In Proceedings International Conference on Shape Modeling and Applications, pages 110–120, 1997.
[5] Fan R.K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[6] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21:5–30, 2006.
[7] Bryan Conroy, Ben Singer, James Haxby, and Peter Ramadge. fMRI-based inter-subject cortical alignment using functional connectivity. In Advances in Neural Information Processing Systems, pages 378–386, 2009.
[8] E. Fedorenko and N. Kanwisher. Neuroimaging of language: Why hasn't a clearer picture emerged? Language and Linguistics Compass, 3(4):839–865, 2009.
[9] B. Fischl, M.I. Sereno, and A.M. Dale. Cortical surface-based analysis II: Inflation, flattening, and a surface-based coordinate system. Neuroimage, 9(2):195–207, 1999.
[10] B. Fischl, M.I. Sereno, R.B.H. Tootell, and A.M. Dale. High-resolution intersubject averaging and a coordinate system for the cortical surface. Human Brain Mapping, 8(4):272–284, 1999.
[11] K.J. Friston, C.D. Frith, P. Fletcher, P.F. Liddle, and R.S.J. Frackowiak. Functional topography: multidimensional scaling and functional connectivity in the brain. Cerebral Cortex, 6(2):156, 1996.
[12] K.J. Friston, A.P. Holmes, K.J. Worsley, J.B. Poline, C.D. Frith, R.S.J. Frackowiak, et al. Statistical parametric maps in functional imaging: a general linear approach. Human Brain Mapping, 2(4):189–210, 1995.
[13] V. Jain and H. Zhang. Robust 3D shape correspondence in the spectral domain. In IEEE International Conference on Shape Modeling and Applications (SMI 2006), page 19, 2006.
[14] Georg Langs, Dimitris Samaras, Nikos Paragios, Jean Honorio, Nelly Alia-Klein, Dardo Tomasi, Nora D. Volkow, and Rita Z. Goldstein. Task-specific functional brain geometry from model maps. In Proceedings of MICCAI, volume 11, pages 925–933, 2008.
[15] A. Myronenko and X. Song. Point set registration: Coherent point drift. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[16] A. Myronenko, X. Song, and M.A. Carreira-Perpiñán. Non-rigid point set registration: Coherent Point Drift. Advances in Neural Information Processing Systems, 19:1009, 2007.
[17] H.J. Qiu and E.R. Hancock. Clustering and embedding using commute times. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1873–1890, 2007.
[18] M.R. Sabuncu, B.D. Singer, B. Conroy, R.E. Bryan, P.J. Ramadge, and J.V. Haxby. Function-based intersubject alignment of human cortical anatomy. Cerebral Cortex, 20(1):130–140, 2010.
[19] L.S. Shapiro and J. Michael Brady. Feature-based correspondence: an eigenvector approach. Image and Vision Computing, 10(5):283–288, 1992.
[20] D. Shen and C. Davatzikos. HAMMER: hierarchical attribute matching mechanism for elastic registration. IEEE Transactions on Medical Imaging, 21(11):1421–1439, 2002.
[21] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain. Thieme, New York, 1988.
[22] B. Thirion, G. Flandin, P. Pinel, A. Roche, P. Ciuciu, and J.B. Poline. Dealing with the shortcomings of spatial normalization: Multi-subject parcellation of fMRI datasets. Human Brain Mapping, 27(8):678–693, 2006.
[23] B. Thirion, P. Pinel, S. Mériaux, A. Roche, S. Dehaene, and J.B. Poline. Analysis of a large fMRI cohort: Statistical and methodological issues for group analyses. Neuroimage, 35(1):105–120, 2007.
[24] Bertrand Thirion, Silke Dodel, and Jean-Baptiste Poline. Detection of signal synchronizations in resting-state fMRI datasets. Neuroimage, 29(1):321–327, 2006.
[25] J.P. Thirion. Image matching as a diffusion process: an analogy with Maxwell's demons. Medical Image Analysis, 2(3):243–260, 1998.
[26] D.C. Van Essen, H.A. Drury, J. Dickson, J. Harwell, D. Hanlon, and C.H. Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association, 8(5):443, 2001.
[27] U. Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[28] H. Wang, L. Dong, J. O'Daniel, R. Mohan, A.S. Garden, K.K. Ang, D.A. Kuban, M. Bonnen, J.Y. Chang, and R. Cheung. Validation of an accelerated 'demons' algorithm for deformable image registration in radiation therapy. Physics in Medicine and Biology, 50(12):2887–2906, 2005.
Multi-View Active Learning in the Non-Realizable Case Wei Wang and Zhi-Hua Zhou National Key Laboratory for Novel Software Technology Nanjing University, Nanjing 210093, China {wangw,zhouzh}@lamda.nju.edu.cn Abstract The sample complexity of active learning under the realizability assumption has been well-studied. The realizability assumption, however, rarely holds in practice. In this paper, we theoretically characterize the sample complexity of active learning in the non-realizable case under the multi-view setting. We prove that, with unbounded Tsybakov noise, the sample complexity of multi-view active learning can be $\tilde{O}(\log \frac{1}{\epsilon})$, in contrast to the single-view setting, where polynomial improvement is the best possible achievement. We also prove that in the general multi-view setting the sample complexity of active learning with unbounded Tsybakov noise is $\tilde{O}(\frac{1}{\epsilon})$, where the order of $1/\epsilon$ is independent of the parameter in the Tsybakov noise, in contrast to previous polynomial bounds, where the order of $1/\epsilon$ is related to the parameter in the Tsybakov noise. 1 Introduction In active learning [10, 13, 16], the learner draws unlabeled data from the unknown distribution defined on the learning task and actively queries some labels from an oracle. In this way, the active learner can achieve good performance with much fewer labels than passive learning. The number of these queried labels, which is necessary and sufficient for obtaining a good learner, is well-known as the sample complexity of active learning. Many theoretical bounds on the sample complexity of active learning have been derived based on the realizability assumption (i.e., there exists a hypothesis perfectly separating the data in the hypothesis class) [4, 5, 11, 12, 14, 16]. The realizability assumption, however, rarely holds in practice.
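To get a rough sense of scale for the rates discussed in this paper, the following toy comparison sets all constants to 1; it is our own back-of-the-envelope illustration, not the paper's bounds:

```python
import math

def labels_needed(eps):
    """Illustrative label counts (constants set to 1) for three rates:
    O(log 1/eps) for multi-view active learning under the non-degradation
    condition, O(1/eps) for the general multi-view case, and O(1/eps^2)
    for passive learning / agnostic single-view active learning."""
    return {
        "multi-view, non-degradation": math.ceil(math.log(1.0 / eps)),
        "multi-view, general": math.ceil(1.0 / eps),
        "passive": math.ceil(1.0 / eps ** 2),
    }
```

At a target excess error of 1%, the three regimes differ by orders of magnitude, which is why the exponential improvement of Theorem 1 below matters.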
Recently, the sample complexity of active learning in the non-realizable case (i.e., the data cannot be perfectly separated by any hypothesis in the hypothesis class because of noise) has been studied [2, 13, 17]. It is worth noting that these bounds obtained in the non-realizable case match the lower bound $\Omega(\frac{\eta^2}{\epsilon^2})$ [19], of the same order as the upper bound $O(\frac{1}{\epsilon^2})$ of passive learning ($\eta$ denotes the generalization error rate of the optimal classifier in the hypothesis class and $\epsilon$ bounds how close to the optimal classifier in the hypothesis class the active learner has to get). This suggests that perhaps active learning in the non-realizable case is not as efficient as that in the realizable case. To improve the sample complexity of active learning in the non-realizable case remarkably, the model of the noise or some assumptions on the hypothesis class and the data distribution must be considered. The Tsybakov noise model [21] is more and more popular in theoretical analyses of the sample complexity of active learning. However, an existing result [8] shows that obtaining exponential improvement in the sample complexity of active learning with unbounded Tsybakov noise is hard. Inspired by [23], which proved that the multi-view setting [6] can help improve the sample complexity of active learning in the realizable case remarkably, we have the insight that the multi-view setting will also help active learning in the non-realizable case. In this paper, we present the first analysis on the sample complexity of active learning in the non-realizable case under the multi-view setting, where the non-realizability is caused by Tsybakov noise. Specifically:
- We define α-expansion, which extends the definitions in [3] and [23] to the non-realizable case, and the β-condition for the multi-view setting.
- We prove that the sample complexity of active learning with Tsybakov noise under the multi-view setting can be improved to $\tilde{O}(\log \frac{1}{\epsilon})$ when the learner satisfies the non-degradation condition.$^1$ This exponential improvement holds no matter whether the Tsybakov noise is bounded or not, in contrast to the single-view setting, where polynomial improvement is the best possible achievement for active learning with unbounded Tsybakov noise.
- We also prove that, when the non-degradation condition does not hold, the sample complexity of active learning with unbounded Tsybakov noise under the multi-view setting is $\tilde{O}(\frac{1}{\epsilon})$, where the order of $1/\epsilon$ is independent of the parameter in the Tsybakov noise, i.e., the sample complexity is always $\tilde{O}(\frac{1}{\epsilon})$ no matter how large the unbounded Tsybakov noise is. In previous polynomial bounds, by contrast, the order of $1/\epsilon$ is related to the parameter in the Tsybakov noise and is larger than 1 when the unbounded Tsybakov noise exceeds some degree (see Section 2). This discloses that, when the non-degradation condition does not hold, the multi-view setting is still able to lead to a faster convergence rate, and our polynomial improvement in the sample complexity is better than previous polynomial bounds when the unbounded Tsybakov noise is large.
The rest of this paper is organized as follows. After introducing related work in Section 2 and preliminaries in Section 3, we define α-expansion in the non-realizable case in Section 4. We analyze the sample complexity of active learning with Tsybakov noise under the multi-view setting with and without the non-degradation condition in Section 5 and Section 6, respectively. Finally, we conclude the paper in Section 7. 2 Related Work Generally, the non-realizability of a learning task is caused by the presence of noise. For learning a task with arbitrary forms of noise, Balcan et al.
[2] proposed the agnostic active learning algorithm $A^2$ and proved that its sample complexity is $\hat{O}(\frac{\eta^2}{\epsilon^2})$.$^2$ Hoping to get a tighter bound on the sample complexity of the algorithm $A^2$, Hanneke [17] defined the disagreement coefficient θ, which depends on the hypothesis class and the data distribution, and proved that the sample complexity of the algorithm $A^2$ is $\hat{O}(\theta^2 \frac{\eta^2}{\epsilon^2})$. Later, Dasgupta et al. [13] developed a general agnostic active learning algorithm which extends the scheme in [10] and proved that its sample complexity is $\hat{O}(\theta \frac{\eta^2}{\epsilon^2})$. Recently, the popular Tsybakov noise model [21] was considered in theoretical analyses of active learning, and there have been some bounds on the sample complexity. For some simple cases, where the Tsybakov noise is bounded, it has been proved that exponential improvement in the sample complexity is possible [4, 7, 18]. As for the situation where the Tsybakov noise is unbounded, only polynomial improvement in the sample complexity has been obtained. Balcan et al. [4] assumed that the samples are drawn uniformly from the unit ball in $\mathbb{R}^d$ and proved that the sample complexity of active learning with unbounded Tsybakov noise is $O(\epsilon^{-\frac{2}{1+\lambda}})$ ($\lambda > 0$ depends on the Tsybakov noise). This uniform distribution assumption, however, rarely holds in practice. Castro and Nowak [8] showed that the sample complexity of active learning with unbounded Tsybakov noise is $\hat{O}(\epsilon^{-\frac{2\mu\omega + d - 2\omega - 1}{\mu\omega}})$ ($\mu > 1$ depends on another form of Tsybakov noise, $\omega \ge 1$ depends on the Hölder smoothness, and d is the dimension of the data). This result is also based on the strong uniform distribution assumption. Cavallanti et al. [9] assumed that the labels of examples are generated according to a simple linear noise model and indicated that the sample complexity of active learning with unbounded Tsybakov noise is $O(\epsilon^{-\frac{2(3+\lambda)}{(1+\lambda)(2+\lambda)}})$.
Hanneke [18] proved that the algorithms in [2] and [13], or variants thereof, can achieve the polynomial sample complexity $\hat{O}(\epsilon^{-\frac{2}{1+\lambda}})$ for active learning with unbounded Tsybakov noise. For active learning with unbounded Tsybakov noise, Castro and Nowak [8] also proved that at least $\Omega(\epsilon^{-\rho})$ labels are requested to learn an $\epsilon$-approximation of the optimal classifier ($\rho \in (0, 2)$ depends on the Tsybakov noise). $^1$The $\tilde{O}$ notation is used to hide the factor $\log\log(\frac{1}{\epsilon})$. $^2$The $\hat{O}$ notation is used to hide the factor $\mathrm{polylog}(\frac{1}{\epsilon})$. This result shows that polynomial improvement is the best possible achievement for active learning with unbounded Tsybakov noise in the single-view setting. Wang [22] introduced a smoothness assumption to active learning with approximate Tsybakov noise and proved that if the classification boundary and the underlying distribution are smooth to the ξ-th order and ξ > d, the sample complexity of active learning is $\hat{O}(\epsilon^{-\frac{2d}{\xi+d}})$; if the boundary and the distribution are infinitely smooth, the sample complexity of active learning is $O(\mathrm{polylog}(\frac{1}{\epsilon}))$. Nevertheless, this result is for approximate Tsybakov noise, and the assumption of a large (or infinite) smoothness order rarely holds for data with high dimension d in practice. 3 Preliminaries In the multi-view setting, the instances are described with several different disjoint sets of features. For the sake of simplicity, we only consider the two-view setting in this paper. Suppose that $X = X_1 \times X_2$ is the instance space, $X_1$ and $X_2$ are the two views, $Y = \{0, 1\}$ is the label space, and D is the distribution over $X \times Y$. Suppose that $c = (c_1, c_2)$ is the optimal Bayes classifier, where $c_1$ and $c_2$ are the optimal Bayes classifiers in the two views, respectively. Let $H_1$ and $H_2$ be the hypothesis classes in the two views and suppose that $c_1 \in H_1$ and $c_2 \in H_2$. For any instance $x = (x_1, x_2)$, the hypothesis $h_v \in H_v$ ($v = 1, 2$) predicts $h_v(x_v) = 1$ if $x_v \in S_v$ and $h_v(x_v) = 0$ otherwise, where $S_v$ is a subset of $X_v$.
In this way, any hypothesis $h_v \in H_v$ corresponds to a subset $S_v$ of $X_v$ (as for how to combine the hypotheses in the two views, see Section 5). Considering that $x_1$ and $x_2$ denote the same instance x in different views, we overload $S_v$ to denote the instance set $\{x = (x_1, x_2) : x_v \in S_v\}$ without confusion. Let $S_v^*$ correspond to the optimal Bayes classifier $c_v$. It is well-known [15] that $S_v^* = \{x_v : \varphi_v(x_v) \ge \frac{1}{2}\}$, where $\varphi_v(x_v) = P(y = 1 \mid x_v)$. Here, we also overload $S_v^*$ to denote the instance set $\{x = (x_1, x_2) : x_v \in S_v^*\}$. The error rate of a hypothesis $S_v$ under the distribution D is $R(h_v) = R(S_v) = \Pr_{(x_1,x_2,y)\in D}(y \neq I(x_v \in S_v))$. In general, $R(S_v^*) \neq 0$, and the excess error of $S_v$ can be denoted as follows, where $S_v \Delta S_v^* = (S_v - S_v^*) \cup (S_v^* - S_v)$ and $d(S_v, S_v^*)$ is a pseudo-distance between the sets $S_v$ and $S_v^*$:
$R(S_v) - R(S_v^*) = \int_{S_v \Delta S_v^*} |2\varphi_v(x_v) - 1| \, p_{x_v} \, dx_v \triangleq d(S_v, S_v^*)$ (1)
Let $\eta_v$ denote the error rate of the optimal Bayes classifier $c_v$, which is also called the noise rate in the non-realizable case. In general, $\eta_v$ is less than $\frac{1}{2}$. In order to model the noise, we assume that the data distribution and the Bayes decision boundary in each view satisfy the popular Tsybakov noise condition [21] that $\Pr_{x_v \in X_v}(|\varphi_v(x_v) - 1/2| \le t) \le C_0 t^{\lambda}$ for some finite $C_0 > 0$, $\lambda > 0$ and all $0 < t \le 1/2$, where $\lambda = \infty$ corresponds to the best learning situation and the noise is called bounded [8], while $\lambda = 0$ corresponds to the worst situation. When $\lambda < \infty$, the noise is called unbounded [8]. According to Proposition 1 in [21], it is easy to see that (2) holds:
$d(S_v, S_v^*) \ge C_1 d_\Delta^{k}(S_v, S_v^*)$ (2)
Here $k = \frac{1+\lambda}{\lambda}$, $C_1 = 2C_0^{-1/\lambda}\lambda(\lambda+1)^{-1-1/\lambda}$, and $d_\Delta(S_v, S_v^*) = \Pr(S_v - S_v^*) + \Pr(S_v^* - S_v)$ is also a pseudo-distance between the sets $S_v$ and $S_v^*$, with $d(S_v, S_v^*) \le d_\Delta(S_v, S_v^*) \le 1$. We will use the following lemma [1], which gives the standard sample complexity for a non-realizable learning task.
Lemma 1 Suppose that H is a set of functions from X to $Y = \{0, 1\}$ with finite VC-dimension $V \ge 1$ and D is the fixed but unknown distribution over $X \times Y$. For any $\epsilon, \delta > 0$, there is a positive constant C such that, if the size of the sample $\{(x_1, y_1), \ldots, (x_N, y_N)\}$ from D is $N(\epsilon, \delta) = \frac{C}{\epsilon^2}\big(V + \log(\frac{1}{\delta})\big)$, then with probability at least $1 - \delta$, for all $h \in H$ the following holds:
$\big|\frac{1}{N}\sum_{i=1}^{N} I(h(x_i) \neq y_i) - E_{(x,y)\in D} I(h(x) \neq y)\big| \le \epsilon$
4 α-Expansion in the Non-realizable Case Multi-view active learning, first described in [20], focuses on the contention points (i.e., unlabeled instances on which different views predict different labels) and queries the labels of some of them. It is motivated by the observation that querying the labels of contention points may help at least one of the two views to learn the optimal classifier.
Table 1: Multi-view active learning with the non-degradation condition
Input: Unlabeled data set $U = \{x^1, x^2, \cdots\}$ where each example $x^j$ is given as a pair $(x_1^j, x_2^j)$
Process: Query the labels of $m_0$ instances drawn randomly from U to compose the labeled data set L
iterate: $i = 0, 1, \cdots, s$
  Train the classifier $h_v^i$ ($v = 1, 2$) by minimizing the empirical risk with L in each view: $h_v^i = \arg\min_{h \in H_v} \sum_{(x_1,x_2,y)\in L} I(h(x_v) \neq y)$;
  Apply $h_1^i$ and $h_2^i$ to the unlabeled data set U and find out the contention point set $Q_i$;
  Query the labels of $m_{i+1}$ instances drawn randomly from $Q_i$, then add them into L and delete them from U.
end iterate
Output: $h_+^s$ and $h_-^s$
Let $S_1 \oplus S_2 = (S_1 - S_2) \cup (S_2 - S_1)$ denote the contention points
One basic property is that Pr(S1 ⊕S2) should not be too small, otherwise the two views could be exactly the same and two-view setting would degenerate into single-view setting. In multi-view learning, the two views represent the same learning task and generally are consistent with each other, i.e., for any instance x = (x1, x2) the labels of x in the two views are the same. Hence we first assume that S∗ 1 = S∗ 2 = S∗. As for the situation where S∗ 1 ̸= S∗ 2, we will discuss on it further in Section 5.2. The instances agreed by the two views can be denoted as (S1∩S2)∪(S1∩S2). However, some of these agreed instances may be predicted different label by the optimal classifier S∗, i.e., the instances in (S1 ∩S2 −S∗) ∪(S1 ∩S2 −S∗). Intuitively, if the contention points can convey some information about (S1 ∩S2 −S∗) ∪(S1 ∩S2 −S∗), then querying the labels of contention points could help to improve S1 and S2. Based on this intuition and that Pr(S1 ⊕S2) should not be too small, we give our definition on α-expansion in the non-realizable case. Definition 1 D is α-expanding if for some α > 0 and any S1 ⊆X1, S2 ⊆X2, (3) holds. Pr S1 ⊕S2  ≥α  Pr S1 ∩S2 −S∗ + Pr S1 ∩S2 −S∗ (3) We say that D is α-expanding with respect to hypothesis class H1 × H2 if the above holds for all S1 ∈H1 ∩X1, S2 ∈H2 ∩X2 (here we denote by Hv ∩Xv the set {h∩Xv : h ∈Hv} for v = 1, 2). Balcan et al. [3] also gave a definition of expansion, Pr(T1 ⊕T2) ≥α min  Pr(T1 ∩T2), Pr(T1 ∩ T2)  , for realizable learning task under the assumptions that the learner in each view is never “confident but wrong” and the learning algorithm is able to learn from positive data only. Here Tv denotes the instances which are classified as positive confidently in each view. Generally, in realizable learning tasks, we aim at studying the asymptotic performance and assume that the performance of initial classifier is better than guessing randomly, i.e., Pr(Tv) > 1/2. This ensures that Pr(T1 ∩T2) is larger than Pr(T1 ∩T2). 
In addition, in [3] the instances which are agreed on by the two views but are predicted a different label by the optimal classifier can be denoted as $\bar{T}_1 \cap \bar{T}_2$. So it can be found that Definition 1 and the definition of expansion in [3] are based on the same intuition: the amount of contention points is no less than a fraction of the amount of instances which are agreed on by the two views but are predicted a different label by the optimal classifiers. 5 Multi-view Active Learning with Non-degradation Condition In this section, we first consider the multi-view learning in Table 1 and analyze whether the multi-view setting can help improve the sample complexity of active learning in the non-realizable case remarkably. In the multi-view setting, the classifiers are often combined to make predictions, and many strategies can be used to combine them. In this paper, we consider the following two combination schemes, $h_+$ and $h_-$, for binary classification:
$h_+^i(x) = 1$ if $h_1^i(x_1) = h_2^i(x_2) = 1$ and $h_+^i(x) = 0$ otherwise; $h_-^i(x) = 0$ if $h_1^i(x_1) = h_2^i(x_2) = 0$ and $h_-^i(x) = 1$ otherwise. (4)
5.1 The Situation Where $S_1^* = S_2^*$ With (4), the error rates of the combined classifiers $h_+^i$ and $h_-^i$ satisfy (5) and (6), respectively:
$R(h_+^i) - R(S^*) = R(S_1^i \cap S_2^i) - R(S^*) \le d_\Delta(S_1^i \cap S_2^i, S^*)$ (5)
$R(h_-^i) - R(S^*) = R(S_1^i \cup S_2^i) - R(S^*) \le d_\Delta(S_1^i \cup S_2^i, S^*)$ (6)
Here $S_v^i \subset X_v$ ($v = 1, 2$) corresponds to the classifier $h_v^i \in H_v$ in the i-th round. In each round of multi-view active learning, the labels of some contention points are queried to augment the training data set L, and the classifier in each view is then refined. As discussed in [23], we also assume that the learner in Table 1 satisfies the non-degradation condition as the amount of labeled training examples increases, i.e., (7) holds, which implies that the excess error of $S_v^{i+1}$ is no larger than that of $S_v^i$ in the region of $S_1^i \oplus S_2^i$.
Pr(S^{i+1}_v ∆ S∗ | S^i_1 ⊕ S^i_2) ≤ Pr(S^i_v ∆ S∗ | S^i_1 ⊕ S^i_2)   (7)

To illustrate the non-degradation condition, we give the following example. Suppose the data in Xv (v = 1, 2) fall into n different clusters, denoted by π^v_1, ..., π^v_n, and for simplicity every cluster has the same probability mass. The positive class is the union of some clusters while the negative class is the union of the others. Each positive (negative) cluster π^v_ξ in Xv is associated with only 3 positive (negative) clusters π^{3−v}_ς (ξ, ς ∈ {1, ..., n}) in X_{3−v} (i.e., given an instance xv in π^v_ξ, x_{3−v} will only be in one of these π^{3−v}_ς). Suppose the learning algorithm predicts all instances in each cluster with the same label, i.e., the hypothesis class Hv consists of the hypotheses which do not split any cluster. Thus, the cluster π^v_ξ can be classified according to the posterior probability P(y = 1 | π^v_ξ), and querying the labels of instances in cluster π^v_ξ will not influence the estimation of the posterior probability for cluster π^v_ς (ς ≠ ξ). It is evident that the non-degradation condition holds in this task. Note that the non-degradation assumption may not always hold; we discuss this in Section 6. Now we give Theorem 1.

Theorem 1 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to Definition 1, when the non-degradation condition holds, if s = ⌈2 log(1/(8ǫ)) / log(1/C2)⌉ and m_i = (256kC / C_1^2)(V + log(16(s+1)/δ)), the multi-view active learning in Table 1 will generate two classifiers h^s_+ and h^s_−, at least one of which has error rate no larger than R(S∗) + ǫ with probability at least 1 − δ. Here, V = max[VC(H1), VC(H2)], where VC(H) denotes the VC-dimension of the hypothesis class H, k = (1+λ)/λ, C1 = 2 C_0^{−1/λ} λ (λ + 1)^{−1−1/λ} and C2 = (5α+8)/(6α+8).

Proof sketch. Let Q_i = S^i_1 ⊕ S^i_2. First, with Lemma 1 and (2) we have d_∆(S^{i+1}_1 ∩ S^{i+1}_2 | Q_i, S∗ | Q_i) ≤ 1/8. Let T^{i+1}_v = S^{i+1}_v ∩ Q_i and τ_{i+1} = Pr(T^{i+1}_1 ⊕ T^{i+1}_2 − S∗) / Pr(T^{i+1}_1 ⊕ T^{i+1}_2) − 1/2.
Considering (7) and d_∆(S^i_1 ∩ S^i_2 | Q_i, S∗ | Q_i) Pr(Q_i) = Pr(S^i_1 ∩ S^i_2 − S∗) + Pr(S̄^i_1 ∩ S̄^i_2 − S̄∗), we calculate that

d_∆(S^{i+1}_1 ∩ S^{i+1}_2, S∗) ≤ Pr(S^i_1 ∩ S^i_2 − S∗) + Pr(S̄^i_1 ∩ S̄^i_2 − S̄∗) + (1/8) Pr(S^i_1 ⊕ S^i_2) − τ_{i+1} Pr((S^{i+1}_1 ⊕ S^{i+1}_2) ∩ Q_i)

d_∆(S^{i+1}_1 ∪ S^{i+1}_2, S∗) ≤ Pr(S^i_1 ∩ S^i_2 − S∗) + Pr(S̄^i_1 ∩ S̄^i_2 − S̄∗) + (1/8) Pr(S^i_1 ⊕ S^i_2) + τ_{i+1} Pr((S^{i+1}_1 ⊕ S^{i+1}_2) ∩ Q_i).

Since in each round some contention points are queried and added into the training set, the difference between the two views decreases, i.e., Pr(S^{i+1}_1 ⊕ S^{i+1}_2) is no larger than Pr(S^i_1 ⊕ S^i_2). Let γ_i = Pr(S^i_1 ⊕ S^i_2 − S∗) / Pr(S^i_1 ⊕ S^i_2) − 1/2. With Definition 1 and the different combinations of τ_{i+1} and γ_i, we can derive either

d_∆(S^{i+1}_1 ∩ S^{i+1}_2, S∗) / d_∆(S^i_1 ∩ S^i_2, S∗) ≤ (5α+8)/(6α+8)  or  d_∆(S^{i+1}_1 ∪ S^{i+1}_2, S∗) / d_∆(S^i_1 ∪ S^i_2, S∗) ≤ (5α+8)/(6α+8).

When s = ⌈2 log(1/(8ǫ)) / log(1/C2)⌉, where C2 = (5α+8)/(6α+8) is a constant less than 1, we have either d_∆(S^s_1 ∩ S^s_2, S∗) ≤ ǫ or d_∆(S^s_1 ∪ S^s_2, S∗) ≤ ǫ. Thus, with (5) and (6) we have either R(h^s_+) ≤ R(S∗) + ǫ or R(h^s_−) ≤ R(S∗) + ǫ. □

From Theorem 1 we know that we only need to request Σ_{i=0}^{s} m_i = Õ(log(1/ǫ)) labels to learn h^s_+ and h^s_−, at least one of which has error rate no larger than R(S∗) + ǫ with probability at least 1 − δ. If we choose h^s_+ and it happens to satisfy R(h^s_+) ≤ R(S∗) + ǫ, we get a classifier whose error rate is no larger than R(S∗) + ǫ. Fortunately, there are only two candidate classifiers, so the probability of picking the right one is no less than 1/2. To study how to choose between h^s_+ and h^s_−, we first give Definition 2.

Definition 2 The multi-view classifiers S1 and S2 satisfy the β-condition if (8) holds for some β > 0:

Pr({x : x ∈ S1 ⊕ S2 ∧ y(x) = 1}) / Pr(S1 ⊕ S2) − Pr({x : x ∈ S1 ⊕ S2 ∧ y(x) = 0}) / Pr(S1 ⊕ S2) ≥ β   (8)

(8) lower-bounds the gap between the fraction of positive examples and the fraction of negative examples in the contention region S1 ⊕ S2. Based on Definition 2, we give Lemma 2, which provides the information needed to decide how to choose between h+ and h−.
This leads to Theorem 2.

Lemma 2 If the multi-view classifiers S^s_1 and S^s_2 satisfy the β-condition, then with 2 log(4/δ) / β² labels we can decide correctly whether Pr({x : x ∈ S^s_1 ⊕ S^s_2 ∧ y(x) = 1}) or Pr({x : x ∈ S^s_1 ⊕ S^s_2 ∧ y(x) = 0}) is smaller with probability at least 1 − δ.

Theorem 2 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to Definition 1, when the non-degradation condition holds, if the multi-view classifiers satisfy the β-condition, then by requesting Õ(log(1/ǫ)) labels the multi-view active learning in Table 1 will generate a classifier whose error rate is no larger than R(S∗) + ǫ with probability at least 1 − δ.

From Theorem 2 we know that we only need to request Õ(log(1/ǫ)) labels to learn a classifier with error rate no larger than R(S∗) + ǫ with probability at least 1 − δ. Thus, we achieve an exponential improvement in the sample complexity of active learning in the non-realizable case under the multi-view setting. Sometimes the gap between the fractions of positive and negative examples in S^s_1 ⊕ S^s_2 may be very small, i.e., (9) holds:

Pr({x : x ∈ S^s_1 ⊕ S^s_2 ∧ y(x) = 1}) / Pr(S^s_1 ⊕ S^s_2) − Pr({x : x ∈ S^s_1 ⊕ S^s_2 ∧ y(x) = 0}) / Pr(S^s_1 ⊕ S^s_2) = O(ǫ)   (9)

If so, we need not estimate whether R(h^s_+) or R(h^s_−) is smaller, and Theorem 3 indicates that both h^s_+ and h^s_− are good approximations of the optimal classifier.

Theorem 3 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to Definition 1, when the non-degradation condition holds, if (9) is satisfied, then by requesting Õ(log(1/ǫ)) labels the multi-view active learning in Table 1 will generate two classifiers h^s_+ and h^s_− which satisfy either (a) or (b) with probability at least 1 − δ: (a) R(h^s_+) ≤ R(S∗) + ǫ and R(h^s_−) ≤ R(S∗) + O(ǫ); (b) R(h^s_+) ≤ R(S∗) + O(ǫ) and R(h^s_−) ≤ R(S∗) + ǫ.
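To get a feel for the constants appearing in Theorem 1 and Lemma 2, a small script can evaluate the number of rounds s, the per-round query size m_i, and Lemma 2's label count; the parameter values below are arbitrary illustrations, not values from the paper:

```python
from math import ceil, log

def theorem1_quantities(alpha, lam, C0, C, V, eps, delta):
    """Constants of Theorem 1; C0 and C are noise-related constants from the
    text, lam the Tsybakov noise parameter, V the VC-dimension bound."""
    k = (1 + lam) / lam
    C1 = 2 * C0 ** (-1 / lam) * lam * (1 + lam) ** (-1 - 1 / lam)
    C2 = (5 * alpha + 8) / (6 * alpha + 8)   # per-round contraction factor, < 1
    s = ceil(2 * log(1 / (8 * eps)) / log(1 / C2))
    m = 256 * k * C / C1 ** 2 * (V + log(16 * (s + 1) / delta))
    return k, C1, C2, s, m

def lemma2_labels(beta, delta):
    """Labels Lemma 2 needs to choose between h_plus and h_minus."""
    return 2 * log(4 / delta) / beta ** 2

k, C1, C2, s, m = theorem1_quantities(alpha=1.0, lam=1.0, C0=1.0, C=1.0,
                                      V=10, eps=0.01, delta=0.05)
print(C2)  # 13/14: each round shrinks the excess-error bound by this factor
print(s)   # 69 for these illustrative values; s grows like log(1/eps)
print(round(lemma2_labels(beta=0.2, delta=0.05)))  # 219
```

Note how s depends on ǫ only logarithmically, which is the source of the exponential improvement claimed in Theorem 2.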
The complete proof of Theorem 1 and the proofs of Lemma 2, Theorem 2 and Theorem 3 are given in the supplementary file.

5.2 The Situation Where S∗_1 ≠ S∗_2

Although the two views represent the same learning task and generally are consistent with each other, sometimes S∗_1 may not be equal to S∗_2. The α-expansion assumption in Definition 1 should therefore be adjusted to the situation where S∗_1 ≠ S∗_2. To analyze this theoretically, we replace S∗ by S∗_1 ∩ S∗_2 in Definition 1 and get (10). Similarly to Theorem 1, we obtain Theorem 4.

Pr(S1 ⊕ S2) ≥ α [ Pr(S1 ∩ S2 − S∗_1 ∩ S∗_2) + Pr(S̄1 ∩ S̄2 − (S̄∗_1 ∪ S̄∗_2)) ]   (10)

Theorem 4 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to (10), when the non-degradation condition holds, if s = ⌈2 log(1/(8ǫ)) / log(1/C2)⌉ and m_i = (256kC / C_1^2)(V + log(16(s+1)/δ)), the multi-view active learning in Table 1 will generate two classifiers h^s_+ and h^s_−, at least one of which has error rate no larger than R(S∗_1 ∩ S∗_2) + ǫ with probability at least 1 − δ. (V, k, C1 and C2 are given in Theorem 1.)

Table 2: Multi-view active learning without the non-degradation condition
Input: Unlabeled data set U = {x1, x2, ...} where each example x^j is given as a pair (x^j_1, x^j_2)
Process:
  Query the labels of m_0 instances drawn randomly from U to compose the labeled data set L;
  Train the classifier h^0_v (v = 1, 2) by minimizing the empirical risk with L in each view: h^0_v = arg min_{h∈Hv} Σ_{(x1,x2,y)∈L} I(h(xv) ≠ y);
  iterate: i = 1, ..., s
    Apply h^{i−1}_1 and h^{i−1}_2 to the unlabeled data set U and find out the contention point set Q_i;
    Query the labels of m_i instances drawn randomly from Q_i, then add them into L and delete them from U;
    Query the labels of (2^i − 1)m_i instances drawn randomly from U − Q_i, then add them into L and delete them from U;
    Train the classifier h^i_v by minimizing the empirical risk with L in each view: h^i_v = arg min_{h∈Hv} Σ_{(x1,x2,y)∈L} I(h(xv) ≠ y).
  end iterate
Output: h^s_+ and h^s_−

Proof.
Since S∗_v is the optimal Bayes classifier in the v-th view, R(S∗_1 ∩ S∗_2) is obviously no less than R(S∗_v) (v = 1, 2). So, learning a classifier with error rate no larger than R(S∗_1 ∩ S∗_2) + ǫ is not harder than learning a classifier with error rate no larger than R(S∗_v) + ǫ. We therefore aim at learning a classifier with error rate no larger than R(S∗_1 ∩ S∗_2) + ǫ. Without loss of generality, we assume R(S^i_v) > R(S∗_1 ∩ S∗_2) for i = 0, 1, ..., s; if R(S^i_v) ≤ R(S∗_1 ∩ S∗_2), we already have a classifier with error rate no larger than R(S∗_1 ∩ S∗_2) + ǫ. Thus, we can neglect the probability mass on any hypothesis whose error rate is less than R(S∗_1 ∩ S∗_2) and regard S∗_1 ∩ S∗_2 as the optimal classifier. Replacing S∗ by S∗_1 ∩ S∗_2 in the discussion of Section 5.1, the proof of Theorem 1 yields Theorem 4. □

Theorem 4 shows that in the situation where S∗_1 ≠ S∗_2, by requesting Õ(log(1/ǫ)) labels we can learn two classifiers h^s_+ and h^s_−, at least one of which has error rate no larger than R(S∗_1 ∩ S∗_2) + ǫ with probability at least 1 − δ. With Lemma 2, we get Theorem 5 from Theorem 4.

Theorem 5 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to (10), when the non-degradation condition holds, if the multi-view classifiers satisfy the β-condition, then by requesting Õ(log(1/ǫ)) labels the multi-view active learning in Table 1 will generate a classifier whose error rate is no larger than R(S∗_1 ∩ S∗_2) + ǫ with probability at least 1 − δ.

Generally, R(S∗_1 ∩ S∗_2) is larger than R(S∗_1) and R(S∗_2). When S∗_1 is not too different from S∗_2, i.e., Pr(S∗_1 ⊕ S∗_2) ≤ ǫ/2, we have Corollary 1, which indicates that the exponential improvement in the sample complexity of active learning with Tsybakov noise is still possible.
Corollary 1 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to (10), when the non-degradation condition holds, if the multi-view classifiers satisfy the β-condition and Pr(S∗_1 ⊕ S∗_2) ≤ ǫ/2, then by requesting Õ(log(1/ǫ)) labels the multi-view active learning in Table 1 will generate a classifier with error rate no larger than R(S∗_v) + ǫ (v = 1, 2) with probability at least 1 − δ.

The proofs of Theorem 5 and Corollary 1 are given in the supplementary file.

6 Multi-view Active Learning without Non-degradation Condition

Section 5 considers situations where the non-degradation condition holds; there are cases, however, where the non-degradation condition (7) does not hold. In this section we focus on the multi-view active learning in Table 2 and give an analysis with the non-degradation condition waived. First, we give Theorem 6 on the sample complexity of the multi-view active learning in Table 2 when S∗_1 = S∗_2 = S∗.

Theorem 6 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to Definition 1, if s = ⌈2 log(1/(8ǫ)) / log(1/C2)⌉ and m_i = (256kC / C_1^2)(V + log(16(s+1)/δ)), the multi-view active learning in Table 2 will generate two classifiers h^s_+ and h^s_−, at least one of which has error rate no larger than R(S∗) + ǫ with probability at least 1 − δ. (V, k, C1 and C2 are given in Theorem 1.)

Proof sketch. In the (i + 1)-th round, we randomly query (2^{i+1} − 1)m_i labels from Q_i and add them into L, so the number of training examples for S^{i+1}_v (v = 1, 2) is larger than the total number of training examples for S^i_v. Thus d(S^{i+1}_v | Q_i, S∗ | Q_i) ≤ d(S^i_v | Q_i, S∗ | Q_i) holds for any ϕ_v. Setting ϕ_v ∈ {0, 1}, the non-degradation condition (7) stands. Thus, with the proof of Theorem 1 we get Theorem 6. □

Theorem 6 shows that we can request Σ_{i=0}^{s} 2^i m_i = Õ(1/ǫ) labels to learn two classifiers h^s_+ and h^s_−, at least one of which has error rate no larger than R(S∗) + ǫ with probability at least 1 − δ.
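The querying schedule of Table 2 can be sketched as follows; the empirical-risk minimizer `erm`, the pool `U`, and the `oracle` are illustrative placeholders whose concrete form depends on the hypothesis classes H1 and H2:

```python
import random

def table2_active_learning(U, erm, m, s, oracle):
    """Sketch of the procedure in Table 2. U: list of unlabeled pairs (x1, x2);
    erm(view, L): empirical-risk minimizer returning a 0/1 classifier for that
    view; m[i]: round-i query size; oracle(x): true label of x."""
    random.shuffle(U)
    L = [(x, oracle(x)) for x in U[:m[0]]]           # m_0 initial random queries
    U = U[m[0]:]
    h1, h2 = erm(0, L), erm(1, L)
    for i in range(1, s + 1):
        Q = [x for x in U if h1(x[0]) != h2(x[1])]    # contention set Q_i
        rest = [x for x in U if h1(x[0]) == h2(x[1])]
        # m_i labels from Q_i plus (2^i - 1) m_i labels from U - Q_i
        queried = Q[:m[i]] + rest[:(2 ** i - 1) * m[i]]
        L += [(x, oracle(x)) for x in queried]
        taken = set(queried)
        U = [x for x in U if x not in taken]
        h1, h2 = erm(0, L), erm(1, L)                 # retrain in each view
    h_plus = lambda x: int(h1(x[0]) == 1 and h2(x[1]) == 1)
    h_minus = lambda x: int(h1(x[0]) == 1 or h2(x[1]) == 1)
    return h_plus, h_minus
```

The extra (2^i − 1)m_i random queries per round are what replace the non-degradation assumption, at the price of the Õ(1/ǫ) rather than Õ(log(1/ǫ)) label count.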
To guarantee the non-degradation condition (7), we only need to query (2^i − 1)m_i more labels in the i-th round. With Lemma 2, we get Theorem 7.

Theorem 7 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to Definition 1, if the multi-view classifiers satisfy the β-condition, then by requesting Õ(1/ǫ) labels the multi-view active learning in Table 2 will generate a classifier whose error rate is no larger than R(S∗) + ǫ with probability at least 1 − δ.

Theorem 7 shows that, without the non-degradation condition, we need to request Õ(1/ǫ) labels to learn a classifier with error rate no larger than R(S∗) + ǫ with probability at least 1 − δ; the order of 1/ǫ is independent of the parameter in the Tsybakov noise. Similarly to Theorem 3, we get Theorem 8, which indicates that both h^s_+ and h^s_− are good approximations of the optimal classifier.

Theorem 8 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to Definition 1, if (9) holds, then by requesting Õ(1/ǫ) labels the multi-view active learning in Table 2 will generate two classifiers h^s_+ and h^s_− which satisfy either (a) or (b) with probability at least 1 − δ: (a) R(h^s_+) ≤ R(S∗) + ǫ and R(h^s_−) ≤ R(S∗) + O(ǫ); (b) R(h^s_+) ≤ R(S∗) + O(ǫ) and R(h^s_−) ≤ R(S∗) + ǫ.

As for the situation where S∗_1 ≠ S∗_2, similarly to Theorem 5 and Corollary 1, we have Theorem 9 and Corollary 2.

Theorem 9 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to (10), if the multi-view classifiers satisfy the β-condition, then by requesting Õ(1/ǫ) labels the multi-view active learning in Table 2 will generate a classifier whose error rate is no larger than R(S∗_1 ∩ S∗_2) + ǫ with probability at least 1 − δ.
Corollary 2 For data distribution D α-expanding with respect to hypothesis class H1 × H2 according to (10), if the multi-view classifiers satisfy the β-condition and Pr(S∗_1 ⊕ S∗_2) ≤ ǫ/2, then by requesting Õ(1/ǫ) labels the multi-view active learning in Table 2 will generate a classifier with error rate no larger than R(S∗_v) + ǫ (v = 1, 2) with probability at least 1 − δ.

The complete proof of Theorem 6 and the proofs of Theorems 7–9 and Corollary 2 are given in the supplementary file.

7 Conclusion

In this paper we present the first study of active learning in the non-realizable case under the multi-view setting. We prove that the sample complexity of multi-view active learning with unbounded Tsybakov noise can be improved to Õ(log(1/ǫ)), in contrast to the single-view setting, where only a polynomial improvement has been proved possible under the same noise condition. In the general multi-view setting, we prove that the sample complexity of active learning with unbounded Tsybakov noise is Õ(1/ǫ), where the order of 1/ǫ is independent of the parameter in the Tsybakov noise, in contrast to previous polynomial bounds where the order of 1/ǫ depends on that parameter. In general, the non-realizability of a learning task can be caused by many kinds of noise, e.g., misclassification noise and malicious noise; it would be interesting to extend our work to more general noise models.

Acknowledgments This work was supported by the NSFC (60635030, 60721002), 973 Program (2010CB327903) and JiangsuSF (BK2008018).

References
[1] M. Anthony and P. L. Bartlett, editors. Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge, UK, 1999.
[2] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, pages 65–72, 2006.
[3] M.-F. Balcan, A. Blum, and K. Yang. Co-training and expansion: Towards bridging theory and practice. In NIPS 17, pages 89–96, 2005.
[4] M.-F. Balcan, A. Z. Broder, and T. Zhang. Margin based active learning.
In COLT, pages 35–50, 2007.
[5] M.-F. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. In COLT, pages 45–56, 2008.
[6] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, pages 92–100, 1998.
[7] R. M. Castro and R. D. Nowak. Upper and lower error bounds for active learning. In Allerton Conference, pages 225–234, 2006.
[8] R. M. Castro and R. D. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339–2353, 2008.
[9] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. Linear classification and selective sampling under low noise conditions. In NIPS 21, pages 249–256, 2009.
[10] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[11] S. Dasgupta. Analysis of a greedy active learning strategy. In NIPS 17, pages 337–344, 2005.
[12] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS 18, pages 235–242, 2006.
[13] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS 20, pages 353–360, 2008.
[14] S. Dasgupta, A. T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. In COLT, pages 249–263, 2005.
[15] L. Devroye, L. Györfi, and G. Lugosi, editors. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[16] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133–168, 1997.
[17] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages 353–360, 2007.
[18] S. Hanneke. Adaptive rates of convergence in active learning. In COLT, 2009.
[19] M. Kääriäinen. Active learning in the non-realizable case. In ALT, pages 63–77, 2006.
[20] I. Muslea, S. Minton, and C. A. Knoblock. Active + semi-supervised learning = robust multi-view learning. In ICML, pages 435–442, 2002.
[21] A. Tsybakov.
Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
[22] L. Wang. Sufficient conditions for agnostic active learnable. In NIPS 22, pages 1999–2007, 2009.
[23] W. Wang and Z.-H. Zhou. On multi-view active learning and the combination with semi-supervised learning. In ICML, pages 1152–1159, 2008.
Epitome driven 3-D Diffusion Tensor image segmentation: on extracting specific structures∗

Kamiya Motwani†§ Nagesh Adluru§ Chris Hinrichs†§ Andrew Alexander‡ Vikas Singh§†
†Computer Sciences §Biostatistics & Medical Informatics ‡Medical Physics, University of Wisconsin
{kmotwani,hinrichs,vsingh}@cs.wisc.edu {adluru,alalexander2}@wisc.edu

Abstract

We study the problem of segmenting specific white matter structures of interest from Diffusion Tensor (DT-MR) images of the human brain. This is an important requirement in many Neuroimaging studies: for instance, to evaluate whether a brain structure exhibits group-level differences as a function of disease in a set of images. Typically, interactive expert-guided segmentation has been the method of choice for such applications, but this is tedious for the large datasets common today. To address this problem, we endow an image segmentation algorithm with "advice" encoding some global characteristics of the region(s) we want to extract. This is accomplished by constructing (using expert-segmented images) an epitome of a specific region – as a histogram over a bag of 'words' (e.g., suitable feature descriptors). Given such a representation, the problem reduces to segmenting a new brain image with additional constraints that enforce consistency between the segmented foreground and the pre-specified histogram over features. We present combinatorial approximation algorithms to incorporate such domain-specific constraints for Markov Random Field (MRF) segmentation. Making use of recent results on image co-segmentation, we derive effective solution strategies for our problem. We provide an analysis of solution quality, and present promising experimental evidence showing that many structures of interest in Neuroscience can be extracted reliably from 3-D brain image volumes using our algorithm.
1 Introduction

Diffusion Tensor Imaging (DTI or DT-MR) is an imaging modality that facilitates measurement of the diffusion of water molecules in tissues. DTI has turned out to be especially useful in Neuroimaging because the inherent microstructure and connectivity networks in the brain can be estimated from such data [1]. The primary motivation is to investigate how specific components (i.e., structures) of the brain network topology respond to disease and treatment [2], and how these are affected by external factors such as trauma. An important challenge here is to reliably extract (i.e., segment) specific structures of interest from DT-MR image volumes, so that these regions can then be analyzed to evaluate variations between clinically disparate groups. This paper focuses on efficient algorithms for this application – that is, 3-D image segmentation with side constraints to preserve fidelity of the extracted foreground with a given epitome of the brain region of interest.

DTI data are represented as a 3 × 3 positive semidefinite tensor at each image voxel. These images provide information about connection pathways in the brain, and neuroscientists focus on the analysis of white-matter regions (these are known to encompass the 'brain axonal networks'). In general, standard segmentation methods yield reasonable results in separating white matter (WM) from gray matter (GM), see [3].

∗Supported by AG034315 (Singh), MH62015 (Alexander), UW ICTR (1UL1RR025011), and UW ADRC (AG033514). Hinrichs and Adluru are supported by UW-CIBM funding (via NLM 5T15LM007359). Thanks to Richie Davidson for assistance with the data, and Anne Bartosic and Chad Ennis for ground truth indications. The authors thank Lopamudra Mukherjee, Moo K. Chung, and Chuck Dyer for discussions and suggestions.
While some of these algorithms make use of the tensor field directly [4], others utilize 'maps' of certain scalar-valued anisotropy measures calculated from the tensors to partition WM/GM regions [5], see Fig. 1. But different pathways play different functional roles; hence it is more meaningful to evaluate group differences in a population at the level of specific white matter structures (e.g., the corpus callosum, fornix, or cingulum bundle). Part of the reason is that even significant volume differences in small structures may be overwhelmed in a pairwise t-test using volume measures of the entire white matter (obtained via WM/GM segmentation [6]). To analyze variations in specific regions, we require segmentation of such structures as a first step. Unsupervised segmentation of specific regions of interest from DTI is difficult. Even interactive segmentation (based on gray-level fractional anisotropy maps) leads to unsatisfactory results unless guided by a neuroanatomical expert – that is, specialized knowledge of the global appearance of the structure is essential in this process. Further, this is tedious for large datasets. One alternative is to use a set of already segmented images to facilitate the processing of new data. Fortunately, since many studies use hand-indicated regions for group analysis [7], such data is readily available. However, directly applying off-the-shelf toolboxes to learn a classifier (from such segmented images) does not work well. Part of the reason is that the local spatial context at each tensor voxel, while useful, is not sufficiently discriminative. In fact, the likelihood that a voxel is assigned to the foreground (the structure of interest) depends on whether the set of all foreground voxels, in its entirety, matches an 'appearance model' of the structure, in addition to being perceptually homogeneous.
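As a concrete example of such scalar anisotropy maps, fractional anisotropy (FA) can be computed per voxel from the eigenvalues of the diffusion tensor, using the standard formula FA = sqrt(3/2)·‖λ − mean(λ)‖/‖λ‖:

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a 3x3 symmetric positive semidefinite diffusion tensor D.
    Ranges from 0 (isotropic diffusion) to 1 (diffusion along one axis)."""
    lam = np.linalg.eigvalsh(D)                 # the three eigenvalues
    den = np.linalg.norm(lam)
    if den == 0:
        return 0.0
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / den

iso = np.eye(3)                      # perfectly isotropic diffusion
fiber = np.diag([1.0, 0.0, 0.0])     # diffusion along a single axis
print(fractional_anisotropy(iso))    # 0.0: no anisotropy
print(fractional_anisotropy(fiber))  # ~1.0: maximal anisotropy
```

Computing this value at every voxel yields the gray-level FA map on which the interactive segmentations mentioned above are typically performed.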
One strategy to model the first requirement is to extract features, generate a codebook dictionary of feature descriptors, and ask that the distribution over the codebook induced by the foreground voxels be consistent with the distribution induced by the expert-segmented foreground (on the same codebook). Putting this together with the homogeneity requirement defines the problem: segment a given DTI image (using MRFs or normalized cuts) while ensuring that the extracted foreground matches a known appearance model (over a bag of codebook features). The goal is related to recent work on the simultaneous segmentation of two images, called Cosegmentation [8, 9, 10, 11]. In the following sections, we formalize the problem and then present efficient segmentation methods. The key contributions of this paper are: (i) we propose a new algorithm for epitome-based graph-cuts segmentation, one which permits the introduction of a bias favoring solutions that match a given epitome for the regions of interest; (ii) we present an application to the segmentation of specific structures in Diffusion Tensor Images of the human brain and provide experimental evidence that many structures of interest in Neuroscience can be extracted reliably from large 3-D DTI images; (iii) our analysis provides a guarantee of a constant-factor approximation ratio of 4; for a deterministic round-up strategy to obtain integral solutions, this approximation is provably tight.

2 Preliminaries

We provide a short overview of how image segmentation is expressed as finding the maximum likelihood solution to a Conditional or Markov Random Field function. Later, we extend the model to include an additional bias (or regularizer) so that configurations consistent with an epitome of a structure of interest turn out to be more likely (than other, possibly lower-energy, solutions).
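The epitome construction described above (assign each voxel's descriptor to its nearest codeword, then histogram the foreground assignments) might be sketched as follows; the feature dimensions and codebook here are toy stand-ins:

```python
import numpy as np

def epitome_histogram(features, foreground_mask, codebook):
    """features: (n, d) per-voxel descriptors; foreground_mask: (n,) bool;
    codebook: (beta, d) codewords F. Returns the beta-bin foreground histogram."""
    # Nearest codeword per voxel: H_b(j) = 1 iff codeword b is closest to voxel j.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    # <H_b, x>: count the foreground voxels falling in each bin b.
    return np.bincount(assign[foreground_mask], minlength=len(codebook))

def ssd(hist, epitome):
    """Histogram dissimilarity later used to regularize the segmentation."""
    return ((hist - epitome) ** 2).sum()

rng = np.random.default_rng(0)
F = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy 2-word codebook (illustrative)
feats = rng.random((100, 2))
fg = feats.sum(1) > 1.0                  # a pretend foreground mask
h = epitome_histogram(feats, fg, F)
print(h.sum() == fg.sum())               # True: each foreground voxel lands in one bin
```

An epitome built this way from expert-segmented images plays the role of the fixed target histogram in the optimization model of Section 3.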
Figure 1: Specific white matter structures such as the Corpus Callosum, Internal Capsules, and Cingulum Bundle are shown in 3D (left), within the entire white matter (center), and overlaid on a Fractional Anisotropy (FA) image slice (right). Our objective is to segment such structures from DTI images. Note that FA is a scalar anisotropy measure often used directly for WM/GM segmentation, since anisotropy is higher in white matter.

2.1 Markov Random Fields (MRF)

Markov Random Field based image segmentation approaches are quite popular in computer vision [12, 13] and neuroimaging [14]. A random field is assumed over the image lattice, consisting of discrete random variables x = {x1, ..., xn}. Each xj ∈ x, j ∈ {1, ..., n}, takes a value from a finite label set L = {L1, ..., Lm}. The set Nj = {i | j ∼ i} lists the neighbors of xj on the adjacency lattice, denoted as (j ∼ i). A configuration of the MRF is an assignment of each xj to a label in L. Labels represent distinct image segments; each configuration gives a segmentation, and the desired segmentation is the least-energy MRF configuration. The energy is expressed as a sum of (1) individual data log-likelihood terms (the cost of assigning xj to Lk ∈ L) and (2) a pairwise smoothness prior (favoring voxels with similar appearance being assigned the same label) [12, 15, 16]:

min_{x,z} Σ_{Lk∈L} Σ_{j=1}^{n} w_jk x_jk + Σ_{(i∼j)} c_ij z_ij   (1)
subject to |x_ik − x_jk| ≤ z_ij ∀k ∈ {1, ..., m}, ∀(i ∼ j) ∈ N, where i, j ∈ {1, ..., n},   (2)
x is binary of size n × m, z is binary of size |N|,   (3)

where w_jk is a unary term encoding the probability of j being assigned to Lk ∈ L, and c_ij is the pairwise smoothness prior (e.g., the Generalized Potts model). The variable z_ij = 1 indicates that voxels i and j are assigned to different labels, and x provides the assignment of voxels to labels (i.e., segments or regions). The problem is NP-hard, but good approximation algorithms (including combinatorial methods) are known [16, 15, 17, 12].
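The energy in (1) can be evaluated directly for a toy labeling; the sketch below computes the unary term plus a Potts-style pairwise term on a small 4-neighborhood lattice (all weights are illustrative):

```python
import numpy as np

def mrf_energy(labels, unary, c):
    """labels: (H, W) int array of label indices; unary: (H, W, m) costs w_{jk};
    c: scalar Potts penalty c_ij paid by each 4-neighbor pair with differing labels."""
    H, W = labels.shape
    e = sum(unary[i, j, labels[i, j]] for i in range(H) for j in range(W))
    # z_ij = 1 exactly when neighboring voxels take different labels.
    e += c * np.count_nonzero(labels[1:, :] != labels[:-1, :])   # vertical pairs
    e += c * np.count_nonzero(labels[:, 1:] != labels[:, :-1])   # horizontal pairs
    return e

unary = np.zeros((2, 2, 2))
unary[..., 1] = 1.0                      # label 1 costs 1 per voxel, label 0 is free
labels = np.array([[0, 1], [0, 0]])
print(mrf_energy(labels, unary, c=0.5))  # 1 unary + 2 cut edges * 0.5 = 2.0
```

Minimizing this energy over all label configurations is exactly the integer program (1)-(3); the combinatorial methods cited above avoid the exhaustive search.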
Special cases (e.g., when c is convex) are known to be poly-time solvable [15]. Next, we discuss an interesting extension of MRF segmentation, namely Cosegmentation, which deals with the simultaneous segmentation of multiple images.

2.2 From Cosegmentation toward Epitome-based MRFs

Cosegmentation builds on the observation that while the global histograms of two images of the same object (in different backgrounds) may differ, the histograms of the respective foreground regions (based on certain invariant features) remain relatively stable. Therefore, one may perform a concurrent segmentation of the two images with a global constraint that enforces consistency between the histograms of only the foreground voxels. We first construct a codebook of features F (e.g., using RGB intensities) for images I^(1) and I^(2); the histograms on this dictionary are H^(1) = {H^(1)_1, ..., H^(1)_β} and H^(2) = {H^(2)_1, ..., H^(2)_β} (b indexes the histogram bins), such that H^(u)_b(j) = 1 if voxel j ∈ I^(u) is most similar to codeword F_b, where u ∈ {1, 2}. If x^(1) and x^(2) denote the segmentation solutions, and x^(1)_j = 1 assigns voxel j of I^(1) to the foreground, a measure of consistency between the foreground regions (after segmentation) is given by

Σ_{b=1}^{β} Ψ( ⟨H^(1)_b, x^(1)⟩, ⟨H^(2)_b, x^(2)⟩ ),   (4)

where Ψ(·, ·) is a suitable similarity (or distance) function and ⟨H^(u)_b, x^(u)⟩ = Σ_{j=1}^{n} H^(u)_b(j) x^(u)_j, a count of the number of voxels in I^(u) (from F_b) assigned to the foreground, for u ∈ {1, 2}. Using (4) to regularize the segmentation objective (1) biases the model to favor solutions whose foregrounds match (w.r.t. the codebook F), leading to more consistent segmentations. The form of Ψ(·, ·) has a significant impact on the hardness of the problem, and different ideas have been explored [8, 9, 10]. For example, the approach in [8] uses the ℓ1 norm to measure (and penalize) the variation, and requires a Trust Region based method for optimization.
The sum of squared differences (SSD) function in [9] leads to partially optimal (half-integral) solutions but requires solving a large linear program – infeasible for the image sizes we consider (which are orders of magnitude larger). Recently, [10] substituted Ψ(·, ·) with a so-called reward on histogram similarity. This does lead to a polynomial-time solvable model, but requires the similarity function to be quite discriminative (otherwise, offering a reward might be counter-productive in this setting).

3 Optimization Model

We start by using the sum of squared differences (SSD) as in [9] to bias the objective function and incorporate epitome awareness within the MRF energy in (1). However, unlike [9], where one seeks a segmentation of both images, here we are provided the second histogram – the epitome (representation) of the specific region of interest. Clearly, this significantly simplifies the resultant Linear Program. Unfortunately, it remains computationally intractable for the high-resolution 3-D image volumes (256² × 128) we consider here (the images are much larger than what is solvable by state-of-the-art LP software, as in [9]). We propose a solution based on a combinatorial method, using ideas from recent papers on Quadratic Pseudo-Boolean functions and their applications [18, 19]. This allows us to apply our technique to large-scale image volumes and obtain accurate results quite efficiently. Further, our analysis shows that we can obtain good constant-factor approximations (these are tight under mild conditions). We discuss our formulation next. We first express the objective in (1) with an additional regularization term that penalizes histogram dissimilarity using the sum of squared differences.
This gives the following expression:

min_{x,z} Σ_{i∼j} c_ij z^(1)_ij + Σ_{j=1}^{n} w_j0 (1 − x^(1)_j) + Σ_{j=1}^{n} w_j1 x^(1)_j + λ Σ_{b=1}^{β} ( ⟨H^(1)_b, x^(1)⟩ − Ĥ_b )²,

where, since the epitome (histogram) is provided, the second argument of Ψ(·, ·) in (4) is replaced with the known quantity Ĥ_b = ⟨H^(2)_b, x^(2)⟩, and x^(1) is the solution vector for image I^(1). The terms w_j0 (and w_j1) denote the unary cost of assigning voxel j to the background (and foreground), and λ is a user-specified parameter controlling the influence of the histogram variation. Dropping the superscript (1), this yields

min_{x,z} Σ_{i∼j} c_ij z_ij + Σ_{j=1}^{n} w_j0 (1 − x_j) + Σ_{j=1}^{n} w_j1 x_j + λ Σ_{b=1}^{β} ( ⟨H_b, x⟩² − 2⟨H_b, x⟩ Ĥ_b + Ĥ_b² )
subject to |x_i − x_j| ≤ z_ij ∀(i ∼ j), where i, j ∈ {1, ..., n}, and x, z are binary.   (5)

The last term Ĥ_b² in (5) is constant, so the model reduces to

min_{x,z} Σ_{i∼j} c_ij z_ij + Σ_{j=1}^{n} w_j0 (1 − x_j) + Σ_{j=1}^{n} w_j1 x_j + λ Σ_{b=1}^{β} ( Σ_{j=1}^{n} Σ_{l=1}^{n} H_b(j) H_b(l) x_j x_l − 2 Σ_{j=1}^{n} H_b(j) x_j Ĥ_b )
s.t. |x_i − x_j| ≤ z_ij ∀(i ∼ j), where i, j ∈ {1, ..., n}, and x, z are binary.   (6)

Observe that (6) is a special case of the general form Γ(x_1, ..., x_n) = Σ_{S⊆U} φ_S Π_{j∈S} x_j, where U = {1, ..., n}, x = (x_1, ..., x_n) ∈ B^n is a binary vector, S is a subset of U, and φ_S denotes the coefficient of S. Such a function Γ : B^n → R is called a pseudo-Boolean function [18]. If the cardinality of S is no more than two, the corresponding form is

Γ(x_1, x_2, ..., x_n) = Σ_j φ_j x_j + Σ_{(i,j)} φ_ij x_i x_j.

These functions are called Quadratic Pseudo-Boolean (QPB) functions. In general, if the objective permits a representation as a QPB, an upper (or lower) bound can be derived using roof (or floor) duality [18], recently utilized in several papers [19, 20, 21]. Notice that the function in (6) is a QPB because it has at most two variables in each term of the expansion.
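The passage from (5) to the quadratic pseudo-Boolean form (6) amounts to reading off unary and pairwise coefficients from the expanded square. The sketch below verifies numerically that the coefficient form reproduces the SSD penalty, up to the constant λ Σ_b Ĥ_b², on a random binary x:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 12, 4
H = np.zeros((beta, n))
H[rng.integers(0, beta, n), np.arange(n)] = 1   # each voxel in exactly one bin
H_hat = rng.integers(0, 5, beta).astype(float)  # the given epitome histogram
lam = 2.0

# Pairwise coefficients phi_{jl} = lam * sum_b H_b(j) H_b(l); the unary part is
# -2 lam sum_b H_b(j) H_hat_b. (Diagonal j = l terms could be folded into the
# unaries since x_j^2 = x_j; they are kept in the quadratic here for brevity.)
Phi_pair = lam * H.T @ H
phi_unary = -2 * lam * H.T @ H_hat

x = rng.integers(0, 2, n).astype(float)
qpb = x @ Phi_pair @ x + phi_unary @ x          # quadratic pseudo-Boolean value
ssd = lam * (((H @ x - H_hat) ** 2).sum() - (H_hat ** 2).sum())  # (5) minus constant
print(np.isclose(qpb, ssd))                     # True
```

Collecting the smoothness and unary terms of (6) into the same coefficient vector gives exactly the reparameterization Φ used in the next section.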
An advantage of the model derived above is that (pending some additional adjustments) we will be able to leverage an extensive existing combinatorial machinery to solve the problem. We discuss these issues in more detail next.

4 Reparameterization and Graph Construction

Now we discuss a graph construction to optimize the above energy function by computing a maximum flow/minimum cut. We represent each variable as a pair of literals, x_j and x̄_j, which corresponds to a pair of nodes in a graph G. Edges are added to G based on the various terms in the corresponding QPB. The min-cut computed on G will determine the assignment of variables to 1 (or 0), i.e., the foreground/background assignment. Depending on how the nodes for a pair of literals are partitioned, we either get "persistent" integral solutions (the same as in the optimal solution) and/or obtain variables assigned 1/2 (half integral) values, which need additional rounding to obtain a {0, 1} solution. We will first reparameterize the coefficients in our objective as a vector denoted by Φ. More specifically, we express the energy by collecting the unary and pairwise costs in (6) as the coefficients of the linear and quadratic variables. For a voxel j, we denote the unary coefficient as Φ_j, and for a pair of voxels (i, j) we give their corresponding coefficients as Φ_ij. For presentation, we denote spatial adjacency as i ∼ j, and if i and j share a bin in the histogram we denote it as i ≅ j, i.e., ∃b : H_b(i) = H_b(j) = 1.

Table 1: Illustration of edge weights introduced in the graph for voxel pairs.

Edges added                 | i ∼ j, i ≇ j | i ≁ j, i ≅ j | i ∼ j, i ≅ j
(v_i → v_j), (v̄_j → v̄_i)  | (1/2)c_ij    | 0            | (1/2)c_ij
(v_j → v_i), (v̄_i → v̄_j)  | (1/2)c_ij    | 0            | (1/2)c_ij
(v̄_j → v_i), (v̄_i → v_j)  | 0            | (1/2)λ       | (1/2)λ
The definition of the pairwise costs covers the following scenarios:

$$\Phi_{ij} = \begin{cases} c_{ij} & \text{if } i \sim j \text{ and } i \not\cong j \text{ and } (i, j) \text{ assigned to different labels} \\ \lambda & \text{if } i \not\sim j \text{ and } i \cong j \text{ and } (i, j) \text{ assigned to foreground} \\ c_{ij} & \text{if } i \sim j \text{ and } i \cong j \text{ and } (i, j) \text{ assigned to different labels} \\ \lambda & \text{if } i \sim j \text{ and } i \cong j \text{ and } (i, j) \text{ assigned to foreground} \end{cases} \qquad (7)$$

The above cases enumerate three possible relationships between a pair of voxels (i, j): (i) (i, j) are spatial neighbors but not bin neighbors; (ii) (i, j) are bin neighbors, but not spatial neighbors; (iii) (i, j) are both bin neighbors and spatial neighbors. In addition, the cost is also a function of the label assignments to (i, j). Note that we assume i ≠ j above, since if i = j we can absorb those costs into the unary terms (because x_i · x_i = x_i). We define the unary costs for each voxel j next:

$$\Phi_j = \begin{cases} w_{j0} & \text{if } j \text{ is assigned to background} \\ w_{j1} + \lambda - 2\lambda \hat H_b & \text{if } j \text{ is assigned to foreground and } \exists b : H_b(j) = 1 \end{cases} \qquad (8)$$

Figure 2: A graph to optimize (6). Nodes in the left box represent v_j; nodes in the right box represent v̄_j. Colors indicate spatial neighbors (orange) or bin neighbors (green).

With the reparameterization Φ = [Φ_j Φ_ij]^T done, we follow the recipe in [18, 22] to construct a graph (briefly summarized below). For each voxel j ∈ I, we introduce two nodes, v_j and v̄_j. Hence, the size of the graph is 2|I|. We also have two special nodes, s and t, which denote the source and sink respectively. We connect each node to the source and/or the sink based on the unary costs, assuming that the source (and sink) partitions correspond to foreground (and background). The source is connected to the node v_j with weight (1/2)(w_j1 + λ − 2λĤ_b), and to node v̄_j with weight (1/2)w_j0. Nodes v_j and v̄_j are in turn connected to the sink with costs (1/2)w_j0 and (1/2)(w_j1 + λ − 2λĤ_b) respectively.
These edges, if saturated in a max-flow, count towards the node's unary cost. Edges between node pairs (except the source and sink) give the pairwise terms of the energy. These edge weights (see Table 1) quantify all possible relationships of pairwise voxels and label assignments (Fig. 2). A maximum flow/minimum cut procedure on this graph gives a solution to our problem. After the cut, each node (for a voxel) is connected either to the source set or to the sink set. Using this membership, we can obtain a final solution (i.e., labeling) as follows:

$$x_j = \begin{cases} 0 & \text{if } v_j \in s,\ \bar v_j \in t \\ 1 & \text{if } v_j \in t,\ \bar v_j \in s \\ \tfrac{1}{2} & \text{otherwise} \end{cases} \qquad (9)$$

A property of the solution obtained by (9) is that the variables assigned {0, 1} values are "persistent", i.e., they are the same as in the optimal integral solution to (6). This means that the solution from the algorithm above is partially optimal [18, 20]. We now only need to find an assignment for the 1/2 variables (to 0 or 1) by rounding. The rounding strategy and its analysis are presented next.

5 Rounding and Approximation Analysis

In general, any reasonable heuristic can be used to round 1/2-valued variables to 0 or 1 (e.g., we can solve for and obtain a segmentation of only the 1/2-valued variables without the additional bias). Our experiments later make use of such a heuristic. The approximation analysis below, however, is based on a more conservative scheme of rounding all 1/2-valued variables up to 1. We only summarize our main results here; the longer version of the paper includes details. A 2-approximation for the objective function (without the epitome bias) is known [16, 12]. The rounding above gives a constant-factor approximation.

Theorem 1 The rounding strategy described above gives a feasible solution to Problem (6). This solution is a factor-4 approximation to (6). Further, the approximation ratio is tight for this rounding.

6 Experimental Results

Overview.
We now empirically evaluate our algorithm for extracting specific structures of interest from DTI data, focusing on (1) the Corpus Callosum (CC) and (2) the Interior Capsule (IC) as representative examples. Our experiments were designed to answer the following main questions: (i) Can the model reliably and accurately identify the structures of interest? Note that general-purpose white matter segmentation methods do not extract specific regions (which are often obtained via intensive interactive methods instead). Solutions from our algorithm, if satisfactory, can be used directly for analysis or as a warm-start for user-guided segmentations for additional refinement. (ii) Does segmentation with a bias for fidelity with epitomes offer advantages over training a classifier on the same features? Clearly, the latter scheme will work nicely if the similarity between foreground/background voxels is sufficiently discriminative. Our experiments provide evidence that epitomes indeed offer advantages. (iii) Finally, we evaluate the advantages of our method in terms of the relative effort expended by a user performing interactive extraction of the CC and IC from 3-D volumes.

Data and Setup. We acquired 25 Diffusion Tensor brain images in 12 non-collinear diffusion encoding directions (and one b = 0 reference image) with a diffusion weighting factor of b = 1000 s/mm². Standard image processing included correcting for eddy-current-related distortion, distortion from field inhomogeneities (using field maps), and head motion. From this data, the tensor elements were estimated using standard toolboxes (Camino [23]). The images were then hand-segmented (slice by slice) by experts to serve as the gold standard segmentation. Within a leave-one-out cross-validation scheme, we split our set into a training set (24 images) and a test set (the hold-out image).
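The leave-one-out protocol above can be sketched as a simple index generator (a generic outline, not the authors' code; the epitome construction and segmentation steps themselves are omitted):

```python
def leave_one_out_splits(n_images=25):
    """Yield (train_indices, held_out_index) pairs for leave-one-out CV.

    For each of the n_images acquisitions, the held-out image is segmented
    with an epitome built from the remaining n_images - 1 volumes.
    """
    for held_out in range(n_images):
        train = [i for i in range(n_images) if i != held_out]
        yield train, held_out

splits = list(leave_one_out_splits(25))   # 25 realizations, as in the paper
```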
Epitomes were constructed using training data (by averaging tensor volumes and generating feature codeword dictionaries), and then specific structures in the hold-out image were segmented using our model. Codewords used for the epitome also served to train an SVM classifier (on training data), which was then used to label voxels in the hold-out image as foreground (part of the structure of interest) or background. We present the mean segmentation accuracy over 25 realizations.

Figure 3: WM/GM segmentation (without epitomes) from standard toolkits, overlaid on FA maps (axial, sagittal views shown).

WM/GM DTI segmentation. To briefly elaborate on (i) above, we note that most existing DTI segmentation algorithms in the literature [24] focus on segmenting the entire white matter (WM) from gray matter (GM), whereas the focus here is to extract specific structures within the WM pathways, to facilitate the type of analysis being pursued in neuroscience studies [25, 2]. Fig. 3 shows results of a DTI image WM segmentation. Such methods segment WM well but are not designed to identify different components within the WM. Certain recent works [26] have reported success in identifying structures such as the cingulum bundle if a good population-specific atlas is available (here, one initializes the segmentation by a sophisticated registration procedure).

Dictionary Generation. A suitable codebook of features (i.e., F from §2.2) is essential to modulate the segmentation (with an uninformative histogram, the process degenerates to an ordinary segmentation without epitomes). Results from our preliminary experiments suggested that the codeword generation must be informed by the properties/characteristics of Diffusion Tensor images. While general-purpose feature extractors or interest-point detectors from computer vision cannot be directly applied to tensor data, our simple scheme below is derived from these ideas.
Briefly, by first setting up a neighborhood region around each voxel, we evaluate the local orientation context and shape information from the principal eigenvectors and eigenvalues of the tensors at each neighboring voxel. Similar to Histogram of Oriented Gradients or SIFT, each neighboring voxel casts a vote for the primary eigenvector orientation (weighted by its eigenvalue), which encodes the distribution of tensor orientations in a local neighborhood around the voxel as a feature vector. These feature vectors are then clustered, and each voxel is 'assigned' to its closest codeword/feature to give H(u). Certain adjustments are needed for structurally sparse regions close to the periphery of the brain surface, where we use all primary eigenvectors in a (larger) neighborhood window. This dictionary generation is not rotationally invariant since the orientations of the eigenvectors are used. Our literature review suggests that there is no 'accepted' strategy for feature extraction from tensor-valued images. While the problem is interesting, the procedure here yields reasonable results for our purpose. We acknowledge that improvements may be possible using more sophisticated approaches.

Implementation Details. Our implementation in C++ was interfaced with a QPB solver [22, 18]. We used a distance measure proposed in DTI-TK [23], which is popular in the neuroimaging literature, to obtain a similarity measure between tensors. The unary terms for the MRF component were calculated as the smallest DTI-TK metric distance between the voxel and a set of labels (generated by sampling from the foreground in the training data). Pairwise smoothness terms were calculated using a spatial neighborhood of 18 neighbors. The parameter λ was set to 10 for all runs.

6.1 Results: User-guided interactive segmentation, segmentation with epitomes, and SVMs

User study for interactive segmentation.
To assess the amount of effort expended in obtaining a good segmentation of the regions of interest in an interactive manner, we set up a user study with two users who were familiar with (but not experts in) neuroanatomy. The users were presented with the ground truth solution for each image. The user provided "scribbles" denoting foreground/background regions, which were incorporated into the segmentation via must-link/cannot-link constraints. Ignoring the time required for segmentation, typically 20–40 seeds were needed for each 2-D slice/image to obtain results close to ground-truth segmentations, which required ∼60 s of user participation per 3–4 slices. Representative results are presented in Figs. 4–5 (column 5).

Results from SVM and our model. For comparison, we trained an SVM classifier on the same set of voxel codewords used for the epitomes. For training, feature vectors for foreground/background voxels from the training images were used, and the learnt function was used to classify voxels in the hold-out image. Representative results are presented in Figs. 4–5, overlaid on 2-D slices of Fractional Anisotropy. We see good consistency between our solutions and the ground truth in Figs. 4–5, whereas the SVM results seem to oversegment, undersegment, or pick up erroneous regions with contextual appearance similar to some voxels in the epitome. It is true that such a classification experiment with better (more discriminative) features would likely perform better; however, it is not clear how to reliably extract good-quality features from tensor-valued images. The results also suggest that our model exploits the epitome of such features rather well within a segmentation criterion.

Quantitative Summary. For quantitative evaluations, we computed the Dice Similarity coefficient between a segmentation solution A and the expert segmentation B, given as 2|A ∩ B| / (|A| + |B|).
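The Dice coefficient just stated is straightforward to compute; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # two empty masks are conventionally treated as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```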
On the CC and IC, the similarity coefficients of our solutions were 0.62 ± 0.04 and 0.57 ± 0.05 respectively. The corresponding values for the SVM segmentation were 0.28 ± 0.06 and 0.15 ± 0.02 respectively. Hence, the null hypothesis can be rejected using a two-sample t-test at the α = 0.01 significance level. The running time of our algorithm was comparable to the running time of the SVM using Shogun (a subset of voxels was used for training). It took ∼2 mins for our algorithm to solve the network flow on the graph, and < 4 mins to read in the images and construct the graph. While the segmentation results from the user-guided interactive segmentation are marginally better than ours, the user study above indicates that a significant level of interaction is required, which is already difficult for large 3-D volumes and becomes impractical for neuroimaging studies with tens of image volumes.

Figure 4: A segmentation of the Corpus Callosum overlaid on FA maps. Rows refer to axial and sagittal views. Columns: (1) Tensors. (2) Ground truth. (3) Our solutions. (4) SVM results. (5) User-guided segmentation.

Figure 5: A segmentation of the Interior Capsules overlaid on FA maps. Rows correspond to axial views. Columns: (1) Tensors. (2) Ground truth. (3) Our solutions. (4) SVM results. (5) User-guided segmentation.

7 Discussion and Conclusions

We present a new combinatorial algorithm for segmenting specific structures from DTI images. Our goal is to segment the structure while maintaining consistency with an epitome of the structure, generated from expert-segmented images (note that this is different from top-down segmentation approaches [27] and from algorithms which use a parametric prior [28, 11]). We see that direct application of max-margin methods does not yield satisfactory results, and the inclusion of a segmentation-specific objective function seems essential. Our derived model can be optimized using a network flow procedure.
We also prove a factor-4 approximation ratio, which is tight for the proposed rounding mechanism. We present experimental evaluations on a number of large-scale image volumes which show that the approach works well and is also computationally efficient (2–3 mins). Empirical improvements seem possible by designing better methods of feature extraction from tensor-valued images. The model may serve to incorporate epitomes for general segmentation problems on other images as well. In summary, our approach shows that many structures of interest in neuroimaging can be accurately extracted from DTI data.

References

[1] J. Burns, D. Job, M. E. Bastin, et al. Structural disconnectivity in schizophrenia: a diffusion tensor magnetic resonance imaging study. The British J. of Psychiatry, 182(5):439–443, 2003.
[2] A. Pfefferbaum and E. Sullivan. Microstructural but not macrostructural disruption of white matter in women with chronic alcoholism. Neuroimage, 15(3):708–718, 2002.
[3] T. Liu, H. Li, K. Wong, et al. Brain tissue segmentation based on DTI data. Neuroimage, 38:114–123, 2007.
[4] Z. Wang and B. Vemuri. DTI segmentation using an information theoretic tensor dissimilarity measure. Trans. on Med. Imaging, 24:1267–1277, 2005.
[5] P. A. Yushkevich, H. Zhang, T. J. Simon, and J. C. Gee. Structure-specific statistical mapping of white matter tracts using the continuous medial representation. In Proc. of MMBIA, 2007.
[6] N. Lawes, T. Barrick, V. Murugam, et al. Atlas based segmentation of white matter tracts of the human brain using diffusion tensor tractography and comparison with classical dissection. Neuroimage, 39:62–79, 2008.
[7] C. B. Goodlett, T. P. Fletcher, J. H. Gilmore, and G. Gerig. Group analysis of DTI fiber tract statistics with application to neurodevelopment. Neuroimage, 45(1):S133–S142, 2009.
[8] C. Rother, T. Minka, A. Blake, and V. Kolmogorov.
Cosegmentation of image pairs by histogram matching: Incorporating a global constraint into MRFs. In Comp. Vision and Pattern Recog., 2006.
[9] L. Mukherjee, V. Singh, and C. Dyer. Half-integrality based algorithms for cosegmentation of images. In Comp. Vision and Pattern Recog., 2009.
[10] D. Hochbaum and V. Singh. An efficient algorithm for co-segmentation. In Intl. Conf. on Comp. Vision, 2009.
[11] D. Batra, A. Kowdle, D. Parikh, et al. iCoseg: Interactive co-segmentation with intelligent scribble guidance. In Comp. Vision and Pattern Recog., 2010.
[12] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. Trans. on Pattern Anal. and Machine Intel., 23(11):1222–1239, 2001.
[13] V. Kolmogorov, Y. Boykov, and C. Rother. Applications of parametric maxflow in computer vision. In Intl. Conf. on Comp. Vision, 2007.
[14] Y. T. Weldeselassie and G. Hamarneh. DT-MRI segmentation using graph cuts. In Medical Imaging: Image Processing, volume 6512 of Proc. SPIE, 2007.
[15] D. Hochbaum. An efficient algorithm for image segmentation, Markov random fields and related problems. J. of the ACM, 48(4):686–701, 2001.
[16] J. Kleinberg and E. Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric partitioning and Markov random fields. J. of the ACM, 49(5):616–639, 2002.
[17] H. Ishikawa. Exact optimization for Markov random fields with convex priors. Trans. on Pattern Anal. and Machine Intel., 25(10):1333–1336, 2003.
[18] E. Boros and P. Hammer. Pseudo-Boolean optimization. Disc. Appl. Math., 123:155–225, 2002.
[19] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary MRFs via extended roof duality. In Comp. Vision and Pattern Recog., 2007.
[20] P. Kohli, A. Shekhovtsov, C. Rother, V. Kolmogorov, et al. On partial optimality in multi-label MRFs. In Intl. Conf. on Machine Learning, 2008.
[21] A. Raj, G. Singh, and R. Zabih.
MRFs for MRIs: Bayesian reconstruction of MR images via graph cuts. In Comp. Vision and Pattern Recog., 2006.
[22] V. Kolmogorov and C. Rother. Minimizing nonsubmodular functions with graph cuts: a review. Trans. on Pattern Anal. and Machine Intel., 29(7):1274, 2007.
[23] H. Zhang, P. A. Yushkevich, D. C. Alexander, and J. C. Gee. Deformable registration of diffusion tensor MR images with explicit orientation optimization. Medical Image Analysis, 10:764–785, 2006.
[24] M. Rousson, C. Lenglet, and R. Deriche. Level set and region based surface propagation for diffusion tensor MRI segmentation. In Proc. of CVAMIA-MMBIA, volume 3117 of LNCS, pages 123–134, 2004.
[25] S. M. Smith, M. Jenkinson, H. Johansen-Berg, et al. Tract-based spatial statistics: Voxelwise analysis of multi-subject diffusion data. Neuroimage, 31:1487–1505, 2006.
[26] S. Awate, H. Zhang, and J. Gee. A fuzzy, nonparametric segmentation framework for DTI and MRI analysis with applications to DTI tract extraction. Trans. on Med. Imaging, 26(11):1525–1536, 2007.
[27] E. Borenstein, E. Sharon, and S. Ullman. Combining top-down and bottom-up segmentation. In Comp. Vision and Pattern Recognition Workshop, 2004.
[28] C. Jingyu, Y. Qiong, W. Fang, et al. Transductive object cutout. In Comp. Vision and Pattern Recog., 2008.
Over-complete representations on recurrent neural networks can support persistent percepts

Shaul Druckmann, Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, druckmanns@janelia.hhmi.org
Dmitri B. Chklovskii, Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, mitya@janelia.hhmi.org

Abstract

A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy. Cortical neurons are then connected by a sparse network of lateral synapses. Here we propose that such architecture may increase the persistence of the representation of an incoming stimulus, or a percept. We demonstrate that for a family of networks in which the receptive field of each neuron is re-expressed by its outgoing connections, a represented percept can remain constant despite changing activity. We term this choice of connectivity REceptive FIeld REcombination (REFIRE) networks. The sparse REFIRE network may serve as a high-dimensional integrator and a biologically plausible model of the local cortical circuit.

1 Introduction

Two salient features of cortical networks are the numerous recurrent lateral connections within a cortical area and the high ratio of cortical cells to sensory input channels. In their seminal study [1], Olshausen and Field argued that such architecture may subserve sparse over-complete representations, which maximize representation accuracy while minimizing the metabolic cost of spiking. In this framework, lateral connections between neurons with correlated receptive fields mediate explaining away of the sensory input features [2].
With the exception of an Ising-like generative model for the lateral connections [3] and a mutual information maximization approach [4], most theoretical work on lateral connections did not focus on over-completeness of the representation ([5] and references therein). Here, we propose that over-complete representations on recurrently connected networks offer a solution to a long-standing puzzle in neuroscience: that of maintaining a stable sensory percept in the absence of time-invariant persistent activity (rate of action potential discharge). In order for sensory percepts to guide actions, their duration must extend to behavioral time scales, hundreds of milliseconds or seconds if not more. However, many cortical neurons exhibit time-varying activity even during working memory tasks ([6, 7] and references therein). If each neuron codes for orthogonal directions in stimulus space, any change in the activity of neurons would cause a distortion in the network representation, implying that a percept cannot be maintained. We point out that, in an over-complete representation, network activity can change without any change in the percept, allowing persistent percepts to be maintained in the face of variable neuronal activity. This results from the fact that the activity space has a higher dimensionality than that of the stimulus space. When the activity changes in a direction nulled by the projection onto stimulus space, the percept remains invariant. What lateral connectivity can support persistent percepts, even in the face of changing neuronal activity? We derive the condition on lateral connection weights for networks to maintain persistent percepts, thus defining a family of REceptive FIeld REcombination networks. Furthermore, we propose that minimizing synaptic volume cost favors sparse REFIRE networks, whose properties are remarkably similar to those of the cortex. Such REFIRE networks act as high-dimensional integrators of sensory input.
2 Model

We consider n sensory neurons, their activity denoted by s ∈ R^n, which project to a layer of m cortical neurons, where m > n. The activity of the m neurons, denoted by a ∈ R^m, at any given time represents a percept of a certain stimulus. The represented percept s is a linear superposition of feature vectors, stacked as columns of matrix D, weighted by the neuronal activity a:

$$s = Da. \qquad (1)$$

For instance, s could represent the intensity level of pixels in a patch of the visual field and the columns of D a dictionary chosen to represent the patches, e.g. a set of Gabor filters [8]. Since m > n, the columns of dictionary D cannot be orthogonal and hence define a frame rather than a basis [9].

2.1 Frames

A frame is a generalization of the idea of a basis to linearly dependent elements [9]. The mapping between the activity space R^m and the sensory space R^n is accomplished by the synthesis operator, D. The adjoint operator D^T is called the analysis operator, and their composition DD^T the frame operator. As a consequence of the columns of D being a frame, a given vector in the space of percepts can be represented non-uniquely, i.e. with different coefficients expressed by neuronal activity a. The general form of the coefficients is given by:

$$a = D^T (DD^T)^{-1} s + a_\perp, \qquad (2)$$

where a_⊥ belongs to the null-space of D, i.e. Da_⊥ = 0. One choice of coefficients, called the frame coefficients, corresponds to a_⊥ = 0 and minimizes their l2 norm. Alternatively, one can choose a set of coefficients minimizing the l1 norm. These can be computed by Matching Pursuit [10], Basis Pursuit [11] or LASSO [12], or by the dynamics of a neural network with feedforward and lateral connections [13]. In summary, the neural activity is an over-complete representation of the sensory percepts, with the m columns of D acting as a frame for the space of sensory percepts.
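Equation (2) is easy to check numerically. The sketch below (a minimal numpy illustration of ours, not from the paper) computes the frame coefficients of a random over-complete frame and verifies that adding any null-space component changes the activity but not the percept:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 10                        # m > n: an over-complete representation
D = rng.standard_normal((n, m))     # columns are the frame elements
s = rng.standard_normal(n)          # a percept

# Frame coefficients: the minimum-l2-norm representation (Eq. 2 with a_perp = 0)
a_frame = D.T @ np.linalg.solve(D @ D.T, s)
assert np.allclose(D @ a_frame, s)

# Adding a null-space component changes the activity but leaves Da unchanged
null_basis = np.linalg.svd(D)[2][n:]          # rows spanning null(D)
a_other = a_frame + null_basis.T @ rng.standard_normal(m - n)
assert np.allclose(D @ a_other, s)
```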
2.2 Persistent percepts and lateral connectivity

Now, we derive a necessary and sufficient condition on the lateral connections L such that for every a the percept represented by Equation (1) persists. We focus on the dynamics of a following a transient presentation of the sensory stimulus. The dynamics of a network with lateral connectivity matrix L are given by:

$$\dot a = -a + La, \qquad (3)$$

where time is measured in units of the neuronal membrane time constant. Requiring time-invariant persistent activity amounts to ȧ = 0, or

$$a = La. \qquad (4)$$

However, this is not necessary if we require only the percept represented by the network to be fixed. Instead,

$$\dot s = D \dot a = D(-a + La) = 0. \qquad (5)$$

Thus, setting the derivative of s to zero is tantamount to

$$Da = DLa. \qquad (6)$$

If we require persistent percepts for any a, then:

$$D = DL. \qquad (7)$$

Equation (7) has a trivial solution L = I, which corresponds to a network with no actual lateral connections and only autapses. We do not consider this solution further for two reasons. First, autapses are extremely rare among cortical neurons [14]. Second, recurrent networks better support persistency than autapses [15, 16]. The intuition behind the derivation of Equation (7) is as follows: as the activity of each neuron changes due to the first term in the rhs of Equation (5), its contribution to the percept may change. To compensate for this change without necessarily keeping the activity fixed, we require that the other neurons adjust their activity according to Equation (6). The condition imposed by Equation (7) on the synaptic weights can be understood as follows: for each neuron j, the sum of its post-synaptic partners' receptive fields, weighted by the synaptic efficacies from neuron j to the other neurons, equals the receptive field of neuron j. Thus, the other neurons get excited by exactly the amount that it would take for them to replace the lost contribution to the percept.
Equation (7) and its non-trivial solutions that maintain persistent percepts are the main results of the present study. We term non-trivial solutions of Equation (7) REceptive FIeld REexpression, or REFIRE, networks due to the intuition underlying their definition. Some patterns of activity satisfying Equation (4) will remain time-invariant themselves. These correspond to patterns spanned by the right eigenvectors of L with an eigenvalue of one. Note that in order to satisfy Equation (7), a right eigenvector v of L must either have an eigenvalue of one or be in the null-space of D. There are infinitely many solutions satisfying Equation (7), since there are m·n equations and m·m variables in L. A general solution is given by:

$$L = D^T (DD^T)^{-1} D + L_\perp, \qquad (8)$$

where L_⊥ indicates a component of L corresponding to the null-space of D, i.e. DL_⊥ = 0. We shall use these degrees of freedom to require a zero diagonal for L, thus avoiding autapses.

Figure 1: Schematic network diagram and Mercedes-Benz example. Left: network diagram. Middle: directions of vectors in the MB example. Right: visualization of L.

2.3 An example: the Mercedes-Benz frame

In order to present a more intuitive view of the concept of persistent percepts, we consider the Mercedes-Benz frame [17]. This simple frame spans the R² plane with three frame elements: [0, 1], [−√3/2, −1/2], [√3/2, −1/2]. In this case, the frame operator DD^T has a particularly simple form, being proportional to the identity matrix, indicating that the frame is tight. The first term in the general form of L (Equation (8)) has a non-zero diagonal, which can be removed by adding L_⊥, a matrix with all its entries equal to one (times a scalar). Thus, L is:

$$L = \begin{pmatrix} 0 & -1 & -1 \\ -1 & 0 & -1 \\ -1 & -1 & 0 \end{pmatrix}$$

This seems a rather unlikely candidate matrix to support persistent percepts.
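The general solution (8) can be verified numerically. In this sketch (an illustration of ours, assuming a generic full-row-rank D), the first term of (8) alone already satisfies the persistent-percept condition (7), and so does any perturbation by a null-space component:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 8
D = rng.standard_normal((n, m))

# The first term of Eq. (8): projection onto the row space of D
L0 = D.T @ np.linalg.solve(D @ D.T, D)
assert np.allclose(D @ L0, D)            # persistent-percept condition, Eq. (7)

# Any L_perp with D @ L_perp = 0 can be added without breaking Eq. (7)
_, _, Vt = np.linalg.svd(D)
L_perp = Vt[n:].T @ rng.standard_normal((m - n, m))
assert np.allclose(D @ (L0 + L_perp), D)
```

These null-space degrees of freedom are exactly what the text uses to zero the diagonal of L.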
However, consider starting out with the vector a₀ = [1, 0, 0], representing the point [0, 1] on the plane; after convergence of the dynamics we have a = [2/3, −1/3, −1/3]. This new activity vector represents exactly the same point on the plane: Da = [0, 1]. Thus, the percept, the point on the plane, remained constant despite changing neuronal activity. Note that some patterns of activity will remain strictly persistent themselves. These correspond to vectors which are a linear combination of the right eigenvectors of L with an eigenvalue of one. In this case, these eigenvectors are: v₁ = [−1, 1, 0], v₂ = [1/2, 1/2, −1].

2.4 The sparse REFIRE network

Which members of the family of REFIRE networks obeying Equation (7) are most likely to model cortical networks? In the cortex, the connectivity is sparse and the synaptic weights are distributed exponentially [18, 19]. These measurements are consistent with minimizing a cost proportional to synaptic weight, such as, for example, synaptic volume. Motivated by these observations, we choose each column of L as a sparse representation of each individual dictionary element by every other element. Define D_j = [d₁, d₂, ..., d_{j−1}, d_{j+1}, ..., d_m]. We shall denote the sparse approximation coefficients by β. Therefore:

$$\beta^*_j = \arg\min_{\beta_j \in \mathbb{R}^{m-1}} \|d_j - D_j \beta_j\|_2^2 + \lambda \|\beta_j\|_1 \qquad (9)$$

These are vectors in R^{m−1}; we now need to insert a zero in the position of the dictionary element that was extracted for each of these vectors. Denote by β̃_j the vector obtained by inserting a zero at the jth location of β_j, resulting in a vector in R^m. The connectivity of our model network is given by L = [β̃₁, β̃₂, ..., β̃_m] ∈ R^{m×m}. We call this form of L the sparse REFIRE network. Similar networks were previously constructed on the raw data (or image patches) [20, 21], while sparse REFIRE networks reflect the relationships among dictionary elements. Previously, the dependencies between dictionary elements were captured by tree-graphs [22, 23].
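The Mercedes-Benz example can be reproduced with a few lines of numpy (a direct Euler integration of Equation (3); a sketch of ours, not the authors' code):

```python
import numpy as np

# Mercedes-Benz frame (columns of D) and the autapse-free REFIRE connectivity
D = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],
              [1.0, -0.5,            -0.5          ]])
L = np.array([[ 0.0, -1.0, -1.0],
              [-1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

# Euler-integrate  a' = -a + L a  from a0 = [1, 0, 0]
a = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(5000):                  # 50 membrane time constants
    a = a + dt * (-a + L @ a)

# Activity has changed, but the percept D a has not
assert np.allclose(a, [2 / 3, -1 / 3, -1 / 3], atol=1e-6)
assert np.allclose(D @ a, [0.0, 1.0], atol=1e-6)

# L has eigenvalue 1 with multiplicity n = 2, as required by Eq. (7)
evals = np.sort(np.linalg.eigvalsh(L))
assert np.allclose(evals, [-2.0, 1.0, 1.0])
```

The component of a along [1, 1, 1] decays (eigenvalue −2 of L), which is precisely the direction nulled by D, so Da is conserved exactly throughout the dynamics.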
3 Results

In this section, we apply our model to the primary visual cortex by modeling the receptive fields following the approach of [1]. We study the properties of the resulting sparse REFIRE network and compare them with experimentally established properties of cortical networks.

3.1 Constructing the sparse REFIRE network for visual cortex

We learn the sparse REFIRE network from a standard set of natural images [8]. We extract patches of size 13×13 pixels. We use a set of 100,000 such patches, distributed evenly across the different natural images, to learn the model. Whitening was performed through PCA, after the DC component of each patch was removed. The dimensionality was reduced from 169 to 84 dimensions. We learn a four-times over-complete dictionary via the SPAMS online sparse approximation toolbox [24]. Figure 2 (left) shows the forward weights (columns of D) learned. As expected, the filters obtained are edge detectors differing in scale, spatial location, and orientation. The sparse REFIRE network was then learned from the dictionary using the same toolbox. The parameter λ in Equation (9) governs the tradeoff between sparsity and reconstruction fidelity (Figure 2, right). We verified that the results presented in this study do not qualitatively change over a wide range of λ, and chose the value of λ at which the average probability of connection was 9%, in agreement with the experimental number of approximately 10%. For this choice, the relative reconstruction mismatch was approximately 10⁻³. The distribution of synaptic weights in the network (Figure 3, left) shows a strong bias toward zero-valued connections and a heavier-than-Gaussian tail, as does the cortical data [25]. For an enlarged view of the network, see Figure 7. From here on we refer to this particular choice as the sparse REFIRE network. Remarkably, the real part of every eigenvalue is less than or equal to one (Figure 3, right), indicating stability of the network dynamics.
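Equation (9) is a standard Lasso problem; the paper solves it with the SPAMS toolbox, but as an illustration a plain iterative soft-thresholding (ISTA) solver suffices. The dictionary below is a small random stand-in for the learned image dictionary:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(Dj, dj, lam, n_iter=2000):
    """Solve Eq. (9), min ||dj - Dj b||_2^2 + lam * ||b||_1, by ISTA."""
    step = 1.0 / (2.0 * np.linalg.norm(Dj, 2) ** 2)   # 1 / Lipschitz constant
    b = np.zeros(Dj.shape[1])
    for _ in range(n_iter):
        b = soft(b - step * 2.0 * Dj.T @ (Dj @ b - dj), step * lam)
    return b

# One column of the sparse REFIRE network: re-express dictionary element d_j
# using all the other elements (here, a small synthetic dictionary)
rng = np.random.default_rng(2)
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary elements
j = 0
Dj = np.delete(D, j, axis=1)
beta = lasso_ista(Dj, D[:, j], lam=0.05)
col_j = np.insert(beta, j, 0.0)         # beta-tilde_j: zero diagonal, no autapse
```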
Although equation (7) guarantees that n eigenvalues are equal to one, it does not rule out the existence of eigenvalues with greater real part. We speculate that the absence of such eigenvalues in the spectrum is due to the l1 term in equation (9), the minimization of which can be viewed as a shrinkage of Gershgorin circles.

Figure 2: The sparse REFIRE network. Left: the patches corresponding to columns of D, sorted by variance. Right: summed l1-norm of all columns of L (left y-axis, red) and the reconstruction mismatch |D − DL|/|D| (right y-axis, blue) as a function of λ. The dashed line indicates the value of λ chosen for the sparse REFIRE network.

We find that the learned connectivity was asymmetric, with substantial imaginary components in the eigenvalues (see Figure 3, right). In general, the sparse REFIRE network is unlikely to be symmetric because the connection weights between a pair of neurons are not decided based solely on the identity of the neurons in the pair, but depend on the other connections of the same pre-synaptic neuron.

Figure 3: Properties of lateral connections. Left: distribution of lateral connectivity weights; the inset shows a survival plot with logarithmic y-axis and the same axis limits. Right: scatter plot of the eigenvalues of the lateral connectivity matrix. Note that there are many eigenvalues at real value one, imaginary value zero (histogram shown below the plot).

Numerical simulations of the dynamics of a recurrent network with connectivity matrix L confirm that the percept remains stable during the network dynamics.
We chose an image patch at random and simulated the network dynamics. As can be seen in Figure 4, left, despite significant changes in the activity of the neurons, the percept encoded by the network remained stable: the PSNR between the original image and the image after dynamics lasting 100 neuronal time constants was 45.5 dB. The dynamics of the network desparsified the representation (Figure 4, right). Averaged across multiple patches, the value of each coefficient in the sparse representation was 0.0704; after the network dynamics this increased to 0.0752, though still below the value of 0.0814 obtained for the frame-coefficient representation.

3.2 Computational advantages of the sparse REFIRE network

In this section, we consider possible computational advantages of the de-coupling between the sensory percept and its representation by neuronal activity. Specifically, we address a shortcoming of the sparse representation: its lack of robustness [13], namely, the fact that stimuli that differ only to a small degree might end up being represented by very different coefficients. Intuitively speaking, this may occur when two (or more) dictionary elements compete for the same role in the sparse representation.

Figure 4: Evolution of neuronal activity in time. Left: activity of a subset of neurons over time; the top shows the original percept (framed in black) and, plotted left to right, patches taken from consecutive points in the dynamics. Right: scatter of the coefficients before and after 400 neuronal time constants of the dynamics.

To arrive at a sparse approximation of the stimuli, either one of the dictionary elements could potentially be used, but due to the high cost of non-sparseness both of them together are not likely to be chosen in a given representation.
Thus, small changes in the image, as might arise from various noise sources, might cause one of the coefficients to be preferred over the other in an essentially random fashion, potentially resulting in very different coefficient values for highly similar images. The dynamics of the sparse REFIRE network improve the robustness of the coefficient values in the face of noise. To model this effect we extract a single patch and corrupt it repeatedly with i.i.d. 5% Gaussian noise. Figure 5, left, shows two patches with similar orientation, and Figure 5, middle, shows the values of these two coefficients in the sparse approximation across the different noise repetitions. As can clearly be seen, only one or the other of the two coefficients is used; the resulting flickering in the coefficients exemplifies the competition described above and the lack of robustness it causes. Note that the true lack of robustness arises from multicollinear relations among the different dictionary elements; here we restrict ourselves to two in the interests of clarity. Figure 5, right, shows these coefficient values plotted one against the other (red), along with the values of the two coefficients following the model dynamics (blue). In the latter case, the coefficient values remain fairly constant across repetitions, and the flickering representation of Figure 5, middle, is abolished. We further examined the utility of a more stable representation by training a Naive Bayes classifier to discriminate between noisy versions of two patches. We corrupt the two patches with i.i.d. noise and train the classifier on 75% of the data while reserving the remaining data for testing generalization. We train one classifier on the sparse representation and the other on the representation following the dynamics of the sparse REFIRE network.
We find that the generalization of the classifier trained on the post-dynamics representation was indeed higher, providing 92% accuracy, while the classifier trained on the sparse coefficients scored 83% accuracy. We then demonstrate the computational advantages of the sparse REFIRE network in a more realistic scenario: encoding a set of patches extracted from an image by shifting the patch one pixel at a time. Such a shift can be caused by fixational drift or slow self-movement. Figure 5, right top, shows a subset of the patches extracted in this fashion. For each of the patches we calculate the sparse approximation coefficients and then determine the dot product between the representations of consecutive patches. We then take the same coefficients, evolve them through the dynamics of the sparse REFIRE network, and compute the dot product between these new coefficients. Figure 5, right bottom, shows the normalized dot product: the value of the dot product between the coefficients of two consecutive patches after the sparse REFIRE network dynamics, divided by the same dot product between the original coefficients. As can be seen, in nearly all cases the ratio is higher than one, indicating a smoother transition between the coefficients of the consecutive patches.

Figure 5: Sparse REFIRE network dynamics enhance the robustness of the representation. Left: the patches corresponding to two columns of D with similar tuning, followed by the coefficient of each patch in the representation of the different noisy image instantiations, and a scatter plot of the coefficient values before (red) and following (blue) the recurrent dynamics. Right: an example of the patches in the sliding frame (top) and the normalized dot product between consecutive patches (bottom).

Figure 6: Dictionary clustering.
Clusters of patches obtained by a three-way sparse REFIRE network partitioning by normalized cut. Note the mainly horizontal orientation of the first set of patches and the vertical orientation of the second.

The sparse REFIRE network encodes useful information about the relations between the different dictionary elements. This can be probed by partitioning the graph [20]. Figure 6 shows the components of a normalized cut performed on the sparse REFIRE network. The left group shows a clear bias towards horizontal orientation tuning, the middle towards vertical. Thus, subspaces can be learned directly by partitioning the sparse REFIRE network, offering a complementary approach to learning structured models directly from the data [26, 27]. Finally, the sparse REFIRE network serves as an integrator of the sensory input. The eigenspace of the unit eigenvalue is a multi-dimensional generalization of the line attractor used to model persistent activity [16]. However, unlike the persistent-activity theory, which focuses on dynamics along the line attractor, we emphasize the transient dynamics approaching the unitary eigenspace.

4 Discussion

This study makes a number of novel contributions. First, we propose and demonstrate that in an over-complete representation certain types of network connectivity allow the percept, i.e. the stimulus represented by the network activity, to remain fixed in time despite changing neuronal activity. Second, we propose the sparse REFIRE network as a biologically plausible model of cortical lateral connections that enables such persistent percepts. Third, we point out that the ability to manipulate activity without affecting the accuracy of representation can be exploited to achieve computational goals. As an example, we show that the sparse REFIRE network dynamics, though causing the representation to be less sparse, alleviate the problem of representation non-robustness.
Although this study focused on sensory representation in the visual cortex, the framework can be extended to other sensory modalities, motor cortex and, perhaps, even higher cognitive areas such as prefrontal cortex or hippocampus.

Figure 7: Sparse REFIRE network structure. Nodes are shown by the patch corresponding to their feature vector. Arrows indicate connections: blue excitatory, red inhibitory. The plot is organized to place strongly connected nodes close in space; only the strongest connections are shown in the interests of clarity. Inset, left: histogram of connectivity fraction by difference in feature orientation (red: non-zero connections; gray: all connections). Inset, right: zoomed-in view.

The sparse REFIRE network model bears an important relation to the family of sparse subspace models, which have been suggested to improve the robustness of sparse representations [26, 27]. We have shown that subspaces can be learned directly from the graph by standard graph partitioning algorithms. The optimal way to leverage the information embodied in the sparse REFIRE network to learn subspace-like models is a subject of ongoing work with promising results, as is the study of different matrices L that allow persistent percepts.

Acknowledgments

We would like to thank Anatoli Grinshpan, Tao Hu, Alexei Koulakov, Bruno Olshausen and Lav Varshney for fruitful discussions and Frank Midgley for assistance with preparing Figure 7.

References

[1] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, pp. 607–609, Jun 1996.
[2] M. Rehn and F. Sommer, "A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields," Journal of Computational Neuroscience, vol. 22, pp. 135–146, 2007.
[3] P. J. Garrigues and B. A.
Olshausen, "Learning horizontal connections in a sparse coding model of natural images," Advances in Neural Information Processing Systems, vol. 20, pp. 505–512, 2008.
[4] O. Shriki, H. Sompolinsky, and D. D. Lee, "An information maximization approach to overcomplete and recurrent representations," Advances in Neural Information Processing Systems, vol. 12, pp. 87–93, 2000.
[5] D. B. Chklovskii and A. A. Koulakov, "Maps in the brain: What can we learn from them?," Annual Review of Neuroscience, vol. 27, no. 1, pp. 369–392, 2004.
[6] G. Major and D. Tank, "Persistent neural activity: prevalence and mechanisms," Current Opinion in Neurobiology, vol. 14, no. 6, pp. 675–684, 2004.
[7] M. Goldman, "Memory without feedback in a neural network," Neuron, vol. 61, no. 4, pp. 621–634, 2009.
[8] A. Hyvärinen, J. Hurri, and P. O. Hoyer, Natural Image Statistics: A Probabilistic Approach to Early Computational Vision. Springer, 2009.
[9] O. Christensen, An Introduction to Frames and Riesz Bases. Birkhäuser, 2003.
[10] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, pp. 3397–3415, Dec 1993.
[11] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
[12] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society (Series B), vol. 58, pp. 267–288, 1996.
[13] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen, "Sparse coding via thresholding and local competition in neural circuits," Neural Computation, vol. 20, pp. 2526–2563, 2008.
[14] V. Braitenberg and A. Schüz, Cortex: Statistics and Geometry of Neuronal Connectivity. Berlin, Germany: Springer, 1998.
[15] S. Cannon, D. Robinson, and S. Shamma, "A proposed neural network for the integrator of the oculomotor system," Biological Cybernetics, vol. 49, no. 2, pp.
127–136, 1983.
[16] H. Seung, "How the brain keeps the eyes still," Proceedings of the National Academy of Sciences, vol. 93, no. 23, p. 13339, 1996.
[17] J. Kovačević and A. Chebira, "An introduction to frames," Foundations and Trends in Signal Processing, vol. 2, no. 1, pp. 1–94, 2008.
[18] Y. Mishchenko, T. Hu, J. Spacek, J. Mendenhall, K. M. Harris, and D. B. Chklovskii, "Ultrastructural analysis of hippocampal neuropil from the connectomics perspective," Neuron, vol. 67, no. 6, pp. 1009–1020, 2010.
[19] L. R. Varshney, P. J. Sjöström, and D. B. Chklovskii, "Optimal information storage in noisy synapses under resource constraints," Neuron, vol. 52, no. 3, pp. 409–423, 2006.
[20] B. Cheng, J. Yang, S. Yan, Y. Fu, and T. Huang, "Learning with L1-graph for image analysis," IEEE Transactions on Image Processing, 2010.
[21] E. Elhamifar and R. Vidal, "Sparse subspace clustering," in CVPR, pp. 2790–2797, 2009.
[22] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach, "Proximal methods for sparse hierarchical dictionary learning," in Proc. ICML, 2010.
[23] D. Zoran and Y. Weiss, "The 'tree-dependent components' of natural images are edge filters," Advances in Neural Information Processing Systems, 2009.
[24] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online learning for matrix factorization and sparse coding," Journal of Machine Learning Research, vol. 11, pp. 19–60, 2010.
[25] S. Song, P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii, "Highly nonrandom features of synaptic connectivity in local cortical circuits," PLoS Biology, vol. 3, p. e68, Mar 2005.
[26] G. Yu, G. Sapiro, and S. Mallat, "Image modeling and enhancement via structured sparse model selection," 2010.
[27] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun, "Learning invariant features through topographic filter maps," in Proc. CVPR, IEEE, 2009.
Non-Stochastic Bandit Slate Problems

Satyen Kale, Yahoo! Research, Santa Clara, CA (skale@yahoo-inc.com)
Lev Reyzin, Georgia Inst. of Technology, Atlanta, GA (lreyzin@cc.gatech.edu)
Robert E. Schapire, Princeton University, Princeton, NJ (schapire@cs.princeton.edu)

Abstract

We consider bandit problems, motivated by applications in online advertising and news story selection, in which the learner must repeatedly select a slate, that is, a subset of size s from K possible actions, and then receives rewards for just the selected actions. The goal is to minimize the regret with respect to the total reward of the best slate computed in hindsight. We consider unordered and ordered versions of the problem, and give efficient algorithms which have regret O(√T), where the constant depends on the specific nature of the problem. We also consider versions of the problem where we have access to a number of policies which make recommendations for slates in every round, and give algorithms with O(√T) regret for competing with the best such policy as well. We make use of the technique of relative entropy projections combined with the usual multiplicative weight update algorithm to obtain our algorithms.

1 Introduction

In traditional bandit models, the learner is presented with a set of K actions. On each of T rounds, an adversary (or the world) first chooses rewards for each action, and afterwards the learner decides which action it wants to take. The learner then receives the reward of its chosen action, but does not see the rewards of the other actions. In the standard bandit setting, the learner's goal is to compete with the best fixed arm in hindsight. In the more general "experts setting," each of N experts recommends an arm on each round, and the goal of the learner is to perform as well as the best expert in hindsight.
The bandit setting tackles many problems where a learner's decisions reflect not only how well it performs but also the data it learns from: a good algorithm will balance exploiting actions it already knows to be good and exploring actions for which its estimates are less certain. One such real-world problem appears in computational advertising, where publishers try to present their customers with relevant advertisements. In this setting, the actions correspond to advertisements, and choosing an action means displaying the corresponding ad. The rewards correspond to the payments from the advertiser to the publisher, and these rewards depend on the probability of users clicking on the ads.

Unfortunately, many real-world problems, including the computational advertising problem, do not fit so nicely into the traditional bandit framework. Most of the time, advertisers have the ability to display more than one ad to users, and users can click on more than one of the ads displayed to them. To capture this reality, in this paper we define the slate problem. This setting is similar to the traditional bandit setting, except that here the advertiser selects a slate, or subset, of s actions. In this paper we first consider the unordered slate problem, where the reward to the learning algorithm is the sum of the rewards of the chosen actions in the slate. This setting is applicable when all actions in a slate are treated equally. While this is a realistic assumption in certain settings, we also deal with the case when different positions in a slate have different importance.

* This work was done while Lev Reyzin was at Yahoo! Research, New York. This material is based upon work supported by the National Science Foundation under Grant #0937060 to the Computing Research Association for the Computing Innovation Fellowship program.
† This work was done while R. Schapire was visiting Yahoo! Research, New York.
Going back to our computational advertising example, we can see not all ads are given the same treatment (i.e. an ad displayed higher in a list is more likely to be clicked on). One may plausibly assume that for every ad and every position that it can be shown in, there is a click-through rate associated with the (ad, position) pair, which specifies the probability that a user will click on the ad if it is displayed in that position. This is a very general user model used widely in practice in web search engines. To abstract this, we turn to the ordered slate problem, where for each action and position in the ordering, the adversary specifies a reward for using the action in that position. The reward to the learner then is the sum of the rewards of the (action, position) pairs in the chosen ordered slate.¹ This setting is similar to that of György, Linder, Lugosi and Ottucsák [10] in that the cost of all actions in the chosen slate are revealed, rather than just the total cost of the slate. Finally, we show how to tackle these problems in the experts setting, where instead of competing with the best slate in hindsight, the algorithm competes with the best expert, recommending different slates on different rounds. One key idea appearing in our algorithms is to use a variant of the multiplicative weights expert algorithm for a restricted convex set of distributions. In our case, the restricted set of distributions over actions corresponds to the one defined by the stipulation that the learner choose a slate instead of individual actions. Our variant first finds the distribution generated by multiplicative weights and then chooses the closest distribution in the restricted subset using relative entropy as the distance measure; this is a type of Bregman projection, which has certain nice properties for our analysis. Previous Work. The multi-armed bandit problem, first studied by Lai and Robbins [15], is a classic problem which has had wide application.
In the stochastic setting, where the rewards of the arms are i.i.d., Lai and Robbins [15] and Auer, Cesa-Bianchi and Fischer [2] gave regret bounds of O(K ln(T)). In the non-stochastic setting, Auer et al. [3] gave regret bounds of O(√(K ln(K) T)).² This non-stochastic setting of the multi-armed bandit problem is exactly the special case of our problem in which the slate size is 1, and hence our results generalize those of Auer et al., which can be recovered by setting s = 1. Our problem is a special case of the more general online linear optimization with bandit feedback problem [1, 4, 5, 11]. Specializing the best result in this series to our setting, we get worse regret bounds of O(√(T log(T))). The constant in the O(·) notation is also worse than in our bounds. For a more specific comparison of regret bounds, see Section 2. Our algorithms, being specialized for the slates problem, are simpler to implement as well, avoiding the sophisticated self-concordant barrier techniques of [1]. This work also builds upon the algorithm in [18] to learn subsets of experts and the algorithm in [12] for learning permutations, both in the full-information setting. Our work is also a special case of the Combinatorial Bandits setting of Cesa-Bianchi and Lugosi [9]; however, our algorithms obtain better regret bounds and are computationally more efficient. Our multiplicative weights algorithm also appears under the name Component Hedge in the independent work of Koolen, Warmuth and Kivinen [14]. Furthermore, the expertless, unordered slate problem is studied by Uchiya, Nakamura and Kudo [17], who obtain the same asymptotic bounds as appear in this paper, though using different techniques.

2 Statement of the problem and main results

Notation. For vectors x, y ∈ R^K, x · y denotes their inner product, viz. Σ_i x_i y_i. For matrices X, Y ∈ R^(s×K), X • Y denotes their inner product considering them as vectors in R^(sK), viz.
Σ_{ij} X_{ij} Y_{ij}. For a set S of actions, let 1_S be the indicator vector for that set. For two distributions p and q, let RE(p ∥ q) denote their relative entropy, i.e. RE(p ∥ q) = Σ_i p_i ln(p_i / q_i).

Problem Statement. In a sequence of rounds, for t = 1, 2, . . . , T, we are required to choose a slate from a base set A of K actions. An unordered slate is a subset S ⊆ A of s out of the K actions. An ordered slate is a slate together with an ordering over its s actions; thus, it is a one-to-one mapping π : {1, 2, . . . , s} → A. Prior to the selection of the slate, the adversary chooses losses³ for the actions in the slates. Once the slate is chosen, the cost of only the actions in the chosen slate is revealed. This cost is defined in the following manner:

• Unordered slate. The adversary chooses a loss vector ℓ(t) ∈ R^K which specifies a loss ℓ_j(t) ∈ [−1, 1] for every action j ∈ A. For a chosen slate S, only the coordinates ℓ_j(t) for j ∈ S are revealed, and the cost incurred for choosing S is Σ_{j∈S} ℓ_j(t).

• Ordered slate. The adversary chooses a loss matrix L(t) ∈ R^(s×K) which specifies a loss L_{ij}(t) ∈ [−1, 1] for every action j ∈ A and every position i, 1 ≤ i ≤ s, in the ordering on the slate. For a chosen slate π, the entries L_{i,π(i)}(t) for every position i are revealed, and the cost incurred for choosing π is Σ_{i=1}^{s} L_{i,π(i)}(t).

¹ The unordered slate problem is a special case of the ordered slate problem in which all positional factors are equal. However, the bound on the regret that we get when we consider the unordered slate problem separately is a factor of Õ(√s) better than when we treat it as a special case of the ordered slate problem.
² The difference in the regret bounds can be attributed to the definition of regret in the stochastic and non-stochastic settings. In the stochastic setting, we compare the algorithm's expected reward to that of the arm with the largest expected reward, with the expectation taken over the reward distribution.
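The two cost models can be illustrated with toy values (the dimensions, losses, and chosen slates below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
K, s = 6, 3                  # 6 actions, slates of size 3

# Unordered slate: loss vector l(t) in [-1, 1]^K; cost is the sum of the
# losses of the chosen actions.
loss = rng.uniform(-1, 1, size=K)
S = [0, 2, 5]                # a chosen slate (subset of actions)
cost_unordered = loss[S].sum()

# Ordered slate: loss matrix L(t) in [-1, 1]^{s x K}; the slate is a
# one-to-one map pi from positions {0..s-1} to actions, and the cost is
# sum_i L[i, pi(i)].
L = rng.uniform(-1, 1, size=(s, K))
pi = [4, 0, 2]               # action displayed in each of the s positions
cost_ordered = sum(L[i, pi[i]] for i in range(s))

print(cost_unordered, cost_ordered)
```

Since each per-action loss lies in [−1, 1], both costs are bounded in absolute value by s, which is where the factor of s in the regret bounds originates.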
In the unordered slate problem, if slate S(t) is chosen in round t, for t = 1, 2, . . . , T, then the regret of the algorithm is defined to be

Regret_T = Σ_{t=1}^{T} Σ_{j∈S(t)} ℓ_j(t) − min_S Σ_{t=1}^{T} Σ_{j∈S} ℓ_j(t),

where the subscript S ranges over all slates. The regret for the ordered slate problem is defined analogously. Our goal is to design a randomized algorithm for online slate selection such that E[Regret_T] = o(T), where the expectation is taken over the internal randomization of the algorithm.

Competing with policies. Frequently in applications we have access to N policies, which are algorithms that recommend slates to use in every round. These policies might leverage extra information that we have about the losses in the next round. It is therefore beneficial to devise algorithms that have low regret with respect to the best policy in the pool in hindsight, where regret is defined as:

Regret_T = Σ_{t=1}^{T} Σ_{j∈S(t)} ℓ_j(t) − min_ρ Σ_{t=1}^{T} Σ_{j∈S_ρ(t)} ℓ_j(t).

Here, ρ ranges over all policies, S_ρ(t) is the recommendation of policy ρ at time t, and S(t) is the algorithm's chosen slate. The regret is defined analogously for ordered slates. More generally, we may allow policies to recommend distributions over slates, and our goal is to minimize the expected regret with respect to the best policy in hindsight, where the expectation is taken over the distribution recommended by the policy as well as the internal randomization of the algorithm.

Our results. We are now able to formally state our main results:

Theorem 2.1. There are efficient (running in poly(s, K) time in the no-policies case, and in poly(s, K, N) time with N policies) randomized algorithms achieving the following regret bounds:

                 Unordered slates                   Ordered slates
No policies      4√(sK ln(K/s) T)   (Sec. 3.2)      4s√(K ln(K) T)   (Sec. 3.3)
N policies       4√(sK ln(N) T)     (Sec. 4.1)      4s√(K ln(N) T)   (Sec.
4.2)

To compare, the best bounds obtained for the no-policies case using the more general algorithms of [1] and [9] are O(√(s³K ln(K/s) T)) for the unordered slates problem, and O(s²√(K ln(K) T)) for the ordered slates problem. It is also possible, in the no-policies setting, to devise algorithms whose regret is bounded by O(√T) with high probability, using the upper confidence bounds technique of [3]. We omit these algorithms from this paper for the sake of brevity.

³ Note that we switch to losses rather than rewards to be consistent with most recent literature on online learning. Since we allow negative losses, we can easily deal with rewards as well.

Algorithm MW(P)
Initialization: an arbitrary probability distribution p(1) ∈ P on the experts, and some η > 0.
For t = 1, 2, . . . , T:
1. Choose distribution p(t) over experts, and observe the cost vector ℓ(t).
2. Compute the probability vector p̂(t + 1) using the following multiplicative update rule: for every expert i,
   p̂_i(t + 1) = p_i(t) exp(−η ℓ_i(t)) / Z(t),    (1)
   where Z(t) = Σ_i p_i(t) exp(−η ℓ_i(t)) is the normalization factor.
3. Set p(t + 1) to be the projection of p̂(t + 1) onto the set P using RE as a distance function, i.e. p(t + 1) = argmin_{p∈P} RE(p ∥ p̂(t + 1)).

Figure 1: The Multiplicative Weights Algorithm with Restricted Distributions

3 Algorithms for the slate problems with no policies

3.1 Main algorithmic ideas

Our starting point is the Hedge algorithm for learning online with expert advice. In this setting, on each round t, the learner chooses a probability distribution p(t) over experts, each of which then suffers a (fully observable) loss represented by the vector ℓ(t). The learner's loss is then p(t) · ℓ(t). The main idea of our approach is to apply Hedge (and ideas from bandit variants of it, especially Exp3 [3]) by associating the probability distributions that it selects with mixtures of (ordered or unordered) slates, and thus with the randomized choice of a slate.
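One round of the MW(P) update of Figure 1 can be sketched as follows; the projection is passed in as a callback, and with P equal to the full simplex (identity projection) the update reduces to plain Hedge. The loss model and parameter values below are illustrative assumptions:

```python
import numpy as np

def mw_update(p, loss, eta, project=lambda q: q):
    """One round of MW(P) (Figure 1): exponential reweighting of the current
    distribution, then projection back onto the restricted set P."""
    q = p * np.exp(-eta * loss)
    q /= q.sum()           # multiplicative update, normalized (equation (1))
    return project(q)      # RE projection onto P (identity when P = simplex)

# With P = the full simplex, MW(P) is plain Hedge: weight concentrates on
# the expert with the smallest cumulative loss.
rng = np.random.default_rng(2)
K, T, eta = 4, 200, 0.1
p = np.ones(K) / K
for _ in range(T):
    loss = rng.uniform(0, 1, size=K)
    loss[0] *= 0.2         # expert 0 is best on average
    p = mw_update(p, loss, eta)

print(p)                   # most of the mass ends up on expert 0
```

For the slate problems, the `project` argument becomes the relative entropy projection onto the restricted polytope P described in the next subsections.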
However, this requires that the selected probability distributions have a particular form, which we describe shortly. We therefore need a special variant of Hedge which uses only distributions p(t) from some fixed convex subset P of the simplex of all distributions. The goal then is to minimize regret relative to an arbitrary distribution p ∈ P. Such a version of Hedge is given in Figure 1, with a statement of its performance below. This algorithm is implicit in the work of [13, 18].

Theorem 3.1. Assume that η > 0 is chosen so that η ℓ_i(t) ≥ −1 for all t and i. Then algorithm MW(P) generates distributions p(1), . . . , p(T) ∈ P such that for any p ∈ P,

Σ_{t=1}^{T} ℓ(t) · p(t) − ℓ(t) · p ≤ η Σ_{t=1}^{T} (ℓ(t))² · p(t) + RE(p ∥ p(1)) / η.

Here, (ℓ(t))² is the vector that is the coordinate-wise square of ℓ(t).

3.2 Unordered slates with no policies

To apply the approach described above, we need a way to compactly represent the set of distributions over slates. We do this by embedding slates as points in some high-dimensional Euclidean space, and then giving a compact representation of the convex hull of the embedded points. Specifically, we represent an unordered slate S by its indicator vector 1_S ∈ R^K, which is 1 in all coordinates j ∈ S, and 0 in all others. The convex hull X of all such 1_S vectors can be succinctly described [18] as the convex polytope defined by the linear constraints Σ_{j=1}^{K} x_j = s and x_j ≥ 0 for j = 1, . . . , K. An algorithm is given in [18] (Algorithm 2) to decompose any vector x ∈ X into a convex combination of at most K indicator vectors 1_S. We embed the convex hull X of the 1_S vectors in the simplex of distributions over the K actions simply by scaling all coordinates down by s so that they sum to 1. Let P be this scaled-down version of X. Our algorithm is given in Figure 2. Step 3 of MW(P) requires us to compute argmin_{p∈P} RE(p ∥ p̂(t + 1)), which can be solved by convex programming.
A linear-time algorithm is given in [13], and a simple algorithm (from [18]) is the following: find the least index k such that clipping the largest k coordinates of p to 1/s and rescaling the remaining coordinates to sum to 1 − k/s ensures that all coordinates are at most 1/s, and output the probability vector thus obtained. This can be implemented by sorting the coordinates, and so takes O(K log(K)) time.

Bandit Algorithm for Unordered Slates
Initialization: Start an instance of MW(P) with the uniform initial distribution p(1) = (1/K) 1. Set η = √((1 − γ) s ln(K/s) / (KT)) and γ = √((K/s) ln(K/s) / T).
For t = 1, 2, . . . , T:
1. Obtain the distribution p(t) from MW(P).
2. Set p′(t) = (1 − γ) p(t) + (γ/K) 1_A.
3. Note that p′(t) ∈ P. Decompose s p′(t) as a convex combination of slate vectors 1_S corresponding to slates S, i.e. s p′(t) = Σ_S q_S 1_S, where q_S > 0 and Σ_S q_S = 1.
4. Choose a slate S to display with probability q_S, and obtain the loss ℓ_j(t) for all j ∈ S.
5. Set ℓ̂_j(t) = ℓ_j(t) / (s p′_j(t)) if j ∈ S, and 0 otherwise.
6. Send ℓ̂(t) as the loss vector to MW(P).

Figure 2: The Bandit Algorithm with Unordered Slates

We now prove the regret bound of Theorem 2.1. We use the notation E_t[X] to denote the expectation of a random variable X conditioned on all the randomness chosen by the algorithm up to round t, assuming that X is measurable with respect to this randomness. We note the following facts: E_t[ℓ̂_j(t)] = Σ_{S∋j} q_S · ℓ_j(t) / (s p′_j(t)) = ℓ_j(t), since p′_j(t) = Σ_{S∋j} q_S · (1/s). This immediately implies that E_t[ℓ̂(t) · p(t)] = ℓ(t) · p(t) and E[ℓ̂(t) · p] = ℓ(t) · p, for any fixed distribution p. Note that if we decompose a distribution p ∈ P as a convex combination of (1/s) 1_S vectors and randomly choose a slate S according to its weight in the combination, then the expected loss, averaged over the s actions chosen, is ℓ(t) · p.
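The clip-and-rescale projection described at the start of this subsection can be sketched directly (a quadratic-time version for clarity; sorting once, as the text notes, gives the O(K log K) variant):

```python
import numpy as np

def project_capped_simplex(p, s):
    """RE projection of a distribution p onto P = {q : sum(q) = 1, q_i <= 1/s}:
    clip the k largest coordinates to 1/s for the least k that works, and
    rescale the remaining coordinates to sum to 1 - k/s."""
    K = len(p)
    order = np.argsort(-p)                 # coordinate indices, largest first
    for k in range(K):
        q = np.asarray(p, dtype=float).copy()
        clipped, rest = order[:k], order[k:]
        q[clipped] = 1.0 / s
        rest_sum = q[rest].sum()
        if rest_sum > 0:
            q[rest] *= (1.0 - k / s) / rest_sum
        if np.all(q <= 1.0 / s + 1e-12):   # least k with all coordinates capped
            return q
    return q

p = np.array([0.6, 0.2, 0.1, 0.05, 0.05])
q = project_capped_simplex(p, s=2)         # cap every coordinate at 1/2
print(q)                                   # [0.5, 0.25, 0.125, 0.0625, 0.0625]
```

Here the largest coordinate 0.6 is clipped to 1/2 and the remaining mass is rescaled to sum to 1/2, exactly the k = 1 case of the procedure.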
We can bound the difference between the expected loss (averaged over the s actions) suffered by the algorithm in round t, ℓ(t)·p′(t), and ℓ(t)·p(t) as follows:
$$\ell(t)\cdot p'(t) - \ell(t)\cdot p(t) = \sum_j \ell_j(t)\big(p'_j(t) - p_j(t)\big) \le \sum_j \ell_j(t)\cdot\frac{\gamma}{K} \le \gamma.$$
Using this bound and Theorem 3.1, if $S^\star = \arg\min_S \sum_t \ell(t)\cdot\frac{1}{s}\mathbf{1}_{S}$, we have
$$\frac{\mathbb{E}[\mathrm{Regret}_T]}{s} = \sum_t \ell(t)\cdot p'(t) - \ell(t)\cdot\tfrac{1}{s}\mathbf{1}_{S^\star} \le \eta\sum_t \mathbb{E}\big[(\hat\ell(t))^2\cdot p(t)\big] + \frac{\mathrm{RE}(\tfrac{1}{s}\mathbf{1}_{S^\star}\,\|\,p(1))}{\eta} + \gamma T.$$
We note that the leading factor of 1/s on the expected regret is due to the averaging over the s positions. We now bound the terms on the right-hand side. First, we have
$$\mathbb{E}_t\big[(\hat\ell(t))^2\cdot p(t)\big] = \sum_S q_S\Big(\sum_{j\in S}\frac{(\ell_j(t))^2\, p_j(t)}{(s p'_j(t))^2}\Big) = \sum_j \frac{(\ell_j(t))^2\, p_j(t)}{(s p'_j(t))^2}\cdot\sum_{S\ni j} q_S = \sum_j \frac{(\ell_j(t))^2\, p_j(t)}{(s p'_j(t))^2}\cdot s p'_j(t) \le \frac{K}{s(1-\gamma)},$$
because $\frac{p_j(t)}{p'_j(t)} \le \frac{1}{1-\gamma}$ and all $|\ell_j(t)| \le 1$. Moreover, $\mathrm{RE}(\tfrac{1}{s}\mathbf{1}_{S^\star}\,\|\,p(1)) = \ln(K/s)$, since $\tfrac{1}{s}\mathbf{1}_{S^\star}$ places mass 1/s on each of s coordinates and p(1) is uniform over K. Plugging these bounds in,
$$\mathbb{E}[\mathrm{Regret}_T] \le \frac{\eta K T}{1-\gamma} + \frac{s\ln(K/s)}{\eta} + s\gamma T \le 4\sqrt{sK\ln(K/s)\,T},$$
by setting $\eta = \sqrt{\frac{(1-\gamma)s\ln(K/s)}{KT}}$ and $\gamma = \sqrt{\frac{(K/s)\ln(K/s)}{T}}$. It remains to verify that ηℓ̂_j(t) ≥ −1 for all j and t. We know that ℓ̂_j(t) ≥ −K/(sγ), because p′_j(t) ≥ γ/K, so all we need to check is that $\sqrt{\frac{(1-\gamma)s\ln(K/s)}{KT}} \le \frac{s\gamma}{K}$, which is true for our choice of γ.

Initialization: Start an instance of MW(P) with the uniform initial distribution p(1) = (1/(sK))1. Set η = √((1−γ) ln(K)/(KT)) and γ = √(K ln(K)/T).
For t = 1, 2, ..., T:
1. Obtain the distribution p(t) from MW(P).
2. Set p′(t) = (1 − γ)p(t) + (γ/(sK))1.
3. Note that p′(t) ∈ P, and so sp′(t) ∈ M. Decompose sp′(t) as a convex combination of M_π matrices corresponding to ordered slates π as sp′(t) = Σ_π q_π M_π, where q_π > 0 and Σ_π q_π = 1.
4. Choose a slate π to display with probability q_π, and obtain the loss L_{i,π(i)}(t) for all 1 ≤ i ≤ s.
5. Construct the loss matrix L̂(t) as follows: for 1 ≤ i ≤ s, set L̂_{i,π(i)}(t) = L_{i,π(i)}(t)/(s p′_{i,π(i)}(t)); all other entries are 0.
6. Send L̂(t) as the loss vector to MW(P).
Figure 3: The Bandit Algorithm for Ordered Slates

3.3 Ordered slates with no policies

A similar approach can be used for ordered slates.
Here, we represent an ordered slate π by the subpermutation matrix M_π ∈ R^{s×K} defined as follows: for i = 1, 2, ..., s, we have (M_π)_{i,π(i)} = 1, and all other entries are 0. In [7, 16] it is shown that the convex hull M of all the M_π matrices is the convex polytope defined by the linear constraints $\sum_{j=1}^{K} M_{ij} = 1$ for i = 1, ..., s; $\sum_{i=1}^{s} M_{ij} \le 1$ for j = 1, ..., K; and $M_{ij} \ge 0$ for all i, j. Clearly, every subpermutation matrix M_π lies in M. To complete the characterization of the convex hull, we can show (details omitted) that any matrix M ∈ M can be efficiently decomposed into a convex combination of at most K² subpermutation matrices. We identify matrices in R^{s×K} with vectors in R^{sK} in the obvious way, and we embed M in the simplex of distributions in R^{sK} simply by scaling all entries down by s so that their sum equals one. Let P be this scaled-down version of M. Our algorithm is given in Figure 3. The projection in step 3 of MW(P) can be computed by solving the convex program. In practice, however, since the relative entropy projection is a Bregman projection, the cyclic projections method of Bregman [6, 8] is likely to be faster. Adapted to the specific problem at hand, this method works as follows (see [8] for details): first, for every column j, initialize a dual variable λ_j = 1. Then alternate between row phases and column phases. In a row phase, iterate over all rows and rescale each to sum to 1/s. The column phase is a little more complicated: for every column j, compute the scaling factor α that would make it sum to 1/s, set α′ = min{λ_j, α}, scale the column by α′, and update λ_j ← λ_j/α′. Repeat these alternating row and column phases until convergence to within the desired tolerance. The regret bound analysis is similar to that of Section 3.2. We have $\mathbb{E}_t[\hat L_{ij}(t)] = \sum_{\pi:\pi(i)=j} q_\pi \cdot \frac{L_{ij}(t)}{s p'_{ij}(t)} = L_{ij}(t)$, and hence E_t[L̂(t) • p(t)] = L(t) • p(t) and E[L̂(t) • p] = L(t) • p.
We can also show that L(t) • p′(t) − L(t) • p(t) ≤ γ. Using this bound and Theorem 3.1, if $\pi^\star = \arg\min_\pi \sum_t L(t)\bullet\frac{1}{s}M_{\pi}$, we have
$$\frac{\mathbb{E}[\mathrm{Regret}_T]}{s} = \sum_t L(t)\bullet p'(t) - L(t)\bullet\tfrac{1}{s}M_{\pi^\star} \le \eta\sum_t \mathbb{E}\big[(\hat L(t))^2\bullet p(t)\big] + \frac{\mathrm{RE}(\tfrac{1}{s}M_{\pi^\star}\,\|\,p(1))}{\eta} + \gamma T.$$
We now bound the terms on the right-hand side. First, we have
$$\mathbb{E}_t\big[(\hat L(t))^2\bullet p(t)\big] = \sum_\pi q_\pi\sum_{i=1}^{s}\frac{(L_{i,\pi(i)}(t))^2\, p_{i,\pi(i)}(t)}{(s p'_{i,\pi(i)}(t))^2} = \sum_{i=1}^{s}\sum_{j=1}^{K}\frac{(L_{ij}(t))^2\, p_{ij}(t)}{(s p'_{ij}(t))^2}\cdot\!\!\sum_{\pi:\pi(i)=j}\!\! q_\pi = \sum_{i=1}^{s}\sum_{j=1}^{K}\frac{(L_{ij}(t))^2\, p_{ij}(t)}{(s p'_{ij}(t))^2}\cdot s p'_{ij}(t) \le \frac{K}{1-\gamma},$$
because $\frac{p_{ij}(t)}{p'_{ij}(t)} \le \frac{1}{1-\gamma}$ and all $|L_{ij}(t)| \le 1$. Finally, we have $\mathrm{RE}(\tfrac{1}{s}M_{\pi^\star}\,\|\,p(1)) = \ln(K)$. Plugging these bounds into the bound of Theorem 3.1, we get the stated regret bound from Theorem 2.1:
$$\mathbb{E}[\mathrm{Regret}_T] \le \frac{\eta s K T}{1-\gamma} + \frac{s\ln(K)}{\eta} + s\gamma T \le 4s\sqrt{K\ln(K)\,T},$$
by setting $\eta = \sqrt{\frac{(1-\gamma)\ln(K)}{KT}}$ and $\gamma = \sqrt{\frac{K\ln(K)}{T}}$, which satisfy the necessary technical conditions.

Initialization: Start an instance of MW with no restrictions on the set of distributions over the N policies, with the initial distribution r(1) = (1/N)1. Set η = √((1−γ)s ln(N)/(KT)) and γ = √((K/s) ln(N)/T).
For t = 1, 2, ..., T:
1. Obtain the distribution over policies r(t) from MW, and the recommended distribution over slates φ_ρ(t) ∈ P for each policy ρ.
2. Compute the distribution p(t) = Σ_{ρ=1}^{N} r_ρ(t) φ_ρ(t).
3. Set p′(t) = (1 − γ)p(t) + (γ/K)1.
4. Note that p′(t) ∈ P. Decompose sp′(t) as a convex combination of slate vectors 1_S corresponding to slates S as sp′(t) = Σ_S q_S 1_S, where q_S > 0 and Σ_S q_S = 1.
5. Choose a slate S to display with probability q_S, and obtain the loss ℓ_j(t) for all j ∈ S.
6. Set ℓ̂_j(t) = ℓ_j(t)/(s p′_j(t)) if j ∈ S, and 0 otherwise.
7. Set the loss of policy ρ in the MW algorithm to λ_ρ(t) = ℓ̂(t)·φ_ρ(t).
Figure 4: The Bandit Algorithm for Unordered Slates with Policies
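The cyclic row/column rescaling described in Section 3.3 for approximating the relative-entropy projection onto the scaled subpermutation polytope can be sketched as follows. This is our own rough Python sketch of that procedure, not the authors' code; we use a fixed iteration count rather than a convergence tolerance, and the variable names are ours.

```python
def cyclic_project(M, s, iters=200):
    # Alternating Bregman (column/row) rescaling toward the polytope
    # {M >= 0 : each row sums to 1/s, each column sums to at most 1/s},
    # with one dual variable per column (for the inequality constraints),
    # as described in the text adapted from [6, 8].
    rows, cols = len(M), len(M[0])
    M = [row[:] for row in M]
    lam = [1.0] * cols
    for _ in range(iters):
        # column phase: scale column j by alpha' = min(lam_j, alpha),
        # where alpha would make the column sum exactly 1/s
        for j in range(cols):
            colsum = sum(M[i][j] for i in range(rows))
            alpha = (1.0 / s) / colsum
            ap = min(lam[j], alpha)
            for i in range(rows):
                M[i][j] *= ap
            lam[j] /= ap
        # row phase: rescale every row to sum exactly to 1/s
        for i in range(rows):
            rs = sum(M[i])
            for j in range(cols):
                M[i][j] *= (1.0 / s) / rs
    return M
```

Ending on a row phase leaves the equality constraints satisfied exactly; the column inequalities are met up to the tolerance achieved by the chosen number of sweeps.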
4 Competing with a set of policies

4.1 Unordered Slates with N Policies

In each round, every policy ρ recommends a distribution over slates φ_ρ(t) ∈ P, where P is X scaled down by s as in Section 3.2. Our algorithm is given in Figure 4. Again the regret bound analysis is along the lines of Section 3.2. We have, for any j, $\mathbb{E}_t[\hat\ell_j(t)] = \sum_{S\ni j} q_S\cdot\frac{\ell_j(t)}{s p'_j(t)} = \ell_j(t)$. Thus $\mathbb{E}_t[\lambda_\rho(t)] = \ell(t)\cdot\varphi_\rho(t)$, and hence $\mathbb{E}_t[\lambda(t)\cdot r(t)] = \sum_\rho (\ell(t)\cdot\varphi_\rho(t))\, r_\rho(t) = \ell(t)\cdot p(t)$. We can also show, as before, that ℓ(t)·p′(t) − ℓ(t)·p(t) ≤ γ. Using this bound and Theorem 3.1, if $\rho^\star = \arg\min_\rho \sum_t \ell(t)\cdot\varphi_{\rho}(t)$, we have
$$\frac{\mathbb{E}[\mathrm{Regret}_T]}{s} = \sum_t \ell(t)\cdot p'(t) - \ell(t)\cdot\varphi_{\rho^\star}(t) \le \eta\sum_t \mathbb{E}\big[(\lambda(t))^2\cdot r(t)\big] + \frac{\mathrm{RE}(e_{\rho^\star}\,\|\,r(1))}{\eta} + \gamma T,$$
where $e_{\rho^\star}$ is the distribution (vector) concentrated entirely on policy ρ⋆. We now bound the terms on the right-hand side. First, we have
$$\mathbb{E}_t\big[(\lambda(t))^2\cdot r(t)\big] = \mathbb{E}_t\Big[\sum_\rho \lambda_\rho(t)^2\, r_\rho(t)\Big] = \mathbb{E}_t\Big[\sum_\rho \big(\hat\ell(t)\cdot\varphi_\rho(t)\big)^2 r_\rho(t)\Big] \le \mathbb{E}_t\Big[\sum_\rho \big((\hat\ell(t))^2\cdot\varphi_\rho(t)\big) r_\rho(t)\Big] = \mathbb{E}_t\big[(\hat\ell(t))^2\cdot p(t)\big] \le \frac{K}{s(1-\gamma)}.$$

Initialization: Start an instance of MW with no restrictions on the set of distributions over the N policies, starting with r(1) = (1/N)1. Set η = √((1−γ) ln(N)/(KT)) and γ = √(K ln(N)/T).
For t = 1, 2, ..., T:
1. Obtain the distribution over policies r(t) from MW, and the recommended distribution over ordered slates φ_ρ(t) ∈ P for each policy ρ.
2. Compute the distribution p(t) = Σ_{ρ=1}^{N} r_ρ(t) φ_ρ(t).
3. Set p′(t) = (1 − γ)p(t) + (γ/(sK))1.
4. Note that p′(t) ∈ P, and so sp′(t) ∈ M. Decompose sp′(t) as a convex combination of M_π matrices corresponding to ordered slates π as sp′(t) = Σ_π q_π M_π, where q_π > 0 and Σ_π q_π = 1.
5. Choose a slate π to display with probability q_π, and obtain the loss L_{i,π(i)}(t) for all 1 ≤ i ≤ s.
6. Construct the loss matrix L̂(t) as follows: for 1 ≤ i ≤ s, set L̂_{i,π(i)}(t) = L_{i,π(i)}(t)/(s p′_{i,π(i)}(t)); all other entries are 0.
7. Set the loss of policy ρ in the MW algorithm to λ_ρ(t) = L̂(t) • φ_ρ(t).
Figure 5: The Bandit Algorithm for Ordered Slates with Policies

The first inequality above follows from Jensen's inequality, and the second one is proved exactly as in Section 3.2. Finally, we have RE(e_{ρ⋆} ∥ r(1)) = ln(N). Plugging these bounds into the bound above, we get the stated regret bound from Theorem 2.1:
$$\mathbb{E}[\mathrm{Regret}_T] \le \frac{\eta K T}{1-\gamma} + \frac{s\ln(N)}{\eta} + s\gamma T \le 4\sqrt{sK\ln(N)\,T},$$
by setting $\eta = \sqrt{\frac{(1-\gamma)s\ln(N)}{KT}}$ and $\gamma = \sqrt{\frac{(K/s)\ln(N)}{T}}$, which satisfy the necessary technical conditions.

4.2 Ordered Slates with N Policies

In each round, every policy ρ recommends a distribution over ordered slates φ_ρ(t) ∈ P, where P is M scaled down by s as in Section 3.3. Our algorithm is given in Figure 5. The regret bound analysis follows exactly the lines of that in Section 4.1, with L(t) and L̂(t) playing the roles of ℓ(t) and ℓ̂(t) respectively, using the inequalities from Section 3.3. We omit the details for brevity. We get the stated regret bound from Theorem 2.1: E[Regret_T] ≤ 4s√(K ln(N) T).

5 Conclusions and Future Work

In this paper, we presented efficient algorithms for the unordered and ordered slate problems with regret bounds of O(√T), in the presence and in the absence of policies, employing the technique of Bregman projections onto a convex set representing the convex hull of slate vectors. Possible future work on this problem lies in two directions. The first direction is to handle other user models for the loss matrices, such as models incorporating the following sort of interaction between the chosen actions: if two very similar ads are shown and the user clicks on one, then the user is less likely to click on the other. Our current model essentially assumes no interaction. The second direction is to derive high-probability O(√T) regret bounds for the slate problems in the presence of policies. The techniques of [3] only give such algorithms in the no-policies setting.

References
[1] ABERNETHY, J., HAZAN, E., AND RAKHLIN, A.
Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT (2008), pp. 263–274.
[2] AUER, P., CESA-BIANCHI, N., AND FISCHER, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning 47, 2-3 (2002), 235–256.
[3] AUER, P., CESA-BIANCHI, N., FREUND, Y., AND SCHAPIRE, R. E. The nonstochastic multiarmed bandit problem. SIAM J. Comput. 32, 1 (2002), 48–77.
[4] AWERBUCH, B., AND KLEINBERG, R. Online linear optimization and adaptive routing. J. Comput. Syst. Sci. 74, 1 (2008), 97–114.
[5] BARTLETT, P. L., DANI, V., HAYES, T. P., KAKADE, S., RAKHLIN, A., AND TEWARI, A. High-probability regret bounds for bandit online linear optimization. In COLT (2008), pp. 335–342.
[6] BREGMAN, L. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comp. Mathematics and Mathematical Physics 7 (1967), 200–217.
[7] BRUALDI, R. A., AND LEE, G. M. On the truncated assignment polytope. Linear Algebra and its Applications 19 (1978), 33–62.
[8] CENSOR, Y., AND ZENIOS, S. Parallel Optimization. Oxford University Press, 1997.
[9] CESA-BIANCHI, N., AND LUGOSI, G. Combinatorial bandits. In COLT (2009).
[10] GYÖRGY, A., LINDER, T., LUGOSI, G., AND OTTUCSÁK, G. The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research 8 (2007), 2369–2403.
[11] HAZAN, E., AND KALE, S. Better algorithms for benign bandits. In SODA (2009), pp. 38–47.
[12] HELMBOLD, D. P., AND WARMUTH, M. K. Learning permutations with exponential weights. In COLT (2007), pp. 469–483.
[13] HERBSTER, M., AND WARMUTH, M. K. Tracking the best linear predictor. Journal of Machine Learning Research 1 (2001), 281–309.
[14] KOOLEN, W. M., WARMUTH, M. K., AND KIVINEN, J. Hedging structured concepts. In COLT (2010).
[15] LAI, T., AND ROBBINS, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6 (1985), 4–22.
[16] MENDELSOHN, N.
S., AND DULMAGE, A. L. The convex hull of sub-permutation matrices. Proceedings of the American Mathematical Society 9, 2 (Apr 1958), 253–254.
[17] UCHIYA, T., NAKAMURA, A., AND KUDO, M. Algorithms for adversarial bandit problems with multiple plays. In ALT (2010), pp. 375–389.
[18] WARMUTH, M. K., AND KUZMIN, D. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. In Proc. of NIPS (2006).
Decomposing Isotonic Regression for Efficiently Solving Large Problems

Ronny Luss, Dept. of Statistics and OR, Tel Aviv University, ronnyluss@gmail.com
Saharon Rosset, Dept. of Statistics and OR, Tel Aviv University, saharon@post.tau.ac.il
Moni Shahar, Dept. of Electrical Eng., Tel Aviv University, moni@eng.tau.ac.il

Abstract
A new algorithm for isotonic regression is presented based on recursively partitioning the solution space. We develop efficient methods for each partitioning subproblem through an equivalent representation as a network flow problem, and prove that this sequence of partitions converges to the global solution. These network flow problems can further be decomposed in order to solve very large problems. Success of isotonic regression in prediction and our algorithm's favorable computational properties are demonstrated through simulated examples as large as 2 × 10^5 variables and 10^7 constraints.

1 Introduction

Assume we have a set of n data observations (x_1, y_1), ..., (x_n, y_n), where x ∈ X (usually X = R^p) is a vector of covariates or independent variables, y ∈ R is the response, and we wish to fit a model f̂ : X → R to describe the dependence of y on x, i.e., y ≈ f̂(x). Isotonic regression is a non-parametric modeling approach which only restricts the fitted model to being monotone in all independent variables [1]. Define G as the family of isotonic functions; that is, g ∈ G satisfies x_1 ⪯ x_2 ⇒ g(x_1) ≤ g(x_2), where the partial order ⪯ here will usually be the standard Euclidean one, i.e., x_1 ⪯ x_2 if x_{1j} ≤ x_{2j} ∀j. Given these definitions, isotonic regression solves
$$\hat f = \arg\min_{g\in\mathcal{G}} \|y - g(x)\|^2. \qquad (1)$$
As many authors have noted, the optimal solution to this problem comprises a partitioning of the space X into regions obeying a monotonicity property, with a constant fitted value of f̂ in each region.
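As a concrete illustration of the partial order just defined, the comparable pairs for a small set of covariate vectors can be enumerated, and pairs implied by transitivity dropped, since they add nothing as constraints. This is a minimal Python sketch of our own, for illustration only; the function names are ours.

```python
def dominates(a, b):
    # componentwise partial order: a ⪯ b with a != b
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def isotonic_constraints(X):
    # All comparable pairs (i, j) with X[i] ⪯ X[j], reduced so that no
    # pair is implied by transitivity (the transitive reduction).
    n = len(X)
    pairs = {(i, j) for i in range(n) for j in range(n) if dominates(X[i], X[j])}
    reduced = set(pairs)
    for (i, j) in pairs:
        for k in range(n):
            if (i, k) in pairs and (k, j) in pairs:
                reduced.discard((i, j))
    return sorted(reduced)
```

For instance, for the four corners of the unit square in R^2, the pair (0,0) ⪯ (1,1) is dropped because it follows from (0,0) ⪯ (1,0) ⪯ (1,1).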
It is clear that isotonic regression is a very attractive model for situations where monotonicity is a reasonable assumption, but other common assumptions like linearity or additivity are not. Indeed, this formulation has found useful applications in biology [2], medicine [3], statistics [1] and psychology [4], among others. The practicality of isotonic regression has already been demonstrated in various fields, and in this paper we focus on algorithms for computing isotonic regressions on large problems. An equivalent formulation of L2 isotonic regression seeks an optimal isotonic fit ŷ_i at every point by solving
$$\text{minimize } \sum_{i=1}^{n} (\hat y_i - y_i)^2 \quad \text{subject to } \hat y_i \le \hat y_j \;\; \forall (i, j) \in I \qquad (2)$$
where I denotes a set of isotonic constraints. This paper assumes that I contains no redundant constraints, i.e., (i, j), (j, k) ∈ I ⇒ (i, k) ∉ I. Problem (2) is a quadratic program subject to simple linear constraints and, according to a literature review, appears to have been largely ignored due to computational difficulty on large problems. The worst-case O(n^4) complexity (a large overstatement in practice, as will be shown) has resulted in the results that follow being overlooked [5, 6]. The discussion of isotonic regression originally focused on the case x ∈ R, where ⪯ denotes a complete order [4]. For this case, the well-known pooled adjacent violators algorithm (PAVA) efficiently solves the isotonic regression problem. For the partially ordered case, many different algorithms have been developed over the years, with most early efforts concentrated on generalizations of PAVA [7, 5]. These algorithms typically have no polynomial complexity guarantees and are impractical when data size exceeds a few thousand observations. Problem (1) can also be treated as a separable quadratic program subject to simple linear equality constraints. This was done, for example, in [8], which applies active set methods to solve the problem.
While such algorithms can often be efficient in practice, the algorithm of [8] gives no complexity guarantees. Algorithms related to those described here were applied in [9] to scheduling reorder intervals in production systems; they have complexity O(n^4), and their connection to isotonic regression can be made through [1]. Interior point methods are another tool for solving Problem (1), with time complexity guarantees of O(n^3) when the number of constraints is on the same order as the number of variables (see [10]). However, the excessive memory requirements of interior point methods, which must solve large systems of linear equations, typically make them impractical for large data sizes. Recently, [6] and [11] gave an O(n^2) approximate generalized PAVA algorithm; however, its solution quality can only be demonstrated via experimentation. An even better complexity of O(n log n) can be obtained for the optimal solution when the isotonic constraints take a special structure such as a tree, e.g. [12].

1.1 Contribution

Our novel approach to isotonic regression offers an exact solution of (1) with complexity bounded by O(n^4), but it acts on the order of O(n^3) for practical problems. We demonstrate here that it accommodates problems with tens of thousands of observations, or even more with our decomposition. The main goal of this paper is to make isotonic regression a reasonable computational tool for large data sets, as the assumptions of this framework are very applicable in real-world applications. Our framework solves quadratic programs with 2 × 10^5 variables and more than 10^7 constraints, a problem size not solved anywhere in the previous isotonic regression literature, and with the decomposition detailed below even larger problems can be solved. The paper is organized as follows. Section 2 describes a partitioning algorithm for isotonic regression and proves convergence to the globally optimal solution.
Section 3 explains how the subproblems (creating a single partition) can be solved efficiently and decomposed in order to solve large-scale problems. Section 4 demonstrates that the partitioning algorithm is significantly better in practice than its O(n^4) worst-case complexity. Finally, Section 5 gives numerical results, demonstrating favorable predictive performance on large simulated data sets, and Section 6 concludes with future directions.

Notation
The weight of a set of points A is defined as $y_A = \frac{1}{|A|}\sum_{i\in A} y_i$. A subset U of A is an upper set of A if x ∈ U, y ∈ A, x ≺ y ⇒ y ∈ U. A set B ⊆ A is defined as a block of A if $y_{U\cap B} \le y_B$ for each upper set U of A such that U ∩ B ≠ ∅. A general block A is considered a block of the entire space. For two blocks A and B, we denote A ⪯ B if ∃x ∈ A, y ∈ B such that x ⪯ y and ∄x ∈ A, y ∈ B such that y ⪯ x (i.e., there is at least one comparable pair of points that satisfies the direction of isotonicity). A and B are then said to be isotonic blocks (or to obey isotonicity). A group of nodes X majorizes (minorizes) another group Y if X ⪰ Y (X ⪯ Y). A group X is a majorant (minorant) of X ∪ A, where A = ∪_{i=1}^{k} A_i, if X ⋠ A_i (X ⋡ A_i) ∀i = 1, ..., k.

2 Partitioning Algorithm

We first describe the structure of the classic L2 isotonic regression problem and then detail the partitioning algorithm. The section concludes by proving convergence of the algorithm to the globally optimal isotonic regression solution.

2.1 Structure

Problem (2) is a quadratic program subject to simple linear constraints. The structure of the optimal solution to (2) is well known: observations are divided into k groups, and the fits in each group take the group's mean observation value. This can be seen through the equations given by the following Karush-Kuhn-Tucker (KKT) conditions:
(a) $\hat y_i = y_i - \frac{1}{2}\Big(\sum_{j:(i,j)\in I}\lambda_{ij} - \sum_{j:(j,i)\in I}\lambda_{ji}\Big)$
(b) $\hat y_i \le \hat y_j \;\; \forall (i,j)\in I$
(c) $\lambda_{ij} \ge 0 \;\; \forall (i,j)\in I$
(d) $\lambda_{ij}(\hat y_i - \hat y_j) = 0 \;\; \forall (i,j)\in I$.
This set of conditions exposes the nature of the optimal solution, since condition (d) implies that λ_ij > 0 ⇒ ŷ_i = ŷ_j. Hence λ_ij can be non-zero only within blocks of the isotonic solution that share the same fitted value; for observations in different blocks, λ_ij = 0. Furthermore, the fit within each block is trivially seen to be the average of the observations in the block, i.e., the fits minimize the block's squared loss. Thus, we get the familiar characterization of the isotonic regression problem as one of finding a division into isotonic blocks.

2.2 Partitioning

In order to take advantage of the optimal solution's structure, we propose solving the isotonic regression problem (2) as a sequence of subproblems that divides a group of nodes into two groups at each iteration. An important property of our partitioning approach is that nodes separated at one iteration are never rejoined into the same group in future iterations. This gives a clear bound on the total number of iterations in the worst case. We now describe the partitioning criterion used for each subproblem. Suppose a current block V is optimal, so that ŷ*_i = y_V ∀i ∈ V. Motivated by condition (a) of the KKT conditions, we define the net outflow of a group V as $\sum_{i\in V}(y_i - \hat y_i)$. Finding two groups within V such that the net outflow from the higher group is greater than the net outflow from the lower group should be infeasible, according to the KKT conditions. The partition here looks for two such groups. Denote by C_V the set of all feasible (i.e., isotonic) cuts through the network defined by the nodes in V; a cut is called isotonic if the two blocks created by the cut are isotonic. The optimal cut is determined as the cut that solves the problem
$$\max_{c\in C_V} \;\sum_{i\in V_c^+}(y_i - y_V) - \sum_{i\in V_c^-}(y_i - y_V) \qquad (3)$$
where $V_c^-$ ($V_c^+$) is the group on the lower (upper) side of the edges of cut c.
In terms of isotonic regression, the optimal cut is such that the difference between the sums of the normalized fits (y_i − y_V) over the nodes of the two groups is maximized. If this maximized difference is zero, then the group must be an optimal block. The optimal cut problem (3) can also be written as the binary program
$$\text{maximize } \sum_i x_i(y_i - y_V) \quad \text{subject to } x_i \le x_j \;\forall(i,j)\in I,\;\; x_i \in \{-1,+1\} \;\forall i\in V. \qquad (4)$$
Well-known results from [13] (due to the fact that the constraint matrix is totally unimodular) say that the following relaxation of this binary program has an optimal solution x* on the boundary, and hence the optimal cut can be determined by solving the linear program
$$\text{maximize } z^{T}x \quad \text{subject to } x_i \le x_j \;\forall(i,j)\in I,\;\; -1 \le x_i \le 1 \;\forall i\in V \qquad (5)$$
where z_i = y_i − y_V. This group-wise partitioning operation is the basis for our partitioning algorithm, given explicitly in Algorithm 1. It starts with all observations as one group (i.e., V = {1, ..., n}) and recursively splits each group optimally by solving subproblem (5). At each iteration, a list C of potential optimal cuts for each group generated thus far is maintained, and the cut among them with the highest objective value is performed. The list C is updated with the optimal cuts in both sub-groups generated. Partitioning ends whenever the solution to (5) is trivial (i.e., no split is found because the group is a block). As proven next, this algorithm terminates with the globally optimal (isotonic) solution to the isotonic regression problem (2).

Algorithm 1 Partitioning Algorithm
Require: Observations y_1, ..., y_n and partial order I.
Require: V = {{1, ..., n}}, C = {(0, {1, ..., n}, {})}, W = {}.
1: while V ≠ {} do
2:   Let (val, w−, w+) ∈ C be the potential cut with largest val.
3:   Update V = (V \ (w− ∪ w+)) ∪ {w−, w+}, C = C \ (val, w−, w+).
4:   for all v ∈ {w−, w+} do
5:     Set z_i = y_i − y_v ∀i ∈ v, where y_v is the mean of the observations in v.
6:     Solve LP (5) with input z and get x*.
7:     if x*_1 = . . .
= x*_n (the group is optimally divided) then
8:       Update V = V \ v and W = W ∪ v.
9:     else
10:      Let v− = {i : x*_i = −1}, v+ = {i : x*_i = +1}.
11:      Update C = C ∪ {(z^T x*, v−, v+)}.
12:    end if
13:  end for
14: end while
15: return W, the optimal groups

2.3 Convergence

Theorem 1 next states the main result that allows for a no-regret partitioning algorithm for isotonic regression. This will lead to our convergence result. We assume that the group V is isotonic (i.e., has no holes) and is the union of optimal blocks.

Theorem 1 Assume a group V is a union of blocks from the optimal solution to problem (2). Then a cut made by solving (5) does not cut through any block in the globally optimal solution.

Proof. The following is a brief sketch of the proof idea. Let M be the union of the K optimal blocks in V that get broken by the cut. Define M_1 (M_K) to be a minorant (majorant) block in M. For each M_k define M_k^L (M_k^U) as the groups in M_k below (above) the algorithm's cut. Using the definitions of how the algorithm makes partitions, the following two consequences can be proven: (1) y_{M_1} < y_{M_K} by optimality (i.e., according to the KKT conditions) and isotonicity, and (2) y_{M_1} > y_V and y_{M_K} < y_V. The latter is proven by showing that y_{M_1^U} > y_V, because otherwise the M_1^U block would be on the lower side of the cut, resulting in M_1 being on the lower side of the cut; thus y_{M_1} > y_V, since y_{M_1^L} > y_{M_1^U} by the optimality assumption on block M_1 (with symmetric arguments for M_K). This leads to the contradiction y_V < y_{M_1} < y_{M_K} < y_V, and hence M must be empty.

Since Algorithm 1 starts with V = {1, ..., n}, which is a union of (all) optimal blocks, we can conclude from this theorem that partitions never cut an optimal block. The following corollary is then a direct consequence of repeatedly applying Theorem 1 in Algorithm 1:

Corollary 2 Algorithm 1 converges to the globally optimal solution of (2) with no regret (i.e., without having to rejoin observations that were divided at a previous iteration).
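To make the recursion concrete, the following minimal sketch (our own Python, not the paper's code) specializes Algorithm 1 to a total order, where every isotonic cut is a threshold, so subproblem (5) can be solved by scanning cut positions; by Corollary 2 the order in which groups are processed does not change the final blocks.

```python
def best_cut(y, idx):
    # Solve the cut subproblem (3) on a chain: pick a threshold k maximizing
    # sum_{i >= k}(y_i - mean) - sum_{i < k}(y_i - mean); None means the
    # group is already a block (maximized difference is zero).
    mean = sum(y[i] for i in idx) / len(idx)
    z = [y[i] - mean for i in idx]
    best_val, best_k = 0.0, None
    for k in range(1, len(idx)):
        val = sum(z[k:]) - sum(z[:k])
        if val > best_val + 1e-12:
            best_val, best_k = val, k
    return best_k

def isotonic_blocks(y):
    # Recursive partitioning (Algorithm 1) specialized to a total order.
    blocks, stack = [], [list(range(len(y)))]
    while stack:
        idx = stack.pop()
        k = best_cut(y, idx)
        if k is None:
            blocks.append(idx)
        else:
            stack += [idx[:k], idx[k:]]
    blocks.sort(key=lambda b: b[0])
    return blocks

def isotonic_fit(y):
    # Fitted values: each block takes its mean, as in Section 2.1.
    fits = [0.0] * len(y)
    for b in isotonic_blocks(y):
        m = sum(y[i] for i in b) / len(b)
        for i in b:
            fits[i] = m
    return fits
```

On a chain this reproduces the familiar piecewise-constant, monotone block means (the same solution PAVA would return); for instance, isotonic_fit([1, 3, 2, 4]) gives [1.0, 2.5, 2.5, 4.0].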
3 Efficient solutions of the subproblems

Linear program (5) has a special structure that can be taken advantage of in order to solve larger problems faster. We first show why these problems can be solved faster than typical linear programs, and then give a novel decomposition of the structure that allows problems of extremely large size to be solved efficiently.

3.1 Network flow problems

The dual of Problem (2) is a network flow problem with a quadratic objective. Its network flow constraints are identical to those in (6) below, but the objective is $\frac{1}{4}\sum_{i=1}^{n}(s_i^2 + t_i^2)$, which, to the authors' knowledge, currently still precludes this dual from being efficiently solved with special network algorithms. While this structure does not help solve the quadratic program directly, the network structure allows the linear program for the subproblems to be solved very efficiently. The dual program to (5) is
$$\text{minimize } \sum_{i\in V}(s_i + t_i) \quad \text{subject to } \sum_{j:(i,j)\in I}\lambda_{ij} - \sum_{j:(j,i)\in I}\lambda_{ji} - s_i + t_i = z_i \;\;\forall i\in V, \quad \lambda, s, t \ge 0 \qquad (6)$$
where again z_i = y_i − y_V. Linear program (6) is a network flow problem with |V| + 2 nodes and |I| + 2|V| arcs. Variable s denotes links directed from a source node into each other node, while t denotes links connecting each node to a sink node. The network flow problem here minimizes the total sum of flow over links from the source and into the sink, with the goal of leaving z_i units of flow at each node i ∈ V. Note that this is very similar to the network flow problem solved in [14], where z_i there represents the classification performance at node i. Specialized simplex methods for such network flow problems are typically much faster ([15] documents an average speedup factor of 10 to 100 over standard simplex solvers), due to several reasons, such as performing simpler operations on network data structures rather than maintaining and operating on the simplex tableau (see [16] for an overview of network simplex methods).
3.2 Large-scale decompositions

In addition to having a very efficient method for solving this network flow problem, further enhancements can be made for extremely large problems of similar structure that might otherwise exceed memory. It is already assumed that no redundant arcs exist in I (i.e., (i, j), (j, k) ∈ I ⇒ (i, k) ∉ I). One simple reduction involves eliminating negative (positive) nodes, i.e., nodes with z_i < 0 (z_i ≥ 0), where z_i = y_i − y_V, that are bounded only from above (below). It is trivial to observe that these nodes will equal −1 (+1) in the optimal solution and that eliminating them does not affect solving (5) without them. In practice, however, this trivial reduction has a minimal computational effect on large data sets. These reductions were also discussed in [14]. We next consider a novel reduction for the primal linear program (5). The main idea is that it can be solved through a sequence of smaller linear programs that reduce the total size of the full linear program at each iteration. Consider a minorant group of nodes J ⊆ V and the subset of arcs I_J ⊆ I connecting them. Solving problem (5) on this reduced network with the original input z divides the nodes in J into a lower and an upper group, denoted J_L and J_U. Nodes in J_L are not bounded from above and will be in the lower group of the full problem solved on V. In addition, the same problem solved on the remaining nodes V \ J_L gives the optimal solution for those nodes. This is formalized in Proposition 3.

Proposition 3 Let J ⊆ V be a minorant group of nodes in V. Let w* and x* be optimal solutions to Problem (5) on the reduced set J and the full set V of nodes, respectively. If w*_i = −1, then x*_i = −1 ∀i ∈ J. The optimal solution for the remaining nodes (V \ J) can be found by solving (5) over only those nodes. The same claims hold when J ⊆ V is a majorant group of nodes in V, where instead w*_i = +1 ⇒ x*_i = +1 ∀i ∈ J.

Proof.
Denote by W the set of nodes such that w*_i = −1, and let Ŵ = V \ W. Clearly, the solution to Problem (5) over the nodes in W has all variables equal to −1. Problem (5) can be written in the following form with separable objective:
$$\text{maximize } \sum_{i\in W} z_i x_i + \sum_{i\in V\setminus W} z_i x_i \quad \text{subject to } x_i \le x_j \;\forall(i,j)\in I,\, i,j\in W; \;\; x_i \le x_j \;\forall(i,j)\in I,\, i\in V,\, j\in V\setminus W; \;\; -1 \le x_i \le 1 \;\forall i\in V \qquad (7)$$
Start with an initial solution x_i = 1 ∀i ∈ V. The variables in W can be optimized over first and, by assumption, take the optimal value with all variables equal to −1. Optimization over the variables in Ŵ is not bounded from below by the variables in W, since those variables are all at the lower bound. Hence the optimal solution for the variables in Ŵ is given by optimizing over only these variables. The result for minorant groups follows. The final claim is easily argued in the same way as for the minorant groups.

Given Proposition 3, Algorithm 2, which iteratively solves (5), can be stated. The subtrees are built as follows. First, an upper triangular adjacency matrix C can be constructed to represent I, where C_ij = 1 if x_i ≤ x_j is an isotonic constraint and C_ij = 0 otherwise. A minorant (majorant) subtree with k nodes is then constructed as the upper left (lower right) k × k sub-matrix of C.

Algorithm 2 Iterative algorithm for linear program (5)
Require: Observations y_1, ..., y_n and partial order I.
Require: MAXSIZE of problem to be solved by a general LP solver.
Require: V = {1, ..., n}, L = U = {}.
1: while |V| ≥ MAXSIZE do
2:   ELIMINATE A MINORANT SET OF NODES:
3:   Build a minorant subtree T.
4:   Solve linear program (5) on T and get solution ŷ ∈ {−1, +1}^{|T|}.
5:   L = L ∪ {v ∈ T : ŷ_v = −1}, V = V \ {v ∈ T : ŷ_v = −1}.
6:   ELIMINATE A MAJORANT SET OF NODES:
7:   Build a majorant subtree T.
8:   Solve linear program (5) on T and get solution ŷ ∈ {−1, +1}^{|T|}.
9:   U = U ∪ {v ∈ T : ŷ_v = +1}, V = V \ {v ∈ T : ŷ_v = +1}.
10: end while
11: Solve linear program (5) on V and get solution ŷ ∈ {−1, +1}^{|V|}.
12: L = L ∪ {v ∈ T : ŷ_v = −1}, U = U ∪ {v ∈ T : ŷ_v = +1}.

The computational bottleneck of Algorithm 2 is solving linear program (5), which is done efficiently by solving the dual network flow problem (6). This shows that, if the first network flow problem is too large to solve, it can be solved by a sequence of smaller network flow problems, as illustrated in Figure 1. Lemma 4 below proves that this reduction optimally solves the full problem (5). In the worst case, many network flow problems will be solved until the original full-size network flow problem is solved; however, in practice on large problems, this artifact is never observed. Computational performance of this reduction is demonstrated in Section 5.

Lemma 4 Algorithm 2 optimally solves Problem (5).
Proof. The result follows from repeated application of Proposition 3 over the set of nodes V that has not yet been solved for optimally.

4 Complexity of the partitioning algorithm

Linear program (5) can be solved in O(n^3) time using interior point methods. Given that the algorithm performs at most n iterations, the worst-case complexity of Algorithm 1 is O(n^4). However, the practical complexity of IRP is significantly better than the worst case. Each iteration of LP (5) solves smaller problems. Consider the case of balanced partitioning at each iteration until there are n final blocks. In this case, we can represent the partitioning path as a binary tree with log n levels, and at each level k, LP (5) is solved 2^k times on instances of size n/2^k, which leads to a total complexity of
$$\sum_{k=0}^{\log n} 2^k \Big(\frac{n}{2^k}\Big)^3 = n^3\sum_{k=0}^{\log n}\Big(\frac{1}{4}\Big)^k = n^3\cdot\frac{1 - 0.25^{\log n + 1}}{0.75},$$
up to additional constants.
For n ≥ 10, the summation is approximately 1.33, and hence in this case the partitioning algorithm has complexity O(1.33 n^3) (counting the complexity of interior point methods for partitioning).

[Figure 1 about here: eight panels on 2-dimensional data showing the first 7 iterations and the final iteration 16 of the decomposition.]

Figure 1: Illustration of the LP (5) decomposition. The data here are 2-dimensional with only 1000 nodes in order to keep the picture clear. The first 7 iterations and the final iteration 16 of the decomposition are shown from left to right and top to bottom. The number of remaining nodes (blue circles) to identify as ±1 decreases through the iterations. LP (5) solved on the entire set of nodes in the first picture may be too large for memory. Hence subproblems are solved on the lower left (red dots) and upper right (green dots) of the networks, and some nodes are fixed from the solutions of these subproblems. This is repeated until the number of unidentified nodes in the last iteration is small enough for memory. Note that at each iteration the three groups obey isotonicity.

More generally, let p and 1 − p be the percentages on each split. Table 1 displays the constants c representing the complexity O(c n^3) over varying p and n. As demonstrated, the problem size decreases rapidly and the complexity in practice is O(n^3).

          n=100     n=1000    n=10000
p=0.55    1.35n^3   1.35n^3   1.35n^3
p=0.65    1.46n^3   1.46n^3   1.47n^3
p=0.75    1.77n^3   1.78n^3   1.78n^3
p=0.85    2.56n^3   2.61n^3   2.61n^3
p=0.95    6.41n^3   6.94n^3   7.01n^3

Table 1: Complexity: groups are split with ratio p at each iteration. Complexity in practice is O(n^3).
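The constants in Table 1 follow from the recursion behind the partitioning path: a split with ratio p turns one LP of size n into subproblems of sizes pn and (1 − p)n, so the asymptotic constant satisfies c = 1 + c(p^3 + (1 − p)^3). A quick numerical check (a sketch, not the paper's code):

```python
def split_constant(p):
    """Asymptotic constant c in the O(c * n^3) complexity when every
    partitioning step splits a block with ratio p : (1 - p).
    Solves c = 1 + c * (p**3 + (1 - p)**3) for c."""
    return 1.0 / (1.0 - p**3 - (1.0 - p)**3)

# Balanced splits recover the geometric series sum_k (1/4)^k = 4/3.
print(round(split_constant(0.5), 2))  # the ~1.33 quoted in the text

for p in (0.55, 0.65, 0.75, 0.85, 0.95):
    print(p, round(split_constant(p), 2))
```

The values agree with the large-n column of Table 1 to within rounding.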
5 Numerical experiments

Here we demonstrate that exact isotonic regression is computationally tractable for very large problems, and compare against the time it takes to compute an approximation. We first show the computational performance of isotonic regression on simulated data sets with as many as 2 × 10^5 training points and more than 10^7 constraints. We then show the favorable predictive performance of isotonic regression on large simulated data sets.

5.1 Large-Scale Computations

Figure 2 demonstrates that the partitioning algorithm, with decomposition of the partitioning step, can solve very large isotonic regressions. Three-dimensional data are simulated from U(0, 2) and the responses are created as linear functions plus noise. The size of the training sets varies from 10^4 to 2 × 10^5 points. The left panel shows that the partitioning algorithm finds the globally optimal isotonic regression solution in not much more time than it takes to find an approximation as done in [6] for very large problems. Although the worst-case complexity of our exact algorithm is much worse, the two algorithms scale comparably in practice. Figure 2 (right) shows how the number of partitions (left axis) increases as the number of training points increases. It is not clear why the approximation in [6] produces fewer partitions as the size of the problem grows. More partitions (left axis) require solving more network flow problems; however, as discussed, these problems shrink very quickly along the partitioning path, resulting in the practical complexity seen in the left panel. The bold black line also shows the number of constraints (right axis), which exceeds 10^7.
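For reference, in the total-order special case (a chain) the exact isotonic least-squares fit that these experiments compute in the partial-order setting can be obtained with the classical pool-adjacent-violators algorithm (see [8]); a minimal sketch, independent of the paper's IRP implementation:

```python
def pava(y):
    """Exact isotonic (nondecreasing) least-squares fit for a total order,
    by pooling adjacent violating blocks and replacing them with their
    weighted average."""
    levels, weights, counts = [], [], []
    for v in y:
        levels.append(float(v)); weights.append(1.0); counts.append(1)
        # Merge the last two blocks while monotonicity is violated.
        while len(levels) > 1 and levels[-2] > levels[-1]:
            w = weights[-2] + weights[-1]
            lvl = (levels[-2] * weights[-2] + levels[-1] * weights[-1]) / w
            levels[-2:] = [lvl]; weights[-2:] = [w]
            counts[-2:] = [counts[-2] + counts[-1]]
    fit = []
    for lvl, c in zip(levels, counts):
        fit.extend([lvl] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```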
[Figure 2 about here: two panels, "Time vs # Training Points" and "# Partitions vs # Training Points", comparing IRP and GPAV.]

Figure 2: IRP performance on large-scale simulations. Data x ∈ R^3 with x_i ∼ U(0, 2). Responses y are linear functions plus noise. The number of training points varies from 10^4 to 2 × 10^5. Results shown are averages over 5 simulations, with dotted lines at ± one standard deviation. Time (seconds) versus number of training points is on the left. On the right, the number of partitions is shown using the left axis, and the bold black line shows the average number of constraints per test using the right axis.

5.2 Predictive Performance

Here we show that isotonic regression is a useful tool when the data fit the monotonic framework. Data are simulated as above, and responses are constructed as y_i = ∏_j x_{ij} + N(0, 0.5^2), where the dimension varies from 2 to 6. The training set size varies from 500 to 5000 to 50000 points, and the test size is fixed at 5000. Results are averaged over 10 trials and 95% confidence intervals are given. A comparison is made between isotonic regression and linear least squares regression. With only 500 training points, the model is poorly fitted and a simple linear regression performs much better. 5000 training points is sufficient to fit the model well with up to 4 dimensions, after which linear regression outperforms isotonic regression, and 50000 training points fits the model well with up to 5 dimensions. Two trends are observed: larger training sets allow better models to be fit, which improves performance, while higher dimensions increase overfitting, which in turn decreases performance.
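The simulation protocol above (product responses with Gaussian noise) can be reproduced along the following lines; a sketch, with the dimension and sample size as free parameters:

```python
import random

def simulate(n, dim, seed=0):
    """Draw x uniformly from [0, 2]^dim and set y = prod_j x_j + N(0, 0.5^2),
    as in the predictive-performance simulations."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = [rng.uniform(0.0, 2.0) for _ in range(dim)]
        prod = 1.0
        for v in x:
            prod *= v
        xs.append(x)
        ys.append(prod + rng.gauss(0.0, 0.5))
    return xs, ys

xs, ys = simulate(n=20000, dim=3)
mean_y = sum(ys) / len(ys)
print(mean_y)  # close to 1, since E[x_j] = 1 implies E[prod_j x_j] = 1
```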
Dim   IRP MSE (n=500)   LS MSE (n=500)   IRP MSE (n=5000)   LS MSE (n=5000)   IRP MSE (n=50000)   LS MSE (n=50000)
2     0.69 ± 0.01       0.37 ± 0.00      0.27 ± 0.00        0.36 ± 0.00       0.25 ± 0.00         0.36 ± 0.00
3     0.76 ± 0.03       0.65 ± 0.01      0.31 ± 0.00        0.61 ± 0.01       0.26 ± 0.00         0.62 ± 0.00
4     1.45 ± 0.08       1.08 ± 0.01      0.61 ± 0.02        1.08 ± 0.02       0.34 ± 0.01         1.06 ± 0.03
5     4.61 ± 0.65       1.76 ± 0.02      2.61 ± 0.16        1.88 ± 0.04       0.93 ± 0.04         1.86 ± 0.05
6     12.89 ± 1.30      3.06 ± 0.04      8.41 ± 1.36        2.84 ± 0.07       3.37 ± 0.06         2.83 ± 0.12

Table 2: Statistics for the simulation generated with y_i = ∏_j x_{ij} + N(0, 0.5^2). A comparison between the results of IRP and least squares linear regression is shown. Bold indicates statistical significance at 95% confidence.

6 Conclusion

This paper demonstrates that isotonic regression can be used to solve extremely large problems. Fast approximations are useful; however, as shown, globally optimal solutions are also computationally tractable. Indeed, isotonic regression as done here runs with a complexity of O(n^3) in practice. As also shown, isotonic regression performs well at moderate dimensions but suffers from overfitting as the dimension of the data increases. Extensions of this algorithm will analyze the path of partitions in order to control overfitting by stopping the algorithm early. The statistical complexity of the models generated by partitioning will be examined. Furthermore, similar results will be developed for isotonic regression with other loss functions.

References

[1] R.E. Barlow and H.D. Brunk. The isotonic regression problem and its dual. Journal of the American Statistical Association, 67(337):140–147, 1972.
[2] G. Obozinski, G. Lanckriet, C. Grant, M.I. Jordan, and W.S. Noble. Consistent probabilistic outputs for protein function prediction. Genome Biology, 9:247–254, 2008. Open Access.
[3] M.J. Schell and B. Singh. The reduced monotonic regression method. Journal of the American Statistical Association, 92(437):128–135, 1997.
[4] J.B. Kruskal.
Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1), 1964.
[5] H. Block, S. Qian, and A. Sampson. Structure algorithms for partially ordered isotonic regression. Journal of Computational and Graphical Statistics, 3(3):285–300, 1994.
[6] O. Burdakov, O. Sysoev, A. Grimvall, and M. Hussian. An O(n^2) algorithm for isotonic regression. In: G. Di Pillo and M. Roma (Eds.), Large-Scale Nonlinear Optimization, Series: Nonconvex Optimization and Its Applications, 83:25–83, 2006.
[7] C.-I. C. Lee. The min-max algorithm and isotonic regression. The Annals of Statistics, 11(2):467–477, 1983.
[8] J. de Leeuw, K. Hornik, and P. Mair. Isotone optimization in R: Pool-adjacent-violators algorithm (PAVA) and active set methods. UC Los Angeles: Department of Statistics, UCLA, 2009. Retrieved from: http://cran.r-project.org/web/packages/isotone/vignettes/isotone.pdf.
[9] W.L. Maxwell and J.A. Muckstadt. Establishing consistent and realistic reorder intervals in production-distribution systems. Operations Research, 33(6):1316–1341, 1985.
[10] R.D.C. Monteiro and I. Adler. Interior path following primal-dual algorithms. Part II: Convex quadratic programming. Mathematical Programming, 44:43–66, 1989.
[11] O. Burdakov, O. Sysoev, and A. Grimvall. Generalized PAV algorithm with block refinement for partially ordered monotonic regression. In: A. Feelders and R. Potharst (Eds.), Proc. of the Workshop on Learning Monotone Models from Data at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pages 23–37, 2009.
[12] P.M. Pardalos and G. Xue. Algorithms for a class of isotonic regression problems. Algorithmica, 23:211–222, 1999.
[13] K.G. Murty. Linear Programming. John Wiley & Sons, Inc., 1983.
[14] R. Chandrasekaran, Y.U. Ryu, V.S. Jacob, and S. Hong. Isotonic separation. INFORMS Journal on Computing, 17(4):462–474, 2005.
[15] MOSEK ApS. The MOSEK optimization tools manual.
Version 6.0, revision 61, 2010. Software available at http://www.mosek.com.
[16] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Inc., 1993.
Large Margin Learning of Upstream Scene Understanding Models

Jun Zhu† Li-Jia Li‡ Fei-Fei Li‡ Eric P. Xing†
†{junzhu,epxing}@cs.cmu.edu ‡{lijiali,feifeili}@cs.stanford.edu
†School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
‡Department of Computer Science, Stanford University, Stanford, CA 94305

Abstract

Upstream supervised topic models have been widely used for complicated scene understanding. However, existing maximum likelihood estimation (MLE) schemes can make the prediction model learning independent of latent topic discovery and result in an imbalanced prediction rule for scene classification. This paper presents a joint max-margin and max-likelihood learning method for upstream scene understanding models, in which latent topic discovery and prediction model estimation are closely coupled and well-balanced. The optimization problem is efficiently solved with a variational EM procedure, which iteratively solves an online loss-augmented SVM. We demonstrate the advantages of the large-margin approach on both an 8-category sports dataset and the 67-class MIT indoor scene dataset for scene categorization.

1 Introduction

Probabilistic topic models like latent Dirichlet allocation (LDA) [5] have recently been applied to a number of computer vision tasks such as object annotation and scene classification, due to their ability to capture latent semantic compositions of natural images [22, 23, 9, 13]. One of the advocated advantages of such models is that they do not require "supervision" during training, which is arguably preferred over supervised learning that would necessitate extra cost.
But with the increasing availability of free on-line information such as image tags, user ratings, etc., various forms of "side information" that can potentially offer "free" supervision have led to a need for new models and training schemes that can make effective use of such information to achieve better results, such as more discriminative topic representations of image contents and more accurate image classifiers. The standard unsupervised LDA ignores this commonly available supervision information, and can thus discover a sub-optimal topic representation for prediction tasks. Extensions to supervised topic models, which can exploit side information to discover predictive topic representations, have been proposed, such as sLDA [4, 25] and MedLDA [27]. A common characteristic of these models is that they are downstream; that is, the supervised response variables are generated from topic assignment variables. Another type of supervised topic model is the so-called upstream model, in which the response variables directly or indirectly generate the latent topic variables. In contrast to downstream supervised topic models (dSTM), which are mainly designed by machine learning researchers, upstream supervised topic models (uSTM) are well motivated by human vision and psychology research [18, 10] and have been widely used for scene understanding tasks. For example, in the recently developed scene understanding models [23, 13, 14, 8], complex scene images are modeled as a hierarchy of semantic concepts in which the topmost level corresponds to a scene, which can be represented as a set of latent objects likely to be found in a given scene. To learn an upstream scene model, maximum likelihood estimation (MLE) is the most common choice. However, MLE can make the prediction model estimation independent of latent topic discovery and result in an imbalanced prediction rule for scene classification, as we explain in Section 3.
In this paper, our goal is to address the weakness of MLE for learning upstream supervised topic models. Our approach is based on the max-margin principle for supervised learning, which has shown great promise in many machine learning tasks, such as classification [21] and structured output prediction [24]. For dSTM, max-margin training has been developed in MedLDA [27], which has achieved better prediction performance than MLE. In such downstream models, latent topic assignments are sufficient statistics for the prediction model, and it is easy to define the max-margin constraints based on existing max-margin methods (e.g., SVM). However, for upstream supervised topic models, the discriminant function for prediction involves an intractable computation of posterior distributions, which makes max-margin training more delicate. Specifically, we present a joint max-margin and max-likelihood estimation method for learning upstream scene understanding models. By using a variational approximation to the posterior distribution of supervised variables (e.g., scene categories), our max-margin learning approach iterates between posterior probabilistic inference and max-margin parameter learning. The parameter learning solves an online loss-augmented SVM, which closely couples prediction model estimation and latent topic discovery, and this close interplay results in a well-balanced prediction rule for scene categorization. Finally, we demonstrate the advantages of our max-margin approach on both the 8-category sports [13] and the 67-class MIT indoor scene [20] datasets. Empirical results show that max-margin learning can significantly improve the scene classification accuracy. The paper is structured as follows. Sec. 2 presents a generic scene understanding model we will work on. Sec. 3 discusses the weakness of MLE in learning upstream models. Sec. 4 presents the max-margin learning approach. Sec. 5 presents empirical results and Sec. 6 concludes.
2 Joint Scene and Object Model: a Generic Running Example In this section, we present a generic joint scene categorization and object annotation model, which will be used to demonstrate the large margin learning of upstream scene understanding models. 2.1 Image Representation How should we represent a scene image? Friedman [10] pointed out that object recognition is critical in the recognition of a scene. While individual objects contribute to the recognition of visual scenes, human vision researchers Navon [18] and Biederman [2] also showed that people perform rapid global scene analysis before conducting more detailed local object analysis when recognizing scene images. To obtain a generic model, we represent a scene by using its global scene features and objects within it. We first segment an image I into a set of local regions {r1, · · · , rN}. Each region is represented by three region features R (i.e., color, location and texture) and a set of image patches X. These region features are represented as visual codewords. To describe detailed local information of objects, we partition each region into patches. For each patch, we extract the SIFT [16] features, which are insensitive to view-point and illumination changes. To model the global scene representation, we extract a set of global features G [19]. In our dataset, we represent an image as a tuple (r, x, g), where r denotes an instance of R, and likewise for x and g. 2.2 The Joint Scene and Object Model The model is shown in Fig. 1 (a). S is the scene random variable, taking values from a finite set S = {s1, · · · , sMs}. For an image, the distribution over scene categories depends on its global representation features G. Each scene is represented as a mixture over latent objects O and the mixing weights are defined with a generalized linear model (GLM) parameterized by ψ. 
By using a normal prior on ψ, the scene model can capture the mutual correlations between different objects, similar to the correlated topic models (CTMs) [3]. Here, we assume that for different scenes, the objects have different distributions and correlations. Let f denote the vector of real-valued feature functions of S and G. The generating procedure of an image is as follows:

1. Sample a scene category from the conditional scene model: p(s|g, θ) = exp(θ^⊤ f(g, s)) / ∑_{s'} exp(θ^⊤ f(g, s')).
2. Sample the parameters ψ | s, µ, Σ ∼ N(µ_s, Σ_s).
3. For each region n:
   (a) sample an object from: p(o_n = k | ψ) = exp(ψ_k) / ∑_j exp(ψ_j);
   (b) sample M_r (i.e., 3: color, location and texture) region features: r_{nm} | o_n, β ∼ Multi(β_{m o_n});
   (c) sample M_x image patches: x_{nm} | o_n, η ∼ Multi(η_{o_n}).

[Figure 1 about here: (a) graphical model; (b) bar plots of average log-likelihood ratios under MLE and max-margin estimation; (c) scene classification accuracies.]

Figure 1: (a) A joint scene categorization and object annotation model with global features G; (b) average log-likelihood ratio log p(s|g, θ)/L_{−θ} under MLE and max-margin estimation, where the first bar is for the true categories and the rest are for categories sorted by their difference from the first one; (c) scene classification accuracy using (blue) L_{−θ}, (green) log p(s|g, θ), and (red) L_{−θ} + log p(s|g, θ) for prediction. Group 1 is for MLE and group 2 is for max-margin training.

The generative model defines a joint distribution

p(s, ψ, o, r, x | g, Θ) = p(s|θ, g) p(ψ|µ_s, Σ_s) ∏_{n=1}^{N} ( p(o_n|ψ) ∏_{m=1}^{M_r} p(r_{nm}|o_n, β) ∏_{m=1}^{M_x} p(x_{nm}|o_n, η) ),

where we have used Θ to denote all the unknown parameters (θ, µ, Σ, β, η). From the joint distribution, we can make two types of predictions, namely scene classification and object annotation. For scene classification, we infer the maximum a posteriori prediction

ŝ ≜ arg max_s p(s|g, r, x) = arg max_s log p(s, r, x|g).
(1)

For object annotation, we can use the inferred latent representation of regions based on p(o|g, r, x) and build a classifier to categorize regions into object classes, when some training examples with manually annotated objects are provided. Since collecting fully labeled images with annotated objects is difficult, upstream scene models are usually learned with partially labeled images for scene categorization, where only scene categories are provided and objects are treated as latent topics or themes [9]. In this paper, we focus on scene classification. Some empirical results on object annotation will be reported when labeled objects are available.

We use this joint model as a running example to demonstrate the basic principle of performing max-margin learning for the widely applied upstream scene understanding models because it is well motivated, very generic, and covers many other existing scene understanding models. For example, if we do not incorporate the global scene representation G, the joint model reduces to a model similar to [14, 6, 23]. Moreover, the generic joint model provides a good framework for studying the relative contributions of local object modeling and global scene representation, which have been shown to be useful for scene classification [20] and object detection [17] tasks.

3 Weak Coupling of MLE in Learning Upstream Scene Models

To learn an upstream scene model, the most commonly used method is maximum likelihood estimation (MLE), as in [23, 6, 14]. In this section, we discuss the weakness of MLE for learning upstream scene models and motivate the max-margin approach. Let D = {(I_d, s_d)}_{d=1}^{D} denote a set of partially labeled training images. The standard MLE obtains the optimum model parameters by maximizing the log-likelihood¹ ∑_{d=1}^{D} log p(s_d, r_d, x_d|g_d, Θ).
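The generative procedure of Section 2.2 can be sketched directly by ancestral sampling. The sketch below is illustrative only: it assumes precomputed per-scene scores standing in for θ^⊤f(g, s), a diagonal covariance for ψ, and toy parameter tables (all names are hypothetical):

```python
import math
import random

def softmax(scores):
    """Numerically stable softmax over a list of real scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def generate_image(scene_scores, mu, sigma, beta, eta, n_regions, rng):
    """One ancestral sample from the joint scene/object model.
    scene_scores[s] plays the role of theta^T f(g, s); mu[s], sigma[s]
    parameterize the normal prior on psi (diagonal covariance assumed)."""
    s = rng.choices(range(len(scene_scores)), weights=softmax(scene_scores))[0]
    psi = [rng.gauss(m, sd) for m, sd in zip(mu[s], sigma[s])]
    obj_probs = softmax(psi)                       # p(o_n = k | psi)
    regions = []
    for _ in range(n_regions):
        o = rng.choices(range(len(psi)), weights=obj_probs)[0]
        feats = [rng.choices(range(len(b[o])), weights=b[o])[0] for b in beta]
        patch = rng.choices(range(len(eta[o])), weights=eta[o])[0]
        regions.append((o, feats, patch))
    return s, regions

rng = random.Random(0)
mu = [[0.0, 1.0], [1.0, 0.0]]        # 2 scenes, 2 latent objects (toy)
sigma = [[0.5, 0.5], [0.5, 0.5]]
beta = [[[0.7, 0.3], [0.2, 0.8]]]    # 1 region-feature type, 2 codewords
eta = [[0.9, 0.1], [0.1, 0.9]]       # per-object patch codeword distributions
s, regions = generate_image([0.2, -0.1], mu, sigma, beta, eta, 5, rng)
```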
By using the factorization of p(s, ψ, o, r, x|g, Θ), MLE solves the following equivalent problem

max_{θ, Θ_{−θ}} ∑_d ( log p(s_d|g_d, θ) + L_{s_d, −θ} ),    (2)

where L_{s_d, −θ} ≜ log ∫_ψ ∑_o p(ψ, o, r_d, x_d|s_d, Θ) = log p(r_d, x_d|s_d, Θ) is the log-likelihood of image features given the scene class, and Θ_{−θ} denotes all the parameters except θ. Since L_{s, −θ} does not depend on θ, the MLE estimation of the conditional scene model is to solve

max_θ ∑_d log p(s_d|g_d, θ),    (3)

which does not depend on the latent object model. This is inconsistent with the prediction rule (1), which depends on both the conditional scene model (i.e., p(s|g, θ)) and the local object model.¹

¹The conditional likelihood estimation can avoid this problem to some extent, but it has not been studied, to the best of our knowledge.

This decoupling results in an imbalanced combination of the conditional scene and object models for prediction, as we explain below. We first present some details of the MLE method. For θ, problem (3) is an MLE estimation of a GLM, and it can be efficiently solved with gradient descent methods, such as quasi-Newton methods [15]. For Θ_{−θ}, since the likelihood L_{s, −θ} is intractable to compute, we apply variational methods to obtain an approximation. By introducing a variational distribution q_s(ψ, o) to approximate the posterior p(ψ, o|s, r, x, Θ) and using Jensen's inequality, we can derive a lower bound

L_{s, −θ} ≥ E_{q_s}[log p(ψ, o, r, x|s, Θ)] + H(q_s) ≜ L_{−θ}(q_s, Θ),    (4)

where H(q) = −E_q[log q] is the entropy. Then, the intractable prediction rule (1) can be approximated with the variational prediction rule

ŝ ≜ arg max_{s, q_s} ( log p(s|g, θ) + L_{−θ}(q_s, Θ) ).    (5)

Maximizing ∑_d L_{−θ}(q_{s_d}, Θ) leads to a closed-form solution for Θ_{−θ}. See the Appendix for the inference of q_s as involved in the prediction rule (5) and the estimation of Θ_{−θ}. Now, we examine the effect of the conditional scene model p(s|g, θ) in making a prediction via the prediction rule (5). Fig.
1 (b-left) shows the relative importance of log p(s|g, θ) in the joint decision rule (5) on the sports dataset [13]. We can see that under MLE the conditional scene model plays a very weak role in making a prediction when it is combined with the object model, i.e., L_{−θ}. Therefore, as shown in Fig. 1 (c), although a simple logistic regression with global features (i.e., the green bar) can achieve good accuracy, the accuracy of the prediction rule (5) that uses the joint likelihood bound (i.e., the red bar) is decreased due to the strong effect of the potentially bad prediction rule based on L_{−θ} (i.e., the blue bar), which considers only local image features. In contrast, as shown in Fig. 1 (b-right), in the max-margin approach to be presented, the conditional scene model plays a much more influential role in making a prediction via rule (5). This results in a better-balanced combination of the scene and object models. The strong coupling is due to solving an online loss-augmented SVM, as we explain below. Note that we are not claiming any weakness of MLE in general. All our discussions are concentrated on learning upstream supervised topic models, as generically represented by the model in Fig. 1.

4 Max-Margin Training

Now, we present the max-margin method for learning upstream scene understanding models.

4.1 Problem Definition

For the predictive rule (1), we use F(s, g, r, x; Θ) ≜ log p(s|g, r, x, Θ) to denote the discriminant function, which is more complicated than the commonly chosen linear form, in a sense we will explain shortly. In the same spirit as max-margin classifiers (e.g., SVMs), we define the hinge loss of the prediction rule (1) on D as

R_hinge(Θ) = (1/D) ∑_d max_s [Δℓ_d(s) − ΔF_d(s; Θ)],

where Δℓ_d(s) is a loss function (e.g., 0/1 loss), and ΔF_d(s; Θ) = F(s_d, g_d, r_d, x_d; Θ) − F(s, g_d, r_d, x_d; Θ) is the margin favored by the true category s_d over any other category s.
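Given per-class values of the discriminant function, the loss-augmented hinge term max_s[Δℓ_d(s) − ΔF_d(s; Θ)] for one image is a one-liner; a toy sketch with hypothetical scores:

```python
def hinge_term(F, true_s, loss):
    """max over s of loss(s) - (F[true_s] - F[s]) for one image,
    i.e. the loss-augmented hinge term inside R_hinge."""
    return max(loss(s) + F[s] for s in range(len(F))) - F[true_s]

def zero_one(true_s):
    """0/1 loss: 0 for the true category, 1 for any other."""
    return lambda s: 0.0 if s == true_s else 1.0

F = [2.0, 1.5, 0.5]  # hypothetical per-class discriminant values
print(hinge_term(F, 0, zero_one(0)))  # max(2.0, 2.5, 1.5) - 2.0 = 0.5
```

Note that the loss can be nonzero even when the true class has the highest score, whenever the margin over some other class is smaller than that class's loss.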
The problem with the above definition is that exactly computing the posterior distribution p(s|g, r, x, Θ) is intractable. As in MLE, we use a variational distribution q_s to approximate it. By using Bayes' rule and the variational bound in Eq. (4), we can lower bound the log-likelihood

log p(s|g, r, x, Θ) = log p(s, r, x|g, Θ) − log p(r, x|g, Θ) ≥ log p(s|g, θ) + L_{−θ}(q_s, Θ) − c,    (6)

where c = log p(r, x|g, Θ). Without causing ambiguity, we will write L_{−θ}(q_s) without Θ. Since we need to make some assumptions about q_s, the equality in (6) usually does not hold. Therefore, the tightest lower bound is an approximation of the intractable discriminant function

F(s, g, r, x; Θ) ≈ log p(s|g, θ) + max_{q_s} L_{−θ}(q_s) − c.    (7)

Then, the margin is ΔF_d(s; Θ) = θ^⊤ Δf_d(s) + max_{q_{s_d}} L_{−θ}(q_{s_d}) − max_{q_s} L_{−θ}(q_s), of which the linear term is the same as that in a linear SVM [7], and the difference between the two variational bounds causes the topic discovery to bias the learning of the scene classification model, as we shall see.

Using the variational discriminant function in Eq. (7) and applying the principle of regularized empirical risk minimization, we define the max-margin learning of the joint scene and object model as solving

min_Θ Ω(Θ) + λ ∑_d (−max_{q_{s_d}} L_{−θ}(q_{s_d})) + C R_hinge(Θ),    (8)

where Ω(Θ) is a regularizer of the parameters. Here, we define Ω(Θ) ≜ (1/2)‖θ‖²₂. For the normal mean µ_s or covariance matrix Σ_s, a similar ℓ2-norm or Frobenius norm can be used without changing our algorithm. The free parameters λ and C are positive and trade off the classification loss and the data likelihood. When λ → ∞, problem (8) reduces to the standard MLE of the joint scene model with a fixed uniform prior on scene classes. Moreover, we can see the difference from the standard MLE (2). Here, we minimize a hinge loss, which is defined on the joint prediction rule, whereas MLE minimizes the negative log-likelihood loss −log p(s_d|g_d, θ), which does not depend on the latent object model.
Therefore, our approach can be expected to achieve a closer dependence between the conditional scene model and the latent object model. More insights will be provided in the next section.

4.2 Solving the Optimization Problem

Problem (8) is generally hard to solve because the model parameters and variational distributions are strongly coupled. Therefore, we develop a natural iterative procedure that estimates the parameters Θ and performs posterior inference alternately. The intuition is that by fixing one part (e.g., q_s) the other part (e.g., Θ) can be handled efficiently. Specifically, using the definitions, we rewrite problem (8) as a min-max optimization problem

min_{Θ, {q_{s_d}}} max_{{s, q_s}} ( (1/2)‖Θ‖²₂ − (λ + C) ∑_d L_{−θ}(q_{s_d}) + C ∑_d [−θ^⊤ Δf_d(s) + Δℓ_d(s) + L_{−θ}(q_s)] ),    (9)

where the factor 1/D in R_hinge is absorbed into the constant C. This min-max problem can be approximately solved with an iterative procedure. First, we infer the optimal variational posterior² q*_s = arg max_{q_s} L_{−θ}(q_s) for each s and each training image. Then, we solve

min_{Θ, {q_{s_d}}} ( (1/2)‖Θ‖²₂ − (λ + C) ∑_d L_{−θ}(q_{s_d}) + C ∑_d max_s [−θ^⊤ Δf_d(s) + Δℓ_d(s) + L_{−θ}(q*_s)] ).

For this sub-step, again, we apply an alternating procedure to solve the minimization problem over Θ and q_{s_d}. We first infer the optimal variational posterior q*_{s_d} = arg max_{q_{s_d}} L_{−θ}(q_{s_d}), and then we estimate the parameters by solving the following problem

min_Θ ( (1/2)‖Θ‖²₂ − (λ + C) ∑_d L_{−θ}(q*_{s_d}) + C ∑_d max_s [−θ^⊤ Δf_d(s) + Δℓ_d(s) + L_{−θ}(q*_s)] ).    (10)

Since inferring q*_{s_d} is included in the step of inferring q*_s (∀s), the algorithm can be summarized as a two-step EM procedure that iteratively performs posterior inference of q_s and max-margin parameter estimation. Another way to understand this iterative procedure is from the definitions. The first step of inferring q*_s computes the discriminant function F under the current model. Then, we update the model parameters Θ by solving a large-margin learning problem.
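The two-step procedure above can be sketched as a skeleton loop; `infer_q`, `solve_loss_augmented_svm`, and `update_topic_params` below are hypothetical stand-ins for the inference and estimation sub-routines described in the text, passed in as callables so the sketch is self-contained:

```python
def fit_mm_scene(data, scenes, theta, topic_params, infer_q,
                 solve_loss_augmented_svm, update_topic_params, n_iters=10):
    """Skeleton of the iterative procedure for problem (8):
    posterior inference of the variational bound for every (image, scene)
    pair, followed by a max-margin update of theta (problem (11)) and
    closed-form updates for the remaining topic parameters."""
    for _ in range(n_iters):
        # E-like step: best variational bound L_{-theta}(q*_s) per image/scene.
        bounds = [[infer_q(img, s, topic_params) for s in scenes]
                  for img in data]
        # M-like step: loss-augmented SVM for theta, then topic updates.
        theta = solve_loss_augmented_svm(data, bounds, theta)
        topic_params = update_topic_params(data, bounds)
    return theta, topic_params

# Trivial stub run, just to exercise the control flow.
theta_out, tp_out = fit_mm_scene(
    data=[1, 2], scenes=[0, 1], theta=0.0, topic_params=0.0,
    infer_q=lambda img, s, tp: 0.0,
    solve_loss_augmented_svm=lambda d, b, th: th + 1.0,
    update_topic_params=lambda d, b: 0.0,
    n_iters=3)
```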
For brevity, we present the parameter estimation only. The posterior inference is detailed in Appendix A.1.

Parameter Estimation: This step can be done with an alternating minimization procedure. For the Gaussian parameters (µ, Σ) and multinomial parameters (η, β), the estimation can be written in closed form as in a standard MLE of CTMs [3], by using a loss-augmented prediction of s. For brevity, we defer the details to Appendix A.2. Now, we present the step of estimating θ, which illustrates the essential difference between the large-margin approach and the standard MLE. Specifically, the optimum solution of θ is obtained by solving the sub-problem³

min_θ (1/2)‖θ‖²₂ + C ∑_d ( max_s [θ^⊤ f(g_d, s) + Δℓ_d(s) + L_{−θ}(q*_s)] − [θ^⊤ f(g_d, s_d) + L_{−θ}(q*_{s_d})] ),

which is equivalent to a constrained problem obtained by introducing a set of non-negative slack variables ξ:

min_{θ, ξ} (1/2)‖θ‖²₂ + C ∑_{d=1}^{D} ξ_d
s.t.: θ^⊤ Δf_d(s) + [L_{−θ}(q*_{s_d}) − L_{−θ}(q*_s)] ≥ Δℓ_d(s) − ξ_d, ∀d, s.    (11)

²To retain an accurate large-margin criterion for estimating the model parameters (especially θ), we do not perform the maximization over s at this step.
³The constant (w.r.t. θ) term −C ∑_d L_{−θ}(q*_{s_d}) is kept for easy explanation. It does not change the estimation.

The constrained optimization problem is similar to that of a linear SVM [7]. However, the difference is that we have the additional term ΔL*_d(s) ≜ L_{−θ}(q*_{s_d}) − L_{−θ}(q*_s). This term indicates that the estimation of the scene classification model is influenced by the topic discovery procedure, which finds an optimum posterior distribution q*. If ΔL*_d(s) < 0 for some s ≠ s_d, which means it is very likely that a wrong scene s explains the image content better than the true scene s_d, then the term ΔL*_d(s) acts to augment the linear decision boundary θ so as to make a correct prediction on this image under the prediction rule (5).
If ΔL*_d(s) > 0, which means the true scene can explain the image content better than s, then the linear decision boundary can be slightly relaxed. If we move the additional term to the right-hand side, problem (11) amounts to learning a linear SVM, but with an online-updated loss function Δℓ_d(s) − ΔL*_d(s). We call this SVM an online loss-augmented SVM. Solving the loss-augmented SVM results in an amplified influence of the scene classification model in the joint predictive rule (5), as shown in Fig. 1 (b).

5 Experiments

Now, we present an empirical evaluation of our approach on the sports [13] and MIT indoor scene [20] datasets. Our goal is to demonstrate the advantages of the max-margin method over MLE for learning upstream scene models with or without global features. Although the model in Fig. 1 can also be used for object annotation, we report the performance on scene categorization only, which is our main focus in this paper. For object annotation, which requires additional human-annotated examples of objects, some preliminary results are reported in the Appendix due to space limitations.

5.1 Datasets and Features

The sports dataset contains 1574 diverse scene images from 8 categories, as listed in Fig. 2 with example images. The indoor scene dataset [20] contains 15620 scene images from 67 categories, as listed in Table 2. We use the method of [1] to segment these images into small regions based on color, brightness, and texture homogeneity. For each region, we extract color, texture, and location features, and quantize them into 30, 50, and 120 codewords, respectively. Similarly, the SIFT features extracted from the small patches within each region are quantized into 300 SIFT codewords. We use the gist features [19] as one example of global features. Extension to include other global features, such as SIFT sparse codes [26], can be done directly without changing the model or the algorithm.

5.2 Models

For the upstream scene model as in Fig.
1, we compare max-margin learning with the MLE method, and we denote the scene models trained with max-margin training and MLE by MM-Scene and MLE-Scene, respectively. For both methods, we evaluate the effectiveness of global features, and we denote the scene models without global features by MM-Scene-NG and MLE-Scene-NG, respectively. Since our main goal in this paper is to demonstrate the advantages of max-margin learning in upstream supervised topic models, rather than the dominance of such models over all others, we compare with just one example of downstream models, the multi-class sLDA (Multi-sLDA) [25]. Systematic comparison with other methods, including DiscLDA [12] and MedLDA [27], is deferred to a full version. For the downstream Multi-sLDA, the image-wise scene category variable S is generated from latent object variables O via a softmax function. For this downstream model, parameter estimation can be done with MLE as detailed in [25]. Finally, to show the usefulness of the object model in scene categorization, we also compare with the margin-based multi-class SVM [7] and likelihood-based logistic regression for scene classification based on the global features. For the SVM, we use the software SVMmulticlass (footnote 4), which implements a fast cutting-plane algorithm [11] for parameter learning. We use the same software with slight changes to learn the loss-augmented SVM in our max-margin method.

5.3 Scene Categorization on the 8-Class Sports Dataset

We partition the dataset equally into training and testing data. For all the models except SVM and logistic regression, we run 5 trials with random initialization of the topic parameters (e.g., β and η).
Footnote 4: http://svmlight.joachims.org/svm_multiclass.html

[Figure 2: Example images from each category in the sports dataset with predicted scene classes; predictions in blue are correct, while red ones are wrong.]

[Figure 3: Classification accuracy of MM-Scene, MM-Scene-NG, MLE-Scene, MLE-Scene-NG, Multi-sLDA and Multi-SVM with respect to the number of topics (10 to 100).]

The average overall accuracy of scene categorization on the 8 categories and its standard deviation are shown in Fig. 3. The result of logistic regression is shown as the left green bar in Fig. 1 (c). We also show the confusion matrix of the max-margin scene model with 100 latent topics in Table 1, and example images from each category are shown in Fig. 2 with predicted labels. Overall, the max-margin scene model with global features achieves significant improvements compared to all other approaches we have tested. Interestingly, although we provide only scene categories as supervised information during training, our best performance with global features is close to that reported in [13], where additional supervision of objects is used. The outstanding performance of the max-margin method for scene classification can be understood from the following aspects. Max-margin training: comparing the max-margin approach with the standard MLE, both with and without global features, we can see that max-margin learning improves the performance dramatically, especially when the scene model uses global features (about 3 percent).
This is due to the well-balanced prediction rule achieved by the max-margin method, as explained in Section 3. Global features: comparing the scene models with and without global features, we can see that using the gist features significantly improves the scene categorization accuracy (about 8 percent) under both MLE and max-margin training. We also ran some preliminary experiments on the SIFT sparse-code features [26], which are somewhat more expensive to extract. By using both gist and sparse-code features, we achieve dramatic improvements with both max-margin and MLE methods. Specifically, the max-margin scene model achieves a scene classification accuracy of about 0.83, and the likelihood-based model obtains an accuracy of about 0.80. Object modeling: the superior performance of the max-margin learned MM-Scene model compared to the SVM and logistic regression (see the left green bar of Fig. 1 (c)), which use global features only, indicates that modeling objects can facilitate scene categorization. This is because the scene classification model is influenced by the latent object modeling through the term ∆L⋆_d(s), which can improve the decision boundary of a standard linear SVM for those images with negative scores ∆L⋆_d(s), as discussed for the online loss-augmented SVM. However, object modeling does not improve the classification accuracy, and can sometimes even be harmful, when the scene model is learned with the standard MLE. This is because the object model alone (using the state-of-the-art representation, e.g., MLE-Scene-NG) performs much worse than global-feature models (e.g., logistic regression), as shown in Fig. 1 and Fig. 3, and the standard MLE learns an imbalanced prediction rule, as analyzed in Section 3.
Given that the state-of-the-art object model is not good, it is very encouraging that we can still obtain positive improvements by using the closely coupled and well-balanced max-margin learning. These results indicate that further improvements can be expected by improving the local object model, e.g., by incorporating rich features. We also compare with the theme model [9], which performs scene categorization only. The theme model uses a different image representation, where each image is a vector of image-patch codewords. The theme model achieves about 0.65 classification accuracy, lower than that of MM-Scene.

Table 1: Confusion matrix for the 100-topic MM-Scene on the sports dataset (overall accuracy 0.717). Rows are true classes; columns are predicted classes in the order badminton, bocce, croquet, polo, rockclimbing, rowing, sailing, snowboarding.
badminton:     0.768 0.051 0.051 0.081 0.020 0.020 0.000 0.010
bocce:         0.043 0.333 0.275 0.145 0.087 0.058 0.014 0.043
croquet:       0.025 0.144 0.669 0.093 0.025 0.025 0.008 0.008
polo:          0.220 0.055 0.099 0.516 0.022 0.022 0.011 0.055
rockclimbing:  0.000 0.010 0.021 0.000 0.845 0.031 0.010 0.082
rowing:        0.008 0.008 0.008 0.008 0.024 0.912 0.016 0.016
sailing:       0.011 0.021 0.000 0.021 0.011 0.053 0.884 0.000
snowboarding:  0.011 0.021 0.032 0.095 0.084 0.053 0.063 0.642

Table 2: The 67 indoor categories sorted by classification accuracy of the 70-topic MM-Scene.
buffet 0.85; green house 0.84; cloister 0.71; inside bus 0.61; movie theater 0.60; poolinside 0.59; church inside 0.56; classroom 0.55; concert hall 0.55; corridor 0.55; florist 0.55; trainstation 0.54; closet 0.51; elevator 0.49; nursery 0.44; bowling 0.41; gameroom 0.40; lobby 0.40; prison cell 0.39; casino 0.36; dining room 0.35; kitchen 0.35; winecellar 0.34; library 0.31; tv studio 0.30; warehouse 0.29; bathroom 0.26; bookstore 0.25; computerroom 0.25; dentaloffice 0.25; grocerystore 0.25; inside subway 0.25; mall 0.25; meeting room 0.25; staircase 0.25; studiomusic 0.24; children room 0.21; garage 0.20; gym 0.20; hairsalon 0.20; livingroom 0.20; operating room 0.20; pantry 0.20; subway 0.20; toystore 0.19; artstudio 0.14; fastfood restaurant 0.13; auditorium 0.12; bakery 0.11; bedroom 0.11; clothingstore 0.10; hospitalroom 0.10; kindergarden 0.10; laundromat 0.10; office 0.10; restaurant kitchen 0.09; shoeshop 0.09; videostore 0.08; airport inside 0.07; bar 0.06; deli 0.06; jewelleryshop 0.06; laboratorywet 0.05; locker room 0.05; museum 0.05; restaurant 0.05; waitingroom 0.04.

[Figure 4: Classification accuracy of MM-Scene with different loss functions ∆ℓ_d(s): 0/1, 0/5, 0/10, 0/20, 0/30, 0/40, 0/50.]

Finally, we examine the influence of the loss function ∆ℓ_d(s) on the performance of the max-margin scene model. As we can see in problem (11), the loss function ∆ℓ_d(s) is another important factor that influences the estimation of θ and its relative importance in the prediction rule (5). Here, we use the 0/ℓ loss function, that is, ∆ℓ_d(s) = ℓ if s ≠ s_d, and 0 otherwise. Fig. 4 shows the performance of the 100-topic MM-Scene model under different loss functions. When ℓ is set between 10 and 20, the MM-Scene method stably achieves the best performance. The results above, in Fig. 3 and Table 1, are achieved with ℓ selected from 5 to 40 by cross-validation during training.
5.4 Scene Categorization on the 67-Class MIT Indoor Scene Dataset

[Figure 5: Classification accuracy on the 67-class MIT indoor dataset for MLE-Scene-NG, MM-Scene-NG, SVM, LR, ROI+Gist(segmentation), ROI+Gist(annotation), MLE-Scene and MM-Scene.]

The MIT indoor dataset [20] contains complex scene images from 67 categories. We use the same training and testing split as in [20], in which each category has about 80 images for training and about 20 images for testing. We compare the joint scene model with SVM, logistic regression (LR), and the prototype-based methods of [20]. Both the SVM and LR are based on the global gist features only. For the joint scene model, we set the number of latent topics to 70. The overall performance of the different methods is shown in Fig. 5, and the classification accuracy of each class is shown in Table 2. For the prototype-based methods, we cite the results from [20]. We can see that the joint scene model (both MLE-Scene and MM-Scene) significantly outperforms SVM and LR, which use global features only. The likelihood-based MLE-Scene slightly outperforms ROI+Gist(segmentation), which uses both the global gist features and local region-of-interest (ROI) features extracted from automatically segmented regions [20]. With max-margin training, the joint scene model (i.e., MM-Scene) achieves significant improvements over MLE-Scene. Moreover, the margin-based MM-Scene, which uses automatically segmented regions to extract features, outperforms the ROI+Gist(annotation) method, which uses human-annotated regions of interest.

6 Conclusions

In this paper, we address the weak coupling problem of the commonly used maximum likelihood estimation in learning upstream scene understanding models by presenting a joint maximum margin and maximum likelihood learning method.
The proposed approach achieves a close interplay between prediction model estimation and latent topic discovery, and thereby a well-balanced prediction rule. The optimization problem is efficiently solved with a variational EM procedure, which iteratively learns an online loss-augmented SVM. Finally, we demonstrate the advantages of max-margin training and the effectiveness of using global features in scene understanding on both an 8-category sports dataset and the 67-class MIT indoor scene data.

Acknowledgements
J.Z. and E.P.X. are supported by ONR N000140910758, NSF IIS-0713379, NSF Career DBI-0546594, and an Alfred P. Sloan Research Fellowship to E.P.X. L.F-F. is partially supported by an NSF CAREER grant (IIS-0845230), a Google research award, and a Microsoft Research Fellowship. We would also like to thank Olga Russakovsky for helpful comments.

References
[1] P. Arbeláez and L. Cohen. Constrained image segmentation from hierarchical boundaries. In CVPR, 2008.
[2] I. Biederman. On the semantics of a glance at a scene. Perceptual Organization, 213–253, 1981.
[3] D. Blei and J. Lafferty. Correlated topic models. In NIPS, 2006.
[4] D. Blei and J.D. McAuliffe. Supervised topic models. In NIPS, 2007.
[5] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, (3):993–1022, 2003.
[6] L.-L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes. In ICCV, 2007.
[7] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, (2):265–292, 2001.
[8] L. Du, L. Ren, D. Dunson, and L. Carin. A Bayesian model for simultaneous image clustering, annotation and object segmentation. In NIPS, 2009.
[9] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, 2005.
[10] A. Friedman. Framing pictures: The role of knowledge in automatized encoding and memory for gist.
Journal of Experimental Psychology: General, 108(3):316–355, 1979.
[11] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[12] S. Lacoste-Julien, F. Sha, and M. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, 2008.
[13] L.-J. Li and L. Fei-Fei. What, where and who? Classifying events by scene and object recognition. In CVPR, 2007.
[14] L.-J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: Classification, annotation and segmentation in an automatic framework. In CVPR, 2009.
[15] D.C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, (45):503–528, 1989.
[16] D.G. Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
[17] K. Murphy, A. Torralba, and W. Freeman. Using the forest to see the trees: A graphical model relating features, objects, and scenes. In NIPS, 2003.
[18] D. Navon. Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9(3):353–383, 1977.
[19] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
[20] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[21] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[22] J. Sivic, B.C. Russell, A. Efros, A. Zisserman, and W.T. Freeman. Discovering objects and their locations in images. In ICCV, 2005.
[23] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Learning hierarchical models of scenes, objects, and parts. In CVPR, 2005.
[24] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[25] C. Wang, D. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. In CVPR, 2009.
[26] J. Yang, K. Yu, Y. Gong, and T. Huang.
Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.
[27] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: Maximum margin supervised topic models for regression and classification. In ICML, 2009.
2010
Switched Latent Force Models for Movement Segmentation
Mauricio A. Álvarez 1, Jan Peters 2, Bernhard Schölkopf 2, Neil D. Lawrence 3,4
1 School of Computer Science, University of Manchester, Manchester, UK M13 9PL
2 Max Planck Institute for Biological Cybernetics, Tübingen, Germany 72076
3 School of Computer Science, University of Sheffield, Sheffield, UK S1 4DP
4 The Sheffield Institute for Translational Neuroscience, Sheffield, UK S10 2HQ

Abstract
Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation, and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force, we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and striking movements recorded using a Barrett WAM robot as a haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems, including models for human motion capture data and systems biology.

1 Introduction
Latent force models [1] are a new approach for modeling data that combines dimensionality reduction with systems of differential equations. The basic idea is to assume that an observed set of D correlated functions arises from an unobserved set of R forcing functions.
The assumption is that the R forcing functions drive the D observed functions through a set of differential equation models. Each differential equation is driven by a weighted mix of latent forcing functions. Sets of coupled differential equations arise in many physics and engineering problems, particularly when the temporal evolution of a system needs to be described. Learning such differential equations has important applications, e.g., in the study of human motor control and in robotics [6]. A latent force model differs from classical approaches in that it places a probabilistic process prior over the latent functions and hence can make statements about the uncertainty in the system. A joint Gaussian process model over the latent forcing functions and the observed data functions can be recovered using a Gaussian process prior in conjunction with linear differential equations [1]. The resulting latent force modeling framework allows the combination of knowledge of the system's dynamics with a data-driven model. Such generative models can be used to good effect, for example in ranked target prediction for transcription factors [5]. If a single Gaussian process prior is used to represent each latent function, then the models we consider are limited to smooth driving functions. However, discontinuities and segmented latent forces are omnipresent in real-world data. For example, impact forces due to contacts in a mechanical dynamical system (when grasping an object or when the feet touch the ground) or a switch in an electrical circuit result in discontinuous latent forces. Similarly, most non-rhythmic natural motor skills consist of a sequence of segmented, discrete movements. If these segments are separate time series, they should be treated as such and not be modeled by the same Gaussian process model.
In this paper, we extract a sequence of dynamical systems motor primitives, modeled by second order linear differential equations in conjunction with forcing functions (as in [1, 6]), from human movement, to be used as demonstrations of elementary movements for an anthropomorphic robot. As human trajectories exhibit large variability, both due to planned uncertainty of the human's movement policy and due to motor execution errors [7], a probabilistic model is needed to capture the underlying motor primitives. A set of second order differential equations is employed, as mechanical systems are of this type, and a temporal Gaussian process prior is used to allow probabilistic modeling [1]. To be able to obtain a sequence of dynamical systems, we augment the latent force model to include discontinuities in the latent function and changes in the dynamics. We introduce discontinuities by switching between different Gaussian process models (superficially similar to a mixture of Gaussian processes; however, the switching times are modeled as parameters, so that at any instant a single Gaussian process is driving the system). Continuity of the observed functions is then ensured by constraining the relevant state variables (for example, in a second order differential equation, velocity and displacement) to be continuous across the switching points. This allows us to model highly non-stationary multivariate time series. We demonstrate our approach on synthetic data and real-world movement data.

2 Review of Latent force models (LFM)
Latent force models [1] are hybrid models that combine mechanistic principles and Gaussian processes as a flexible way to introduce prior knowledge for data modeling. A set of D functions {y_d(t)}_{d=1}^D is modeled as the set of output functions of a series of coupled differential equations, whose common input is a linear combination of R latent functions, {u_r(t)}_{r=1}^R. Here we focus on a second order ordinary differential equation (ODE).
We assume the output y_d(t) is described by

A_d d²y_d(t)/dt² + C_d dy_d(t)/dt + κ_d y_d(t) = ∑_{r=1}^R S_{d,r} u_r(t),

where, for a mass-spring-damper system, A_d would represent the mass, C_d the damper constant and κ_d the spring constant associated with the output d. We refer to the variables S_{d,r} as the sensitivity parameters. They are used to represent the relative strength that the latent force r exerts over the output d. For simplicity we now focus on the case R = 1, although our derivations apply more generally. Note that models that learn a forcing function to drive a linear system have proven to be well suited for imitation learning for robot systems [6]. The solution of the second order ODE follows

y_d(t) = y_d(0) c_d(t) + ẏ_d(0) e_d(t) + f_d(t, u),    (1)

where y_d(0) and ẏ_d(0) are the output and the velocity at time t = 0, respectively, known as the initial conditions (IC). The angular frequency is given by ω_d = √((4A_d κ_d − C_d²)/(4A_d²)) and the remaining variables are given by

c_d(t) = e^{−α_d t} [cos(ω_d t) + (α_d/ω_d) sin(ω_d t)],    e_d(t) = (e^{−α_d t}/ω_d) sin(ω_d t),

f_d(t, u) = (S_d/(A_d ω_d)) ∫₀ᵗ G_d(t − τ) u(τ) dτ = (S_d/(A_d ω_d)) ∫₀ᵗ e^{−α_d (t−τ)} sin[(t − τ) ω_d] u(τ) dτ,

with α_d = C_d/(2A_d). Note that f_d(t, u) has an implicit dependence on the latent function u(t). The uncertainty in the model of Eq. (1) is due to the fact that the latent force u(t) and the initial conditions y_d(0) and ẏ_d(0) are not known. We will assume that the latent function u(t) is sampled from a zero mean Gaussian process prior, u(t) ∼ GP(0, k_{u,u}(t, t′)), with covariance function k_{u,u}(t, t′). If the initial conditions, y_IC = [y_1(0), y_2(0), ..., y_D(0), v_1(0), v_2(0), ...
, v_D(0)]⊤, are independent of u(t) and distributed as a zero mean Gaussian with covariance K_IC, then the covariance function between any two output functions d and d′, at any two times t and t′, is given by

k_{y_d,y_{d′}}(t, t′) = c_d(t) c_{d′}(t′) σ_{y_d,y_{d′}} + c_d(t) e_{d′}(t′) σ_{y_d,v_{d′}} + e_d(t) c_{d′}(t′) σ_{v_d,y_{d′}} + e_d(t) e_{d′}(t′) σ_{v_d,v_{d′}} + k_{f_d,f_{d′}}(t, t′),

where σ_{y_d,y_{d′}}, σ_{y_d,v_{d′}}, σ_{v_d,y_{d′}} and σ_{v_d,v_{d′}} are entries of the covariance matrix K_IC and

k_{f_d,f_{d′}}(t, t′) = K_0 ∫₀ᵗ G_d(t − τ) ∫₀^{t′} G_{d′}(t′ − τ′) k_{u,u}(τ, τ′) dτ′ dτ,    (2)

where K_0 = S_d S_{d′}/(A_d A_{d′} ω_d ω_{d′}). So the covariance function k_{f_d,f_{d′}}(t, t′) depends on the covariance function of the latent force u(t). If we assume the latent function has a radial basis function (RBF) covariance, k_{u,u}(t, t′) = exp[−(t − t′)²/ℓ²], then k_{f_d,f_{d′}}(t, t′) can be computed analytically [1] (see also the supplementary material). The latent force model induces a joint Gaussian process model across all the outputs. The parameters of the covariance function are given by the parameters of the differential equations and the length scale of the latent force. Given a multivariate time series data set, these parameters may be determined by maximum likelihood. The model can be thought of as a set of mass-spring-dampers being driven by a function sampled from a Gaussian process. In this paper we look to extend the framework to the case where there can be discontinuities in the latent functions. We do this by switching between different Gaussian process models to drive the system.

3 Switching dynamical latent force models (SDLFM)
We now consider switching the system between different latent forces. This allows us to change the dynamical system and the driving force for each segment. By constraining the displacement and velocity at each switching time to be the same, the output functions remain continuous.

3.1 Definition of the model
We assume that the input space is divided into a series of non-overlapping intervals [t_{q−1}, t_q], q = 1, ..., Q.
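As a hedged numerical illustration of the construction above (the paper evaluates these quantities analytically), the following approximates the forced response f_d(t, u) of Eq. (1) by a trapezoidal rule and the cross-covariance k_{f_d,f_{d′}}(t, t′) of Eq. (2) by a grid approximation of the double integral, for the case d = d′; all parameter values are assumptions made for the example.

```python
import numpy as np

# Hedged numerical sketch (not the authors' code) of two quantities above:
#  (i) the forced response f_d(t, u) in Eq. (1), via a trapezoidal rule, and
#  (ii) the cross-covariance k_{f_d, f_d'}(t, t') in Eq. (2), via a double
#      integral of the Green's functions against the RBF kernel of u(t).
# Parameter values (A, C, kappa, S, ell) are illustrative assumptions.

def system_constants(A, C, kappa):
    alpha = C / (2.0 * A)
    omega = np.sqrt(4.0 * A * kappa - C**2) / (2.0 * A)   # underdamped case
    return alpha, omega

def trap(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def forced_response(ts, u, A, C, kappa, S):
    """f_d(t,u) = S/(A w) int_0^t e^{-a(t-tau)} sin(w (t-tau)) u(tau) dtau."""
    alpha, omega = system_constants(A, C, kappa)
    f = np.zeros_like(ts)
    for i in range(1, len(ts)):
        tau = ts[: i + 1]
        g = np.exp(-alpha * (ts[i] - tau)) * np.sin(omega * (ts[i] - tau))
        f[i] = S / (A * omega) * trap(g * u[: i + 1], tau)
    return f

def k_ff(t, tp, A, C, kappa, S, ell=1.0, n=200):
    """Grid approximation of Eq. (2) for d = d' (so K0 = S^2/(A^2 w^2))."""
    alpha, omega = system_constants(A, C, kappa)
    tau, taup = np.linspace(0.0, t, n), np.linspace(0.0, tp, n)
    G = np.exp(-alpha * (t - tau)) * np.sin(omega * (t - tau))        # G_d
    Gp = np.exp(-alpha * (tp - taup)) * np.sin(omega * (tp - taup))   # G_d'
    Kuu = np.exp(-(tau[:, None] - taup[None, :]) ** 2 / ell**2)       # k_uu(tau, tau')
    inner = trap(Kuu * Gp[None, :], taup)       # integrate over tau'
    return (S * S) / (A * A * omega * omega) * trap(G * inner, tau)

ts = np.linspace(0.0, 10.0, 400)
f = forced_response(ts, np.sin(ts), A=0.1, C=0.4, kappa=2.0, S=1.0)
k23 = k_ff(2.0, 3.0, A=0.1, C=0.4, kappa=2.0, S=1.0)
k32 = k_ff(3.0, 2.0, A=0.1, C=0.4, kappa=2.0, S=1.0)
```

For a single output with identical parameters, the grid approximation is symmetric, k_ff(t, t′) = k_ff(t′, t), as the analytical kernel must be.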
During each interval, only one force u_{q−1}(t) out of the Q forces {u_{q−1}(t)}_{q=1}^Q is active. The force u_{q−1}(t) is activated (switched on) after time t_{q−1} and deactivated (switched off) after time t_q. We can use the basic model in equation (1) to describe the contribution to the output due to the sequential activation of these forces. A particular output z_d(t) at a particular time instant t in the interval (t_{q−1}, t_q) is expressed as

z_d(t) = y_d^q(t − t_{q−1}) = c_d^q(t − t_{q−1}) y_d^q(t_{q−1}) + e_d^q(t − t_{q−1}) ẏ_d^q(t_{q−1}) + f_d^q(t − t_{q−1}, u_{q−1}).

This equation is assumed to be valid for describing the output only inside the interval (t_{q−1}, t_q). Here we highlight this idea by including the superscript q in y_d^q(t − t_{q−1}) to represent the interval q for which the equation holds, although later we will omit it to keep the notation uncluttered. Note that for Q = 1 and t_0 = 0, we recover the original latent force model given in equation (1). We also define the velocity ż_d(t) at each time interval (t_{q−1}, t_q) as

ż_d(t) = ẏ_d^q(t − t_{q−1}) = g_d^q(t − t_{q−1}) y_d^q(t_{q−1}) + h_d^q(t − t_{q−1}) ẏ_d^q(t_{q−1}) + m_d^q(t − t_{q−1}, u_{q−1}),

where

g_d(t) = −e^{−α_d t} sin(ω_d t) (α_d² ω_d^{−1} + ω_d),    h_d(t) = −e^{−α_d t} [(α_d/ω_d) sin(ω_d t) − cos(ω_d t)],

m_d(t) = (S_d/(A_d ω_d)) d/dt [∫₀ᵗ G_d(t − τ) u(τ) dτ].

Given the parameters θ = {{A_d, C_d, κ_d, S_d}_{d=1}^D, {ℓ_{q−1}}_{q=1}^Q}, the uncertainty in the outputs is induced by the prior over the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}) for all values of t_{q−1} and the prior over the latent force u_{q−1}(t) that is active during (t_{q−1}, t_q). We place independent Gaussian process priors over each of these latent forces u_{q−1}(t), assuming independence between them. For the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}), we could assume that they are either parameters to be estimated or random variables with uncertainty governed by independent Gaussian distributions with covariance matrices K_IC^q, as described in the last section.
However, for the class of applications we will consider, mechanical systems, the outputs should be continuous across the switching points. We therefore assume that the initial conditions for the interval q, y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}), are prescribed by the Gaussian process that describes the outputs z_d(t) and velocities ż_d(t) in the previous interval q − 1. In particular, we assume y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}) are Gaussian-distributed with mean values given by y_d^{q−1}(t_{q−1} − t_{q−2}) and ẏ_d^{q−1}(t_{q−1} − t_{q−2}) and covariances

k_{z_d,z_{d′}}(t_{q−1}, t_{q′−1}) = cov[y_d^{q−1}(t_{q−1} − t_{q−2}), y_{d′}^{q−1}(t_{q−1} − t_{q−2})],
k_{ż_d,ż_{d′}}(t_{q−1}, t_{q′−1}) = cov[ẏ_d^{q−1}(t_{q−1} − t_{q−2}), ẏ_{d′}^{q−1}(t_{q−1} − t_{q−2})].

We also consider covariances between z_d(t_{q−1}) and ż_{d′}(t_{q′−1}), that is, between positions and velocities for different values of q and d.

Example 1. Let us assume we have one output (D = 1) and three switching intervals (Q = 3) with switching points t_0, t_1 and t_2. At t_0, we assume that y_IC follows a Gaussian distribution with mean zero and covariance K_IC. From t_0 to t_1, the output z(t) is described by

z(t) = y^1(t − t_0) = c^1(t − t_0) y^1(t_0) + e^1(t − t_0) ẏ^1(t_0) + f^1(t − t_0, u_0).

The initial condition for the position in the interval (t_1, t_2) is given by the last equation evaluated at t_1, that is, z(t_1) = y^2(t_1) = y^1(t_1 − t_0). A similar analysis is used to obtain the initial condition associated with the velocity, ż(t_1) = ẏ^2(t_1) = ẏ^1(t_1 − t_0). Then, from t_1 to t_2, the output z(t) is

z(t) = y^2(t − t_1) = c^2(t − t_1) y^2(t_1) + e^2(t − t_1) ẏ^2(t_1) + f^2(t − t_1, u_1)
     = c^2(t − t_1) y^1(t_1 − t_0) + e^2(t − t_1) ẏ^1(t_1 − t_0) + f^2(t − t_1, u_1).

Following the same train of thought, the output z(t) from t_2 is given by

z(t) = y^3(t − t_2) = c^3(t − t_2) y^3(t_2) + e^3(t − t_2) ẏ^3(t_2) + f^3(t − t_2, u_2),

where y^3(t_2) = y^2(t_2 − t_1) and ẏ^3(t_2) = ẏ^2(t_2 − t_1). Figure 1 shows an example of the switching dynamical latent force model scenario.
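The recursion in Example 1 can be sketched numerically (this is an illustration, not the authors' code): each interval integrates its own force, and the initial conditions of interval q are the terminal position and velocity of interval q − 1, which enforces continuity at the switching points. The system parameters, switching times and forces below are made-up assumptions.

```python
import numpy as np

# Sketch of the switching construction in Example 1 (D = 1, Q = 3):
# within each interval the output solves A y'' + C y' + kappa y = u(t),
# and the initial conditions (y0, v0) of interval q are the terminal
# (position, velocity) of interval q-1, enforcing continuity at switches.
# A, C, kappa, the switching times and the forces are illustrative.

A, C, kappa = 0.1, 0.4, 2.0

def step(y, v, u, dt):
    """One explicit Euler step of A y'' + C y' + kappa y = u."""
    a = (u - C * v - kappa * y) / A
    return y + dt * v, v + dt * a

def simulate(forces, switch_times, dt=1e-3, y0=0.0, v0=0.0):
    """Integrate through Q intervals; force q-1 is active on (t_{q-1}, t_q)."""
    ys, y, v = [], y0, v0
    intervals = zip(switch_times[:-1], switch_times[1:])
    for (t_start, t_end), u in zip(intervals, forces):
        for t in np.arange(t_start, t_end, dt):
            # state (y, v) is carried over from the previous interval,
            # so the output and its velocity stay continuous at t_start
            y, v = step(y, v, u(t - t_start), dt)
            ys.append(y)
    return np.array(ys)

# Three discontinuous forces, active on (t0,t1), (t1,t2), (t2,t3).
forces = [lambda t: 10.0, lambda t: -5.0, lambda t: 10.0 * np.sin(t)]
z = simulate(forces, switch_times=[0.0, 5.0, 12.0, 15.0])
```

Even though the driving force jumps at t_1 and t_2, the simulated output z(t) has no jumps, mirroring the continuity constraint imposed on the model.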
To ensure the continuity of the outputs, the initial condition is forced to be equal to the output of the previous interval evaluated at the switching point.

[Figure 1: Representation of an output constructed through a switching dynamical latent force model with Q = 3. The initial conditions y^q(t_{q−1}) for each interval are matched to the value of the output in the previous interval, evaluated at the switching point t_{q−1}, that is, y^q(t_{q−1}) = y^{q−1}(t_{q−1} − t_{q−2}).]

3.2 The covariance function
The derivation of the covariance function for the switching model is rather involved. For continuous output signals, we must take into account constraints at each switching time. This causes the initial conditions for each interval to depend on the final conditions of the previous interval, and induces correlations across the intervals. This effort is worthwhile, though, as the resulting model is very flexible and can take advantage of the switching dynamics to represent a range of signals. As a taster, Figure 2 shows samples from a covariance function of a switching dynamical latent force model with D = 1 and Q = 3. Note that while the latent forces (panels a and c) are discrete, the outputs (panels b and d) are continuous and have matching gradients at the switching points. The outputs are highly non-stationary. The switching times turn out to be parameters of the covariance function. They can be optimized along with the dynamical system parameters to match the location of the non-stationarities. We now give an overview of the covariance function derivation. Details are provided in the supplementary material.

[Figure 2, panels (a)–(d): samples from the latent force and from the output for two different systems.]
[Figure 2: Joint samples of a switching dynamical LFM model with one output, D = 1, and three intervals, Q = 3, for two different systems. Dashed lines indicate the presence of switching points. While system 2 responds instantaneously to the input force, system 1 delays its reaction due to larger inertia.]

In general, we need to compute the covariance k_{z_d,z_{d′}}(t, t′) = cov[z_d(t), z_{d′}(t′)] for z_d(t) in time interval (t_{q−1}, t_q) and z_{d′}(t′) in time interval (t_{q′−1}, t_{q′}). By definition, this covariance follows cov[z_d(t), z_{d′}(t′)] = cov[y_d^q(t − t_{q−1}), y_{d′}^{q′}(t′ − t_{q′−1})]. We assume independence between the latent forces u_q(t), and independence between the initial conditions y_IC and the latent forces u_q(t) (footnote 1). With these conditions, it can be shown (footnote 2) that the covariance function (footnote 3) for q = q′ is given by

c_d^q(t − t_{q−1}) c_{d′}^q(t′ − t_{q−1}) k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) + c_d^q(t − t_{q−1}) e_{d′}^q(t′ − t_{q−1}) k_{z_d,ż_{d′}}(t_{q−1}, t_{q−1})
+ e_d^q(t − t_{q−1}) c_{d′}^q(t′ − t_{q−1}) k_{ż_d,z_{d′}}(t_{q−1}, t_{q−1}) + e_d^q(t − t_{q−1}) e_{d′}^q(t′ − t_{q−1}) k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1})
+ k_{f_d,f_{d′}}^q(t, t′),    (3)

where k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^q(t_{q−1}) y_{d′}^q(t_{q−1})], k_{z_d,ż_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^q(t_{q−1}) ẏ_{d′}^q(t_{q−1})], k_{ż_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[ẏ_d^q(t_{q−1}) y_{d′}^q(t_{q−1})], k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1}) = cov[ẏ_d^q(t_{q−1}) ẏ_{d′}^q(t_{q−1})], and k_{f_d,f_{d′}}^q(t, t′) = cov[f_d^q(t − t_{q−1}) f_{d′}^q(t′ − t_{q−1})].

In expression (3), k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^{q−1}(t_{q−1} − t_{q−2}), y_{d′}^{q−1}(t_{q−1} − t_{q−2})], and the values of k_{z_d,ż_{d′}}(t_{q−1}, t_{q−1}), k_{ż_d,z_{d′}}(t_{q−1}, t_{q−1}) and k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1}) can be obtained from similar expressions. The covariance k_{f_d,f_{d′}}^q(t, t′) follows an expression similar to the one for k_{f_d,f_{d′}}(t, t′) in equation (2), now depending on the covariance k_{u_{q−1},u_{q−1}}(t, t′). We will assume that the covariances for the latent forces follow the RBF form, with length-scale ℓ_q. When q > q′, we have to take into account the correlation between the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}) and the latent force u_{q′−1}(t′).
This correlation appears because of the contribution of u_{q′−1}(t′) to the generation of the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}). It can be shown (footnote 4) that the covariance function cov[z_d(t), z_{d′}(t′)] for q > q′ follows

c_d^q(t − t_{q−1}) c_{d′}^{q′}(t′ − t_{q′−1}) k_{z_d,z_{d′}}(t_{q−1}, t_{q′−1}) + c_d^q(t − t_{q−1}) e_{d′}^{q′}(t′ − t_{q′−1}) k_{z_d,ż_{d′}}(t_{q−1}, t_{q′−1})
+ e_d^q(t − t_{q−1}) c_{d′}^{q′}(t′ − t_{q′−1}) k_{ż_d,z_{d′}}(t_{q−1}, t_{q′−1}) + e_d^q(t − t_{q−1}) e_{d′}^{q′}(t′ − t_{q′−1}) k_{ż_d,ż_{d′}}(t_{q−1}, t_{q′−1})
+ c_d^q(t − t_{q−1}) X_d^1 k_{f_d,f_{d′}}^{q′}(t_{q′−1}, t′) + c_d^q(t − t_{q−1}) X_d^2 k_{m_d,f_{d′}}^{q′}(t_{q′−1}, t′)
+ e_d^q(t − t_{q−1}) X_d^3 k_{f_d,f_{d′}}^{q′}(t_{q′−1}, t′) + e_d^q(t − t_{q−1}) X_d^4 k_{m_d,f_{d′}}^{q′}(t_{q′−1}, t′),    (4)

where k_{z_d,z_{d′}}(t_{q−1}, t_{q′−1}) = cov[y_d^q(t_{q−1}) y_{d′}^{q′}(t_{q′−1})], k_{z_d,ż_{d′}}(t_{q−1}, t_{q′−1}) = cov[y_d^q(t_{q−1}) ẏ_{d′}^{q′}(t_{q′−1})], k_{ż_d,z_{d′}}(t_{q−1}, t_{q′−1}) = cov[ẏ_d^q(t_{q−1}) y_{d′}^{q′}(t_{q′−1})], k_{ż_d,ż_{d′}}(t_{q−1}, t_{q′−1}) = cov[ẏ_d^q(t_{q−1}) ẏ_{d′}^{q′}(t_{q′−1})], k_{m_d,f_{d′}}^q(t, t′) = cov[m_d^q(t − t_{q−1}) f_{d′}^q(t′ − t_{q−1})], and X_d^1, X_d^2, X_d^3 and X_d^4 are functions of the form ∑_{n=2}^{q−q′} ∏_{i=2}^{q−q′} x_d^{q−i+1}(t_{q−i+1} − t_{q−i}), with x_d^{q−i+1} equal to c_d^{q−i+1}, e_d^{q−i+1}, g_d^{q−i+1} or h_d^{q−i+1}, depending on the values of q and q′. A similar expression to (4) can be obtained for q′ > q. Examples of these functions for specific values of q and q′, and more details, are given in the supplementary material.

Footnote 1: Derivations of these equations are rather involved. In the supplementary material, section 2, we include a detailed description of how to obtain equations (3) and (4).
Footnote 2: See supplementary material, section 2.2.1.
Footnote 3: We will write f_d^q(t − t_{q−1}, u_{q−1}) as f_d^q(t − t_{q−1}) for notational simplicity.
Footnote 4: See supplementary material, section 2.2.2.

4 Related work
There has been recent interest in employing Gaussian processes for the detection of change points in time series analysis, an area of study that relates to some extent to our model. Some machine learning related papers include [3, 4, 9]. [3, 4] deal specifically with how to construct covariance functions
⁴See supplementary material, section 2.2.2.

in the presence of change points (see [3], section 4). The authors propose different alternatives according to the type of change point. Among these alternatives, the closest ones to our work appear in subsections 4.2, 4.3 and 4.4. In subsection 4.2, a mechanism is proposed to keep continuity in a covariance function when there are two regimes described by different GPs; the authors call this the continuous conditionally independent covariance function. In our switched latent force model, a more natural option is to use the initial conditions as the way to transition smoothly between different regimes. In subsections 4.3 and 4.4, the authors propose covariances that account for a sudden change in the input scale and a sudden change in the output scale. Both types of changes are automatically included in our model due to the latent force model construction: changes in the input scale are accounted for by the different length-scales of the latent force GP process, and changes in the output scale are accounted for by the different sensitivity parameters. Importantly, we are also concerned with multiple-output systems. On the other hand, [9] proposes an efficient inference procedure for Bayesian Online Change Point Detection (BOCPD) in which the underlying predictive model (UPM) is a GP. That reference is less concerned with the particular type of change represented by the model: in our application scenario, the continuity of the covariance function between two regimes must be assured beforehand.

5 Implementation

In this section, we describe additional implementation details, namely the covariance functions, hyperparameter estimation, and sparse approximations.

Additional covariance functions. The covariance functions $k_{\dot{z}_d,z_{d'}}(t,t')$, $k_{z_d,\dot{z}_{d'}}(t,t')$ and $k_{\dot{z}_d,\dot{z}_{d'}}(t,t')$ are obtained by taking derivatives of $k_{z_d,z_{d'}}(t,t')$ with respect to $t$ and $t'$ [10].

Estimation of hyperparameters.
Given the number of outputs $D$ and the number of intervals $Q$, we estimate the parameters $\theta$ by maximizing the marginal likelihood of the joint Gaussian process $\{z_d(t)\}_{d=1}^D$ using gradient-descent methods. With a set of input points $\mathbf{t} = \{t_n\}_{n=1}^N$, the marginal likelihood is given as $p(\mathbf{z}|\theta) = \mathcal{N}(\mathbf{z}|\mathbf{0}, \mathbf{K}_{z,z} + \boldsymbol{\Sigma})$, where $\mathbf{z} = [\mathbf{z}_1^\top, \ldots, \mathbf{z}_D^\top]^\top$, with $\mathbf{z}_d = [z_d(t_1), \ldots, z_d(t_N)]^\top$, and $\mathbf{K}_{z,z}$ is a $D \times D$ block-partitioned matrix with blocks $\mathbf{K}_{z_d,z_{d'}}$. The entries in each of these blocks are evaluated using $k_{z_d,z_{d'}}(t,t')$. Furthermore, $k_{z_d,z_{d'}}(t,t')$ is computed using expressions (3) and (4), according to the relative values of $q$ and $q'$.

Efficient approximations. Optimizing the marginal likelihood involves inverting the matrix $\mathbf{K}_{z,z}$, an operation whose cost grows as $O(D^3N^3)$. We use a sparse approximation based on the variational methods presented in [2], a generalization of [11] to multiple-output Gaussian processes. The approximation establishes a lower bound on the marginal likelihood and reduces the computational complexity to $O(DNK^2)$, where $K$ is a reduced number of points used to represent $u(t)$.

6 Experimental results

We now show results with artificial data and with data recorded from a robot performing a basic set of actions appearing in table tennis.

6.1 Toy example

Using the model, we generate samples from the GP with the covariance function explained above. In the first experiment, we sample from a model with $D = 2$, $R = 1$ and $Q = 3$, with switching points $t_0 = -1$, $t_1 = 5$ and $t_2 = 12$. For the outputs, we have $A_1 = A_2 = 0.1$, $C_1 = 0.4$, $C_2 = 1$, $\kappa_1 = 2$, $\kappa_2 = 3$. We restrict the latent forces to have the same length-scale value, $\ell_0 = \ell_1 = \ell_2 = 10^{-3}$, but change the values of the sensitivity parameters as $S_{1,1} = 10$, $S_{2,1} = 1$, $S_{1,2} = 10$, $S_{2,2} = 5$, $S_{1,3} = -10$ and $S_{2,3} = 1$, where the first subindex refers to the output $d$ and the second subindex refers to the force in the interval $q$.
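As an aside on the hyperparameter estimation step described above, the quantity being maximized, $\log p(\mathbf{z}|\theta) = \log \mathcal{N}(\mathbf{z}|\mathbf{0}, \mathbf{K}_{z,z} + \boldsymbol{\Sigma})$, is typically evaluated with a Cholesky factorization. The sketch below is our own, not the authors' implementation, and simplifies $\boldsymbol{\Sigma}$ to an isotropic noise term:

```python
import numpy as np

def gp_log_marginal_likelihood(z, Kzz, noise_var):
    """Evaluate log N(z | 0, Kzz + noise_var * I) via a Cholesky factorization,
    the standard numerically stable way to compute a GP marginal likelihood."""
    N = len(z)
    C = Kzz + noise_var * np.eye(N)
    L = np.linalg.cholesky(C)                               # C = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))     # alpha = C^{-1} z
    return (-0.5 * z @ alpha
            - np.sum(np.log(np.diag(L)))                    # = 0.5 * log|C|
            - 0.5 * N * np.log(2.0 * np.pi))
```

In practice one maximizes this quantity over the kernel hyperparameters (length-scales, sensitivities, switching points) with a gradient-based optimizer.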
In this first experiment, we wanted to show the ability of the model to detect changes in the sensitivities of the forces while keeping the length-scales equal along the intervals. We sampled 5 times from the model, each output having 500 data points, and added noise with variance equal to ten percent of the variance of each sampled output. In each of the five repetitions, we took $N = 200$ data points for training and the remaining 300 for testing.

          Q = 1          Q = 2         Q = 3        Q = 4        Q = 5
1  SMSE   76.27±35.63    14.66±11.74   0.30±0.02    0.31±0.03    0.72±0.56
   MSLL   −0.98±0.46     −1.79±0.26    −2.90±0.03   −2.87±0.04   −2.55±0.41
2  SMSE   7.27±6.88      1.08±0.05     1.10±0.05    1.06±0.05    1.10±0.09
   MSLL   −1.79±0.28     −2.26±0.02    −2.25±0.02   −2.27±0.03   −2.26±0.06

Table 1: Standardized mean square error (SMSE) and mean standardized log loss (MSLL) using different values of Q for both toy examples. The figures for the SMSE must be multiplied by 10⁻². See the text for details.

Figure 4: Mean and two standard deviations for the predictions over the latent force and two of the three outputs in the test set. Dashed lines indicate the final value of the switching points after optimization. Dots indicate training data. Panels: (a) latent force, (b) output 1 and (c) output 2 for toy example 1; (d) latent force, (e) output 1 and (f) output 3 for toy example 2.

Figure 3: Data collection was performed using a Barrett WAM robot as haptic input device.

Optimization of the hyperparameters (including $t_1$ and $t_2$) is done by maximizing the marginal likelihood through scaled conjugate gradient. We train models for $Q = 1, 2, 3, 4$ and $5$ and measure the mean standardized log loss (MSLL) and the standardized mean square error (SMSE) [8] over the test set for each value of $Q$.
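The two evaluation metrics can be computed as follows. This is our own sketch following the definitions in Rasmussen and Williams [8]; the function names are ours, and we use Gaussian predictive densities for the MSLL:

```python
import numpy as np

def smse(y_true, y_pred):
    """Standardized mean squared error: MSE divided by the variance of the
    test targets, so that predicting the mean scores roughly 1."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def msll(y_true, mean_pred, var_pred, y_train):
    """Mean standardized log loss: negative Gaussian log predictive density,
    minus that of a trivial Gaussian fitted to the training targets."""
    nlpd = 0.5 * np.log(2 * np.pi * var_pred) + (y_true - mean_pred) ** 2 / (2 * var_pred)
    mu0, v0 = np.mean(y_train), np.var(y_train)
    nlpd0 = 0.5 * np.log(2 * np.pi * v0) + (y_true - mu0) ** 2 / (2 * v0)
    return np.mean(nlpd - nlpd0)
```

Lower is better for both; a negative MSLL means the model beats the trivial Gaussian baseline, consistent with the values reported in Table 1.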
Table 1, first two rows, shows the corresponding average results over the 5 repetitions together with one standard deviation. Notice that at $Q = 3$ the model achieves its best performance for the first time, a performance repeated for $Q = 4$. The SMSE performance remains approximately equal for values of $Q$ greater than 3. Figures 4(a), 4(b) and 4(c) show the kind of predictions made by the model for $Q = 3$. We also generate a different toy example, in which the length-scales of the intervals differ. For the second toy experiment, we assume $D = 3$, $Q = 2$ and switching points $t_0 = -2$ and $t_1 = 8$. The parameters of the outputs are $A_1 = A_2 = A_3 = 0.1$, $C_1 = 2$, $C_2 = 3$, $C_3 = 0.5$, $\kappa_1 = 0.4$, $\kappa_2 = 1$, $\kappa_3 = 1$, and the length-scales are $\ell_0 = 10^{-3}$ and $\ell_1 = 1$. Sensitivities in this case are $S_{1,1} = 1$, $S_{2,1} = 5$, $S_{3,1} = 1$, $S_{1,2} = 5$, $S_{2,2} = 1$ and $S_{3,2} = 1$. We follow the same evaluation setup as in toy example 1. Table 1, last two rows, shows the performance again in terms of MSLL and SMSE. We see that for values of $Q > 2$, the MSLL and SMSE remain similar. Figures 4(d), 4(e) and 4(f) show the inferred latent force and the predictions made for two of the three outputs.

6.2 Segmentation of human movement data for robot imitation learning

In this section, we evaluate the feasibility of the model for motion segmentation, with possible applications in the analysis of human movement data and imitation learning. To do so, we had a human teacher take the robot by the hand and demonstrate striking movements in a cooperative game of table tennis with another human being, as shown in Figure 3. We recorded joint positions,
Figure 5: Employing the switching dynamical LFM model on the human movement data collected as in Fig. 3 leads to plausible segmentations of the demonstrated trajectories. The first row corresponds to the log-likelihood, latent force and one of four outputs (HR) for trial one. The second row shows the same quantities (with the SFE output) for trial two. Crosses at the bottom of the figure refer to the number of points used for the approximation of the Gaussian process, in this case K = 50.

angular velocities, and angular acceleration of the robot for two independent trials of the same table tennis exercise. For each trial, we selected four output positions and trained several models for different values of $Q$, including the latent force model without switches ($Q = 1$). We evaluate the quality of the segmentation in terms of the log-likelihood. Figure 5 shows the log-likelihood, the inferred latent force and one output for trial one (first row), and the corresponding quantities for trial two (second row). Figures 5(a) and 5(d) show peaks in the log-likelihood at $Q = 9$ for trial one and $Q = 10$ for trial two. As the movement has few gaps and the data has several output dimensions, it is hard even for a human being to detect the transitions between movements (unless they are visualized as in a movie). Nevertheless, the model found a maximum of the log-likelihood at the correct instances in time where the human transitions between two movements. At these instances the human usually reacts to an external stimulus with a large jerk, causing a jump in the forces. As a result, we obtained not only a segmentation of the movement but also a generative model for table tennis striking movements.
7 Conclusion

We have introduced a new probabilistic model that extends the latent force modeling framework with switched Gaussian processes, allowing for discontinuities in the latent space of forces. We have shown the application of the model to toy examples and to a real-world robot problem, in which we were interested in finding and representing striking movements. Other applications of the switching latent force model that we envisage include modeling human motion capture data using the second-order ODE, and using a first-order ODE for modeling complex circuits in biological networks. To find the order of the model, that is, the number of intervals, we have used cross-validation. Future work includes proposing a less expensive model selection criterion.

Acknowledgments

MA and NL are very grateful for support from a Google Research Award "Mechanistically Inspired Convolution Processes for Learning" and the EPSRC Grant No EP/F005687/1 "Gaussian Processes for Systems Identification with Applications in Systems Biology". MA also thanks the PASCAL2 Internal Visiting Programme. We also thank the three anonymous reviewers for their helpful comments.

References

[1] Mauricio Álvarez, David Luengo, and Neil D. Lawrence. Latent Force Models. In David van Dyk and Max Welling, editors, Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, pages 9–16, Clearwater Beach, Florida, 16–18 April 2009. JMLR W&CP 5.

[2] Mauricio A. Álvarez, David Luengo, Michalis K. Titsias, and Neil D. Lawrence. Efficient multioutput Gaussian processes through variational inducing kernels. In JMLR: W&CP 9, pages 25–32, 2010.

[3] Roman Garnett, Michael A. Osborne, Steven Reece, Alex Rogers, and Stephen J. Roberts. Sequential Bayesian prediction in the presence of changepoints and faults. The Computer Journal, 2010. Advance Access published February 1, 2010.

[4] Roman Garnett, Michael A. Osborne, and Stephen J. Roberts.
Sequential Bayesian prediction in the presence of changepoints. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 345–352, 2009.

[5] Antti Honkela, Charles Girardot, E. Hilary Gustafson, Ya-Hsin Liu, Eileen E. M. Furlong, Neil D. Lawrence, and Magnus Rattray. Model-based method for transcription factor target identification with limited data. PNAS, 107(17):7793–7798, 2010.

[6] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in Neural Information Processing Systems 15, 2003.

[7] T. Oyama, Y. Uno, and S. Hosoe. Analysis of variability of human reaching movements based on the similarity preservation of arm trajectories. In International Conference on Neural Information Processing (ICONIP), pages 923–932, 2007.

[8] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.

[9] Yunus Saatçi, Ryan Turner, and Carl Edward Rasmussen. Gaussian Process change point models. In Proceedings of the 27th Annual International Conference on Machine Learning, pages 927–934, 2010.

[10] E. Solak, R. Murray-Smith, W. E. Leithead, D. J. Leith, and C. E. Rasmussen. Derivative observations in Gaussian process models of dynamic systems. In Sue Becker, Sebastian Thrun, and Klaus Obermayer, editors, NIPS, volume 15, pages 1033–1040, Cambridge, MA, 2003. MIT Press.

[11] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In JMLR: W&CP 5, pages 567–574, 2009.
Adaptive Multi-Task Lasso: with Application to eQTL Detection

Seunghak Lee, Jun Zhu and Eric P. Xing
School of Computer Science, Carnegie Mellon University
{seunghak,junzhu,epxing}@cs.cmu.edu

Abstract

To understand the relationship between genomic variations in a population and complex diseases, it is essential to detect eQTLs, which are associated with phenotypic effects. However, detecting eQTLs remains a challenge due to complex underlying mechanisms and the very large number of genetic loci involved compared to the number of samples. Thus, to address the problem, it is desirable to take advantage of the structure of the data and of prior information about genomic locations, such as conservation scores and transcription factor binding sites. In this paper, we propose a novel regularized regression approach for detecting eQTLs which takes into account related traits simultaneously while incorporating many regulatory features. We first present a Bayesian network for a multi-task learning problem that includes priors on SNPs, making it possible to estimate the significance of each covariate adaptively. Then we find the maximum a posteriori (MAP) estimate of the regression coefficients and estimate the weights of covariates jointly. This optimization procedure is efficient since it can be achieved by using a projected gradient descent and a coordinate descent procedure iteratively. Experimental results on simulated and real yeast datasets confirm that our model outperforms previous methods for finding eQTLs.

1 Introduction

One of the fundamental problems in computational biology is to understand associations between genomic variations and phenotypic effects. The most common genetic variations are single nucleotide polymorphisms (SNPs), and many association studies have been conducted to find SNPs that cause phenotypic variations such as diseases or gene-expression traits [1].
However, association mapping of causal QTLs or eQTLs remains challenging, as the variation of complex traits is a result of contributions of many genomic variations. In this paper, we focus on two important problems in detecting eQTLs. First, we need methods that take advantage of the structure of the data when finding association SNPs in high-dimensional eQTL datasets with p ≫ N, where p is the number of SNPs and N is the sample size. Second, we need techniques that take advantage of prior biological knowledge to improve the performance of detecting eQTLs. To address the first problem, Lasso is a widely used technique for high-dimensional association mapping problems, which can yield a sparse and easily interpretable solution via an ℓ1 regularization [2]. However, despite the success of Lasso, it is limited to considering each trait separately. If we have multiple related traits, it would be beneficial to estimate eQTLs jointly, since we can share information among related traits. For the second problem, Fig. 1 shows some prior knowledge on SNPs in a genome, including transcription factor binding sites (TFBS), 5' UTR and exon, which play important roles in the regulation of genes. For example, TFBS controls the transcription of DNA sequences to mRNAs. Intuitively, if SNPs are located in these regions, they are more likely to be true eQTLs compared to those in regions without such annotations, since they are related to genes or gene regulation.

Figure 1: Examples of prior knowledge on SNPs including transcription factor binding sites, 5' UTR and exon. Arrows represent SNPs and we indicate three genomic annotations on the chromosome. Here association SNPs are denoted by red arrows (best viewed in color), showing that SNPs on regions with regulatory features are more likely to be associated with traits.

Thus, it would be desirable to penalize less the regression coefficients corresponding
to SNPs having significant annotations such as TFBS in a regularized regression model. Again, the widely used Lasso is limited to treating all SNPs equally. This paper presents a novel regularized regression approach, called adaptive multi-task Lasso, to effectively incorporate both the relatedness among multiple gene-expression traits and useful prior knowledge for challenging eQTL detection. Although some methods have been developed for either adaptive or multi-task learning, to the best of our knowledge, adaptive multi-task Lasso is the first method that can consider prior information on SNPs and multi-task learning simultaneously in a single framework. For example, Lirnet uses prior knowledge on SNPs, such as conservation scores, non-synonymous coding and UTR regions, for a better search of association mappings [3]. However, Lirnet considers the average effects of SNPs on gene modules by assuming that association SNPs are shared within a module. This approach is different from multi-task learning, where association SNPs are found for each trait while considering group effects over multiple traits. To find genetic markers that affect correlated traits jointly, the graph-guided fused Lasso [4] was proposed to consider networks over multiple traits within an association analysis. However, graph-guided fused Lasso does not incorporate prior knowledge of genomic locations. Unlike other methods, we define the adaptive multi-task Lasso as finding a MAP estimate of a Bayesian network, which provides an elegant Bayesian interpretation of our approach; the resultant optimization problem is efficiently solved with an alternating minimization procedure. Finally, we present empirical results on both simulated and real yeast eQTL datasets, which demonstrate the advantages of adaptive multi-task Lasso over many other competitors.

2 Problem Definition: Adaptive Multi-task Lasso

Let $X_{ij} \in \{0, 1, 2\}$ denote the number of minor alleles at the j-th SNP of the i-th individual for i = 1, . .
. , N and j = 1, . . . , p. We have K related gene traits, and $Y_i^k$ represents the expression level of the k-th gene of the i-th individual for k = 1, . . . , K. In our setting, we assume that the K traits are related to each other, and we explore this relatedness in a multi-task learning framework. To achieve the relatedness among tasks via grouping effects [5], we can use any clustering algorithm, such as spectral clustering or hierarchical clustering. In association mapping problems, these clusters can be viewed as clusters of genes which form regulatory networks or pathways [4]. We treat the problem of detecting eQTLs as a linear regression problem. The general setting includes one design matrix $X$ and multiple tasks (genes), for k = 1, . . . , K,
$$Y^k = X\beta^k + \epsilon, \tag{1}$$
where $\epsilon$ is standard Gaussian noise. We further assume that the $X_{ij}$ are standardized such that $\sum_i X_{ij}/N = 0$ and $\sum_i X_{ij}^2/N = 1$, and consider a model without an intercept. Now, the open question is how we can devise an appropriate objective function over $\beta$ that could effectively consider the desirable group effects over multiple traits and incorporate useful prior knowledge, as we have stated. To explain the motivation of our work and provide a useful baseline that grounds the proposed approach, we first briefly review the standard Lasso and multi-task Lasso.

2.1 Lasso and Multi-task Lasso

Lasso [2] is a technique for estimating the regression coefficients $\beta$ and has been widely used for association mapping problems. Mathematically, it solves the ℓ1-regularized least squares problem
$$\hat{\beta} = \operatorname*{argmin}_{\beta} \frac{1}{2}\|Y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \delta_j |\beta_j|, \tag{2}$$
where $\lambda$ determines the degree of regularization of nonzero $\beta_j$. The scaling parameters $\delta_j \in [0, 1]$ are usually fixed (e.g., unit ones) or set by cross-validation, which can be very difficult when p is large.
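Problem (2) is commonly solved by coordinate descent with soft-thresholding. The following sketch is our own (not the authors' code); it supports the per-covariate scaling parameters $\delta_j$ and does not assume standardized columns:

```python
import numpy as np

def soft_threshold(x, t):
    """Scalar soft-thresholding operator: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, lam, delta=None, n_iter=100):
    """Coordinate descent for (1/2)||y - X b||^2 + lam * sum_j delta_j |b_j|."""
    N, p = X.shape
    delta = np.ones(p) if delta is None else delta
    b = np.zeros(p)
    r = y.astype(float).copy()          # residual for b = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]         # remove coordinate j from the fit
            b[j] = soft_threshold(X[:, j] @ r, lam * delta[j]) / (X[:, j] @ X[:, j])
            r -= X[:, j] * b[j]         # put the updated coordinate back
    return b
```

With $\lambda = 0$ this reduces to ordinary least squares on each coordinate; increasing $\lambda$ (or $\delta_j$) drives more coefficients exactly to zero.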
Due to the singularity at the origin, the ℓ1 regularization (Lasso penalty) can yield a stable and sparse solution, which is desirable for association mapping problems because in most cases we have p ≫ N and there exists only a small number of eQTLs. It is worth mentioning that Lasso estimates are posterior mode estimates under a multivariate independent Laplace prior for $\beta$ [2]. As we can see from problem (2), the standard Lasso does not distinguish the inputs and regression coefficients from different tasks. In order to capture desirable properties (e.g., shared structures or sparse patterns) among multiple related tasks, the multi-task Lasso was proposed [5], which solves the problem
$$\min_{\beta} \frac{1}{2}\sum_{k=1}^{K} \|Y^k - X\beta^k\|_2^2 + \lambda \sum_{j=1}^{p} \delta_j \|\beta_j\|_2, \tag{3}$$
where $\|\beta_j\|_2 = \sqrt{\sum_k (\beta_j^k)^2}$ is the ℓ2-norm. This model encourages group-wise sparsity across related tasks via the ℓ1/ℓ2 regularization. Again, the solution of Eq. (3) can be interpreted as a MAP estimate under appropriate priors with fixed scaling parameters. Multi-task Lasso has been applied (with some extensions) to perform association analysis [4]. However, as we have stated, the limitation of current approaches is that they do not incorporate useful prior knowledge. The proposed adaptive multi-task Lasso, as will be presented, is an extension of the multi-task Lasso that performs joint group-wise and within-group feature selection and incorporates useful prior knowledge for effective association analysis.

2.2 Adaptive Multi-task Lasso

Now, we formally introduce the adaptive multi-task Lasso. For clarity, we first define the sparse multi-task Lasso with fixed scaling parameters, which will be a sub-problem of the adaptive multi-task Lasso, as we shall see. Specifically, the sparse multi-task Lasso solves the problem
$$\min_{\beta} \frac{1}{2}\sum_{k=1}^{K} \|Y^k - X\beta^k\|_2^2 + \lambda_1 \sum_{j=1}^{p} \theta_j \sum_{k=1}^{K} |\beta_j^k| + \lambda_2 \sum_{j=1}^{p} \rho_j \|\beta_j\|_2, \tag{4}$$
where $\theta$ and $\rho$ are the scaling parameters for the ℓ1 and ℓ1/ℓ2 norms, respectively.
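The ℓ1/ℓ2 penalty in problems (3) and (4) acts on each row $\beta_j$ through its proximal operator, a block (group) soft-threshold that either shrinks the whole row or zeroes it out entirely, which is what produces the group-wise sparsity. A minimal sketch (ours, not from the paper):

```python
import numpy as np

def block_soft_threshold(b, thresh):
    """Proximal operator of thresh * ||b||_2: shrink the norm of the whole
    block by thresh, zeroing the block out when its norm is below thresh."""
    norm = np.linalg.norm(b)
    if norm <= thresh:
        return np.zeros_like(b)
    return (1.0 - thresh / norm) * b
```

Note the contrast with the scalar soft-threshold of the ℓ1 penalty: here all K coefficients of a SNP become zero together, coupling the tasks.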
The regularization parameters $\lambda_1$ and $\lambda_2$ can be determined by cross or holdout validation. Obviously, this model subsumes the standard Lasso and multi-task Lasso, and it has three advantages over previous models. First, unlike the multi-task Lasso, which contains the ℓ1/ℓ2-norm only to achieve group-wise sparsity, the ℓ1-norm in Eq. (4) can achieve sparsity among SNPs within a group. This property is useful when the K tasks are not perfectly related and we need additional sparsity in each block $\|\beta_j\|_2$. In section 4, we demonstrate the usefulness of this blended regularization. The hierarchical penalization [6] can achieve a smooth shrinkage effect for variables within a group, but it cannot achieve within-group sparsity. Second, unlike Lasso, we induce group sparsity across multiple related traits. Finally, unlike Lasso and multi-task Lasso, which treat each $\beta_j$ equally or with a fixed scaling parameter, we can adaptively penalize each $\beta_j$ according to prior knowledge on the covariates, in such a way that SNPs having desirable features are penalized less (see Fig. 1 for details of prior knowledge on SNPs). To incorporate this prior knowledge, we propose to automatically learn the scaling parameters $(\theta, \rho)$ from data. To that end, we define $\theta$ and $\rho$ as mixtures of features on the j-th SNP, i.e.,
$$\theta_j = \sum_t \omega_t f_t^j \quad \text{and} \quad \rho_j = \sum_t \nu_t f_t^j, \tag{5}$$
where $f_t^j$ is the t-th feature for the j-th SNP. For example, $f_t^j$ can be a conservation score of the j-th SNP, or one if the SNP is located on a TFBS and zero otherwise. To avoid scaling issues, we assume each feature is standardized, i.e., $\sum_j f_t^j = 1, \forall t$. Since we are interested in the relative contributions from different features, we further add the constraints that $\sum_t \omega_t = 1$ and $\sum_t \nu_t = 1$. These constraints can be interpreted as a regularization on the feature weights $\omega \ge 0$ and $\nu \ge 0$.
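Computing the adaptive scaling parameters of Eq. (5) is a simple weighted combination of the feature rows. The sketch below (function name ours) assumes a feature matrix with one row per feature and one column per SNP:

```python
import numpy as np

def scaling_params(F, w):
    """theta_j = sum_t w_t * f_t^j (Eq. 5).

    F : (T, p) array, one row per feature (each standardized as in the paper);
    w : (T,) weight vector constrained to the simplex."""
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return w @ F
```

The same function computes $\rho$ from $\nu$; learning $w$ itself is the projected-gradient sub-problem described in Section 3.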
Although using the definitions (5) in problem (4) and jointly estimating $\beta$ and the feature weights $(\omega, \nu)$ can give a solution to adaptive multi-task learning, the resultant method would lack an elegant Bayesian interpretation, which is a desirable property that can make the framework more flexible and easily extensible.

Figure 2: Graphical model representation of adaptive multi-task Lasso, with nodes for the features $f_1, \ldots, f_T$, the weights $\omega$ and $\nu$, the scaling parameters $\theta$ and $\rho$, the coefficients $\beta$, and the data $X$, $Y$.

Recall that the Lasso estimates can be interpreted as MAP estimates under Laplace priors. Similarly, to achieve a framework that enjoys an elegant Bayesian interpretation, we define a Bayesian network and treat the adaptive multi-task learning problem as finding its MAP estimate. Specifically, we build the Bayesian network shown in Fig. 2 in order to compute the MAP estimate of $\beta$ under adaptive scaling parameters $\{\theta, \rho\}$. We define the conditional probability of $\beta$ given the scaling parameters as
$$P(\beta|\theta, \rho) = \frac{1}{Z(\theta, \rho)} \prod_{j=1}^{p} \prod_{k=1}^{K} \exp(-\theta_j |\beta_j^k|) \times \prod_{j=1}^{p} \exp(-\rho_j \|\beta_j\|_2),$$
where $Z(\theta, \rho)$ is a normalization factor, and $P(Y|X, \beta) \sim \mathcal{N}(X\beta, \Sigma)$, where $\Sigma$ is the identity matrix. Although in principle we can treat $\theta$ and $\rho$ as random variables and define a fully Bayesian approach, for simplicity we define $\theta$ and $\rho$ as deterministic functions of $\omega$ and $\nu$ as in Eq. (5). Extension to a fully Bayesian approach is future work. Now we define the adaptive multi-task Lasso as finding the MAP estimate of $\beta$ while simultaneously estimating the feature weights $(\omega, \nu)$, which is equivalent to solving the optimization problem
$$\min_{\beta,\omega,\nu} \frac{1}{2}\sum_{k=1}^{K} \|Y^k - X\beta^k\|_2^2 + \lambda_1 \sum_{j=1}^{p} \theta_j \sum_{k=1}^{K} |\beta_j^k| + \lambda_2 \sum_{j=1}^{p} \rho_j \|\beta_j\|_2 + \log Z(\theta, \rho), \tag{6}$$
where $\omega$ and $\nu$ are related to $\theta$ and $\rho$ through Eq. (5) and are subject to the constraints defined above.
Remark 1. Although we can interpret problem (4) as a MAP estimate of $\beta$ under appropriate priors when the scaling parameters $(\theta, \rho)$ are fixed, it does not enjoy an elegant Bayesian interpretation if we perform joint estimation of $\beta$ and the scaling parameters $(\omega, \nu)$, because it ignores the normalization factors of the appropriate priors. Lee et al. [3] used this approach, in which a regularized regression model is optimized over the scaling parameters and $\beta$ jointly. Therefore, their method does not have an elegant Bayesian interpretation. Moreover, as we have stated, Lee et al. [3] did not consider grouping effects over multiple traits.

Remark 2. Our method also differs from the adaptive Lasso [7], transfer learning with meta-priors [8] and the Bayesian Lasso [9]. First, although both the adaptive Lasso and our method use adaptive parameters for penalizing regression coefficients, we learn the adaptive parameters from prior knowledge on covariates in a multi-task setting, while the adaptive Lasso uses ordinary least squares solutions for the adaptive parameters in a single-task setting. Second, the method of transfer learning with meta-priors [8] is similar to our method in the sense that both use prior knowledge with multiple related tasks. However, we couple related tasks via the ℓ1/ℓ2 penalty, while they couple tasks by transferring hyper-parameters among them. Thus we have group sparsity across tasks as well as sparsity in each group, but they cannot induce group sparsity across different tasks. Finally, the Bayesian Lasso [9] does not have the grouping effects over multiple traits, and the priors used usually do not consider domain knowledge.

3 Optimization: an Alternating Minimization Approach

Now, we solve the adaptive multi-task Lasso problem (6). First, since the normalization factor $Z$ is hard to compute, we use its upper bound, given by
$$Z \le \prod_{j=1}^{p} \left[\int_{\mathbb{R}^K} \exp(-\rho_j \|\beta_j\|_2)\, d\beta_j\right] \prod_{j} \left(\frac{2}{\theta_j}\right)^{K} = \prod_{j=1}^{p} \frac{\pi^{\frac{K-1}{2}}\, \Gamma\!\left(\frac{K+1}{2}\right) 2^K}{(\rho_j K)^K} \prod_{j} \left(\frac{2}{\theta_j}\right)^{K}.$$
(7)

This integral result comes from the normalization constant of the K-dimensional multivariate Laplace distribution [10, 11]. Using this upper bound, the learning problem is to minimize an upper bound of the objective function in problem (6), which will be denoted by $\mathcal{L}(\beta, \omega, \nu)$ henceforth. Although $\mathcal{L}$ is not jointly convex over $\beta$, $\omega$ and $\nu$, it is convex over $\beta$ given $\{\omega, \nu\}$ and convex over $\{\omega, \nu\}$ given $\beta$. We use an alternating optimization procedure which (1) minimizes the upper bound $\mathcal{L}$ of problem (6) over $\{\omega, \nu\}$ with $\beta$ fixed, and (2) minimizes $\mathcal{L}$ over $\beta$ with $\{\omega, \nu\}$ fixed, iterating until convergence [12]. Both sub-problems are convex and can be solved efficiently via a projected gradient descent method and a coordinate descent method, respectively.

For the first step of optimizing $\mathcal{L}$ over $\omega$ and $\nu$, the sub-problem is to solve
$$\min_{\omega \in P_\omega,\, \nu \in P_\nu} \sum_j \sum_k \left(-\log \theta_j + \theta_j |\beta_j^k|\right) + \sum_j \left(-K \log \rho_j + \rho_j \|\beta_j\|_2\right),$$
where $P_\omega \triangleq \{\omega : \sum_t \omega_t = 1,\ \omega_t \ge 0,\ \forall t\}$ is a simplex over $\omega$, and likewise for $P_\nu$. Here $\theta$ and $\rho$ are functions of $\omega$ and $\nu$ as defined in Eq. (5). This constrained problem is convex and can be solved using a gradient descent algorithm combined with a projection onto the simplex sub-space, which can be done efficiently [13]. Since $\omega$ and $\nu$ are not coupled, we can learn each of them separately. The second sub-problem, which optimizes $\mathcal{L}$ over $\beta$ given fixed feature weights $(\omega, \nu)$, is exactly the optimization problem (4). We solve it using a coordinate descent procedure, which has been used to optimize the sparse group Lasso [14]. Our problem is different from the sparse group Lasso in the sense that the sparse group Lasso includes a group penalty over multiple covariates for a single trait, while adaptive multi-task Lasso considers group effects over multiple traits. Here we solve problem (4) using a modified version of the algorithm proposed for the sparse group Lasso.
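The projection onto the simplex $P_\omega$ used in the first sub-problem can be computed exactly via the standard sort-based algorithm, in $O(T \log T)$ time. The sketch below is our own code, not the authors':

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1},
    via the sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / k > 0)[0][-1]   # last valid index
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)
```

Each projected-gradient step on $\omega$ then takes a plain gradient step on the sub-problem objective and maps the result back onto the simplex with this routine.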
As summarized in Algorithm 1, the general optimization procedure is as follows: for each j, we check the group sparsity condition $\beta_j = 0$. If it holds, no update is needed for $\beta_j$. Otherwise, we check whether $\beta_j^k = 0$ for each k. If $\beta_j^k = 0$, no update is needed for $\beta_j^k$; otherwise, we optimize problem (4) over $\beta_j^k$ with all other coefficients fixed. This one-dimensional optimization problem can be solved efficiently by a standard optimization method. The procedure is continued until a convergence condition is met. More specifically, we first obtain the optimality conditions for problem (4) by computing the subgradient of its objective function with respect to $\beta_j^k$ and setting it to zero:
$$-X_j^T(Y^k - X\beta^k) + \lambda_2 \rho_j g_j^k + \lambda_1 \theta_j h_j^k = 0, \tag{8}$$
where $g$ and $h$ are subgradients of the ℓ1/ℓ2-norm and the ℓ1-norm, respectively. Note that $g_j^k = \beta_j^k / \|\beta_j\|_2$ if $\beta_j \ne 0$, otherwise $\|g_j\|_2 \le 1$; and $h_j^k = \operatorname{sign}(\beta_j^k)$ if $\beta_j^k \ne 0$, otherwise $h_j^k \in [-1, 1]$. Then, we check the group sparsity condition $\beta_j = 0$. To do so, we set $\beta_j = 0$ in Eq. (8), obtaining
$$X_j^T Y^k - X_j^T \sum_{r \ne j} X_r \beta_r^k = \lambda_2 \rho_j g_j^k + \lambda_1 \theta_j h_j^k, \quad \text{and} \quad \|g_j\|_2^2 = \frac{1}{\lambda_2^2 \rho_j^2} \sum_{k=1}^{K} \Big(X_j^T Y^k - X_j^T \sum_{r \ne j} X_r \beta_r^k - \lambda_1 \theta_j h_j^k\Big)^2.$$
According to the subgradient conditions, we need a $g_j$ that satisfies the inequality $\|g_j\|_2^2 < 1$; otherwise, $\beta_j$ will be non-zero. Since $g_j$ is a function of $h_j$, it suffices to check whether the minimal squared ℓ2-norm of $g_j$ is less than 1. Therefore, we minimize $\|g_j\|_2^2$ with respect to $h_j$, which gives the optimal $h_j$ as
$$h_j^k = \begin{cases} \dfrac{c_j^k}{\lambda_1 \theta_j} & \text{if } \Big|\dfrac{c_j^k}{\lambda_1 \theta_j}\Big| \le 1, \\[4pt] \operatorname{sign}\Big(\dfrac{c_j^k}{\lambda_1 \theta_j}\Big) & \text{otherwise}, \end{cases} \tag{9}$$
where $c_j^k = X_j^T Y^k - X_j^T \sum_{r \ne j} X_r \beta_r^k$. If the minimal $\|g_j\|_2^2$ is less than 1, then $\beta_j$ is zero and no update is needed; otherwise, we continue to the next step of checking whether $\beta_j^k = 0$ for each k, as follows. Again, we start by assuming $\beta_j^k$ is zero. By setting $\beta_j^k = 0$ in Eq.
(8), we have

X_j^T Y^k - X_j^T \sum_{r \neq j} X_r \beta_r^k = \lambda_1 \theta_j h_j^k, \quad \text{and} \quad h_j^k = \frac{1}{\lambda_1 \theta_j} \Big( X_j^T Y^k - X_j^T \sum_{r \neq j} X_r \beta_r^k \Big).

By the definition of the subgradient h_j^k, it must satisfy |h_j^k| < 1; otherwise, β_j^k is non-zero. This check is easily done. If, after the check, β_j^k ≠ 0, problem (4) becomes a one-dimensional optimization problem with respect to β_j^k, and the solution can be obtained using existing optimization algorithms (e.g., the optimize function in R); we used a majorize-minimize algorithm with gradient descent [15]. With the above two steps, we iteratively optimize (ω, ν) with β fixed and optimize β with the feature weights fixed, until convergence. Note that the parameters λ1 and λ2 in Eq. (4), which determine the sparsity levels, are chosen by cross-validation or hold-out validation.

Algorithm 1: Optimization algorithm for Equation (4) with fixed scaling parameters.

Input: X ∈ R^{N×p}; Y ∈ R^{N×K}; θ ∈ R^p; ρ ∈ R^p; β_init ∈ R^{p×K}
Output: β ∈ R^{p×K}
β ← β_init
repeat until convergence:
    for j ← 1 to p do
        m ← (1 / (λ_2^2 ρ_j^2)) Σ_{k=1}^K (c_j^k − λ_1 θ_j h_j^k)^2, with c_j^k and h_j^k computed as in Eq. (9)
        if m < 1 then
            β_j^k ← 0 for all k = 1, …, K
        else
            for k ← 1 to K do
                q ← (1 / (λ_1 θ_j)) |X_j^T (Y^k − Xβ^k) + X_j^T X_j β_j^k|
                if q < 1 then
                    β_j^k ← 0
                else
                    β_j^k ← argmin_{β_j^k} (1/2) ‖Y^k − Xβ^k‖_2^2 + λ_1 θ_j |β_j^k| + λ_2 ρ_j ‖β_j‖_2
                end
            end
        end
    end

4 Simulation Study

To confirm the behavior of our model, we ran the adaptive multi-task Lasso and other methods on a simulated dataset (p = 100, K = 10). We first randomly selected 100 SNPs from the 114 yeast genotypes in the yeast eQTL dataset [16]. Following the simulation study in Kim et al. [4], we assume that some SNPs affect biological networks involving multiple traits, and true causal SNPs are selected by the following procedure. Three sets of four randomly selected SNPs are associated with the three trait clusters (1−3), (4−6), and (7−10), respectively.
One SNP is associated with the two clusters (1−3) and (4−6), and one causal SNP is associated with all traits (1−10). For all associated SNPs we set an identical association strength, ranging from 0.3 to 1. Traits are generated as Y^k = Xβ^k + ε, for all k = 1, …, 10, where ε follows the standard normal distribution. We create 10 features (f1−f10), of which six are continuous and four are discrete. For the first three continuous features (f1−f3), the feature value is drawn from s(N(2, 1)) if the SNP is associated with any trait, and from s(N(1, 1)) otherwise, where s(x) = 1/(1 + exp(x)) is the sigmoid function. For the next three continuous features (f4−f6), the value is drawn from s(N(2, 0.5)) if the SNP is associated with any trait, and from s(N(1, 0.5)) otherwise. Finally, for the discrete features (f7−f10), the value is set to s(2) with probability 0.8 if the SNP is associated with any trait, and to s(1) otherwise. We standardize all features.

Figure 3: The β matrix estimated by different methods. For visualization, we present normalized absolute values of the regression coefficients; darker colors imply stronger association with traits. For each matrix, the X-axis represents traits (1−10) and the Y-axis represents SNPs (1−100). The true β is shown on the left.

Fig. 3 shows the β matrix estimated by various methods: AML (adaptive multi-task Lasso), SML (sparse multi-task Lasso, i.e., AML without adaptive weights), A+ℓ1/ℓ2 (AML without the Lasso penalty), Single SNP [17], Lasso, and ℓ1/ℓ∞ (multi-task learning with the ℓ1/ℓ∞ norm).
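The trait-generation procedure described above can be sketched as follows. This is an illustrative reconstruction: the specific index layout of the causal SNPs is our own choice, since the paper selects them at random.

```python
import numpy as np

def simulate_traits(X, strength=0.3, seed=0):
    """Generate K = 10 traits as Y^k = X beta^k + eps, eps ~ N(0, 1),
    with trait clusters (1-3), (4-6), (7-10): four causal SNPs per
    cluster, one SNP shared by the first two clusters, one global SNP."""
    rng = np.random.default_rng(seed)
    N, p = X.shape
    K = 10
    beta = np.zeros((p, K))
    causal = rng.choice(p, size=14, replace=False)
    clusters = [slice(0, 3), slice(3, 6), slice(6, 10)]
    for c, cl in enumerate(clusters):
        for j in causal[4 * c:4 * (c + 1)]:   # 4 causal SNPs per cluster
            beta[j, cl] = strength
    beta[causal[12], 0:6] = strength          # shared by clusters (1-3), (4-6)
    beta[causal[13], :] = strength            # associated with all 10 traits
    Y = X @ beta + rng.standard_normal((N, K))
    return Y, beta
```

With X a 114-by-100 genotype matrix as in the paper, this yields exactly 14 causal SNP rows in β, matching the count implied by the description above (12 cluster SNPs plus the shared and global SNPs).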
Note that the regularization parameters (e.g., λ1 and λ2 for AML) were chosen by hold-out validation, and we set the association strength to 0.3. We also applied hierarchical clustering with cutoff criterion 0.8 prior to running AML, SML, A+ℓ1/ℓ2, and ℓ1/ℓ∞; Single SNP and Lasso were run for each trait separately. We investigated the effect of the Lasso penalty in our model by comparing the results of AML and A+ℓ1/ℓ2. While AML is slightly more effective than A+ℓ1/ℓ2 at finding associated SNPs, both work very well for this task. This is not surprising, since hierarchical clustering reproduced the true trait clusters, so the true β could be detected without considering single-SNP-level sparsity within each group. To further validate the effectiveness of the Lasso penalty, we ran AML and A+ℓ1/ℓ2 without the a priori clustering step. Interestingly, AML could still pick out the correct SNP-trait associations thanks to the Lasso penalty, whereas A+ℓ1/ℓ2 failed to do so (see Fig. 5c,d for the performance comparison). While the Lasso penalty did not contribute significantly when a priori clusters were available, it is beneficial to include it when the quality of the clustering is not guaranteed. Comparing the results of AML and SML in Fig. 3, we observe that the adaptive weights improve performance significantly: they help not only to reduce false positives but also to increase true positives.

Figure 4: Learned feature weights ω.

Fig. 4 shows the learned feature weights ω (ν is almost identical to ω and is not shown here). The results are based on 100 simulations for each association strength 0.3, 0.5, 0.8, and 1, and half an error bar represents one standard deviation from the mean. We observe that the discrete features f7−f10 have the highest weights, while the lowest weights are assigned to f1−f3.
These weights are reasonable because f1−f3 are drawn from a Gaussian with a larger standard deviation (STD: 1) than that of features f4−f6 (STD: 0.5). Also, the discrete features are the most important, since they discriminate the true associated SNPs with a high probability of 0.8.

Figure 5: ROC curves of the various methods as the association strength varies: (a) 0.3 and (b) 0.5 on clustered data; (c) 0.3 and (d) 0.5 on the input dataset. (a,b) Results on clustered data, where correct groups of gene traits are found using hierarchical clustering (cutoff = 0.8). (c,d) Results on the input dataset without using a clustering algorithm.

We compare the sensitivity and specificity of our model with those of the other methods. In Fig. 5, we generated ROC curves for association strengths of 0.3 and 0.5. Fig. 5a,b show the results with a priori hierarchical clustering, and Fig. 5c,d the results without such a preprocessing step. Using hierarchical clustering, we correctly found the three clusters of gene traits at cutoff 0.8. In Fig. 5, when the association strength is small (i.e., 0.3), AML and A+ℓ1/ℓ2 significantly outperform the other methods. As the association strength increases, the performance of the multi-task learning methods improves quickly, while methods based on a single trait, such as Lasso and Single SNP, show only a gradual increase in performance. We computed test errors on 100 simulated datasets, using 30 samples for testing and 84 samples for training. On average, AML achieved the best test error of 0.9427; the other methods, in order of test error, are A+ℓ1/ℓ2 (0.9506), SML (1.0436), ℓ1/ℓ∞ (1.0578), and Lasso (1.1080).

5 Yeast eQTL dataset

We analyze the yeast eQTL dataset [16], which contains expression levels of 5,637 genes and 2,956 SNPs.
The genotype data comprise genetic variants of 114 yeast strains that are progenies of the standard laboratory strain (BY) and a wild strain (RM). We used the 141 modules given by Lee et al. [3] as groups of gene traits, and extracted 1,260 unique SNPs from the 2,956 SNPs for our analysis. As prior biological knowledge on SNPs for the adaptive multi-task Lasso, we downloaded 12 features from the Saccharomyces Genome Database (http://www.yeastgenome.org), including 11 discrete features and 1 continuous feature (conservation score). For a discrete feature, we set its value to f_t^j = s(2) if the feature is found on the j-th SNP, and f_t^j = s(1) otherwise. For the conservation score, we set f_t^j = s(score). All features are then standardized.

Figure 6: Learned weights ω on the yeast eQTL dataset.

Fig. 6 shows the ω learned from the yeast eQTL dataset (ν is almost identical to ω). The features are ncRNA (f1), noncoding exon (f2), snRNA (f3), tRNA (f4), intron (f5), binding site (f6), 5' UTR intron (f7), LTR retrotransposon (f8), ARS (f9), snoRNA (f10), transposable element gene (f11), and conservation score (f12). Five discrete features turn out to be important (ncRNA, snRNA, binding site, 5' UTR intron, and snoRNA), as well as the one continuous feature, the conservation score. These results agree with biological insight. For example, ncRNA, snRNA, and snoRNA are potentially important for gene regulation since they are functional RNA molecules with a variety of roles, including transcriptional regulation [18]. Also, the conservation score is expected to be significant, since a mutation in a conserved region is more likely to result in phenotypic effects.
Figure 7: The number of traits associated with each of the 121 SNPs on chromosomes 1 and 2 in the yeast eQTL analysis, overlaid with five significant pieces of prior knowledge on SNPs (ncRNA, snRNA, binding site, 5' UTR intron, and conservation score). For the four discrete priors we set the value to 1 if annotated and 0 otherwise. Binding sites and regions with no associated traits are denoted by long green and short blue arrows, respectively.

Fig. 7 shows the number of associated genes for the SNPs on chromosomes 1 and 2, superimposed on the five significant features. We see that the association mapping results were affected by both the priors and the data. For example, the genomic region indicated by the short blue arrow shows weak association with traits; there the conservation score is low and no other annotations exist. We can also see that three SNPs located on binding sites affect a larger number of gene traits (see the long green arrows). As an example of biological analysis, we investigate these three associated SNPs. They are located on telomeres (chr1:483, chr1:229090, chr2:9425 (chromosome:coordinate)), and these genomic locations are in cis to Abf1p (autonomously replicating sequence binding factor-1) binding sites. Abf1p is known to act as a global transcriptional regulator in yeast [19]. Thus, these genomic regions in telomeres are good candidates for novel putative eQTL hotspots that regulate the expression levels of many genes; they were not reported as eQTL hotspots in Yvert et al. [20].

6 Conclusions

In this paper, we proposed a novel regularized regression model, referred to as the adaptive multi-task Lasso, which takes multiple traits into account simultaneously while the weights of different covariates are learned adaptively from prior knowledge and data.
Our simulation results show that our model outperforms other methods through its combined ℓ1 and ℓ1/ℓ2 penalties over multiple related genes, and that the adaptively learned regularization in particular significantly improves performance. In our experiments on the yeast eQTL dataset, we identified three putative eQTL hotspots, with biological support, where SNPs are associated with a large number of genes.

Acknowledgments

This work was done under support from NIH 1 R01 GM087694-01, NIH 1RC2HL101487-01 (ARRA), AFOSR FA9550010247, ONR N0001140910758, NSF Career DBI-0546594, NSF IIS0713379, and an Alfred P. Sloan Fellowship awarded to E.X.

References

[1] R. Sladek, G. Rocheleau, J. Rung, C. Dina, L. Shen, D. Serre, P. Boutin, D. Vincent, A. Belisle, S. Hadjadj, et al. A genome-wide association study identifies novel risk loci for type 2 diabetes. Nature, 445(7130):881–885, 2007.
[2] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[3] S. I. Lee, A. M. Dudley, D. Drubin, P. A. Silver, N. J. Krogan, D. Pe'er, and D. Koller. Learning a prior on regulatory potential from eQTL data. PLoS Genetics, 5(1):e1000358, 2009.
[4] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009.
[5] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, Department of Statistics, University of California, Berkeley, 2006.
[6] M. Szafranski, Y. Grandvalet, and P. Morizet-Mahoudeaux. Hierarchical penalization. Advances in Neural Information Processing Systems, 20:1457–1464, 2007.
[7] H. Zou. The adaptive Lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429, 2006.
[8] S. I. Lee, V. Chatalbashev, D. Vickrey, and D. Koller. Learning a meta-level prior for feature relevance from multiple related tasks. In Proceedings of the 24th International Conference on Machine Learning, pages 489–496, 2007.
[9] T. Park and G. Casella. The Bayesian Lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.
[10] B. M. Marlin, M. Schmidt, and K. P. Murphy. Group sparse priors for covariance estimation. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pages 383–392, 2009.
[11] E. Gómez, M. A. Gómez-Villegas, and J. M. Marín. A multivariate generalization of the power exponential family of distributions. Communications in Statistics - Theory and Methods, 27(3):589–600, 1998.
[12] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. Advances in Neural Information Processing Systems, 19:801–808, 2007.
[13] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272–279, 2008.
[14] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group Lasso and a sparse group Lasso. arXiv:1001.0736v1 [math.ST], 2010.
[15] T. T. Wu and K. Lange. Coordinate descent algorithms for Lasso penalized regression. Annals of Applied Statistics, 2(1):224–244, 2008.
[16] R. B. Brem and L. Kruglyak. The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proceedings of the National Academy of Sciences of the United States of America, 102(5):1572–1577, 2005.
[17] S. Purcell, B. Neale, K. Todd-Brown, L. Thomas, M. A. R. Ferreira, D. Bender, J. Maller, P. Sklar, P. I. W. De Bakker, M. J. Daly, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics, 81(3):559–575, 2007.
[18] G. Storz. An expanding universe of noncoding RNAs. Science, 296(5571):1260–1263, 2002.
[19] T. Miyake, J. Reese, C. M. Loch, D. T. Auble, and R. Li. Genome-wide analysis of ARS (autonomously replicating sequence) binding factor 1 (Abf1p)-mediated transcriptional regulation in Saccharomyces cerevisiae. Journal of Biological Chemistry, 279(33):34865–34872, 2004.
[20] G. Yvert, R. B. Brem, J. Whittle, J. M. Akey, E. Foss, E. N. Smith, R. Mackelprang, L. Kruglyak, et al. Trans-acting regulatory variation in Saccharomyces cerevisiae and the role of transcription factors. Nature Genetics, 35(1):57–64, 2003.
Categories and Functional Units: An Infinite Hierarchical Model for Brain Activations Danial Lashkari Ramesh Sridharan Polina Golland Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 {danial, rameshvs, polina}@csail.mit.edu Abstract We present a model that describes the structure in the responses of different brain areas to a set of stimuli in terms of stimulus categories (clusters of stimuli) and functional units (clusters of voxels). We assume that voxels within a unit respond similarly to all stimuli from the same category, and design a nonparametric hierarchical model to capture inter-subject variability among the units. The model explicitly encodes the relationship between brain activations and fMRI time courses. A variational inference algorithm derived based on the model learns categories, units, and a set of unit-category activation probabilities from data. When applied to data from an fMRI study of object recognition, the method finds meaningful and consistent clusterings of stimuli into categories and voxels into units. 1 Introduction The advent of functional neuroimaging techniques, in particular fMRI, has for the first time provided non-invasive, large-scale observations of brain processes. Functional imaging techniques allow us to directly investigate the high-level functional organization of the human brain. Functional specificity is a key aspect of this organization and can be studied along two separate dimensions: 1) which sets of stimuli or cognitive tasks are treated similarly by the brain, and 2) which areas of the brain have similar functional properties. For instance, in the studies of visual object recognition the first question defines object categories intrinsic to the visual system, while the second characterizes regions with distinct profiles of selectivity. 
To answer these questions, fMRI studies examine the responses of all relevant brain areas to as many stimuli as possible within the domain under study. Novel methods of analysis are needed to extract the patterns of functional specificity from the resulting high-dimensional data. Clustering is a natural choice for answering the questions we pose here regarding functional specificity with respect to both stimuli and voxels. Applying clustering in the space of stimuli identifies stimuli that induce similar patterns of response, and has recently been used to discover object categories from responses in the human inferior temporal cortex [1]. Applying clustering in the space of brain locations seeks voxels that show similar functional responses [2, 3, 4, 5]. We will refer to a cluster of voxels with similar responses as a functional unit. In this paper, we present a model to investigate the interactions between these two aspects of functional specificity. We make the natural assumptions that functional units are organized based on their responses to the categories of stimuli, and that the categories of stimuli can be characterized by the responses they induce in the units. Therefore, categories and units are interrelated and informative about each other. Our generative model simultaneously learns the specificity structure in the space of both stimuli and voxels. We use a block co-clustering framework to model the relationship between clusters of stimuli and brain locations [6]. In order to account for variability across subjects in a group study, we assume a hierarchical model where a group-level structure generates the clustering of voxels in different subjects (Fig. 1). A nonparametric prior enables the model to search the space of different numbers of clusters. Furthermore, we tailor the method specifically to brain imaging by including a model of fMRI signals [7].

Figure 1: Co-clustering fMRI data across subjects. The first row shows a hypothetical data set of brain activations. The second row shows the same data after co-clustering, where rows and columns are re-ordered based on membership in categories and functional units.

Most prior work applies existing machine learning algorithms to functional neuroimaging data. In contrast, our Bayesian integration of the co-clustering model with the model of fMRI signals informs each level of the model about the uncertainties of inference in the other levels. As a result, the algorithm is better suited to handling the high levels of noise in fMRI observations. We apply our method to a group fMRI study of visual object recognition in which 8 subjects are presented with 69 distinct images. The algorithm finds a clustering of the set of images into a number of categories, along with a clustering of voxels in different subjects into units. We find that the learned categories and functional units are indeed meaningful and consistent.

Related Work. Different variants of co-clustering algorithms have found applications in biological data analysis [8, 9, 10]. Our model is closely related to probabilistic formulations of co-clustering [11, 12] and to the application of Infinite Relational Models to co-clustering [13]. Prior work applying advanced machine learning techniques to fMRI has mainly focused on supervised learning, which requires prior knowledge of stimulus categories [14]. Unsupervised learning methods such as Independent Component Analysis (ICA) have also been applied to fMRI data to decompose it into a set of spatial and temporal (functional) components [15, 16]. ICA assumes an additive model for the data and allows spatially overlapping components. However, neither of these assumptions is appropriate for studying functional specificity.
For instance, an fMRI response that is a weighted combination of a component selective for category A and another component selective for category B may be better described by selectivity for a new category (the union of both). We also note that Formal Concept Analysis, which is closely related to the idea of block co-clustering, has recently been applied to neural data from visual studies in monkeys [17].

2 Model

Our model consists of three main components:
I. a co-clustering structure expressing the relationship between the clustering of stimuli (categories) and the clustering of brain voxels (functional units);
II. a hierarchical structure expressing the variability among functional units across subjects;
III. a signal model expressing the relationship between voxel activations and observed fMRI time courses.
The co-clustering level is the key element of the model that encodes the interactions between stimulus categories and functional units. Due to the differences in the level of noise among subjects, we do not expect to find the same set of functional units in all subjects. We employ the structure of Hierarchical Dirichlet Processes (HDP) [18] to account for this fact. The first two components of the model jointly explain how different brain voxels are activated by each stimulus in the experiment.
The third component of the model links these binary activations to the observed fMRI time courses of voxels.

Figure 2: The graphical representation of our model, where the set of voxel response variables (a_ji, e_jih, λ_ji) and their corresponding prior parameters (μ_j^a, σ_j^a, μ_h^e, σ_h^e, κ_j, θ_j) are denoted by η_ji and ϑ_j, respectively. Notation:
x_jis: activation of voxel i in subject j to stimulus s
z_ji: unit membership of voxel i in subject j
c_s: category membership of stimulus s
φ_{k,l}: activation probability of unit k to category l
β_j: unit prior weights in subject j
π: group-level unit prior weights
α, γ: unit HDP scale parameters
ρ: category prior weights
χ: category DP scale parameter
τ: prior parameters for activation probabilities φ
y_jit: fMRI signal of voxel i in subject j at time t
e_jih: nuisance effect h for voxel i in subject j
a_ji: amplitude of activation of voxel i in subject j
λ_ji: reciprocal of the noise variance for voxel i in subject j
μ_j^a, σ_j^a: prior parameters for response amplitudes
μ_jh^e, σ_jh^e: prior parameters for nuisance factors
κ_j, θ_j: prior parameters for noise variance

Sec. 2.1 presents the hierarchical co-clustering part of the model, which includes both the first and the second components above. Sec. 2.2 presents the fMRI signal model that integrates the estimation of voxel activations with the rest of the model. Sec. 2.3 outlines the variational algorithm that we employ for inference. Fig. 2 shows the graphical model for the joint distribution of the variables in the model.

2.1 Nonparametric Hierarchical Co-clustering Model

Let x_jis ∈ {0, 1} be an activation variable that indicates whether stimulus s activates voxel i in subject j. The co-clustering model describes the distribution of voxel activations x_jis based on the category and the functional unit to which stimulus s and voxel i belong. We assume that all voxels within functional unit k have the same probability φ_{k,l} of being activated by a particular category l of stimuli.
Let z = {z_ji}, z_ji ∈ {1, 2, …}, be the set of unit memberships of voxels and c = {c_s}, c_s ∈ {1, 2, …}, the set of category memberships of the stimuli. Our model of co-clustering assumes:

x_{jis} \mid z_{ji}, c_s, \phi \;\overset{\text{i.i.d.}}{\sim}\; \mathrm{Bernoulli}(\phi_{z_{ji}, c_s}). \qquad (1)

The set φ = {φ_{k,l}} of the probabilities of activation of functional units by different categories summarizes the structure in the responses of voxels to stimuli. We use the stick-breaking formulation of the HDP [18] to construct an infinite hierarchical prior for voxel unit memberships:

z_{ji} \mid \beta_j \;\overset{\text{i.i.d.}}{\sim}\; \mathrm{Mult}(\beta_j), \qquad (2)
\beta_j \mid \pi \;\overset{\text{i.i.d.}}{\sim}\; \mathrm{Dir}(\alpha \pi), \qquad (3)
\pi \mid \gamma \;\sim\; \mathrm{GEM}(\gamma). \qquad (4)

Here, GEM(γ) is a distribution over infinitely long vectors π = [π_1, π_2, …]^T, named after Griffiths, Engen, and McCloskey [19]. This distribution is defined as:

\pi_k = v_k \prod_{k'=1}^{k-1} (1 - v_{k'}), \qquad v_k \mid \gamma \;\overset{\text{i.i.d.}}{\sim}\; \mathrm{Beta}(1, \gamma), \qquad (5)

where the components of the generated vectors π sum to one with probability 1. In subject j, voxel memberships are distributed according to the subject-specific weights β_j of the functional units. The weights β_j are in turn generated by a Dirichlet distribution centered around π, with a degree of variability determined by α. Therefore, π acts as the group-level expected value of the subject-specific weights. With this prior over the unit memberships of voxels z, the model in principle allows an infinite number of functional units; however, for any finite set of voxels, a finite number of units is sufficient to include all voxels. We do not impose a similar hierarchical structure on the clustering of stimuli among subjects. Conceptually, we assume that stimulus categories reflect how the human brain has evolved to organize the processing of stimuli within a system and are therefore identical across subjects. Even if some variability exists, it would be hard to learn such a complex structure from data, since we can present relatively few stimuli in each experiment.
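A truncated draw from GEM(γ) via the stick-breaking construction of Eq. (5) can be sketched as follows; the truncation level K is an illustrative device, since the true vector is infinite.

```python
import numpy as np

def gem_stick_breaking(gamma, K, rng):
    """Truncated sample of pi ~ GEM(gamma) by stick breaking, Eq. (5):
    pi_k = v_k * prod_{k' < k} (1 - v_{k'}),  v_k ~ Beta(1, gamma)."""
    v = rng.beta(1.0, gamma, size=K)
    # length of stick remaining before each break: prod_{k' < k} (1 - v_{k'})
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining
```

For moderate γ (e.g., γ = 3, as in the synthetic experiments of Sec. 3), most of the mass falls on the first few components, which is why a finite number of units suffices for any finite set of voxels.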
Hence, we assume an identical clustering c in the space of stimuli for all subjects, with a Dirichlet process prior:

c_s \mid \rho \;\overset{\text{i.i.d.}}{\sim}\; \mathrm{Mult}(\rho), \qquad \rho \mid \chi \;\sim\; \mathrm{GEM}(\chi). \qquad (6)

Finally, we construct the prior distribution for the unit-category activation probabilities φ:

\phi_{k,l} \;\overset{\text{i.i.d.}}{\sim}\; \mathrm{Beta}(\tau_1, \tau_2). \qquad (7)

2.2 Model of fMRI Signals

Functional MRI yields a noisy measure of average neuronal activation in each brain voxel at different time points. The standard linear time-invariant model of fMRI signals expresses the contribution of each stimulus by the convolution of the spike train of stimulus onsets with a hemodynamic response function (HRF) [20]. The HRF peaks at about 6–9 seconds, modeling an intrinsic delay between the underlying neural activity and the measured fMRI signal. Accordingly, the measured signal y_jit in voxel i of subject j at time t is modeled as:

y_{jit} = \sum_s b_{jis} G_{st} + \sum_h e_{jih} F_{ht} + \epsilon_{jit}, \qquad (8)

where G_st is the model regressor for stimulus s, F_ht represents nuisance factor h at time t, such as a baseline or a linear temporal trend, and ε_jit is Gaussian noise. We make the simplifying assumption throughout that ε_jit ~ Normal(0, λ_ji^{-1}), i.i.d. In the absence of any priors, the response b_jis of voxel i to stimulus s can be estimated by solving the least-squares regression problem. Unfortunately, the fMRI signal does not have a meaningful scale and may vary greatly across trials and experiments. In order to use this data for inferences about brain function across subjects, sessions, and stimuli, we need to transform it into a standard and meaningful space. The binary activation variables x, introduced in the previous section, achieve this transformation by assuming that in response to any stimulus a voxel is either in an active or a non-active state, similar to [7]. If voxel i is activated by stimulus s, i.e., if x_jis = 1, its response takes a positive value a_ji that specifies the voxel-specific amplitude of response; otherwise, its response remains 0.
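The regressors G_st of Eq. (8) are built by convolving each stimulus onset train with an HRF. A sketch using a common double-gamma HRF shape follows; our parameter choices (peak shape 6, undershoot shape 16, TR of 2 s) are illustrative defaults, not the ones used in the study.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak_shape=6.0, under_shape=16.0, under_ratio=6.0):
    """Double-gamma HRF: peaks at roughly 6 s, with a late undershoot."""
    return gamma.pdf(t, peak_shape) - gamma.pdf(t, under_shape) / under_ratio

def build_regressors(onsets, n_time, tr=2.0, hrf_len=32.0):
    """G[s, t]: convolution of the spike train of onsets of stimulus s
    (given as scan indices) with the HRF, sampled every `tr` seconds."""
    h = hrf(np.arange(0.0, hrf_len, tr))
    G = np.zeros((len(onsets), n_time))
    for s, ons in enumerate(onsets):
        spikes = np.zeros(n_time)
        spikes[np.asarray(ons)] = 1.0
        G[s] = np.convolve(spikes, h)[:n_time]
    return G
```

Given G (and nuisance regressors F), the unconstrained responses b_jis in Eq. (8) can then be obtained by ordinary least squares, as noted in the text.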
We can write b_jis = a_ji x_jis and assume that a_ji represents uninteresting variability in the fMRI signal. When making inferences about the binary activation variable x_jis, we consider not only the response but also the level of noise and the responses to other stimuli. Therefore, the binary activation variables can be directly compared across different subjects, sessions, and experiments. We assume the following priors on the voxel response variables:

e_{jih} \sim \mathrm{Normal}(\mu^e_{jh}, \sigma^e_{jh}), \qquad (9)
a_{ji} \sim \mathrm{Normal}_+(\mu^a_j, \sigma^a_j), \qquad (10)
\lambda_{ji} \sim \mathrm{Gamma}(\kappa_j, \theta_j), \qquad (11)

where Normal_+ denotes a normal distribution constrained to take only positive values.

2.3 Algorithm

The size of common fMRI data sets and the space of hidden variables in our model make stochastic inference methods, such as Gibbs sampling, prohibitively slow. Currently, there is no faster split-merge-type sampling technique that can be applied to hierarchical nonparametric models [18]. We therefore choose a variational Bayesian inference scheme, which is known to yield faster algorithms. To formulate the inference for the hierarchical unit memberships, we closely follow the derivation of the collapsed variational HDP approximation [21]. We integrate over the subject-specific unit weights β = {β_j} and introduce a set of auxiliary variables r = {r_jk} that represent the number of tables corresponding to unit (dish) k in subject (restaurant) j, according to the Chinese restaurant franchise formulation of the HDP [18]. Let h = {x, z, c, r, a, φ, e, λ, v, u} denote the set of all unobserved variables. Here, v = {v_k} and u = {u_l} are the stick-breaking fractions corresponding to the distributions π and ρ, respectively. We approximate the posterior distribution of the hidden variables given the observed data, p(h|y), by a factorizable distribution q(h). The variational method minimizes the Gibbs free energy F[q] = E[log q(h)] − E[log p(y, h)], where E[·] denotes the expected value with respect to the distribution q.
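Sampling from the rectified prior Normal_+ of Eq. (10) can be done with a truncated normal; the helper below is our own illustration of that prior, not part of the inference algorithm.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_normal_plus(mu, sigma, size, rng):
    """Draw from Normal+(mu, sigma): a normal distribution truncated
    to positive values, as used for the response amplitudes a_ji."""
    a = (0.0 - mu) / sigma  # lower truncation bound, standardized
    return truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                         size=size, random_state=rng)
```

Note that truncation at zero shifts the mean of the draws above μ, which matters when matching the hyperparameters μ_j^a, σ_j^a to statistics of least-squares estimates.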
We assume a distribution q of the form:

q(h) = q(r \mid z) \prod_k q(v_k) \prod_l q(u_l) \prod_{k,l} q(\phi_{k,l}) \prod_s q(c_s) \cdot \prod_{j,i} \Big[ q(a_{ji}) \, q(\lambda_{ji}) \, q(z_{ji}) \prod_s q(x_{jis}) \prod_h q(e_{jih}) \Big].

We apply coordinate descent in the space of q(·) to minimize the free energy. Since we explicitly account for the dependency of the auxiliary variables on the unit memberships in the posterior, we can derive closed-form update rules for all hidden variables. Due to space constraints, we present the update rules and their derivations in the Supplementary Material. Iterative application of the update rules leads to a local minimum of the Gibbs free energy. Since variational solutions are known to be biased toward their initial configurations, the initialization phase is critical to the quality of the results. To initialize the activation variables x_jis, we estimate b_jis in Eq. (8) using least-squares regression and, for each voxel, normalize the estimates to values between 0 and 1 using the voxel-wise maximum and minimum. We use the estimates of b to also initialize λ and e. For memberships, we initialize q(z) by introducing the voxels one by one, in random order, into the collapsed Gibbs sampling scheme [18] constructed for our model, with each stimulus treated as a separate category and the initial x assumed known. We initialize the category memberships c by clustering the voxel responses across all subjects. Finally, we set the hyperparameters of the fMRI model to match the corresponding statistics computed by least-squares regression on the data.
3 Results

Figure 3: Comparison between our nonparametric Bayesian co-clustering algorithm (NBC) and Block Average Co-clustering (BAC) on synthetic data. Both classification accuracy (CA) and normalized mutual information (NMI) are reported.

We demonstrate the performance of the model and the inference algorithm on both synthetic and real data. As a baseline algorithm for comparison, we use the Block Average Co-clustering (BAC) algorithm [6] with the Euclidean distance. First, we show that the hierarchical structure of our algorithm enables us to retrieve the cluster membership more accurately in synthetic group data. Then, we present the results of our method in an fMRI study of visual object recognition.

3.1 Synthetic Data

We generate synthetic data from a stochastic process defined by our model with the set of parameters γ = 3, α = 100, χ = 1, τ1 = τ2 = 1, Nj = 1000 voxels, S = 100 stimuli, and J = 4 subjects. For the model of the fMRI signals, we use parameters that are representative of our experimental setup and the corresponding hyperparameters estimated from the data. We generate 5 data sets with these parameters; they have between 5 and 7 categories and 13 to 21 units. We apply our algorithm directly to the time courses in the 5 data sets generated using the above scheme. To apply BAC to the same data sets, we need to first turn the time courses into voxel-stimulus data. We use the least squares estimates of voxel responses (bjis) normalized in the same way as when we initialize our fMRI model. We run each algorithm 20 times with different initializations.
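The conversion of time courses into voxel-stimulus data can be sketched as follows. The single-regressor least-squares estimate is a simplification (the study fits a full regression model), and the per-voxel min-max normalization matches the description above:

```python
def ols_single_regressor(x, y):
    """Least-squares response estimate for one stimulus regressor
    (zero intercept; simplification that treats regressors one at a time)."""
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    return sxy / sxx

def normalize_voxel_responses(b):
    """Per-voxel min-max normalization of estimates b[voxel][stimulus] to [0, 1],
    as used to feed BAC and to initialize the fMRI model."""
    out = []
    for row in b:
        lo, hi = min(row), max(row)
        span = (hi - lo) if hi > lo else 1.0
        out.append([(v - lo) / span for v in row])
    return out

# Toy example: recover a slope of 2, then rescale each voxel's row to [0, 1].
slope = ols_single_regressor([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
norm = normalize_voxel_responses([[0.2, 0.6, 1.0], [5.0, 7.0, 9.0]])
```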
The BAC algorithm is initialized by the result of a soft k-means clustering in the space of voxels. Our method is initialized as explained in the previous section. For BAC, we use the true number of clusters, while our algorithm is always initialized with 15 clusters. We evaluate the results of clustering with respect to both voxels and stimuli by comparing the clustering results with the ground truth. Since there is no consensus on the best way to compare different clusterings of the same set, here we employ two different clustering distance measures. Let P(k, k′) denote the fraction of data points (voxels or stimuli) assigned to cluster k in the ground truth and k′ in the estimated clustering. The first measure is the so-called classification accuracy (CA), which is defined as the fraction of data points correctly assigned to the true clusters [22]. To compute this measure, we need to first match the cluster indices in our results with the true clustering. We find a one-to-one matching between the two sets of clusters by solving a bipartite graph matching problem. We define the graph such that the two sets of cluster indices represent the nodes and P(k, k′) represents the weight of the edge between nodes k and k′. As the second measure, we use the normalized mutual information (NMI), which expresses the proportion of the entropy (information) of the ground truth clustering that is shared with the estimated clustering. We define two random variables X and Y that take values in the spaces of the true and the estimated cluster indices, respectively. Assuming a joint distribution P(X=k, Y=k′) = P(k, k′), we set NMI = I(X; Y)/H(X). Both measures take values between 0 and 1, with 1 corresponding to perfect clustering. Fig. 3 presents the clustering quality measures for the two algorithms on the 5 generated data sets. As expected, our method performs consistently better in finding the true clustering structure on data generated by the co-clustering process.
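Both measures can be sketched directly from their definitions. The paper solves a bipartite matching problem for CA; the sketch below brute-forces over permutations instead, which is adequate for the small cluster counts considered here (and assumes at least as many estimated clusters as true ones):

```python
import math
from itertools import permutations
from collections import Counter

def classification_accuracy(truth, est):
    """CA: fraction of points correctly assigned under the best one-to-one
    matching of estimated cluster indices to true ones."""
    t_labels = sorted(set(truth))
    e_labels = sorted(set(est))
    counts = Counter(zip(truth, est))
    best = max(sum(counts[(t, e)] for t, e in zip(t_labels, perm))
               for perm in permutations(e_labels, len(t_labels)))
    return best / len(truth)

def nmi(truth, est):
    """NMI = I(X; Y) / H(X) for true labels X and estimated labels Y."""
    n = len(truth)
    joint = Counter(zip(truth, est))
    px, py = Counter(truth), Counter(est)
    mi = sum((c / n) * math.log((c / n) / ((px[t] / n) * (py[e] / n)))
             for (t, e), c in joint.items())
    hx = -sum((c / n) * math.log(c / n) for c in px.values())
    return mi / hx if hx > 0 else 1.0
```

A label permutation leaves both measures unchanged, so CA([0,0,1,1], [1,1,0,0]) = 1 and the corresponding NMI is also 1.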
Since the two algorithms share the same block co-clustering structure, the advantage of our method lies in its model for the hierarchical structure and the fMRI signals.

3.2 Experiment

We apply our method to data from an fMRI study where 8 subjects view 69 distinct images. Each image is repeated on average about 40 times in one of the two sessions of the experiment. The data includes 42 slices of 1.65mm thickness with an in-plane voxel size of 1.5mm, aligned with the temporal lobe (ventral visual pathway). As part of the standard preprocessing stream, the data was first motion-corrected separately for the two sessions [23], and then spatially smoothed with a Gaussian kernel of 3mm width. The time course data included 120 volumes per run and from 24 to 40 runs for each subject. We registered the data from the two sessions to the subject's native anatomical space [24]. We removed noisy voxels from the analysis by performing an ANOVA test and keeping only the voxels for which the stimulus regressors significantly explained the variation in the time course (threshold p = 10^-4, uncorrected). This procedure selects on average about 6,000 voxels for each subject. Finally, to remove the idiosyncratic aspects of responses in different subjects, such as attention to particular stimuli, we regressed out the subject-average time course from the voxel signals after removing the baseline and linear trend. We split the trials of each image into two groups of equal size and consider each group as an independent stimulus, forming a total of 138 stimuli. Hence, we can examine the consistency of our stimulus categorization with respect to identical trials. We use α = 100, γ = 5, χ = 0.1, and τ1 = τ2 = 1 for the nonparametric prior. We initialize our algorithm 20 times and choose the solution that achieves the lowest Gibbs free energy. Fig. 4 shows the categories that the algorithm finds on the data from all 8 subjects.
First, we note that stimulus pairs corresponding to the same image are generally assigned to the same category, confirming the consistency of the results across trials. Category 1 corresponds to the scene images and, interestingly, also includes all images of trees. This may suggest a high-level category structure that is not merely driven by low-level features. Such a structure is even more evident in the 4th category, where images of a tiger with a large face join human faces. Some other animals are clustered together with human bodies in categories 2 and 9. Shoes and cars, which have similar shapes, are clustered together in category 3, while tools are mainly found in category 6. The interaction between the learned categories and the functional units is summarized in the posterior unit-category activation probabilities E[φ_{k,l}] (Fig. 4, right). The algorithm finds 18 units across all subjects. The largest unit does not show preference for any of the categories. Functional unit 2 is the most selective one and shows high activation for category 4 (faces). This finding agrees with previous studies that have discovered face-selective areas in the brain [25]. Other units show selectivity for different combinations of categories. For instance, unit 6 prefers categories that mostly include body parts and animals, unit 8 prefers category 1 (scenes and trees), while the selectivity of unit 5 seems to be correlated with the pixel-size of the image. Our method further learns the sets of variables {q(z_ji = k)} for i = 1, …, Nj that represent the probabilities that different voxels in subject j belong to functional unit k. Although the algorithm does not use any information about the spatial location of voxels, we can visualize the posterior membership probabilities in each subject as a spatial map.
To see whether there is any degree of spatial consistency in the locations of the learned units across subjects, we align the brains of all subjects with the Montreal Neurological Institute coordinate space using affine registration [26].

Figure 4: Categories (left) and activation probabilities of functional units (E[φ_{k,l}]) (right) estimated by the algorithm from all 8 subjects in the study.

Figure 5: (Left) Spatial maps of functional unit overlap across subjects in the normalized space. For each voxel, we show the fraction of subjects in the group for which the voxel was assigned to the corresponding functional unit. We see that functional units with similar profiles between the two datasets show similar spatial extent as well. (Right) Comparison between the clustering robustness in the results of our algorithm (NBC) and the best results of Block Average Co-clustering (BAC) on the real data.

Fig. 5 (left) shows the average maps across subjects for units 2, 5, and 6 in the normalized space. Despite the relative sparsity of the maps, they have significant overlap across subjects. As with many other real world applications of clustering, the validation of results is challenging in the absence of ground truth. In order to assess the reliability of the results, we examine their consistency across subjects.
We split the 8 subjects into two groups of 4 and perform the analysis on the two group data sets separately. Fig. 6 (left) shows the categories found for one of the two groups (group 1), which show good agreement with the categories found in the data from all subjects (categories are indexed based on the result of graph matching). As a way to quantify the stability of clustering across subjects, we compute the measures CA and NMI for the results in the two groups relative to the results in the 8 subjects.

Figure 6: Categories found by our algorithm in group 1 (left) and by BAC in all subjects for (l, k) = (14, 14) (right).

We also apply the BAC algorithm to response values estimated via least squares regression in all 8 subjects and the two groups. Since the number of units and categories is not known a priori, we perform the BAC algorithm for all pairs (l, k) such that 5 ≤ l ≤ 15 and k ∈ {10, 12, 14, 16, 18, 20}. Fig. 5 (right) compares the clustering measures for our method with those found by the best BAC results in terms of average CA and NMI measures (achieved with (l, k) = (6, 14) for CA, and (l, k) = (14, 14) for NMI). Fig. 6 (right) shows the categories for (l, k) = (14, 14), which appear to lack some of the structures found in our results. We also obtain better measures of stability compared to the best BAC results for clustering stimuli, while the measures are similar for clustering voxels. We note that, in contrast to the results of BAC, our first unit is always considerably larger than all the others, including about 70% of voxels. This seems neuroscientifically plausible since we expect large areas of the visual cortex to be involved in processing low-level features and therefore incapable of distinguishing different objects.

4 Conclusion

This paper proposes a model for learning large-scale functional structures in the brain responses of a group of subjects.
We assume that the structure can be summarized in terms of functional units with similar responses to categories of stimuli. We derive a variational Bayesian inference scheme for our hierarchical nonparametric Bayesian model and apply it to both synthetic and real data. In an fMRI study of visual object recognition, our method finds meaningful structures in both object categories and functional units. This work is a step toward devising models for functional brain imaging data that explicitly encode our hypotheses about the structure of brain functional organization. The assumption that functional units, categories, and their interactions are sufficient to describe the structure, although it proved successful here, may be too restrictive in general. A more detailed characterization may be achieved through a feature-based representation where a stimulus can simultaneously be part of several categories (features). Likewise, a more careful treatment of the structure in the organization of brain areas may require incorporating spatial information. In this paper, we show that we can turn such basic insights into principled models that allow us to investigate the structures of interest in a data-driven fashion. By incorporating the properties of brain imaging signals into the model, we better utilize the data for making relevant inferences across subjects.

Acknowledgments

We thank Ed Vul, Po-Jang Hsieh, and Nancy Kanwisher for the insight they have offered us throughout our collaboration, and also for providing the fMRI data. This research was supported in part by the NSF grants IIS/CRCNS 0904625 and CAREER 0642971, the MIT McGovern Institute Neurotechnology Program grant, and NIH grants NIBIB NAMIC U54-EB005149 and NCRR NAC P41-RR13218.

References

[1] N. Kriegeskorte, M. Mur, D.A. Ruff, R. Kiani, J. Bodurka, H. Esteky, K. Tanaka, and P.A. Bandettini. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6):1126–1141, 2008.
[2] B. Thirion and O. Faugeras. Feature characterization in fMRI data: the Information Bottleneck approach. MedIA, 8(4):403–419, 2004. [3] D. Lashkari and P. Golland. Exploratory fMRI analysis without spatial normalization. In IPMI, 2009. [4] D. Lashkari, E. Vul, N. Kanwisher, and P. Golland. Discovering structure in the space of fMRI selectivity profiles. NeuroImage, 50(3):1085–1098, 2010. [5] D. Lashkari, R. Sridharan, E. Vul, P.J. Hsieh, N. Kanwisher, and P. Golland. Nonparametric hierarchical Bayesian model for functional brain parcellation. In MMBIA, 2010. [6] A. Banerjee, I. Dhillon, J. Ghosh, S. Merugu, and D.S. Modha. A generalized maximum entropy approach to bregman co-clustering and matrix approximation. JMLR, 8:1919–1986, 2007. [7] S. Makni, P. Ciuciu, J. Idier, and J.-B. Poline. Joint detection-estimation of brain activity in functional MRI: a multichannel deconvolution solution. TSP, 53(9):3488–3502, 2005. [8] Y. Cheng and G.M. Church. Biclustering of expression data. In ISMB, 2000. [9] S.C. Madeira and A.L. Oliveira. Biclustering algorithms for biological data analysis: a survey. TCBB, 1(1):24–45, 2004. [10] Y. Kluger, R. Basri, J.T. Chang, and M. Gerstein. Spectral biclustering of microarray data: coclustering genes and conditions. Genome Research, 13(4):703–716, 2003. [11] B. Long, Z.M. Zhang, and P.S. Yu. A probabilistic framework for relational clustering. In ACM SIGKDD, 2007. [12] D. Lashkari and P. Golland. Coclustering with generative models. CSAIL Technical Report, 2009. [13] C. Kemp, J.B. Tenenbaum, T.L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006. [14] K.A. Norman, S.M. Polyn, G.J. Detre, and J.V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424–430, 2006. [15] C.F. Beckmann and S.M. Smith. Probabilistic independent component analysis for functional magnetic resonance imaging. TMI, 23(2):137–152, 2004. [16] M.J. 
McKeown, S. Makeig, G.G. Brown, T.P. Jung, S.S. Kindermann, A.J. Bell, and T.J. Sejnowski. Analysis of fMRI data by blind separation into independent spatial components. Hum Brain Mapp, 6(3):160–188, 1998. [17] D. Endres and P. Földiák. Interpreting the neural code with Formal Concept Analysis. In NIPS, 2009. [18] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. JASA, 101(476):1566–1581, 2006. [19] J. Pitman. Poisson–Dirichlet and GEM invariant distributions for split-and-merge transformations of an interval partition. Combinatorics, Prob, Comput, 11(5):501–514, 2002. [20] K.J. Friston, A.P. Holmes, K.J. Worsley, J.P. Poline, C.D. Frith, R.S.J. Frackowiak, et al. Statistical parametric maps in functional imaging: a general linear approach. Hum Brain Mapp, 2(4):189–210, 1994. [21] Y.W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. In NIPS, 2008. [22] M. Meilă and D. Heckerman. An experimental comparison of model-based clustering methods. Machine Learning, 42(1):9–29, 2001. [23] R.W. Cox and A. Jesmanowicz. Real-time 3D image registration for functional MRI. Magn Reson Med, 42(6):1014–1018, 1999. [24] D.N. Greve and B. Fischl. Accurate and robust brain image alignment using boundary-based registration. NeuroImage, 48(1):63–72, 2009. [25] N. Kanwisher and G. Yovel. The fusiform face area: a cortical region specialized for the perception of faces. R Soc Lond Phil Trans, Series B, 361(1476):2109–2128, 2006. [26] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain. Thieme, New York, 1988.
Random Projection Trees Revisited Aman Dhesi∗ Department of Computer Science, Princeton University, Princeton, New Jersey, USA. adhesi@princeton.edu Purushottam Kar Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, Uttar Pradesh, INDIA. purushot@cse.iitk.ac.in

Abstract

The Random Projection Tree (RPTREE) structures proposed in [1] are space partitioning data structures that automatically adapt to various notions of intrinsic dimensionality of data. We prove new results for both the RPTREE-MAX and the RPTREE-MEAN data structures. Our result for RPTREE-MAX gives a near-optimal bound on the number of levels required by this data structure to reduce the size of its cells by a factor s ≥ 2. We also prove a packing lemma for this data structure. Our final result shows that low-dimensional manifolds have bounded Local Covariance Dimension. As a consequence we show that RPTREE-MEAN adapts to manifold dimension as well.

1 Introduction

The Curse of Dimensionality [2] has inspired research in several directions in Computer Science and has led to the development of several novel techniques such as dimensionality reduction, sketching, etc. Almost all these techniques try to map data to lower dimensional spaces while approximately preserving useful information. However, most of these techniques do not assume anything about the data other than that it is embedded in some high dimensional Euclidean space endowed with some distance/similarity function. As it turns out, in many situations, the data is not simply scattered in the Euclidean space in a random fashion. Often, generative processes impose (non-linear) dependencies on the data that restrict the degrees of freedom available and result in the data having low intrinsic dimensionality. There exist several formalizations of this concept of intrinsic dimensionality.
For example, [1] provides an excellent example of automated motion capture in which a large number of points on the body of an actor are sampled through markers and their coordinates transferred to an animated avatar. Now, although a large sample of points is required to ensure a faithful recovery of all the motions of the body (which causes each captured frame to lie in a very high dimensional space), these points are nevertheless constrained by the degrees of freedom offered by the human body, which are very few. Algorithms that try to exploit such non-linear structure in data have been studied extensively, resulting in a large number of Manifold Learning algorithms, for example [3, 4, 5]. These techniques typically assume knowledge about the manifold itself or the data distribution. For example, [4] and [5] require knowledge about the intrinsic dimensionality of the manifold whereas [3] requires a sampling of points that is “sufficiently” dense with respect to some manifold parameters. Recently in [1], Dasgupta and Freund proposed space partitioning algorithms that adapt to the intrinsic dimensionality of data and do not assume explicit knowledge of this parameter. Their data structures are akin to the k-d tree structure and offer guaranteed reduction in the size of the cells after a bounded number of levels. Such a size reduction is of immense use in vector quantization [6] and regression [7]. Two such tree structures are presented in [1] – each adapting to a different notion of intrinsic dimensionality. (∗Work done as an undergraduate student at IIT Kanpur.) Both variants have already found numerous applications in regression [7], spectral clustering [8], face recognition [9] and image super-resolution [10].

1.1 Contributions

The RPTREE structures are new entrants in a large family of space partitioning data structures such as k-d trees [11], BBD trees [12], BAR trees [13] and several others (see [14] for an overview).
The typical guarantees given by these data structures are of the following types:

1. Space Partitioning Guarantee: There exists a bound L(s), s ≥ 2, on the number of levels one has to go down before all descendants of a node of size ∆ are of size ∆/s or less. The size of a cell is variously defined as the length of the longest side of the cell (for box-shaped cells), the radius of the cell, etc.

2. Bounded Aspect Ratio: There exists a certain “roundedness” to the cells of the tree – this notion is variously defined as the ratio of the length of the longest to the shortest side of the cell (for box-shaped cells), the ratio of the radius of the smallest circumscribing ball of the cell to that of the largest ball that can be inscribed in the cell, etc.

3. Packing Guarantee: Given a fixed ball B of radius R and a size parameter r, there exists a bound on the number of disjoint cells of the tree that are of size greater than r and intersect B. Such bounds are usually arrived at by first proving a bound on the aspect ratio for cells of the tree.

These guarantees play a crucial role in algorithms for fast approximate nearest neighbor searches [12] and clustering [15]. We present new results for the RPTREE-MAX structure for all these types of guarantees. We first present a bound on the number of levels required for size reduction by any given factor in an RPTREE-MAX. Our result improves the bound obtainable from results presented in [1]. Next, we prove an “effective” aspect ratio bound for RPTREE-MAX. Given the randomized nature of the data structure, it is difficult to directly bound the aspect ratios of all the cells. Instead we prove a weaker result that can nevertheless be exploited to give a packing lemma of the kind mentioned above. More specifically, given a ball B, we prove an aspect ratio bound for the smallest cell in the RPTREE-MAX that completely contains B. Our final result concerns the RPTREE-MEAN data structure.
The authors in [1] prove that this structure adapts to the Local Covariance Dimension of data (see Section 5 for a definition). By showing that low-dimensional manifolds have bounded local covariance dimension, we show its adaptability to the manifold dimension as well. Our result demonstrates the robustness of the notion of manifold dimension – a notion that is able to connect to a geometric notion of dimensionality such as the doubling dimension (proved in [1]) as well as a statistical notion such as Local Covariance Dimension (this paper). Due to lack of space we relegate some proofs to the Supplementary Material document and present proofs of only the main theorems here. All results cited from other papers are presented as Facts in this paper. We will denote by B(x, r) a closed ball of radius r centered at x. We will denote by d the intrinsic dimensionality of data and by D the ambient dimensionality (typically d ≪ D).

2 The RPTREE-MAX structure

The RPTREE-MAX structure adapts to the doubling dimension of data (see definition below). Since low-dimensional manifolds have low doubling dimension (see [1], Theorem 22), the structure adapts to manifold dimension as well.

Definition 1. The doubling dimension of a set S ⊂ R^D is the smallest integer d such that for any ball B(x, r) ⊂ R^D, the set B(x, r) ∩ S can be covered by 2^d balls of radius r/2.

The RPTREE-MAX algorithm is presented with data embedded in R^D having doubling dimension d. The algorithm splits data lying in a cell C of radius ∆ by first choosing a random direction v ∈ R^D, projecting all the data inside C onto that direction, choosing a random value δ in the range [−1, 1] · 6∆/√D, and then assigning a data point x to the left child if x · v < median({z · v : z ∈ C}) + δ and to the right child otherwise.
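The split rule just described can be sketched as follows; this is an illustrative reading of the text (including the constant-factor radius estimate described next), not the authors' code:

```python
import math
import random

def rptree_max_split(points):
    """One RPTREE-MAX split: project onto a random direction v and cut at the
    median plus a random jitter delta in [-1, 1] * 6*Delta/sqrt(D), where Delta
    is estimated by max ||x - y|| from an arbitrary data point x in the cell."""
    D = len(points[0])
    v = [random.gauss(0.0, 1.0) for _ in range(D)]    # random direction
    x = points[0]                                     # arbitrary data point
    delta_est = max(math.dist(x, y) for y in points)  # radius estimate ~ Delta
    proj = [sum(pi * vi for pi, vi in zip(p, v)) for p in points]
    median = sorted(proj)[len(proj) // 2]
    jitter = random.uniform(-1.0, 1.0) * 6.0 * delta_est / math.sqrt(D)
    thresh = median + jitter
    left = [p for p, t in zip(points, proj) if t < thresh]
    right = [p for p, t in zip(points, proj) if t >= thresh]
    return left, right

random.seed(0)
pts = [[random.random() for _ in range(5)] for _ in range(100)]
left, right = rptree_max_split(pts)
```

Note that a large jitter can occasionally produce an empty child; the analysis in the paper accounts for the randomness of both v and δ.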
Since it is difficult to get the exact value of the radius of a data set, the algorithm settles for a constant factor approximation to the value by choosing an arbitrary data point x ∈ C and using the estimate ˜∆ = max({∥x − y∥ : y ∈ C}). The following result is proven in [1]:

Fact 2 (Theorem 3 in [1]). There is a constant c1 with the following property. Suppose an RPTREE-MAX is built using a data set S ⊂ R^D. Pick any cell C in the RPTREE-MAX; suppose that S ∩ C has doubling dimension ≤ d. Then with probability at least 1/2 (over the randomization in constructing the subtree rooted at C), every descendant C′ more than c1 d log d levels below C has radius(C′) ≤ radius(C)/2.

In Sections 2, 3 and 4, we shall always assume that the data has doubling dimension d and shall not explicitly state this fact again and again. Let us consider extensions of this result to bound the number of levels it takes for the size of all descendants to go down by a factor s > 2. Let us analyze the case of s = 4. Starting off in a cell C of radius ∆, we are assured of a reduction in size by a factor of 2 after c1 d log d levels. Hence all 2^(c1 d log d) nodes at this level have radius ∆/2 or less. Now we expect that after c1 d log d more levels, the size should go down further by a factor of 2, thereby giving us our desired result. However, given the large number of nodes at this level and the fact that the success probability in Fact 2 is just a constant bounded away from 1, it is not possible to argue that after c1 d log d more levels the descendants of all these 2^(c1 d log d) nodes will be of radius ∆/4 or less. It turns out that this can be remedied by utilizing the following extension of the basic size reduction result in [1]. We omit the proof of this extension.

Fact 3 (Extension of Theorem 3 in [1]). For any δ > 0, with probability at least 1 − δ, every descendant C′ which is more than c1 d log d + log(1/δ) levels below C has radius(C′) ≤ radius(C)/2.
This gives us a way to boost the confidence and do the following: go down L = c1 d log d + 2 levels from C to get the radius of all the 2^(c1 d log d + 2) descendants down to ∆/2 with confidence 1 − 1/4. Afterward, go an additional L′ = c1 d log d + L + 2 levels from each of these descendants so that for any cell at level L, the probability of it having a descendant of radius > ∆/4 after L′ levels is less than 1/(4 · 2^L). Hence conclude with confidence at least 1 − 1/4 − (1/(4 · 2^L)) · 2^L ≥ 1/2 that all descendants of C after 2L + c1 d log d + 2 levels have radius ≤ ∆/4. This gives a way to prove the following result:

Theorem 4. There is a constant c2 with the following property. For any s ≥ 2, with probability at least 1 − 1/4, every descendant C′ which is more than c2 · s · d log d levels below C has radius(C′) ≤ radius(C)/s.

Proof. Refer to Supplementary Material.

Notice that the dependence on the factor s is linear in the above result whereas one expects it to be logarithmic. Indeed, typical space partitioning algorithms such as k-d trees do give such guarantees. The first result we prove in the next section is a bound on the number of levels that is poly-logarithmic in the size reduction factor s.

3 A generalized size reduction lemma for RPTREE-MAX

In this section we prove the following theorem:

Theorem 5 (Main). There is a constant c3 with the following property. Suppose an RPTREE-MAX is built using a data set S ⊂ R^D. Pick any cell C in the RPTREE-MAX; suppose that S ∩ C has doubling dimension ≤ d. Then for any s ≥ 2, with probability at least 1 − 1/4 (over the randomization in constructing the subtree rooted at C), for every descendant C′ which is more than c3 · log s · d log(sd) levels below C, we have radius(C′) ≤ radius(C)/s.

Compared to this, data structures such as [12] give deterministic guarantees for such a reduction in D log s levels, which can be shown to be optimal (see [1] for an example). Thus our result is optimal but for a logarithmic factor.
Moving on with the proof, let us consider a cell C of radius ∆ in the RPTREE-MAX that contains a dataset S having doubling dimension ≤ d. Then for any ε > 0, a repeated application of Definition 1 shows that S can be covered using at most 2^(d log(1/ε)) balls of radius ε∆. We will cover S ∩ C using balls of radius ∆/(960s√d), so that O((sd)^d) balls would suffice. Now consider all pairs of these balls the distance between whose centers is ≥ ∆/s − ∆/(960s√d).

Figure 1: Balls B1 and B2 are of radius ∆/(s√d) and their centers are ∆/s − ∆/(s√d) apart.

If random splits separate data from all such pairs of balls, i.e. for no pair does any cell contain data from both balls of the pair, then each resulting cell would only contain data from pairs whose centers are closer than ∆/s − ∆/(960s√d). Thus the radius of each such cell would be at most ∆/s. We fix such a pair of balls, calling them B1 and B2. A split in the RPTREE-MAX is said to be good with respect to this pair if it sends points inside B1 to one child of the cell in the RPTREE-MAX and points inside B2 to the other, bad if it sends points from both balls to both children, and neutral otherwise (see Figure 1). We have the following properties of a random split:

Lemma 6. Let B = B(x, δ) be a ball contained inside an RPTREE-MAX cell of radius ∆ that contains a dataset S of doubling dimension d. Let us say that a random split splits this ball if the split separates the data set S into two parts. Then a random split of the cell splits B with probability at most 3δ√d/∆.

Proof. Refer to Supplementary Material.

Lemma 7. Let B1 and B2 be a pair of balls as described above contained in the cell C that contains data of doubling dimension d. Then a random split of the cell is a good split with respect to this pair with probability at least 1/(56s).

Proof. Refer to Supplementary Material. Proof similar to that of Lemma 9 of [1].

Lemma 8.
Let B1 and B2 be a pair of balls as described above contained in the cell C that contains data of doubling dimension d. Then a random split of the cell is a bad split with respect to this pair with probability at most 1/(320s).

Proof. The proof of a similar result in [1] uses a conditional probability argument. However, the technique does not work here since we require a bound that is inversely proportional to s. We instead make a simple observation that the probability of a bad split is upper bounded by the probability that one of the balls is split, since for any two events A and B, P[A ∩ B] ≤ min{P[A], P[B]}. The result then follows from an application of Lemma 6.

We are now in a position to prove Theorem 5. What we will prove is that starting with a pair of balls in a cell C, the probability that some cell k levels below has data from both the balls is exponentially small in k. Thus, after going down enough levels, we can take a union bound over all pairs of balls whose centers are well separated (which are O((sd)^(2d)) in number) and conclude the proof.

Proof (of Theorem 5). Consider a cell C of radius ∆ in the RPTREE-MAX and fix a pair of balls contained inside C with radii ∆/(960s√d) and centers separated by at least ∆/s − ∆/(960s√d). Let p^i_j denote the probability that a cell i levels below C has a descendant j levels below itself that contains data points from both the balls. Then the following holds:

Lemma 9. p^0_k ≤ (1 − 1/(68s))^l · p^l_{k−l}.

Proof. Refer to Supplementary Material. Proof similar to that of Lemma 11 of [1].

Note that this gives us p^0_k ≤ (1 − 1/(68s))^k as a corollary. However, using this result would require us to go down k = Ω(sd log(sd)) levels before p^0_k = 1/Ω((sd)^(2d)), which results in a bound that is worse (by a factor logarithmic in s) than the one given by Theorem 4. This can be attributed to the small probability of a good split for a tiny pair of balls in large cells.
However, here we are completely neglecting the fact that as we go down the levels, the radii of the cells go down as well and good splits become more frequent. Indeed, setting s = 2 in Lemmas 7 and 8 tells us that if the pair of balls were contained in a cell of radius ∆/(s/2) = 2∆/s, then the good and bad split probabilities would be 1/112 and 1/640 respectively. This paves the way for an inductive argument: assume that with probability > 1 − 1/4, in L(s) levels the size of all descendants goes down by a factor s. Denote by p^l_g the probability of a good split in a cell at depth l and by p^l_b the corresponding probability of a bad split. Set l* = L(s/2) and let E be the event that the radius of every cell at level l* is less than 2∆/s. Let C′ represent a cell at depth l*. Then,

p^{l*}_g ≥ P[good split in C′ | E] · P[E] ≥ (1/112) · (1 − 1/4) ≥ 1/150

p^{l*}_b = P[bad split in C′ | E] · P[E] + P[bad split in C′ | ¬E] · P[¬E] ≤ (1/640) · 1 + (1/640) · (1/4) ≤ 1/512

Notice that now, for any m > 0, we have p^{l*}_m ≤ (1 − 1/213)^m. Thus, for some constant c4, setting k = l* + c4·d·log(sd) and applying Lemma 9 gives us

p^0_k ≤ (1 − 1/(68s))^{l*} · (1 − 1/213)^{c4·d·log(sd)} ≤ 1/(4(sd)^{2d}).

Thus we have L(s) ≤ L(s/2) + c4·d·log(sd), which gives us the desired result on solving the recurrence, i.e., L(s) = O(d log s log(sd)).

4 A packing lemma for RPTREE-MAX

In this section we prove a probabilistic packing lemma for RPTREE-MAX. A formal statement of the result follows:

Theorem 10 (Main). Given any fixed ball B(x, R) ⊂ R^D, with probability greater than 1/2 (where the randomization is over the construction of the RPTREE-MAX), the number of disjoint RPTREE-MAX cells of radius greater than r that intersect B is at most (R/r)^{O(d log d log(dR/r))}.

Data structures such as BBD-trees give a bound of the form O((R/r)^D), which behaves like (R/r)^{O(1)} for fixed D. In comparison, our result behaves like (R/r)^{O(log(R/r))} for fixed d.
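The recurrence at the end of the proof can be checked numerically. The sketch below is illustrative only (the constant c4 and the base case are placeholders, not values from the paper): it unrolls L(s) = L(s/2) + c4·d·log(sd) and compares the result against the claimed O(d log s log(sd)) form.

```python
import math

def unroll_L(s, d, c4=1.0, base=1.0):
    """Unroll the recurrence L(s) = L(s/2) + c4*d*log(s*d), stopping at L(2) = base."""
    total = base
    while s > 2:
        total += c4 * d * math.log(s * d)
        s /= 2
    return total

# Each of the ~log2(s) halving steps contributes at most c4*d*log(s*d),
# so the unrolled sum is bounded by c4 * d * log2(s) * log(s*d) + base,
# i.e., O(d log s log(sd)) as claimed.
s, d = 1024, 8
bound = 1.0 * d * math.log2(s) * math.log(s * d) + 1.0
```
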
We will prove the result in two steps: first we will show that with high probability, the ball B will be completely inscribed in an RPTREE-MAX cell C of radius no more than O(Rd√d log d). Thus the number of disjoint cells of radius at least r that intersect this ball is bounded by the number of descendants of C with this radius. To bound this number we then invoke Theorem 5 and conclude the proof.

4.1 An effective aspect ratio bound for RPTREE-MAX cells

In this section we prove an upper bound on the radius of the smallest RPTREE-MAX cell that completely contains a given ball B of radius R. Note that this effectively bounds the aspect ratio of this cell. Consider any cell C of radius ∆ that contains B.

Figure 2: Balls Bi are of radius ∆/(512√d) and their centers are ∆/2 away from the center of B.

We proceed with the proof by first showing that the probability that B will be split before it lands in a cell of radius ∆/2 is at most a quantity inversely proportional to ∆. Note that we are not interested in all descendants of C, only the ones that contain B; that is why we argue differently here. We consider balls of radius ∆/(512√d) surrounding B at a distance of ∆/2 (see Figure 2). These balls are made to cover the annulus centered at B of mean radius ∆/2 and thickness ∆/(512√d); clearly d^{O(d)} balls suffice. Without loss of generality assume that the centers of all these balls lie in C. Notice that if B gets separated from all these balls without getting split in the process, then it will lie in a cell of radius < ∆/2. Fix a ball Bi and call a random split of the RPTREE-MAX useful if it separates B from Bi and useless if it splits B. Using a proof technique similar to that used in Lemma 7, we can show that the probability of a useful split is at least 1/192, whereas Lemma 6 tells us that the probability of a useless split is at most 3R√d/∆.

Lemma 11.
There exists a constant c5 such that the probability of a ball of radius R in a cell of radius ∆ getting split before it lands in a cell of radius ∆/2 is at most c5·Rd√d·log d/∆.

Proof. Refer to Supplementary Material.

We now state our result on the "effective" bound on the aspect ratios of RPTREE-MAX cells.

Theorem 12. There exists a constant c6 such that with probability > 1 − 1/4, a given (fixed) ball B of radius R will be completely inscribed in an RPTREE-MAX cell C of radius no more than c6 · Rd√d log d.

Proof. Refer to Supplementary Material.

Proof. (of Theorem 10) Given a ball B of radius R, Theorem 12 shows that with probability at least 3/4, B will lie in a cell C of radius at most R′ = O(Rd√d log d). Hence all cells of radius at least r that intersect this ball must be either descendants or ancestors of C. Since we want an upper bound on the largest number of such disjoint cells, it suffices to count the number of descendants of C of radius no less than r. We know from Theorem 5 that with probability at least 3/4, in O(d log(R′/r) log(dR′/r)) levels the radius of all cells must go below r. The result follows by observing that the RPTREE-MAX is a binary tree, and hence the number of such descendants can be at most 2^{O(d log(R′/r) log(dR′/r))}. The success probability is at least (3/4)^2 > 1/2.

Figure 3: Locally, almost all the energy of the data is concentrated in the tangent plane Tp(M).

5 Local covariance dimension of a smooth manifold

The second variant of RPTREE, namely RPTREE-MEAN, adapts to the local covariance dimension (see definition below) of data. We do not go into the details of the guarantees presented in [1] due to lack of space. Informally, the guarantee is of the following kind: given data that has small local covariance dimension, in expectation, a data point in a cell of radius r in the RPTREE-MEAN will be contained in a cell of radius c7 · r in the next level, for some constant c7 < 1.
The randomization is over the construction of the RPTREE-MEAN as well as the choice of the data point. This gives a per-level improvement, albeit in expectation, whereas RPTREE-MAX gives an improvement in the worst case, but only after a certain number of levels. We will prove that a d-dimensional Riemannian submanifold M of R^D has bounded local covariance dimension, thus proving that RPTREE-MEAN adapts to manifold dimension as well.

Definition 13. A set S ⊂ R^D has local covariance dimension (d, ε, r) if there exists an isometry M of R^D under which the set S, when restricted to any ball of radius r, has a covariance matrix for which some d diagonal elements contribute a (1 − ε) fraction of its trace.

This is a more general definition than the one presented in [1], which expects the top d eigenvalues of the covariance matrix to account for a (1 − ε) fraction of its trace. However, all that [1] requires for the guarantees of RPTREE-MEAN to hold is that there exist d orthonormal directions such that a (1 − ε) fraction of the energy of the dataset, i.e., Σ_{x∈S} ‖x − mean(S)‖², is contained in those d dimensions. This is trivially true when M is a d-dimensional affine set. However, we also expect that for small neighborhoods on smooth manifolds, most of the energy would be concentrated in the tangent plane at a point in that neighborhood (see Figure 3). Indeed, we can show the following:

Theorem 14 (Main). Given a data set S ⊂ M where M is a d-dimensional Riemannian manifold with condition number τ, then for any ε ≤ 1/4, S has local covariance dimension (d, ε, √ε·τ/3).

For manifolds, the local curvature decides how small a neighborhood one should take in order to expect a sense of "flatness" in the non-linear surface. This is quantified using the condition number τ of M (introduced in [16]), which restricts the amount by which the manifold can curve locally.
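The eigenvalue form of local covariance dimension (the relaxed version of Definition 13 used in [1], as discussed above) is easy to check empirically. The sketch below is not from the paper: it tests whether, within a ball, the top d eigenvalues of the local covariance capture a (1 − ε) fraction of the trace, and illustrates this on a circle (a 1-manifold) in R^3.

```python
import numpy as np

def has_local_cov_dim(S, center, r, d, eps):
    """Check the eigenvalue form of local covariance dimension: within
    B(center, r), the top d eigenvalues of cov(S restricted to the ball)
    account for at least a (1 - eps) fraction of the trace."""
    local = S[np.linalg.norm(S - center, axis=1) <= r]
    if len(local) < 2:
        return True                    # trivially satisfied on tiny sets
    eig = np.sort(np.linalg.eigvalsh(np.cov(local, rowvar=False)))[::-1]
    return eig[:d].sum() >= (1 - eps) * eig.sum()

# A unit circle in R^3: a 1-dimensional manifold (condition number 1).
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
on_manifold = has_local_cov_dim(circle, circle[0], r=0.3, d=1, eps=0.25)

# Control: points filling a 3-d cube are not locally 1-dimensional.
cube = np.random.default_rng(0).uniform(-1, 1, size=(2000, 3))
in_bulk = has_local_cov_dim(cube, np.zeros(3), r=1.0, d=1, eps=0.25)
```

For the circle, the energy inside a small ball concentrates along the tangent direction, so the check passes with d = 1; for the space-filling control it fails, as expected.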
The condition number is related to more prevalent notions of local curvature such as the second fundamental form [17], in that the inverse of the condition number upper bounds the norm of the second fundamental form [16]. Informally, if we restrict ourselves to regions of the manifold of radius τ or less, then we get the requisite flatness properties. This is formalized in [16] as follows. For any hyperplane T ⊂ R^D and a vector v ∈ R^D, let v_∥(T) denote the projection of v onto T.

Fact 15 (Implicit in Lemma 5.3 of [16]). Suppose M is a Riemannian manifold with condition number τ. For any p ∈ M and r ≤ √ε·τ, ε ≤ 1/4, let M′ = B(p, r) ∩ M. Let T = Tp(M) be the tangent space at p. Then for any x, y ∈ M′, ‖x_∥(T) − y_∥(T)‖² ≥ (1 − ε)‖x − y‖².

This already seems to give us what we want: a large fraction of the length between any two points on the manifold lies in the tangent plane, i.e., in d dimensions. However, in our case we have to show that for some d-dimensional plane P, Σ_{x∈S} ‖(x − µ)_∥(P)‖² > (1 − ε) Σ_{x∈S} ‖x − µ‖², where µ = mean(S). The problem is that we cannot apply Fact 15 since there is no surety that the mean will lie on the manifold itself. However, it turns out that certain points on the manifold can act as "proxies" for the mean and provide a workaround to the problem.

Proof. (of Theorem 14) Refer to Supplementary Material.

6 Conclusion

In this paper we considered the two random projection trees proposed in [1]. For the RPTREE-MAX data structure, we provided an improved bound (Theorem 5) on the number of levels required to decrease the size of the tree cells by any factor s ≥ 2. However, the bound we proved is polylogarithmic in s. It would be nice if this could be brought down to logarithmic, since that would directly improve the packing lemma (Theorem 10) as well. More specifically, the packing bound would become (R/r)^{O(1)} instead of (R/r)^{O(log(R/r))} for fixed d. As far as the dependence on d is concerned, there is room for improvement in the packing lemma.
We have shown that the smallest cell in the RPTREE-MAX that completely contains a fixed ball B of radius R has an aspect ratio no more than O(d√d log d), since it has a ball of radius R inscribed in it and can be circumscribed by a ball of radius no more than O(Rd√d log d). Any improvement in the aspect ratio of the smallest cell that contains a given ball will also directly improve the packing lemma.

Moving on to our results for the RPTREE-MEAN, we demonstrated that it adapts to manifold dimension as well. However, the constants involved in our guarantee are pessimistic. For instance, the radius parameter in the local covariance dimension is given as √ε·τ/3; this can be improved to √ε·τ/2 if one can show that there always exists a point q ∈ B(x0, r) ∩ M at which the function g : x ∈ M ↦ ‖x − µ‖ attains a local extremum.

We conclude with a word on the applications of our results. As we already mentioned, packing lemmas and size-reduction guarantees for arbitrary factors are typically used in applications for nearest neighbor searching and clustering. However, these applications (viz. [12], [15]) also require that the tree have bounded depth. The RPTREE-MAX is a pure space partitioning data structure that can be coerced, by an adversarial placement of points, into being a primarily left-deep or right-deep tree having depth Ω(n), where n is the number of data points. Existing data structures such as BBD-trees remedy this by alternating space partitioning splits with data partitioning splits. Thus every alternate split is forced to send at most a constant fraction of the points into any of the children, ensuring a depth that is logarithmic in the number of data points. A similar technique is used in [7] to bound the depth of the version of RPTREE-MAX used in that paper.
However, it remains to be seen if the same trick can be used to bound the depth of RPTREE-MAX while maintaining the packing guarantees, because although such "space partitioning" splits do not seem to hinder Theorem 5, they do hinder Theorem 10 (more specifically, they hinder Lemma 11). We leave open the question of a possible augmentation of the RPTREE-MAX structure, or a better analysis, that can simultaneously give the following guarantees:

1. Bounded Depth: the depth of the tree should be o(n), preferably (log n)^{O(1)}
2. Packing Guarantee: of the form (R/r)^{(d log(R/r))^{O(1)}}
3. Space Partitioning Guarantee: assured size reduction by a factor s in (d log s)^{O(1)} levels

Acknowledgments

The authors thank James Lee for pointing out an incorrect usage of the term Assouad dimension in a previous version of the paper. Purushottam Kar thanks Chandan Saha for several fruitful discussions and for his help with the proofs of Theorems 5 and 10. Purushottam is supported by the Research I Foundation of the Department of Computer Science and Engineering, IIT Kanpur.

References

[1] Sanjoy Dasgupta and Yoav Freund. Random Projection Trees and Low Dimensional Manifolds. In 40th Annual ACM Symposium on Theory of Computing, pages 537–546, 2008.
[2] Piotr Indyk and Rajeev Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In 30th Annual ACM Symposium on Theory of Computing, pages 604–613, 1998.
[3] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, 290:2319–2323, 2000.
[4] Piotr Indyk and Assaf Naor. Nearest-Neighbor-Preserving Embeddings. ACM Transactions on Algorithms, 3, 2007.
[5] Richard G. Baraniuk and Michael B. Wakin. Random Projections of Smooth Manifolds. Foundations of Computational Mathematics, 9(1):51–77, 2009.
[6] Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. Learning the Structure of Manifolds using Random Projections.
In Twenty-First Annual Conference on Neural Information Processing Systems, 2007.
[7] Samory Kpotufe. Escaping the Curse of Dimensionality with a Tree-based Regressor. In 22nd Annual Conference on Learning Theory, 2009.
[8] Donghui Yan, Ling Huang, and Michael I. Jordan. Fast Approximate Spectral Clustering. In 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 907–916, 2009.
[9] John Wright and Gang Hua. Implicit Elastic Matching with Random Projections for Pose-Variant Face Recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1502–1509, 2009.
[10] Jian Pu, Junping Zhang, Peihong Guo, and Xiaoru Yuan. Interactive Super-Resolution through Neighbor Embedding. In 9th Asian Conference on Computer Vision, pages 496–505, 2009.
[11] Jon Louis Bentley. Multidimensional Binary Search Trees Used for Associative Searching. Communications of the ACM, 18(9):509–517, 1975.
[12] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An Optimal Algorithm for Approximate Nearest Neighbor Searching Fixed Dimensions. Journal of the ACM, 45(6):891–923, 1998.
[13] Christian A. Duncan, Michael T. Goodrich, and Stephen G. Kobourov. Balanced Aspect Ratio Trees: Combining the Advantages of k-d Trees and Octrees. Journal of Algorithms, 38(1):303–333, 2001.
[14] Hanan Samet. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann Publishers, 2005.
[15] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu. A Local Search Approximation Algorithm for k-means Clustering. Computational Geometry, 28(2-3):89–112, 2004.
[16] Partha Niyogi, Stephen Smale, and Shmuel Weinberger. Finding the Homology of Submanifolds with High Confidence from Random Samples. Discrete & Computational Geometry, 39(1-3):419–441, 2008.
[17] Sebastián Montiel and Antonio Ros. Curves and Surfaces, volume 69 of Graduate Studies in Mathematics.
American Mathematical Society and Real Sociedad Matemática Española, 2005.
Joint Analysis of Time-Evolving Binary Matrices and Associated Documents

1Eric Wang, 1Dehong Liu, 1Jorge Silva, 2David Dunson and 1Lawrence Carin
1Electrical and Computer Engineering Department, Duke University
2Statistics Department, Duke University
{eric.wang,dehong.liu,jg.silva,lawrence.carin}@duke.edu dunson@stat.duke.edu

Abstract

We consider problems for which one has incomplete binary matrices that evolve with time (e.g., the votes of legislators on particular legislation, with each year characterized by a different such matrix). An objective of such analysis is to infer structure and inter-relationships underlying the matrices, here defined by latent features associated with each axis of the matrix. In addition, it is assumed that documents are available for the entities associated with at least one of the matrix axes. By jointly analyzing the matrices and documents, one may be used to inform the other within the analysis, and the model offers the opportunity to predict matrix values (e.g., votes) based only on an associated document (e.g., legislation). The research presented here merges two areas of machine learning that have previously been investigated separately: incomplete-matrix analysis and topic modeling. The analysis is performed from a Bayesian perspective, with efficient inference constituted via Gibbs sampling. The framework is demonstrated by considering all voting data and available documents (legislation) during the 220-year lifetime of the United States Senate and House of Representatives.

1 Introduction

There has been significant recent research on the analysis of incomplete matrices [10, 15, 1, 12, 13, 18]. Most analyses have been performed under the assumption that the matrix is real. There are interesting problems for which the matrices may be binary; for example, reflecting the presence/absence of links on nodes of a graph, or for analysis of data associated with a series of binary questions.
One may connect an underlying real matrix to binary (or, more generally, integer) observations via a probit or logistic link function; for example, such analysis has been performed in the context of analyzing legislative roll-call data [6]. A problem that has received less attention concerns the analysis of time-evolving matrices. The specific motivation of this paper involves binary questions in a legislative setting; we are interested in analyzing such data over many legislative sessions, and since the legislators change over time, it is undesirable to treat the entire set of votes as a single matrix. Each piece of legislation (question) is unique, but it is desirable to infer inter-relationships and commonalities over time. Similar latent groupings and relationships exist for the legislators. This general setting is also of interest for the analysis of more general social networks [8].

A distinct line of research has focused on the analysis of documents, with topic modeling constituting a popular framework [4, 2, 17, 3, 11]. Although the analysis of matrices and documents has heretofore been performed independently, there are many problems for which documents and matrices may be coupled. For example, in addition to a matrix of links between websites or email sender/recipient data, one also has access to the associated documents (website and email content). By analyzing the matrices and documents simultaneously, one may infer inter-relationships about each. For example, in a factor-based model of matrices [8], the associated documents may be used to relate matrix factors to topics/words, providing insight from the documents about the matrix, and vice versa.

To the authors' knowledge, this paper represents the first joint analysis of time-evolving matrices and associated documents. The analysis is performed using nonparametric Bayesian tools; for example, the truncated Dirichlet process [7] is used to jointly cluster latent topics and matrix features.
The framework is demonstrated through analysis of large-scale data sets. Specifically, we consider binary vote matrices from the United States Senate and House of Representatives, from the first congress in 1789 to the present. Documents of the legislation are available for the most recent 20 years, and those are also analyzed jointly with the matrix data. The quantitative predictive performance of this framework is demonstrated, as is the power of this setting for making qualitative assessments of large-scale and complex joint matrix-document data.

2 Modeling Framework

2.1 Time-evolving binary matrices

Assume we are given a set of binary matrices, {B_t}_{t=1,…,τ}, with B_t ∈ {0, 1}^{N_y^(t) × N_x^(t)}. The number of rows and columns, respectively N_y^(t) and N_x^(t), may vary with time. For example, for the legislative roll-call data considered below, time index t corresponds to year, and the number of pieces of legislation and legislators changes with time (e.g., for the historical data considered for the United States Congress, the number of states and hence legislators changes as the country has grown). Using a modeling framework analogous to that in [6], the binary matrix has a probit-model generative process, with B_t(i, j) = 1 if X_t(i, j) > 0, and B_t(i, j) = 0 otherwise, and the latent real matrix is defined as

X_t(i, j) = <y_i^(t), x_j^(t)> + β_i^(t) + α_j^(t) + ε_{i,j}^(t)    (1)

where <·, ·> denotes a vector inner product, and ε_{i,j}^(t) ∼ N(0, 1). The random effects are drawn β_i^(t) ∼ N(0, λ_β^{-1}) and α_j^(t) ∼ N(0, λ_α^{-1}), with λ_α ∼ µ_α δ_∞ + (1 − µ_α)Gamma(a, b) and λ_β ∼ µ_β δ_∞ + (1 − µ_β)Gamma(a, b); δ_∞ is a point measure at infinity, corresponding to there not being an associated random effect. The probability of whether there is a random effect is controlled by µ_β and µ_α, each of which is drawn from a beta distribution.
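A minimal generative sketch of the probit observation model in Eq. (1) may help make it concrete. This is not the authors' code: the random effects are passed in directly rather than drawn from their hierarchical priors, and the values below are purely illustrative.

```python
import numpy as np

def sample_vote_matrix(Y, X, beta, alpha, rng):
    """B(i,j) = 1 iff <y_i, x_j> + beta_i + alpha_j + eps_ij > 0, eps ~ N(0,1)."""
    latent = (Y @ X.T + beta[:, None] + alpha[None, :]
              + rng.normal(size=(len(Y), len(X))))
    return (latent > 0).astype(int)

rng = np.random.default_rng(1)
I, J, K = 5, 2, 3                      # 5 legislators, 2 bills, 3 latent factors
Y = np.zeros((I, K))                   # neutral legislators, to isolate alpha
X = np.zeros((J, K))
beta = np.zeros(I)
alpha = np.array([10.0, -10.0])        # one "easy yes" bill, one "easy no" bill
B = sample_vote_matrix(Y, X, beta, alpha, rng)
```

A large |α_j| swamps the legislator-specific terms and produces a near-unanimous column, which matches the "easy vote" interpretation of the random effect discussed next.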
Random effect α_j is motivated by our example application, for which the index j denotes a specific piece of legislation that is voted upon; this parameter reflects the "difficulty" of the vote. If |α_j^(t)| is large, then all people are likely to vote one way or the other (an "easy" vote), while if α_j^(t) is small, the details of the legislator (defined by y_i^(t)) and legislation (defined by x_j^(t)) strongly impact the vote. In previous political-science Bayesian analysis [6] researchers have simply set µ_β = 1 and µ_α = 0, but here we consider the model in a more general setting, and infer these relationships. Additionally, in previous Bayesian analysis [6] the dimensionality of y_i^(t) and x_j^(t) has been set (usually to one or two). In related probabilistic matrix factorization (PMF) applied to real matrices [15, 12], priors/regularizers are employed to constrain the dimensionality of the latent features. Here we employ the sparse binary vector b ∈ {0, 1}^K, with b_k ∼ Bernoulli(π_k), and π_k ∼ Beta(c/K, d(K − 1)/K), for K set to a large integer. By setting c and d appropriately, this favors that most of the components of b are zero (imposes sparseness). Specifically, by integrating out the {π_k}_{k=1,…,K}, one may readily show that the number of non-zero components in b is a random variable drawn from Binomial(K, c/(c + d(K − 1))), and the expected number of ones in b is cK/[c + d(K − 1)]. This is related to a draw from a truncated beta-Bernoulli process [16]. We consider two types of matrix axes. Specifically, we assume that each row corresponds to a person/entity that may be present for matrix t + 1 and matrix t. It is assumed here that each column corresponds to a question (in the examples, a piece of legislation), and each question is unique. Since the columns are each unique, we assume x_j^(t) = b ◦ x̂_j^(t), x̂_j^(t) ∼ N(0, γ_x^{-1} I_K), γ_x ∼ Gamma(e, f), where ◦ denotes the pointwise/Hadamard vector product.
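The expected sparsity of b can be verified by simulation. This sketch is not the authors' code; in the model b is drawn once, but here we draw it repeatedly (with illustrative hyperparameter values) to estimate the expectation cK/[c + d(K − 1)] stated above.

```python
import numpy as np

rng = np.random.default_rng(2)
K, c, d = 1000, 3.0, 2.0               # illustrative hyperparameter values
n_draws = 5000

# pi_k ~ Beta(c/K, d(K-1)/K) independently, then b_k ~ Bernoulli(pi_k)
pis = rng.beta(c / K, d * (K - 1) / K, size=(n_draws, K))
b = rng.random((n_draws, K)) < pis

expected = c * K / (c + d * (K - 1))   # cK / [c + d(K-1)] from the text
empirical = b.sum(axis=1).mean()       # average number of ones per draw
```

With these settings only about 1.5 of the K = 1000 components are active on average, showing how the beta-Bernoulli prior imposes sparseness.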
If the person/entity associated with the ith row at time t is introduced for the first time, its associated feature vector is similarly drawn: y_i^(t) = b ◦ ŷ_i^(t), ŷ_i^(t) ∼ N(0, γ_y^{-1} I_K), with γ_y ∼ Gamma(e, f). However, assuming y_i^(t) is already drawn (person/entity i is active prior to time t + 1), then a simple auto-regressive model is used to draw y_i^(t+1): y_i^(t+1) = b ◦ ŷ_i^(t+1), ŷ_i^(t+1) ∼ N(ŷ_i^(t), ξ^{-1} I_K), with ξ ∼ Gamma(g, h). The prior on ξ is set to favor small/smooth changes in the features of an individual in consecutive years.

This model constitutes a relatively direct extension of existing techniques for real matrices [15, 12]. Specifically, we have introduced a probit link function and a simple auto-regression construction to impose statistical correlation in the traits of a person/entity at consecutive times. The introduction of the random effects α_j and β_i has also not been considered within much of the machine-learning matrix-analysis literature, but the use of α_j is standard in political-science Bayesian models [6]. The principal modeling contribution of this paper concerns how one may integrate such a time-evolving binary-matrix model with associated documents.

2.2 Topic model

The manner in which the topic modeling is performed is a generalization of latent Dirichlet allocation (LDA) [4]. Assume that the documents of interest have words drawn from a vocabulary V = {w_1, . . . , w_V}. The kth topic is characterized by a distribution p_k on words (the "bag-of-words" assumption), where p_k ∼ Dir(α_V/V, . . . , α_V/V). The generative model draws {p_k}_{k=1,…,T} once for each of the T possible topics. Each document is characterized by a probability distribution on topics, where c_l ∼ Dir(α_T/T, . . . , α_T/T) corresponds to the distribution across T topics for document l. The generative process for drawing words for document l is to first (and once) draw c_l for document l.
For word i in document l, we draw a topic z_il ∼ Mult(c_l), and then the specific word is drawn from a multinomial with probability vector p_{z_il}. The above procedure is like standard LDA [4], with the difference manifested in how we handle the Dirichlet distributions Dir(α_V/V, . . . , α_V/V) and Dir(α_T/T, . . . , α_T/T). The Dirichlet distribution draws are constituted via Sethuraman's construction [14]; this allows us to place gamma priors on α_V and α_T while retaining conjugacy, permitting analytic Gibbs sampling (we therefore get a full posterior distribution for all model parameters, while most LDA implementations employ a point estimate for the document-dependent probabilities of topics). Specifically, the following hierarchical construction is used for draws from Dir(α_V/V, . . . , α_V/V) (and similarly for Dir(α_T/T, . . . , α_T/T)):

p_k = Σ_{h=1}^∞ a_h δ_{θ_h},   a_h = U_h Π_{n<h} (1 − U_n),   U_h ∼ Beta(1, α_V),   θ_h ∼ Σ_{w=1}^V (1/V) δ_w    (2)

The probability mass a_h is associated with component θ_h ∈ {1, . . . , V} of the probability vector. The infinite sum is truncated, analogous to the truncated stick-breaking representation of the Dirichlet process [9].

2.3 Joint analysis of matrices and documents

Section 2.1 discusses how we model time-evolving binary matrices, and Section 2.2 describes our procedure for implementing topic models. We now put these two models together. Specifically, we consider the case for which there is a document D_j^(t) of words associated with the jth column at time t; in our example below, this will correspond to the jth piece of legislation in year t. It is possible that we may have documents associated with the matrix rows as well (e.g., speeches for the ith legislator), but in our model development (and in our examples), documents are only assumed present for the columns.
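The truncated Sethuraman construction in Eq. (2) can be sketched as follows. The helper below is hypothetical (not the paper's code): it truncates at H sticks and assigns the leftover mass to the last stick, so the draw is only an approximation of the untruncated construction.

```python
import numpy as np

def dirichlet_draw_stick_breaking(alpha, V, H, rng):
    """Approximate p ~ Dir(alpha/V, ..., alpha/V) via Eq. (2):
    a_h = U_h * prod_{n<h}(1 - U_n) with U_h ~ Beta(1, alpha),
    atoms theta_h drawn uniformly from {1, ..., V}; truncated at H sticks."""
    U = rng.beta(1.0, alpha, size=H)
    U[-1] = 1.0                         # close the stick at the truncation level
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - U[:-1])))
    a = U * sticks                      # stick-breaking weights a_h
    theta = rng.integers(V, size=H)     # theta_h ~ sum_w (1/V) delta_w
    p = np.zeros(V)
    np.add.at(p, theta, a)              # accumulate mass on repeated atoms
    return p

rng = np.random.default_rng(3)
p = dirichlet_draw_stick_breaking(alpha=2.0, V=50, H=200, rng=rng)
```

Setting the last U_h to 1 forces the weights to sum to one at the truncation level, which is the standard device for truncated stick-breaking representations.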
For column j at time t, we have both a feature vector x_j^(t) (for the matrix) and a distribution on topics c_j^(t) (for the document D_j^(t)), and these are now coupled; the remainder of the matrix and topic models is unchanged. We define a set of atoms {c*_m, µ*_m, ζ*_m}_{m=1,…,M}. The atoms µ*_m are drawn from N(0, γ_x^{-1} I_K), again with a gamma prior placed on γ_x, and the ζ*_m are also drawn from a gamma distribution; the c*_m are drawn iid from Dir(α_T/T, . . . , α_T/T), using the Dirichlet distribution construction as above. To couple the pair (x_j^(t), c_j^(t)), we draw an indicator variable u_jt as

u_jt ∼ Σ_{m=1}^M b_m δ_m,   b_m = C_m Π_{i<m} (1 − C_i),   C_m ∼ Beta(1, η)    (3)

with a gamma prior again placed on η (and with C_M = 1). The pair (x_j^(t), c_j^(t)) is now defined by x_j^(t) = b ◦ x̂_j^(t), with x̂_j^(t) ∼ N(µ*_{u_jt}, (ζ*_{u_jt})^{-1} I_K). Further, c_j^(t) is set to c*_{u_jt}. This construction clusters the columns, with the clustering mechanism defined by a truncated stick-breaking representation of the Dirichlet process [9].

Figure 1: Graphical representation of the model, with the hyperparameters omitted for simplicity. The plates indicate replication, and the filled circle around B_t indicates it is observed.

The components {µ*_m, ζ*_m}_{m=1,…,M} define a
Further, since the topic and matrix models are constituted jointly, the topics themselves are defined as to be best matched to the characteristics of the matrix (vis-a-vis simply modeling the documents in isolation, which may yield topics that are not necessarily well connected to what matters for the matrices). A graphical representation of the model is shown in Figure 1. There are several extensions one may consider in future work. For example, for simplicity the GMM in column feature space is assumed time-independent. One may consider having a separate GMM for each time (year) t. Further, we have not explicitly imposed time-dependence in the topic model itself, and this may also be considered [2, 11]. For the examples presented below on real data, despite these simplifications, the model seems to perform well. 2.4 Computations The posterior distribution of all model parameters has been computed using Gibbs sampling; the detailed update equations are provided as supplemental material at http://sites.google. com/site/matrixtopics/. The first 1000 Gibbs iterations were discarded as burn-in followed by 500 collection iterations.The truncation levels on the model are T = 20, M = 10, K = 30, and the number of words in the vocabulary is V = 5249. Hyperparameters were set as a = b = e = f = 10−6, c = d = 1, g = 103, and h = 10−3. None of these parameters have been optimized, and “reasonable” related settings yield very similar results. We have performed joint matrix and text analysis considering the United States Congress voting records (and, when available, the document associated with the legislation); we consider both the House of Representatives (House) and Senate, from 1789-2008. Legislation documents and metadata (bill sponsorship, party affiliation of voters, etc.) are available for sessions 101–110 (19892008). 
For the legislation, stop words were removed using a common stopword list (the 514 stop words are posted at http://sites.google.com/site/matrixtopics/), and the corpus was stemmed using a Porter stemmer. These data are available from www.govtrack.us and from the Library of Congress thomas.loc.gov (votes, text and metadata), while the votes dating from 1789 are at voteview.com. A binary matrix is manifested by mapping all "affirmative" vote codes (e.g., "Yea", "Yes", "Present") to one, and "negative" codes (e.g., "Nay", "No", "Not Present") to zero. Not all legislators are present to vote on a given piece of legislation, and therefore missing data are manifested naturally. It varies from year to year, but typically 4% of the votes are missing in a given year.

We implemented our proposed model in non-optimized Matlab. Computations were performed on a PC with a 3.6GHz CPU and 4GB memory. A total of 11.5 hours of CPU time is required for analysis of Senate sessions 101–110 (1989–2008), and 34.6 hours for House sessions 101–110; in both cases, this corresponds to joint analysis of both votes and text (legislation). If we only analyze the votes, 15.5 hours of CPU time are required for Senate sessions 1–110 (1789–2008), and 62.1 hours for House sessions 1–110 respectively (the number of legislators in the House is over four times larger than that for the Senate).

3 Experiments

3.1 Joint analysis of documents and votes

We first consider the joint analysis of the legislation (documents) and votes in the Senate, for 1989–2008. A key aspect of this analysis is the clustering of the legislation, with legislation j at time t mapped to a cluster (mixture component), with each mixture component characterized by a distribution across latent topics c_j^(t), and a latent feature x_j^(t) for the associated matrix analysis (recall Section 2.3). Five dominant clusters were inferred for these data.
Since we are running a Gibbs sampler, and the cluster index in general changes between consecutive iterations (because the index is exchangeable), below we illustrate the nature of the clusters based upon the last Gibbs iteration. The dimensionality of the features was inferred to be ||b||_0 = 5 (on average, across the Gibbs collection), but two dimensions dominated for the legislation feature vectors x_j^{(t)}. In Figure 2 we present the inferred distributions of the five principal mixture components (clusters). The cluster index and the indices of the features are arbitrary; for example, we number the clusters from 1 to 5 for illustrative simplicity. In Figure 2 we depict the distribution of topics c_m^* associated with each of the five clusters, and in Figure 3 we list the ten most probable words associated with each of the topics. By examining the topic characteristics in Figure 3, and the cluster-dependent distribution of topics, we may assign words/topics to the latent features x_j^{(t)} that are linked to the associated matrix, and hence to the vote itself. For example, clusters 1 and 4, which are the most separated in latent space (top row in Figure 2), share a very similar support over topics (bottom row in Figure 2). These clusters appear to be associated with highly partisan topics, specifically taxes (topics 11 and 15) and health/Medicare/Social Security (topics 12 and 16), as can be seen by considering the topic-dependent words in Figure 3. Based upon the voting data and the party of the legislation sponsor (bill author), cluster 1 (red) appears to represent a Republican viewpoint on these topics, while cluster 4 (blue) appears to represent a Democratic viewpoint. This distinction will play an important role in predicting the votes on legislation based on the documents, as discussed below in Section 3.2.
In Figure 4 (last plot) we present the estimated density functions for the random-effect parameters β_i^{(t)} and α_j^{(t)} (estimated from the Gibbs collection iterations). Note that p(β) is much more tightly concentrated around zero than p(α). In the political science literature [6] (in which the legislation/documents have not been considered), researchers simply set β = 0, and therefore only assume random effects on the legislation, but not on the senators/congressmen. Our analysis appears to confirm that this simplification is reasonable.

3.2 Matrix prediction based on documents

There has been significant recent interest in the analysis of matrices, particularly in predicting matrix entries that are missing at random [10, 15, 1, 12, 13, 18]. In such collaborative-filtering research, the views of a subset of individuals on a movie, for example, help inform predictions on ratings of people who have not seen the movie (but a fraction of the people must have seen every movie). However, in the problem considered here, these previous models are not applicable: prediction of votes on a new piece of legislation L_N requires one to relate L_N to votes on previous legislation L_1, ..., L_{N-1}, but in the absence of any prior votes on L_N (this corresponds to estimating an entire column of the vote matrix). The joint analysis of text (legislation) and votes, however, offers the ability to relate L_N to L_1, ..., L_{N-1} by making connections via the underlying topics of the legislation (documents), even in the absence of any votes for L_N. To examine this predictive potential, we performed joint analysis on all votes and legislation (documents) in the US Senate from 1989-2007. This process yielded a model very similar to that summarized in Figures 2-4. Using this model, we predict votes on new legislation in 2008, based on the documents of the associated legislation (but using no vote information on this new legislation).
To do this, the mixture of topics learned from the 1989-2007 data is assumed fixed (each topic characterized by a distribution over words), and these fixed topics are used in the analysis of the documents from new legislation. In this manner, each of the new documents is mapped to one of the mixture-dependent distributions on topics {c_m^*}_{m=1,...,M}.

[Figure 2: Characteristics of the five principal mixture components (clusters) associated with Senate data, based upon joint analysis of the documents and the associated vote matrix. Top row: Principal two dimensions of the latent matrix features x_j^{(t)}, with the ellipses denoting the standard deviation about the mean of the five clusters. The points reflect specific legislation, with results shown for the 101st and 110th Congresses. The colors of the ellipses are linked to the colors of the topic distributions. Bottom row: Distribution of topics c_m^* for the five clusters (number indices arbitrary). T = 20 topics are considered, and each cluster is characterized by a distribution over topics c_m^* (bottom row), as well as an associated feature (top row) for the matrix.]

[Figure 3: Top-ten most probable words associated with the Senate-legislation topics, 1989-2008.
Topic 1: annual, research, economy, doe, food, sale, motor, crop, county, employee.
Topic 2: military, defense, this, product, expense, restore, public, annual, universal, independence.
Topic 3: fuel, transport, public, research, agriculture, export, electrical, forest, foreign, water.
Topic 4: military, defense, navy, air, guard, research, closure, naval, ndaa, bonus.
Topic 5: public, research, transport, annual, children, train, expense, law, student, organization.
Topic 6: law, violate, import, goal, bureau, commerce, registration, reform, risk, list.
Topic 7: defense, civilian, iraq, train, health, cost, foreign, environment, air, depend.
Topic 8: penalty, expense, health, drug, property, credit, public, work, medical, organization.
Topic 9: employee, public, cost, defense, domestic, work, inspect, bureau, tax, build.
Topic 10: foreign, law, terrorist, criminal, agriculture, justice, terror, engage, economy, crime.
Topic 11: tax, budget, annual, debtor, bankruptcy, foreign, taxpayer, credit, property, product.
Topic 12: tax, health, drug, medicaid, candidate, cost, children, aggregate, law, medical.
Topic 13: military, transportation, safety, air, defense, health, guard, annual, foreign, waste.
Topic 14: violence, victim, drug, alien, employee, visa, youth, penalty, criminal, minor.
Topic 15: tax, health, annual, cost, law, this, mail, financial, liability, loan.
Topic 16: medicare, tax, ssa, annual, deduct, hospital, parent, bankruptcy, debtor, male.
Topic 17: loan, environment, train, property, science, annual, law, transportation, high, five.
Topic 18: annual, health, this, public, defend, esea, product, fcc, carrier, columbia.
Topic 19: immigration, juvenile, firearm, sentence, alien, crime, dh, train, convict, prison.
Topic 20: alien, civil, parent, ha, immigrant, labor, criminal, free, term, petition.]
If a particular piece of legislation is mapped to cluster m (with the mapping based upon the words alone), it is then assumed that the latent matrix feature associated with the legislation is the associated cluster mean μ_m^* (learned via the modeling of the 1989-2007 data). Once this mapping of legislation to matrix latent space is achieved, and using the senator's latent feature vector y_i^{(t)} from 2007, we may readily compute <y_i^{(t)}, μ_m^*>, and via the probit link function the probability of a "yes" vote is quantified, for Senator i on new legislation L_N. This is the model in (1), with β_i^{(t)} = 0 and α_j^{(t)} = 0. Based upon Figure 4 (last plot), the approximation β_i^{(t)} = 0 is reasonable. The legislation-dependent random effect α_j^{(t)} is expected to be important for legislation for which most senators vote "yes" (large positive α_j^{(t)}) or "no" (large negative α_j^{(t)}).

[Figure 4: First four plots: Predicted probability of voting "Yes" given only the legislation text for 2008, based upon the model learned using vote-legislation data from 1989-2007 (panels show empirical vs. predicted voting frequency for clusters 1-4, with senators along each horizontal axis sorted by predicted probability; 102 senators, with 26, 43, 46, and 29 votes, respectively). The dots (colored by party affiliation) show the empirical voting frequencies for all legislation in the cluster, from 2008 (not used in the model). Only four clusters are utilized during session 2008, out of the five inferred by the model for the overall period 1989-2007. Last plot: Estimated log p(α) and log p(β). Note how p(β) is much more sharply peaked near zero.]

When testing the predictive quality of the model for the held-out year 2008, we assume α_j^{(t)} = 0 (since this parameter cannot be inferred without modeling the text and votes jointly, while for 2008 we are only modeling the documents); we therefore only test the model on legislation from 2008 for which fewer than 90% of the senators agreed, such legislation being assumed to correspond to small |α_j^{(t)}| (it is assumed that in practice it would be simple to determine whether a piece of legislation is likely to be near-unanimous "yes" or "no", and model-based prediction of votes for such legislation is therefore deemed less interesting). In Figure 4 we compare the predicted, probit-based probability of a given senator voting "yes" for legislation within clusters 1-4 (see Figure 2); the points in Figure 4 represent the empirical data for each senator, and the curve represents the predictions of the probit link function. These results are deemed to be remarkably good. In Figure 4, the senators along each horizontal axis are ordered according to the probability of voting "yes". One interesting issue that arises in this prediction concerns clusters 1 and 4 in Figure 2, and the associated predictions for the held-out year 2008, in Figure 4. Since the distributions of these clusters over topics are very similar, the documents alone cannot distinguish between clusters 1 and 4.
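The prediction step described above, with both random effects set to zero, reduces to an inner product between the senator's latent feature and the cluster mean, passed through a probit link. A minimal sketch (the latent vectors below are made-up values, not from the paper; Φ is computed via the error function):

```python
import math

def probit(z):
    """Standard normal CDF Phi(z), computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_yes(senator_feature, cluster_mean):
    """P(vote = 'yes') = Phi(<y_i, mu_m>), with beta and alpha set to 0."""
    score = sum(a * b for a, b in zip(senator_feature, cluster_mean))
    return probit(score)

# Illustrative (made-up) latent vectors:
y_i = [0.8, -0.2]   # senator's latent feature from 2007
mu_m = [1.0, 0.5]   # mean of the cluster the new legislation maps to
prob = p_yes(y_i, mu_m)
```

A larger inner product corresponds to a higher predicted probability of a "yes" vote, which is the monotone relationship visible in the Figure 4 curves.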
However, we also have the sponsor of each piece of legislation, and based upon the data from 1989-2007, if a piece of legislation from 2008 is mapped to either cluster 1 or 4, it is disambiguated based upon the party affiliation of the sponsor (cluster 1 is a Republican viewpoint on these topics, while cluster 4 is a Democratic viewpoint, based upon voting records from 1989-2007).

3.3 Time evolution of congressmen and legislation

The above joint analysis of text and votes was restricted to 1989-2008, since the documents (legislation) were only available for those years. However, the dataset contains votes on all legislation from 1789 to the present, and we now analyze the vote data from 1789-1988. Figure 5 shows snapshots in time of the latent space for voters and legislation, for the House of Representatives (similar results have been computed for the Senate, and are omitted for brevity; as supplemental material, at http://sites.google.com/site/matrixtopics/ we present movies of how legislation and congressmen evolve across all times, for both the House and Senate). Five features were inferred, with the two highest-variance features chosen for the axes. The blue symbols denote Democratic legislators, or legislation sponsored by a Democrat, and the red points correspond to Republicans. Results like these are of interest to political scientists, and allow examination of the degree of partisanship over time, for example.
[Figure 5: Congressmen (top) and legislation (bottom) in latent space for sessions 1-98 of the House of Representatives (snapshots shown for the years 1789-1790, 1939-1940, 1947-1948, 1963-1964, and 1983-1984; Democrats, Republicans, and others are distinguished by color). The Democrat/Republican separation is usually sharper than for the Senate, and frequently only the partisan information seems to matter. Note the gradual rotation of the red/blue axis. Best viewed electronically, zoomed-in.]

3.4 Additional quantitative tests

One may ask how well this model addresses the more classical problem of estimating the values of matrix data that are missing uniformly at random, in the absence of documents. To examine this question, we considered binary Senate vote data from 1989-2008, removed a fraction of the votes uniformly at random, and then used the proposed time-evolving matrix model to process the observed data and to compute the probability of a "yes" vote for all missing entries (via the probit link function). If the probability is larger than 0.5 the vote is set to "yes", and otherwise it is set to "no".
We compare our time-evolving model to [12], with the addition of a probit link function; for the latter we processed all 20 years as one large matrix, rather than analyzing time-evolving structure. Up to 40% missingness, the proposed model and the modified version of [12] performed almost identically, with an average probability of error (on the binary vote) of approximately 0.1. For greater than 40% missingness, the proposed time-evolving model manifested a "phase transition", and the probability of error increased smoothly up to 0.3 as the fraction of missing data rose to 80%; in contrast, the generalized model of [12] (with probit link) continued to yield a probability of error of about 0.1. The phase transition of the proposed model is likely manifested because the entire matrix is partitioned by year, with a linkage between years manifested via the Markov process between legislators (we do not analyze all of the data as one contiguous, large matrix). Such a phase transition is expected based on the theory in [5] when the fraction of missing data becomes large enough; since the size of the contiguous matrices analyzed by the time-evolving model is much smaller than that of the entire matrix, the phase transition is expected with less missingness than via analysis of the entire matrix at once. While the above results are of interest and deemed encouraging, such uniformly random missingness on matrix data alone is not the motivation of the proposed model. Rather, traditional matrix-analysis methods [10, 15, 1, 12, 13, 18] are incapable of predicting votes on new legislation based on the words alone (as in Figure 4), and such models do not allow analysis of the time-evolving properties of elements of the matrix, as in Figure 5.

4 Conclusions

A new model has been developed for the joint analysis of time-evolving matrices and associated documents.
To the authors' knowledge, this paper represents the first integration of research heretofore performed separately on topic models and on matrix analysis/completion. The model has been implemented efficiently via Gibbs sampling. A unique set of results is presented using data from the US Senate and House of Representatives, demonstrating the ability to predict votes on new legislation based only on the associated documents. The legislation data were considered principally because they were readily available and interesting in their own right; however, the proposed framework is of interest for many other problems. For example, the model is applicable to the analysis of time-evolving relationships between multiple entities, augmented by the presence of documents (e.g., links between websites, and the associated document content).

Acknowledgement

The research reported here was supported by the US Army Research Office, under grant W911NF-08-1-0182, and the Office of Naval Research under grant N00014-09-1-0212.

References

[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. A new approach to collaborative filtering: operator estimation with spectral regularization. J. Machine Learning Research, 2009.
[2] D. M. Blei and J. D. Lafferty. Dynamic topic models. Proceedings of the 23rd International Conference on Machine Learning, pages 113–120, 2006.
[3] D. M. Blei and J. D. Lafferty. A correlated topic model of science. The Annals of Applied Statistics, 1(1):17–35, 2007.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[6] J. Clinton, S. Jackman, and D. Rivers. The statistical analysis of roll call data. American Political Science Review, 2004.
[7] T. S. Ferguson. A Bayesian analysis of some nonparametric problems.
The Annals of Statistics, 1(2):209–230, 1973.
[8] P. D. Hoff. Multiplicative latent factor models for description and prediction of social networks. Computational and Mathematical Organization Theory, 2009.
[9] H. Ishwaran and L. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96:161–174, 2001.
[10] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent factors. In Advances in NIPS, pages 977–984, 2007.
[11] I. Pruteanu-Malinici, L. Ren, J. Paisley, E. Wang, and L. Carin. Hierarchical Bayesian modeling of topics in time-stamped documents. IEEE Trans. Pattern Analysis Mach. Intell., 2010.
[12] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization with MCMC. In Advances in NIPS, 2008.
[13] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in NIPS, 2008.
[14] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[15] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Advances in NIPS, 2005.
[16] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In International Conference on Artificial Intelligence and Statistics, 2007.
[17] H. M. Wallach. Topic modeling: beyond bag of words. Proceedings of the 23rd International Conference on Machine Learning, 2006.
[18] K. Yu, J. Lafferty, S. Zhu, and Y. Gong. Large-scale collaborative prediction using a nonparametric random effects model. In Proc. Int. Conf. Machine Learning, 2009.
Discriminative Clustering by Regularized Information Maximization Ryan Gomes gomes@vision.caltech.edu Andreas Krause krausea@caltech.edu Pietro Perona perona@vision.caltech.edu California Institute of Technology Pasadena, CA 91106 Abstract Is there a principled way to learn a probabilistic discriminative classifier from an unlabeled data set? We present a framework that simultaneously clusters the data and trains a discriminative classifier. We call it Regularized Information Maximization (RIM). RIM optimizes an intuitive information-theoretic objective function which balances class separation, class balance and classifier complexity. The approach can flexibly incorporate different likelihood functions, express prior assumptions about the relative size of different classes and incorporate partial labels for semi-supervised learning. In particular, we instantiate the framework to unsupervised, multi-class kernelized logistic regression. Our empirical evaluation indicates that RIM outperforms existing methods on several real data sets, and demonstrates that RIM is an effective model selection method. 1 Introduction Clustering algorithms group data items into categories without requiring human supervision or definition of categories. They are often the first tool used when exploring new data. A great number of clustering principles have been proposed, most of which can be described as either generative or discriminative in nature. Generative clustering algorithms provide constructive definitions of categories in terms of their geometric properties in a feature space or as statistical processes for generating data. Examples include k-means and Gaussian mixture model clustering. In order for generative clustering to be practical, restrictive assumptions must be made about the underlying category definitions. Rather than modeling categories explicitly, discriminative clustering techniques represent the boundaries or distinctions between categories. 
Fewer assumptions about the nature of categories are made, making these methods powerful and flexible in real world applications. Spectral graph partitioning [1] and maximum margin clustering [2] are example discriminative clustering methods. A disadvantage of existing discriminative approaches is that they lack a probabilistic foundation, making them potentially unsuitable in applications that require reasoning under uncertainty or in data exploration. We propose a principled probabilistic approach to discriminative clustering, by formalizing the problem as unsupervised learning of a conditional probabilistic model. We generalize the work of Grandvalet and Bengio [3] and Bridle et al. [4] in order to learn probabilistic classifiers that are appropriate for multi-class discriminative clustering, as explained in Section 2. We identify two fundamental, competing quantities, class balance and class separation, and develop an information theoretic objective function which trades off these quantities. Our approach corresponds to maximizing mutual information between the empirical distribution on the inputs and the induced label distribution, regularized by a complexity penalty. Thus, we call our approach Regularized Information Maximization (RIM). In summary, our contribution is RIM, a probabilistic framework for discriminative clustering with a number of attractive properties. Thanks to its probabilistic formulation, RIM is flexible: it is compatible with diverse likelihood functions and allows specification of prior assumptions about expected class proportions. We show how our approach leads to an efficient, scalable optimization procedure that also provides a means of automatic model selection (determination of the number of clusters). RIM is easily extended to semi-supervised classification. Finally, we show that RIM performs better than competing approaches on several real-world data sets.
2 Regularized Information Maximization

Suppose we are given an unlabeled dataset of N feature vectors (datapoints) X = (x_1, ..., x_N), where x_i = (x_{i1}, ..., x_{iD})^T ∈ R^D are D-dimensional vectors with components x_{id}. Our goal is to learn a conditional model p(y|x, W) with parameters W which predicts a distribution over label values y ∈ {1, ..., K} given an input vector x. Our approach is to construct a functional F(p(y|x, W); X, λ) which evaluates the suitability of p(y|x, W) as a discriminative clustering model. We then use standard discriminative classifiers such as logistic regression for p(y|x, W), and maximize the resulting function F(W; X, λ) over the parameters W. Here λ is an additional tuning parameter that is fixed during optimization. We are guided by three principles when constructing F(p(y|x, W); X, λ). The first is that the discriminative model's decision boundaries should not be located in regions of the input space that are densely populated with datapoints. This is often termed the cluster assumption [5], and also corresponds to the idea that datapoints should be classified with large margin. Grandvalet & Bengio [3] show that a conditional entropy term −(1/N) Σ_i H{p(y|x_i, W)} very effectively captures the cluster assumption when training probabilistic classifiers with partial labels. However, in the case of fully unsupervised learning this term alone is not enough to ensure sensible solutions, because conditional entropy may be reduced by simply removing decision boundaries, and unlabeled categories tend to be removed. We illustrate this in Figure 1 (left) with an example using the multilogit regression classifier as the conditional model p(y|x, W), which we will develop in Section 3. In order to avoid degenerate solutions, we incorporate the notion of class balance: we prefer configurations in which category labels are assigned evenly across the dataset.
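These two competing terms, class separation (low average conditional entropy) and class balance (high entropy of the empirical label distribution), combine into the mutual-information estimate of Eq. (1) below. A small numpy sketch (function names are ours) computes it directly from the N×K matrix of conditional class probabilities:

```python
import numpy as np

def entropy(p, axis=None):
    """Shannon entropy in nats, with 0*log(0) treated as 0."""
    p = np.asarray(p, dtype=float)
    terms = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    return -terms.sum(axis=axis)

def empirical_mutual_information(P):
    """I_W{y;x} = H(p_hat(y;W)) - (1/N) sum_i H(p(y|x_i,W)),
    for an N x K matrix P of conditional class probabilities."""
    p_hat = P.mean(axis=0)                  # empirical label distribution
    avg_cond = entropy(P, axis=1).mean()    # average conditional entropy
    return entropy(p_hat) - avg_cond

# Confident, balanced assignments maximize the estimate:
P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
empirical_mutual_information(P)   # = log(2)
```

A uniform matrix (every row 1/K) gives zero mutual information: the labels carry no information about the inputs.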
We define the empirical label distribution

p̂(y; W) = ∫ p̂(x) p(y|x, W) dx = (1/N) Σ_i p(y|x_i, W),

which is an estimate of the marginal distribution of y. A natural way to encode our preference towards class balance is to use the entropy H{p̂(y; W)}, because it is maximized when the labels are uniformly distributed. Combining the two terms, we arrive at

I_W{y; x} = H{p̂(y; W)} − (1/N) Σ_i H{p(y|x_i, W)}   (1)

which is the empirical estimate of the mutual information between x and y under the conditional model p(y|x, W). Bridle et al. [4] were the first to propose maximizing I_W{y; x} in order to learn probabilistic classifiers without supervision. However, they note that I_W{y; x} may be trivially maximized by a conditional model that classifies each data point x_i into its own category y_i, and that classifiers trained with this objective tend to fragment the data into a large number of categories, see Figure 1 (center). We therefore introduce a regularizing term R(W; λ) whose form will depend on the specific choice of p(y|x, W). This term penalizes conditional models with complex decision boundaries in order to yield sensible clustering solutions. Our objective function is

F(W; X, λ) = I_W{y; x} − R(W; λ)   (2)

and we therefore refer to our approach as Regularized Information Maximization (RIM), see Figure 1 (right). While we motivated this objective with notions of class balance and separation, our approach may be interpreted as learning a conditional distribution for y that preserves information from the data set, subject to a complexity penalty.

[Figure 1: Example unsupervised multilogit regression solutions on a simple dataset with three clusters, comparing Grandvalet & Bengio [3], Bridle et al. [4], and RIM. The top and bottom rows show the category label arg max_y p(y|x, W) and the conditional entropy H{p(y|x, W)} at each point x, respectively. We find that both class balance and regularization terms are necessary to learn unsupervised classifiers suitable for multi-class clustering.]

3 Example Application: Unsupervised Multilogit Regression

The RIM framework is flexible in the choice of p(y|x, W) and R(W; λ). As an example instantiation, we here choose multiclass logistic regression as the conditional model. Specifically, if K is the maximum number of classes, we choose

p(y = k|x, W) ∝ exp(w_k^T x + b_k)  and  R(W; λ) = λ Σ_k w_k^T w_k,   (3)

where the set of parameters W = {w_1, ..., w_K; b_1, ..., b_K} consists of weight vectors w_k and bias values b_k for each class k. Each weight vector w_k ∈ R^D is D-dimensional with components w_{kd}. The regularizer is the squared L2 norm of the weight vectors, and may be interpreted as an isotropic normal distribution prior on the weights W. The bias terms are not penalized. In order to optimize Eq. 2 specialized with Eq. 3, we require the gradients of the objective function. For clarity, we define p_ki ≡ p(y = k|x_i, W) and p̂_k ≡ p̂(y = k; W). The partial derivatives are

∂F/∂w_{kd} = (1/N) Σ_{ic} (∂p_ci/∂w_{kd}) log(p_ci/p̂_c) − 2λ w_{kd}  and  ∂F/∂b_k = (1/N) Σ_{ic} (∂p_ci/∂b_k) log(p_ci/p̂_c).   (4)

Naive computation of the gradient requires O(NK^2 D) operations, since there are K(D + 1) parameters and each derivative requires a sum over NK terms. However, the conditional probability derivatives for multi-logit regression take the form ∂p_ci/∂w_{kd} = (δ_kc − p_ci) p_ki x_{id} and ∂p_ci/∂b_k = (δ_kc − p_ci) p_ki, where δ_kc is equal to one when the indices k and c are equal, and zero otherwise. When these expressions are substituted into Eq.
4, we find the following expressions:

∂F/∂w_{kd} = (1/N) Σ_i x_{id} p_ki [ log(p_ki/p̂_k) − Σ_c p_ci log(p_ci/p̂_c) ] − 2λ w_{kd}   (5)

∂F/∂b_k = (1/N) Σ_i p_ki [ log(p_ki/p̂_k) − Σ_c p_ci log(p_ci/p̂_c) ]

Computing the gradient requires only O(NKD) operations, since the terms Σ_c p_ci log(p_ci/p̂_c) may be computed once and reused in each partial derivative expression. The above gradients are used in the L-BFGS [6] quasi-Newton optimization algorithm (we used Mark Schmidt's implementation at http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html). We find empirically that the optimization usually converges within a few hundred iterations. When specialized to multilogit regression, the objective function F(W; X, λ) is non-concave. Therefore the algorithm can only be guaranteed to halt at locally optimal stationary points of F. In Section 3.1, we explain how we can obtain an initialization that is robust against local optima.

[Figure 2: Demonstration of model selection on the toy problem from Figure 1. The algorithm is initialized with 50 category weight vectors w_k. Upon convergence, only three of the categories are populated with data examples. The negative bias terms of the unpopulated categories drive the unpopulated class probabilities p̂_k towards zero. The corresponding weight vectors w_k have norms near zero.]

3.1 Model Selection

Setting the derivatives (Eq. 5) equal to zero yields the following condition at stationary points of F:

w_k = Σ_i α′_ki x_i   (6)

where we have defined

α′_ki ≡ (1/(2λN)) p_ki [ log(p_ki/p̂_k) − Σ_c p_ci log(p_ci/p̂_c) ].   (7)

The L2 regularizing function R(W; λ) in Eq. 3 is additively composed of penalty terms associated with each category: w_k^T w_k = Σ_{ij} α′_ki α′_kj x_i^T x_j. It is instructive to observe the limiting behavior of the penalty term w_k^T w_k when datapoints are not assigned to category k; that is, when p̂_k = (1/N) Σ_i p_ki → 0.
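A compact numpy sketch of the objective (Eq. 2 with the L2 regularizer of Eq. 3) and the vectorized gradients of Eq. 5; this is our own illustration, not the authors' minFunc implementation, and it can be checked against finite differences:

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def rim_objective_and_grad(W, b, X, lam):
    """F(W) = H(p_hat) - (1/N) sum_i H(p_i) - lam * sum_k ||w_k||^2,
    with the gradients of Eq. 5 (biases are unpenalized)."""
    N = X.shape[0]
    P = softmax(X @ W.T + b)                      # N x K conditionals p_ki
    p_hat = P.mean(axis=0)                        # empirical label distribution
    F = (-(p_hat * np.log(p_hat)).sum()           # class-balance entropy
         + (P * np.log(P)).sum(axis=1).mean()     # minus avg. conditional entropy
         - lam * (W ** 2).sum())                  # L2 penalty on weights
    L = np.log(P / p_hat)                         # log(p_ki / p_hat_k)
    inner = (P * L).sum(axis=1, keepdims=True)    # sum_c p_ci log(p_ci/p_hat_c)
    A = P * (L - inner)                           # p_ki times bracket of Eq. 5
    grad_W = A.T @ X / N - 2.0 * lam * W
    grad_b = A.mean(axis=0)
    return F, grad_W, grad_b
```

In the paper these gradients feed an L-BFGS routine; one could equally hand `rim_objective_and_grad` (negated, since optimizers minimize) to a generic quasi-Newton solver.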
This implies that p_ki → 0 for all i, and therefore α′_ki → 0 for all i. Finally, w_k^T w_k = Σ_{ij} α′_ki α′_kj x_i^T x_j → 0. This means that the regularizing function does not penalize unpopulated categories. We find empirically that when we initialize with a large number of category weights w_k, many decay away, depending on the value of λ. Typically, as λ increases, fewer categories are discovered. This may be viewed as model selection (automatic determination of the number of categories), since the regularizing function and parameter λ may be interpreted as a form of prior on the weight parameters. The bias terms b_k are unpenalized and are adjusted during optimization to drive the class probabilities p̂_k arbitrarily close to zero for unpopulated classes. This is illustrated in Figure 2. This behavior suggests an effective initialization procedure for our algorithm. We first oversegment the data into a large number of clusters (using k-means or another suitable algorithm) and train a supervised multi-logit classifier using these cluster labels. (This initial classifier may be trained with a small number of L-BFGS iterations, since it only serves as a starting point.) We then use this classifier as the starting point for our RIM algorithm and optimize with different values of λ in order to obtain solutions with different numbers of clusters.

4 Example Application: Unsupervised Kernel Multilogit Regression

The stationary conditions have another interesting consequence. Equation 6 indicates that at stationary points, the weights are located in the span of the input datapoints. We use this insight as justification to define explicit coefficients α_ki and enforce the constraint w_k = Σ_i α_ki x_i during optimization. Substituting this equation into the multilogit regression conditional likelihood allows replacement of all inner products w_k^T x with Σ_i α_ki K(x_i, x), where K is a positive definite kernel function that evaluates the inner product x_i^T x.
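The kernel substitution just described, replacing w_k^T x with Σ_i α_ki K(x_i, x) and the weight-norm penalty with the corresponding RKHS penalty (Eq. 8 below), can be sketched in numpy; the RBF kernel choice and function names here are our own illustration:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kernel_conditionals(alpha, b, K):
    """p(y = k | x_i) proportional to exp(sum_j alpha_kj K(x_j, x_i) + b_k).
    alpha is C x N (C classes, N datapoints), b is length C, K is N x N."""
    scores = alpha @ K + b[:, None]               # C x N class scores
    scores = scores - scores.max(axis=0, keepdims=True)
    E = np.exp(scores)
    return (E / E.sum(axis=0, keepdims=True)).T   # N x C conditionals

def rkhs_penalty(alpha, K):
    """R(alpha) = sum_k sum_ij alpha_ki alpha_kj K(x_i, x_j)."""
    return float(((alpha @ K) * alpha).sum())
```

For a positive definite kernel the penalty is a sum of quadratic forms α_k^T K α_k and hence non-negative, mirroring the squared weight norms it replaces.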
The conditional model now has the form

\[
p(y = k \mid x, \alpha, b) \propto \exp\Big(\sum_i \alpha_{ki} K(x_i, x) + b_k\Big).
\]

Substituting the constraint into the regularizing function \sum_k w_k^T w_k yields a natural replacement of w_k^T w_k by the squared Reproducing Kernel Hilbert Space (RKHS) norm of the function \sum_i α_{ki} K(x_i, ·):

\[
R(\alpha) = \sum_k \sum_{ij} \alpha_{ki}\alpha_{kj} K(x_i, x_j). \tag{8}
\]

We use the L-BFGS algorithm to optimize the kernelized algorithm over the coefficients α_{ki} and biases b_k. The partial derivatives for the kernel coefficients are

\[
\frac{\partial F}{\partial \alpha_{kj}} = \frac{1}{N}\sum_i K(x_j, x_i)\, p_{ki}\left(\log\frac{p_{ki}}{\hat p_k} - \sum_c p_{ci}\log\frac{p_{ci}}{\hat p_c}\right) - 2\lambda \sum_i \alpha_{ki} K(x_j, x_i)
\]

and the derivatives for the biases are unchanged. The gradient of the kernelized algorithm requires O(KN^2) operations to compute. Kernelized unsupervised multilogit regression exhibits the same model selection behavior as the linear algorithm.

5 Extensions

We now discuss how RIM can be extended to semi-supervised classification, and to encode prior assumptions about class proportions.

5.1 Semi-supervised Classification

In semi-supervised classification, we assume that there are unlabeled examples X^U = {x^U_1, ..., x^U_N} as well as labeled examples X^L = {x^L_1, ..., x^L_M} with labels Y = {y_1, ..., y_M}. We again use the mutual information I_W{y; x} (Eq. 1) to define the relationship between unlabeled points and the model parameters, but we incorporate an additional parameter τ which defines the tradeoff between labeled and unlabeled examples. The conditional likelihood is incorporated for labeled examples to yield the semi-supervised objective:

\[
S(W; \tau, \lambda) = \tau I_W\{y; x\} - R(W; \lambda) + \sum_i \log p(y_i \mid x^L_i, W)
\]

The gradient is computed and again used in the L-BFGS algorithm in order to optimize this combined objective. Our approach is related to the objective in [3], which does not contain the class balance term H{\hat p(y; W)}.

5.2 Encoding Prior Beliefs about the Label Distribution

So far, we have motivated our choice of the objective function F through the notion of class balance.
However, in many classification tasks, different classes have different numbers of members. In the following, we show how RIM allows flexible expression of prior assumptions about non-uniform class label proportions. First, note that the following basic identity holds:

\[
H\{\hat p(y; W)\} = \log(K) - KL\{\hat p(y; W)\,\|\,U\} \tag{9}
\]

where U is the uniform distribution over the set of labels {1, ..., K}. Substituting the identity and then dropping the constant log(K) yields another interpretation of the objective:

\[
F(W; X, \lambda) = -\frac{1}{N}\sum_i H\{p(y \mid x_i, W)\} - KL\{\hat p(y; W)\,\|\,U\} - R(W; \lambda). \tag{10}
\]

The term -KL{\hat p(y; W) || U} is maximized when the average label distribution is uniform. We can capture prior beliefs about the average label distribution by substituting a reference distribution D(y; γ) in place of U (γ is a parameter that may be fixed or optimized during learning). [7] also use relative entropy as a means of enforcing prior beliefs, although not with respect to class distributions in multi-class classification problems. This construction may be used in a clustering task in which we believe that the cluster sizes obey a power law distribution, as considered for example by [8], who use the Pitman-Yor process for nonparametric language modeling. Simple manipulation yields the following objective:

\[
F(W; X, \lambda, \gamma) = I_W\{x; y\} - H\{\hat p(y; W)\,\|\,D(y; \gamma)\} - R(W; \lambda)
\]

where H{\hat p(y; W) || D(y; γ)} is the cross entropy -\sum_k \hat p(y = k; W) \log D(y = k; γ). We therefore find that label distribution priors may be incorporated using an additional cross entropy regularization term.
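Both the identity in Eq. 9 and the cross-entropy view of the prior term are easy to verify numerically; a small sketch (the example distributions below are arbitrary):

```python
import numpy as np

def entropy(p):
    return float(-(p * np.log(p)).sum())

def kl(p, q):
    return float((p * np.log(p / q)).sum())

K = 5
phat = np.array([0.4, 0.3, 0.15, 0.1, 0.05])      # an average label distribution
U = np.full(K, 1.0 / K)                           # uniform reference
# Identity (9): H(phat) = log K - KL(phat || U)
assert abs(entropy(phat) - (np.log(K) - kl(phat, U))) < 1e-12
# Cross entropy to a reference D decomposes as H(phat) + KL(phat || D),
# so maximizing -H(phat || D) pulls the label marginal towards D.
D = np.array([0.5, 0.25, 0.125, 0.0625, 0.0625])  # e.g. a power-law-like prior
cross = float(-(phat * np.log(D)).sum())
assert abs(cross - (entropy(phat) + kl(phat, D))) < 1e-12
```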
Figure 3: Unsupervised clustering: Adjusted Rand Index (relative to ground truth) versus number of clusters for RIM, k-means, MMC and SC, on Caltech Images (left), D&D graphs (center) and NCI109 graphs (right).

6 Experiments

We empirically evaluate our RIM approach on several real data sets, in both fully unsupervised and semi-supervised configurations.

6.1 Unsupervised Learning

Kernelized RIM is initialized according to the procedure outlined in Section 3.1 and run until L-BFGS converges. Unlabeled examples are then clustered according to arg max_k p(y = k | x, W). We compare RIM against the spectral clustering (SC) algorithm of [1], the fast maximum margin clustering (MMC) algorithm of [9], and kernelized k-means [10]. MMC is a binary clustering algorithm; we use the recursive scheme outlined by [9] to extend the approach to multiple categories. The MMC algorithm requires an initial clustering estimate for initialization, and we use SC to provide this. We evaluate unsupervised clustering performance in terms of how well the discovered clusters reflect known ground truth labels of the dataset. We report the Adjusted Rand Index (ARI) [11] between an inferred clustering and the ground truth categories. ARI has a maximum value of 1 when two clusterings are identical. We evaluated a number of other measures for comparing clusterings to ground truth, including mutual information, normalized mutual information [12], and cluster impurity [13]. We found that the relative rankings of the algorithms were the same as indicated by ARI. We evaluate the performance of each algorithm while varying the number of clusters that are discovered, and we plot ARI for each setting.
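For reference, ARI can be computed from the contingency table of the two clusterings; a self-contained sketch:

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index [11]: 1 for identical partitions (up to relabelling),
    approximately 0 in expectation for independent random labellings."""
    a_vals, a = np.unique(labels_a, return_inverse=True)
    b_vals, b = np.unique(labels_b, return_inverse=True)
    C = np.zeros((len(a_vals), len(b_vals)), dtype=int)  # contingency table
    for i, j in zip(a, b):
        C[i, j] += 1
    n = int(C.sum())
    sum_ij = sum(comb(int(v), 2) for v in C.ravel())
    sum_a = sum(comb(int(v), 2) for v in C.sum(axis=1))
    sum_b = sum(comb(int(v), 2) for v in C.sum(axis=0))
    expected = sum_a * sum_b / comb(n, 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

For example, adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]) returns 1.0, since ARI is invariant to relabelling of the clusters.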
For SC and k-means, the number of clusters is given as an input parameter. MMC is evaluated at {2, 4, 8, ...} clusters (powers of two, due to the recursive scheme). For RIM, we sweep the regularization parameter λ and allow the algorithm to discover the final number of clusters.

Image Clustering. We test the algorithms on an image clustering task with 350 images from each of four Caltech-256 [14] categories (Faces-Easy, Motorbikes, Airplanes, T-Shirt), for a total of N = 1400 images. We use the Spatial Pyramid Match kernel [15] computed between every pair of images. We sweep RIM's λ parameter across [0.125/N, 4/N]. The results are summarized in Figure 3. Overall, the clusterings that best match ground truth are given by RIM when it discovers four clusters. We find that RIM outperforms both SC and MMC at all settings. RIM outperforms kernelized k-means when discovering between 4 and 8 clusters; their performances are comparable for other numbers of clusters. Figure 4 shows example images taken from clusters discovered by RIM. Our RIM implementation takes approximately 110 seconds per run on the Caltech Images dataset on a quad core Intel Xeon server. SC requires 38 seconds per run, while MMC requires 44-51 seconds per run depending on the number of clusters specified.

Molecular Graph Clustering. We further test RIM's unsupervised learning performance on two molecular graph datasets. D&D [16] contains N = 1178 protein structure graphs with binary ground truth labels indicating whether or not they function as enzymes. NCI109 [17] is composed of N = 4127 compounds labeled according to whether or not they are active in an anti-cancer screening. We use the subtree kernel developed by [18] with a subtree height of 1. For D&D, we sweep RIM's λ parameter through the range [0.001/N, 0.05/N], and for NCI109 we sweep through the interval [0.001/N, 1/N]. Results are summarized in Figure 3 (center and right).
We find that, of all methods, RIM produces the clusterings that are nearest to ground truth (when discovering 2 clusters for D&D and 5 clusters for NCI109). RIM outperforms both SC and MMC at all settings. RIM has the advantage over k-means when discovering a small number of clusters and is comparable at other settings. On NCI109, RIM required approximately 10 minutes per run. SC required approximately 13 minutes, while MMC required on average 18 minutes per run.

Figure 4: Left: Randomly chosen example images from clusters discovered by unsupervised RIM on Caltech Images. Right: Semi-supervised learning on Caltech Images (test accuracy versus number of labeled examples for Grandvalet & Bengio, a supervised baseline, and RIM).

Figure 5: Left: Tetrode dataset average waveform. Right: The waveform with the most uncertain cluster membership according to the classifier learned by RIM.

Neural Tetrode Recordings. We demonstrate RIM on a large scale data set of 319,209 neural activity waveforms recorded from four co-located electrodes implanted in the hippocampus of a behaving rat. The waveforms are composed of 38 samples from each of the four electrodes and are the output of a neural spike detector which aligns signal peaks to the 13th sample; see the average waveform in Figure 5 (left). We concatenate the samples into a single 152-dimensional vector and preprocess by subtracting the mean waveform and dividing each vector component by its variance. We use the linear RIM algorithm given in Section 3, initialized with 100 categories. We set λ to 4/N; RIM discovers 33 clusters and finishes in 12 minutes. There is no ground truth available for this dataset, but we use it to demonstrate RIM's efficacy as a data exploration tool. Figure 6 shows two clusters discovered by RIM.
The top row consists of cluster member waveforms superimposed on each other, with the cluster's mean waveform plotted in red. We find that the clustered waveforms have substantial similarity to each other. Taken as a whole, the clusters give an idea of the typical waveform patterns. The bottom row shows the learned classifier's discriminative weights w_k for each category, which can be used to gain a sense of how the cluster's members differ from the dataset mean waveform. We can use the probabilistic classifier learned by RIM to discover atypical waveforms by ranking them according to their conditional entropy H{p(y | x_i, W)}. Figure 5 (right) shows the waveform whose cluster membership is most uncertain.

Figure 6: Two clusters discovered by RIM on the Tetrode data set. Top row: Superimposed waveform members, with cluster mean in red. Bottom row: The discriminative category weights w_k associated with each cluster.

6.2 Semi-supervised Classification

We test our semi-supervised classification method described in Section 5.1 against [3] on the Caltech Images dataset. The methods were trained using both unlabeled and labeled examples, and classification performance is assessed on the unlabeled portion. As a baseline, a supervised classifier was trained on labeled subsets of the data and tested on the remainder. Parameters were selected via cross-validation on a subset of the labeled examples. The results are summarized in Figure 4. We find that both semi-supervised methods significantly improve classification performance relative to the supervised baseline when the number of labeled examples is small. Additionally, we find that RIM outperforms Grandvalet & Bengio. This suggests that incorporating prior knowledge about class size distributions (in this case, we use a uniform prior) may be useful in semi-supervised learning.
7 Related Work

Our work has connections to existing work in both unsupervised learning and semi-supervised classification.

Unsupervised Learning. The information bottleneck method [19] learns a conditional model p(y|x) in which the labels y form a lossy representation of the input space x, while preserving information about a third "relevance" variable z. The method maximizes I(y; z) - λI(x; y), whereas we maximize the information between y and x while constraining complexity with a parametric regularizer. The method of [20] aims to maximize a similarity measure computed between members within the same cluster while penalizing the mutual information between the cluster label y and the input x. Again, mutual information is used to enforce a lossy representation of y given x. Song et al. [22] also view clustering as maximization of the dependence between the input variable and the output label variable. They use the Hilbert-Schmidt Independence Criterion as a measure of dependence, whereas we use mutual information. There is also an unsupervised variant of the Support Vector Machine, called max-margin clustering. Like our approach, the works of [2] and [21] use notions of class balance, separation, and regularization to learn unsupervised discriminative classifiers. However, they are formulated in the max-margin framework rather than our probabilistic approach; ours appears more amenable to incorporating prior beliefs about the class labels. Unsupervised SVMs are solutions to a convex relaxation of a non-convex problem, while we directly optimize our non-convex objective. The semidefinite programming methods they require are much more expensive than our approach.

Semi-supervised Classification. Our semi-supervised objective is related to [3], as discussed in Section 5.1. Another semi-supervised method [23] uses mutual information as a regularizing term to be minimized, in contrast to ours, which attempts to maximize mutual information.
The assumption underlying [23] is that any information between the label variable and unlabeled examples is an artifact of the classifier and should be removed. Our method encodes the opposite assumption: there may be variability (e.g. new class label values) not captured by the labeled data, since it is incomplete.

8 Conclusions

We considered the problem of learning a probabilistic discriminative classifier from an unlabeled data set. We presented Regularized Information Maximization (RIM), a probabilistic framework for tackling this challenge. Our approach consists of optimizing an intuitive information theoretic objective function that incorporates class separation, class balance and classifier complexity, and which may be interpreted as maximizing the mutual information between the empirical input and implied label distributions. The approach is flexible in that it allows consideration of different likelihood functions. It also naturally allows expression of prior assumptions about expected label proportions by means of a cross entropy with respect to a reference distribution. Our framework allows natural incorporation of partial labels for semi-supervised learning. In particular, we instantiate the framework as unsupervised, multi-class kernelized logistic regression. Our empirical evaluation indicates that RIM outperforms existing methods on several real data sets, and demonstrates that RIM is an effective model selection method.

Acknowledgements

We thank Alex Smola for helpful comments and discussion, and Thanos Siapas for providing the neural tetrode data. This research was partially supported by NSF grant IIS-0953413, a gift from Microsoft Corporation, and ONR MURI Grant N00014-06-1-0734.

References

[1] A. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2001.

[2] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, 2005.

[3] Y. Grandvalet and Y. Bengio.
Semi-supervised learning by entropy minimization. In NIPS, 2004.

[4] J. S. Bridle, A. J. R. Heading, and D. J. C. MacKay. Unsupervised classifiers, mutual information and 'phantom targets'. In NIPS, pages 1096-1101, 1992.

[5] O. Chapelle and A. Zien. Semi-supervised classification by low density separation, September 2004.

[6] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503-528, 1989.

[7] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In NIPS, 1999.

[8] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In ACL, 2006.

[9] K. Zhang, I. W. Tsang, and J. T. Kwok. Maximum margin clustering made practical. In ICML, 2007.

[10] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.

[11] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2:193-218, 1985.

[12] A. Strehl and J. Ghosh. Cluster ensembles: a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583-617, 2002.

[13] Y. Chen, J. Z. Wang, and R. Krovetz. CLUE: cluster-based retrieval of images by unsupervised learning. IEEE Trans. Image Processing, 14(8):1187-1201, 2005.

[14] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.

[15] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.

[16] P. D. Dobson and A. J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. J. Mol. Biol., 330:771-783, 2003.

[17] N. Wale and G. Karypis.
Comparison of descriptor spaces for chemical compound retrieval and classification. In ICDM, pages 678-689, 2006.

[18] N. Shervashidze and K. M. Borgwardt. Fast subtree kernels on graphs. In NIPS, 2010.

[19] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. CoRR, physics/0004057, 2000.

[20] N. Slonim, G. S. Atwal, G. Tkacik, and W. Bialek. Information-based clustering. Proc. Natl. Acad. Sci. USA, 102(51):18297-18302, 2005.

[21] F. Bach and Z. Harchaoui. DIFFRAC: a discriminative and flexible framework for clustering. In NIPS, 2007.

[22] L. Song, A. Smola, A. Gretton, and K. M. Borgwardt. A dependence maximization view of clustering. In ICML, pages 815-822, 2007.

[23] A. Corduneanu and T. Jaakkola. On information regularization. In UAI, 2003.
2010
Learning to localise sounds with spiking neural networks

Dan F. M. Goodman
Département d'Études Cognitives, École Normale Supérieure
29 Rue d'Ulm, Paris 75005, France
dan.goodman@ens.fr

Romain Brette
Département d'Études Cognitives, École Normale Supérieure
29 Rue d'Ulm, Paris 75005, France
romain.brette@ens.fr

Abstract

To localise the source of a sound, we use location-specific properties of the signals received at the two ears, caused by the asymmetric filtering of the original sound by our head and pinnae: the head-related transfer functions (HRTFs). These HRTFs change throughout an organism's lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back.

Keywords: Auditory Perception & Modeling (Primary); Computational Neural Models, Neuroscience, Supervised Learning (Secondary)

1 Introduction

For many animals, it is vital to be able to quickly locate the source of an unexpected sound, for example to escape a predator or to locate prey. For humans, localisation cues are also used to isolate a speaker in a noisy environment. Psychophysical studies have shown that source localisation relies on a variety of acoustic cues, such as interaural time and level differences (ITDs and ILDs) and spectral cues (Blauert, 1997).
These cues are highly dependent on the geometry of the head, body and pinnae, and can change significantly during an animal's lifetime, notably during its development but also in mature animals (which are known to be able to adapt to these changes; see for example Hofman et al., 1998). Previous neural models addressed the mechanisms of cue extraction, in particular the neural mechanisms underlying ITD sensitivity, using simplified binaural stimuli such as tones or noise bursts with artificially induced ITDs (Colburn, 1973; Reed and Blum, 1990; Gerstner et al., 1996; Harper and McAlpine, 2004; Zhou et al., 2005; Liu et al., 2008), but did not address the problem of learning to localise natural sounds in realistic acoustical environments.

Since the physical laws of sound propagation are linear, the sound S produced by a source is received at any point x of an acoustical environment as a linearly filtered version F_x * S (linear convolution), where the filter is specific to the location x of the listener, the location of the source, and the acoustical environment (ground, walls, objects, etc.). For binaural hearing, the acoustical environment includes the head, body and pinnae, and the sounds received at the two ears are F_L * S and F_R * S, where (F_L, F_R) is a pair of location-specific filters. Because the two sounds originate from the same signal, the binaural stimulus has a specific structure, which should result in synchrony patterns in the encoding neurons. Specifically, we modelled the response of monaural neurons by a linear filtering of the sound followed by a spiking nonlinearity. Two neurons A and B responding to two different sides (left and right), with receptive fields N_A and N_B, transform the signals N_A * F_L * S and N_B * F_R * S into spike trains. Thus, synchrony between A and B occurs whenever N_A * F_L = N_B * F_R, i.e., for a specific set of filter pairs (F_L, F_R).
Thus, in our model, sounds presented at a given location induce specific synchrony patterns, which then activate a specific assembly of postsynaptic neurons (coincidence detection neurons), in a way that is independent of the source signal (see Goodman and Brette, in press). Learning a new location consists in assigning a label to the activated assembly, using a teacher signal (for example, visual input). We used measured human HRTFs to generate binaural signals at different source locations from a set of various sounds. These signals were used to train the model, and we tested the localisation accuracy with new sounds. After learning, the model was able to accurately locate unknown sounds in both azimuth and elevation.

2 Methods

2.1 Virtual acoustics

Sound sources used were: broadband white noise; recordings of instruments and voices from the RWC Music Database (http://staff.aist.go.jp/m.goto/RWC-MDB/); and recordings of vowel-consonant-vowel sounds (Lorenzi et al., 1999). All sounds were of 1 second duration and were presented at 80 dB SPL. Sounds were filtered by head-related impulse responses (HRIRs) from the IRCAM LISTEN HRTF Database (http://recherche.ircam.fr/equipes/salles/listen/index.html). This database includes 187 approximately evenly spaced locations at all azimuths in 15 degree increments (except for high elevations) and elevations from -45 to 90 degrees in 15 degree increments. HRIRs from this and other databases do not provide sufficiently accurate timing information at frequencies below around 150 Hz, and so subsequent cochlear filtering was restricted to frequencies above this point.

2.2 Mathematical principle

Consider two sets of neurons which respond monaurally to sounds from the left ear and from the right ear by filtering sounds through a linear filter N (modelling their receptive field, corresponding to cochlear and neural transformations on the pathway between the ear and the neuron), followed by spiking. Each neuron has a different filter.
Spiking is modelled by an integrate-and-fire description or some other spiking model. Consider two neurons A and B which respond to sounds from the left and right ear, respectively. When a sound S is produced by a source at a given location, it arrives at the two ears as the binaural signal (F_L * S, F_R * S) (convolution), where (F_L, F_R) is the location-specific pair of acoustical filters. The filtered inputs to the two spiking models A and B are then N_A * F_L * S and N_B * F_R * S. These will be identical for any sound S whenever N_A * F_L = N_B * F_R, implying that the two neurons fire synchronously. For each location, indicated by its filter pair (F_L, F_R), we define the synchrony pattern as the set of binaural pairs of neurons (A, B) such that N_A * F_L = N_B * F_R. This pattern is location-specific and independent of the source signal S. Therefore, the identity of the synchrony pattern induced by a binaural stimulus indicates the location of the source. Learning consists in assigning a synchrony pattern induced by a sound to the location of the source.

To have a better idea of these synchrony patterns, consider a pair of filters (F*_L, F*_R) that corresponds to a particular location x (azimuth, elevation, distance, and possibly also the position of the listener in the acoustical environment), and suppose neuron A has receptive field N_A = F*_R and neuron B has receptive field N_B = F*_L. Then neurons A and B fire in synchrony whenever F*_R * F_L = F*_L * F_R, in particular when F_L = F*_L and F_R = F*_R, that is, at location x (since convolution is commutative). More generally, if U is a band-pass filter and the receptive fields of neurons A and B are U * F*_R and U * F*_L, respectively, then the neurons fire synchronously at location x. The same property applies if a nonlinearity (e.g. compression) is applied after filtering. If the bandwidth of U is very small, then U * F*_R is essentially the filter U followed by a delay and gain.
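The synchrony condition rests only on the commutativity and associativity of convolution, which can be illustrated numerically; in this sketch, random FIR filters stand in for the acoustical filters and receptive fields:

```python
import numpy as np

rng = np.random.default_rng(0)
FL, FR = rng.normal(size=8), rng.normal(size=8)   # location-specific filter pair
U = rng.normal(size=16)                           # shared band-limiting kernel
NA = np.convolve(U, FR)                           # receptive field of neuron A: U * F_R
NB = np.convolve(U, FL)                           # receptive field of neuron B: U * F_L
for _ in range(3):                                # ...and for ANY source signal S:
    S = rng.normal(size=200)
    left = np.convolve(NA, np.convolve(FL, S))    # N_A * F_L * S (left-ear pathway)
    right = np.convolve(NB, np.convolve(FR, S))   # N_B * F_R * S (right-ear pathway)
    assert np.allclose(left, right)               # identical drive -> synchronous firing
```

Since N_A * F_L = U * F_R * F_L = U * F_L * F_R = N_B * F_R, the two pathways produce identical inputs regardless of S; a different filter pair (F_L, F_R) would break the equality.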
Therefore, to represent all possible locations in pairs of neuron filters, we consider that the set of neural transformations N is a bank of band-pass filters followed by a set of delays and gains. To decode synchrony patterns, we define a set of binaural neurons which receive input spike trains from monaural neurons on both sides (two inputs per neuron). A binaural neuron responds preferentially when its two inputs are synchronous, so that synchrony patterns are mapped to assemblies of binaural neurons. Each location-specific assembly is the set of binaural neurons for which the input neurons fire synchronously at that location. This is conceptually similar to the Jeffress model (Jeffress, 1948), where a neuron is maximally activated when acoustical and axonal delays match, and to related models (Lindemann, 1986; Gaik, 1993). However, the Jeffress model is restricted to azimuth estimation, and it is difficult to implement it directly with neuron models because ILDs always co-occur with ITDs and disturb spike timing.

2.3 Implementation with spiking neuron models

Figure 1: Implementation of the model. The source signal arrives at the two ears after acoustical filtering by HRTFs. The two monaural signals are filtered by a set of gammatone filters γ_i with central frequencies between 150 Hz and 5 kHz (cochlear filtering). In each band (3 bands shown, between dashed lines), various gains and delays are applied to the signal (neural filtering F^L_j and F^R_j), and spiking neuron models transform the resulting signals into spike trains, which converge from each side on a coincidence detector neuron (same neuron model). The neural assembly corresponding to a particular location is the set of coincidence detector neurons for which their input neurons fire in synchrony at that location (one pair for each frequency channel).
The overall structure and architecture of the model is illustrated in Figure 1. All programming was done in the Python programming language, using the "Brian" spiking neural network simulator package (Goodman and Brette, 2009). Simulations were performed on Intel Core i7 processors. The largest model involved approximately one million neurons.

Cochlear and neural filtering. Head-filtered sounds were passed through a bank of fourth-order gammatone filters with center frequencies distributed on the ERB scale (central frequencies from 150 Hz to 5 kHz), modelling cochlear filtering (Glasberg and Moore, 1990). Linear filtering was carried out in parallel with a custom algorithm designed for large filterbanks (around 30,000 filters in our simulations). Gains and delays were then applied, with delays of at most 1 ms and gains of at most ±10 dB.

Neuron model. The filtered sounds were half-wave rectified and compressed by a 1/3 power law, I = k([x]^+)^{1/3} (where x is the sound pressure in pascals). The resulting signal was used as an input current to a leaky integrate-and-fire neuron with noise. The membrane potential V evolves according to the equation

\[
\tau_m \frac{dV}{dt} = V_0 - V + I(t) + \sigma\sqrt{2\tau_m}\,\xi(t)
\]

where τ_m is the membrane time constant, V_0 is the resting potential, ξ(t) is Gaussian noise (such that ⟨ξ(t) ξ(s)⟩ = δ(t − s)) and σ is the standard deviation of the membrane potential in the absence of spikes. When V crosses the threshold V_t, a spike is emitted and V is reset to V_r and held there for an absolute refractory period t_refrac.

Table 1: Neuron model parameters

Parameter   Value                              Description
V_r         -60 mV                             Reset potential
V_0         -60 mV                             Resting potential
V_t         -50 mV                             Threshold potential
t_refrac    5 ms (0 ms for binaural neurons)   Absolute refractory period
σ           1 mV                               Standard deviation of membrane potential due to noise
τ_m         1 ms                               Membrane time constant
W           5 mV                               Synaptic weight for coincidence detectors
k           0.2 V/Pa^{1/3}                     Acoustic scaling constant
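A minimal Euler-scheme simulation of this neuron model (our own discretization, not the paper's Brian code, with the Table 1 values as defaults; I is the already-compressed input expressed in volts):

```python
import numpy as np

def lif_spike_times(I, dt=1e-4, tau_m=1e-3, V0=-60e-3, Vr=-60e-3, Vt=-50e-3,
                    sigma=1e-3, t_refrac=5e-3, seed=0):
    """Euler simulation of tau_m dV/dt = V0 - V + I(t) + sigma*sqrt(2*tau_m)*xi(t),
    with threshold Vt, reset Vr and absolute refractory period t_refrac.
    I is a sampled input trace; returns spike times in seconds."""
    rng = np.random.default_rng(seed)
    V, refrac, spikes = V0, 0.0, []
    # discretizing sigma*sqrt(2*tau_m)*xi(t) over a step dt gives this scale,
    # which makes the stationary membrane-potential std equal to sigma
    noise = sigma * np.sqrt(2.0 * dt / tau_m)
    for n, In in enumerate(I):
        if refrac > 0.0:
            refrac -= dt                       # hold at reset during refractoriness
            continue
        V += dt * (V0 - V + In) / tau_m + noise * rng.normal()
        if V >= Vt:
            spikes.append(n * dt)
            V, refrac = Vr, t_refrac
    return np.array(spikes)
```

With a constant 20 mV drive the neuron fires regularly at a rate capped by the 5 ms refractory period; with no input it stays near rest and the 1 mV noise essentially never bridges the 10 mV gap to threshold.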
These neurons make synaptic connections with binaural neurons in a second layer (two presynaptic neurons for each binaural neuron). These coincidence detector neurons are leaky integrate-and-fire neurons with the same equations, but their inputs are synaptic. Spikes arriving at these neurons cause an instantaneous increase W in V (where W is the synaptic weight). Parameter values are given in Table 1.

Estimating location from neural activation. Each location is assigned an assembly of coincidence detector neurons, one in each frequency channel. When a sound is presented to the model, the total firing rate of all neurons in each assembly is computed. The estimated location is the one assigned to the maximally activated assembly. Figure 2 shows the activation of all location-specific assemblies in an example where a sound was presented to the model after learning.

Computing assemblies from HRTFs. In the hardwired model, we defined the location-specific assemblies from the knowledge of HRTFs (the learning algorithm is explained in Section 2.4). For a given location (filter pair (F_L, F_R)) and frequency channel (gammatone filter G), we choose the binaural neuron for which the gains (g_L, g_R) and delays (d_L, d_R) of the two presynaptic monaural neurons minimize the RMS difference

\[
\Delta = \sqrt{\int \big(g_L\,(G * F_L)(t - d_L) - g_R\,(G * F_R)(t - d_R)\big)^2\,dt},
\]

that is, the RMS difference between the inputs of the two neurons for a sound impulse at that location. We also impose max(g_L, g_R) = 1 and max(d_L, d_R) = 0 (so that one delay is null and the other is positive). The RMS difference is minimized when the delays correspond to the maximum of the cross-correlation between the left and right signals,

\[
C(s) = \int (G * F_L)(t)\,(G * F_R)(t + s)\,dt,
\]

so that C(d_R - d_L) is the maximum, and g_R / g_L = C(d_R - d_L) \big/ \int (G * F_R)(t)^2\,dt.

2.4 Learning

In the hardwired model, the knowledge of the full set of HRTFs is used to estimate source location.
But HRTFs are never directly accessible to the auditory system, because they are always convolved with the source signal. They cannot be genetically wired either, because they depend on the geometry of the head (which changes during development). In our model, when HRTFs are not explicitly known, location-specific assemblies are learned by presenting unknown sounds at different locations to the model, where there is one coincidence detector neuron for each choice of frequency, relative delay and relative gain. Relative delays were uniformly chosen between -0.8 ms and 0.8 ms, and relative gains between -8 dB and 8 dB, uniformly on a dB scale. In total, 69 relative delays and 61 relative gains were chosen. With 80 frequency channels, this gives a total of roughly 10^6 neurons in the model. When a sound is presented at a given location, we define the assembly for this location by picking the maximally activated neuron in each frequency channel, as would be expected from a supervised Hebbian learning process with a teacher signal (e.g. visual cues). For practical reasons, we did not implement this supervised learning with spiking models, but supervised learning with spiking neurons has been described in several previous studies (Song and Abbott, 2001; Davison and Frégnac, 2006; Witten et al., 2008).

Figure 2: Activation of all location-specific assemblies (azimuth versus elevation) in response to a sound coming from a particular location, indicated by a black +. The white x shows the model estimate (maximally activated assembly). The mapping from assemblies to locations was learned from a set of sounds.
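The learning and decoding steps (pick the maximally activated coincidence neuron per frequency channel for the presented location, then localise by summed assembly firing rate) reduce to a few lines. This is a toy sketch over made-up rate arrays; the names are ours:

```python
import numpy as np

def learn_assembly(rates):
    """rates: (channels, neurons) firing rates of the coincidence detectors while a
    sound plays at the training location; keep the winner in each frequency channel."""
    return rates.argmax(axis=1)

def estimate_location(rates, assemblies):
    """Total firing rate of each stored assembly; return the maximally activated one."""
    ch = np.arange(rates.shape[0])
    totals = [rates[ch, a].sum() for a in assemblies]
    return int(np.argmax(totals))
```

In a toy setting where the neurons tuned to the presented location fire hardest, the learned assemblies recover the correct location index for new presentations from the same place.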
3 Results When the model is “hardwired” using the explicit knowledge of HRTFs, it can accurately localise a wide range of sounds (Figure 3A-C): for the maximum number of channels we tested (80), we obtained an average error of 2 to 8 degrees for azimuth and 5 to 20 degrees for elevation (depending on sound type), and with more channels this error is likely to further decrease, as it did not appear to have reached an asymptote at 80 channels. Performance is better for sounds with broader spectra, as each channel provides additional information. The model was also able to distinguish between sounds coming from the left and right (with an accuracy of almost 100%), and performed well for the more difficult tasks of distinguishing between front and back (80-85%) and between up and down (70-90%). Figures 3D-F show the results using the learned best delays and gains, using the full training data set (seven sounds presented at each location, each of one second duration) and different test sounds. Performance is comparable to the hardwired model. Average azimuth errors for 80 channels are 4-8 degrees, and elevation errors are 10-27 degrees. Distinguishing left and right is done with close to 100% accuracy, front and back with 75-85% and up and down with 65-90%. Figure 4 shows how the localisation accuracy improves with more training data. With only a single sound of one second duration at each location, the performance is already very good. Increasing the training to three seconds of training data at each location improves the accuracy, but including further training data does not appear to lead to any significant improvement. Although it is close, the performance does not seem to converge to that of the hardwired model, which might be due to a limited sampling of delays and gains (69 relative delays and 61 relative gains), or perhaps to the presence of physiological noise in our models (Goodman and Brette, in press).
Figure 5 shows the properties of neurons in a location-specific assembly: interaural delay (Figure 5A) and interaural gain difference (Figure 5B) for each frequency channel. For this location, the assemblies in the hardwired model and with learning were very similar, which indicates that the learning procedure was indeed able to capture the binaural cues associated with that location. The distributions of delays and gain differences were similar in the hardwired model and with learning. In the hardwired model, these interaural delays and gains correspond to the ITDs and ILDs in fine frequency bands. To each location corresponds a specific frequency-dependent pattern of ITDs and ILDs, which is informative of both azimuth and elevation. In particular, these patterns are different when the location is reversed between front and back (not shown), and this difference is exploited by the model to distinguish between these two cases.
Figure 3: Performance of the hard-wired model (A-C) and with learning (D-F). A, D, Mean error in azimuth estimates as a function of the number of frequency channels (i.e., assembly size) for white noise (red), vowel-consonant-vowel (blue) and musical instruments (green). Front-back reversed locations were considered as having the same azimuth. The channels were selected at random between 150 Hz and 5 kHz and results were averaged over many random choices. B, E, Mean error in elevation estimates. C, F, Categorization performance discriminating left and right (solid), front and back (dashed) and up and down (dotted).
Figure 4: Performance improvement with training (80 frequency channels). A, Average estimation error in azimuth (blue) and elevation (green) as a function of the number of sounds presented at each location during learning (each sound lasts 1 second). The error bars represent 95% confidence intervals. The dashed lines indicate the estimation error in the hardwired model (when HRTFs are explicitly known).
B, Categorization performance vs. number of sounds per location for discriminating left and right (green), front and back (blue) and up and down (red).
Figure 5: Location-specific assembly in the hardwired model and with learning. A, Preferred interaural delay vs. preferred frequency for neurons in an assembly corresponding to one particular location, in the hardwired model (white circles) and with learning (black circles). The colored background shows the distribution of preferred delays in all neurons in the hardwired model. B, Interaural gain difference vs. preferred frequency for the same assemblies.
4 Discussion The sound produced by a source propagates to the ears according to linear laws. Thus the ears receive two differently filtered versions of the same signal, which induce a location-specific structure in the binaural stimulus. When binaural signals are transformed by a heterogeneous population of neurons, this structure is mapped to synchrony patterns, which are location-specific. We designed a simple spiking neuron model which exploits this property to estimate the location of sound sources in a way that is independent of the source signal. In the model, each location activates a specific assembly. We showed that the mapping between assemblies and locations can be directly learned in a supervised way from the presentation of a set of sounds at different locations, with no previous knowledge of the HRTFs or the sounds. With 80 frequency channels, we found that 1 second of training data per location was enough to estimate the azimuth of a new sound with mean error 6 degrees and the elevation with error 18 degrees. Humans can learn to localise sound sources when their acoustical cues change, for example when molds are inserted into their ears (Hofman et al., 1998; Zahorik et al., 2006).
Learning a new mapping can take a long time (several weeks in the first study), which is consistent with the idea that the new mapping is learned from exposure to sounds from known locations. Interestingly, the previous mapping is instantly recovered when the ear molds are removed, meaning that the representations of the two acoustical environments do not interfere. This is consistent with our model, in which two acoustical environments would be represented by two possibly overlapping sets of neural assemblies. In our model, we assumed that the receptive field of monaural neurons can be modeled as a band-pass filter with various gains and delays. Differences in input gains could simply arise from differences in membrane resistance, or in the number and strength of the synapses made by auditory nerve fibers. Delays could arise from many causes: axonal delays (either presynaptic or postsynaptic), cochlear delays (Joris et al., 2006), inhibitory delays (Brand et al., 2002). The distribution of best delays of the binaural neurons in our model reflects the distribution of ITDs in the acoustical environment. This contradicts the observation in many species that the best delays are always smaller than half the characteristic period, i.e., they are within the π-limit (Joris and Yin, 2007). However, we checked that the model performed almost equally well with this constraint (Goodman and Brette, in press), which is not very surprising since best delays above the π-limit are mostly redundant. In small mammals (guinea pigs, gerbils), it has been shown that the best phases of binaural neurons in the MSO and IC are in fact even more constrained, since they are scattered around ±π/4, in contrast with birds (e.g. barn owl) where the best phases are continuously distributed (Wagner et al., 2007).
However, in larger mammals such as cats, best IPDs in the MSO are more continuously distributed (Yin and Chan, 1990), with a larger proportion close to 0 (Figure 18 in Yin and Chan, 1990). It has not been measured in humans, but the same optimal coding theory that predicts the discrete distribution of phases in small mammals predicts that best delays should be continuously distributed above 400 Hz (80% of the frequency channels in our model). In addition, psychophysical results also imply that humans can estimate both the azimuth and elevation of low-pass filtered sound sources (< 3 kHz) (Algazi et al., 2001), which only contain binaural cues. This contradicts the two-channel model (best delays at ±π/4) and agrees with ours (including the fact that elevation could only be estimated away from the median plane in these experiments). Our model is conceptually similar to a recent signal processing method (with no neural implementation) to localize sound sources in the horizontal plane (Macdonald, 2008), where coincidence detection is replaced by Pearson correlation between the two transformed monaural broadband signals (no filterbank). However, that method requires explicit knowledge of the HRTFs, so that it cannot be directly learned from natural exposure to sounds. The HRTFs used in our virtual acoustic environment were recorded at a constant distance, so that we could only test the model performance in estimating the azimuth and elevation of a sound source. However, in principle, it should also be able to estimate the distance when the source is close. It should also apply equally well to non-anechoic environments, because our model only relies on the linearity of sound propagation. However, a difficult task, which we have not addressed, is to locate sounds in a new environment, because reflections would change the binaural cues and therefore the location-specific assemblies.
One possibility would be to isolate the direct sound from the reflections, but this requires additional mechanisms, which probably underlie the precedence effect (Litovsky et al., 1999).
References
Algazi, V. R., C. Avendano, and R. O. Duda (2001, March). Elevation localization and head-related transfer function analysis at low frequencies. The Journal of the Acoustical Society of America 109(3), 1110–1122.
Brand, A., O. Behrend, T. Marquardt, D. McAlpine, and B. Grothe (2002). Precise inhibition is essential for microsecond interaural time difference coding. Nature 417(6888), 543.
Colburn, H. S. (1973, December). Theory of binaural interaction based on auditory-nerve data. I. General strategy and preliminary results on interaural discrimination. The Journal of the Acoustical Society of America 54(6), 1458–1470.
Davison, A. P. and Y. Frégnac (2006, May). Learning cross-modal spatial transformations through spike timing-dependent plasticity. J. Neurosci. 26(21), 5604–5615.
Gaik, W. (1993, July). Combined evaluation of interaural time and intensity differences: Psychoacoustic results and computer modeling. The Journal of the Acoustical Society of America 94(1), 98–110.
Gerstner, W., R. Kempter, J. L. van Hemmen, and H. Wagner (1996). A neuronal learning rule for submillisecond temporal coding. Nature 383(6595), 76.
Glasberg, B. R. and B. C. Moore (1990, August). Derivation of auditory filter shapes from notched-noise data. Hearing Research 47(1-2), 103–138. PMID: 2228789.
Goodman, D. F. M. and R. Brette (2009). The Brian simulator. Frontiers in Neuroscience 3(2), 192–197.
Goodman, D. F. M. and R. Brette (in press). Spike-timing-based computation in sound localization. PLoS Comp. Biol.
Harper, N. S. and D. McAlpine (2004). Optimal neural population coding of an auditory spatial cue. Nature 430(7000), 682–686.
Hofman, P. M., J. G. V. Riswick, and A. J. V. Opstal (1998). Relearning sound localization with new ears. Nat Neurosci 1(5), 417–421.
Jeffress, L. A. (1948, February). A place theory of sound localization. Journal of Comparative and Physiological Psychology 41(1), 35–39. PMID: 18904764.
Joris, P. and T. C. T. Yin (2007, February). A matter of time: internal delays in binaural processing. Trends in Neurosciences 30(2), 70–78. PMID: 17188761.
Joris, P. X., B. V. de Sande, D. H. Louage, and M. van der Heijden (2006). Binaural and cochlear disparities. Proceedings of the National Academy of Sciences 103(34), 12917.
Lindemann, W. (1986, December). Extension of a binaural cross-correlation model by contralateral inhibition. I. Simulation of lateralization for stationary signals. The Journal of the Acoustical Society of America 80(6), 1608–1622.
Litovsky, R. Y., H. S. Colburn, W. A. Yost, and S. J. Guzman (1999, October). The precedence effect. The Journal of the Acoustical Society of America 106(4), 1633–1654.
Liu, J., H. Erwin, S. Wermter, and M. Elsaid (2008). A biologically inspired spiking neural network for sound localisation by the inferior colliculus. In Artificial Neural Networks - ICANN 2008, pp. 396–405.
Lorenzi, C., F. Berthommier, F. Apoux, and N. Bacri (1999, October). Effects of envelope expansion on speech recognition. Hearing Research 136(1-2), 131–138.
Macdonald, J. A. (2008, June). A localization algorithm based on head-related transfer functions. The Journal of the Acoustical Society of America 123(6), 4290–4296. PMID: 18537380.
Reed, M. C. and J. J. Blum (1990, September). A model for the computation and encoding of azimuthal information by the lateral superior olive. The Journal of the Acoustical Society of America 88(3), 1442–1453. PMID: 2229677.
Song, S. and L. F. Abbott (2001, October). Cortical development and remapping through spike timing-dependent plasticity. Neuron 32(2), 339–350.
Wagner, H., A. Asadollahi, P. Bremen, F. Endler, K. Vonderschen, and M. von Campenhausen (2007). Distribution of interaural time difference in the barn owl's inferior colliculus in the low- and high-frequency ranges. J. Neurosci. 27(15), 4191–4200.
Witten, I. B., E. I. Knudsen, and H. Sompolinsky (2008, August). A Hebbian learning rule mediates asymmetric plasticity in aligning sensory representations. J Neurophysiol 100(2), 1067–1079.
Yin, T. C. and J. C. Chan (1990). Interaural time sensitivity in medial superior olive of cat. J Neurophysiol 64(2), 465–488.
Zahorik, P., P. Bangayan, V. Sundareswaran, K. Wang, and C. Tam (2006, July). Perceptual recalibration in human sound localization: Learning to remediate front-back reversals. The Journal of the Acoustical Society of America 120(1), 343–359.
Zhou, Y., L. H. Carney, and H. S. Colburn (2005, March). A model for interaural time difference sensitivity in the medial superior olive: Interaction of excitatory and inhibitory synaptic inputs, channel dynamics, and cellular morphology. J. Neurosci. 25(12), 3046–3058.
Dynamic Infinite Relational Model for Time-varying Relational Data Analysis Katsuhiko Ishiguro Tomoharu Iwata Naonori Ueda NTT Communication Science Laboratories Kyoto, 619-0237 Japan {ishiguro,iwata,ueda}@cslab.kecl.ntt.co.jp Joshua Tenenbaum MIT Boston, MA. jbt@mit.edu Abstract We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions and split & merge of relation clusters like communities in social networks. Our proposed model abstracts observed time-varying object-object relationships into relationships between object clusters. We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate the number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets. 1 Introduction Analysis of “relational data”, such as the hyperlink structure on the Internet, friend links on social networks, or bibliographic citations between scientific articles, is useful in many respects. Many statistical models for relational data have been presented [10, 1, 18]. The stochastic block model (SBM) [11] and the infinite relational model (IRM) [8] partition objects into clusters so that the relations between clusters abstract the relations between objects well. SBM requires specifying the number of clusters in advance, while the IRM automatically estimates the number of clusters. Similarly, the mixed membership model [2] associates each object with multiple clusters (roles) rather than a single cluster. These models treat the relations as static information. However, a large amount of relational data in the real world is time-varying. For example, hyperlinks on the Internet are not stationary since links disappear while new ones appear every day. Human relationships in a company sometimes change drastically through the splitting of an organization or the merging of some groups due to e.g.
Mergers and Acquisitions. One of our modeling goals is to detect these sudden changes in network structure that occur over time. Recently some researchers have investigated the dynamics in relational data. Tang et al. [13] proposed a spectral clustering-based model for multi-mode, time-evolving relations. Yang et al. [16] developed the time-varying SBM. They assumed an HMM-like transition probability matrix, which governs all the cluster assignments of objects for all time steps. This model has only one transition probability matrix for the entire data. Thus, it cannot represent more complicated time variations such as split & merge of clusters that only occur temporarily. Fu et al. [4] proposed a time-series extension of the mixed membership model. [4] assumes a continuous world view: roles follow a mixed membership structure; model parameters evolve continuously in time. This model is very general for time series relational data modeling, and is good for tracking gradual and continuous changes of the relationships. Some works in bioinformatics [17, 5] have also adopted similar strategies. However, a continuous model approach does not necessarily best capture sudden transitions of the relationships we are interested in. In addition, previous models assume the number of clusters is fixed and known, which is difficult to determine a priori. In this paper we propose yet another time-varying relational data model that deals with temporal and dynamic changes of cluster structures such as additions, deletions and split & merge of clusters. Instead of the continuous world view of [4], we assume a discrete structure: distinct clusters with discrete transitions over time, allowing for birth, death and split & merge dynamics. More specifically, we extend IRM for time-varying relational data by using a variant of the infinite HMM (iHMM) [15, 3]. By incorporating the idea of iHMM, our model is able to infer clusters of objects without specifying a number of clusters in advance.
Furthermore, we assume multiple transition probabilities that are dependent on time steps and clusters. This specific form of iHMM enables the model to represent time-sensitive dynamic properties such as split & merge of clusters. Inference is performed efficiently with the slice sampler. 2 Infinite Relational Model We first explain the infinite relational model (IRM) [8], which can estimate the number of hidden clusters from relational data. In the IRM, a Dirichlet process (DP) is used as a prior for clusters of an unknown number, and is denoted as DP(γ, G0), where γ > 0 is a parameter and G0 is a base measure. We write G ∼ DP(γ, G0) when a distribution G(θ) is sampled from the DP. In this paper, we implement the DP by using a stick-breaking process [12], which is based on the fact that G is represented as an infinite mixture of θs: G(θ) = ∑_{k=1}^∞ βk δ_{θk}(θ), θk ∼ G0. β = (β1, β2, . . .) is a mixing ratio vector with infinite elements whose sum equals one, constructed in a stochastic way:

βk = vk ∏_{l=1}^{k−1} (1 − vl), vk ∼ Beta(1, γ). (1)

Here vk is drawn from a Beta distribution with a parameter γ. The IRM is an application of the DP to relational data. Let us assume a binary two-place relation on the set of objects D = {1, 2, . . . , N} as D × D → {0, 1}. For simplicity, we only discuss a two-place relation on the identical domain (D × D). The IRM divides the set of N objects into multiple clusters based on the observed relational data X = {xi,j ∈ {0, 1}; 1 ≤ i, j ≤ N}. The IRM is able to infer the number of clusters at the same time because it uses the DP as a prior distribution of the cluster partition. Observation xi,j ∈ {0, 1} denotes the existence of a relation between objects i, j ∈ {1, 2, . . . , N}. If there is (not) a relation between i and j, then xi,j = 1 (0). We allow asymmetric relations xi,j ≠ xj,i throughout the paper. The probabilistic generative model (Fig.
1(a)) of the IRM is as follows:

β | γ ∼ Stick(γ) (2)
zi | β ∼ Multinomial(β) (3)
ηk,l | ξ, ψ ∼ Beta(ξ, ψ) (4)
xi,j | Z, H ∼ Bernoulli(ηzi,zj) (5)

Here, Z = {zi}_{i=1}^N and H = {ηk,l}_{k,l=1}^∞. In Eq. (2), “Stick” is the stick-breaking process (Eq. (1)). We sample a cluster index of the object i, zi = k, k ∈ {1, 2, . . .}, using β as in Eq. (3). In Eq. (4), ηk,l is the strength of a relation between the objects in clusters k and l. Generating the observed relational data xi,j follows Eq. (5), conditioned on the cluster assignments Z and the strengths H. 3 Dynamic Infinite Relational Model (dIRM) 3.1 Time-varying relational data First, we define the time-varying relational data considered in this paper. Time-varying relational data X have three subscripts t, i, and j: X = {xt,i,j ∈ {0, 1}}, where i, j ∈ {1, 2, . . . , N}, t ∈ {1, 2, . . . , T}. xt,i,j = 1 (0) indicates that there is (not) an observed relationship between objects i and j at time step t. T is the number of time steps, and N is the number of objects. We assume that there is no relation between objects belonging to different time steps t and t′. The time-varying relational data X is a set of T (static) relational data for T time steps.
Figure 1: Graphical model of (a) IRM (Eqs. 2-5), (b) “tIRM” (Eqs. 7-10), and (c) dIRM (Eqs. 11-15). Circle nodes denote variables, square nodes are constants and shaded nodes indicate observations.
It is natural to assume that every object transits between different clusters along with the time evolution. Observing several real-world time-varying relational datasets, we assume several properties of transitions, as follows:
• P1. Cluster assignments in consecutive time steps have higher correlations.
• P2. Time evolutions of clusters are not stationary nor uniform.
• P3. The number of clusters is time-varying and unknown a priori.
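A truncated simulation of Eqs. (1)-(5) can make the IRM generative process concrete. This is our own sketch; the truncation level is an approximation we introduce, not part of the model:

```python
import numpy as np

def stick_breaking(gamma, truncation, rng):
    """Eq. (1): v_k ~ Beta(1, gamma), beta_k = v_k * prod_{l<k} (1 - v_l),
    truncated to a finite number of sticks."""
    v = rng.beta(1.0, gamma, size=truncation)
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # prod_{l<k}(1 - v_l)
    return v * leftover

def sample_irm(n_objects, gamma, xi, psi, truncation, rng):
    """Draw (Z, H, X) from the truncated IRM generative model, Eqs. (2)-(5)."""
    beta = stick_breaking(gamma, truncation, rng)                     # Eq. (2), truncated
    z = rng.choice(truncation, size=n_objects, p=beta / beta.sum())   # Eq. (3)
    eta = rng.beta(xi, psi, size=(truncation, truncation))            # Eq. (4)
    X = (rng.random((n_objects, n_objects)) < eta[z][:, z]).astype(int)  # Eq. (5)
    return z, eta, X

z, eta, X = sample_irm(n_objects=20, gamma=1.0, xi=1.0, psi=1.0,
                       truncation=30, rng=np.random.default_rng(1))
```

The indexing `eta[z][:, z]` builds the N×N matrix of pairwise strengths η_{zi,zj}, so every x_{i,j} is drawn with its own cluster-pair probability.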
P1 is a common assumption for many kinds of time series data, not limited to relational data. For example, a member of a firm community on SNSs will belong to the same community for a long time. A hyperlink structure in a news website may alter because of breaking news, but most of the site does not change as rapidly every minute. P2 tries to model occasional and drastic changes from frequent and minor modifications in relational networks. Such unstable changes are observed elsewhere. For example, human relationships in companies will evolve every day, but a merger of departments sometimes brings about drastic changes. On an SNS, a user community for the upcoming Olympic games may exist for a limited time: it will not last years after the games end. This will cause an addition and deletion of a user cluster (community). P3 is indispensable to track such changes of clusters. 3.2 Naive extensions of IRM We attempt to modify the IRM to satisfy these properties. We first consider several straightforward solutions based on the IRM for analyzing time-varying relational data. The simplest way is to convert time-varying relational data X into “static” relational data X̃ = {x̃i,j} and apply the IRM to X̃. For example, we can generate X̃ as follows:

x̃i,j = 1 if (1/T) ∑_{t=1}^T xt,i,j > σ, and 0 otherwise, (6)

where σ denotes a threshold. This solution cannot represent the time changes of clustering because it assumes the same clustering results for all the time steps (z1,i = z2,i = · · · = zT,i). We may separate the time-varying relational data X into a series of time step-wise relational data Xt and apply the IRM to each Xt. In this case, we will have a different clustering result for each time step, but the analysis ignores the dependency of the data over time. Another solution is to extend the object assignment variable zi to be time-dependent zt,i. The resulting “tIRM” model is described as follows (Fig.
1(b)):

β | γ ∼ Stick(γ) (7)
zt,i | β ∼ Multinomial(β) (8)
ηk,l | ξ, ψ ∼ Beta(ξ, ψ) (9)
xt,i,j | Zt, H ∼ Bernoulli(ηzt,i,zt,j) (10)

Here, Zt = {zt,i}_{i=1}^N. Since β is shared over all time steps, we may expect that the clustering results between time steps will have higher correlations. However, this model assumes that the zt,i are conditionally independent of each other for all t given β. This implies that the tIRM is not suitable for modeling time evolutions, since the order of time steps is ignored in the model. 3.3 dynamic IRM To address the three conditions P1-P3 above, we propose a new probabilistic model called the dynamic infinite relational model (dIRM). The generative model is given below:

β | γ ∼ Stick(γ) (11)
πt,k | α0, κ, β ∼ DP(α0 + κ, (α0β + κδk)/(α0 + κ)) (12)
zt,i | zt−1,i, Πt ∼ Multinomial(πt,zt−1,i) (13)
ηk,l | ξ, ψ ∼ Beta(ξ, ψ) (14)
xt,i,j | Zt, H ∼ Bernoulli(ηzt,i,zt,j) (15)

Here, Πt = {πt,k : k = 1, . . . , ∞}. A graphical model of the dIRM is presented in Fig. 1(c). β in Eq. (11) represents time-average memberships (mixing ratios) to clusters. The newly introduced πt,k = (πt,k,1, πt,k,2, . . . , πt,k,l, . . .) in Eq. (12) is a transition probability that an object remaining in cluster k ∈ {1, 2, . . .} at time t − 1 will move to cluster l ∈ {1, 2, . . .} at time t. Because of the DP, this transition probability is able to handle infinite hidden states like the iHMM [14]. The DP used in Eq. (12) has an additional term κ > 0, which was introduced by Fox et al. [3]. δk is a vector whose elements are zero except the kth element, which is one. Because the base measure in Eq. (12) is biased by κ and δk, the kth element of πt,k prefers to take a larger value than other elements. This implies that this DP encourages the self-transitions of objects, and we can achieve the property P1 for time-varying relational data. One difference from conventional iHMMs [14, 3] lies in P2, which is achieved by making the transition probability π time-dependent.
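In a truncated, finite-K approximation, the sticky prior of Eq. (12) reduces to a Dirichlet whose concentration parameter adds κ to the self-transition entry. A sketch under that simplification (ours, not the authors' code):

```python
import numpy as np

def sample_transition_matrix(beta, alpha0, kappa, rng):
    """For each current cluster k, draw
    pi_{t,k} ~ Dirichlet(alpha0 * beta + kappa * delta_k)
    (finite-K stand-in for Eq. (12)). The kappa * delta_k term biases the
    kth (self-transition) entry upward, encouraging objects to stay put."""
    K = len(beta)
    pi = np.empty((K, K))
    for k in range(K):
        pi[k] = rng.dirichlet(alpha0 * np.asarray(beta) + kappa * np.eye(K)[k])
    return pi

# With kappa much larger than alpha0 * beta, the diagonal dominates.
pi = sample_transition_matrix(beta=[0.4, 0.3, 0.3], alpha0=2.0, kappa=20.0,
                              rng=np.random.default_rng(2))
```

Each row of `pi` is one transition distribution; drawing a fresh matrix at every time step t is what gives the dIRM its time-dependent (P2) transitions.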
πt,k is sampled for every time step t; thus, we can model time-varying patterns of transitions, including additions, deletions and split & merge of clusters as extreme cases. These changes happen only temporarily; therefore, time-dependent transition probabilities are indispensable for our purpose. Note that the transition probability is also dependent on the cluster index k, as in conventional iHMMs. Also, the dIRM can automatically determine the number of clusters thanks to the DP: this enables us to hold P3. Equation (13) generates a cluster assignment for the object i at time t, based on the cluster where the object was previously (zt−1,i) and its transition probability π. Equation (14) generates a strength parameter η for the pair of clusters k and l; then we obtain the observed sample xt,i,j in Eq. (15). The difference between iHMMs and the dIRM is two-fold. One is the time-dependent transition probability of the dIRM discussed above. The other is that iHMMs have one hidden state sequence s1:t to be inferred, while the dIRM needs to estimate multiple hidden state sequences z1:t,i given one time sequence observation. Thus, we may interpret the dIRM as an extension of the iHMM which has N (= the number of objects) hidden sequences to handle relational data. 4 Inference We use a slice sampler [15], which enables fast and efficient sampling of the sequential hidden states. The slice sampler introduces auxiliary variables U = {ut,i}. Given U, the number of clusters can be reduced to a finite number during the inference, which enables efficient sampling of the variables. 4.1 Sampling parameters First, we explain the sampling of an auxiliary variable ut,i. We assume the prior of ut,i is a uniform distribution. Also, we define the joint distribution of u, z, and x:

p(xt,i,j, ut,i, ut,j, zt−1:t,i, zt−1:t,j) = I(ut,i < πt,zt−1,i,zt,i) I(ut,j < πt,zt−1,j,zt,j) η_{zt,i,zt,j}^{xt,i,j} (1 − η_{zt,i,zt,j})^{1−xt,i,j}.
(16) Here, I(·) is 1 if the predicate holds, and zero otherwise. Using Eq. (16), we can derive the posterior of ut,i as follows:

ut,i ∼ Uniform(0, πt,zt−1,i,zt,i). (17)

Next, we explain the sampling of an object assignment variable zt,i. We define the following message variable p:

pt,i,k = p(zt,i = k | X1:t, U1:t, Π, H, β). (18)

Sampling of zt,i is similar to the forward-backward algorithm for the original HMM. First, we compute the above message variables from t = 1 to t = T (forward filtering). Next, we sample zt,i from t = T to t = 1 using the computed message variables (backward sampling). In forward filtering, we compute the following equation from t = 1 to t = T:

pt,i,k ∝ p(xt,i,i | zt,i = k, H) ∏_{j≠i} p(xt,i,j | zt,i = k, H) p(xt,j,i | zt,i = k, H) ∑_{l: ut,i < πt,l,k} pt−1,i,l. (19)

Note that the summation is conditioned by ut,i. The number of ls (cluster indices) that hold this condition is limited to a certain finite number. Thus, we can evaluate the above equation. In backward sampling, we sample zt,i from t = T to t = 1 from the equation below:

p(zt,i = k | zt+1,i = l) ∝ pt,i,k πt+1,k,l I(ut+1,i < πt+1,k,l). (20)

Because of I(u < π), the values of the cluster indices k are limited to a finite set. Therefore, the variety of sampled zt,i will be limited to a certain finite number K given U. Given U and Z, we have a finite number K of realized clusters. Thus, computing the posteriors of πt,k and ηk,l becomes easy and straightforward. First, β is assumed to be a K + 1-dimensional vector (mixing ratios of unrepresented clusters are aggregated in βK+1 = 1 − ∑_{k=1}^K βk). mt,k,l denotes the number of objects i such that zt−1,i = k and zt,i = l. Also, let us denote the number of xt,i,j such that zt,i = k and zt,j = l as Nk,l. Similarly, nk,l denotes the number of xt,i,j such that zt,i = k, zt,j = l and xt,i,j = 1. Then we obtain the following posteriors:

πt,k ∼ Dirichlet(α0β + κδk + mt,k). (21)
ηk,l ∼ Beta(ξ + nk,l, ψ + Nk,l − nk,l).
(22)

mt,k is a K + 1-dimensional vector whose lth element is mt,k,l (mt,k,K+1 = 0). We omit the derivation of the posterior of β since it is almost the same as that of Fox et al. [3]. 4.2 Sampling hyperparameters Sampling hyperparameters is important to obtain the best results. This could normally be done by putting vague prior distributions [14]. However, it is difficult to evaluate the precise posteriors for some hyperparameters [3]. Instead, we reparameterize and sample a hyperparameter in terms of a ∈ (0, 1) [6]. For example, if the hyperparameter γ is assumed to be Gamma-distributed, we convert γ by a = γ/(1 + γ). Sampling a can be achieved from a uniform grid on (0, 1). We compute (unnormalized) posterior probability densities at several as and choose one to update the hyperparameter.
Figure 2: Example of real-world datasets. (a) IOtables data, observations at t = 1, (b) IOtables data, observations at t = 5, (c) Enron data, observations at t = 2, and (d) Enron data, observations at t = 10.
5 Experiments Performance of the dIRM is compared with the original IRM [8] and its naive extension tIRM (described in Eqs. (7-10)). To apply the IRM to time-varying relational data, we apply Eq. (6) to X with a threshold σ = 0.5. The difference between the tIRM (Eqs. (7-10)) and the dIRM is that the tIRM does not incorporate the dependency between successive time steps while the dIRM does. Hyperparameters were estimated simultaneously in all experiments. 5.1 Datasets and measurements We prepared two synthetic datasets (Synth1 and Synth2). To synthesize datasets, we first determined the number of time steps T, the number of clusters K, and the number of objects N.
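Two pieces of the Section 4 sampler can be sketched generically: the forward filtering-backward sampling recursions of Eqs. (19)-(20) for a single object (here a finite-K version without the slice-variable truncation), and the reparameterized grid sampling of Section 4.2. Function names, the likelihood-table abstraction and the grid size are our assumptions:

```python
import numpy as np

def ffbs(log_lik, pi0, trans, rng):
    """Forward filtering-backward sampling for one object's state sequence:
    a finite-K analogue of Eqs. (19)-(20). log_lik[t, k] is the log emission
    probability of the data at time t given state k."""
    T, K = log_lik.shape
    p = np.empty((T, K))
    p[0] = pi0 * np.exp(log_lik[0])
    p[0] /= p[0].sum()
    for t in range(1, T):                       # forward filtering (cf. Eq. (19))
        p[t] = np.exp(log_lik[t]) * (p[t - 1] @ trans)
        p[t] /= p[t].sum()
    z = np.empty(T, dtype=int)                  # backward sampling (cf. Eq. (20))
    z[T - 1] = rng.choice(K, p=p[T - 1])
    for t in range(T - 2, -1, -1):
        w = p[t] * trans[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z

def grid_sample_hyper(log_post, n_grid, rng):
    """Section 4.2: reparameterize gamma as a = gamma/(1+gamma), evaluate an
    unnormalized log posterior on a uniform grid over (0, 1), and draw one
    grid point proportionally."""
    a = (np.arange(n_grid) + 0.5) / n_grid
    gamma = a / (1.0 - a)                       # inverse of a = gamma/(1+gamma)
    logs = np.array([log_post(g) for g in gamma])
    w = np.exp(logs - logs.max())               # stabilized unnormalized weights
    return gamma[rng.choice(n_grid, p=w / w.sum())]

rng = np.random.default_rng(3)
# Near-deterministic emissions recover the underlying state sequence.
true = np.array([0, 2, 1, 1, 0])
log_lik = np.full((5, 3), -50.0)
log_lik[np.arange(5), true] = 0.0
z = ffbs(log_lik, np.full(3, 1 / 3), np.full((3, 3), 1 / 3), rng)
# A posterior sharply peaked near gamma = 1 yields a draw close to 1.
g = grid_sample_hyper(lambda g: -1000.0 * (g - 1.0) ** 2, n_grid=100, rng=rng)
```

The slice variables of the actual sampler only shrink the sums and supports in these recursions to finite sets; the control flow is the same.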
Next, we manually assigned zt,i in order to obtain cluster split & merge, additions, and deletions. After obtaining Z, we defined the connection strengths between clusters H = {ηk,l}. In this experiment, each ηk,l may take one of two values: η = 0.1 (weakly connected) or η = 0.9 (strongly connected). Observation X was randomly generated according to Z and H. Synth1 is smaller (N = 16) and stable, while Synth2 is much larger (N = 54) and objects actively transit between clusters. Two real-world datasets were also collected. The first one is the National Input-Output Tables for Japan (IOtables) provided by the Statistics Bureau of the Ministry of Internal Affairs and Communications of Japan. IOtables summarize the transactions of goods and services between industrial sectors. We used an inverted coefficient matrix, which is a part of the IOtables. Each element ei,j of the matrix indicates that one unit of demand in the jth sector invokes ei,j units of production in the ith sector. We generated xi,j from ei,j by binarization: setting xi,j = 1 if ei,j exceeds the average, and setting xi,j = 0 otherwise. We collected data from 1985, 1990, 1995, 2000, and 2005, at a 32-sector resolution. Thus we obtain a time-varying relational dataset with N = 32 and T = 5. The other real-world dataset is the Enron e-mail dataset [9], used in many studies including [13, 4]. We extracted e-mails sent in 2001. The number of time steps was T = 12, so the dataset was divided into monthly transactions. The full dataset contained N = 151 persons. xt,i,j = 1 (0) if there is (not) an e-mail sent from i to j at time (month) t. We also generated a smaller dataset (N = 68) by excluding those who sent few e-mails, for convenience. Quantitative measurements were computed with this smaller dataset. Fig. 2 presents examples of the IOtables dataset ((a),(b)) and the Enron dataset ((c),(d)). The IOtables dataset is characterized by stable relationships, compared to the Enron dataset.
In the Enron dataset, the amount of communication rapidly increases after the media reported on the Enron scandals. We used three evaluation measures. The first is the Rand index, which computes the similarity between the true and estimated clustering results [7]; it takes its maximum value (1) if the two clusterings match completely. We computed the Rand index between the ground truth Zt and the estimate ˆZt for each time step, and averaged the indices over the T steps. The second is the error in the estimated number of clusters: the difference in the number of realized clusters was computed between Zt and ˆZt, and these errors were averaged over the T steps. We calculated these two measures for the synthetic datasets. The third measure is an (approximate) test-data log likelihood. For all datasets, we generated noisy datasets in which some observation values are inverted. The number of inverted elements was kept small so that the inversions would not affect the global clustering results: the ratio of inverted elements over all elements was set to 5% for the two synthetic datasets, 1% for the IOtables data, and 0.5% for the Enron data. We ran inference on the noisy datasets and computed the likelihood that the "inverted observations take their real values". We used the averaged log likelihood per observation as the measurement.

Table 1: Computed Rand indices, numbers of erroneous clusters, and averaged test-data log likelihoods.

           Rand index            # of erroneous clusters   Test log likelihood
Data       IRM    tIRM   dIRM    IRM    tIRM   dIRM        IRM     tIRM    dIRM
Synth1     0.796  0.946  0.982   1.00   0.20   0.13        -0.542  -0.508  -0.505
Synth2     0.433  0.734  0.847   3.00   0.98   0.65        -0.692  -0.393  -0.318
IOtables   -      -      -       -      -      -           -0.354  -0.358  -0.291
Enron      -      -      -       -      -      -           -0.120  -0.135  -0.106

5.2 Results

First, we present the quantitative results. Table 1 lists the computed Rand indices, the errors in the estimated number of clusters, and the test-data log likelihoods. The dIRM outperformed the other models on all datasets for all measures.
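The per-step Rand-index averaging used in the evaluation above can be implemented with a simple pair-counting sketch (our illustration, not the authors' code):

```python
import numpy as np
from itertools import combinations

def rand_index(z_true, z_est):
    """Pair-counting Rand index between two flat clusterings (1 = perfect match)."""
    pairs = list(combinations(range(len(z_true)), 2))
    # A pair agrees if both clusterings place it in the same cluster, or both separate it.
    agree = sum((z_true[i] == z_true[j]) == (z_est[i] == z_est[j]) for i, j in pairs)
    return agree / len(pairs)

def mean_rand_index(Z_true, Z_est):
    """Average the per-step Rand index over the T time steps, as in the evaluation above."""
    return float(np.mean([rand_index(zt, ze) for zt, ze in zip(Z_true, Z_est)]))

# Invariant to label permutations:
score = mean_rand_index([[0, 0, 1, 1]], [[1, 1, 0, 0]])
```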
In particular, the dIRM showed good results on the Synth2 and Enron datasets, where the changes in relationships are highly dynamic and unstable. On the other hand, the dIRM did not achieve a remarkable improvement over the tIRM on the Synth1 dataset, whose temporal changes are small. We can therefore say that the dIRM is superior in modeling time-varying relational data, especially dynamic data. Next, we evaluate the results on the real-world datasets qualitatively. Figure 3 shows the results for the IOtables data: panel (a) illustrates the estimated ηk,l using the dIRM, and panel (b) presents the time evolution of the cluster assignments. The dIRM obtained some reasonable and stable industrial clusters, as shown in Fig. 3 (b). For example, the dIRM groups the machine industries into cluster 5, and infrastructure-related industries are grouped into cluster 13. We believe that the self-transition bias κ helps the model find these stable clusters. The relationships between clusters presented in Fig. 3 (a) are also intuitively understandable. For example, demand in the machine industries (cluster 5) causes large production in the "iron and steel" sector (cluster 7). The "commerce & trade" and "enterprise services" sectors (cluster 10) connect strongly to almost all the other sectors. There are some interesting cluster transitions. First, look at the "finance, insurance" sector. At t = 1, this sector belongs to cluster 14. However, the sector transits to cluster 1 afterwards, which does not connect strongly with clusters 5 and 7. This may indicate a shift of money away from these matured industries. Next, the "transport" sector enlarges its role in the market by moving to cluster 14, which causes the deletion of cluster 8. Finally, note the transitions of the "telecom, broadcast" sector. From 1985 to 2000, this sector is in cluster 9, which is rather independent of the other clusters.
In 2005, however, that cluster separated, and the telecom industry merged into cluster 1, an influential cluster. This result is consistent with the rapid growth of ICT technologies and their large impact on the world. Finally, we discuss the results on the Enron dataset. Because this e-mail dataset contains many individuals' names, we refrain from cataloging the object assignments as we did for the IOtables dataset. Figure 4 (a) tells us that clusters 1-7 are relatively separated communities. For example, the members of cluster 4 belong to a restricted business domain such as energy, gas, or pipeline businesses. Cluster 5 is a community of financial and monetary departments, and cluster 7 is a community of managers such as vice presidents and CFOs. One interesting result from the dIRM is the discovery of cluster 9. This cluster notably sends many messages to the other clusters, especially to the management cluster 7. Only three objects belong to this cluster throughout the time steps, but these members were the key persons at the time.
Figure 3: (a) Example of the estimated ηk,l (strength of the relationship between clusters k and l) for the IOtables data by the dIRM. (b) Time-varying cluster assignments for selected clusters (for example, the machinery, iron and steel, finance, and telecom sectors) over the five time steps 1985-2005 by the dIRM.

Figure 4: (a) Example of the estimated ηk,l for the Enron dataset using the dIRM.
(b) Number of objects belonging to each cluster at each time step for the Enron dataset using the dIRM.

First, the CEO of Enron America stayed in cluster 9 in May (t = 5). Next, the founder of Enron was a member of the cluster in August (t = 8); the CEO of Enron resigned that month, and the founder made an announcement to calm the public. Finally, the COO belonged to the cluster in October (t = 10), the month in which newspapers reported the accounting violations. Fig. 4 (b) presents the time evolution of the cluster memberships, i.e. the number of objects belonging to each cluster at each time step. In contrast to the IOtables dataset, the Enron e-mail dataset is very dynamic, as can be seen from Fig. 2 (c), (d). For example, the volume of cluster 6 (the inactive cluster) decreases as time evolves. This reflects the fact that the transactions between employees increase as the scandal is progressively revealed. By contrast, cluster 4 is stable in membership; we can imagine that the energy and gas group is a dense and strong community. The same is true for cluster 5.

6 Conclusions

We proposed a new time-varying relational data model that is able to represent dynamic changes of cluster structures. The dynamic IRM (dIRM) incorporates a variant of the iHMM and represents time-sensitive dynamic properties such as splits & merges of clusters. We presented the generative model of the dIRM and an inference algorithm based on a slice sampler. Experiments with synthetic and real-world time-series datasets showed that the proposed model improves the precision of time-varying relational data analysis. We will apply this model to other datasets to study its capability and reliability. We are also interested in modifying the dIRM to deal with multi-valued observation data.

References

[1] A. Clauset, C. Moore, and M. E. J. Newman. Hierarchical structure and the prediction of missing links in networks. Nature, 453:98-101, 2008.
[2] E. Erosheva, S. Fienberg, and J. Lafferty. Mixed-membership models of scientific publications. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 101(Suppl 1):5220-5227, 2004.
[3] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. An HDP-HMM for systems with state persistence. In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
[4] W. Fu, L. Song, and E. P. Xing. Dynamic mixed membership blockmodel for evolving networks. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[5] O. Hirose, R. Yoshida, S. Imoto, R. Yamaguchi, T. Higuchi, D. S. Charnock-Jones, C. Print, and S. Miyano. Statistical inference of transcriptional module-based gene networks from time course gene expression profiles by using state space models. Bioinformatics, 24(7):932-942, 2008.
[6] P. D. Hoff. Subset clustering of binary sequences, with an application to genomic abnormality data. Biometrics, 61(4):1027-1036, 2005.
[7] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2(1):193-218, 1985.
[8] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 2006.
[9] B. Klimt and Y. Yang. The Enron corpus: A new dataset for email classification research. In Proceedings of the European Conference on Machine Learning (ECML), 2004.
[10] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In Proceedings of the Twelfth International Conference on Information and Knowledge Management, pages 556-559. ACM, 2003.
[11] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077-1087, 2001.
[12] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[13] L. Tang, H. Liu, J. Zhang, and Z. Nazeri. Community evolution in dynamic multi-mode networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 677-685, 2008.
[14] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[15] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
[16] T. Yang, Y. Chi, S. Zhu, Y. Gong, and R. Jin. A Bayesian approach toward finding communities and their evolutions in dynamic social networks. In Proceedings of the SIAM International Conference on Data Mining (SDM), 2009.
[17] R. Yoshida, S. Imoto, and T. Higuchi. Estimating time-dependent gene networks from time series microarray data by dynamic linear models with Markov switching. In Proceedings of the International Conference on Computational Systems Bioinformatics, 2005.
[18] S. Zhu, K. Yu, and Y. Gong. Stochastic relational models for large-scale dyadic data using MCMC. In Advances in Neural Information Processing Systems 21 (NIPS), 2009.
2010
Exact learning curves for Gaussian process regression on large random graphs

Matthew J. Urry, Department of Mathematics, King's College London, London, WC2R 2LS, U.K. matthew.urry@kcl.ac.uk
Peter Sollich, Department of Mathematics, King's College London, London, WC2R 2LS, U.K. peter.sollich@kcl.ac.uk

Abstract

We study learning curves for Gaussian process regression which characterise performance in terms of the Bayes error averaged over datasets of a given size. Whilst learning curves are in general very difficult to calculate, we show that for discrete input domains, where similarity between input points is characterised in terms of a graph, accurate predictions can be obtained. These should in fact become exact for large graphs drawn from a broad range of random graph ensembles with arbitrary degree distributions where each input (node) is connected only to a finite number of others. Our approach is based on translating the appropriate belief propagation equations to the graph ensemble. We demonstrate the accuracy of the predictions for Poisson (Erdős–Rényi) and regular random graphs, and discuss when and why previous approximations of the learning curve fail.

1 Introduction

Learning curves are a convenient way of characterising the performance that can be achieved with machine learning algorithms: they give the generalisation error ϵ as a function of the number of training examples n, averaged over all datasets of size n under appropriate assumptions about the data-generating process. Such a characterisation is particularly useful in the case of non-parametric approaches such as Gaussian processes (GPs) [1], where in contrast to the parametric case [2] there is no generic classification of possible learning curves. Here we study GP regression, where a real-valued output function f(x) is to be learned.
Qualitatively, GP learning curves are relatively well understood for the scenario where the inputs x come from a continuous space, typically R^n [3, 4, 5, 6, 7, 8, 9, 10, 11]. However, except in the limit of large n, or for very specific situations like one-dimensional inputs [3], the learning curves cannot be calculated exactly. Here we show that this is possible for discrete input spaces where similarity between input points can be represented as a graph whose edges connect similar points, inspired by work at last year's NIPS that developed simple approximations for this scenario [12]. There are many potential application domains where learning of such functions of discrete inputs x could be relevant, for example if x is a research paper whose impact f(x) we would like to predict; the similarity graph could then be constructed on the basis of shared authorship. Or we could be trying to learn functions on generic symbol strings x, for example ones characterizing protein amino acid sequences, and the similarity graph would have edges between homologous molecules. Our aim is to find out how well GP regression can perform in such discrete domains; alternative inference approaches including online algorithms [13, 14, 15, 16] would also be interesting to study but are outside the scope of the present paper. We focus on large sparse random graphs, where each node is connected only to a finite number of other nodes even though the overall number of nodes in the graph is large. In section 2 we give a brief overview of GP regression and summarize the approximation for the learning curves used in previous work [4, 8, 12]. Section 3 then explains our method: following a similar approach in [17] for random matrix spectra, we write down the belief propagation equations for a given graph in the form normally used in the cavity method [18] of statistical mechanics, and then translate them to graphs drawn from a random graph ensemble.
Because for sparse random graphs typical loop lengths grow with the graph size, the belief propagation equations and hence our learning curve predictions should become exact for large graphs. Section 4 compares the predictions with simulation results for Poisson (Erdős–Rényi) graphs, where each edge is independently present with some small probability, and random regular graphs, where each node has the same degree (number of neighbours). The new predictions are indeed very accurate, and substantially more so than previous approximations. In section 4.1 we discuss in more detail the relationship between our work and these approximations, to rationalize where the strongest deviations occur. Finally, section 5 summarises our results and discusses open questions and directions for future work.

2 GP regression and approximate learning curves

Gaussian processes have become a well known machine learning technique used in a wide range of areas, see e.g. [19, 20, 21]. One reason for their success is the intuitive way that a priori information about the function to be learned is transparently encoded by the covariance and mean functions of the GP. A GP is a Gaussian prior over functions f with a fixed covariance function (kernel) C and mean function (assumed to be 0)^1. In the simplest case the likelihood is also Gaussian, i.e. we assume that the outputs y_µ in a set of examples D = {(i_1, y_1), . . . , (i_N, y_N)} are obtained by corrupting the clean function values f_{i_µ} with i.i.d. Gaussian noise of variance σ². Then the posterior distribution over functions is, from Bayes' theorem P(f|D) ∝ P(f)P(D|f),

P(f|D) \propto \exp\!\Big(-\tfrac{1}{2} f^T C^{-1} f - \frac{1}{2\sigma^2} \sum_{\mu=1}^{N} (y_\mu - f_{i_\mu})^2\Big) \qquad (1)

We consider GPs in discrete spaces, where each input is a node of a graph and can therefore be given a discrete label i as anticipated above; f_i is the associated function value. If the graph has V nodes, the covariance function is then just a V × V matrix.
A number of possible forms for covariance functions on graphs have been proposed. We will focus on the relatively flexible random walk covariance function [22],

C = \frac{1}{\kappa}\left((1 - a^{-1})I + a^{-1} D^{-1/2} A D^{-1/2}\right)^{p}, \qquad a \ge 2,\; p \ge 0 \qquad (2)

Here A is the adjacency matrix of the graph, with A_{ij} = 1 if nodes i and j are connected by an edge, and 0 otherwise; D = diag{d_1, . . . , d_V} is a diagonal matrix containing the degrees of the nodes in the graph (d_i = \sum_j A_{ij}). One can easily see the relationship to a random walk: the unnormalised covariance function is a (symmetrised) p-step 'lazy' random walk, with probability a^{-1} of moving to a neighbouring node at each step. The prior thus assumes that function values up to a distance p along the graph are correlated with each other, to an extent determined by the hyperparameter a^{-1}. The constant κ will be chosen throughout to normalise C so that \frac{1}{V}\sum_i C_{ii} = 1, which corresponds to setting the average prior variance of the function values to unity. Our main concern in this paper is GP learning curves in discrete input spaces. The learning curve describes how the average generalisation error (mean square error) ϵ decreases with the number of examples N. Qualitatively, it gives the rate at which one would expect a GP to learn a function in the average case. The generalisation error on an ensemble of graphs is given by

\epsilon = \Big\langle \frac{1}{V} \sum_i (\bar f_i - f_i)^2 \Big\rangle_{f|D,\,D,\,\text{graphs}} \qquad (3)

where f is the uncorrupted (clean) teacher or target function, and \bar f is the posterior mean function of the GP, which gives the function values we predict on the basis of the data D. It is worth noting that the generalisation error for a graph ensemble contains an additional average over this ensemble.

Footnote 1: We focus on the zero prior mean case throughout. All results translate fairly straightforwardly to the non-zero mean case, but this complicates the algebra without leading to substantially new insights.
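As a concrete illustration, the random walk covariance function (2), including the normalisation by κ, can be constructed numerically as follows (a minimal NumPy sketch, not the authors' code; the small guard simply avoids division by zero at isolated nodes):

```python
import numpy as np

def random_walk_kernel(A, a=2.0, p=10):
    """Random walk covariance function of equation (2):
    C = ((1 - 1/a) I + (1/a) D^{-1/2} A D^{-1/2})^p / kappa,
    with kappa chosen so that the average prior variance (1/V) sum_i C_ii equals 1."""
    V = len(A)
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard against isolated nodes
    M = (1 - 1 / a) * np.eye(V) + (1 / a) * (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    C = np.linalg.matrix_power(M, p)
    return C * V / np.trace(C)                        # normalise so that trace(C) = V

# Toy graph: a 4-cycle, so every node has degree 2.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
C = random_walk_kernel(A, a=2.0, p=3)
```

For a ≥ 2 the eigenvalues of the matrix inside the power lie in [1 − 2/a, 1], so it is positive semi-definite and C is a valid covariance matrix.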
As is standard in the study of learning curves, we have assumed a matched scenario where the posterior P(f|D) used for our predictions is also the posterior over the underlying target functions. The generalisation error is then the Bayes error, and is given by the average posterior variance. Sollich [4] and later Opper [7], with a more general replica approach, showed that for continuous input spaces a reasonable approximation to the learning curve can be expressed as the solution of the following self-consistent equation:

\epsilon = g\!\left(\frac{N}{\epsilon + \sigma^2}\right), \qquad g(h) = \sum_{\alpha=1}^{V} \left(\lambda_\alpha^{-1} + h\right)^{-1} \qquad (4)

Here the λ_α are appropriately defined eigenvalues of the covariance function. The motivation for our study is work presented at NIPS 2009 [12], which demonstrated that this approximation can also be used in discrete domains, but is not always accurate. Studying random walk and diffusion kernels [22] on random regular graphs, the authors showed that although the eigenvalue-based approximation is reasonable in both the large- and small-N limits, it fails to accurately predict the learning curve in the important transition region between these two extremes, drastically so for low noise variances σ². In the next section we will show that this shortcoming can be overcome by the cavity method (belief propagation), which explicitly takes advantage of the sparse structure of the underlying graph. This will give an accurate approximation for the learning curves in a broad range of ensembles of sparse random graphs.

3 Accurate predictions with the cavity method

The cavity method was developed in statistical physics [18] but is closely related to belief propagation; for a good overview of these and other mean-field methods, see e.g. [23]. We begin with equation (3). Because we only need the posterior variance in the matched case considered here, we can shift f so that \bar f = 0; f_i is then the deviation of the function value at node i from the posterior mean.
In this notation, the Bayes error is

\epsilon = \Big\langle \frac{1}{V} \sum_i \int df\; f_i^2\, P(f|D) \Big\rangle_{D,\,\text{graphs}} \qquad (5)

where P(f|D) now contains in the exponent only the terms from (1) that are quadratic in f. To set up the cavity method, we begin by defining a generating or partition function Z, for a fixed graph, as

Z = \int df\, \exp\!\Big(-\tfrac{1}{2} f^T C^{-1} f - \frac{1}{2\sigma^2} \sum_\mu f_{i_\mu}^2 - \frac{\lambda}{2} \sum_i f_i^2\Big) \qquad (6)

An auxiliary parameter λ has been added here to allow us to represent the Bayes error as \epsilon = -\lim_{\lambda \to 0} (2/V)\, \frac{\partial}{\partial \lambda} \langle \log Z \rangle_{D,\text{graphs}}. The dependence on the dataset D appears in Z only through the sum over µ. It will be more useful to write this as a sum over all nodes: if n_i counts the number of examples seen at node i, then \sum_\mu f_{i_\mu}^2 = \sum_i n_i f_i^2. Even with this replacement, the partition function in equation (6) is not yet suitable for an application of the cavity method, since the inverse covariance function cannot be written explicitly and generates interaction terms f_i f_j between nodes that can be far away from each other along the graph. To eliminate the inverse of the covariance function we therefore perform a Fourier transform on the first term in the exponent, \exp(-\tfrac12 f^T C^{-1} f) \propto \int dh\, \exp(-\tfrac12 h^T C h + i \sum_i h_i f_i). The integral over f then factorizes over the f_i, and one finds

Z \propto \int dh\, \exp\!\Big(-\tfrac12 h^T C h - \tfrac12 h^T \mathrm{diag}\{(n_i/\sigma^2 + \lambda)^{-1}\}\, h\Big) \qquad (7)

Substituting the explicit form of the covariance function (2) into equation (7) we have

Z \propto \int dh\, \exp\!\Big(-\tfrac12 h^T \sum_{q=0}^{p} c_q (D^{-1/2} A D^{-1/2})^q h - \tfrac12 h^T \mathrm{diag}\{(n_i/\sigma^2 + \lambda)^{-1}\}\, h\Big) \qquad (8)

where we have written the power in equation (2) as a binomial sum and defined c_q = \binom{p}{q} a^{-q} (1 - a^{-1})^{p-q} / \kappa. For p > 1, equation (8) still has interactions beyond the immediate neighbours. To remove these we introduce additional variables h^q, defined recursively via h^q = (D^{-1/2} A D^{-1/2}) h^{q-1} for q ≥ 1 and h^0 = h. These definitions are enforced via Dirac delta functions, each i and q ≥ 1 giving a factor \delta\big(h_i^q - d_i^{-1/2} \sum_j A_{ij} d_j^{-1/2} h_j^{q-1}\big) \propto \int d\hat h_i^q \exp\big[i \hat h_i^q \big(h_i^q - d_i^{-1/2} \sum_j A_{ij} d_j^{-1/2} h_j^{q-1}\big)\big].
Substituting this into equation (8) gives the key advantage that now the adjacency matrix appears only linearly in the exponent, so that we have interactions only across edges of the graph. Rescaling the h_i^q to d_i^{1/2} h_i^q, and similarly for the \hat h_i^q, and explicitly separating off the local terms from the interactions finally yields

Z \propto \int \prod_{q=0}^{p} dh^q \prod_{q=1}^{p} d\hat h^q\, \prod_i \exp\!\Big(-\tfrac12 \sum_{q=0}^{p} c_q d_i h_i^0 h_i^q - \tfrac12 \frac{d_i (h_i^0)^2}{n_i/\sigma^2 + \lambda} + i \sum_{q=1}^{p} d_i \hat h_i^q h_i^q\Big) \times \prod_{(ij)} \exp\!\Big(-i \sum_{q=1}^{p} \big(\hat h_i^q h_j^{q-1} + \hat h_j^q h_i^{q-1}\big)\Big) \qquad (9)

We now have the partition function of a (complex-valued) Gaussian graphical model. By differentiating log Z with respect to λ, keeping track of λ-dependent prefactors not written above, one finds that the Bayes error is

\epsilon = \lim_{\lambda \to 0} \frac{1}{V} \sum_i \frac{1}{n_i/\sigma^2 + \lambda} \left(1 - \frac{d_i \langle (h_i^0)^2 \rangle}{n_i/\sigma^2 + \lambda}\right) \qquad (10)

and so we need the marginal distributions of the h_i^0. This is where the cavity method enters: for a large random graph the structure is locally treelike, so that if node i were eliminated the corresponding subgraphs (locally trees) rooted at the neighbours j ∈ N(i) of i would become independent [17]. The resulting cavity marginals P_j^{(i)}(h_j, \hat h_j | D) can then be calculated iteratively within these subgraphs, giving the cavity update equations

P_j^{(i)}(h_j, \hat h_j | D) \propto \exp\!\Big(-\tfrac12 \sum_{q=0}^{p} c_q d_j h_j^0 h_j^q - \tfrac12 \frac{d_j (h_j^0)^2}{n_j/\sigma^2 + \lambda} + i \sum_{q=1}^{p} d_j \hat h_j^q h_j^q\Big) \int \prod_{k \in N(j)\setminus i} dh_k\, d\hat h_k\, \exp\!\Big(-i \sum_{q=1}^{p} \big(\hat h_j^q h_k^{q-1} + \hat h_k^q h_j^{q-1}\big)\Big)\, P_k^{(j)}(h_k, \hat h_k | D) \qquad (11)

One sees that these equations are solved self-consistently by complex-valued Gaussian distributions with mean zero and covariance matrices V_j^{(i)}. By performing the Gaussian integrals in the cavity update equations (11) explicitly, these equations then take the rather simple form

V_j^{(i)} = \Big(O_j - \sum_{k \in N(j)\setminus i} X V_k^{(j)} X\Big)^{-1} \qquad (12)

where, ordering the variables of a node as (h^0, h^1, \dots, h^p, \hat h^1, \dots, \hat h^p) and writing c = (c_1, \dots, c_p)^T, we have defined the (2p + 1) × (2p + 1) matrices in block form as

O_i = d_i \begin{pmatrix} c_0 + \frac{1}{n_i/\sigma^2 + \lambda} & \tfrac12 c^T & 0^T \\ \tfrac12 c & 0_{p,p} & -i I_p \\ 0 & -i I_p & 0_{p,p} \end{pmatrix}, \qquad X = \begin{pmatrix} 0_{p+1,p+1} & i S^T \\ i S & 0_{p,p} \end{pmatrix}

where S is the p × (p + 1) shift matrix with entries S_{q,q'} = \delta_{q', q-1}: the entries i of X couple \hat h^q at one end of an edge to h^{q-1} at the other, in accordance with the interaction terms in (9).

Finally we need to translate these equations to an ensemble of large sparse graphs. Each ensemble is characterised by the distribution p(d) of the degrees d_i, with every graph that has the desired degree distribution being assigned the same probability. Instead of individual cavity covariance matrices V_j^{(i)}, we need to consider their probability distribution W(V) across all edges of the graph. Picking at random an edge (i, j) of a graph, the probability that node j has degree d_j is then p(d_j) d_j / \bar d, because such a node has d_j "chances" of being picked. (The normalisation factor is the average degree \bar d.) Using again the locally treelike structure, the incoming (to node j) cavity covariances V_k^{(j)} will be i.i.d. samples from W(V). Thus a fixed point of the cavity update equations corresponds to a fixed point of an update equation for W(V):

W(V) = \Big\langle \sum_d \frac{p(d)\, d}{\bar d} \int \prod_{k=1}^{d-1} dV_k\, W(V_k)\; \delta\Big(V - \big(O - \sum_{k=1}^{d-1} X V_k X\big)^{-1}\Big) \Big\rangle_n \qquad (13)

Because the node label is now arbitrary, we have abbreviated V_j^{(i)} to V, d_j to d, O_j to O and V_k^{(j)} to V_k. The average is over the distribution of the number of examples n ≡ n_j at node j in the dataset D. Assuming for simplicity that examples are drawn with uniform input probability across all nodes, this distribution is simply n ∼ Poisson(ν) in the limit of large N and V at fixed ν = N/V. In general equation (13) – which can also be formally derived using the replica approach [24] – cannot be solved analytically, but we can solve it numerically using a standard population dynamics method [25].
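Equation (13) is solved by iterating a population of matrix-valued cavity covariances. The algorithmic skeleton of population dynamics is easier to see on a scalar analogue, so the sketch below (our toy illustration, not the authors' implementation) applies it to the cavity precisions ω of a Gaussian model with precision matrix aI + bA on a sparse random graph, whose cavity update is ω = a − Σ_{k=1}^{d−1} b²/ω_k. The structure is the same as for (13), including the edge-biased degree sampling p(d)d/d̄:

```python
import numpy as np

def population_dynamics(degree_dist, a, b, pop_size=5000, sweeps=60, rng=None):
    """Population dynamics for scalar cavity precisions omega on a random graph
    with degree distribution p(d): omega = a - sum_{k=1}^{d-1} b^2 / omega_k.
    Same skeleton as the matrix-valued update (13), with scalar messages."""
    rng = rng or np.random.default_rng(0)
    degs = np.array(sorted(degree_dist))
    probs = np.array([degree_dist[d] for d in degs], dtype=float)
    dbar = float((degs * probs).sum())
    edge_probs = degs * probs / dbar          # degree distribution seen along an edge
    pop = np.full(pop_size, a, dtype=float)   # initial population of cavity precisions
    for _ in range(sweeps):
        d = rng.choice(degs, size=pop_size, p=edge_probs)
        new = np.empty(pop_size)
        for dd in degs:                       # vectorise per distinct degree
            mask = d == dd
            if mask.any():
                # d - 1 incoming messages, resampled from the current population
                neigh = rng.choice(pop, size=(int(mask.sum()), dd - 1))
                new[mask] = a - (b ** 2 / neigh).sum(axis=1)
        pop = new
    return pop

# Random 3-regular graph, a = 3, b = 1: the recursion has the stable fixed point
# omega* solving omega = a - 2 b^2 / omega, i.e. omega* = 2.
pop = population_dynamics({3: 1.0}, a=3.0, b=1.0)
marginal_var = 1.0 / (3.0 - 3 * 1.0 / pop.mean())  # variance of a degree-3 node
```

For the matrix-valued problem, the population entries become (2p+1) × (2p+1) complex covariances and the update is the matrix inverse in (13); the resampling and degree-biasing logic is unchanged.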
Once we have W(V), the Bayes error can be found from the graph ensemble version of equation (10), which is obtained by inserting the explicit expression for ⟨(h_i^0)²⟩ in terms of the cavity marginals of the neighbouring nodes, and replacing the average over nodes with an average over p(d):

\epsilon = \lim_{\lambda \to 0} \Big\langle \sum_d \frac{p(d)}{n/\sigma^2 + \lambda} \Big(1 - \frac{d}{n/\sigma^2 + \lambda} \int \prod_{k=1}^{d} dV_k\, W(V_k)\, \Big(O - \sum_{k=1}^{d} X V_k X\Big)^{-1}_{00}\Big) \Big\rangle_n \qquad (14)

The number of examples at the node is again to be averaged over n ∼ Poisson(ν). The subscript "00" indicates the top left element of the matrix, which determines the variance of h^0. To be able to use equation (14), it needs to be rewritten in a form that remains explicitly nonsingular when n = 0 and λ → 0. We split off the n-dependence of the matrix inverse by writing O - \sum_{k=1}^{d} X V_k X = M + [d/(n/\sigma^2 + \lambda)]\, e_0 e_0^T, where e_0^T = (1, 0, \dots, 0). The matrix inverse appearing above can then be expressed using the Woodbury formula as

M^{-1} - \frac{M^{-1} e_0 e_0^T M^{-1}}{(n/\sigma^2 + \lambda)/d + e_0^T M^{-1} e_0} \qquad (15)

To extract the (0,0) (top left) element as required, we multiply by e_0^T \cdots e_0. After some simplification the λ → 0 limit can then be taken, with the result

\epsilon = \Big\langle \sum_d p(d) \int \prod_{k=1}^{d} dV_k\, W(V_k)\, \frac{1}{n/\sigma^2 + d\, (M^{-1})_{00}} \Big\rangle_n \qquad (16)

This has a simple interpretation: the cavity marginals of the neighbours provide an effective Gaussian prior for each node, whose inverse variance is d(M^{-1})_{00}. The self-consistency equation (13) for W(V) and the expression (16) for the resulting Bayes error are our main results. They allow us to predict learning curves as a function of the number of examples per node, ν, for arbitrary degree distributions p(d) of our random graph ensemble, providing the graphs are sparse, and for arbitrary noise level σ² and covariance function hyperparameters p and a. We note briefly that in graphs with isolated nodes (d = 0) one has to be slightly careful: already in the definition of the covariance function (2) one should replace D → D + δI to avoid division by zero, taking δ → 0 at the end.
For d = 0 one then finds in the expression (16) that (M^{-1})_{00} = 1/(c_0 δ), so that (δ + d)(M^{-1})_{00} = δ(M^{-1})_{00} = 1/c_0. This is to be expected, since isolated nodes each have a separate Gaussian prior with variance c_0.

4 Results

We will begin by comparing the performance of our new cavity prediction (equation (16)) against the eigenvalue approximation (equation (4)) from [4, 7], for random regular graphs with degree 3 (so that p(d) = δ_{d,3}). In this way we can exploit the work of [12], where the quality of the approximation (4) for this case was studied in some detail.

Figure 1: (Left) A comparison of the cavity prediction (solid line with triangles) against the eigenvalue approximation (dashed line) for the learning curves for random regular graphs of degree 3, and against simulation results for graphs with V = 500 nodes (solid line with circles). Random walk kernel with p = 1, a = 2; noise level as shown. (Right) As before with p = 10, a = 2. (Bottom) Similarly for Poisson (Erdős–Rényi) graphs with c = 3.

As can be seen in figure 1 (left and right), the cavity approach is accurate along the entire learning curve, to the point where the prediction is visually almost indistinguishable from the numerical simulation results. Importantly, the cavity approach predicts even the midsection of the learning curve for intermediate values of ν, where the eigenvalue prediction clearly fails. The deviations between the cavity theory and the eigenvalue predictions are largest in this central part because there fluctuations in the number of examples seen at each node have the greatest effect. Indeed, for much smaller ν, the dataset does not contain any examples from many of the nodes, i.e. n = 0 is dominant and fluctuations towards larger n have low probability. For large ν, the dataset typically contains many examples for each node and Poisson fluctuations around the average value n = ν are small.
The fluctuation effects for intermediate ν are suppressed when the noise level σ² is large, because then the generalisation error in the range of intermediate ν is still fairly close to its initial value (at ν = 0). But for smaller noise levels, fluctuations in the number of examples for each node can have a large effect, and correspondingly the eigenvalue prediction becomes very poor for intermediate ν. We discuss this further in section 4.1. Comparing figure 1 (left) and (right), it can also be seen that, unlike the eigenvalue-based approximation, the cavity prediction for the learning curve does not deteriorate as p is varied towards lower values. Similar conclusions apply with regard to changes of a (results not shown). Next we consider Poisson (Erdős–Rényi) graphs, where each edge is present independently with probability c/V [26]. This leads to a Poisson distribution of degrees, p(d) = e^{-c} c^d / d!. Figure 1 (bottom) shows the performance of our cavity prediction for this graph ensemble with c = 3 for a GP with p = 10, a = 2, in comparison to simulation results for V = 500. The cavity prediction clearly outperforms the eigenvalue-based approximation and again remains accurate even in the central part of the learning curve. Taken together, the results for random regular and Poisson graphs clearly confirm our expectation that the cavity prediction for the learning curve we have derived should be exact for large graphs. It is worth noting that our new cavity prediction works for arbitrary degree distributions and is limited only by the assumption of graph sparsity.

4.1 Why the eigenvalue approximation fails

The derivation of the eigenvalue approximation (4) by Opper in [8] gives some insight into when and how this approximation breaks down. Opper takes equation (6) and uses the replica trick to write \langle \log Z \rangle_D = \lim_{n \to 0} \frac{1}{n} \log \langle Z^n \rangle_D. The average of Z^n is calculated for integer n and then appropriately continued to n → 0.
The required nth power of equation (6) is in our case
\[
\langle Z^n \rangle_D = \int \prod_{a=1}^{n} df^a \, \Big\langle \exp\Big( -\tfrac{1}{2} \sum_a f^{aT} C^{-1} f^a - \tfrac{1}{2\sigma^2} \sum_{i,a} n_i (f_i^a)^2 - \tfrac{\lambda}{2} \sum_{i,a} (f_i^a)^2 \Big) \Big\rangle_D \qquad (17)
\]
The dataset average, over n_i ∼ Poisson(ν), then gives
\[
\langle Z^n \rangle_D = \int \prod_{a=1}^{n} df^a \, \exp\Big( -\tfrac{1}{2} \sum_a f^{aT} C^{-1} f^a + \nu \sum_i \big( e^{-\sum_a (f_i^a)^2 / 2\sigma^2} - 1 \big) - \tfrac{\lambda}{2} \sum_{i,a} (f_i^a)^2 \Big) \qquad (18)
\]
If one now wants to proceed without explicitly exploiting the sparse graph structure, one has to approximate the exponential term in the exponent. Opper does this using a variational approximation of Gaussian form for the distribution of the f^a, and this eventually leads to the approximation (4) for the learning curve. This approach is evidently justified for large σ², where a Taylor expansion of the exponential term in (18) can be truncated after the quadratic term. For small noise levels, on the other hand, the Gaussian variational approach clearly does not capture all the details of the fluctuations in the numbers of examples n_i. By comparison, in this paper the cavity method allows us to retain the average over D explicitly, without the need to approximate the distribution of the n_i. As a result, the section of the learning curve where fluctuations in the numbers of examples play a large role is captured accurately, while the Gaussian variational (eigenvalue) approach can give wildly inaccurate results there.

5 Conclusions and further work

In this paper we have studied the learning curves of GP regression on large random graphs. In a significant advance on the work of [12], we showed that the approximations for learning curves proposed by Sollich [4] and Opper [7] for continuous input spaces can be greatly improved upon in the graph case by using the cavity method. We argued that the resulting predictions should in fact become exact in the limit of large random graphs. Section 3 derived the learning curve approximation using the cavity method for arbitrary degree distributions.
We defined a generating function Z (equation (6)) from which the generalisation error ϵ can be obtained by differentiation. We then rewrote this using Fourier transforms (equation (7)) and introduced additional variables (equation (9)) to bring Z into the form required for a cavity approach: the partition function of a complex-valued Gaussian graphical model. By standard arguments we then derived the cavity update equations for a fixed graph (equation (12)). Finally we generalised from these to graph ensembles (equation (13)), taking the limit of large graph size. The resulting prediction for the generalisation error (equation (16)) has an intuitively appealing interpretation, in which each node in the graph learns subject to an effective (and data-dependent) Gaussian prior provided by its neighbours. In section 4 we compared our new prediction to the eigenvalue approximation results of [12]. We showed that our new method is far more accurate in the challenging midsection of the learning curves than the eigenvalue version, both for random regular and Poisson graph ensembles (figure 1). Subsection 4.1 discusses why the older approximation, derived from a replica perspective in [7], is inaccurate compared to the cavity method. To retain tractable averages in continuous input spaces, it has to approximate the fluctuations in the number of examples per node in the dataset, resulting in the inaccurate predictions seen in figure 1. On graphs one can perform this average explicitly when calculating the cavity updates and the resulting Bayes error, giving a far more accurate prediction of the learning curves. Although the learning curves predicted using the cavity method cover a broad range of graph ensembles because they apply for arbitrary p(d), there remain some interesting types of graph ensembles (for instance graphs with community structure) that cannot be generated by imposing only the degree distribution.
Indeed, an important assumption in the current work is that small loops are rare, whilst in community graphs, where nodes exhibit preferential attachment, there can be many small loops. We are in the process of analysing GP learning on such graphs using the approach of Rogers et al. [27], where community graphs are modelled as having a sparse superstructure joining clusters of densely connected nodes. Following previous studies [12], we have in this paper set the scale of the covariance function by normalising the average prior covariance over all nodes. For the Poisson graph case, however, our learning curve simulations show that there can be large variations in the local prior variances C_ii, while from the Bayesian modelling point of view it would seem more plausible to use covariance functions with all C_ii = 1. This could be achieved by pre- and post-multiplying the random walk covariance matrix by an appropriate diagonal matrix. We hope to study this modified covariance function in future work, and to extend the cavity prediction for the learning curves to this case. It would also be interesting to extend our approach to model mismatch, where we assume the data-generating process is a GP with hyperparameters that differ from those of the GP being used for inference. This was studied for continuous input spaces in [10]; equally interesting would be a study of mismatch with a fixed target function, as analysed by Opper et al. [8]. It should further be useful to study the case of mismatched graphs, rather than hyperparameters. This is relevant because in real-world learning one will frequently have only partial knowledge of the graph structure, for instance in metabolic networks where not all of the pathways have been discovered, or in social networks where friendships are continuously being made and broken.
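The rescaling suggested here can be written as C ← D^{-1/2} C D^{-1/2} with D = diag(C). A minimal sketch (NumPy; a random SPD matrix stands in for the actual random walk kernel) shows that this congruence transform forces all C_ii = 1 while preserving symmetry and positive definiteness:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
C = B @ B.T + 6 * np.eye(6)  # stand-in SPD covariance, not an actual random walk kernel

# Pre- and post-multiply by diag(C)^{-1/2} so every local prior variance becomes 1.
d = 1.0 / np.sqrt(np.diag(C))
C_norm = C * np.outer(d, d)

print(np.diag(C_norm))                        # all ones
print(np.linalg.eigvalsh(C_norm).min() > 0)   # still positive definite
```

Because D^{-1/2} C D^{-1/2} is a congruence transform of an SPD matrix, positive definiteness is guaranteed, so the result remains a valid covariance function.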
Another interesting avenue for further research would be to look at multiple-output (multi-task) GPs on graphs, to see if the work of Chai [28] can be extended to this scenario. One would hope that, as seen with the learning curves for single-output GPs in this paper, input domains defined by graphs might allow simplifications in the analysis and provide more accurate bounds or even exact predictions. Finally, it would be worth extending the study of graph mismatch to the case of evolving graphs and functions. Here spatio-temporal GP regression could be employed to predict functions changing over time, perhaps including a model-based approach as in [29] to account for the evolving graph structure.

References
[1] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). MIT Press, December 2005.
[2] Shun-ichi Amari, Naotake Fujita, and Shigeru Shinomoto. Four types of learning curves. Neural Computation, 4(4):605–618, 1992.
[3] M. Opper. Regression with Gaussian processes: Average case performance. In Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, pages 17–23. Springer-Verlag, 1997.
[4] P. Sollich. Learning curves for Gaussian processes. In Advances in Neural Information Processing Systems 11, pages 344–350. MIT Press, 1999.
[5] F. Vivarelli and M. Opper. General bounds on Bayes errors for regression with Gaussian processes. In Advances in Neural Information Processing Systems 11, pages 302–308. MIT Press, 1999.
[6] C. K. I. Williams and F. Vivarelli. Upper and lower bounds on the learning curve for Gaussian processes. Machine Learning, 40(1):77–102, 2000.
[7] M. Opper and D. Malzahn. Learning curves for Gaussian processes regression: A framework for good approximations. In Advances in Neural Information Processing Systems 14, pages 273–279. MIT Press, 2001.
[8] M. Opper and D. Malzahn. A variational approach to learning curves.
In Advances in Neural Information Processing Systems 14, pages 463–469. MIT Press, 2002.
[9] P. Sollich and A. Halees. Learning curves for Gaussian process regression: Approximations and bounds. Neural Computation, 14(6):1393–1428, 2002.
[10] P. Sollich. Gaussian process regression with mismatched models. In Advances in Neural Information Processing Systems 14, pages 519–526. MIT Press, 2002.
[11] P. Sollich. Can Gaussian process regression be made robust against model mismatch? In N. Lawrence, J. Winkler, and M. Niranjan, editors, Deterministic and Statistical Methods in Machine Learning, pages 211–228, Berlin, 2005. Springer.
[12] P. Sollich, M. J. Urry, and C. Coti. Kernels and learning curves for Gaussian process regression on random graphs. In Advances in Neural Information Processing Systems 22, pages 1723–1731. Curran Associates, Inc., 2009.
[13] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 305–312, New York, NY, USA, 2005. ACM.
[14] M. Herbster and M. Pontil. Prediction on a graph with a perceptron. In Advances in Neural Information Processing Systems 19, pages 577–584. MIT Press, 2007.
[15] M. Herbster. Exploiting cluster-structure to predict the labeling of a graph. In Proceedings of the 19th International Conference on Algorithmic Learning Theory, pages 54–69. Springer, 2008.
[16] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. Learning Theory, 3120:624–638, 2004.
[17] Tim Rogers, Koujin Takeda, Isaac Pérez Castillo, and Reimer Kühn. Cavity approach to the spectral density of sparse symmetric random matrices. Physical Review E, 78(3):31116–31121, 2008.
[18] M. Mezard, G. Parisi, and M. A. Virasoro. Random free energies in spin glasses. Le Journal de Physique - Lettres, 46(6):217–222, 1985.
[19] M. T. Farrell and A. Correa. Gaussian process regression models for predicting stock trends.
Relation, 10:3414, 2007.
[20] B. Ferris, D. Haehnel, and D. Fox. Gaussian processes for signal strength-based location estimation. In Proceedings of Robotics: Science and Systems, Philadelphia, USA, August 2006.
[21] Sunho Park and Seungjin Choi. Gaussian process regression for voice activity detection and speech enhancement. In International Joint Conference on Neural Networks, pages 2879–2882, Hong Kong, China, 2008. IEEE.
[22] A. J. Smola and R. Kondor. Kernels and regularization on graphs. In M. Warmuth and B. Scholkopf, editors, Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop (COLT), pages 144–158, Heidelberg, 2003. Springer.
[23] M. Opper and D. Saad. Advanced Mean Field Methods: Theory and Practice. MIT Press, 2001.
[24] Reimer Kühn. Finitely coordinated models for low-temperature phases of amorphous systems. Journal of Physics A, 40(31):9227, 2007.
[25] M. Mézard and G. Parisi. The Bethe lattice spin glass revisited. The European Physical Journal B, 20(2):217–233, 2001.
[26] P. Erdős and A. Rényi. On random graphs, I. Publicationes Mathematicae (Debrecen), 6:290–297, 1959.
[27] Tim Rogers, Conrad Pérez Vicente, Koujin Takeda, and Isaac Pérez Castillo. Spectral density of random graphs with topological constraints. Journal of Physics A, 43(19):195002, 2010.
[28] Kian Ming Chai. Generalization errors and learning curves for regression with multi-task Gaussian processes. In Advances in Neural Information Processing Systems 22, pages 279–287. Curran Associates, Inc., 2009.
[29] M. Alvarez, D. Luengo, and N. D. Lawrence. Latent force models. In D. van Dyk and M. Welling, editors, Proceedings of the Twelfth International Workshop on Artificial Intelligence and Statistics, pages 9–16, Clearwater Beach, FL, USA, 2009. MIT Press.
Learning Kernels with Radiuses of Minimum Enclosing Balls

Kun Gai, Guangyun Chen, Changshui Zhang
State Key Laboratory on Intelligent Technology and Systems
Tsinghua National Laboratory for Information Science and Technology (TNList)
Department of Automation, Tsinghua University, Beijing 100084, China
{gaik02, cgy08}@mails.thu.edu.cn, zcs@mail.thu.edu.cn

Abstract

In this paper, we point out that there exist scaling and initialization problems in most existing multiple kernel learning (MKL) approaches, which employ the large margin principle to jointly learn both a kernel and an SVM classifier. The reason is that the margin itself cannot adequately describe how good a kernel is, because it neglects the scaling. We use the ratio between the margin and the radius of the minimum enclosing ball to measure the goodness of a kernel, and present a new minimization formulation for kernel learning. This formulation is invariant to scalings of learned kernels, and when learning a linear combination of basis kernels it is also invariant to scalings of the basis kernels and to the types (e.g., L1 or L2) of norm constraints on the combination coefficients. We establish the differentiability of our formulation, and propose a gradient projection algorithm for kernel learning. Experiments show that our method significantly outperforms both SVM with the uniform combination of basis kernels and other state-of-the-art MKL approaches.

1 Introduction

In the past years, kernel methods, like support vector machines (SVM), have achieved great success in many learning problems, such as classification and regression. For such tasks, the performance strongly depends on the choice of the kernel used. A good kernel function, which implicitly characterizes a suitable transformation of the input data, can greatly benefit the accuracy of the predictor. However, when there are many available kernels, it is difficult for the user to pick out a suitable one.
Kernel learning has been developed to jointly learn both a kernel function and an SVM classifier. Chapelle et al. [1] present several principles to tune parameters in kernel functions. In particular, when the learned kernel is restricted to be a linear combination of multiple basis kernels, the problem of learning the combination coefficients as well as an SVM classifier is usually called multiple kernel learning (MKL). Lanckriet et al. [2] formulate the MKL problem as a quadratically constrained quadratic programming problem, which implicitly uses an L1 norm constraint to promote sparse combinations. To enhance computational efficiency, different approaches for solving this MKL problem have been proposed, using SMO-like strategies [3], semi-infinite linear programming [4], gradient-based methods [5], and second-order optimization [6]. Subsequent work explores more general forms of multiple kernel learning by promoting non-sparse [7, 8] or group-sparse [9] combinations of basis kernels, or by using other forms of learned kernels, e.g., a combination of an exponential number of kernels [10] or nonlinear combinations [11, 12, 13]. Most existing MKL approaches employ the objective function used in SVM. With an acceptable empirical loss, they aim to find the kernel that leads to the largest margin of the SVM classifier. However, despite the substantial progress in both the algorithmic design and the theoretical understanding of the MKL problem, none of these approaches seems to reliably outperform baseline methods, like SVM with the uniform combination of basis kernels [13]. As will be shown in this paper, the large margin principle used in these methods causes a scaling problem and an initialization problem, which can strongly affect the final solutions of the learned kernels as well as their performance.
This implies that the large margin preference cannot reliably result in a good kernel, and thus the margin itself is not a suitable measure of the goodness of a kernel. Motivated by the generalization bounds for SVM and kernel learning, we use the ratio between the margin of the SVM classifier and the radius of the minimum enclosing ball (MEB) of the data in the feature space endowed with the learned kernel as a measure of the goodness of the kernel, and propose a new kernel learning formulation. Our formulation differs from the radius-based principle of Chapelle et al. [1]. Their principle is sensitive to kernel scalings when a nonzero empirical loss is allowed, causing the same problems as the margin-based formulations. We prove that our formulation is invariant to scalings of learned kernels, and also invariant to initial scalings of basis kernels and to the types (e.g., L1 or L2) of norm constraints on kernel parameters in the MKL problem. Therefore our formulation completely addresses the scaling and initialization problems. Experiments show that our approach gives significant performance improvements both over SVM with the uniform combination of basis kernels and over other state-of-the-art kernel learning methods. Our proposed kernel learning problem can be reformulated as a tri-level optimization problem. We establish the differentiability of a general family of multilevel optimization problems. This enables us to handle the radius of the minimum enclosing ball, or other complicated optimal value functions, in the kernel learning framework by simple gradient-based methods. We hope that our results will also benefit other learning problems. The paper is structured as follows. Section 2 shows problems in previous MKL formulations. In Section 3 we present a new kernel learning formulation and give discussions.
Then we study the differentiability of multilevel optimization problems and give an efficient algorithm in Section 4 and Section 5, respectively. Experiments are shown in Section 6. Finally, we close with a conclusion.

2 Measuring how good a kernel is

Let D = {(x_1, y_1), ..., (x_n, y_n)} denote a training set of n pairs of input points x_i ∈ X and target labels y_i ∈ {±1}. Suppose we have a kernel family K = {k : X × X → R}, in which any kernel function k implicitly defines a transformation φ(·; k) from the input space X to a feature space by k(x_c, x_d) = ⟨φ(x_c; k), φ(x_d; k)⟩. Let a classifier be linear in the feature space endowed with k,
\[
f(x; w, b, k) = \langle \phi(x; k), w \rangle + b, \qquad (1)
\]
the sign of which is used to classify data. The task of kernel learning (for binary classification) is to learn both a kernel function k ∈ K and a classifier (w, b). To make the problem tractable, the learned kernel is usually restricted to a parametric form k^{(θ)}(·, ·), where θ = [θ_i]_i is the kernel parameter. Then the problem of learning a kernel becomes the problem of learning a kernel parameter θ. The most commonly used kernel form is a linear combination of multiple basis kernels,
\[
k^{(\theta)}(\cdot, \cdot) = \sum_{j=1}^{m} \theta_j k_j(\cdot, \cdot), \quad \theta_j \ge 0. \qquad (2)
\]

2.1 Problems in multiple kernel learning

Most existing MKL approaches, e.g., [2, 4, 5], employ an objective function equivalent to that of SVM:
\[
\min_{k, w, b, \xi_i} \; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i, \quad \text{s.t. } y_i f(x_i; w, b, k) + \xi_i \ge 1, \; \xi_i \ge 0, \qquad (3)
\]
where ξ_i is the hinge loss. This problem can be reformulated as
\[
\min_k \; \tilde{G}(k), \qquad (4)
\]
where
\[
\tilde{G}(k) = \min_{w, b, \xi_i} \; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i, \quad \text{s.t. } y_i f(x_i; w, b, k) + \xi_i \ge 1, \; \xi_i \ge 0. \qquad (5)
\]
For any kernel k, the optimal classifier (w, b) is simply the SVM classifier with the kernel k. Let γ denote the margin of the SVM classifier in the feature space endowed with k; then γ^{-2} = ∥w∥². Thus the term ∥w∥² makes formulation (3) prefer the kernel that results in an SVM classifier with a larger margin (as well as an acceptable empirical loss).
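As an aside, the linear combination form (2) is straightforward to realise on kernel matrices. A small sketch (NumPy; Gaussian basis kernels with illustrative bandwidths, not the paper's experimental setup) builds K(θ) = Σ_m θ_m K^m and checks that it remains a valid (symmetric positive semidefinite) kernel for θ_m ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 5))

def rbf_kernel(X, gamma):
    # K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

basis = [rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)]  # illustrative bandwidths
theta = np.array([0.2, 0.5, 0.3])                      # theta_m >= 0

K_theta = sum(t * Km for t, Km in zip(theta, basis))   # equation (2) on kernel matrices

print(np.allclose(K_theta, K_theta.T))
print(np.linalg.eigvalsh(K_theta).min() >= -1e-8)
```

A nonnegative combination of PSD matrices is PSD, so any θ with θ_m ≥ 0 yields a valid kernel; since each Gaussian basis kernel has unit diagonal, the diagonal of K(θ) here equals Σ_m θ_m.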
A natural question is whether, for different kernels, the margins of the SVM classifiers can serve as a good measure of the goodness of the kernels.

To answer this question, we consider what happens when a kernel k is enlarged by a scalar a: k_new = a k, where a > 1. The corresponding transformations satisfy φ(·; k_new) = √a φ(·; k). For k, let {w_1^*, b_1^*} denote the optimal solution of (5). For k_new, we set w_2 = w_1^*/√a and b_2 = b_1^*; then ∥w_2∥² = ∥w_1^*∥²/a, and f(x; w_2, b_2, k_new) and f(x; w_1^*, b_1^*, k) are the same classifier, resulting in the same ξ_i. We then obtain
\[
\tilde{G}(ak) = \tilde{G}(k_{\text{new}}) \le \tfrac{1}{2}\|w_2\|^2 + C \sum_i \xi_i < \tfrac{1}{2}\|w_1^*\|^2 + C \sum_i \xi_i = \tilde{G}(k),
\]
which means the enlarged kernel gives a larger margin and a smaller objective value. As a consequence, on one hand, the large margin preference drives the scaling of the learned kernel to be as large as possible. On the other hand, any kernel, even one resulting in a bad performance, can give an arbitrarily large margin by enlarging its scaling. We call this the scaling problem. It shows that the margin is not a suitable measure of the goodness of a kernel. In the linear combination case, the scaling problem causes the kernel parameter θ not to converge during optimization. A remedy is to use a norm constraint on θ. However, it has been shown in recent literature [7, 9] that different types of norm constraints fit different data sets, so users face the difficulty of choosing a suitable norm constraint. Even after a norm constraint is selected, the scaling problem causes another problem, concerning initialization. Consider an L1 norm constraint and a learned kernel which is a combination of two basis kernels,
\[
k^{(\theta)}(\cdot, \cdot) = \theta_1 k_1(\cdot, \cdot) + \theta_2 k_2(\cdot, \cdot), \quad \theta_1, \theta_2 \ge 0, \; \theta_1 + \theta_2 = 1. \qquad (6)
\]
To leave the empirical loss out of consideration, assume: (a) both k_1 and k_2 can lead to zero empirical loss; (b) k_1 results in a larger margin than k_2.
For simplicity, we further restrict θ_1 and θ_2 to be equal to 0 or 1, to enable kernel selection. The MKL formulation (3), of course, will choose k_1 from {k_1, k_2} due to the large margin preference. Now set k_1^new(·, ·) = a k_1(·, ·), where a is a small scalar chosen so that k_1^new has a smaller margin than k_2. After k_1^new is substituted for k_1, the MKL formulation (3) will select k_2 from {k_1^new, k_2}. The example shows that the final solution can be greatly affected by the initial scalings of the basis kernels, even though a norm constraint is used. We call this the initialization problem. When the MKL framework is extended from linear combinations to nonlinear ones, the scaling problem becomes more serious, as for some kernel forms even a finite scaling of the learned kernel may not be guaranteed by a simple norm constraint on the kernel parameters. These problems indicate that the margin by itself is not enough to measure the goodness of kernels.

2.2 Measuring the goodness of kernels with the radiuses of the MEB

We now need a more reasonable way to measure the goodness of kernels. Below we introduce the generalization error bounds for SVM and kernel learning, which inspire us to consider the minimum enclosing ball when learning a kernel. For SVM with a fixed kernel, it is well known that the estimation error, which denotes the gap between the expected error and the empirical error, is bounded by $\sqrt{O(R^2\gamma^{-2})/n}$, where R is the radius of the minimum enclosing ball (MEB) of the data in the feature space endowed with the kernel used.
For SVM with a kernel learned from a kernel family K, if we restrict the radius of the minimum enclosing ball in the feature space endowed with the learned kernel to be no larger than R, then the theoretical results of Srebro and Ben-David [14] say: for any fixed margin γ > 0 and any fixed radius R > 0, with probability at least 1 − δ over a training set of size n, the estimation error is no larger than
\[
\sqrt{\frac{8}{n}\Big(2 + d_\phi \log \frac{128 e n^3 R^2}{\gamma^2 d_\phi} + 256 \frac{R^2}{\gamma^2} \log \frac{e n \gamma}{8R} \log \frac{128 n R^2}{\gamma^2} - \log \delta\Big)}.
\]
The scalar d_φ denotes the pseudodimension [14] of the kernel family K. For example, d_φ of linear combination kernels is no larger than the number of basis kernels, and d_φ of Gaussian kernels of the form k^{(θ)}(x_a, x_b) = e^{−θ∥x_a−x_b∥²} is no larger than 1 (see [14] for more details). The above results clearly state that the generalization error bounds for SVM, with both fixed and learned kernels, depend on the ratio between the margin γ and the radius R of the minimum enclosing ball of the data. Although some newer generalization bounds for kernel learning, like [15], give different types of dependence on d_φ, they also rely on the margin-to-radius ratio. In SVM with a fixed kernel, the radius R is a constant, and we can safely minimize ∥w∥² (as well as the empirical loss). However, in kernel learning, the radius R changes drastically from one kernel to another. (An example is given in the supplemental materials: when we uniformly combine p basis kernels by k_unif = Σ_{j=1}^p (1/p) k_j, the squared radius becomes only 1/p of the squared radius of each basis kernel.) Thus we should also take the radius into account. As a result, we use the ratio between the margin γ and the radius R to measure how good a kernel is for kernel learning. Given any kernel k, the radius of the minimum enclosing ball, denoted by R(k), can be obtained by
\[
R^2(k) = \min_{y, c} \; y, \quad \text{s.t. } y \ge \|\phi(x_i; k) - c\|^2 \;\; \forall i.
\]
(7)
This problem is a convex minimization problem, equivalent to its dual
\[
R^2(k) = \max_{\beta_i} \; \sum_i \beta_i k(x_i, x_i) - \sum_{i,j} \beta_i k(x_i, x_j) \beta_j, \quad \text{s.t. } \sum_i \beta_i = 1, \; \beta_i \ge 0, \qquad (8)
\]
which reveals a property of R²(k): for any kernel k and any scalar a > 0, we have R²(ak) = a R²(k).

3 Learning kernels with the radiuses

Considering the ratio between the margin and the radius of the MEB, we propose a new formulation:
\[
\min_{k, w, b, \xi_i} \; \tfrac{1}{2} R^2(k) \|w\|^2 + C \sum_i \xi_i, \quad \text{s.t. } y_i (\langle \phi(x_i; k), w \rangle + b) + \xi_i \ge 1, \; \xi_i \ge 0, \qquad (9)
\]
where R²(k)∥w∥² is a radius-based regularizer that prefers a large ratio between the margin and the radius, and Σ_i ξ_i is the hinge loss, an upper bound on the empirical misclassification error. This optimization problem is called the radius-based kernel learning problem, referred to as RKL. Chapelle et al. [1] also utilize the radius of the MEB to tune kernel parameters for hard margin SVM. Our formulation (9) is equivalent to theirs if ξ_i is restricted to be zero. To give a soft margin version, they modify the kernel matrix to K(θ) + (1/C) I, resulting in a formulation equivalent to
\[
\min_{\theta, w, b, \xi_i} \; \tfrac{1}{2} R^2(k^{(\theta)}) \|w\|^2 + C R^2(k^{(\theta)}) \sum_i \xi_i^2, \quad \text{s.t. } y_i (\langle \phi(x_i; k^{(\theta)}), w \rangle + b) + \xi_i \ge 1, \; \xi_i \ge 0. \qquad (10)
\]
The factor R²(k^{(θ)}) in the second term, which may become small, means that minimizing the objective function cannot reliably give a small empirical loss, even when C is large. Besides, when we reduce the scaling of a kernel by multiplying it by a small scalar a and substitute w̃ = w/√a for w to keep the same ξ_i, the objective function always decreases (due to the decrease of R² in the empirical loss term), still leading to scaling problems. Do et al. [16] recently propose to learn a linear kernel combination, as defined in (2), through
\[
\min_{\theta, w_j, b, \xi_i} \; \tfrac{1}{2} \sum_j \frac{\|w_j\|^2}{\theta_j} + C \sum_j \theta_j R^2(k_j) \sum_i \xi_i^2, \quad \text{s.t. } y_i \Big(\sum_j \langle w_j, \phi(x_i; k_j) \rangle + b\Big) + \xi_i \ge 1, \; \xi_i \ge 0. \qquad (11)
\]
Their objective function can also always be decreased by multiplying θ by a large scalar.
Thus their method does not address the scaling problem, and it also suffers from the initialization problem. If we initially adjust the scalings of the basis kernels to make all R(k_j) equal to each other, their formulation becomes equivalent to the margin-based formulation (3). Different from the above formulations, our formulation (9) is invariant to scalings of kernels.

3.1 Invariance to scalings of kernels

We now discuss the properties of formulation (9). The RKL problem can be reformulated as
\[
\min_k \; G(k), \qquad (12)
\]
where
\[
G(k) = \min_{w, b, \xi_i} \; \tfrac{1}{2} R^2(k) \|w\|^2 + C \sum_i \xi_i, \quad \text{s.t. } y_i (\langle \phi(x_i; k), w \rangle + b) + \xi_i \ge 1, \; \xi_i \ge 0. \qquad (13)
\]
The functional G(k) defines a measure of the goodness of kernel functions, which trades off the margin-to-radius ratio against the empirical loss. This functional is invariant to the scaling of k, as stated by the following proposition.

Proposition 1. For any kernel k and any scalar a > 0, the equation G(ak) = G(k) holds.

Proof. For the scaled kernel ak, the equation R²(ak) = a R²(k) holds. Thereby we get
\[
G(ak) = \min_{w, b, \xi_i} \; \tfrac{a}{2} R^2(k) \|w\|^2 + C \sum_i \xi_i, \quad \text{s.t. } y_i (\langle \sqrt{a}\,\phi(x_i; k), w \rangle + b) + \xi_i \ge 1, \; \xi_i \ge 0. \qquad (14)
\]
Substituting w = w̃/√a in (14) makes (14) equivalent to (13). Thus G(ak) = G(k).

For a parametric kernel form k^{(θ)}, the RKL problem reduces to minimizing the function g(θ) := G(k^{(θ)}). Here we temporarily focus on the linear combination case defined by (2), and use g_linear(θ) to denote g(θ) in this case. Due to the scaling invariance, for any θ and any a > 0, we have g_linear(aθ) = g_linear(θ). This makes the problem of minimizing g_linear(θ) invariant to the types of norm constraints on θ, as stated in the following.

Proposition 2. Given any norm definition N(·) and any set S ⊆ R, suppose there exists c > 0 with c ∈ S. Let (a) denote the problem of minimizing g_linear(θ) s.t. θ_i ≥ 0, and let (b) denote the problem of minimizing g_linear(θ) s.t. θ_i ≥ 0 and N(θ) ∈ S.
Then we have: (1) For any local (global) optimal solution of (a), denoted by θ_a, the point (c/N(θ_a)) θ_a is also a local (global) optimal solution of (b). (2) Any local (global) optimal solution of (b), denoted by θ_b, is also a local (global) optimal solution of (a).

Proof. The complete proof is given in the supplemental materials. Here we only prove the equivalence of the global optimal solutions of (a) and (b). On one hand, if θ_a is the global optimal solution of (a), then for any θ that satisfies θ_i ≥ 0 and N(θ) ∈ S, we have g_linear((c/N(θ_a)) θ_a) = g_linear(θ_a) ≤ g_linear(θ). Since N((c/N(θ_a)) θ_a) = c ∈ S, the point (c/N(θ_a)) θ_a also satisfies the constraint of (b), and thus it is the global optimal solution of (b). On the other hand, for any θ (θ_i ≥ 0), g_linear((c/N(θ)) θ) = g_linear(θ) due to the scaling invariance. If θ_b is the global optimal solution of (b), then for any θ (θ_i ≥ 0), as (c/N(θ)) θ satisfies the constraint of (b), we have g_linear(θ_b) ≤ g_linear((c/N(θ)) θ), giving g_linear(θ_b) ≤ g_linear(θ). Thus θ_b is the global optimal solution of (a).

As the problems of minimizing g_linear(θ) under different types of norm constraints on θ are all equivalent to the same problem without any norm constraint, they are equivalent to each other. Based on the above proposition, we can also draw another conclusion: in the linear combination case the minimization problem (12) is also invariant to the initial scalings of the basis kernels (see below).

Proposition 3. Let k_j denote basis kernels, and a_j > 0 be initial scaling coefficients of the basis kernels. Consider a norm constraint N(θ) ∈ S, with the same definitions as in Proposition 2. Let (a) denote the problem of minimizing G(Σ_j θ_j k_j) w.r.t. θ s.t. θ_i ≥ 0 and N(θ) ∈ S, and let (b) denote the problem with different initial scalings: minimizing G(Σ_j θ_j a_j k_j) w.r.t. θ s.t. θ_i ≥ 0 and N(θ) ∈ S. Then: (1) Problems (a) and (b) have the same local and global optima.
(2) For any local (global) optimal solution of (b), denoted by θ_b, the point [c a_j θ_j^b / N([a_t θ_t^b]_t)]_j is also a local (global) optimal solution of (a).

Proof. By Proposition 2, problem (b) is equivalent to the one without any norm constraint: minimizing G(Σ_j θ_j a_j k_j) w.r.t. θ s.t. θ_i ≥ 0, which we denote by problem (c). Letting θ̃_j = a_j θ_j, problem (c) is equivalent to the problem of minimizing G(Σ_j θ̃_j k_j) w.r.t. θ̃ s.t. θ̃_i ≥ 0, which we denote by problem (d) (the local and global optimal solutions of problems (c) and (d) are in one-to-one correspondence via the simple transform θ̃_j = a_j θ_j). Again by Proposition 2, problem (d) is equivalent to the one with the constraint N(θ) ∈ S, which is indeed problem (a). This gives conclusion (1). By proper transformations of the optimal solutions of these equivalent problems, we get conclusion (2).

Note that in Proposition 3 the optimal solutions of problems (a) and (b), which have different initial scalings of the basis kernels, actually result in the same kernel combinations up to scaling. As shown in the above three propositions, our proposed formulation not only completely addresses the scaling and initialization problems, but is also insensitive to the types of norm constraints used.

3.2 Reformulation as a tri-level optimization problem

The remaining task is to optimize the RKL problem (12). Given a parametric kernel form k^{(θ)}, for any parameter θ, to obtain the value of the objective function g(θ) = G(k^{(θ)}) in (12) we need to solve the SVM-like problem in (13), which is a convex minimization problem and can be solved via its dual. Indeed, the whole RKL problem is transformed into a tri-level optimization problem:
\[
\min_\theta \; g(\theta), \qquad (15)
\]
where
\[
g(\theta) = \max_{\alpha_i} \; \sum_i \alpha_i - \frac{1}{2 r^2(\theta)} \sum_{i,j} \alpha_i \alpha_j y_i y_j K_{i,j}(\theta), \quad \text{s.t. } \sum_i \alpha_i y_i = 0, \; 0 \le \alpha_i \le C, \qquad (16)
\]
and
\[
r^2(\theta) = \max_{\beta_i} \; \sum_i \beta_i K_{i,i}(\theta) - \sum_{i,j} \beta_i K_{i,j}(\theta) \beta_j, \quad \text{s.t. } \sum_i \beta_i = 1, \; \beta_i \ge 0. \qquad (17)
\]
The notation K(θ) denotes the kernel matrix [k^{(θ)}(x_i, x_j)]_{i,j}.
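The inner MEB dual (17) is a concave quadratic programme over the probability simplex, so even a simple Frank-Wolfe loop suffices to evaluate the squared radius. The sketch below (plain NumPy; not the solver used in the paper) computes it on a toy two-point example, where the exact answer is 1, and numerically checks the scaling property R²(ak) = a R²(k):

```python
import numpy as np

def meb_radius_sq(K, iters=4000):
    """Squared MEB radius from the dual: max over the simplex of beta.diag(K) - beta K beta."""
    n = K.shape[0]
    beta = np.full(n, 1.0 / n)
    d = np.diag(K)
    for t in range(iters):
        grad = d - 2.0 * K @ beta        # gradient of the concave dual objective
        j = int(np.argmax(grad))          # best vertex of the simplex
        step = 2.0 / (t + 3.0)            # standard Frank-Wolfe step size
        beta *= 1.0 - step
        beta[j] += step
    return float(beta @ d - beta @ K @ beta)

# Two points 0 and 2 on the line, linear kernel: the enclosing ball has radius 1.
X = np.array([[0.0], [2.0]])
K = X @ X.T
r2 = meb_radius_sq(K)
print(r2)                    # close to 1.0

# Scaling property R^2(a k) = a R^2(k):
print(meb_radius_sq(5.0 * K))  # close to 5 * r2
```

Frank-Wolfe converges at rate O(1/t) on this problem, which is ample for checking the analytic properties; a production solver would of course use a proper QP method.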
The above formulations show that, for any given θ, computing the value g(θ) requires solving a bi-level optimization problem. First, solve the MEB dual problem (17) to obtain the optimal value r²(θ) and the optimal solution, denoted by β*_i. Then, substitute r²(θ) into the objective function of the SVM dual problem (16) and solve it to obtain the value of g(θ), as well as the optimal solution of (16), denoted by α*_i. Unlike in other kernel learning approaches, here the optimization of the SVM dual problem relies on another optimal value function r²(θ), which makes the RKL problem more challenging. If g(θ), the objective function of the top-level optimization, is differentiable and we can compute its derivatives, then we can use a variety of gradient-based methods to solve the RKL problem. In the next section we therefore study the differentiability of a general family of multilevel optimization problems.

4 Differentiability of the multilevel optimization problem

Danskin's theorem [17] establishes the differentiability of the optimal value of a single-level optimization problem, and has been applied in many MKL algorithms, e.g., [5, 12]. Unfortunately, it is not directly applicable to the optimal value of a multilevel optimization problem. Below we generalize Danskin's theorem and give new results for the multilevel case. Let Y be a metric space, and let X, U and Z be normed spaces. Suppose: (1) the function g1(x, u, z) is continuous on X × U × Z; (2) for all x ∈ X the function g1(x, ·, ·) is continuously differentiable; (3) the function g2(y, x, u) (g2 : Y × X × U → Z) is continuous on Y × X × U; (4) for all y ∈ Y the function g2(y, ·, ·) is continuously differentiable; (5) the sets ΦX ⊆ X and ΦY ⊆ Y are compact. With this notation, we propose the following theorem about bi-level optimal value functions. Theorem 1.
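The bi-level evaluation of g(θ), and the scale invariance it produces, can be checked numerically by brute force on a tiny problem. The sketch below is illustrative only: the 3-point data set, labels, C, and the grid search that stands in for the actual MEB and SVM solvers are all invented.

```python
import itertools

# Toy 3-point problem: 1-D inputs, linear kernel K_ij = x_i * x_j (made up).
X = [1.0, 2.0, -1.0]
Y = [1, 1, -1]
C, STEPS = 1.0, 60

def kernel(scale):
    return [[scale * a * b for b in X] for a in X]

def meb_r2(K):
    """MEB dual (17) by grid search: max over the simplex of sum_i b_i K_ii - b'Kb."""
    best = float("-inf")
    for i, j in itertools.product(range(STEPS + 1), repeat=2):
        b = [i / STEPS, j / STEPS]
        if b[0] + b[1] > 1.0:
            continue
        b.append(1.0 - b[0] - b[1])      # third simplex coordinate
        lin = sum(b[k] * K[k][k] for k in range(3))
        quad = sum(b[k] * K[k][l] * b[l] for k in range(3) for l in range(3))
        best = max(best, lin - quad)
    return best

def svm_value(K, denom):
    """SVM dual (16) by grid search, quadratic term weighted by 1/(2*denom)."""
    best = float("-inf")
    for i, j in itertools.product(range(STEPS + 1), repeat=2):
        a = [C * i / STEPS, C * j / STEPS, 0.0]
        a[2] = a[0] + a[1]               # enforce sum_i alpha_i y_i = 0
        if a[2] > C:
            continue
        lin = sum(a)
        quad = sum(a[k] * a[l] * Y[k] * Y[l] * K[k][l]
                   for k in range(3) for l in range(3))
        best = max(best, lin - quad / (2.0 * denom))
    return best

def g(K):                                # the radius-margin objective g(theta)
    return svm_value(K, meb_r2(K))

g1, g5 = g(kernel(1.0)), g(kernel(5.0))  # equal: g is invariant under K -> 5K
m1, m5 = svm_value(kernel(1.0), 1.0), svm_value(kernel(5.0), 1.0)  # margin-only: not invariant
print(g1, g5, m1, m5)
```

On this toy problem the radius-normalized values coincide under the rescaling, while the margin-only SVM dual values change, which is exactly the behavior the propositions above describe.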
Let us define a bi-level optimal value function as

v1(u) = inf_{x∈ΦX} g1(x, u, v2(x, u)), (18)

where v2(x, u) is another optimal value function,

v2(x, u) = inf_{y∈ΦY} g2(y, x, u). (19)

If for any x and u, g2(·, x, u) has a unique minimizer y*(x, u) over ΦY, then y*(x, u) is continuous on X × U, and v1(u) is directionally differentiable. Furthermore, if for any u, g1(·, u, v2(·, u)) also has a unique minimizer x*(u) over ΦX, then 1. the minimizer x*(u) is continuous on U, 2. v1(u) is continuously differentiable, and its derivative equals

dv1(u)/du = [ ∂g1(x*, u, v2)/∂u + (∂v2(x*, u)/∂u)·(∂g1(x*, u, v2)/∂v2) ]|_{v2=v2(x*,u)}, where ∂v2(x*, u)/∂u = ∂g2(y*, x*, u)/∂u. (20)

The proof is given in the supplemental materials. To apply Theorem 1 to the objective function g(θ) of the RKL problem (15), we must make sure that the following two conditions are satisfied. First, both the MEB dual problem (17) and the SVM dual problem (16) must have unique optimal solutions. This is guaranteed when the kernel matrix K(θ) is strictly positive definite. Second, the kernel matrix K(θ) must be continuously differentiable in θ. Both conditions are met in the linear combination case when each basis kernel matrix is strictly positive definite, and can also be easily satisfied in nonlinear cases, as in [11, 12]. If these two conditions hold, then g(θ) is continuously differentiable and

dg(θ)/dθ = −(1/(2r²(θ))) Σ_{i,j} α*_i α*_j y_i y_j dK_{i,j}(θ)/dθ + (1/(2r⁴(θ))) Σ_{i,j} α*_i α*_j y_i y_j K_{i,j}(θ) · dr²(θ)/dθ, (21)

where α*_i is the optimal solution of the SVM dual problem (16), and

dr²(θ)/dθ = Σ_i β*_i dK_{i,i}(θ)/dθ − Σ_{i,j} β*_i (dK_{i,j}(θ)/dθ) β*_j, (22)

where β*_i is the optimal solution of the MEB dual problem (17). The above equations require the value of dK_{i,j}(θ)/dθ. It depends on the specific form of the parametric kernel, and deriving it is straightforward. For example, for the linear combination kernel K_{i,j}(θ) = Σ_m θ_m K^m_{i,j}, we have ∂K_{i,j}(θ)/∂θ_m = K^m_{i,j}.
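The derivative formula (22) can be sanity-checked against a finite difference. The sketch below uses a two-point toy problem, for which the MEB dual has the closed-form solution β* = (1/2, 1/2) (our derivation, valid whenever K11 + K22 − 2K12 > 0, i.e. the two points are distinct in feature space); the two basis kernel matrices are invented PSD matrices, not from the paper.

```python
# Two made-up positive-definite basis kernel matrices on two points.
K1 = [[1.0, 0.2], [0.2, 2.0]]
K2 = [[1.5, 0.5], [0.5, 1.0]]

def K(t):
    """Linear combination kernel K(theta) = theta_1*K1 + theta_2*K2."""
    return [[t[0] * K1[i][j] + t[1] * K2[i][j] for j in range(2)] for i in range(2)]

def r2(t):
    # With beta* = (1/2, 1/2), the MEB dual value is (K11 - 2*K12 + K22)/4.
    M = K(t)
    return (M[0][0] - 2.0 * M[0][1] + M[1][1]) / 4.0

def dr2(t, m):
    # Formula (22): sum_i b_i dK_ii/dtheta_m - sum_ij b_i (dK_ij/dtheta_m) b_j,
    # where dK/dtheta_m = K_m for a linear combination of basis kernels.
    b = [0.5, 0.5]
    dK = K1 if m == 0 else K2
    lin = sum(b[i] * dK[i][i] for i in range(2))
    quad = sum(b[i] * dK[i][j] * b[j] for i in range(2) for j in range(2))
    return lin - quad

t, eps = [0.7, 0.3], 1e-6
checks = []
for m in range(2):
    tp = list(t)
    tp[m] += eps
    checks.append((dr2(t, m), (r2(tp) - r2(t)) / eps))
print(checks)   # each pair: (envelope formula (22), finite difference)
```

Because r²(θ) is linear in θ for a linear kernel combination, the finite difference recovers the envelope derivative essentially exactly here; the point of the exercise is that (22) uses only β* and the basis kernels, never differentiating through β* itself.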
For the Gaussian kernel K_{i,j}(θ) = e^{−θ∥x_i−x_j∥²}, we have dK_{i,j}(θ)/dθ = −K_{i,j}(θ)·∥x_i − x_j∥².

5 Algorithm

With the derivative of g(θ) available, we use the standard gradient projection approach with the Armijo rule [18] for selecting step sizes to solve the RKL problem. To compare with the most popular kernel learning algorithm, SimpleMKL [5], in the experiments we employ the linear combination kernel form with nonnegative combination coefficients, as defined in (2). In addition, we consider three types of norm constraints on the kernel parameters (combination coefficients): L1, L2, and no norm constraint. The L1 and L2 norm constraints are Σ_j θ_j = 1 and Σ_j θ_j² = 1, respectively. The projection under the L1 norm and nonnegativity constraints can be performed efficiently by the method of Duchi et al. [19]. The projection under nonnegativity constraints alone is accomplished by setting negative elements to zero. The projection under the L2 norm and nonnegativity constraints needs one more step after eliminating the negative values: normalize θ by multiplying it by ∥θ∥₂⁻¹. In our gradient projection algorithm, each evaluation of the objective function g(θ) requires solving an MEB problem (17) and an SVM problem (16), whereas the gradient calculation and projection steps have negligible time complexity compared to the MEB and SVM solvers. The MEB and SVM problems have similar forms of objective functions and constraints, and both can be solved efficiently by SMO algorithms. Moreover, the previous solutions α*_i and β*_i can be used as a "hot start" to accelerate the solvers. This is because the optimal solutions of the two problems are continuous in the kernel parameter θ, by Theorem 1; thus when θ moves a small step, the optimal solutions also change only a little.
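The two projection steps can be sketched in a few lines of pure Python. `project_simplex` is the standard sort-based Euclidean projection onto the probability simplex (Duchi et al. [19] also give a linear-time variant); `project_l2_nonneg` implements the two-step L2 projection described above. Function names and test vectors are ours, for illustration.

```python
def project_simplex(v):
    # Euclidean projection onto {theta : theta_i >= 0, sum_i theta_i = 1}
    # via the standard sort-and-threshold rule.
    u = sorted(v, reverse=True)
    css, thr = 0.0, 0.0
    for i, ui in enumerate(u, 1):        # find the largest valid threshold
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            thr = t
    return [max(x - thr, 0.0) for x in v]

def project_l2_nonneg(v):
    # L2-norm + nonnegativity: zero out negatives, then rescale to unit norm.
    w = [max(x, 0.0) for x in v]
    n = sum(x * x for x in w) ** 0.5
    return [x / n for x in w]

print(project_simplex([0.5, 0.2, -0.1]))   # nonnegative, sums to 1
print(project_l2_nonneg([3.0, -4.0]))      # [1.0, 0.0]
```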
In real experiments our approach usually reaches approximate convergence within one or two dozen invocations of the SVM and MEB solvers (for lack of space, examples of the convergence speed of our algorithm are shown in the supplemental materials). In the linear combination case, the RKL problem, like the radius-based formulation of Chapelle et al. [1], is not convex, and gradient-based methods only guarantee local optima. The following proposition states the nontrivial quality of local optimal solutions and their connection to related convex problems. Proposition 4. In the linear combination case, for any local optimal solution of the RKL problem, denoted by θ*, there exist C1 > 0 and C2 > 0 such that θ* is the global optimal solution of the following convex problem:

min_{θ, w_j, b, ξ_i} (1/2) Σ_j ∥w_j∥² + C1·r²(θ) + C2 Σ_i ξ_i², s.t. y_i(Σ_j ⟨w_j, φ(x_i; θ_j k_j)⟩ + b) + ξ_i ≥ 1, ξ_i ≥ 0. (23)

The proof can be found in the supplemental materials. The proposition also suggests another possible way to address the RKL problem: iteratively solve the convex problem (23) combined with a search over C1 and C2. However, it is difficult to find the exact values of C1 and C2 by a grid search, and even a rough search results in too high a computational load. Besides, such a method does not extend to nonlinear parametric kernel forms. In the experiments, we therefore demonstrate that the gradient-based approach gives satisfactory performance, significantly better than that of SVM with the uniform combination of basis kernels and of other kernel learning approaches.

6 Experiments

In this section, we illustrate the performance of the presented RKL approach, in comparison with SVM with the uniform combination of basis kernels (Unif), the margin-based MKL method using formulation (3) (MKL), and the kernel learning principle of Chapelle et al. [1] using formulation (10) (KL-C). The evaluation is made on eleven publicly available data sets from the UCI repository [20] and LIBSVM Data [21] (see Table 1).
All data sets have been normalized to zero mean and unit variance on every feature. The basis kernels are the same as in SimpleMKL [5]: 10 Gaussian kernels with bandwidths γG ∈ {0.5, 1, 2, 5, 7, 10, 12, 15, 17, 20} and 10 polynomial kernels of degree 1 to 10. All kernel matrices have been normalized to unit trace, as in [5, 7]. Note that although our RKL formulation is theoretically invariant to the initial scalings, the normalization is still applied in RKL to avoid numerical problems caused by large-valued kernel matrices in the SVM and MEB solvers. To show the impact of different norm constraints, we use three types of them: L1, L2, and no norm constraint. With no norm constraint, only RKL converges, so only its results are reported. The SVM toolbox used is LIBSVM [21]. MKL with the L1 norm constraint is solved by the code from SimpleMKL [5]. The other problems are solved by standard gradient-projection methods, where the calculation of the gradients of the MKL formulation (3) and of Chapelle's formulation (10) follows [5] and [1], respectively. The initial θ is set to (1/20)e, where e is the all-ones vector. The trade-off coefficient C in SVM, MKL, KL-C and RKL is automatically determined by 3-fold cross-validation on the training sets; in all methods, C is selected from the set S_coef = {0.01, 0.1, 1, 10, 100}. For each data set, we split the data into five parts, and each time use four parts as the training set and the remaining one as the test set. The average accuracies with standard deviations and the average numbers of selected basis kernels are reported in Table 1.

Table 1: Testing accuracies (Acc., with standard deviations in parentheses) and average numbers of selected basis kernels (Nk); each cell reads Acc.(std)/Nk. Columns: 1 Unif; 2 MKL (L1); 3 KL-C (L1); 4 Ours (L1); 5 MKL (L2); 6 KL-C (L2); 7 Ours (L2); 8 Ours (no norm constraint). We set the numbers of our method in bold if our method outperforms both Unif and the other two kernel learning approaches under the same norm constraint.

Ionosphere: 94.0(1.4)/20 | 92.9(1.6)/3.8 | 86.0(1.9)/4.0 | 95.7(0.9)/2.8 | 94.3(1.5)/20 | 84.4(1.6)/18 | 95.7(0.9)/3.0 | 95.7(0.9)/3.0
Splice: 51.7(0.1)/20 | 79.5(1.9)/1.0 | 80.5(1.9)/2.8 | 86.5(2.4)/3.2 | 82.0(2.2)/20 | 74.0(2.6)/14 | 86.5(2.4)/2.2 | 86.3(2.5)/3.2
Liver: 58.0(0.0)/20 | 59.1(1.4)/4.2 | 62.9(3.5)/4.0 | 64.1(4.2)/3.6 | 67.0(3.8)/20 | 64.1(3.9)/11 | 64.1(4.2)/8.0 | 64.3(4.3)/6.6
Fourclass: 81.2(1.9)/20 | 97.7(1.2)/7.0 | 94.0(1.2)/2.0 | 100(0.0)/1.0 | 97.3(1.6)/20 | 94.0(1.3)/17 | 100(0.0)/1.0 | 100(0.0)/1.6
Heart: 83.7(6.1)/20 | 84.1(5.7)/7.4 | 83.3(5.9)/1.8 | 84.1(5.7)/5.2 | 83.7(5.8)/20 | 83.3(5.1)/19 | 84.4(5.9)/5.4 | 84.8(5.0)/5.8
Germannum: 70.0(0.0)/20 | 70.0(0.0)/7.2 | 71.9(1.8)/9.8 | 73.7(1.6)/4.8 | 71.5(0.8)/20 | 71.6(2.1)/13 | 73.9(1.2)/6.0 | 73.9(1.8)/5.8
Musk1: 61.4(2.9)/20 | 85.5(2.9)/1.6 | 73.9(2.9)/2.0 | 93.3(2.3)/4.0 | 87.4(3.0)/20 | 61.9(3.1)/19 | 93.5(2.2)/3.8 | 93.3(2.3)/3.8
Wdbc: 94.4(1.8)/20 | 97.0(1.8)/1.2 | 97.4(2.3)/4.6 | 97.4(1.6)/6.2 | 96.8(1.6)/20 | 97.4(2.0)/11 | 97.6(1.9)/5.8 | 97.6(1.9)/5.8
Wpbc: 76.5(2.9)/20 | 76.5(2.9)/7.2 | 52.2(5.9)/9.6 | 76.5(2.9)/17 | 75.9(1.8)/20 | 51.0(6.6)/17 | 76.5(2.9)/15 | 76.5(2.9)/15
Sonar: 76.5(1.8)/20 | 82.3(5.6)/2.6 | 80.8(5.8)/7.4 | 86.0(2.6)/2.6 | 85.2(2.9)/20 | 80.2(5.9)/11 | 86.0(2.6)/2.6 | 86.0(3.3)/3.0
Coloncancer: 67.2(11)/20 | 82.6(8.5)/13 | 74.5(4.4)/11 | 84.2(4.2)/7.2 | 76.5(9.0)/20 | 76.0(3.6)/15 | 84.2(4.2)/5.6 | 84.2(4.2)/7.6

The results in Table 1 can be summarized as follows. (a) RKL gives the best results on most data sets. Under the L1 norm constraint, RKL (Index 4) outperforms all other methods (Index 1, 2, 3) on 8 out of 11 sets, and matches the best of the other methods on the remaining 3 sets. In particular, RKL gains 5 or more percentage points of accuracy on Splice, Liver and Musk1 over MKL, and more than 9 percentage points on four sets over KL-C. Under the L2 norm constraint, the results are similar: RKL (Index 7) outperforms the other methods (Index 5, 6) on 10 out of 11 sets, with only 1 inverse result.
(b) Both MKL and KL-C are sensitive to the type of norm constraint (compare Index 2 with 5, and 3 with 6). As shown in recent literature [7, 9], for the MKL formulation, different types of norm constraints suit different data sets. However, RKL outperforms MKL (as well as KL-C) under both L1 and L2 norm constraints on most sets. (c) RKL is invariant to the type of norm constraint: see Index 4, 7 and 8, where most accuracy numbers are identical. The few slight differences are likely due to numerical precision. (d) For MKL, the L1 norm constraint always results in sparse combinations, whereas the L2 norm constraint always gives non-sparse results (see Index 2 and 5). (e) Interestingly, our RKL gives sparse solutions on most sets, whatever type of norm constraint is used. As there usually are redundancies among the basis kernels, the search for good kernels and small empirical loss often leads directly to sparse solutions. We note that KL-C under the L2 norm constraint also slightly promotes sparsity (Index 6); compared to it, RKL provides not only higher performance but also more sparsity, which benefits both interpretability and computational efficiency at prediction time.

7 Conclusion

In this paper, we show that the margin term used in previous MKL formulations is not a suitable measure of the goodness of kernels, resulting in scaling and initialization problems. We propose a new formulation, called RKL, which uses the ratio between the margin and the radius of the MEB to learn kernels. We prove that our formulation is invariant to kernel scalings, as well as to the scalings of the basis kernels and to the type of norm constraint for the MKL problem. Then, by establishing the differentiability of a general family of multilevel optimal value functions, we propose a gradient-based algorithm to solve the RKL problem.
We also characterize the properties of the solutions of our algorithm. The experiments validate that our approach outperforms both SVM with the uniform combination of basis kernels and other state-of-the-art kernel learning methods.

Acknowledgments

The work is supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 60835002 and 61075004) and the National Basic Research Program (973 Program) (No. 2009CB320602).

References

[1] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1):131–159, 2002.
[2] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L.E. Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. The Journal of Machine Learning Research, 5:27–72, 2004.
[3] F.R. Bach, G.R.G. Lanckriet, and M.I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the twenty-first international conference on Machine learning (ICML 2004), 2004.
[4] S. Sonnenburg, G. Rätsch, and C. Schäfer. A general and efficient multiple kernel learning algorithm. In Adv. Neural. Inform. Process Syst. (NIPS 2005), 2006.
[5] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[6] O. Chapelle and A. Rakotomamonjy. Second order optimization of kernel parameters. In Proc. of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008.
[7] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K. Müller, and A. Zien. Efficient and Accurate lp-Norm Multiple Kernel Learning. In Adv. Neural. Inform. Process Syst. (NIPS 2009), 2009.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In Uncertainty in Artificial Intelligence, 2009.
[9] J. Saketha Nath, G. Dinesh, S. Raman, Chiranjib Bhattacharyya, Aharon Ben-Tal, and K. R. Ramakrishnan. On the algorithmics and applications of a mixed-norm based kernel learning formulation.
In Adv. Neural. Inform. Process Syst. (NIPS 2009), 2009.
[10] F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Adv. Neural. Inform. Process Syst. (NIPS 2008), 2008.
[11] M. Gönen and E. Alpaydin. Localized multiple kernel learning. In Proceedings of the 25th international conference on Machine learning (ICML 2008), 2008.
[12] M. Varma and B.R. Babu. More generality in efficient multiple kernel learning. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), 2009.
[13] C. Cortes, M. Mohri, and A. Rostamizadeh. Learning Non-Linear Combinations of Kernels. In Adv. Neural. Inform. Process Syst. (NIPS 2009), 2009.
[14] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In Proceedings of the International Conference on Learning Theory (COLT 2006), pages 169–183. Springer, 2006.
[15] Yiming Ying and Colin Campbell. Generalization bounds for learning the kernel. In Proceedings of the International Conference on Learning Theory (COLT 2009), 2009.
[16] H. Do, A. Kalousis, A. Woznica, and M. Hilario. Margin and Radius Based Multiple Kernel Learning. In Proceedings of the European Conference on Machine Learning (ECML 2009), 2009.
[17] J.M. Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, pages 641–664, 1966.
[18] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, September 1999.
[19] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th international conference on Machine learning (ICML 2008), 2008.
[20] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007. Software available at http://www.ics.uci.edu/~mlearn/MLRepository.html.
[21] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
2010
Sparse Instrumental Variables (SPIV) for Genome-Wide Studies

Felix V. Agakov, Public Health Sciences, University of Edinburgh, felixa@aivalley.com
Paul McKeigue, Public Health Sciences, University of Edinburgh, paul.mckeigue@ed.ac.uk
Jon Krohn, WTCHG, Oxford, jon.krohn@magd.ox.ac.uk
Amos Storkey, School of Informatics, University of Edinburgh, a.storkey@ed.ac.uk

Abstract

This paper describes a probabilistic framework for studying associations between multiple genotypes, biomarkers, and phenotypic traits in the presence of noise and unobserved confounders for large genetic studies. The framework builds on sparse linear methods developed for regression and modified here for inferring causal structures of richer networks with latent variables. The method is motivated by the use of genotypes as "instruments" to infer causal associations between phenotypic biomarkers and outcomes, without making the common restrictive assumptions of instrumental variable methods. The method may be used for an effective screening of potentially interesting genotype-phenotype and biomarker-phenotype associations in genome-wide studies, which may have important implications for validating biomarkers as possible proxy endpoints for early-stage clinical trials. Where the biomarkers are gene transcripts, the method can be used for fine mapping of quantitative trait loci (QTLs) detected in genetic linkage studies. The method is applied for examining effects of gene transcript levels in the liver on plasma HDL cholesterol levels for a sample of sequenced mice from a heterogeneous stock, with ~10^5 genetic instruments and ~47×10^3 gene transcripts.

1 Introduction

A problem common to both epidemiology and to systems biology is to infer causal relationships between phenotypic measurements (biomarkers) and disease outcomes or quantitative traits.
The problem is complicated by the fact that in large bio-medical studies, the number of possible genetic and environmental causes is very large, which makes it implausible to conduct exhaustive interventional experiments. Moreover, it is generally impossible to remove the confounding bias due to unmeasured latent variables which influence associations between biomarkers and outcomes. Also, in situations when the biomarkers are mRNA transcript levels, the measurements are known to be quite noisy; additionally, the number of unique candidate causes may exceed the number of observations by several orders of magnitude (the p ≫ n problem). A fundamentally important practical task is to reduce the number of possible causes of a trait to a much more manageable subset of candidates for controlled interventions. Developing an efficient framework for addressing this problem may be fundamental for overcoming bottlenecks in drug development, with possible applications in the validation of biomarkers as causal risk factors, or in developing proxies for clinical trials. Whether or not causation may be inferred from observational data has been a matter of philosophical debate. Pearl [28] argues that causal assumptions cannot be verified unless one makes a recourse to experimental control, and that there is nothing in the probability distribution p(x, y) which can tell whether a change in x may have an effect on y. Traditional discussions of causality are largely focused on the question of identifiability, i.e. determining sets of graph-theoretic conditions under which a post-intervention distribution p(y|do(x)) may be uniquely determined from a pre-intervention distribution p(y, x, z) [27, 4, 32]. If the causal effects are shown to be identifiable, their magnitudes can be obtained by statistical estimation, which for common models often reduces to solving systems of linear equations.
In contrast, from the Bayesian perspective, the causality detection problem may be viewed as that of model selection, where a model Mx→y is compared with My→x. The problem is complicated by the likelihood-equivalence, where for each setting of parameters of one model there may exist a setting of parameters of the other giving rise to the identical likelihoods. However, unless the priors are chosen in such a way that Mx→y and My→x also have identical posteriors, it may be possible to infer the direction of the arrow. The view that the priors of likelihood-equivalent models do not need to be set to ensure the equivalence of the posteriors is in contrast to e.g. [12] (and references therein), but has been defended by MacKay (see [21], Section 35). In this paper we are leaving aside debates about the nature of causality and focus instead on identifying a set of candidate causes for a large partially observed under-determined genetic problem. The approach builds on the instrumental variable methods that were historically used in epidemiological studies, and on approximate Bayesian inference in sparse linear latent variable models. Specific modeling hypotheses are tested by comparing approximate marginal likelihoods of the corresponding direct, reverse, and pleiotropic models with and without latent confounders, where we follow [21] in allowing for flexible priors. The approach is largely motivated by the observation that independent variables do not establish a causal relation, while strong unconfounded direct dependencies retained in the posterior modes even under large sparseness-inducing penalties may indicate potential causality and suggest candidates for further controlled experiments. 2 Previous work Inference of causal direction of x on y is to some extent simplified if we assume existence of an auxiliary variable g, such that g’s effect on x may only be causal, and g’s effect on y may only be through x. 
The idea is exploited in instrumental variable methods [3, 2, 29] which typically deal with low-dimensional linear models, where the strength of the causal effect may be estimated as wx→y = cov(g, y)/cov(g, x). Note also that the hypothesized cause-outcome models such as Mg→x→y and Mg→y→x are no longer Markov-equivalent, i.e. it may be possible to select an appropriate model via likelihood-based tests. Selecting a plausible instrument g may be difficult in some domains; however, in genetic studies it may be possible to exploit as an instrument a measure of genotypic variation. In quantitative genetics, such applications of instrumental variable methods have been termed Mendelian randomization [15, 34]. In accordance with the requirements of the classic instrumental variable methods, it is assumed that effects of the genetic instrument g on the biomarker x are unconfounded, and that effects of the instrument on the outcome y are mediated only through the biomarker (i.e. there is no pleiotropy) [17, 35]. The former assumption is grounded in the laws of Mendelian genetics and is satisfied as long as population stratification has been adequately controlled. However, the assumption of no hidden pleiotropy severely restricts the application of this approach, as most genotypic effects on complex traits are not sufficiently well understood to exclude pleiotropy as a possible explanation of an association. Thus the classical instrumental variable argument is limited to biomarkers for which suitable non-pleiotropic instruments exist, and cannot be easily extended to exploit studies with multiple biomarkers and genome-wide data. A more general approach to exploiting genotypic variation to infer causal relationships between gene transcript levels and quantitative traits has been developed by Schadt et. al. [30] and subsequently extended (see e.g. [5]). 
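The single-instrument ratio estimate w_{x→y} = cov(g, y)/cov(g, x) described above can be demonstrated on simulated data. The sketch below is illustrative only: the generative coefficients, the confounder strength, and the sample size are all invented, and `g` plays the role of a genotype that affects the outcome only through the biomarker.

```python
import random

random.seed(0)
n, w_true = 50000, 1.5
g = [random.choice([0, 1, 2])           # genotype "instrument": causal for x only
     for _ in range(n)]
u = [random.gauss(0.0, 1.0) for _ in range(n)]           # hidden confounder
x = [0.8 * gi + ui + random.gauss(0.0, 0.5)              # biomarker
     for gi, ui in zip(g, u)]
y = [w_true * xi + 2.0 * ui + random.gauss(0.0, 0.5)     # outcome, confounded by u
     for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)    # naive regression of y on x: biased upward by u
iv = cov(g, y) / cov(g, x)     # instrumental-variable estimate: close to w_true
print(ols, iv)
```

The naive regression coefficient absorbs the confounder's contribution, while the ratio estimate recovers the causal effect because the instrument is, by construction, independent of the confounder.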
They relax the assumption of no pleiotropy, but instead compare models with and without pleiotropy by computing standard likelihood-based scores. After filtering to select a set of gene transcripts {xj} that are associated with the trait y, and loci {gi} at which genotypes have effects on transcript levels xj, each possible triad of marker locus gi, transcript xj and trait y is evaluated to compare three possible models: causal effect of transcript on trait, reverse causation, and a pleiotropic model (see Figure 1 left, (i)–(iii)). The support for these three models is compared by a measure of model fit penalized by complexity: either Akaike's Information Criterion (AIC) [30], or the Bayesian Information Criterion (BIC) [5]. Schadt et al. [30] denote this procedure the "likelihood-based causality model selection" (LCMS) approach.

[Figure 1. Left: (i–iii): causal, reverse, and pleiotropic models of the LCMS approach [30]; (iv): pleiotropic model with two genetic instruments. Center: possible arbitrariness of LCMS inference; the histogram shows the difference of the AIC scores for the causal and reverse models for a fixed biomarker and outcome, and various choices of loci from predictive regions. Right: AIC scores of the causal (top) and reverse (bottom) models for each choice of instrument gi (the straight lines link the scores for a fixed choice of gi); scores were centered relative to those of the pleiotropic model. Biomarker and outcome are liver expressions of Cyp27b1 and plasma HDL measurements for heterogeneous mice. Based on the choice of gi, either causal or reverse explanations are favored.]

While the LCMS and related methods [30, 5] relax the assumption of no hidden pleiotropy of the classic Mendelian randomization method, they have three key limitations.
First, effects of loci and biomarkers on outcomes are not modeled jointly, so widely varying inferences are possible depending on the choice of the triads {gi, xj, y}. Figure 1 center, right compares differences in the AIC scores for the causal and reverse models constructed for a fixed biomarker and outcome, and for various choices of the genetic instruments from the predictive region. Depending on the choice of instrument gi, either causal or reverse explanations are favored. A second key limitation is that the LCMS method does not allow for dependencies between multiple biomarkers, measurement noise, or latent variables (such as unobserved confounders of the biomarker-outcome associations). Thus, for instance, without allowance for noise in the biomarker measurements, non-zero conditional mutual information I(gi, y|xj) will be interpreted as evidence of pleiotropy or reverse causation even when the relation between the underlying biomarker and outcome is causal. Also, the method is not Bayesian (the BIC score is only a crude approximation to the Bayesian procedure for model selection). One extension of the classic instrumental variable methods has been proposed by [4], who described graph-theoretic conditions which need to be satisfied in order for parameters of edges xi →y to be identifiable by solving a system of linear equations; however, they focus on the identifiability problem rather than on addressing a large practical under-determined task with latent variables. For example, their method does not allow for an easy integration of unmeasured confounders with unknown correlations with the intermediate and outcome variables. Another approach to modeling joint effects of genetic loci and biomarkers (gene expressions) was described by [41]. They modeled the expression measurements as three ordered levels, and used a biased greedy search over model structures from multiple starting points, to find models with high BIC scores. 
Though applicable for large-scale studies, the approach does not allow for measurement noise or latent variables (and loses information by using categorical measurements). The vast majority of other recent model selection and structure learning methods from the machine learning literature are also either not easily extended to include latent confounders (e.g. [16], [19], [22]), or applicable only to relatively low-dimensional problems with abundant data (e.g. [33] and references therein).

3 Methods

To address the problem of causal discovery in large bio-medical studies, we need a unified framework for modeling relations between genotypes, biomarkers, and outcomes that is computationally tractable enough to handle a large number of variables. Our approach extends LCMS and the instrumental variable methods by the joint modeling of effects of genetic loci and biomarkers, and by allowing for both pleiotropic genotypic effects and latent variables that generate couplings between biomarkers and confound the biomarker-outcome associations. It relies on Bayesian modeling of linear associations between the modeled variables, with sparseness-inducing priors on the linear weights.

[Figure 2. Left: SPIV structure; filled/clear nodes correspond to observed/latent variables. Right: log Bayes factor of Mx←z→y and Mx→y as a function of the empirical correlation ρ and γ1, for n = 100 observations, σ²_z = σ²_x = σ²_y = 1, |x| = |y| = |z| = 1 and γ2 = 0, on the log10 scale. For intermediate γ1's and high empirical correlations, there is a strong preference for the causal model.]

The
Bayesian framework allows prior biological information to be included if available: for instance, cis-acting genotypic effects on transcript levels are likely to be stronger and less pleiotropic than trans-acting effects on transcript levels. It also offers a rigorous approach to model comparison, and is particularly attractive for addressing under-determined genetics problems (p ≫n). The method builds on automatic relevance determination approaches (e.g. [20], [25], [37]) and adaptive shrinkage (e.g. [36], [8], [42]). Here it is used in the context of sparse multi-factor instrumental variable analysis in the presence of unobserved confounders, pleiotropy, and noise. Model Parameterization Our sparse instrumental variables model (SPIV) is specified with four classes of variables: genotypic and environmental covariates g ∈R|g|, phenotypic biomarkers x ∈R|x|, outcomes y ∈R|y|, and latent factors z1, . . . , z|z|. The dimensionality of the latent factors |z| is fixed at a moderately high value (extraneous dimensions will tend to be pruned under the sparse prior). The latent factors z play two major roles: they represent the shared structure between groups of biomarkers, and confound biomarker-outcome associations. The biomarkers x and outcomes y are specified as hidden variables inferred from noisy observations ˜x ∈R|˜x| and ˜y ∈R|˜y| (note that |˜x| = |x|, |˜y| = |y|). The effects of genotype on biomarkers and outcome are assumed to be unconfounded. Pleiotropic effects of genotype (effects on outcome that are not mediated through the phenotypic biomarkers) are accounted for by an explicit parameterization of p(y|g, x, z). Graphical representation of the model is shown on Figure 2 (left). It is clear that the SPIV structure extends that of the instrumental variable methods [2, 3, 29] by allowing for the pleiotropic links, and also extends the pleiotropic model of Schadt et. al. [30] (Figure 1 left (iii)) by allowing for multiple instruments and latent variables. 
All the likelihood terms of p(x, x̃, y, ỹ, z|g) are linear Gaussians with diagonal covariances:

x = U^T g + V^T z + e_x,   y = W^T x + W_z^T z + W_g^T g + e_y,   x̃ = Ax + e_x̃,   ỹ = y + e_ỹ,   (1)

where e_x ∼ N(0, Ψ_x), e_y ∼ N(0, Ψ_y), e_x̃ ∼ N(0, Ψ_x̃), e_ỹ ∼ N(0, Ψ_ỹ), z ∼ N(0, Ψ_z), and W ∈ R^{|x|×|y|}, W_z ∈ R^{|z|×|y|}, W_g ∈ R^{|g|×|y|}, V ∈ R^{|z|×|x|}, U ∈ R^{|g|×|x|} are regression coefficients (factor loadings); for clarity, we assume the data is centered. A ∈ R^{|x|×|x|} has a banded structure (accounting for possible couplings of the neighboring microarray measurements).

Prior Distribution

All model parameters are specified as random variables with prior distributions. For computational convenience, the variance components of the diagonal covariances Ψ_y, Ψ_ỹ, etc. are specified with inverse Gamma priors Γ^{−1}(a_i, b_i), with hyperparameters a_i and b_i fixed at values motivating the prior beliefs about the projection noise (often available to lab technicians collecting trait or biomarker measurements). One way to view the latent confounders z is as missing genotypes or environmental covariates, so that prior variances of the latent factors are peaked at values representative of the empirical variances of the instruments g. Empirically, the choice of priors on the variance components appears to be relatively unimportant, and other choices may be considered [9].

The considered choice of a sparseness-inducing prior on the parameters W, W_z, W_g, etc. is a product of zero-mean Laplace and zero-mean normal distributions:

p(w) ∝ ∏_{i=1}^{|w|} L_{w_i}(0, γ1) N_{w_i}(0, γ2),   (2)

where L_{w_i}(0, γ1) ∝ exp{−γ1 |w_i|} and N_{w_i}(0, γ2) ∝ exp{−γ2 w_i²}. Due to the heavy tails of the Laplacian L_{w_i}, the prior p(w) is flexible enough to capture large associations even if they are rare. Higher values of γ1 give a stronger tendency to shrink irrelevant weights to zero. It is possible to set different γ1 parameters for different linear weights (e.g.
for the cis- and trans-acting effects); however, for clarity of this presentation we shall only use a global parameter γ1. The isotropic Gaussian component with the inverse variance γ2 contributes to the grouping effect (see [42], Theorem 1). The considered family of priors (2) induces better consistency properties [40] than the commonly used Laplacians [36, 9, 39, 26, 31]. It has also been shown [14] that important associations between variables may be recovered even for severely under-determined problems (p ≫ n) common in genetics. The SPIV model with p(w) defined as in (2) generalizes LASSO and elastic net regression [36, 42]. As a special case, it also includes sparse conditional factor analysis. Other sparse priors on the weights, such as the Student-t, the "spike-and-slab", or priors inducing L_{q<1} penalties, tend to result in less tractable posteriors even for linear regression [10, 37, 8], which also motivates the choice (2). Some additional intuition about the influence of the sparse prior on causal inference may be gained by numerically comparing the marginal likelihoods of the Markov-equivalent models with and without confounders, M_{x←z→y} and M_{x→y}. (Comparison of these models is of particular importance in epidemiology: while temporal data may often be available for distinguishing the direct and reverse models M_{x→y} and M_{y→x}, it is generally difficult to ensure that there is no confounding.) Figure 2 shows that when the empirical correlations are strong and γ1 is at intermediate levels, there is a strong preference for the causal model. This is because the alternative model with the confounders has more parameters, and its weights need to be larger (and are therefore more strongly penalized by the prior) in order to reach the same likelihood (note that for var(x) = var(y) = 1, likelihood-equivalence is achieved for w = v w_z, |w| ≤ 1). Larger values of γ1 tend to strongly penalize all the weights, which makes the models largely indistinguishable.
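As an unnormalized density, the product prior (2) is straightforward to evaluate; the following sketch (the helper name is ours, NumPy assumed) computes its log up to an additive constant, which is all the MAP objective needs.

```python
import numpy as np

def log_sparse_prior(w, gamma1, gamma2):
    """Unnormalized log of the Laplace-Gaussian product prior (2):
    log p(w) = -gamma1 * sum_i |w_i| - gamma2 * sum_i w_i^2 + const.
    gamma1 drives sparsity; gamma2 drives the grouping effect."""
    w = np.asarray(w, dtype=float)
    return -gamma1 * np.sum(np.abs(w)) - gamma2 * np.sum(w ** 2)
```

With gamma2 = 0 the penalty reduces to the pure Laplace (LASSO) case; with gamma1 = 0 it reduces to the Gaussian (ridge) case, matching the elastic-net special cases noted in the text.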
Also, as the number of genetic instruments grows, evidence in favor of the causal or pleiotropic model becomes less dependent upon the priors on model parameters. For instance, with two genotypic variables that perturb a single transcript, the causal model has three adjustable parameters, while the pleiotropic model has five (see Figure 1 left, (iv)). Where several genotypic variables perturb a single transcript and the causal model fits the data nearly as well as the pleiotropic model, the causal model will have the higher marginal likelihood under almost any plausible prior, because the slightly better fit of the pleiotropic model will be outweighed by the penalty imposed by its several extra adjustable parameters.

Inference

While the choice of prior (2) encourages sparse solutions, it makes exact inference of the posterior parameters p(θ|D) analytically intractable. The most efficient approach is based on the maximum-a-posteriori (MAP) treatment ([36], [9]), which reduces to solving the optimization problem

θ_MAP = argmax_θ { log p({ỹ}, {x̃}|{g}, θ) + log p(θ) }   (3)

for the joint parameters θ, where the latent variables have been integrated out. Note that the MAP solution for SPIV may also be easily derived for the semi-supervised case where the biomarker and outcome vectors are only partially observed. Compared to other approximations of inference in sparse linear models, based e.g. on sampling or expectation propagation [26, 31], the MAP approximation allows for efficient handling of very large networks with multiple instruments and biomarkers, and makes it straightforward to incorporate latent confounders. Depending on the choice of the global sparseness and grouping hyperparameters γ1, γ2, the obtained solutions for the weights will tend to be sparse, which is also in contrast to the full inference methods. In high dimensions in particular, the parsimony induced by the point-estimates will facilitate structure discovery and interpretation of the findings.
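A stripped-down sketch of the objective in (3): we assume a single fully observed Gaussian regression with the latent terms already marginalized into the noise, so the penalized negative log-posterior is just a Gaussian fit plus the prior (2). Function and variable names are ours; the full SPIV objective sums such terms over all likelihood factors.

```python
import numpy as np

def neg_log_posterior(w, X, y, gamma1, gamma2, sigma2=1.0):
    """Negative MAP objective for one Gaussian regression factor:
    0.5 * ||y - X w||^2 / sigma2  (likelihood term)
    + gamma1 * sum|w_i| + gamma2 * sum w_i^2  (prior (2), up to a constant)."""
    resid = y - X @ w
    nll = 0.5 * np.sum(resid ** 2) / sigma2
    penalty = gamma1 * np.sum(np.abs(w)) + gamma2 * np.sum(w ** 2)
    return nll + penalty
```

Minimizing this over w with gamma2 = 0 recovers a LASSO problem, and with gamma1 = 0 a ridge problem, consistent with the text's remark that SPIV generalizes both.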
One way to optimize (3) is by an EM-like algorithm. For example, the fixed-point update for u_i ∈ R^{|g|}, linking biomarker x_i with the vector of instruments g, is easily expressed as

u_i^{(t)} = ( G^T G + σ²_{x_i} ( γ1 Ú_i^{(t−1)} + γ2 I_{|g|} ) )^{−1} ( G^T ⟨x_i⟩ − G^T ⟨Z⟩ v_i ),   (4)

where G ∈ R^{n×|g|} is the design matrix, (Ú_i)_{kl} = δ_{kl}/|u_{ki}| for all k, l ∈ [1, |g|] ∩ Z, x_i ∈ R^n, Z ∈ R^{n×|z|}, v_i ∈ R^{|z|}, and σ²_{x_i} = (Ψ_x)_{ii}. The expectations ⟨·⟩ are computed with respect to p(·|{x̃}, {ỹ}, {g}), which for (1) are easily expressed in closed form. The rest is expressed analogously, and extensions to the partially observed cases are straightforward.

Figure 3: Top: SPIV for artificial datasets. Left/right plots show typical applications for the high and low observation noise (σ²_x̃ = 0.25 and σ²_x̃ = 0.05 respectively). Top and bottom rows of each Hinton diagram correspond to the ground truth and the MAP weights U (1–18), W (19–21), W_g (22–27). Bottom: SPIV for a genome-wide study of causal effects on HDL in heterogeneous stock mice. Left/right plots show maximum a-posteriori weights θ_MAP and the mutual information I(x_i, y|e) between the unobserved biomarkers and the outcome evaluated from the model at θ_MAP, under the joint Gaussian assumption (right panel: "MI between biomarkers and HDL at θ_MAP, γ1 = 40.0, γ2 = 10.0", indexed by transcript names such as Rgs5, Uap1, Nr1i3, and Apoa2). A cluster of pleiotropic links on chromosome 1 at about 173 MBP is consistent with biology. The biomarker with the strongest unconfounded effect on HDL is Cyp27b1. Transcripts that are most predictive of HDL through their links with pleiotropic genetic markers on chrom 1 are Uap1, Rgs5, Apoa2, and Nr1i3. Parameters γ1,2 have been obtained by cross-validation.
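The update (4) is a reweighted ridge-style solve. A sketch for the fully observed case, where the expectations ⟨·⟩ reduce to the observed values (all names are ours, and the `eps` safeguard on the 1/|u| diagonal is an implementation detail the paper does not specify):

```python
import numpy as np

def fixed_point_update_u(u_prev, G, x_i, Z, v_i, sigma2_xi, gamma1, gamma2,
                         eps=1e-8):
    """One EM-like fixed-point step (4) for the instrument loadings u_i:
    u_i <- (G^T G + sigma2_xi * (gamma1 * U' + gamma2 * I))^{-1}
           (G^T x_i - G^T Z v_i),
    where U' is diagonal with entries 1/|u_k| from the previous iterate
    (this reweighting is what produces the Laplace/L1 shrinkage)."""
    p = G.shape[1]
    U_diag = np.diag(1.0 / np.maximum(np.abs(u_prev), eps))
    A = G.T @ G + sigma2_xi * (gamma1 * U_diag + gamma2 * np.eye(p))
    b = G.T @ x_i - G.T @ (Z @ v_i)
    return np.linalg.solve(A, b)
```

With gamma1 = gamma2 = 0 the step reduces to ordinary least squares on the confounder-adjusted residual, which is a useful sanity check.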
Faster (although more heuristic) alternatives may be used for speeding up the M-step (e.g. [7]). The hyperparameters may be set by cross-validation, marginalized out by specifying a hyper-prior, or set heuristically based on the expected number of links to be retained in the posterior mode. Once a sparse representation is produced by pruning irrelevant dimensions, more computationally-intensive inference methods for the full posterior (such as expectation propagation or MCMC) may be used in the resulting lower-dimensional model if needed. After fitting SPIV to data, formal hypothesis tests were performed by comparing the marginal likelihoods of the specific models for the retained instruments, biomarkers, and target outcomes. These were evaluated by the Laplace approximation at θ_MAP (e.g. [20]).

4 Results

Artificial data: We applied SPIV to several simulated datasets, and compared specific modeling hypotheses for the biomarkers retained in the posterior modes. The structures were consistent with the generic SPIV model, with all non-zero weights sampled from N(0, 1). Figure 3 (top) shows typical results for the high/low observation noise (∀i, σ²_x̃i = σ²_ỹ = 0.25/0.05). Note the excellent sign-consistency of the results for the more important factors. Separate simulations showed robustness under multiple EM runs and under- or over-estimation of the true number of confounders. Subsequent testing of the specific modeling hypotheses for the most important factors resulted in the correct discrimination of causal and confounded associations in ≈86% of cases.

Genome-wide study of HDL cholesterol in mice: To demonstrate our method in a large-scale practical application, we examined effects of gene transcript levels in the liver on plasma high-density lipoprotein (HDL) cholesterol levels for mice from a heterogeneous stock. The genetic factors influencing HDL in mice have been well explored in biology, e.g. by Valdar et al. [38].
The gene expression data was collected and preprocessed by [13], who have kindly agreed to share a part of their data. Breeding pairs for the stock were obtained at 50 generations after the stock foundation. At each of the 12,500 marker loci, genotypes were described by 8-D vectors of expected founder ancestry proportions inferred from the raw marker genotypes by an HMM-based reconstruction method [23]. Mouse-specific covariates included age and sex, which were used to augment the set of genetic instruments. The full set of phenotypic biomarkers consisted of 47,429 transcript levels, appropriately transformed and cleaned. Available data included 260 animals. Before applying our method, we decreased the dimensionality of the genetic features and RNA expressions by using a combination of seven feature (subset) selection methods, based on applications of filters, greedy (step-wise) regression, sequential approximations of the mutual information between the retained set and the outcome of interest, and applications of regression methods with LASSO and elastic net (EN) shrinkage priors for the genotypes g, observed biomarkers x̃, and observed HDL measurements ỹ. For the LASSO and EN methods, global hyper-parameters were obtained by 10-fold cross-validation. Note that feature selection is unavoidable for genome-wide studies using gene expressions as biomarkers. Indeed, the considered case of ∼O(10^5) instruments and 47K biomarkers would give rise to ≳O(10^9) interaction weights, which is expensive to analyze or even keep in memory. After applying subset selection methods, SPIV was typically applied to subsets of data with ∼O(10^5) loci-biomarker interactions. The results of the SPIV analysis of this dataset are shown in Figure 3 (bottom). The bottom left plot shows maximum a-posteriori weights θ_MAP computed by running the EM-like optimization procedure to convergence from 20 random initializations.
For a model with latent variables and about 30,000 weights, each run took approximately 10 minutes of execution time (only weakly optimized Matlab code, on a simple desktop). The parameters γ1,2 were obtained by 10-fold CV. Note that only a fraction of the variables remains in the posterior. In this case and for the considered sparseness-inducing priors, no hidden confounders appear to have strong effects on the outcome in the posterior¹. The spikes of the pleiotropic activations in sex chromosome 20 and around chromosome 1 are consistent with the biological knowledge [38]. The biomarker with the strongest direct effect on HDL (computed as the mean MAP weight w_i : x_i → y divided by its standard deviation over multiple runs, where each mean weight exceeds a threshold) is the expression of Cyp27b1 (a gene responsible for vitamin D metabolism). Knockout of the Cyp27b1 gene in mice has been shown to alter body fat stores [24], which might be expected to affect HDL cholesterol levels. Recently it has also been shown that the quantitative trait locus for circulating vitamin D levels in humans includes a gene that codes for the enzyme that synthesizes cholesterol [1]. A subsequent comparison of 18 specific reverse, pleiotropic, and causal models for Cyp27b1, HDL, and the whole vector of retained genetic instruments (known to be causal by definition) showed slightly stronger evidence in favor of the reverse hypothesis without latent confounders (with the ratio of Laplace approximations of the marginal likelihoods of reverse vs causal models of ≈1.95 ± 0.27). This is in contrast to the LCMS, where the results are strongly affected by the choice of an instrument (Figure 1 right shows the results for Cyp27b1, HDL, and the same choice of instruments).
To demonstrate an application to gene fine-mapping studies, Figure 3 (bottom right) shows the approximate mutual information I(x_i, y|e = {age, sex}) between the underlying biomarkers and unobserved HDL levels expressed from the model at θ_MAP. The mutual information takes into account not only the strength of the direct effect of x_i on y, but also associations with the pleiotropic instruments, strengths of the pleiotropic effects, and dependencies between the instruments. Under the as-if Gaussian assumption,

I(x_i, y_j|θ_MAP) = log(σ²_{y_j} σ²_{x_i}) − log(σ²_{y_j} σ²_{x_i} − σ⁴_{y_j x_i}),

where

σ²_{y_j} = ‖Σ_{gg}^{1/2} (U w_j + w_{g_j})‖² + ‖Ψ_z^{1/2} (V w_j + w_{z_j})‖² + w_j^T Ψ_x w_j + Ψ_{y_j},   (5)

with the rest expressed analogously. Here Σ_{gg} ∈ R^{|g|×|g|} is the empirical covariance of the instruments, and w_j ∈ R^{|x|}, w_{z_j} ∈ R^{|z|}, and w_{g_j} ∈ R^{|g|} are the MAP weights of the couplings of y_j with the biomarkers, confounders, and genetic instruments, respectively. When the outcome is HDL, the majority of predictive transcripts are fine-mapped to a small region on chromosome 1 which includes Uap1, Rgs5, Apoa2, and Nr1i3. The informativeness of these genes about HDL cholesterol cannot be inferred simply from correlations between the measured gene expression and HDL levels; for example, when ranked according to ρ²(x̃_i, ỹ|age, sex), the top 4 genes have the rankings of 838, 961, 6284, and 65, respectively.

¹No confounder effects in the posterior mode for the considered γ1,2 is specific to the considered mouse HDL dataset, which shows relatively strong correlations between the measured biomarkers and the outcome. An application of SPIV to proprietary human data for a study of effects of vitamins and calcium levels on colorectal cancer (which we are not yet allowed to publish) showed very strong effects of the latent confounders.
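The Gaussian mutual-information expression above is cheap to evaluate once the variances and covariance at θ_MAP are assembled. A minimal sketch (names are ours; note that the textbook Gaussian MI carries an extra factor 1/2 relative to the expression in the text, which does not affect the ranking of biomarkers):

```python
import math

def gaussian_mi(var_x, var_y, cov_xy):
    """Mutual information between two jointly Gaussian variables, as in
    the text: log(var_y * var_x) - log(var_y * var_x - cov_xy^2), where
    cov_xy^2 plays the role of sigma^4_{y x}. Zero covariance gives zero
    MI; MI grows monotonically with |cov_xy| up to the valid limit
    cov_xy^2 < var_x * var_y."""
    prod = var_x * var_y
    return math.log(prod) - math.log(prod - cov_xy ** 2)
```

Because the expression depends on the covariance only through its square, the induced ranking of biomarkers is driven by the implied correlation magnitude, not its sign, which is why it can disagree sharply with the raw ρ² rankings quoted above.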
The findings are also biologically plausible and consistent with high-profile biological literature (with associations between Apoa2 and HDL described in [38], and strong links of Rgs5 to a genomic region strongly associated with metabolic traits discussed in [5], while Nr1i3 and Uap1 are their neighbors on chromosome 1 within ∼1 Mbp). Note that the couplings are via the links with the pleiotropic genetic markers on chromosome 1. Adjusting for sex and age prior to performing feature selection and inference did not significantly change the results. The results reported here appear to be stable for different choices of feature selection methods, data adjustments, and algorithm runs. We note, however, that different results may potentially be obtained based on the choice of animal populations and/or the processing of the biomarker (gene expression) measurements. Details of the data collection, microarray preprocessing, and feature selection, along with the detailed findings for other biomarkers and phenotypic outcomes, will be made available online. Definitive confirmation of these relationships would require gene knock-out experiments.

5 Discussion and extensions

In large-scale genetic and bio-medical studies, we face the practical task of reducing a huge set of candidate causes of complex traits to a more manageable subset of candidates where experimental control (such as gene knockout experiments or biomarker alterations) may be performed. SPIV performs the screening of interesting biomarker-phenotype and genotype-biomarker-phenotype associations by exploiting maximum-a-posteriori inference in a sparse linear latent variable model. Additional screening is performed by comparing approximate marginal likelihoods of specific modeling hypotheses, including direct, reverse, and pleiotropic models with and without confounders, which (under the assumption of no "prior equivalence") may serve as an additional test of possible causation [21].
Intuitively, the approach is motivated by the observation that while independence of variables implies that they are not in a causal relation, a preference for an unconfounded causal model may indicate possible causality and warrant further controlled experiments. Technically, SPIV may be viewed as an extension of LASSO and elastic net regression which allows for latent variables and pleiotropic dependencies. While being particularly attractive for genetic studies, SPIV or its modifications may potentially be applied to more general structure learning tasks. For example, when applied iteratively, SPIV may be used to guide search over richer model structures, where a greedy search over parent nodes is replaced by a continuous optimization problem that combines subset selection and regression in the presence of latent variables. Other extensions of the framework could involve hybrid (discrete- and real-valued) outcomes with nonlinear/non-Gaussian likelihoods. Also, as mentioned earlier, once sparse representations are produced by the MAP inference, it may be possible to utilize more accurate approximations of the inference applicable for the induced sparse structures [6]. Also note that sparse priors on the linear weights tend to give rise to sparse covariance matrices. A potentially interesting alternative may involve a direct estimation of conditional precision matrices with a sparse group penalty. While SPIV attempts to focus attention on important biomarkers establishing strong direct associations with the phenotypes, modeling of the precisions may be used for filtering out unimportant factors (conditionally) independent of the outcome variables.
Our future work will involve a direct estimation of the sparse conditional precision matrix Σ^{−1}_{xyz|g} of the biomarkers, outcomes, and unmeasured confounders (given the instruments), through latent variable extensions of the recently proposed graphical LASSO and related methods [11, 18]. The key purpose of this paper is to draw the attention of the machine learning community to the problem of inferring causal relationships between phenotypic measurements and complex traits (disease risks), which may have tremendous implications in epidemiology and systems biology. Our specific approach to the problem is inspired by the ideas of instrumental variable analysis commonly used in epidemiological studies, which we have extended to properly address situations when the genetic variables may be direct causes of the hypothesized outcomes. The sparse instrumental variable framework (SPIV) overcomes limitations of the likelihood-based LCMS methods often used by geneticists, by modeling joint effects of genetic loci and biomarkers in the presence of noise and latent variables. The approach is tractable enough to be used in genetic studies with tens of thousands of variables. It may be used for identifying specific genes associated with phenotypic outcomes, and may have wide applications in identification of biomarkers as possible targets for interventions, or as proxy endpoints for early-stage clinical trials.

References

[1] J. Ahn, K. Yu, and R. Stolzenberg-Solomon et al. Genome-wide association study of circulating vitamin D levels. Human Molecular Genetics, 2010. Epub ahead of print.
[2] J. D. Angrist, G. W. Imbens, and D. B. Rubin. Identification of causal effects using instrumental variables (with discussion). J. of the Am. Stat. Assoc., 91:444–455, 1996.
[3] R. J. Bowden and D. A. Turkington. Instrumental Variables. Cambridge Uni Press, 1984.
[4] C. Brito and J. Pearl. Generalized instrumental variables. In UAI, 2002.
[5] Y. Chen, J. Zhu, and P. Y. Lum et al.
Variations in DNA elucidate molecular networks that cause disease. Nature, 452:429–435, 2008.
[6] B. Cseke and T. Heskes. Improving posterior marginal approximations in latent Gaussian models. In AISTATS, 2010.
[7] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Ann. of Stat., 32, 2004.
[8] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. J. of the Am. Stat. Assoc., 96(456):1348–1360, 2001.
[9] M. Figueiredo. Adaptive sparseness for supervised learning. IEEE Trans. on PAMI, 25(9), 2003.
[10] I. E. Frank and J. H. Friedman. A statistical view of some chemometrics regression tools. Technometrics, 35(2):109–135, 1993.
[11] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3), 2008.
[12] D. Heckerman, C. Meek, and G. F. Cooper. A Bayesian approach to causal discovery. In C. Glymour and G. F. Cooper, editors, Computation, Causation, and Discovery. MIT, 1999.
[13] G. J. Huang, S. Shifman, and W. Valdar et al. High resolution mapping of expression QTLs in heterogeneous stock mice in multiple tissues. Genome Research, 19(6):1133–40, 2009.
[14] J. Jia and B. Yu. On model selection consistency of the elastic net when p ≫ n. Technical Report 756, UC Berkeley, Department of Statistics, 2008.
[15] M. B. Katan. Apolipoprotein E isoforms, serum cholesterol and cancer. Lancet, i:507–508, 1986.
[16] S. Kim and E. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLOS Genetics, 5(8), 2009.
[17] D. A. Lawlor, R. M. Harbord, and J. Sterne et al. Mendelian randomization: using genes as instruments for making causal inferences in epidemiology. Stat. in Medicine, 27:1133–1163, 2008.
[18] E. Levina, A. Rothman, and J. Zhu. Sparse estimation of large covariance matrices via a nested lasso penalty. The Ann. of App. Stat., 2(1):245–263, 2008.
[19] M. H. Maathius, M. Kalisch, and P. Buhlmann. Estimating high-dimensional intervention effects from observational data. The Ann. of Stat., 37:3133–3164, 2009.
[20] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4:415–447, 1992.
[21] D. J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge Uni Press, 2003.
[22] J. Mooij, D. Janzing, J. Peters, and B. Schoelkopf. Regression by dependence minimization and its application to causal inference in additive noise models. In ICML, 2009.
[23] R. Mott, C. J. Talbot, M. G. Turri, A. C. Collins, and J. Flint. A method for fine mapping quantitative trait loci in outbred animal stocks. Proc. Nat. Acad. Sci. USA, 97:12649–12654, 2000.
[24] C. J. Narvaez and D. Matthews et al. Lean phenotype and resistance to diet-induced obesity in vitamin D receptor knockout mice correlates with induction of uncoupling protein-1. Endocrinology, 150(2), 2009.
[25] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996.
[26] T. Park and G. Casella. The Bayesian LASSO. J. of the Am. Stat. Assoc., 103(482), 2008.
[27] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge Uni Press, 2000.
[28] J. Pearl. Causal inference in statistics: an overview. Statistics Surveys, 3:96–146, 2009.
[29] J. M. Robins and S. Greenland. Identification of causal effects using instrumental variables: comment. J. of the Am. Stat. Assoc., 91:456–458, 1996.
[30] E. E. Schadt, J. Lamb, X. Yang, and J. Zhu et al. An integrative genomics approach to infer causal associations between gene expression and disease. Nature Genetics, 37(7):710–717, 2005.
[31] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. JMLR, 9, 2008.
[32] I. Shpitser and J. Pearl. Identification of conditional interventional distributions. In UAI, 2006.
[33] R. Silva, R. Scheines, C. Glymour, and P. Spirtes. Learning the structure of linear latent variable models. JMLR, 7, 2006.
[34] G. D. Smith and S. Ebrahim. Mendelian randomisation: can genetic epidemiology contribute to understanding environmental determinants of disease? Int. J. of Epidemiology, 32:1–22, 2003.
[35] D. C. Thomas and D. V. Conti. Commentary: The concept of Mendelian randomization. Int. J. of Epidemiology, 32, 2004.
[36] R. Tibshirani. Regression shrinkage and selection via the lasso. JRSS B, 58(1):267–288, 1996.
[37] M. E. Tipping. Sparse Bayesian learning and the RVM. JMLR, 1:211–244, 2001.
[38] W. Valdar, L. C. Solberg, and S. Burnett et al. Genome-wide genetic association of complex traits in heterogeneous stock mice. Nature Genetics, 38:879–887, 2006.
[39] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using L1-constrained quadratic programming. IEEE Trans. on Inf. Theory, 55:2183–2202, 2007.
[40] M. Yuan and Y. Lin. On the nonnegative garrote estimator. JRSS:B, 69, 2007.
[41] J. Zhu, M. C. Wiener, and C. Zhang et al. Increasing the power to detect causal associations by combining genotypic and expression data in segregating populations. PLOS Comp. Biol., 3(4):692–703, 2007.
[42] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. JRSS:B, 67(2), 2005.
Natural Policy Gradient Methods with Parameter-based Exploration for Control Tasks Atsushi Miyamae†‡, Yuichi Nagata†, Isao Ono†, Shigenobu Kobayashi† †: Department of Computational Intelligence and Systems Science Tokyo Institute of Technology, Kanagawa, Japan ‡: Research Fellow of the Japan Society for the Promotion of Science {miyamae@fe., nagata@fe., isao@, kobayasi@}dis.titech.ac.jp Abstract In this paper, we propose an efficient algorithm for estimating the natural policy gradient using parameter-based exploration; this algorithm samples directly in the parameter space. Unlike previous methods based on natural gradients, our algorithm calculates the natural policy gradient using the inverse of the exact Fisher information matrix. The computational cost of this algorithm is equal to that of conventional policy gradients whereas previous natural policy gradient methods have a prohibitive computational cost. Experimental results show that the proposed method outperforms several policy gradient methods. 1 Introduction Reinforcement learning can be used to handle policy search problems in unknown environments. Policy gradient methods [22, 20, 5] train parameterized stochastic policies by climbing the gradient of the average reward. The advantage of such methods is that one can easily deal with continuous state-action and continuing (not episodic) tasks. Policy gradient methods have thus been successfully applied to several practical tasks [11, 21, 16]. In the domain of control, a policy is often constructed with a controller and an exploration strategy. The controller is represented by a domain-appropriate pre-structured parametric function. The exploration strategy is required to seek the parameters of the controller. Instead of directly perturbing the parameters of the controller, conventional exploration strategies perturb the resulting control signal. 
However, a significant problem with such sampling strategies is that the high variance of their gradient estimates leads to slow convergence. Recently, parameter-based exploration [18] strategies that search the controller parameter space by direct parameter perturbation have been proposed, and these have been demonstrated to work more efficiently than conventional strategies [17, 18, 13]. Another approach to speeding up policy gradient methods is to replace the gradient with the natural gradient [2], the so-called natural policy gradient [9, 4, 15]; this is motivated by the intuition that a change in the policy parameterization should not influence the result of the policy update. The combination of parameter-based exploration strategies and the natural policy gradient is expected to result in improvements in the convergence rate; however, such an algorithm has not yet been proposed. However, natural policy gradients with parameter-based exploration strategies have a disadvantage in that the computational cost is high. The natural policy gradient requires the computation of the inverse of the Fisher information matrix (FIM) of the policy distribution; this is prohibitively expensive, especially for a high-dimensional policy. Unfortunately, parameter-based exploration strategies tend to have higher dimensions than control-based ones. Therefore, the expected method is difficult to apply to realistic control tasks.

In this paper, we propose a new reinforcement learning method that combines the natural policy gradient and parameter-based exploration. We derive an efficient algorithm for estimating the natural policy gradient with a particular exploration strategy implementation. Our algorithm calculates the natural policy gradient using the inverse of the exact FIM and the Monte Carlo-estimated gradient.
The resulting algorithm, called natural policy gradients with parameter-based exploration (NPGPE), has a computational cost similar to that of conventional policy gradient algorithms. Numerical experiments show that the proposed method outperforms several policy gradient methods, including the current state-of-the-art NAC [15] with control-based exploration.

2 Policy Search Framework

We consider the standard reinforcement learning framework in which an agent interacts with a Markov decision process. In this section, we review the estimation of policy gradients and describe the difference between control- and parameter-based exploration.

2.1 Markov Decision Process Notation

At each discrete time t, the agent observes state s_t ∈ S, selects action a_t ∈ A, and then receives an instantaneous reward r_t ∈ ℜ resulting from a state transition in the environment. The state space S and the action space A are both defined as continuous spaces in this paper. The next state s_{t+1} is chosen according to the transition probability p_T(s_{t+1}|s_t, a_t), and the reward r_t is given randomly according to the expectation R(s_t, a_t). The agent does not know p_T(s_{t+1}|s_t, a_t) and R(s_t, a_t) in advance. The objective of the reinforcement learning agent is to construct a policy that maximizes the agent's performance. A parameterized policy π(a|s, θ) is defined as a probability distribution over the action space under a given state with parameters θ. We assume that each θ ∈ ℜ^d has a unique well-defined stationary distribution p_D(s|θ). Under this assumption, a natural performance measure for infinite horizon tasks is the average reward

η(θ) = ∫_S p_D(s|θ) ∫_A π(a|s, θ) R(s, a) da ds.

2.2 Policy Gradients

Policy gradient methods update policies by estimating the gradient of the average reward w.r.t. the policy parameters. The state-action value is Q^θ(s, a) = E[ Σ_{t=1}^∞ (r_t − η(θ)) | s_1 = s, a_1 = a, θ ], and it is assumed that π(a|s, θ) is differentiable w.r.t. θ.
The exact gradient of the average reward (see [20]) is given by

∇_θ η(θ) = ∫_S p_D(s|θ) ∫_A π(a|s, θ) ∇_θ log π(a|s, θ) Q^θ(s, a) da ds.   (1)

The natural gradient [2] has a basis in information geometry, which studies the Riemannian geometric structure of the manifold of probability distributions. A result in information geometry states that the FIM defines a Riemannian metric tensor on the space of probability distributions [3] and that the direction of steepest descent on a Riemannian manifold is given by the natural gradient, i.e., the conventional gradient premultiplied by the inverse matrix of the Riemannian metric tensor [2]. Thus, the natural gradient can be computed from the gradient and the FIM, and it tends to converge faster than the conventional gradient. Kakade [9] applied the natural gradient to policy search; this was called the natural policy gradient. If the FIM is invertible, the natural policy gradient ∇̃_θ η(θ) ≡ F_θ^{−1} ∇_θ η(θ) is given by the policy gradient premultiplied by the inverse matrix of the FIM F_θ. In this paper, we employ the FIM proposed by Kakade [9], defined as

F_θ = ∫_S p_D(s|θ) ∫_A π(a|s, θ) ∇_θ log π(a|s, θ) ∇_θ log π(a|s, θ)^T da ds.

Figure 1: Illustration of the main difference between control-based exploration and parameter-based exploration. The controller ψ(u|s, w) is represented by a single-layer perceptron. While the control-based exploration strategy (left) perturbs the resulting control signal, the parameter-based exploration strategy (right) perturbs the parameters of the controller.

2.3 Learning from Samples

The calculation of (1) requires knowledge of the underlying stationary distribution p_D(s|θ). The GPOMDP algorithm [5] instead computes a Monte Carlo approximation of (1): the agent interacts with the environment, producing an observation, action, and reward sequence {s_1, a_1, r_1, s_2, ..., s_T, a_T, r_T}.
Under mild technical assumptions, the policy gradient approximation is
$$\nabla_\theta \eta(\theta) \approx \frac{1}{T} \sum_{t=1}^{T} r_t z_t, \qquad z_t = \beta z_{t-1} + \nabla_\theta \log \pi(a_t|s_t, \theta),$$
where zt is called the eligibility trace [12], ∇θ log π(at|st, θ) is called the characteristic eligibility [22], and β denotes the discount factor (0 ≤ β < 1). As β → 1, the estimate approaches the true gradient, but its variance increases (β is set to 0.9 in all experiments); [5] showed that the approximation error is proportional to (1 − β)/(1 − |κ2|), where κ2 is the sub-dominant eigenvalue of the Markov chain. We define $\tilde{\nabla}_\theta \log \pi(a_t|s_t, \theta) \equiv F_\theta^{-1} \nabla_\theta \log \pi(a_t|s_t, \theta)$. Therefore, the natural policy gradient approximation is
$$\tilde{\nabla}_\theta \eta(\theta) \approx \frac{1}{T} \sum_{t=1}^{T} F_\theta^{-1} r_t z_t = \frac{1}{T} \sum_{t=1}^{T} r_t \tilde{z}_t, \quad (2)$$
where $\tilde{z}_t = \beta \tilde{z}_{t-1} + \tilde{\nabla}_\theta \log \pi(a_t|s_t, \theta)$. To estimate the natural policy gradient, the heuristic suggested by Kakade [9] uses the online estimate of the FIM
$$F_{\theta,t} = \left(1 - \frac{1}{t}\right) F_{\theta,t-1} + \frac{1}{t} \left( \nabla_\theta \log \pi(a_t|s_t, \theta)\, \nabla_\theta \log \pi(a_t|s_t, \theta)^T + \lambda I \right), \quad (3)$$
where λ is a small positive constant.

2.4 Parameter-based Exploration

In most control tasks, we have a (deterministic or stochastic) controller ψ(u|s, w) and an exploration strategy, where u ∈ U ⊆ ℜ^m denotes the control and w ∈ W ⊆ ℜ^n the parameters of the controller. The objective of learning is to find suitable values of the parameters w, and the exploration strategy is required to carry out stochastic sampling near the current parameters. A typical exploration strategy, which we call control-based exploration, is a normal distribution over the control space (Figure 1, left). In this case, the action of the agent is the control, and the policy is represented by
$$\pi_U(u|s, \theta) = \frac{1}{(2\pi)^{m/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (u - \psi(s, w))^T \Sigma^{-1} (u - \psi(s, w)) \right),$$
where Σ is the m × m covariance matrix and the agent seeks θ = ⟨w, Σ⟩. The control at time t is generated by ũt = ψ(st, w), ut ∼ N(ũt, Σ). One useful feature of such a Gaussian unit [22] is that the agent can potentially control its degree of exploratory behavior.
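As a concrete illustration, the GPOMDP-style estimator above can be sketched in a few lines. This is a minimal sketch, not the authors' code; the function name and the array-based interface are assumptions.

```python
import numpy as np

def gpomdp_estimate(rewards, grad_log_probs, beta=0.9):
    """Monte Carlo estimate (1/T) * sum_t r_t z_t with the eligibility
    trace z_t = beta * z_{t-1} + grad_theta log pi(a_t | s_t, theta)."""
    z = np.zeros_like(grad_log_probs[0])
    g = np.zeros_like(grad_log_probs[0])
    for r, glp in zip(rewards, grad_log_probs):
        z = beta * z + glp      # accumulate the eligibility trace
        g += r * z              # weight the trace by the instantaneous reward
    return g / len(rewards)
```

With beta = 0 this reduces to the plain average of r_t ∇θ log π(at|st, θ); smaller beta lowers variance at the cost of bias, as described in the text.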
The control-based exploration strategy samples near the output of the controller. However, the structures of the parameter space and the control space are not always identical. The sampling strategy may therefore generate controls that are unlikely to be generated by the current controller, even if the exploration variances decrease. This property leads to gradient estimates with large variance, which may be one reason why policy improvement gets stuck. To address this issue, Sehnke et al. [18] introduced a different exploration strategy for policy gradient methods called policy gradients with parameter-based exploration (PGPE). In this approach, the action of the agent is the parameter vector of the controller, and the policy is represented by
$$\pi_W(\tilde{w}|s, \theta) = \frac{1}{(2\pi)^{n/2} |\tilde{\Sigma}|^{1/2}} \exp\left( -\frac{1}{2} (\tilde{w} - w)^T \tilde{\Sigma}^{-1} (\tilde{w} - w) \right),$$
where ˜Σ is the n × n covariance matrix and the agent seeks θ = ⟨w, ˜Σ⟩. The controller is included in the dynamics of the environment, and the control at time t is generated by ˜wt ∼ N(w, ˜Σ), ut = ψ(st, ˜wt). GPOMDP-based methods can estimate policy gradients even in partially observable settings, since the policy πW(˜w|s, θ) does not depend on the observation of the current state. Because this exploration strategy directly perturbs the parameters (Figure 1, right), the samples are generated near the current parameters under small exploration variances. A further advantage of this framework is that, because the gradient is estimated directly by sampling the parameters of the controller, implementing policy gradient algorithms does not require ∂ψ/∂θ, which is difficult to derive for complex controllers. Sehnke et al. [18] demonstrated that PGPE can yield faster convergence than the control-based exploration strategy in several challenging episodic tasks. However, parameter-based exploration tends to have a higher dimension than control-based exploration.
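The distinction between the two strategies can be made concrete for a one-dimensional linear controller ψ(s, w) = s · w. This is an illustrative sketch only; the function names and scalar noise model are assumptions.

```python
import numpy as np

def control_based_action(s, w, sigma, rng):
    """Perturb the controller's output: u ~ N(psi(s, w), sigma^2)."""
    return s * w + sigma * rng.standard_normal()

def parameter_based_action(s, w, sigma_w, rng):
    """Perturb the controller's parameters: w~ ~ N(w, sigma_w^2), u = psi(s, w~)."""
    w_tilde = w + sigma_w * rng.standard_normal()
    return s * w_tilde
```

Note that near s = 0 a fixed-size control perturbation corresponds to an arbitrarily large perturbation in parameter space, which matches the sampling-area illustration discussed later in Section 4.2; also, the sampled object in the second case lives in the (typically higher-dimensional) parameter space.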
Therefore, because of the computational cost of inverting Fθ as estimated by (3), natural policy gradients have found limited application with parameter-based exploration.

3 Natural Policy Gradients with Parameter-based Exploration

In this section, we propose a new algorithm called natural policy gradients with parameter-based exploration (NPGPE) for the efficient estimation of the natural policy gradient.

3.1 Implementation of Gaussian-based Exploration Strategy

We employ the policy representation model µ(˜w|θ), a multivariate normal distribution with parameters θ = ⟨w, C⟩, where w represents the mean and C the Cholesky factor of the covariance matrix ˜Σ, such that C is an n × n upper triangular matrix and ˜Σ = C^T C. Sun et al. [19] noted two advantages of this parameterization: C makes explicit the n(n + 1)/2 independent parameters determining the covariance matrix ˜Σ; in addition, the diagonal elements of C are the square roots of the eigenvalues of ˜Σ, and therefore C^T C is always positive semidefinite. In the remainder of the text, we consider θ to be an [n(n + 3)/2]-dimensional column vector consisting of the elements of w and the upper-right elements of C, i.e., θ = [w^T, (C_{1:n,1})^T, (C_{2:n,2})^T, ..., (C_{n:n,n})^T]^T, where C_{k:n,k} is the sub-matrix of C from row k to n in column k.

3.2 Inverse of Fisher Information Matrix

Previous natural policy gradient methods [9] use the empirical FIM, which is estimated from a sample path. For µ(˜w|θ), such methods are highly inefficient because they must invert the empirical FIM, a matrix with O(n^4) elements. We avoid this problem by directly computing the exact FIM.

Algorithm 1 Natural Policy Gradient Method with Parameter-based Exploration
Require: θ = ⟨w, C⟩: policy parameters, ψ(u|s, w): controller, α: step size, β: discount rate, b: baseline.
1: Initialize ˜z0 = 0, observe s1.
2: for t = 1, ... do
3:   Draw ξt ∼ N(0, I), compute action ˜wt = C^T ξt + w.
4:   Execute ut ∼ ψ(ut|st, ˜wt), obtain observation st+1 and reward rt.
5:   ˜∇w log µ(˜wt|θ) = ˜wt − w,  ˜∇C log µ(˜wt|θ) = {triu(ξt ξt^T) − ½ diag(ξt ξt^T) − ½ I} C
6:   ˜zt = β ˜zt−1 + ˜∇θ log µ(˜wt|θ)
7:   θ ← θ + α (rt − b) ˜zt
8: end for

Substituting π = µ(˜w|θ) into (1), we can rewrite the policy gradient to obtain
$$\nabla_\theta \eta(\theta) = \int_S p_D(s|\theta) \int_W \mu(\tilde{w}|\theta)\, \nabla_\theta \log \mu(\tilde{w}|\theta)\, Q_\theta(s, \tilde{w})\, d\tilde{w}\, ds.$$
Furthermore, the FIM of this distribution is
$$F_\theta = \int_S p_D(s|\theta) \int_W \mu(\tilde{w}|\theta)\, \nabla_\theta \log \mu(\tilde{w}|\theta)\, \nabla_\theta \log \mu(\tilde{w}|\theta)^T\, d\tilde{w}\, ds = \int_W \mu(\tilde{w}|\theta)\, \nabla_\theta \log \mu(\tilde{w}|\theta)\, \nabla_\theta \log \mu(\tilde{w}|\theta)^T\, d\tilde{w}.$$
Because Fθ is independent of pD(s|θ), we can use the exact FIM. Sun et al. [19] proved that the exact FIM of the Gaussian distribution N(w, C^T C) is a block-diagonal matrix diag(F0, ..., Fn) whose first block F0 is identical to ˜Σ^{-1} and whose k-th (1 ≤ k ≤ n) block Fk is given by
$$F_k = \begin{bmatrix} c_{k,k}^{-2} & 0 \\ 0 & 0 \end{bmatrix} + \tilde{\Sigma}^{-1}_{k:n,\,k:n} = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^{-1} \left( v_k v_k^T + I \right) C^{-T} \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix}^T,$$
where vk denotes the n-dimensional column vector whose only nonzero element is the k-th element, which is one, and $I_{\bar{k}}$ is the [n − k + 1]-dimensional identity matrix. Further, Akimoto et al. [1] derived the inverse of the k-th diagonal block Fk of the FIM. Because Fθ is block-diagonal and C is upper triangular, it is easy to verify that the inverse is
$$F_k^{-1} = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^T \left( -\frac{1}{2} v_k v_k^T + \begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} \right) C \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix}^T,$$
where we use
$$v_k^T C \begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} C^{-1} = v_k^T \quad \text{and} \quad \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C \begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} C^{-1} = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix}. \quad (4)$$

3.3 Natural Policy Gradient

Now, we derive the eligibility premultiplied by the inverse of the FIM, $\tilde{\nabla}_\theta \log \mu(\tilde{w}_t|\theta) = F_\theta^{-1} \nabla_\theta \log \mu(\tilde{w}_t|\theta)$, in the same manner as [1]. The characteristic eligibility w.r.t. w is given by $\nabla_w \log \mu(\tilde{w}_t|\theta) = \tilde{\Sigma}^{-1}(\tilde{w}_t - w)$. Obviously, $F_0^{-1} = \tilde{\Sigma}$ and $\tilde{\nabla}_w \log \mu(\tilde{w}_t|\theta) = F_0^{-1} \nabla_w \log \mu(\tilde{w}_t|\theta) = \tilde{w}_t - w$. The characteristic eligibility w.r.t.
C is given by
$$\frac{\partial}{\partial c_{i,j}} \log \mu(\tilde{w}_t|\theta) = v_i^T \left( \mathrm{triu}(Y_t C^{-T}) - \mathrm{diag}(C^{-1}) \right) v_j,$$
where triu(Yt C^{-T}) denotes the upper triangular matrix whose (i, j) element is identical to the (i, j) element of Yt C^{-T} if i ≤ j and zero otherwise, and $Y_t = C^{-T}(\tilde{w}_t - w)(\tilde{w}_t - w)^T C^{-1}$ is a symmetric matrix.

Figure 2: Performance of NPG(w) as compared to that of NPG(u), VPG(w), and VPG(u) in the linear quadratic regulation task, averaged over 100 trials. Left: the empirical optimum denotes the mean return under the optimal gain. Center and right: illustration of the main difference between control- and parameter-based exploration; the 1σ sampling area in the state-control space (center) and the state-parameter space (right) is plotted.

Let $c_k = (c_{k,k}, ..., c_{k,n})^T$ (of dimension n + 1 − k); then the characteristic eligibility w.r.t. ck is expressed as
$$\nabla_{c_k} \log \mu(\tilde{w}_t|\theta) = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} \left( C^{-1} Y_t - \mathrm{diag}(C^{-1}) \right) v_k.$$
According to (4), $\mathrm{diag}(C^{-1}) v_k = c_{k,k}^{-1} v_k$, $v_k^T C v_k = c_{k,k}$, and $\begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} C v_k = c_{k,k} v_k$; the k-th block of $F_\theta^{-1} \nabla_\theta \log \mu(\tilde{w}_t|\theta)$ is therefore
$$\tilde{\nabla}_{c_k} \log \mu(\tilde{w}_t|\theta) = F_k^{-1} \nabla_{c_k} \log \mu(\tilde{w}_t|\theta) = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^T \left( -\frac{1}{2} v_k v_k^T + \begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} \right) C \begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} \left( C^{-1} Y_t - \mathrm{diag}(C^{-1}) \right) v_k = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^T \left( -\frac{1}{2} v_k v_k^T + \begin{bmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{bmatrix} \right) (Y_t - I)\, v_k.$$
Because $\tilde{\nabla}_{c_k} \log \mu(\tilde{w}_t|\theta)^T = \left( \tilde{\nabla}_C \log \mu(\tilde{w}_t|\theta) \right)_{k,\,k:n}$, we obtain
$$\tilde{\nabla}_C \log \mu(\tilde{w}_t|\theta) = \left( \mathrm{triu}(Y_t) - \frac{1}{2} \mathrm{diag}(Y_t) - \frac{1}{2} I \right) C. \quad (5)$$
Therefore, the time complexity of computing $\tilde{\nabla}_\theta \log \mu(\tilde{w}_t|\theta) = [\tilde{\nabla}_w \log \mu(\tilde{w}_t|\theta)^T, \tilde{\nabla}_{c_1} \log \mu(\tilde{w}_t|\theta)^T, ..., \tilde{\nabla}_{c_n} \log \mu(\tilde{w}_t|\theta)^T]^T$ is O(n^3), which is of the same order as the computation of $\nabla_\theta \log \mu(\tilde{w}_t|\theta)$.
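The closed-form natural eligibilities of Eq. (5) are straightforward to implement with NumPy. The sketch below is an assumed re-implementation, not the authors' code; the function name and interface are made up.

```python
import numpy as np

def natural_eligibility(w, C, w_tilde):
    """Closed-form natural gradient of log mu(w_tilde | w, C) for an
    upper-triangular Cholesky factor C (Sigma = C^T C), per Eq. (5)."""
    n = w.size
    xi = np.linalg.solve(C.T, w_tilde - w)   # recover xi from w_tilde = C^T xi + w
    Y = np.outer(xi, xi)                     # Y_t = xi xi^T
    g_w = w_tilde - w                        # natural gradient w.r.t. the mean w
    g_C = (np.triu(Y) - 0.5 * np.diag(np.diag(Y)) - 0.5 * np.eye(n)) @ C
    return g_w, g_C
```

Because the factor in parentheses is upper triangular, g_C is upper triangular as well, so the update stays within the Cholesky parameterization of the covariance.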
This is a significant improvement over the current natural policy gradient estimation using (2) and (3) with parameter-based exploration, whose complexity is O(n^6). Note that simpler forms of the exploration distribution could also be used. When we use an exploration strategy represented as an independent normal distribution for each parameter wi in w, the natural policy gradient is estimated in O(n) time. This limited form ignores the relationships between parameters, but it is practical for high-dimensional controllers.

3.4 An Algorithm

For a parameterized class of controllers ψ(u|s, w), we can use the exploration strategy µ(˜w|θ). An online version of this implementation based on the GPOMDP algorithm is shown in Algorithm 1. In practice, the parameters of the controller ˜wt are generated by ˜wt = C^T ξt + w, where ξt ∼ N(0, I) are standard normal random numbers. We can then use Yt = C^{-T}(˜wt − w)(˜wt − w)^T C^{-1} = ξt ξt^T. To reduce the variance of the gradient estimate, we employ variance reduction techniques [6] to adapt the reinforcement baseline b.

Figure 3: Simulator of a two-link arm robot.

4 Experiments

In this section, we evaluate the performance of our proposed NPGPE method. The efficiency of parameter-based exploration has been reported for episodic tasks [18]. We compare parameter- and control-based exploration strategies with natural gradients and conventional "vanilla" gradients on a simple continuing task as an example of a linear control problem. We also demonstrate NPGPE's usefulness for a physically realistic locomotion task using a two-link arm robot simulator.

4.1 Implementation

We compare two different exploration strategies. The first is the parameter-based exploration strategy µ(˜w|θ) presented in Section 3.1.
The second is the control-based exploration strategy ϵ(u|˜u, D), represented by a normal distribution over the control space, where ˜u is the mean control generated by the controller ψ and D is the Cholesky factor of the covariance matrix Σ, such that D is an m × m upper triangular matrix and Σ = D^T D. The parameter vector of the policy πU(u|s, θ), θ = ⟨w, D⟩, is an [n + m(m + 1)/2]-dimensional column vector consisting of the elements of w and the upper-right elements of D.

4.2 Linear Quadratic Regulator

The following linear control problem can serve as a benchmark for delayed reinforcement tasks [10]. The dynamics of the environment are s_{t+1} = s_t + u_t + δ, where s ∈ ℜ^1, u ∈ ℜ^1, and δ ∼ N(0, 0.5^2). The immediate reward is given by r_t = −s_t^2 − u_t^2. In this experiment, the set of possible states is constrained to lie in the range [−4, 4], and st is truncated. When the agent chooses an action that does not lie in the range [−4, 4], the action executed in the environment is also truncated. The controller is represented by ψ(u|s, w) = s · w, where w ∈ ℜ^1. The optimal parameter, obtained from the Riccati equation, is $w^* = 2/\left(1 + 2\beta + \sqrt{4\beta^2 + 1}\right) - 1$. In the following, NPG denotes an algorithm that employs the natural policy gradient and VPG one that employs the "vanilla" policy gradient; NPG(w) and VPG(w) denote the use of the parameter-based exploration strategy, and NPG(u) and VPG(u) the use of the control-based exploration strategy. Our proposed NPGPE method is NPG(w). Figure 2 (left) shows the performance of all compared methods. The algorithm using parameter-based exploration performed better than that using control-based exploration in this continuing task. The natural policy gradient also improved the convergence speed, and the combination with parameter-based exploration outperformed all other methods.
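The benchmark and the optimal gain quoted above can be reproduced directly. The sketch below is an assumed re-implementation of the task description, not the authors' code.

```python
import numpy as np

def lqr_return(w, T=2000, rng=None):
    """Average reward of the linear controller u = s*w on the benchmark
    s_{t+1} = s_t + u_t + delta, delta ~ N(0, 0.5^2), r_t = -s_t^2 - u_t^2,
    with state and control truncated to [-4, 4]."""
    rng = rng or np.random.default_rng()
    s, total = 0.0, 0.0
    for _ in range(T):
        u = np.clip(s * w, -4.0, 4.0)
        total += -s**2 - u**2
        s = np.clip(s + u + 0.5 * rng.standard_normal(), -4.0, 4.0)
    return total / T

def optimal_gain(beta=0.9):
    """Closed-form optimum w* = 2/(1 + 2*beta + sqrt(4*beta^2 + 1)) - 1."""
    return 2.0 / (1.0 + 2.0 * beta + np.sqrt(4.0 * beta**2 + 1.0)) - 1.0
```

For β = 0.9 this gives w* ≈ −0.59, and rolling out lqr_return(optimal_gain()) approximates the "empirical optimum" line in Figure 2 (left).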
The reason for the accelerated learning in this case may be that the samples generated by the parameter-based exploration strategy allow a more effective search. Figure 2 (center and right) shows plots of the sampling area in the state-control space and the state-parameter space, respectively. Because control-based exploration maintains the sampling area in the control space, the sampling is almost uniform in the parameter space around s = 0, where the agent visits frequently. Therefore, parameter-based exploration may realize more efficient sampling than control-based exploration.

4.3 Locomotion Task on a Two-link Arm Robot

We applied the algorithm to the robot shown in Figure 3 (from Kimura et al. [11]). The objective of learning is to find control rules to move forward. The joints are controlled by servo motors that react to angular-position commands. At each time step, the agent observes the angular positions of the two motors, where each observation o1, o2 is normalized to [0, 1], and selects an action. The immediate reward is the distance of the body movement caused by the previous action. When the robot moves backward, the agent receives a negative reward. The state vector is expressed as s = [o1, o2, 1]^T.

Figure 4: Performance of NPG(w) as compared to that of NPG(u) and NAC(u) in the locomotion task, averaged over 100 trials. Left: mean performance of all compared methods. Center: parameters of the controller for NPG(w). Right: parameters of the controller for NPG(u). The parameters of the controller are normalized by $\mathrm{gain}_i = \sqrt{\sum_j w_{i,j}}$ and $\mathrm{weight}_{i,j} = w_{i,j}/\mathrm{gain}_i$, where w_{i,j} denotes the j-th parameter of the i-th joint. Arrows in the center and right panels denote the changing points of the relation between two important parameters.
The control for motor i is generated by $u_i = 1/\left(1 + \exp\left(-\sum_j s_j w_{i,j}\right)\right)$. The dimension of the policy parameters is dW = n(n + 3)/2 = 27 for the parameter-based exploration strategy and dU = n + m(m + 1)/2 = 9 for the control-based one. We compared NPG(w), i.e., NPGPE, with NPG(u) and NAC(u). NAC is the state-of-the-art policy gradient algorithm [15] that combines natural policy gradients, an actor-critic framework, and least-squares temporal-difference Q-learning. NAC computes the inverse of a d × d matrix to estimate the natural steepest ascent direction. Because NAC(w) has O(dW^3) time complexity per iteration, which is prohibitively expensive, we apply NAC only to control-based exploration. Figure 4 (left) shows our results. Initially, NPG(w) is outperformed by NAC(u); however, it then reaches good solutions in fewer steps. Furthermore, at a later stage, NAC(u) matches NPG(u). Figure 4 (center and right) shows the path of the relation between the parameters of the controller. NPG(w) is much slower than NPG(u) at adapting this relation at an early stage; however, it finds the relations between important parameters (indicated by arrows in the figures) faster, whereas NPG(u) gets stuck because of inefficient sampling.

5 Conclusions

This paper proposed a novel natural policy gradient method combined with parameter-based exploration to cope with high-dimensional reinforcement learning domains. The proposed algorithm, NPGPE, is very simple and quickly computes an estimate of the natural policy gradient. Moreover, the experimental results demonstrate a significant improvement in the control domain. Future work will focus on developing actor-critic versions of NPGPE that might encourage performance improvements at an early stage, and on combining other gradient methods such as natural conjugate gradient methods [8].
In addition, a comparison with other direct parameter perturbation methods such as finite difference gradient methods [14], CMA-ES [7], and NES [19] will be necessary to gain a better understanding of the properties and efficacy of the combination of parameter-based exploration strategies and the natural policy gradient. Furthermore, application of the algorithm to real-world problems is required to assess its utility.

Acknowledgments

This work was supported by the Japan Society for the Promotion of Science (22 9031).

References

[1] Youhei Akimoto, Yuichi Nagata, Isao Ono, and Shigenobu Kobayashi. Bidirectional relation between CMA evolution strategies and natural evolution strategies. In Parallel Problem Solving from Nature XI, pages 154–163, 2010.
[2] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[3] S. Amari and H. Nagaoka. Methods of Information Geometry. American Mathematical Society, 2007.
[4] J. Andrew Bagnell and Jeff Schneider. Covariant policy search. In IJCAI'03: Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 1019–1024, 2003.
[5] Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.
[6] Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471–1530, 2004.
[7] V. Heidrich-Meisner and C. Igel. Variable metric reinforcement learning methods applied to the noisy mountain car problem. In EWRL 2008, pages 136–150, 2008.
[8] Antti Honkela, Matti Tornio, Tapani Raiko, and Juha Karhunen. Natural conjugate gradient in variational inference. In ICONIP 2007, pages 305–314, 2008.
[9] S. A. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, pages 1531–1538, 2001.
[10] H. Kimura and S. Kobayashi.
Reinforcement learning for continuous action using stochastic gradient ascent. In Intelligent Autonomous Systems (IAS-5), pages 288–295, 1998.
[11] Hajime Kimura, Kazuteru Miyazaki, and Shigenobu Kobayashi. Reinforcement learning in POMDPs with function approximation. In ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning, pages 152–160, 1997.
[12] Hajime Kimura, Masayuki Yamamura, and Shigenobu Kobayashi. Reinforcement learning by stochastic hill climbing on discounted reward. In ICML, pages 295–303, 1995.
[13] Jens Kober and Jan Peters. Policy search for motor primitives in robotics. In Advances in Neural Information Processing Systems 21, pages 849–856, 2009.
[14] Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2219–2225, 2006.
[15] Jan Peters and Stefan Schaal. Natural actor-critic. Neurocomputing, 71(7–9):1180–1190, 2008.
[16] Silvia Richter, Douglas Aberdeen, and Jin Yu. Natural actor-critic for road traffic optimisation. In Advances in Neural Information Processing Systems 19, pages 1169–1176. MIT Press, Cambridge, MA, 2007.
[17] Thomas Rückstieß, Martin Felder, and Jürgen Schmidhuber. State-dependent exploration for policy gradient methods. In ECML PKDD '08: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, Part II, pages 234–249, 2008.
[18] Frank Sehnke, C. Osendorfer, T. Rueckstiess, A. Graves, J. Peters, and J. Schmidhuber. Policy gradients with parameter-based exploration for control. In Proceedings of the International Conference on Artificial Neural Networks (ICANN), pages 387–396, 2008.
[19] Yi Sun, Daan Wierstra, Tom Schaul, and Juergen Schmidhuber. Efficient natural evolution strategies. In GECCO '09: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, pages 539–546, 2009.
[20] R. S. Sutton.
Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 12, pages 1057–1063, 2000.
[21] Daan Wierstra, Alexander Foerster, Jan Peters, and Juergen Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In International Conference on Artificial Neural Networks, 2007.
[22] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
Kernel Descriptors for Visual Recognition Liefeng Bo University of Washington Seattle WA 98195, USA Xiaofeng Ren Intel Labs Seattle Seattle WA 98105, USA Dieter Fox University of Washington & Intel Labs Seattle Seattle WA 98195 & 98105, USA Abstract The design of low-level image features is critical for computer vision algorithms. Orientation histograms, such as those in SIFT [16] and HOG [3], are the most successful and popular features for visual object and scene recognition. We highlight the kernel view of orientation histograms, and show that they are equivalent to a certain type of match kernels over image patches. This novel view allows us to design a family of kernel descriptors which provide a unified and principled framework to turn pixel attributes (gradient, color, local binary pattern, etc.) into compact patch-level features. In particular, we introduce three types of match kernels to measure similarities between image patches, and construct compact low-dimensional kernel descriptors from these match kernels using kernel principal component analysis (KPCA) [23]. Kernel descriptors are easy to design and can turn any type of pixel attribute into patch-level features. They outperform carefully tuned and sophisticated features including SIFT and deep belief networks. We report superior performance on standard image classification benchmarks: Scene-15, Caltech-101, CIFAR10 and CIFAR10-ImageNet. 1 Introduction Image representation (features) is arguably the most fundamental task in computer vision. The problem is highly challenging because images exhibit high variations, are highly structured, and lie in high dimensional spaces. In the past ten years, a large number of low-level features over images have been proposed. In particular, orientation histograms such as SIFT [16] and HOG [3] are the most popular low-level features, essential to many computer vision tasks such as object recognition and 3D reconstruction. 
The success of SIFT and HOG naturally raises questions about how they measure the similarity between image patches, how we should understand their design choices, and whether we can find a principled way to design and learn comparable or superior low-level image features. In this work, we highlight the kernel view of orientation histograms and provide a unified approach to low-level image feature design and learning. Our low-level image feature extractors, kernel descriptors, consist of three steps: (1) design match kernels using pixel attributes; (2) learn compact basis vectors using kernel principal component analysis; (3) construct kernel descriptors by projecting the infinite-dimensional feature vectors onto the learned basis vectors. We show how our framework is applied to gradient, color, and shape pixel attributes, leading to three effective kernel descriptors. We validate our approach on four standard image category recognition benchmarks, and show that our kernel descriptors surpass both manually designed and well-tuned low-level features (SIFT) [16] and sophisticated feature learning approaches (convolutional networks, deep belief networks, sparse coding, etc.) [10, 26, 14, 24]. The most relevant work to this paper is that on efficient match kernels (EMK) [1], which provides a kernel view of the frequently used bag-of-words representation and forms image-level features by learning compact low-dimensional projections or using random Fourier transformations. While the work on efficient match kernels is interesting, hand-crafted SIFT features are still used as the basic building block. Another related work is based on the mathematics of the neural response, which shows that hierarchical architectures motivated by the neuroscience of the visual cortex are associated with the derived kernel [24].
Instead, the goal of this paper is to provide a deeper understanding of how orientation histograms (SIFT and HOG) work, and to show how we can generalize them and design novel low-level image features based on this kernel insight. Our kernel descriptors are general and provide a principled way to convert pixel attributes into patch-level features. To the best of our knowledge, this is the first time that low-level image features have been designed and learned from scratch using kernel methods; they can serve as the foundation of many computer vision tasks, including object recognition. This paper is organized as follows. Section 2 introduces the kernel view of histograms. Our novel kernel descriptors are presented in Section 3, followed by an extensive experimental evaluation in Section 4. We conclude in Section 5.

2 Kernel View of Orientation Histograms

Orientation histograms, such as SIFT [16] and HOG [3], are the most commonly used low-level features for object detection and recognition. Here we describe the kernel view of such orientation histogram features, and show how this kernel view can help overcome issues such as orientation binning. Let θ(z) and m(z) be the orientation and magnitude of the image gradient at a pixel z. In HOG and SIFT, the gradient orientation of each pixel is discretized into a d-dimensional indicator vector δ(z) = [δ1(z), · · · , δd(z)] with
$$\delta_i(z) = \begin{cases} 1, & \left\lfloor \frac{d\,\theta(z)}{2\pi} \right\rfloor = i - 1 \\ 0, & \text{otherwise} \end{cases} \quad (1)$$
where ⌊x⌋ is the largest integer less than or equal to x (we describe soft binning further below). The feature vector of each pixel z is a weighted indicator vector F(z) = m(z)δ(z). Aggregating the feature vectors of pixels over an image patch P, we obtain the histogram of oriented gradients
$$F_h(P) = \sum_{z \in P} \tilde{m}(z)\, \delta(z) \quad (2)$$
where $\tilde{m}(z) = m(z) / \sqrt{\sum_{z \in P} m(z)^2 + \epsilon_g}$ is the normalized gradient magnitude, with ϵg a small constant. P is typically a 4 × 4 rectangle in SIFT and an 8 × 8 rectangle in HOG. Without loss of generality, we consider L2-based normalization here.
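Eqs. (1)–(2) amount to a magnitude-weighted orientation histogram. A minimal sketch with hard binning (function name assumed) is:

```python
import numpy as np

def hog_feature(theta, m, d=8, eps=1e-5):
    """Histogram of oriented gradients F_h(P) (Eq. 2) with hard binning (Eq. 1).
    theta: per-pixel gradient orientations in [0, 2*pi); m: gradient magnitudes."""
    m_norm = m / np.sqrt(np.sum(m**2) + eps)        # normalized magnitudes m~(z)
    bins = np.floor(d * theta / (2 * np.pi)).astype(int) % d
    F = np.zeros(d)
    np.add.at(F, bins, m_norm)                      # sum m~(z) per orientation bin
    return F
```

The inner product of two such features then equals the match kernel of Eq. (3) below: it sums m̃(z) m̃(z′) over all pixel pairs whose orientations fall into the same bin.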
In object detection [3, 5] and matching-based object recognition [18], linear support vector machines or the L2 distance are commonly applied to sets of image patch features. This is equivalent to measuring the similarity of image patches using a linear kernel on the feature map Fh(P) in kernel space:
$$K_h(P, Q) = F_h(P)^\top F_h(Q) = \sum_{z \in P} \sum_{z' \in Q} \tilde{m}(z)\, \tilde{m}(z')\, \delta(z)^\top \delta(z') \quad (3)$$
where P and Q are patches, usually from two different images. In Eq. 3, both $k_{\tilde{m}}(z, z') = \tilde{m}(z)\tilde{m}(z')$ and $k_\delta(z, z') = \delta(z)^\top \delta(z')$ are inner products of two vectors and thus positive definite kernels. Therefore, Kh(P, Q) is a match kernel over sets (here the sets are image patches), as in [8, 1, 11, 17, 7]. Thus Eq. 3 provides a kernel view of HOG features over image patches. For simplicity, we only use one image patch here; it is straightforward to extend to sets of image patches. The hard binning underlying Eq. 1 is only for ease of presentation. To get a kernel view of soft binning [13], we only need to replace the delta function in Eq. 1 by the following soft δ(·) function:
$$\delta_i(z) = \max\left( \cos^9(\theta(z) - a_i),\; 0 \right) \quad (4)$$
where a_i is the center of the i-th bin. In addition, one can easily include soft spatial binning by normalizing gradient magnitudes using the corresponding spatial weights. The L2 distance between P and Q can be expressed as $D(P, Q) = 2 - 2 F(P)^\top F(Q)$, since $F(P)^\top F(P) = 1$, and the kernel view can be provided in the same manner.

Figure 1: Pixel attributes. Left: gradient orientation representation. To measure the similarity between two pixel gradient orientations θ and θ′, we use the L2 norm between the normalized gradient vectors θ̃ = [sin(θ) cos(θ)] and θ̃′ = [sin(θ′) cos(θ′)]. The red dots represent the normalized gradient vectors, and the blue line represents the distance between them. Right: local binary patterns. The values indicate the brightness of pixels in a 3×3 patch. Red pixels have intensities larger than the center pixel; blue pixels are darker.
The 8-dimensional indicator vector is the resulting local binary pattern.

Note that the kernel $k_{\tilde{m}}(z, z')$, measuring the similarity of the gradient magnitudes of two pixels, is linear in the gradient magnitudes. $k_\delta(z, z')$ measures the similarity of the gradient orientations of two pixels: 1 if the two orientations are in the same bin, and 0 otherwise (Eq. 1, hard binning). As can be seen, this kernel introduces quantization errors and can lead to suboptimal performance in subsequent stages of processing. While soft binning results in a smoother kernel function, it still suffers from discretization. This motivates us to search for alternative match kernels which can measure the similarity of image patches more accurately.

3 Kernel Descriptors

3.1 Gradient, Color, and Shape Match Kernels

We introduce the following gradient match kernel, Kgrad, to capture image variations:
$$K_{grad}(P, Q) = \sum_{z \in P} \sum_{z' \in Q} \tilde{m}(z)\, \tilde{m}(z')\, k_o(\tilde{\theta}(z), \tilde{\theta}(z'))\, k_p(z, z') \quad (5)$$
where $k_p(z, z') = \exp(-\gamma_p \|z - z'\|^2)$ is a Gaussian position kernel, with z denoting the 2D position of a pixel in an image patch (normalized to [0, 1]), and $k_o(\tilde{\theta}(z), \tilde{\theta}(z')) = \exp(-\gamma_o \|\tilde{\theta}(z) - \tilde{\theta}(z')\|^2)$ is a Gaussian kernel over orientations. To estimate the difference between the orientations at pixels z and z′, we use the following normalized gradient vectors in the kernel function ko:
$$\tilde{\theta}(z) = [\sin(\theta(z)) \;\; \cos(\theta(z))]. \quad (6)$$
The L2 distance between such vectors measures the difference of gradient orientations very well (see Figure 1). Note that computing the L2 distance on the raw angle values θ instead of the normalized gradient vectors θ̃ would give wrong similarities in some cases. For example, consider the two angles 2π − 0.01 and 0.01, which have very similar orientations but a very large L2 distance.
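A direct (quadratic-time) evaluation of Eq. (5) makes the role of the normalized gradient vectors concrete. In this sketch, γo = 5 follows the paper's experiments, while the value of γp and the function name are assumptions.

```python
import numpy as np

def k_grad(theta_p, m_p, pos_p, theta_q, m_q, pos_q, gamma_o=5.0, gamma_p=3.0):
    """Gradient match kernel K_grad(P, Q) of Eq. (5), evaluated naively."""
    def m_tilde(m):                       # normalized gradient magnitudes
        return m / np.sqrt(np.sum(m**2) + 1e-5)
    def t_tilde(theta):                   # normalized gradient vectors (Eq. 6)
        return np.stack([np.sin(theta), np.cos(theta)], axis=1)
    mp, mq = m_tilde(m_p), m_tilde(m_q)
    tp, tq = t_tilde(theta_p), t_tilde(theta_q)
    k = 0.0
    for i in range(len(mp)):
        for j in range(len(mq)):
            ko = np.exp(-gamma_o * np.sum((tp[i] - tq[j])**2))        # orientation kernel
            kp = np.exp(-gamma_p * np.sum((pos_p[i] - pos_q[j])**2))  # position kernel
            k += mp[i] * mq[j] * ko * kp
    return k
```

For the angles 0.01 and 2π − 0.01 mentioned above, the normalized vectors are nearly identical, so ko ≈ 1, whereas a Gaussian kernel on the raw angle values would report them as highly dissimilar.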
To summarize, our gradient match kernel Kgrad consists of three kernels: the normalized linear kernel, the same as that in orientation histograms, weights the contribution of each pixel using gradient magnitudes; the orientation kernel ko computes the similarity of gradient orientations; and the position Gaussian kernel kp measures how close two pixels are spatially. The kernel view of orientation histograms provides a simple, unified way to turn pixel attributes into patch-level features. One immediate extension is to construct color match kernels over pixel values:
$$K_{col}(P, Q) = \sum_{z \in P} \sum_{z' \in Q} k_c(c(z), c(z'))\, k_p(z, z') \quad (7)$$
where c(z) is the pixel color at position z (intensity for gray images and RGB values for color images) and $k_c(c(z), c(z')) = \exp(-\gamma_c \|c(z) - c(z')\|^2)$ measures how similar two pixel values are. While the gradient match kernel can capture image variations and the color match kernel can describe image appearance, we find that a match kernel over local binary patterns [19] can capture local shape more effectively:
$$K_{shape}(P, Q) = \sum_{z \in P} \sum_{z' \in Q} \tilde{s}(z)\, \tilde{s}(z')\, k_b(b(z), b(z'))\, k_p(z, z') \quad (8)$$
where $\tilde{s}(z) = s(z) / \sqrt{\sum_{z \in P} s(z)^2 + \epsilon_s}$, s(z) is the standard deviation of pixel values in the 3 × 3 neighborhood around z, ϵs is a small constant, and b(z) is a binary column vector that binarizes the pixel value differences in a local window around z (see Fig. 1, right). The normalized linear kernel $\tilde{s}(z)\tilde{s}(z')$ weighs the contribution of each local binary pattern, and the Gaussian kernel $k_b(b(z), b(z')) = \exp(-\gamma_b \|b(z) - b(z')\|^2)$ measures shape similarity through local binary patterns. Match kernels defined over various pixel attributes provide a unified way to generate a rich, diverse visual feature set, which has been shown to be very successful in boosting recognition accuracy [6]. As validated by our own experiments, the gradient, color, and shape match kernels are strong in their own right and complement one another.
Their combination turns out to be consistently (and often much) better than the best individual feature.

3.2 Learning Compact Features

Match kernels provide a principled way to measure the similarity of image patches, but evaluating kernels can be computationally expensive when image patches are large [1]. Both for computational efficiency and for representational convenience, we present an approach to extract compact low-dimensional features from match kernels: (1) uniformly and densely sample sufficient basis vectors from the support region to guarantee an accurate approximation to the match kernels; (2) learn compact basis vectors using kernel principal component analysis. An important advantage of our approach is that no local minima are involved, unlike constrained kernel singular value decomposition [1]. We now describe how our compact low-dimensional features are extracted from the gradient kernel $K_{grad}$; features for the other kernels can be generated in the same way. Rewriting the kernels in Eq. 5 as inner products $k_o(\tilde\theta(z), \tilde\theta(z')) = \phi_o(\tilde\theta(z))^\top \phi_o(\tilde\theta(z'))$ and $k_p(z, z') = \phi_p(z)^\top \phi_p(z')$, we can derive the following feature over image patches:
$$F_{grad}(P) = \sum_{z \in P} \tilde{m}(z)\, \phi_o(\tilde\theta(z)) \otimes \phi_p(z) \quad (9)$$
where $\otimes$ is the tensor product. For this feature, it follows that $F_{grad}(P)^\top F_{grad}(Q) = K_{grad}(P, Q)$. Because we use Gaussian kernels, $F_{grad}(P)$ is an infinite-dimensional vector. A straightforward route to dimensionality reduction is to sample sufficient image patches from training images and perform KPCA on the match kernels. However, such an approach makes the learned features depend on the task at hand. Moreover, KPCA can become computationally infeasible when the number of patches is very large.

Sufficient Finite-dimensional Approximation. We present an approach to approximate match kernels directly, without requiring any images. Following classic methods, we learn finite-dimensional features by projecting $F_{grad}(P)$ onto a set of basis vectors.
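The identity $F_{grad}(P)^\top F_{grad}(Q) = K_{grad}(P, Q)$ can be checked directly when the feature maps are finite-dimensional. The toy sketch below (our own illustration: a hard-binned orientation map and a one-hot position map stand in for $\phi_o$ and $\phi_p$) verifies that the explicit feature of Eq. 9 reproduces the double-sum kernel of Eq. 5:

```python
import numpy as np

# Toy finite-dimensional stand-ins for phi_o and phi_p (hypothetical): hard
# orientation binning (whose inner product is the delta kernel) and a one-hot
# position map, so F(P).F(Q) can be checked against the double-sum kernel.
n_bins, n_pos = 4, 2

def phi_o(theta):
    e = np.zeros(n_bins)
    e[int(theta / (2 * np.pi) * n_bins) % n_bins] = 1.0
    return e

def phi_p(pos):
    e = np.zeros(n_pos)
    e[pos] = 1.0
    return e

def F(patch):
    """Eq. 9: sum over pixels of magnitude-weighted tensor products."""
    return sum(m * np.outer(phi_o(t), phi_p(p)).ravel() for m, t, p in patch)

def K(P, Q):
    """Eq. 5 written as an explicit double sum over pixel pairs."""
    return sum(m1 * m2 * float(phi_o(t1) @ phi_o(t2)) * float(phi_p(p1) @ phi_p(p2))
               for m1, t1, p1 in P for m2, t2, p2 in Q)

P = [(0.5, 0.3, 0), (1.0, 2.0, 1)]   # (magnitude, orientation, position index)
Q = [(0.8, 0.4, 0), (0.2, 4.0, 1)]
print(float(F(P) @ F(Q)), K(P, Q))   # identical values
```

The equality holds by bilinearity for any choice of feature maps, which is what makes the basis-projection approximation of the next step legitimate.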
A key issue in this projection process is how to choose a set of basis vectors that makes the finite-dimensional kernel approximate the original kernel well. Since pixel attributes are low-dimensional vectors, we can achieve a very good approximation by sampling sufficient basis vectors on a fine grid over the support region. For example, consider the Gaussian kernel $k_o(\tilde\theta(z), \tilde\theta(z'))$ over gradient orientations. Given a set of basis vectors $\{\phi_o(x_i)\}_{i=1}^{d_o}$, where the $x_i$ are sampled normalized gradient vectors, we can approximate the infinite-dimensional vector $\phi_o(\tilde\theta(z))$ by its projection onto the space spanned by these $d_o$ basis vectors. Following the formulation in [1], such a procedure is equivalent to using the finite-dimensional kernel:
$$\tilde{k}_o(\tilde\theta(z), \tilde\theta(z')) = k_o(\tilde\theta(z), X)^\top K_o^{-1}\, k_o(\tilde\theta(z'), X) = \left[G\, k_o(\tilde\theta(z), X)\right]^\top \left[G\, k_o(\tilde\theta(z'), X)\right] \quad (10)$$
where $k_o(\tilde\theta(z), X) = [k_o(\tilde\theta(z), x_1), \cdots, k_o(\tilde\theta(z), x_{d_o})]^\top$ is a $d_o \times 1$ vector, $K_o$ is a $d_o \times d_o$ matrix with $K_{o,ij} = k_o(x_i, x_j)$, and $K_o^{-1} = G^\top G$. The resulting feature map $\tilde\phi_o(\tilde\theta(z)) = G\, k_o(\tilde\theta(z), X)$ is now only $d_o$-dimensional.

Figure 2: Finite-dimensional approximation. Left: the orientation kernel $k_o(\tilde\theta(z), \tilde\theta(z'))$ and its finite-dimensional approximations. $\gamma_o$ is set to 5 (as used in the experiments) and $\tilde\theta(z')$ is fixed to $[1\ 0]$. All curves show kernel values as functions of $\tilde\theta(z)$. The red line is the ground truth kernel function $k_o$; the black, green and blue lines are the finite approximation kernels with grid sizes 10, 14 and 16. Right: root mean square error (RMSE) between the KPCA approximation and the corresponding match kernel as a function of dimensionality, computed on 10,000 randomly sampled datapoints. The three lines show the RMSE between the kernels $K_{grad}$ (red), $K_{col}$ (blue) and $K_{shape}$ (green) and their respective approximation kernels.
In a similar manner, we can also approximate the kernels $k_p$, $k_c$ and $k_b$. The finite-dimensional feature for the gradient match kernel is $\tilde{F}_{grad}(P) = \sum_{z \in P} \tilde{m}(z)\, \tilde\phi_o(\tilde\theta(z)) \otimes \tilde\phi_p(z)$, and may be used efficiently as a feature over image patches. We validate our intuition in Fig. 2. As we expect, the approximation error drops rapidly with increasing grid size. When the grid size is larger than 16, the finite kernel and the original kernel become virtually indistinguishable. For the shape kernel over local binary patterns, because the variables are binary, we simply choose the set of all $2^8 = 256$ basis vectors, and thus no approximation error is introduced.

Compact Features. Although $\tilde{F}_{grad}(P)$ is finite-dimensional, its dimensionality can be high due to the tensor product. For example, consider the shape kernel descriptor: the number of basis vectors for the kernel $k_b$ is 256; if we choose the basis vectors of the position kernel $k_p$ on a $5 \times 5$ regular grid, the dimensionality of the resulting shape kernel descriptor $F_{shape}$ would be $256 \times 25 = 6400$, too high for practical purposes. Dense uniform sampling leads to accurate approximation but does not guarantee orthogonality of the basis vectors, thus introducing redundancy. The size of the basis set can be further reduced by performing kernel principal component analysis over the joint basis vectors $\{\phi_o(x_1) \otimes \phi_p(y_1), \cdots, \phi_o(x_{d_o}) \otimes \phi_p(y_{d_p})\}$, where the $\phi_p(y_s)$ are basis vectors for the position kernel and $d_p$ is their number. The $t$-th kernel principal component can be written as
$$PC^t = \sum_{i=1}^{d_o} \sum_{j=1}^{d_p} \alpha^t_{ij}\, \phi_o(x_i) \otimes \phi_p(y_j) \quad (11)$$
where $d_o$ and $d_p$ are the numbers of basis vectors for the orientation and position kernels, respectively, and $\alpha^t_{ij}$ is learned through kernel principal component analysis: $K^c \alpha^t = \lambda^t \alpha^t$, where $K^c$ is a centered kernel matrix with $[K^c]_{ij,st} = k_o(x_i, x_j) k_p(y_s, y_t) - 2\sum_{i',s'} k_o(x_{i'}, x_j) k_p(y_{s'}, y_t) + \sum_{i',j',s',t'} k_o(x_{i'}, x_{j'}) k_p(y_{s'}, y_{t'})$. As shown in Fig.
2, match kernels can be approximated rather accurately using the reduced set of basis vectors from KPCA. Under the framework of kernel principal component analysis, our gradient kernel descriptor (Eq. 5) has the form
$$F^t_{grad}(P) = \sum_{i=1}^{d_o} \sum_{j=1}^{d_p} \alpha^t_{ij} \left\{ \sum_{z \in P} \tilde{m}(z)\, k_o(\tilde\theta(z), x_i)\, k_p(z, y_j) \right\} \quad (12)$$
The computational bottleneck in extracting kernel descriptors is evaluating the kernel function $k_o k_p$ between pixels. Fortunately, we can compute the two kernel values separately at a cost of $d_o + d_p$, rather than $d_o d_p$. Our most expensive kernel descriptor, the shape kernel, takes about 4 seconds in MATLAB to compute on a typical image ($300 \times 300$ resolution and $16 \times 16$ image patches over $8 \times 8$ grids). The gradient kernel descriptor takes about 1.5 seconds, compared to about 0.4 seconds for SIFT under the same setting. A more efficient GPU-based implementation would certainly reduce the computation time for kernel descriptors enough to make real-time applications feasible.

4 Experiments

We compare gradient (KDES-G), color (KDES-C), and shape (KDES-S) kernel descriptors to SIFT and several other state-of-the-art object recognition algorithms on four publicly available datasets: Scene-15, Caltech-101, CIFAR-10, and CIFAR-10-ImageNet (a subset of ImageNet). For the gradient and shape kernel descriptors and SIFT, all images are transformed into grayscale ($[0, 1]$) and resized to be no larger than $300 \times 300$ pixels with preserved aspect ratio. Image intensity and RGB values are normalized to $[0, 1]$. We extract all low-level features from $16 \times 16$ image patches over dense regular grids with a spacing of 8 pixels. We use the publicly available dense SIFT code at http://www.cs.unc.edu/~lazebnik [13], which includes spatial binning, soft binning and truncation (nonlinear cutoff at 0.2), and has been demonstrated to obtain high accuracy for object recognition. For our gradient kernel descriptors we use the same gradient computation as used for SIFT descriptors.
We also evaluate the performance of the combination of the three kernel descriptors (KDES-A), obtained by simply concatenating the image-level feature vectors. Instead of spatial pyramid kernels, we compute image-level features using efficient match kernels (EMK), which have been shown to produce more accurate quantization. We consider $1 \times 1$, $2 \times 2$ and $4 \times 4$ pyramid sub-regions (see [1]), and perform constrained kernel singular value decomposition (CKSVD) to form image-level features, using 1,000 visual words (basis vectors in CKSVD) learned by K-means from about 100,000 image patch features. We evaluate classification performance as accuracy averaged over 10 random training/testing splits, with the exception of the CIFAR-10 dataset, where we report the accuracy on the standard test set. We have experimented with both linear SVMs and Laplacian kernel SVMs and found that Laplacian kernel SVMs over efficient match kernel features are always better than linear SVMs (see §4.2). We therefore use Laplacian kernel SVMs in our experiments (except for the tiny-image dataset CIFAR-10).

4.1 Hyperparameter Selection

We select kernel parameters using a subset of ImageNet. We retrieve 8 everyday categories from the ImageNet collection: apple, banana, box, coffee mug, computer keyboard, laptop, soda can and water bottle. We choose basis vectors for $k_o$, $k_c$, and $k_p$ from 25, $5 \times 5 \times 5$ and $5 \times 5$ uniform grids, respectively, which give sufficient approximations to the original kernels (see also Fig. 2). We optimize the dimensionality of KPCA and the match kernel parameters jointly using exhaustive grid search. Our experiments suggest the optimal parameter settings $r = 200$ (dimensionality of kernel descriptors), $\gamma_o = 5$, $\gamma_c = 4$, $\gamma_b = 2$, $\gamma_p = 3$, $\epsilon_g = 0.8$ and $\epsilon_s = 0.2$ (Fig. 3). In the following experiments we keep these values fixed, even though performance might improve with task-dependent hyperparameter selection.

4.2 Benchmark Comparisons

Scene-15.
Scene-15 is a popular scene recognition benchmark from [13] which contains 15 scene categories with 200 to 400 images each. SIFT features have been used extensively on Scene-15. Following the common experimental setting, we train our models on 1,500 randomly selected images (100 images per category) and test on the rest. We report the averaged accuracy of SIFT, KDES-S, KDES-C, KDES-G, and KDES-A over 10 random training/test splits in Table 1. As we see, both the gradient and shape kernel descriptors outperform SIFT by a margin. Gradient kernel descriptors and shape kernel descriptors have similar performance. It is not surprising that the color (intensity) kernel descriptor has lower accuracy, as all the images are grayscale. The combination of the three kernel descriptors further boosts performance by about 2 percent. Another interesting finding is that Laplacian kernel SVMs are significantly better than linear SVMs (86.7% vs. 81.9% for KDES-A). In our recognition system, the accuracy of SIFT is 82.2%, compared to 81.4% in spatial pyramid matching (SPM). We also tried replacing SIFT features with our gradient and shape kernel descriptors in SPM, and both obtained 83.5% accuracy, 2 percent higher than SIFT features. To our best knowledge, our gradient kernel descriptor alone outperforms the best published result of 84.2% [27].

Figure 3: Hyperparameter selection. Left: accuracy as a function of feature dimensionality for the orientation kernel (KDES-G) and shape kernel (KDES-S). Center: accuracy as a function of ϵg and ϵs. Right: accuracy as a function of γo and γb.

Table 1: Comparisons of recognition accuracy on Scene-15: kernel descriptors and their combination vs. SIFT.

Method               | SIFT     | KDES-C   | KDES-G   | KDES-S   | KDES-A
Linear SVM           | 76.7±0.7 | 38.5±0.4 | 81.6±0.6 | 79.8±0.5 | 81.9±0.6
Laplacian kernel SVM | 82.2±0.9 | 47.9±0.8 | 85.0±0.6 | 84.9±0.7 | 86.7±0.4
Caltech-101. Caltech-101 [15] consists of 9,144 images in 101 object categories and one background category. The number of images per category varies from 31 to 800. Because many researchers have reported results on Caltech-101, we can directly compare our algorithm to existing ones. Following the standard experimental setting, we train classifiers on 30 images and test on no more than 50 images per category. We report our results in Table 2. We compare our kernel descriptors with recently published results obtained both by low-level feature learning algorithms, such as convolutional deep belief networks (CDBN), and by sparse coding methods: invariant predictive sparse decomposition (IPSD) and locality-constrained linear coding (LLC). We observe that SIFT features in conjunction with efficient match kernels work well on this dataset and obtain 70.8% accuracy using a single patch size, which beats SPM with the same SIFT features by a large margin. Both our gradient kernel descriptor and shape kernel descriptor are superior to CDBN by a large margin. We performed feature extraction with three different patch sizes, 16 × 16, 25 × 25 and 31 × 31, and reached the same conclusion as many other researchers: multiple patch sizes (scales) can boost performance by a few percent compared to a single patch size. Notice that both naive Bayes nearest neighbor (NBNN) and locality-constrained linear coding should be compared to our kernel descriptors over multiple patch sizes, because both use multiple scales to boost performance. Our gradient kernel descriptor alone obtains 75.2% accuracy, higher than the results obtained by all other single-feature-based methods, to our best knowledge. Another finding is that the combination of the three kernel descriptors outperforms any single kernel descriptor. We note that better performance has been reported with the use of more image features [6].
Our goal in this paper is to evaluate the strengths of kernel descriptors. To improve accuracy further, kernel descriptors can be combined with other types of image features.

CIFAR-10. CIFAR-10 is a labeled subset of the 80 million tiny images dataset [25, 12]. It consists of 60,000 32×32 color images in 10 categories, with 5,000 images per category for training and 1,000 images per category for testing. Deep belief networks have been extensively investigated on this dataset [21, 22]. We extract kernel descriptors over 8×8 image patches at every pixel. Efficient match kernels over the three spatial grids 1×1, 2×2, and 3×3 are used to generate image-level features. The resulting feature vectors have a length of (1+4+9) × 1,000 (visual words) = 14,000 per kernel descriptor. Linear SVMs are trained due to the large number of training images.

Table 2: Comparisons on Caltech-101. Kernel descriptors are compared to recently published results. (M) indicates that features are extracted with multiple image patch sizes.

SPM [13]  64.4±0.5 | kCNN [28] 67.4     | KDES-C 40.8±0.9 | KDES-C(M) 42.4±0.5
NBNN [2]  73.0     | IPSD [10] 56.0     | KDES-G 73.3±0.6 | KDES-G(M) 75.2±0.4
CDBN [14] 65.5     | LLC [26]  73.4±0.5 | KDES-S 68.2±0.7 | KDES-S(M) 70.3±0.6
SIFT      70.8±0.8 | SIFT(M)   73.2±0.5 | KDES-A 74.5±0.8 | KDES-A(M) 76.4±0.7

Table 3: Comparisons on CIFAR-10. Both logistic regression and SVMs are trained over image pixels.

LR        36.0 | GRBM, ZCAd images 59.6 | mRBM      59.7 | KDES-C 53.9
SVM       39.5 | GRBM              63.8 | cRBM      64.7 | KDES-G 66.3
GIST [20] 54.7 | fine-tuning GRBM  64.8 | mcRBM     68.3 | KDES-S 68.2
SIFT      65.6 | GRBM two layers   56.6 | mcRBM-DBN 71.0 | KDES-A 76.0

Table 4: Comparisons on CIFAR-10-ImageNet, a subset of ImageNet using the 10 CIFAR-10 categories.

Method               | SIFT     | KDES-C   | KDES-G   | KDES-S   | KDES-A
Laplacian kernel SVM | 66.5±0.4 | 56.4±0.8 | 69.0±0.8 | 70.5±0.7 | 75.2±0.7

We compare our kernel descriptors to deep networks [14, 9] and several baselines in Table 3.
One immediate observation is that sophisticated feature extraction is significantly better than raw pixel features. Linear logistic regression and linear SVMs over raw pixels reach accuracies of only 36% and 39.5%, respectively, over 30 percent lower than deep belief networks and our kernel descriptors. SIFT features still work well on tiny images, with an accuracy of 65.2%. The color kernel descriptor, KDES-C, obtains 53.9% accuracy. This result is a bit surprising, since each category has large color variation. A possible explanation is that spatial information helps a lot. To validate this intuition, we also evaluated the color kernel descriptor without spatial information (kernel features extracted on a 1 × 1 spatial grid), and obtained only 38.5% accuracy, 18 percent lower than the color kernel descriptor over pyramid spatial grids. KDES-G is slightly better than SIFT features. The shape kernel descriptor, KDES-S, has an accuracy of 68.2% and is the best single feature on this dataset. Combining the three kernel descriptors, we obtain the best performance of 76%, 5 percent higher than the most sophisticated deep network, mcRBM-DBN, which models pixel means and covariances jointly using factorized third-order Boltzmann machines.

CIFAR-10-ImageNet. Motivated by CIFAR-10, we collect a labeled subset of ImageNet [4] by retrieving the 10 CIFAR-10 categories from ImageNet: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. The total number of images is 15,561, with more than 1,200 images per category. This dataset is very challenging for several reasons: multiple objects can appear in one image, only a small part of an object may be visible, backgrounds are cluttered, and so on. We train models on 1,000 images per category and test on 200 images per category. We report results averaged over 10 random training/test splits in Table 4. We were unable to run deep belief networks in a reasonable time, since they are slow on images of this scale.
Both the gradient and shape kernel descriptors achieve higher accuracy than SIFT features, which again confirms that our gradient and shape kernel descriptors outperform SIFT features on high-resolution images of the same categories as CIFAR-10. We also ran the experiments on downsized images, no larger than 50×50 with preserved aspect ratio. We observe that accuracy drops 4-6 percent compared to the high-resolution images. This validates that high resolution is helpful for object recognition.

5 Conclusion

We have proposed a general framework, kernel descriptors, to extract low-level features from image patches. Our approach is able to turn any pixel attribute into patch-level features in a unified and principled way. Kernel descriptors are based on the insight that the inner product of orientation histograms is a particular match kernel over image patches. We have performed extensive comparisons and confirmed that kernel descriptors outperform both SIFT features and hierarchical feature learning, where the former is the default choice for object recognition and the latter is the most popular low-level feature learning technique. To our best knowledge, we are the first to show how kernel methods can be applied to extracting low-level image features with superior performance. This opens up many possibilities for learning low-level features with other kernel methods. Considering the huge success of kernel methods over the last twenty years, we believe that this direction is worth pursuing. In the future, we plan to investigate alternative kernels for low-level feature learning and to learn pixel attributes from large image collections such as ImageNet.

References

[1] L. Bo and C. Sminchisescu. Efficient Match Kernel between Sets of Features for Visual Recognition. In NIPS, 2009.
[2] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[3] N. Dalal and B. Triggs.
Histograms of oriented gradients for human detection. In CVPR, 2005.
[4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
[5] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008.
[6] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[7] K. Grauman and T. Darrell. The pyramid match kernel: discriminative classification with sets of image features. In ICCV, 2005.
[8] D. Haussler. Convolution kernels on discrete structures. Technical report, 1999.
[9] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
[10] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In CVPR, 2009.
[11] R. Kondor and T. Jebara. A kernel between sets of vectors. In ICML, 2003.
[12] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[14] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[15] F. Li, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE PAMI, 2006.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
[17] S. Lyu. Mercer kernels for object recognition with local features. In CVPR, 2005.
[18] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE PAMI, 27(10):1615–1630, 2005.
[19] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE PAMI, 24(7):971–987, 2002.
[20] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
[21] M. Ranzato, A. Krizhevsky, and G. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. In AISTATS, 2010.
[22] M. Ranzato and G. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR, 2010.
[23] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[24] S. Smale, L. Rosasco, J. Bouvrie, A. Caponnetto, and T. Poggio. Mathematics of the neural response. Foundations of Computational Mathematics, 10(1):67–91, 2010.
[25] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE PAMI, 30(11):1958–1970, 2008.
[26] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Guo. Locality-constrained linear coding for image classification. In CVPR, 2010.
[27] J. Wu and J. Rehg. Beyond the Euclidean distance: Creating effective visual codebooks using the histogram intersection kernel. In ICCV, 2009.
[28] K. Yu, W. Xu, and Y. Gong. Deep learning with kernel regularization for visual recognition. In NIPS, 2008.
Computing Marginal Distributions over Continuous Markov Networks for Statistical Relational Learning

Matthias Bröcheler, Lise Getoor
University of Maryland, College Park
College Park, MD 20742
{matthias, getoor}@cs.umd.edu

Abstract

Continuous Markov random fields are a general formalism to model joint probability distributions over events with continuous outcomes. We prove that marginal computation for constrained continuous MRFs is #P-hard in general and present a polynomial-time approximation scheme under mild assumptions on the structure of the random field. Moreover, we introduce a sampling algorithm to compute marginal distributions and develop novel techniques to increase its efficiency. Continuous MRFs are a general-purpose probabilistic modeling tool, and we demonstrate how they can be applied to statistical relational learning. On the problem of collective classification, we evaluate our algorithm and show that the standard deviation of marginals serves as a useful measure of confidence.

1 Introduction

Continuous Markov random fields are a general and expressive formalism to model complex probability distributions over multiple continuous random variables. Potential functions, which map the values of sets (cliques) of random variables to real numbers, capture the dependencies between variables and induce an exponential-family density function as follows. Given a finite set of $n$ random variables $X = \{X_1, \ldots, X_n\}$ with associated bounded interval domains $D_i \subset \mathbb{R}$, let $\phi = \{\phi_1, \ldots, \phi_m\}$ be a finite set of $m$ continuous potential functions defined over the interval domains, i.e. $\phi_j : D \to [0, M]$ for some bound $M \in \mathbb{R}^+$, where $D = D_1 \times D_2 \times \cdots \times D_n$. For a set of free parameters $\Lambda = \{\lambda_1, \ldots, \lambda_m\}$, we then define the probability measure $P$ over $X$ with respect to $\phi$ through its density function $f$ as:
$$f(x) = \frac{1}{Z(\Lambda)} \exp\Big[-\sum_{j=1}^{m} \lambda_j \phi_j(x)\Big]; \qquad Z(\Lambda) = \int_D \exp\Big[-\sum_{j=1}^{m} \lambda_j \phi_j(x)\Big]\, dx \quad (1)$$
where $Z$ is the normalization constant.
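Eq. 1 can be made concrete in one dimension, where $Z$ has a closed form. The sketch below (a toy example of ours, not from the paper) takes a single variable on $[0, 1]$ with one potential $\phi(x) = x$ and $\lambda = 1$, so $Z = \int_0^1 e^{-x}\, dx = 1 - e^{-1}$, and checks the closed form against simple midpoint-rule integration:

```python
import math

# 1-D instance of Eq. 1: one variable on [0, 1], one potential phi(x) = x with
# lambda = 1, so Z = int_0^1 exp(-x) dx = 1 - exp(-1).
lam = 1.0
phi = lambda x: x

def Z_numeric(n=100_000):
    """Midpoint-rule approximation of the partition function integral."""
    h = 1.0 / n
    return sum(math.exp(-lam * phi((i + 0.5) * h)) * h for i in range(n))

Z_exact = 1.0 - math.exp(-1.0)
print(Z_numeric(), Z_exact)   # both ~0.632
```

In higher dimensions this integral is exactly what becomes intractable, which is the subject of Section 3.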
The definition is analogous to the popular discrete Markov random fields (MRFs), but uses integration over the bounded domain rather than summation for the partition function $Z$. In addition, we assume the existence of a set of $k_A$ equality and $k_B$ inequality constraints on the random variables; that is, $A(x) = a$, where $A : D \to \mathbb{R}^{k_A}$, $a \in \mathbb{R}^{k_A}$, and $B(x) \le b$, where $B : D \to \mathbb{R}^{k_B}$, $b \in \mathbb{R}^{k_B}$. Both equality and inequality constraints restrict the possible combinations of values the random variables $X$ can assume. That is, we set $f(x) = 0$ whenever any of the constraints are violated and restrict the domain of integration, denoted $\tilde{D}$, for the normalization constant correspondingly. Constraints are useful in probabilistic modeling to exclude inconsistent outcomes based on prior knowledge about the distribution. We call this class of MRFs constrained continuous Markov random fields (CCMRFs). Probabilistic inference often requires the computation of marginal distributions for all or a subset of the random variables $X$. Marginal computation for discrete MRFs has been studied extensively due to its wide applicability in probabilistic reasoning. In this work, we study the theoretical and practical aspects of computing marginal density functions over CCMRFs. General continuous MRFs can be used in a variety of probabilistic modeling scenarios and have been studied for applications with continuous domains such as computer vision. Gaussian random fields are a type of continuous MRF which assume normality. In this work, we make no restrictive assumptions about the marginal distributions other than boundedness. For general continuous MRFs, non-parametric belief propagation (NBP) [1] has been proposed as a method to estimate marginals.

Table 1: Example PSL program for collective classification.
λ1 : A.text ≅ B.text ∧̃ class(A, C) ⇒̃ class(B, C)
λ2 : link(A, B) ∧̃ class(A, C) ⇒̃ class(B, C)
Constraint : functional(class)
NBP represents the “belief” as a combination of kernel densities which are propagated according to the structure of the MRF. In contrast to NBP, our approach provides polynomial-time approximation guarantees and avoids the representational choice of kernel densities. The main contributions of this work are described in Section 3. We begin by showing that computing marginals in CCMRFs is #P-hard in the number of random variables $n$. We then discuss a Markov chain Monte Carlo (MCMC) sampling scheme that can approximate the exact distribution to within ϵ error in polynomial time under the general assumption that the potential functions and inequality constraints are convex. Based on this result, we propose a tractable sampling algorithm and present a novel approach to increasing its effectiveness by detecting and counteracting slow convergence. Our theoretical results are based on recent advances in computational geometry and the study of log-concave functions [2]. In Section 4, we investigate the performance, scalability, and convergence of the sampling algorithm on the probabilistic inference problem of collective classification on a set of Wikipedia documents. In particular, we show that the standard deviation of the marginal density function can serve as a strong indicator of the “confidence” in the classification prediction, thereby demonstrating a useful qualitative aspect of marginals over continuous MRFs. Before turning to the main contributions of the paper, in the next section we give background motivation for the form of CCMRFs we study.

2 Motivation

Our treatment of CCMRFs is motivated by probabilistic similarity logic (PSL) [3]. PSL is a relational language that provides support for probabilistic reasoning about similarities.
PSL is similar to existing SRL models, e.g., MLNs [4], BLPs [5], RMNs [6], in that it defines a probabilistic graphical model over the properties and relations of the entities in a domain as a grounding of a set of rules that have attached parameters. However, PSL supports reasoning about “soft” truth values, which can be seen as similarities between entities or sets of entities, degrees of belief, or strength of relationships. PSL uses annotated logic rules to capture the dependency structure of the domain, based on which it builds a joint continuous probabilistic model over all decision atoms which can be expressed as a CCMRF as defined above. PSL has been used to reason about the similarity between concepts from different ontologies as well as articles from Wikipedia. Table 1 shows a simple PSL program for collective classification. The first rule states that documents with similar text are likely to have the same class. The second rule says that two documents which are linked to each other are also likely to be assigned the same class. Finally, we express the constraint that each document can have at most one class, that is, the class predicate is functional and can only map to one value. Such domain specific constraints motivate our introduction of equality and inequality constraints for CCMRFs. Rules and constraints are written in first order logic formalism and are grounded out against the observed data such that each ground rule constitutes one potential function or constraint computing the truth value of the formula. Rules have an associated weight λi which is used as the parameter for each associated potential function. The weights can be learned from training data. In the following, we make some assumptions about the nature of the constraints and the potential functions motivated by the requirements of the PSL framework and the types of CCMRFs modeled therein. 
Firstly, we assume all domains are the $[0, 1]$ interval, which corresponds to the domain of similarity truth values in PSL. Secondly, all constraints are assumed to be linear. Thirdly, the potential functions $\phi_j$ are of the form $\phi_j(x) = \max(0, o_j \cdot x + q_j)$, where $o_j^\top \in \mathbb{R}^n$ is an $n$-dimensional row vector and $q_j \in \mathbb{R}$. This particular form of the potential functions is motivated by the way similarity truth values are combined in PSL using t-norms (see [3] for details). While the techniques presented in this work are not specific to PSL, a brief outline of the PSL framework helps in understanding the assumptions about the CCMRFs of interest made in our algorithm and experiments. In Section 3.5 we show how our assumptions can be relaxed while maintaining polynomial-time guarantees for applications outside the PSL framework.

[Figure 3: a) Example of geometric marginal computation, showing the constrained domain over $X_1, X_2, X_3$ and the probability mass $P(0.4 \le X_2 \le 0.6)$; b) Hit-and-run and random ball walk illustration.]

3 Computing continuous marginals

This section contains the main technical contributions of this paper. We start our study of marginal computation for CCMRFs by proving that computing the exact density function is #P-hard (3.1). In Section 3.2, we discuss how to approximate the marginal distribution using an MCMC sampling scheme which produces a guaranteed ϵ-approximation in polynomial time under suitable conditions. We show how to improve the sampling scheme by detecting phases of slow convergence and present a technique to counteract them (3.3). Finally, we describe an algorithm based on the sampling scheme and its improvements (3.4). In addition, we discuss how to relax the linearity conditions in Section 3.5. Throughout this discussion we use the following simple example for illustration:

Example 1 Let $X = \{X_1, X_2, X_3\}$ be subject to the inequality constraint $x_1 + x_3 \le 1$. Let $\phi_1(x) = x_1$, $\phi_2(x) = \max(0, x_1 - x_2)$, $\phi_3(x) = \max(0, x_2 - x_3)$, where $\lambda = (1, 2, 1)$ are the associated free parameters.
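Example 1 is small enough to write out directly. A sketch of its unnormalized density (Eq. 1 without the normalizer $Z$, with the domain and inequality constraints folded in as hard zeros):

```python
import math

# Example 1: hinge potentials, weights, and the inequality constraint x1 + x3 <= 1.
phis = [lambda x: x[0],
        lambda x: max(0.0, x[0] - x[1]),
        lambda x: max(0.0, x[1] - x[2])]
lam = (1.0, 2.0, 1.0)

def unnormalized_density(x):
    """exp(-sum_j lambda_j * phi_j(x)) from Eq. 1, and 0 outside the
    constrained domain."""
    if not all(0.0 <= xi <= 1.0 for xi in x):      # domain [0, 1]^3
        return 0.0
    if x[0] + x[2] > 1.0:                          # inequality constraint B(x) <= b
        return 0.0
    return math.exp(-sum(l * phi(x) for l, phi in zip(lam, phis)))

print(unnormalized_density((0.0, 0.5, 1.0)))   # all hinge potentials inactive -> 1.0
print(unnormalized_density((1.0, 0.0, 0.5)))   # violates x1 + x3 <= 1 -> 0.0
```

Evaluating this density pointwise is trivial; the difficulty, as the next subsection shows, lies entirely in the normalization and marginalization integrals.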
3.1 Exact marginal computation

Theorem 1 Computing the marginal probability density function f_{X′}(x′) = ∫ f(x′, y) dy, where the integral ranges over y ∈ ×_{i: Xi ∉ X′} D̃_i, for a subset X′ ⊂ X under a probability measure P defined by a CCMRF is #P hard in the worst case.

We prove this statement by a simple reduction from the problem of computing the volume of an n-dimensional polytope defined by linear inequality constraints. To see the relationship to computational geometry, note that the domain D is an n-dimensional unit hypercube. Each linear inequality constraint Bi from the system B can be represented by a hyperplane which “cuts off” part of the hypercube D. Finally, the potential functions induce a probability distribution over the resulting convex polytope. Figure 3a) visualizes the domain for our running example in the 3-dimensional Euclidean space. The constraint domain is shown as a wedge. The highlighted area marks the region of probability mass that is equal to the probability P(0.4 ≤ X2 ≤ 0.6).

Proof 1 (Sketch) For any random variable X ∈ X, the marginal probability P(l ≤ X ≤ u) under the uniform probability distribution defined by a single potential function φ = 0 corresponds to the volume of the “slice” defined by the bounds l < u ∈ [0, 1] relative to the volume of the entire polytope. In [7] it was shown that computing the volume of such slices is at least as hard as computing the volume of the entire polytope, which is known to be #P-hard [8].

3.2 Approximate marginal computation and sampling scheme

Despite this hardness result, efficient approximation algorithms for convex volume computation based on MCMC techniques have been devised and yield polynomial-time approximation guarantees. We will review the techniques and then relate them to our problem of marginal computation. The first provably polynomial-time approximation algorithm for volume computation was based on “random ball-walks”.
Starting from some initial point p inside the polytope, one samples from the local density function of f restricted to the inside of a ball of radius r around the point p. If the newly sampled point p′ lies inside the polytope, we move to p′; otherwise we stay at p and repeat the sampling. If P is the uniform distribution (as typically chosen for volume computation; we ignore equality constraints for now, until the discussion of the algorithm in Section 3.4), the resulting Markov chain converges to P over the polytope in O*(n^3) steps, assuming that the starting distribution is not “too far” from P [9]. More recently, the hit-and-run sampling scheme [10] was rediscovered, which has the advantage that no strong assumptions about the initial distribution need to be made. As in the random ball walk, we start at some interior point p. Next, we generate a direction d (i.e., an n-dimensional vector of length 1) uniformly at random and compute the line segment l of the line p + αd that resides inside the polytope. We then compute the distribution of P over the segment l, sample a point from it, and move to the new sample point p′ to repeat the process. For P the uniform distribution, the Markov chain also converges after O*(n^3) steps, but for hit-and-run we only need to assume that the starting point p does not lie on the boundary of the polytope [2]. In [7], the authors show that hit-and-run significantly outperforms random ball walk sampling in practice, because it (1) does not get easily stuck in corners, since each sample is guaranteed to be drawn from inside the polytope, and (2) does not require setting parameters like the radius r, which greatly influences the performance of the random ball walk. Figure 3 b) shows an iteration of the random ball walk and the hit-and-run sampling schemes for our running example, restricted to just two dimensions to simplify the presentation.
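A single hit-and-run move over a polytope {x : Bx ≤ b} can be sketched in a few lines. This is our own illustrative code, not from the paper; for simplicity the target distribution here is uniform over the polytope rather than the CCMRF density, so a point on the feasible segment is drawn uniformly:

```python
import numpy as np

def hit_and_run_step(x, B, b, rng):
    """One hit-and-run move inside the polytope {x : B @ x <= b}: pick a
    uniformly random direction, find the feasible line segment through x,
    and move to a uniformly random point on it (uniform target density)."""
    d = rng.standard_normal(x.shape[0])
    d /= np.linalg.norm(d)                 # uniform direction on the sphere
    alpha_low, alpha_high = -np.inf, np.inf
    for cdi, slack in zip(B @ d, b - B @ x):
        if cdi > 0:                        # this constraint bounds alpha above
            alpha_high = min(alpha_high, slack / cdi)
        elif cdi < 0:                      # this constraint bounds alpha below
            alpha_low = max(alpha_low, slack / cdi)
    return x + rng.uniform(alpha_low, alpha_high) * d

# Unit square [0, 1]^2 written as B @ x <= b.
B = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
rng = np.random.default_rng(0)
x = np.array([0.5, 0.5])
for _ in range(100):
    x = hit_and_run_step(x, B, b, rng)
assert np.all(B @ x <= b + 1e-9)           # the chain stays in the polytope
```

Sampling the direction from independent standard normals and normalizing yields the uniform distribution on the unit sphere, which is what the scheme requires.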
We can see that, depending on the radius of the ball, a significant portion may not intersect with the feasible region. Lovász and Vempala [2] have proven a stronger result which shows that hit-and-run sampling converges for general log-linear distributions. Based on their result, we get a polynomial-time approximation guarantee for distributions induced by CCMRFs as defined above.

Theorem 2 The complexity of computing an approximate distribution σ* using the hit-and-run sampling scheme, such that the total variation distance of σ* and P is less than ϵ, is O*(ñ^3 (k_B + ñ + m)), where ñ = n − k_A, under the assumption that we start from an initial distribution σ such that the density function dσ/dP is bounded by M except on a set S with σ(S) ≤ ϵ/2.

Proof 2 (Sketch) Since A, B are linear, D̃ is an ñ = n − k_A dimensional convex polytope after dimensionality reduction through A. By definition, f is from the exponential family, and since all factors are linear or maximums of linear functions, f is a log concave function (maximums and sums of convex functions are convex). More specifically, f is a log concave and log piecewise linear function. Let σ_s be the distribution of the current point after s steps of hit-and-run have been applied to f starting from σ. Now, according to Theorem 1.3 from [2], for s > 10^30 (n^2 R^2 / r^2) ln^5(MnR / (rϵ)) the total variation distance of σ_s and P is less than ϵ, where r is such that the level set of f of probability 1/8 contains a ball of radius r, and R^2 ≥ E_f(|x − z_f|^2), where z_f is the centroid of f. Now, each hit-and-run step requires us to iterate over the random variable domain boundaries, O(ñ), compute intersections with the inequality constraints, O(ñ k_B), and integrate over the line segment involving all factors, O(ñ m).

3.3 Improved sampling scheme

Our proposed sampling algorithm is an implementation of the hit-and-run MCMC scheme.
(The O* notation ignores logarithmic factors and dependence on other parameters like error bounds.)

However, the theoretical treatment presented above leaves two questions unaddressed: 1) How do we get the initial distribution σ? 2) The hit-and-run algorithm assumes that all sample points are strictly inside the polytope and bounded away from its boundary. How can we get out of corners if we do get stuck? The theorem above assumes a suitable initial distribution σ; however, in practice, no such distribution is given. Lovász and Vempala also show that the hit-and-run scheme converges from a single starting point on uniform distributions, under the condition that it does not lie on the boundary and at the expense of an additional factor of n in the number of steps to be taken (compare Theorem 1.1 and Corollary 1.2 in [2]). We follow this approach and use a MAP state x_MAP of the distribution P as the single starting point for the sampling algorithm. Choosing a MAP state as the starting point has two advantages: 1) we are guaranteed that x_MAP is an interior point, and 2) it is the point with the highest probability density and therefore the highest probability mass in a small local neighborhood. However, starting from a MAP state elevates the importance of the second question, since the MAP state often lies exactly on the boundary of the polytope and therefore we are likely to start the sampling algorithm from a vertex of the polytope. The problem with corner points p is that most of the directions sampled uniformly at random will lead to line segments of zero length and hence we do not move between iterations. Let W be the subset of inequality constraints B that are “active” at the corner point p and b the corresponding entries in b, i.e., Wp = b (since all constraints are linear, we abuse notation and consider B, W to be matrices). In other words, the hyperplanes corresponding to the constraints in W intersect in p.
Now, for all directions d ∈ R^n such that there exist active constraints W_i, W_j with W_i d < 0 and W_j d > 0, the line segment through p induced by d must necessarily have length 0. It also follows that more active constraints increase the likelihood of getting stuck in a corner. For example, in Figure 3 b) the point x_MAP in the upper left hand corner denotes the MAP state of the distribution defined in our running example. If we generate a direction uniformly at random, only 1/4 of those will be feasible; that is, for all others we won’t be able to move away from x_MAP. To avoid the problem of repeatedly sampling infeasible directions at corner points, we propose to restrict the sampling of directions to feasible directions only when we determine that a corner point has been reached. We define a corner point p as a point inside the polytope where the number of active constraints is above some threshold θ. A direction d is feasible if Wd < 0. Assuming that there are a active constraints at corner point p (i.e., W has a rows), we sample each entry of the a-dimensional vector z from −|N(0, 1)|, where N(0, 1) is the standard Gaussian distribution with zero mean and unit variance. Now, we try to find directions d such that Wd ≤ z. A number of algorithms have been proposed to solve such systems of linear inequalities for feasible points d. In our sampling algorithm we implement the relaxation method introduced by Agmon [11] and Motzkin and Schoenberg [12] due to its simplicity. The relaxation method proceeds as follows: We start with d_0 = 0. At each iteration we check if W d_i ≤ z; if so, we have found a solution and terminate. If not, we choose the most “violated” inequality constraint W_k from W, i.e., the row vector W_k from W which maximizes (W_k d_i − z_k) / ‖W_k‖, and update the direction:

d_{i+1} = d_i + 2 (z_k − W_k d_i) / ‖W_k‖^2 · W_k^T

The relaxation method is guaranteed to terminate, since a feasible direction d always exists [12].
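The relaxation method described above can be sketched in a few lines (our own naming; `feasible_direction`, `max_iter`, and the example values of W and z are illustrative assumptions, not from the paper):

```python
import numpy as np

def feasible_direction(W, z, max_iter=1000):
    """Relaxation method of Agmon / Motzkin-Schoenberg: find d with
    W @ d <= z by repeatedly correcting the most violated inequality."""
    d = np.zeros(W.shape[1])
    row_norms = np.linalg.norm(W, axis=1)
    for _ in range(max_iter):
        violation = (W @ d - z) / row_norms
        k = int(np.argmax(violation))
        if violation[k] <= 0.0:            # W @ d <= z holds: done
            return d
        # Reflect across the violated hyperplane, as in the update rule.
        d = d + 2.0 * (z[k] - W[k] @ d) / (W[k] @ W[k]) * W[k]
    raise RuntimeError("relaxation method did not terminate")

# Two active constraints at a corner; z would be drawn from -|N(0,1)|,
# here fixed for reproducibility.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
z = np.array([-0.5, -0.3])
d = feasible_direction(W, z)
assert np.all(W @ d <= z + 1e-9)           # d is a feasible direction
```

Because z has strictly negative entries, the returned d satisfies Wd < 0 and therefore points into the interior of the polytope.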
3.4 Sampling algorithm

Algorithm: CCMRF Sampling
Input: CCMRF specified by RVs X with domains D = [0, 1]^n, equality constraints A(x) = a, inequality constraints B(x) ≤ b, potential functions φ, parameters Λ
Output: Marginal probability density histograms H[X_i] : [0, 1] → R+, for all X_i ∈ X

 1  if A = ∅
 2      P ← 1_|X|
 3      n′ ← n
 4  else
 5      r ← rank(A)
 6      [U, Σ, V] ← svd(A)
 7      P ← V|columns [r+1, n]
 8      n′ ← n − r
 9  x_0 ← MAP(A(x) = a, B(x) ≤ b, φ)
10  cornered ← FALSE
11  for j = 0 to ρ
12      if cornered
13          d ← 0
14          W ← B|rows: active × P
15          z ← (z_i ~ −|N(0, 1)| for all i = 1 . . . n′)
16          while ∃k : W_k d − z_k > 0
17              v ← argmax_k (W_k d − z_k) / ‖W_k‖
18              d ← d + 2 (z_v − W_v d) / ‖W_v‖^2 · W_v^T
19          cornered ← FALSE
20      else
21          d ← (d_i ~ N(0, 1) for all i = 1 . . . n′)
22          d ← d / ‖d‖
23      d ← P × d
24      active ← ∅
25      α_low ← −∞ ; α_high ← ∞
26      c_d ← B × d ; c_x ← B × x_j
27      for i = 1 . . . |rows(B)|
28          if c_d,i ≠ 0
29              a ← (b_i − c_x,i) / c_d,i
30              if c_d,i > 0 then α_high ← min(α_high, a)
31              if c_d,i < 0 then α_low ← max(α_low, a)
32              if a = 0 then active ← active ∪ {i}
33      if α_high − α_low = 0 ∧ |active| > θ
34          cornered ← TRUE
35          continue
36      M ← map : [0, 1] → R × R
37      for φ_i = max(0, o_i · x + q_i) ∈ φ
38          r ← λ_i (o_i · d)
39          c ← λ_i (o_i · x_j + q_i)
40          a ← −c / r
41          if r > 0 ∧ a < α_high
42              M(max(a, α_low)) ← M(max(a, α_low)) + [r, c]
43          else if r < 0 ∧ a > α_low
44              M(α_low) ← M(α_low) + [r, c]
45              if a < α_high then M(a) ← M(a) + [−r, −c]
46          else M(α_low) ← M(α_low) + [0, c]
47      [r_α, c_α] ← Σ_{a ≤ α} M(a)
48      Σ_α ← Σ_{a < b < α, ∄c: a < c < b} (1/r_a) e^{−c_a} (e^{−r_a a} − e^{−r_a b})
49      s ~ U[0, Σ_{α_high}]
50      a ← max{α ∈ M | Σ_α ≤ s}
51      α ← −(1/r_a) (log(−s r_a + r_a Σ_a + e^{−c_a − r_a a}) + c_a)
52      x_{j+1} ← x_j + α d
53      if j > ρ/100
54          H[i][x_{j+1,i}] ← H[i][x_{j+1,i}] + 1 for all i = 1 . . . n

Figure 1: Constrained continuous MRF sampling algorithm. (We used θ = 2 in our experiments.)

Putting the pieces together, we present the marginal distribution sampling algorithm in Figure 1. The inputs to the algorithm were discussed in Section 1. In addition, we assume that the domain restrictions D_i = [l, u] for the random variables X_i are encoded as pairs of linear inequality constraints l ≤ x_i ≤ u in B, b.
The algorithm first analyzes the equality constraints A to determine the number of “free” random variables and reduce the dimensionality accordingly. The singular-value decomposition of A is used to determine the n × n′ projection matrix P which maps from the null-space of A to the original space D, where n′ = n − rank(A) is the dimensionality of the null-space. If no equality constraints have been specified, P is the n-dimensional unit matrix. Next, the algorithm determines a MAP state x_0 of the density function defined by the CCMRF, which is the point with the highest probability mass, that is, x_0 = argmax_{x ∈ D̃} f(x). Since Z(Λ) is constant and the logarithm is monotonic, this is identical to x_0 = argmin_{x ∈ D̃} Σ_{i=1}^m λ_i φ_i(x). Hence, computing a MAP state can be cast as a linear optimization problem, since all constraints are linear and the potential functions are maximums of two linear functions. Linear optimization problems can be solved efficiently in time O(n^3.5) and are very fast in practice. After determining the null-space and starting point, we begin collecting ρ samples. If we detected being stuck in a corner during the previous iteration, we sample a direction d from the feasible subspace of all possible directions in the reduced null-space using the adapted relaxation method described above (lines 13-19). Otherwise, we sample a direction uniformly at random from the null-space of A. We then normalize the direction and project it back into our original domain D by matrix multiplication with P. The projection ensures that all equality constraints remain satisfied as we move along the direction d. Next, we compute the segment of the line l : x_j + αd inside the polytope defined by the inequality constraints B (lines 25-32). Iterating over all inequality constraints, we determine the value of α where l intersects constraint i.
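The SVD-based null-space reduction performed at the start of the algorithm can be sketched with NumPy; the function name and the example equality constraint below are our own illustrative choices:

```python
import numpy as np

def nullspace_projection(A):
    """n x (n - rank(A)) matrix P whose columns span the null-space of A.

    Moving along P @ d keeps any equality constraint A @ x = a satisfied,
    because A @ P = 0 by construction."""
    r = np.linalg.matrix_rank(A)
    # The rows of Vt beyond the rank index span the null-space of A.
    _, _, Vt = np.linalg.svd(A)
    return Vt[r:].T

# Illustrative functional constraint: x1 + x2 + x3 = 1.
A = np.array([[1.0, 1.0, 1.0]])
P = nullspace_projection(A)
assert P.shape == (3, 2)
# Any projected direction leaves A @ x unchanged.
d = P @ np.array([0.7, -1.2])
assert np.allclose(A @ d, 0.0)
```

Sampling directions in the reduced n′-dimensional space and projecting them with P is exactly what guarantees that the chain never leaves the affine subspace defined by the equality constraints.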
We keep track of the largest negative and smallest positive values to define the bounds [α_low, α_high] such that the line segment is defined exactly by those values of α inside this interval. In addition, we determine all active constraints, i.e., those constraints where the current sample point x_j is the point of intersection and hence α = 0. If the interval [α_low, α_high] has length 0, then we are currently sitting in a corner. If, in addition, the number of active constraints exceeds some threshold θ, we are stuck in a corner and abort the current iteration to start over with restricted direction sampling. In lines 36-48 we compute the cumulative density function of the probability P over the line segment l with α ∈ [α_low, α_high]. Based on our assumption in Section 2, the sum of potential functions S = Σ_{i=1}^m λ_i φ_i restricted to the line l is a continuous piecewise linear function. In order to integrate the density function, we need to segment S into its differentiable parts, so we start by determining the subintervals of [α_low, α_high] where S is linear and differentiable and can therefore be described by S = rα + c. We compute the slope r and intercept c for each potential function individually, as well as the point of undifferentiability a where the potential’s argument crosses 0. We use a map M to store the line description [r, c] with the point of intersection a (lines 36-46). Then, we compute the aggregate slope r_a and intercept c_a for the sum of all potentials at each point of undifferentiability a (line 47) and use this information to compute the unnormalized cumulative density function by integrating over each subinterval and summing these up in Σ_α (line 48). Now, Σ_a / Σ_{α_high} gives the cumulative probability mass for all points of undifferentiability a, which define the subintervals. Next, we sample a number s from the interval [0, Σ_{α_high}] uniformly at random (line 49) and compute α such that Σ_α = s (lines 50-51).
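The per-segment integration of exp(−S) over a piecewise linear S, together with the inverse-CDF sampling of α, can be sketched as follows. This is our own simplified rendering: the segment boundaries and per-segment (r, c) pairs are supplied directly rather than built from the map M, and the example density S(α) = |α| is purely illustrative:

```python
import numpy as np

def segment_mass(r, c, a, b):
    """Integral of exp(-(r*alpha + c)) over [a, b], one linear piece of S
    (the closed form used to accumulate the unnormalized CDF)."""
    if r == 0.0:
        return np.exp(-c) * (b - a)
    return np.exp(-c) / r * (np.exp(-r * a) - np.exp(-r * b))

def sample_alpha(pieces, rng):
    """Inverse-CDF sample of alpha with density proportional to
    exp(-S(alpha)); `pieces` is a list of (a, b, r, c) segments."""
    masses = [segment_mass(r, c, a, b) for a, b, r, c in pieces]
    s = rng.uniform(0.0, sum(masses))
    for (a, b, r, c), m in zip(pieces, masses):
        if s <= m:
            if r == 0.0:
                return a + s * np.exp(c)
            # Solve segment_mass(r, c, a, alpha) = s for alpha.
            return -np.log(np.exp(-r * a) - s * r * np.exp(c)) / r
        s -= m
    return pieces[-1][1]  # numerical edge case: end of the last piece

# Illustrative S(alpha) = |alpha| on [-1, 1]: two linear pieces.
pieces = [(-1.0, 0.0, -1.0, 0.0), (0.0, 1.0, 1.0, 0.0)]
rng = np.random.default_rng(1)
samples = [sample_alpha(pieces, rng) for _ in range(20000)]
# exp(-|alpha|) is symmetric about 0, so the sample mean is near zero.
assert abs(np.mean(samples)) < 0.05
```

Because each piece integrates in closed form, both the CDF accumulation and its inversion stay exact, which is why the cost of a sampling step is dominated by iterating over the potential functions rather than by any numerical quadrature.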
Finally, we move to the new sample point x_{j+1} = x_j + αd and add it to the histogram which approximates the marginal densities, provided the number of steps taken so far exceeds the burn-in period, which we configured to be 1% of the total number of steps.

3.5 Generalizing to convex continuous MRFs

In our treatment so far, we made specific assumptions about the constraints and potential functions. More generally, Theorem 2 holds when the inequality constraints as well as the potential functions are convex. A system of inequality constraints is convex if the set of all points that satisfy the constraints is convex, that is, any line connecting two points in the set is completely contained in the set. Our algorithm needs to be modified where we currently assume linearity. Firstly, computing a MAP state requires general convex optimization. Secondly, our method for finding feasible directions when caught in a corner of the polytope needs to be adapted to the case of arbitrary convex constraints. One simple approach is to use the tangent hyperplane at the point x_j as an approximation to the actual constraint and proceed as before. Similarly, we need to modify the computation of intersection points between the line and the convex constraints, as well as how we determine the points of undifferentiability. Lastly, the integrals over subintervals of the potential functions either require knowledge of the form of the potential functions so that they can be solved analytically, or they need to be approximated efficiently. The algorithm can handle arbitrary domains for the random variables as long as they are connected subintervals of R.

4 Experiments

This section presents an empirical evaluation of the proposed sampling algorithm on the problem of category prediction for Wikipedia documents based on similarity. After describing the data and the experimental methodology, we demonstrate that the computed marginal distributions effectively predict document categories.
Moreover, we show that analysis of the marginal distribution provides an indicator for the confidence in those predictions. Finally, we investigate the convergence rate and runtime performance of the algorithm in detail. For our evaluation dataset, we collected all Wikipedia articles that appeared in the featured list for a two-week period in Oct. 2009, thus obtaining 2460 documents. Of these, we considered a subset of 1717 documents assigned to the 7 most popular categories. After stemming and stop-word removal, we represented the text of each document as a tf/idf-weighted word vector. To measure the similarity between documents, we used the popular cosine metric on the weighted word vectors. The data contains the relation Link(fromDoc, toDoc), which establishes a hyperlink between two documents. We used K-fold cross-validation for K = 20, 25, 30, 35 by splitting the dataset into K non-overlapping subsets, each of which is determined using snowball sampling over the link structure from a randomly chosen initial document. For each training and test data subset, we randomly designate 20% of the documents as “seed documents” of which the category is observed, and the goal is to predict the categories of the remaining documents. All experiments were executed on identical hardware powered by two Intel Xeon Quad Core 2.3 GHz Processors and 8 GB of RAM.

4.1 Classification results

a) Classification accuracy:

K    Baseline   Marginals   Improvement
20   39.5%      55.8%       41.4%
25   39.1%      51.5%       31.7%
30   36.7%      51.1%       39.1%
35   38.8%      56.6%       46.1%

b) Standard deviation as an indicator for confidence:

K    P(Null Hypothesis)   Relative Difference ∆(σ)
20   1.95E-09             38.3%
25   2.40E-13             41.2%
30   <1.00E-16            43.5%
35   4.54E-08             39.0%

Figure 2: a) Classification Accuracy b) Std. deviation as an indicator for confidence

The baseline method uses only the document content by propagating document categories via textual similarity measured by the cosine distance.
Using rules and constraints similar to those presented in Table 1, we create a joint probabilistic model for collective classification of Wikipedia documents. We use PSL in two ways in this process: Firstly, PSL constructs the CCMRF by grounding the rules and constraints against the given data as described in Section 2, and secondly, we use the perceptron weight learning method provided by PSL to learn the free parameters of the CCMRF from the training data (see [3] for more detail). The sampling algorithm takes the constructed CCMRF and learned parameters as input and computes the marginal distributions for all random variables from 3 million samples. We have one random variable to represent the similarity for each possible document-category pair, that is, one RV for each grounding of the category predicate. For each document D we pick the category C with the highest expected similarity as our prediction. The accuracy in prediction of both methods is compared in Figure 2 a) over the 4 different splits of the data. We observe that the collective probabilistic model outperforms the baseline by up to 46%. All results are statistically significant at p = 0.02. While this result suggests that the sampling algorithm works in practice, it is neither surprising nor novel, since similar results for collective classification have been produced before using other approaches in statistical relational learning (e.g., compare [13]). However, the marginal distributions we obtain provide additional information beyond the simple point estimate of the expected value. In particular, we show that the standard deviation of the marginals can serve as an indicator for the confidence in a particular classification prediction.
In order to show this, we compute the standard deviation of the marginal distributions for those random variables picked during the prediction stage for each fold. (The featured lists are at http://en.wikipedia.org/wiki/Wikipedia:Featured_lists; see [3] for more information on the dataset.)

[Figure 3: a) KL Divergence by sample size, showing the average KL divergence together with the lowest- and highest-quartile KL divergence as a function of the number of samples (log-log scale); b) Runtime for 1000 samples, in seconds, as a function of the number of potential functions.]

We separate those values into two sets, S+, S−, based on whether the prediction turned out to be correct (+) or incorrect (−) when evaluated against the ground truth. Let σ+, σ− denote the average standard deviation for the values in S+, S− respectively. Our hypothesis is that we have higher confidence in the correct predictions, that is, σ+ will typically be smaller than σ−. In other words, we hypothesize that the relative difference between the average deviations, ∆(σ) = 2 (σ− − σ+) / (σ+ + σ−), is larger than 0. Under the corresponding null hypothesis, we would expect any difference in average standard deviation, and therefore any nonzero ∆(σ), to be purely coincidental or noise. Assuming that such noise in the ∆(σ)’s, which we computed for each fold, can be approximated by a Gaussian distribution with 0 mean and unknown variance, we test the null hypothesis using a two-tailed Z-test with the observed sample variance. The Z-test scores on the 4 differently sized splits are reported in Figure 2 b) and allow us to reject the null hypothesis with very high confidence. Figure 2 b) also lists ∆(σ) for each split averaged across the multiple folds and shows that σ− is about 40% larger than σ+ on average.
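The relative-difference statistic ∆(σ) is a one-liner to compute; the σ values below are made up for illustration:

```python
def relative_difference(sigma_plus, sigma_minus):
    """Relative difference between the average standard deviations of
    correct (+) and incorrect (-) predictions:
    Delta(sigma) = 2 (sigma_minus - sigma_plus) / (sigma_plus + sigma_minus)."""
    return 2.0 * (sigma_minus - sigma_plus) / (sigma_plus + sigma_minus)

# Illustrative values: if sigma_- is 1.5x sigma_+, Delta(sigma) = 0.4,
# in the range of the ~40% relative differences reported in Figure 2 b).
delta = relative_difference(0.10, 0.15)
assert abs(delta - 0.4) < 1e-12
```

Note the normalization by the mean of the two deviations, which makes ∆(σ) comparable across folds with different absolute scales.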
4.2 Algorithm performance

In investigating the performance of the sampling algorithm we are mainly interested in two questions: 1) How many samples does it take to converge on the marginal density functions? and 2) What is the computational cost of sampling? To answer the first question, we collect independent samples of varying size, from 31 thousand to 2 million, and one reference sample with 3 million steps for all folds. For each of the former samples we compare the marginals thus obtained to those of the reference sample by measuring their KL divergence. To compute the KL divergence we discretize the density function using a histogram with 10 bins. The center line in Figure 3 a) shows the average KL divergence with respect to the sample size across all folds. To study the impact of dimensionality on convergence, we order the folds by the number of random variables n and show the average KL divergence for the lowest and highest quartile, which contain 174-224 and 322-413 random variables respectively. The plot is drawn in log-log scale and suggests that each order-of-magnitude increase in sample size yields an order-of-magnitude improvement in KL divergence. To answer the second question, Figure 3 b) displays the time needed to generate 1000 samples with respect to the number of potential functions in the CCMRF. Computing the induced probability density function along the sampled line segment dominates the cost of each sampling step, and the graph shows that this cost grows linearly with the number of potential functions.

5 Conclusion

We have presented a novel approximation scheme for computing marginal probabilities over constrained continuous MRFs based on recent results in computational geometry and discussed techniques to improve its efficiency. We introduced an effective sampling algorithm and verified its performance in an empirical evaluation.
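The KL-divergence comparison against the reference sample can be sketched as follows (our own code; uniform draws stand in for the actual marginal samples, and the 10-bin discretization follows the text):

```python
import numpy as np

def histogram_kl(samples_p, samples_q, bins=10, eps=1e-9):
    """KL divergence between two sample sets after discretizing each
    into a fixed-bin histogram on [0, 1], as described in the text."""
    p, _ = np.histogram(samples_p, bins=bins, range=(0.0, 1.0))
    q, _ = np.histogram(samples_q, bins=bins, range=(0.0, 1.0))
    p = p.astype(float) + eps   # smooth to avoid log(0) on empty bins
    q = q.astype(float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, 3_000_000)     # stand-in for the reference sample
small = rng.uniform(0.0, 1.0, 31_000)
large = rng.uniform(0.0, 1.0, 2_000_000)
# Larger samples sit closer to the reference, mirroring the trend in the text.
assert histogram_kl(large, ref) < histogram_kl(small, ref)
```

The small additive smoothing term keeps the divergence finite when a bin is empty in one of the histograms, a practical necessity with few samples per fold.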
To our knowledge, this is the first study of the theoretical, practical, and empirical aspects of marginal computation in general constrained continuous MRFs. While our initial results are quite promising, there are still many further directions for research, including improved scalability, applications to other probabilistic inference problems, and using the confidence values to improve the prediction accuracy.

(Even if the standard deviations in S+, S− are not normally distributed, the central limit theorem postulates that their averages will eventually follow a normal distribution under independence assumptions.)

Acknowledgment

We thank Stanley Kok, Stephan Bach, and the anonymous reviewers for their helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0937094. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

References

[1] E. B. Sudderth. Graphical models for visual object recognition and tracking. Ph.D. thesis, Massachusetts Institute of Technology, 2006.
[2] L. Lovász and S. Vempala. Hit-and-run from a corner. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 310-314, Chicago, IL, USA, 2004. ACM.
[3] M. Broecheler, L. Mihalkova, and L. Getoor. Probabilistic similarity logic. In Conference on Uncertainty in Artificial Intelligence, 2010.
[4] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107-136, 2006.
[5] K. Kersting and L. De Raedt. Bayesian logic programs. Technical report, Albert-Ludwigs University, 2001.
[6] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proceedings of UAI-02, 2002.
[7] M. Broecheler, G. Simari, and V. S. Subrahmanian. Using histograms to better answer queries to probabilistic logic programs. In Logic Programming, pages 40-54, 2009.
[8] M. E.
Dyer and A. M. Frieze. On the complexity of computing the volume of a polyhedron. SIAM Journal on Computing, 17(5):967-974, October 1988.
[9] R. Kannan, L. Lovász, and M. Simonovits. Random walks and an O*(n^5) volume algorithm for convex bodies. Random Structures and Algorithms, 11(1):1-50, 1997.
[10] R. L. Smith. Efficient Monte Carlo procedures for generating points uniformly distributed over bounded regions. Operations Research, 32(6):1296-1308, 1984.
[11] S. Agmon. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):382-392, 1954.
[12] T. S. Motzkin and I. J. Schoenberg. The relaxation method for linear inequalities. In I. J. Schoenberg: Selected Papers, page 75, 1988.
[13] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.
Gaussian Process Preference Elicitation Edwin V. Bonilla, Shengbo Guo, Scott Sanner NICTA & ANU, Locked Bag 8001, Canberra ACT 2601, Australia {edwin.bonilla, shengbo.guo, scott.sanner}@nicta.com.au Abstract Bayesian approaches to preference elicitation (PE) are particularly attractive due to their ability to explicitly model uncertainty in users’ latent utility functions. However, previous approaches to Bayesian PE have ignored the important problem of generalizing from previous users to an unseen user in order to reduce the elicitation burden on new users. In this paper, we address this deficiency by introducing a Gaussian Process (GP) prior over users’ latent utility functions on the joint space of user and item features. We learn the hyper-parameters of this GP on a set of preferences of previous users and use it to aid in the elicitation process for a new user. This approach provides a flexible model of a multi-user utility function, facilitates an efficient value of information (VOI) heuristic query selection strategy, and provides a principled way to incorporate the elicitations of multiple users back into the model. We show the effectiveness of our method in comparison to previous work on a real dataset of user preferences over sushi types. 1 Introduction Preference elicitation (PE) is an important component of interactive decision support systems that aim to make optimal recommendations to users by actively querying their preferences. A crucial requirement for PE systems is that they should be able to make optimal or near optimal recommendations based only on a small number of queries. In order to achieve this, a PE system should (a) maintain a flexible representation of the user’s utility function; (b) handle uncertainty in a principled manner; (c) select queries that allow the system to discriminate amongst the highest utility items; and (d) allow for the incorporation of prior knowledge from different sources. 
While previous Bayesian PE approaches have addressed (a), (b) and (c), they appear to ignore an important aspect of (d) concerning generalization from previous users to a new unseen user in order to reduce the elicitation burden on new users. In this paper we propose a Bayesian PE approach to address (a)–(d), including generalization to new users, in an elegant and principled way. Our approach places a (correlated) Gaussian process (GP) prior over the latent utility functions on the joint space of user features (T, mnemonic for tasks) and item features (X). User preferences over items are then seen as drawn from the comparison of these utility function values. The main advantages of our GP-based Bayesian PE approach are as follows. First, due to the nonparametric Bayesian nature of GPs, we have a flexible model of the user’s utility function that can handle uncertainty and incorporate evidence straightforwardly. Second, by having a GP over the joint T × X space, we can integrate prior knowledge on user similarity or item similarity, or simply have more general-purpose covariances whose parameterization can be learned from observed preferences of previous users (i.e. achieving integration of multi-user information). Finally, our approach draws from concepts in the Gaussian process optimization and decision-making literature [1, 2] to propose a Bayesian decision-theoretic PE approach. Here the required expected value of information computations can be derived in closed-form to facilitate the selection of informative queries and determine the highest utility item from the available item set as quickly as possible. In this paper we focus on pairwise comparison queries for PE, which are known to have low cognitive load [3, 4]. In particular, we assume a likelihood model of pairwise preferences that factorizes over users and preferences, and a GP prior over the latent utility functions correlates users and items.
2 Problem Formulation

Let x denote a specific item (or product) that is described by a set of features x, and t denote a user (mnemonic for task) that can be characterized by features t. For a set of items X = {x_1, . . . , x_N} and users T = {t_1, . . . , t_M} we are given a set of training preference pairs:

D = { (t^(j), x^(j)_{k1} ≻ x^(j)_{k2}) | k = 1, . . . , K_j; k1, k2 ∈ {1, . . . , N}; j = 1, . . . , M },   (1)

where x^(j)_{k1} ≻ x^(j)_{k2} denotes that we have observed that user j prefers item k1 over item k2, and K_j is the number of preference relations observed for user j. The preference elicitation problem is that, given a new user described by a set of features t_*, we aim to determine (or elicit) what his/her preferences (or favourite items) are by asking a small number of queries of the form q_ij := x_i ≻ x_j, meaning that he/she prefers item i over item j. Ideally, we would like to obtain the best user preferences with the smallest possible number of queries. The key idea of this paper is to learn a Gaussian process (GP) model over users’ latent utility functions and use this model to drive the elicitation process for a new user. Due to the non-parametric Bayesian nature of GPs, this allows us to have a powerful model of the user’s utility function and to incorporate the evidence (i.e. the responses the user gives to our queries) in a principled manner. Our approach directly exploits: (a) user-relatedness, i.e. that users with similar characteristics may have similar preferences; (b) items’ similarities; and (c) the value of information of obtaining a response to a query in order to elicit the preferences of the user.

3 Likelihood Model

Our likelihood model considers that the users’ preference relationships are conditionally independent given the latent utility functions.
In other words, the probability that user t prefers item x over item x′, given their utility function values, is:

p(x ≻_t x′ | f(t, x), f(t, x′), ξ) = I[f(t, x) − f(t, x′) ≥ ξ]   with   p(ξ) = N(ξ | 0, σ²),   (2)

where I[·] is an indicator function that is 1 if the condition [·] is true and 0 otherwise, and σ² is the variance of the normally distributed variable ξ that dictates how different the latent functions should be for the corresponding relation to hold. Hence:

p(x ≻_t x′ | f(t, x), f(t, x′)) = ∫_{−∞}^{∞} I[f(t, x) − f(t, x′) ≥ ξ] N(ξ | 0, σ²) dξ   (3)
= Φ( (f(t, x) − f(t, x′)) / σ ),   (4)

where Φ(·) is the Normal cumulative distribution function (cdf). The conditional data-likelihood is then given by:

p(D | f) = ∏_{j=1}^{M} ∏_{k=1}^{K_j} Φ(z^j_k)   with   z^j_k = (1/σ) ( f(t^{(j)}, x^{(j)}_{k1}) − f(t^{(j)}, x^{(j)}_{k2}) ).   (5)

4 Modeling User Dependencies with a GP Prior

As mentioned above, we model user (and item) dependencies via the users' latent utility functions, which are assumed to be drawn from a GP prior that accounts for user similarity and item similarity directly:

f(t, x) ∼ GP( 0, κ_t(t, t′) κ_x(x, x′) ),   (6)

where κ_t(·, ·) is a covariance function on user-descriptors t and κ_x(·, ·) is a covariance function on item features x. We will denote the parameters of these covariance functions (so-called hyper-parameters) by θ_t and θ_x. (These types of priors have been considered previously in the regression setting, see e.g. [5].) Additionally, let f be the utility function values for all training users at all training input locations (i.e. items), so that f = [f(t^{(1)}, x^{(1)}), …, f(t^{(1)}, x^{(N)}), …, f(t^{(M)}, x^{(1)}), …, f(t^{(M)}, x^{(N)})]^T, and let F be the N × M matrix whose jth column contains the latent values for the jth user at all input points, such that f = vec F. Hence:

f ∼ N(0, Σ)   with   Σ = K_t ⊗ K_x,   (7)

where K_t is the covariance between all the training users, K_x is the covariance between all the training input locations, and ⊗ denotes the Kronecker product.
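To make equations (5) and (7) concrete, here is a small numerical sketch (our own illustration, not the authors' code; the array layout and helper names are assumptions):

```python
import numpy as np
from scipy.stats import norm

def prior_covariance(K_t, K_x):
    """Sigma = K_t (x) K_x, eq. (7): the prior covariance of f = vec F."""
    return np.kron(K_t, K_x)

def preference_log_likelihood(f, prefs, sigma=1.0):
    """log p(D | f), eq. (5): one probit factor per observed preference.
    f is an (M, N) array of utilities f(t^(j), x_k); prefs holds (j, k1, k2)
    tuples meaning user j prefers item k1 over item k2."""
    z = np.array([(f[j, k1] - f[j, k2]) / sigma for j, k1, k2 in prefs])
    return norm.logcdf(z).sum()

# A preference consistent with the utilities is more likely than its reverse.
f = np.array([[2.0, 0.0]])                      # one user, two items
ll_cons = preference_log_likelihood(f, [(0, 0, 1)])
ll_rev = preference_log_likelihood(f, [(0, 1, 0)])
Sigma = prior_covariance(np.eye(3), np.eye(2))  # 3 users x 2 items -> 6 x 6
```

With unit covariance matrices the Kronecker prior is just the 6 × 6 identity; in practice K_t and K_x come from κ_t and κ_x evaluated on the user and item features.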
Note that dependencies between users are not arbitrarily imposed; rather, they will be learned from the available data by optimizing the marginal likelihood. (We will describe the details of hyper-parameter learning in section 7.)

5 Posterior and Predictive Distributions

Given the data in (1) and the prior over the latent utility functions in equation (6), we can obtain the posterior distribution:

p(f | D, θ) = p(D | f, θ) p(f | θ) / p(D | θ),   (8)

where we have emphasized the dependency on the hyper-parameters θ, which include θ_t, θ_x and σ², and where p(D | θ) is the marginal likelihood (or evidence), p(D | θ) = ∫ p(D | f, θ) p(f | θ) df. The non-Gaussian nature of the conditional likelihood term (given in equation (5)) makes the above integral analytically intractable, and hence we will require approximations. In this paper we focus on analytical approximations; more specifically, we approximate the posterior p(f | D, θ) and the evidence using the Laplace approximation. The Laplace method approximates the true posterior with a Gaussian: p(f | D, θ) ≈ N(f | f̂, A^{−1}), where f̂ = argmax_f p(f | D, θ) = argmax_f p(D | f, θ) p(f | θ), and A is the Hessian of the negative log-posterior evaluated at f̂. Hence we consider the unnormalized expression p(D | f, θ) p(f | θ) and, omitting the terms that are independent of f, we focus on the maximization of the following expression:

ψ(f) = Σ_{j=1}^{M} Σ_{k=1}^{K_j} log Φ(z^j_k) − (1/2) f^T Σ^{−1} f.   (9)

Using Newton's method we obtain the following iterative update:

f^{new} = (W + Σ^{−1})^{−1} ( ∂log p(D | f, θ)/∂f + W f )   with   W_{pq} = − Σ_{j=1}^{M} Σ_{k=1}^{K_j} ∂² log Φ(z^j_k) / ∂f_p ∂f_q.   (10)

Once we have found the posterior mode f̂ by using the above iteration, we can show that:

p(f | D) ≈ N( f | f̂, (W + Σ^{−1})^{−1} ).   (11)

5.1 Predictive Distribution

In order to set up our elicitation framework we will also need the predictive distribution for a fixed test user t∗ at an unseen pair x^1_∗, x^2_∗.
This is given by:

p(f∗ | D) = ∫ p(f∗ | f) p(f | D) df   (12)
= N(f∗ | μ∗, C∗),   (13)

with:

μ∗ = k∗^T Σ^{−1} f̂   and   C∗ = Σ∗ − k∗^T (Σ + W^{−1})^{−1} k∗,   (14)

where Σ is defined as in equation (7) and:

k∗ = k^t_∗ ⊗ k^x_∗   (15)
k^t_∗ = [ κ_t(t∗, t^{(1)}), …, κ_t(t∗, t^{(M)}) ]^T   (16)
k^x_∗ = [ κ_x(x^1_∗, x^{(1)}), …, κ_x(x^1_∗, x^{(N)}) ; κ_x(x^2_∗, x^{(1)}), …, κ_x(x^2_∗, x^{(N)}) ]^T   (17)
Σ∗ = [ κ_t(t∗, t∗) κ_x(x^1_∗, x^1_∗) , κ_t(t∗, t∗) κ_x(x^1_∗, x^2_∗) ; κ_t(t∗, t∗) κ_x(x^2_∗, x^1_∗) , κ_t(t∗, t∗) κ_x(x^2_∗, x^2_∗) ].   (18)

6 Gaussian Process Preference Elicitation Framework

We now have the main components to set up our preference elicitation framework for a test user characterized by features t∗. Our main objective is to use the previously seen data (and the corresponding learned hyper-parameters) to drive the elicitation process, and to incorporate the information obtained from the user's responses back into our model in a principled manner. Our main requirement is a function that dictates the value of making a query q_{ij}. In other words, we aim to trade off the expected actual utility of the items involved in the query against the information these items will provide regarding the user's preferences. This is the exploration-exploitation dilemma, usually seen in optimization and reinforcement learning problems. We can address this issue by computing the expected value of information (EVOI, [2]) of making a query involving items i and j. Before defining the EVOI, we will make use of the concept of expected improvement, a measure that is commonly used in optimization methods based on response surfaces (see e.g. [1]).

6.1 Expected Improvement

We have seen in equation (13) that the predictive distribution for the utility function of a test user t∗ on item x follows a Gaussian distribution:

f(t∗, x | D, θ) ∼ N( μ∗(t∗, x), s²∗(t∗, x) ),   (19)

where μ∗(t∗, x) and s²∗(t∗, x) can be obtained by using (the marginalized version of) equation (14).
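These predictive moments, and the pairwise preference probability they induce under the probit likelihood (which the paper later uses in its closed-form EVOI computation), can be sketched as follows (our own illustration, not the authors' code; it forms W⁻¹ explicitly and therefore assumes W is invertible, which need not hold in general):

```python
import numpy as np
from scipy.stats import norm

def predictive_moments(f_hat, Sigma, W, k_star, Sigma_star):
    """mu* = k*^T Sigma^{-1} f_hat  and
    C*  = Sigma* - k*^T (Sigma + W^{-1})^{-1} k*   (eq. (14))."""
    mu_star = k_star.T @ np.linalg.solve(Sigma, f_hat)
    C_star = Sigma_star - k_star.T @ np.linalg.solve(
        Sigma + np.linalg.inv(W), k_star)
    return mu_star, C_star

def preference_probability(mu, C, sigma):
    """p(x^1 > x^2 | D): the difference f1 - f2 - xi is Gaussian with mean
    mu[0] - mu[1] and variance C[0,0] + C[1,1] - 2 C[0,1] + sigma^2, so the
    probability that it is non-negative is a single Normal cdf evaluation."""
    s = np.sqrt(C[0, 0] + C[1, 1] - 2.0 * C[0, 1] + sigma**2)
    return norm.cdf((mu[0] - mu[1]) / s)

# Toy check: a near-infinite W (a very confident likelihood) collapses the
# predictive covariance to Sigma* - k*^T Sigma^{-1} k*.
f_hat = np.array([1.0, 0.0])
mu, C = predictive_moments(f_hat, np.eye(2), 1e12 * np.eye(2),
                           np.eye(2), 2.0 * np.eye(2))
p12 = preference_probability(mu, C, sigma=0.5)
```

In practice one would avoid the explicit inverse of W and use a numerically stabler factorization, but the sketch keeps the algebra of equation (14) visible.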
Let us assume that at any point during the elicitation process we have an estimate of the utility of the best item, denoted f^{best}. If we define the predicted improvement at x as I = f(t∗, x | D, θ) − f^{best}, then the expected improvement (EI) of recommending item x (for a fixed user t∗) instead of recommending the best item x^{best} is given by:

EI(x | D) = ∫_0^∞ I p(I) dI = s∗(t∗, x) [ z′ Φ(z′) + φ(z′) ],   (20)

where z′ = (μ∗(t∗, x) − f^{best}) / s∗(t∗, x), Φ(·) is the Normal cumulative distribution function (cdf) and φ(·) is the Normal probability density function (pdf). Note that, for simplicity of notation, we have omitted the dependency of EI(x | D) on the user's features t∗. Hence the maximum expected improvement (MEI) under the current observed data D is:

MEI(D) = max_x EI(x | D).   (21)

6.2 Expected Value of Information

We can now define the expected value of information (EVOI) as the expected gain in improvement obtained by adding a query involving a particular pairwise relation. Thus, the expected value of information of obtaining the response to the query involving items x^i_∗, x^j_∗, with corresponding utility values f∗ = (f∗(t∗, x^i_∗), f∗(t∗, x^j_∗))^T, is given by:

EVOI(D, i, j) = −MEI(D) + ⟨ Σ_{q_{ij}} p(q_{ij} | f∗, D) MEI(D ∪ q_{ij}) ⟩_{p(f∗ | D)}   (22)
= −MEI(D) + ⟨ p(x^i_∗ ≻ x^j_∗ | f∗, D) ⟩_{p(f∗ | D)} MEI(D ∪ {x^i_∗ ≻ x^j_∗}) + ⟨ p(x^j_∗ ≻ x^i_∗ | f∗, D) ⟩_{p(f∗ | D)} MEI(D ∪ {x^j_∗ ≻ x^i_∗}),   (23)

Algorithm 1: Gaussian Process Preference Elicitation
Require: hyper-parameters θ_x, θ_t, θ_σ {learned from M previous users} and corresponding D
repeat
  for all candidate pairs (i, j) do
    Compute EVOI(i, j, D, f̂, W) {equation (23)}
  end for
  (i∗, j∗) ← argmax_{i,j} EVOI(i, j) {best pair}
  Remove (i∗, j∗) from candidate list
  if q_{i∗,j∗} is true then {ask user and set true preference}
    (i_true, j_true) ← (i∗, j∗)
  else
    (i_true, j_true) ← (j∗, i∗)
  end if
  D ← D ∪ (t_{M+1}, x_{i_true} ≻ x_{j_true}) {Expand D and get D+}
  Update f̂, W {i.e.
P(f|D) as in equation (10)}
until Satisfied

where

⟨ p(x^i_∗ ≻ x^j_∗ | f∗, D) ⟩_{p(f∗ | D)} = p(x^i_∗ ≻ x^j_∗ | D)   (24)
= ∫ p(x^i_∗ ≻ x^j_∗ | f∗, D) p(f∗ | D) df∗   (25)
= ∫ ∫ I[f∗_i − f∗_j ≥ ξ] N(ξ | 0, σ²) N(f∗ | μ∗, C∗) dξ df∗   (26)
= Φ( (μ∗_i − μ∗_j) / √( C∗_{i,i} + C∗_{j,j} − 2 C∗_{i,j} + σ² ) ),   (27)

with μ∗ and C∗ as defined in (14). Note that in our model p(x^j_∗ ≻ x^i_∗ | D) = 1 − p(x^i_∗ ≻ x^j_∗ | D). As mentioned above, f^{best} can be thought of as an estimate of the utility of the best item, as its true utility is unknown. In practice we maintain our beliefs over the utilities of the items, p(f | D+), for the training users and the test user, where D+ denotes the data extended by the set of seen relationships on the test user (which is initially empty). Hence, we can set f^{best} = max_i F̂+_{i, M+1}, where F̂+ is the matrix containing the mean estimates of the latent utility function distribution given by the Laplace approximation in equation (9). Alternatively, we can draw samples from this distribution and apply the max operator. In order to elicit preferences from a new user we simply select the query that maximizes the expected value of information EVOI as defined in equation (23). A summary of our approach is presented in Algorithm 1. We note that although, in principle, one could also update the hyper-parameters based on the data provided by the new user, we avoid this in order to keep computations manageable at query time. The reasoning is that, implicitly, we have learned the utility functions over all users, and we represent the utility of the test user (explicitly) on demand, updating our beliefs to incorporate the information provided by the user's responses.

7 Hyper-parameter Learning

Throughout this paper we have assumed that we have learned a Gaussian process model for the utility functions over users and items based upon previously seen preference relations.
The hyper-parameters of our model are the parameters θ_t and θ_x of the covariance functions (κ_t and κ_x respectively) and θ_σ = log σ, where σ² is the “noise” variance. Although it is entirely possible to use prior knowledge of these covariance functions (or their corresponding parameter settings) for the specific problem under consideration, in many practical applications such prior knowledge is not available and one needs to tune this parameterization based upon the available data. Fortunately, as in the standard GP regression framework, we can achieve this in a principled way through maximization of the marginal likelihood (or evidence). As in the case of the posterior distribution, the marginal likelihood is analytically intractable and approximations are needed. The Laplace approximation to the marginal log-likelihood is given by:

log p(D | θ) ≈ −(1/2) log |Σ W + I| − (1/2) f̂^T Σ^{−1} f̂ + Σ_{j=1}^{M} Σ_{k=1}^{K_j} log Φ(ẑ^j_k),   (28)

where ẑ^j_k = z^j_k evaluated at f̂; f̂ and W are defined as in (10), and Σ is defined as in equation (7). Note that computations are not carried out at all M × N data-points but only at those locations that “support” the seen relations, and hence we should strictly write, e.g., f̂_o, Σ_o, where the subscript o indicates this fact. However, for simplicity, we have omitted this notation. Given the equation above, gradient-based optimization can be used for learning the hyper-parameters of our model. As we shall see in the following section, for our experiments we do not have much prior information on suitable hyper-parameter settings, and therefore we have carried out hyper-parameter learning by maximization of the marginal log-likelihood.

8 Experiments & Results

In this section we describe the dataset used in our experiments, the evaluation setting, and the results obtained with our model and other baseline methods.

8.1 The Sushi Dataset

We evaluate our approach on the Sushi dataset [6].
Here we present a brief description of this dataset and the pre-processing we have carried out in order to apply our method. The reader is referred to [6] for more details. The Sushi dataset contains full rankings given by 5000 Japanese users over N = 10 different types of sushi. Each sushi is associated with a set of features which include style, major group, minor group, heaviness, consumption frequency, normalized price and sell frequency. The first three features are categorical, and therefore we have created the corresponding dummy variables to be used by our method. The resulting features are then represented by a 15-dimensional vector (x). Each user is also represented by a set of features which include gender, age and other features that compile geographical/regional information. As with the item features, we have created dummy variables for the categorical features, which resulted in an 85-dimensional feature vector (t) for each user. As pointed out in the documentation of the dataset, Japanese food preferences are strongly correlated with geographical and regional information. Therefore, modeling user similarities may provide useful information during the elicitation process.

8.2 Evaluation Methodology and Experimental Details

We evaluate our method via 10-fold cross-validation, where we have sub-sampled the training folds in order to (a) keep the computational burden as low as possible and (b) show that we can learn sensible parameterizations based upon relatively few preferences seen on previous users. In particular, we have subsampled 50 training users and selected about 5 training pairwise preferences drawn from each of the N = 10 available items. For the GPs we have used the squared exponential (SE) covariance function with automatic relevance determination (ARD) for both κ_t and κ_x, and have carried out hyper-parameter learning via gradient-based optimization of the marginal likelihood in equation (28).
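The SE covariance with ARD used here has the standard form; a minimal sketch (our own illustration, with hypothetical parameter names):

```python
import numpy as np

def se_ard(a, b, signal_var, lengthscales):
    """Squared exponential covariance with automatic relevance determination:
    k(a, b) = signal_var * exp(-0.5 * sum_d ((a_d - b_d) / l_d)^2).
    One lengthscale per input dimension means irrelevant features can be
    effectively switched off as their learned lengthscale grows."""
    d = (np.asarray(a) - np.asarray(b)) / np.asarray(lengthscales)
    return signal_var * np.exp(-0.5 * np.dot(d, d))

k_same = se_ard([1.0, 2.0], [1.0, 2.0], 1.0, [1.0, 1.0])  # = signal variance
k_far = se_ard([1.0, 2.0], [4.0, 2.0], 1.0, [1.0, 1.0])   # decays with distance
```

The same functional form serves for both κ_t (on the 85-dimensional user vectors) and κ_x (on the 15-dimensional item vectors), with separate hyper-parameters.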
We have initialized the hyper-parameters of the models deterministically, setting the signal variance and the length-scales of the covariance functions to initial values of 1, and the σ² parameter to 0.01. In order to measure the quality of our preference elicitation approach we use the normalized loss as a function of the number of queries, where at each iteration the method provides a recommendation based on the available information. The normalized loss function is defined as (u_best − u_pred)/u_best, where u_best is the best utility for a specific item/user and u_pred is the utility achieved by the recommendation provided by the system.

Figure 1: The normalized average loss as a function of the number of queries with 2 standard (errors of the mean) error bars. (a) The performance of our model compared to the RVOI method described in [7] and the B&L heuristic over the full set of 5000 test users. (b) The performance of our model when the hyper-parameters have been optimized via maximization of the marginal likelihood (GPPE-OPT) compared to the same GP elicitation framework when these hyper-parameters have been set to their default values (GPPE-PRIOR).

We compare our approach to two baseline methods. One is the restricted value of information algorithm [7] and the other is the best and largest heuristic, which we will refer to as the RVOI method and the B&L heuristic respectively. The RVOI approach is also a VOI-based method, but it does not leverage information from other users and it considers diagonal Gaussians as prior models of the latent utility functions. The B&L heuristic selects the current best item and the one with the largest uncertainty.
Both baselines have been shown to be competitive methods for preference elicitation (see [7] for more details). Additionally, we compare our method when the hyper-parameters have been learned on the set of previously seen users against the same GP elicitation approach when the hyper-parameters have been set to the initial values described above. This allows us to show that, when prior information on user and item similarity is not available, our model does indeed learn sensible settings of the hyper-parameters, which lead to better quality elicitation outcomes.

8.3 Results

Figure 1(a) shows the normalized average loss across all 5000 users as a function of the number of queries. As can be seen, on average, all competing methods reduce the expected loss as the number of queries increases. More importantly, our method (GPPE) clearly outperforms the other algorithms even for a small number of queries. This demonstrates that our approach effectively exploits the inter-relations between users and items in order to enhance the elicitation process on a new user. Although it may be surprising that the B&L heuristic outperforms the RVOI method, we point out that the evaluation of these methods presented in [7] did not consider real datasets as we do in our experiments. Figure 1(b) shows the normalized average loss across all 5000 users for our method when the hyper-parameters have been set to the initial values described in section 8.2 (labeled in the figure as GPPE-PRIOR) and when the hyper-parameters have been optimized by maximization of the marginal likelihood on a set of previously seen users (labeled in the figure as GPPE-OPT). We can see that, indeed, the GPPE model that learns the hyper-parameters from previous users' data significantly outperforms the same method when these (hyper-)parameters are not optimized.

9 Related Work

Preference elicitation (PE) is an important component of recommender systems and market research.
Traditional PE frameworks focus on modeling and eliciting a single user's preferences. We can categorize different PE frameworks in terms of query types. In [8], the authors propose to model utilities as random variables, and refine utility uncertainty by using standard gamble queries. The same query type is also used in [9], which differs from [8] in treating PE as a Partially Observable Markov Decision Process (POMDP). However, standard gamble queries are difficult for users to respond to, and naturally lead to noisy responses. Simpler query types have also been used for PE. For example, [7] uses pairwise comparison queries, which are believed to have low cognitive load. Our work also adopts simple pairwise comparison queries, but it differs from [7] in that it makes use of users' preferences that have been seen before and does not assume additive independent utilities. In the machine learning community, preference learning has received substantial interest over the past few years. For example, one of the most recent approaches to preference learning is presented in [10], where a multi-task learning approach to the problem of modeling human preferences is adopted by extending the model in [11] to deal with preference data. Their model follows a hierarchical approach based on finite Gaussian processes (GPs), where inter-user similarities are exploited by assuming that the subjects share a set of hyper-parameters. Their model differs from ours in that they consider the dual representation of the GPs, as they do not generalize over user features. Furthermore, they do not address the elicitation problem, which is the main concern of this paper. Extensions of the Gaussian process formalism to model ordinal data and user preferences are given in [12] and [13]. Both their prior and their likelihood models can be seen as single-user (task) specifications of our model.
In other words, unlike the work of [10], their model (like ours) takes the function-space view of GPs but, unlike [10] and our approach, they do not address the multi-task case or generalize across users. More importantly, an elicitation framework for actively querying the user is not presented in those works. [14] proposes an active preference learning method for discrete choice data. Their approach is based on the model in [13]. Unlike our approach, they do not leverage information from seen preferences on previous users, and hence their active preference learning process on a new user starts from scratch. This leads to the problem of either relying on good prior information on the covariance function or on hyper-parameter updating during the active learning process, which is computationally too expensive to be used in practice. Additionally, as their concern is a possibly infinite set of discrete choices, their approach relies entirely on the expected improvement (EI) measure.

10 Conclusions & Future Work

In this paper we have presented a Gaussian process approach to the problem of preference elicitation. One of the crucial characteristics of our method is that it exploits user similarity via a (correlated) Gaussian process prior over the users' latent utility functions. These similarities are “learned” from preferences of previous users. Our method maintains a flexible representation of the user's latent utility function, handles uncertainty in a principled manner, and allows the incorporation of prior knowledge from different sources. The required expected value of information computations can be derived in closed form to facilitate the selection of informative queries and determine the highest utility item from the available item set as quickly as possible. We have shown the benefits of our method on a real dataset of 5000 users with preferences over 10 sushi types.
In future work we aim to investigate other elicitation problems, such as those involving a Likert scale [15], where our approach may be effective. The main practical constraint is that, in order to carry out the evaluation (but not the application) of our method on real data, we require the full set of preferences of the users over a set of items. Our main motivation for the Laplace method is its computational efficiency. However, [10] has shown that this method is a good approximation to the posterior in the context of the preference learning problem. We intend to investigate other approximation methods to the posterior and marginal likelihood, and their joint application with sparse approximation methods within our framework (see e.g. [16]), which will be required if the number of training users is large.

Acknowledgments

NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.

References

[1] Donald R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001.
[2] R. A. Howard. Information value theory. IEEE Transactions on Systems Science and Cybernetics, 2(1):22–26, 1966.
[3] Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 363–369. AAAI Press / The MIT Press, 2000.
[4] Vincent Conitzer. Eliciting single-peaked preferences using comparison queries. Journal of Artificial Intelligence Research, 35:161–191, 2009.
[5] Edwin V. Bonilla, Kian Ming A. Chai, and Christopher K. I. Williams. Multi-task Gaussian process prediction. In J.C. Platt, D. Koller, Y. Singer, and S.
Roweis, editors, Advances in Neural Information Processing Systems 20, pages 153–160. MIT Press, Cambridge, MA, 2008.
[6] Toshihiro Kamishima. Nantonac collaborative filtering: recommendation based on order responses. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 583–588, New York, NY, USA, 2003. ACM.
[7] Shengbo Guo and Scott Sanner. Real-time multiattribute Bayesian preference elicitation with pairwise comparison queries. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[8] Urszula Chajewska and Daphne Koller. Utilities as random variables: Density estimation and structure discovery. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 63–71. Morgan Kaufmann Publishers Inc., 2000.
[9] Craig Boutilier. A POMDP formulation of preference elicitation problems. In Proceedings of the 18th National Conference on Artificial Intelligence, pages 239–246, Menlo Park, CA, USA, 2002. American Association for Artificial Intelligence.
[10] Adriana Birlutiu, Perry Groot, and Tom Heskes. Multi-task preference learning with an application to hearing aid personalization. Neurocomputing, 73(7-9):1177–1185, 2010.
[11] Kai Yu, Volker Tresp, and Anton Schwaighofer. Learning Gaussian processes from multiple tasks. In Proceedings of the 22nd International Conference on Machine Learning, pages 1012–1019, New York, NY, USA, 2005. ACM.
[12] Wei Chu and Zoubin Ghahramani. Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6:1019–1041, 2005.
[13] Wei Chu and Zoubin Ghahramani. Preference learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 137–144, New York, NY, USA, 2005. ACM.
[14] Eric Brochu, Nando de Freitas, and Abhijeet Ghosh. Active preference learning with discrete choice data. In J.C. Platt, D. Koller, Y. Singer, and S.
Roweis, editors, Advances in Neural Information Processing Systems 20, pages 409–416. MIT Press, Cambridge, MA, 2008.
[15] Rensis Likert. A technique for the measurement of attitudes. Archives of Psychology, 22(140):1–55, 1932.
[16] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
2010
4,140
A Theory of Multiclass Boosting

Indraneel Mukherjee  Robert E. Schapire
Princeton University, Department of Computer Science, Princeton, NJ 08540
{imukherj,schapire}@cs.princeton.edu

Abstract

Boosting combines weak classifiers to form highly accurate predictors. Although the case of binary classification is well understood, in the multiclass setting, the “correct” requirements on the weak classifier, or the notion of the most efficient boosting algorithms are missing. In this paper, we create a broad and general framework, within which we make precise and identify the optimal requirements on the weak-classifier, as well as design the most effective, in a certain sense, boosting algorithms that assume such requirements.

1 Introduction

Boosting [17] refers to a general technique of combining rules of thumb, or weak classifiers, to form highly accurate combined classifiers. Minimal demands are placed on the weak classifiers, so that a variety of learning algorithms, also called weak-learners, can be employed to discover these simple rules, making the algorithm widely applicable. The theory of boosting is well-developed for the case of binary classification. In particular, the exact requirements on the weak classifiers in this setting are known: any algorithm that predicts better than random on any distribution over the training set is said to satisfy the weak learning assumption. Further, boosting algorithms that minimize loss as efficiently as possible have been designed. Specifically, it is known that the Boost-by-majority [6] algorithm is optimal in a certain sense, and that AdaBoost [11] is a practical approximation. Such an understanding would be desirable in the multiclass setting as well, since many natural classification problems involve more than two labels, e.g. recognizing a digit from its image, natural language processing tasks such as part-of-speech tagging, and object recognition in vision.
However, for such multiclass problems, a complete theoretical understanding of boosting is lacking. In particular, we do not know the “correct” way to define the requirements on the weak classifiers, nor has the notion of optimal boosting been explored in the multiclass setting. Straightforward extensions of the binary weak-learning condition to multiclass do not work. Requiring less error than random guessing on every distribution, as in the binary case, turns out to be too weak for boosting to be possible when there are more than two labels. On the other hand, requiring more than 50% accuracy even when the number of labels is much larger than two is too stringent, and simple weak classifiers like decision stumps fail to meet this criterion, even though they often can be combined to produce highly accurate classifiers [9]. The most common approaches so far have relied on reductions to binary classification [2], but it is hardly clear that the weak-learning conditions implicitly assumed by such reductions are the most appropriate. The purpose of a weak-learning condition is to clarify the goal of the weak-learner, thus aiding in its design, while providing a specific minimal guarantee on performance that can be exploited by a boosting algorithm. These considerations may significantly impact learning and generalization because knowing the correct weak-learning conditions might allow the use of simpler weak classifiers, which in turn can help prevent overfitting. Furthermore, boosting algorithms that more efficiently and effectively minimize training error may prevent underfitting, which can also be important. In this paper, we create a broad and general framework for studying multiclass boosting that formalizes the interaction between the boosting algorithm and the weak-learner. 
Unlike much, but not all, of the previous work on multiclass boosting, we focus specifically on the most natural, and perhaps weakest, case in which the weak classifiers are genuine classifiers in the sense of predicting a single multiclass label for each instance. Our new framework allows us to express a range of weak-learning conditions, both new ones and most of the ones that had previously been assumed (often only implicitly). Within this formalism, we can also now finally make precise what is meant by correct weak-learning conditions that are neither too weak nor too strong. We focus particularly on a family of novel weak-learning conditions that have an especially appealing form: like the binary conditions, they require performance that is only slightly better than random guessing, though with respect to performance measures that are more general than ordinary classification error. We introduce a whole family of such conditions since there are many ways of randomly guessing on more than two labels, a key difference between the binary and multiclass settings. Although these conditions impose seemingly mild demands on the weak-learner, we show that each one of them is powerful enough to guarantee boostability, meaning that some combination of the weak classifiers has high accuracy. And while no individual member of the family is necessary for boostability, we also show that the entire family taken together is necessary in the sense that for every boostable learning problem, there exists one member of the family that is satisfied. Thus, we have identified a family of conditions which, as a whole, is necessary and sufficient for multiclass boosting. Moreover, we can combine the entire family into a single weak-learning condition that is necessary and sufficient by taking a kind of union, or logical OR, of all the members. This combined condition can also be expressed in our framework.
With this understanding, we are able to characterize previously studied weak-learning conditions. In particular, the condition implicitly used by AdaBoost.MH [19], which is based on a one-against-all reduction to binary, turns out to be strictly stronger than necessary for boostability. This also applies to AdaBoost.M1 [9], the most direct generalization of AdaBoost to multiclass, whose conditions can be shown to be equivalent to those of AdaBoost.MH in our setting. On the other hand, the condition implicit to Zhu et al.’s SAMME algorithm [21] is too weak in the sense that even when the condition is satisfied, no boosting algorithm can guarantee to drive down the training error. Finally, the condition implicit to AdaBoost.MR [19, 9] (also called AdaBoost.M2) turns out to be exactly necessary and sufficient for boostability. Employing proper weak-learning conditions is important, but we also need boosting algorithms that can exploit these conditions to effectively drive down error. For a given weak-learning condition, the boosting algorithm that drives down training error most efficiently in our framework can be understood as the optimal strategy for playing a certain two-player game. These games are nontrivial to analyze. However, using the powerful machinery of drifting games [8, 16], we are able to compute the optimal strategy for the games arising out of each weak-learning condition in the family described above. These optimal strategies have a natural interpretation in terms of random walks, a phenomenon that has been observed in other settings [1, 6]. Our focus in this paper is only on minimizing training error, which, for the algorithms we derive, provably decreases exponentially fast with the number of rounds of boosting. Such results can be used in turn to derive bounds on the generalization error using standard techniques that have been applied to other boosting algorithms [18, 11, 13]. (We omit these due to lack of space.) 
The game-theoretic strategies are non-adaptive in that they presume prior knowledge about the edge, that is, how much better than random are the weak classifiers. Algorithms that are adaptive, such as AdaBoost, are much more practical because they do not require such prior information. We show therefore how to derive an adaptive boosting algorithm by modifying one of the game-theoretic strategies. We present experiments aimed at testing the efficacy of the new methods when working with a very weak weak-learner to check that the conditions we have identified are indeed weaker than others that had previously been used. We find that our new adaptive strategy achieves low test error compared to other multiclass boosting algorithms which usually heavily underfit. This validates the potential practical benefit of a better theoretical understanding of multiclass boosting.

Previous work. The first boosting algorithms were given by Schapire [15] and Freund [6], followed by their AdaBoost algorithm [11]. Multiclass boosting techniques include AdaBoost.M1 and AdaBoost.M2 [11], as well as AdaBoost.MH and AdaBoost.MR [19]. Other approaches include [5, 21]. There are also more general approaches that can be applied to boosting including [2, 3, 4, 12]. Two game-theoretic perspectives have been applied to boosting. The first one [10, 14] views the weak-learning condition as a minimax game, while drifting games [16, 6] were designed to analyze the most efficient boosting algorithms. These games have been further analyzed in the multiclass and continuous time setting in [8].

2 Framework

We introduce some notation. Unless otherwise stated, matrices will be denoted by bold capital letters like M, and vectors by bold small letters like v. Entries of a matrix and vector will be denoted as M(i, j) or v(i), while M(i) will denote the ith row of a matrix. The inner product of two vectors u, v is denoted by ⟨u, v⟩, and the Frobenius inner product Tr(MM′⊤) of two matrices M, M′ will be denoted by M • M′.
The indicator function is denoted by 1[·]. The set of distributions over the set {1, . . . , k} will be denoted by ∆{1, . . . , k}. In multiclass classification, we want to predict the labels of examples lying in some set X. Each example x ∈ X has a unique label y in the set {1, . . . , k}, where k ≥ 2. We are provided a training set of labeled examples {(x1, y1), . . . , (xm, ym)}. Boosting combines several mildly powerful predictors, called weak classifiers, to form a highly accurate combined classifier, and has been previously applied for multiclass classification. In this paper, we only allow weak classifiers that predict a single class for each example. This is appealing, since the combined classifier has the same form, although it differs from what has been used in much previous work. We adopt a game-theoretic view of boosting. A game is played between two players, Booster and Weak-Learner, for a fixed number of rounds T. With binary labels, Booster outputs a distribution in each round, and Weak-Learner returns a weak classifier achieving more than 50% accuracy on that distribution. The multiclass game is an extension of the binary game. In particular, in each round t: (1) Booster creates a cost-matrix C_t ∈ R^{m×k}, specifying to Weak-Learner that the cost of classifying example x_i as l is C_t(i, l). The cost-matrix may not be arbitrary, but should conform to certain restrictions as discussed below. (2) Weak-Learner returns some weak classifier h_t ∈ H, h_t : X → {1, . . . , k}, from a fixed space H, so that the cost incurred, C_t • 1_{h_t} = Σ_{i=1}^m C_t(i, h_t(x_i)), is "small enough", according to some conditions discussed below. Here by 1_h we mean the m × k matrix whose (i, j)-th entry is 1[h(x_i) = j]. (3) Booster computes a weight α_t for the current weak classifier based on how much cost was incurred in this round.
At the end, Booster predicts according to the weighted plurality vote of the classifiers returned in each round:

H(x) ≜ argmax_{l ∈ {1,...,k}} f_T(x, l), where f_T(x, l) ≜ Σ_{t=1}^T α_t 1[h_t(x) = l]. (1)

By carefully choosing the cost matrices in each round, Booster aims to minimize the training error of the final classifier H, even when Weak-Learner is adversarial. The restrictions on cost-matrices created by Booster, and the maximum cost Weak-Learner can suffer in each round, together define the weak-learning condition being used. For binary labels, the traditional weak-learning condition states: for any non-negative weights w(1), . . . , w(m) on the training set, the error of the weak classifier returned is at most (1/2 − γ/2) Σ_i w(i). Here γ parametrizes the condition. There are many ways to translate this condition into our language. The one with fewest restrictions on the cost-matrices requires that labeling correctly be less costly than labeling incorrectly: ∀i : C(i, y_i) ≤ C(i, ȳ_i), while the restriction on the returned weak classifier h requires less cost than predicting randomly: Σ_i C(i, h(x_i)) ≤ Σ_i [(1/2 − γ/2) C(i, ȳ_i) + (1/2 + γ/2) C(i, y_i)]. By the correspondence w(i) = C(i, ȳ_i) − C(i, y_i), we may verify the two conditions are the same. We will rewrite this condition after making some simplifying assumptions. Henceforth, without loss of generality, we assume that the true label is always 1. Let C^bin ⊆ R^{m×2} consist of matrices C which satisfy C(i, 1) ≤ C(i, 2). Further, let U^bin_γ ∈ R^{m×2} be the matrix each of whose rows is (1/2 + γ/2, 1/2 − γ/2). Then, a Weak-Learner searching the space H satisfies the binary weak-learning condition if: ∀C ∈ C^bin, ∃h ∈ H : C • (1_h − U^bin_γ) ≤ 0. There are two main benefits to this reformulation. First, with linear homogeneous constraints, the mathematics is simplified, as will be apparent later.
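The weighted plurality vote in (1) is straightforward to implement; a minimal sketch in Python (the arrays of weak predictions and weights are hypothetical inputs, and labels are 0-indexed):

```python
import numpy as np

def plurality_vote(weak_preds, alphas, k):
    """Combined classifier H of equation (1).

    weak_preds: (T, m) array with weak_preds[t, i] = h_t(x_i) in {0, ..., k-1}
    alphas:     length-T array of weights alpha_t
    Returns H(x_i) = argmax_l sum_t alpha_t * 1[h_t(x_i) = l] for each i.
    """
    T, m = weak_preds.shape
    f = np.zeros((m, k))                         # f[i, l] = f_T(x_i, l)
    for t in range(T):
        f[np.arange(m), weak_preds[t]] += alphas[t]
    return f.argmax(axis=1)
```

The score table f accumulates α_t on the label each weak classifier votes for; the final prediction is the column with the largest accumulated weight.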
More importantly, by varying the restrictions C^bin on the cost vectors and the matrix U^bin, we can generate a vast variety of weak-learning conditions for the multiclass setting k ≥ 2, as we now show. Let C ⊆ R^{m×k} be a set of cost matrices and let B ∈ R^{m×k} be a matrix, which we call the baseline; we say a weak classifier space H satisfies the condition (C, B) if

∀C ∈ C, ∃h ∈ H : C • (1_h − B) ≤ 0, i.e., Σ_{i=1}^m C(i, h(x_i)) ≤ Σ_{i=1}^m ⟨C(i), B(i)⟩. (2)

In (2), the variable matrix C specifies how costly each misclassification is, while the baseline B specifies a weight for each misclassification. The condition therefore states that a weak classifier should not exceed the average cost when weighted according to baseline B. This large class of weak-learning conditions captures many previously used conditions, such as the ones used by AdaBoost.M1 [9], AdaBoost.MH [19] and AdaBoost.MR [9, 19] (see below), as well as novel conditions introduced in the next section. By studying this vast class of weak-learning conditions, we hope to find the one that will serve the main purpose of the boosting game: finding a convex combination of weak classifiers that has zero training error. For this to be possible, at the minimum the weak classifiers should be sufficiently rich for such a perfect combination to exist. Formally, a collection H of weak classifiers is eligible for boosting, or simply boostable, if there exists a distribution λ on this space that linearly separates the data: ∀i : argmax_{l ∈ {1,...,k}} Σ_{h∈H} λ(h) 1[h(x_i) = l] = y_i. The weak-learning condition plays two roles. It rejects spaces that are not boostable, and provides an algorithmic means of searching for the right combination. Ideally, the second factor will not cause the weak-learning condition to impose additional restrictions on the weak classifiers; in that case, the weak-learning condition is merely a reformulation of being boostable that is more appropriate for deriving an algorithm. In general, it could be too strong, i.e.,
certain boostable spaces will fail to satisfy the conditions. Or it could be too weak, i.e., non-boostable spaces might satisfy such a condition. Booster strategies relying on either of these conditions will fail to drive down error; the former due to underfitting, and the latter due to overfitting. In the next section we will describe conditions captured by our framework that avoid being too weak or too strong.

3 Necessary and sufficient weak-learning conditions

The binary weak-learning condition has an appealing form: for any distribution over the examples, the weak classifier needs to achieve error not greater than that of a random player who guesses the correct answer with probability 1/2 + γ. Further, this is the weakest condition under which boosting is possible, as follows from a game-theoretic perspective [10, 14]. Multiclass weak-learning conditions with similar properties are missing in the literature. In this section we show how our framework captures such conditions. In the multiclass setting, we model a random player as a baseline predictor B ∈ R^{m×k} whose rows are distributions over the labels, B(i) ∈ ∆{1, . . . , k}. The prediction on example i is a sample from B(i). We only consider the space B^eor_γ ⊆ R^{m×k} of edge-over-random baselines, which have a faint clue about the correct answer. More precisely, any baseline B ∈ B^eor_γ in this space is γ more likely to predict the correct label than an incorrect one on every example i: ∀l ≠ 1, B(i, 1) ≥ B(i, l) + γ, with equality holding for some l. When k = 2, the space B^eor_γ consists of the unique player U^bin_γ, and the binary weak-learning condition is given by (C^bin, U^bin_γ). The new conditions generalize this to k > 2. In particular, define C^eor to be the multiclass extension of C^bin: any cost-matrix in C^eor should put the least cost on the correct label, i.e., the rows of the cost-matrices should come from the set {c ∈ R^k : ∀l, c(1) ≤ c(l)}.
Then, for every baseline B ∈ B^eor_γ, we introduce the condition (C^eor, B), which we call an edge-over-random weak-learning condition. Since C • B is the expected cost of the edge-over-random baseline B on matrix C, the constraints (2) imposed by the new condition essentially require better than random performance. We now present the central results of this section. The seemingly mild edge-over-random conditions guarantee eligibility, meaning weak classifiers that satisfy any one such condition can be combined to form a highly accurate combined classifier.

Theorem 1 (Sufficiency). If a weak classifier space H satisfies a weak-learning condition (C^eor, B), for some B ∈ B^eor_γ, then H is boostable.

The proof involves the Von Neumann Minimax theorem, and is in the spirit of the ones in [10]. On the other hand, the family of such conditions, taken as a whole, is necessary for boostability in the sense that every eligible space of weak classifiers satisfies some edge-over-random condition.

Theorem 2 (Relaxed necessity). For every boostable weak classifier space H, there exists a γ > 0 and B ∈ B^eor_γ such that H satisfies the weak-learning condition (C^eor, B).

The proof shows existence through non-constructive averaging arguments. Theorem 2 states that any boostable weak classifier space will satisfy some condition in our family, but it does not help us choose the right condition. Experiments in Section 5 suggest (C^eor, U_γ) is effective with very simple weak-learners compared to popular boosting algorithms. (Here U_γ ∈ B^eor_γ is the edge-over-random baseline closest to uniform; it has weight (1 − γ)/k on incorrect labels and (1 − γ)/k + γ on the correct label.) However, there are theoretical examples showing each condition in our family is too strong (supplement).
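With the correct label indexed first, the near-uniform baseline U_γ and the check of condition (2) against a baseline can be sketched as follows (function names are ours, not from the paper):

```python
import numpy as np

def uniform_edge_baseline(m, k, gamma):
    """U_gamma: weight (1-gamma)/k + gamma on the (first) correct label
    and (1-gamma)/k on each incorrect label, for every example."""
    B = np.full((m, k), (1.0 - gamma) / k)
    B[:, 0] += gamma
    return B

def meets_condition(C, preds, B):
    """Inequality (2): C . 1_h = sum_i C(i, h(x_i)) must not exceed the
    baseline's expected cost C . B = sum_i <C(i), B(i)>."""
    m = C.shape[0]
    return C[np.arange(m), preds].sum() <= (C * B).sum()
```

With k = 2, `uniform_edge_baseline` reproduces the rows (1/2 + γ/2, 1/2 − γ/2) of U^bin_γ, matching the text's remark that the binary condition is the special case (C^bin, U^bin_γ).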
A perhaps extreme way of weakening the condition is by requiring the performance on a cost matrix to be competitive not with a fixed baseline B ∈ B^eor_γ, but with the worst of them:

∀C ∈ C^eor, ∃h ∈ H : C • 1_h ≤ max_{B ∈ B^eor_γ} C • B. (3)

Condition (3) states that during the course of the same boosting game, Weak-Learner may choose to beat any edge-over-random baseline B ∈ B^eor_γ, possibly a different one for every round and every cost-matrix. This may superficially seem much too weak. On the contrary, this condition turns out to be equivalent to boostability. In other words, according to our criterion, it is neither too weak nor too strong as a weak-learning condition. However, unlike the edge-over-random conditions, it also turns out to be more difficult to work with algorithmically. Furthermore, this condition can be shown to be equivalent to the one used by AdaBoost.MR [19, 9]. This is perhaps remarkable since the latter is based on the apparently completely unrelated all-pairs multiclass to binary reduction: the MR condition is given by (C^MR, B^MR_γ), where C^MR consists of cost-matrices that put non-negative costs on incorrect labels and whose rows sum up to zero, while B^MR_γ ∈ R^{m×k} is the matrix that has γ on the first column and −γ on all other columns (supplement). Further, the MR condition, and hence (3), can be shown to be neither too weak nor too strong.

Theorem 3 (MR). A weak classifier space H satisfies AdaBoost.MR's weak-learning condition (C^MR, B^MR_γ) if and only if it satisfies (3). Moreover, this condition is equivalent to being boostable.

Next, we illustrate the strengths of our edge-over-random weak-learning conditions through concrete comparisons with previous algorithms.

Comparison with SAMME. The SAMME algorithm of [21] requires the weak classifiers to achieve less error than uniform random guessing for multiple labels; in our language, their weak-learning condition is (C = {(−t, t, t, . . .) : t ≥ 0}, U_γ).
As is well known, this condition is not sufficient for boosting to be possible. In particular, consider the dataset {(a, 1), (b, 2)} with k = 3, m = 2, and a weak classifier space consisting of h1, h2 which always predict 1, 2, respectively. Since neither classifier distinguishes between a, b, we cannot achieve perfect accuracy by combining them in any way. Yet, due to the constraints on the cost-matrix, one of h1, h2 will always manage non-positive cost, while random guessing always suffers positive cost. On the other hand, our weak-learning condition allows the Booster to choose far richer cost matrices. In particular, when the cost matrix is C = (c(1) = (−1, +1, 0), c(2) = (+1, −1, 0)) ∈ C^eor, both classifiers in the above example suffer more loss than the random player U_γ, and fail to satisfy our condition.

Comparison with AdaBoost.MH. AdaBoost.MH is a popular multiclass boosting algorithm that is based on the one-against-all reduction [19]. However, we show that its implicit demands on the weak classifier space are too strong. We construct a classifier space that satisfies the condition (C^eor, U_γ) in our family, but cannot satisfy AdaBoost.MH's weak-learning condition. Consider a space H that has, for every (1/k + γ)m element subset of the examples, a classifier that predicts correctly on exactly those elements. The expected loss of a randomly chosen classifier from this space is the same as that of the random player U_γ. Hence H satisfies this weak-learning condition. On the other hand, it can be shown (supplement) that AdaBoost.MH's weak-learning condition is the pair (C^MH, B^MH_γ), where C^MH consists of matrices with non-positive entries on correct labels and non-negative entries on incorrect labels, and where each row of the matrix B^MH_γ is the vector (1/2 + γ/2, 1/2 − γ/2, . . . , 1/2 − γ/2). A quick calculation shows that for any h ∈ H, and C ∈ C^MH with −1 in the first column and zeroes elsewhere, C • (1_h − B^MH_γ) = 1/2 − 1/k. This is positive when k > 2, so that H fails to satisfy AdaBoost.MH's condition.
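The SAMME counterexample above can be verified numerically. Below, labels are 0-indexed (so h1 and h2 always predict indices 0 and 1), and U_γ puts weight (1 − γ)/k + γ on each example's true label; both constant classifiers incur cost 0, which is strictly more than U_γ's expected cost of −2γ:

```python
import numpy as np

k, gamma = 3, 0.1
# Cost matrix from the text: row c(1) = (-1, +1, 0) for example a (true
# label index 0) and row c(2) = (+1, -1, 0) for example b (true label 1).
C = np.array([[-1.0,  1.0, 0.0],
              [ 1.0, -1.0, 0.0]])
true_labels = [0, 1]

cost = lambda preds: sum(C[i, p] for i, p in enumerate(preds))
cost_h1 = cost([0, 0])          # h1 always predicts the first label
cost_h2 = cost([1, 1])          # h2 always predicts the second label

# Expected cost of U_gamma: (1-gamma)/k + gamma on the true label,
# (1-gamma)/k on each incorrect label.
p_true, p_wrong = (1 - gamma) / k + gamma, (1 - gamma) / k
random_cost = sum(C[i, true_labels[i]] * p_true
                  + sum(C[i, l] for l in range(k) if l != true_labels[i]) * p_wrong
                  for i in range(2))

assert cost_h1 == 0.0 and cost_h2 == 0.0
assert abs(random_cost - (-2 * gamma)) < 1e-12
assert random_cost < cost_h1 and random_cost < cost_h2   # both h1, h2 fail
```

So neither h1 nor h2 matches the random player's cost on this cost matrix, confirming that the space fails the edge-over-random condition even though it satisfies SAMME's.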
4 Algorithms

In this section we devise algorithms by analyzing the boosting games that employ our edge-over-random weak-learning conditions. We compute the optimum Booster strategy against a completely adversarial Weak-Learner, which here is permitted to choose weak classifiers without restriction, i.e., from the entire space H^all of all possible functions mapping examples to labels. By modeling Weak-Learner adversarially, we make absolutely no assumptions on the algorithm it might use. Hence, error guarantees enjoyed in this situation will be universally applicable. Our algorithms are derived from the very general drifting games framework [16] for solving boosting games, in turn inspired by Freund's Boost-by-majority algorithm [6], which we review next.

The OS Algorithm. Fix the number of rounds T and an edge-over-random weak-learning condition (C, B). For simplicity of presentation we fix the weights α_t = 1 in each round. With f_T defined as in (1), the optimum Booster payoff can be written as

min_{C_1 ∈ C} max_{h_1 ∈ H^all : C_1 • (1_{h_1} − B) ≤ 0} · · · min_{C_T ∈ C} max_{h_T ∈ H^all : C_T • (1_{h_T} − B) ≤ 0} (1/m) Σ_{i=1}^m L(f_T(x_i, 1), f_T(x_i, 2), . . . , f_T(x_i, k)).

Here the function L : R^k → R is the classification error, but we can also consider other loss functions, such as exponential loss, hinge loss, etc., that upper-bound error and are proper, i.e., L(x) is increasing in the weight of the correct label x(1), and decreasing in the weights of the incorrect labels x(l), l ≠ 1. Directly analyzing the optimal payoff is hard. However, Schapire [16] observed that the payoffs can be very well approximated by certain potential functions. Indeed, for any b ∈ R^k define the potential function φ^b_t : R^k → R by the following recurrence:

φ^b_0 = L; φ^b_t(s) = min_{c ∈ R^k : ∀l, c(1) ≤ c(l)} max_{p ∈ ∆{1,...,k}} { E_{l∼p}[φ^b_{t−1}(s + e_l)] : E_{l∼p}[c(l)] ≤ ⟨b, c⟩ }, (4)

where e_l ∈ R^k is the unit vector whose lth coordinate is 1 and whose remaining coordinates are zero.
These potential functions compute an estimate φ^b_t(s_t) of whether an example x will be misclassified, based on its current state s_t consisting of counts of votes received so far on the various classes, s_t(l) = Σ_{t′=1}^{t−1} 1[h_{t′}(x) = l], and the number of rounds t remaining. Using these functions, Schapire [16] proposed a Booster strategy, aka the OS strategy, which, in round t, constructs a cost matrix C ∈ C, each of whose rows C(i) achieves the minimum of the right hand side of (4) with b replaced by B(i), t replaced by T − t, and s replaced by the current state s_t(i). The following theorem provides a guarantee for the loss suffered by the OS algorithm, and also shows that it is the game-theoretically optimum strategy when the number of examples is large.

Theorem 4 (Extension of results in [16]). Suppose the weak-learning condition is given by (C, B). If Booster employs the OS algorithm, then the average potential of the states, (1/m) Σ_{i=1}^m φ^{B(i)}_t(s(i)), never increases in any round. In particular, the loss suffered after T rounds of play is at most (1/m) Σ_{i=1}^m φ^{B(i)}_T(0). Further, for any ε > 0, when the loss function satisfies some mild conditions, and m ≫ T, k, 1/ε, no Booster strategy can achieve loss ε less than the above bound in T rounds.

Computing the potentials. In order to implement the OS strategy using our weak-learning conditions, we only need to compute the potential φ^b_t for distributions b ∈ ∆{1, . . . , k}. Fortunately, these potentials have a very simple solution in terms of the homogeneous random walk R^t_b(x), the random position of a particle after t time steps that starts at location x ∈ R^k and in each step moves in direction e_l with probability b(l).

Theorem 5. If L is proper, and b ∈ ∆{1, . . . , k} satisfies ∀l : b(1) ≥ b(l), then φ^b_t(s) = E[L(R^t_b(s))]. Furthermore, the vector achieving the minimum in the right hand side of (4) is given by c(l) = φ^b_{t−1}(s + e_l).
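Theorem 5 makes the potential directly computable: simulate the random walk R^t_b and average the loss. A Monte Carlo sketch (the paper instead uses exact dynamic programming; the function and loss names here are ours):

```python
import numpy as np

def potential_mc(s, t, b, loss, n_samples=100_000, seed=0):
    """Estimate phi^b_t(s) = E[L(R^t_b(s))] by simulating the homogeneous
    random walk that starts at state s and, at each of t steps, moves in
    direction e_l with probability b(l)."""
    rng = np.random.default_rng(seed)
    # Draw all t steps of every sampled walk at once: the final position
    # only depends on how many steps went in each direction.
    steps = rng.multinomial(t, b, size=n_samples)       # (n_samples, k)
    finals = np.asarray(s, dtype=float) + steps         # samples of R^t_b(s)
    return float(np.mean([loss(x) for x in finals]))

# 0-1 loss: the example is misclassified if the correct label (index 0)
# does not strictly beat every incorrect label.
zero_one = lambda x: float(x[0] <= x[1:].max())
```

For t = 0 the estimate is exact (it is just the loss of the current state); for t > 0 the estimate converges to the true expectation as n_samples grows.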
Theorem 5 implies the OS strategy chooses the following cost matrix in round t: C(i, l) = φ^{B(i)}_{T−t−1}(s_t(i) + e_l), where s_t(i) is the state of example i in round t. Therefore everything boils down to computing the potentials, which is made possible by Theorem 5. There is no simple closed form solution for the non-convex 0-1 loss L(s) = 1[s(1) ≤ max_{l>1} s(l)]. However, using Theorem 5, we can write the potential φ_t(s) explicitly, and then compute it using dynamic programming in O(t^3 k) time. This yields very tight bounds. To obtain a more efficient procedure, and one that we will soon show can be made adaptive, we next focus on the exponential loss associated with AdaBoost, which does have a closed form solution.

Lemma 1. If L(s) = exp(η_2(s_2 − s_1)) + · · · + exp(η_k(s_k − s_1)), where each η_l is positive, then the solution in Theorem 5 evaluates to φ^b_t(s) = Σ_{l=2}^k (a_l)^t e^{η_l(s_l − s_1)}, where a_l = 1 − (b_1 + b_l) + e^{η_l} b_l + e^{−η_l} b_1.

The proof by induction is straightforward. In particular, when the condition is (C^eor, U_γ) and η = (η, η, . . .), the relevant potential is φ_t(s) = κ(γ, η)^t Σ_{l=2}^k e^{η(s_l − s_1)}, where κ(γ, η) = 1 + ((1 − γ)/k)(e^η + e^{−η} − 2) − (1 − e^{−η})γ. The cost-matrix output by the OS algorithm can be simplified, by rescaling, or adding the same number to each coordinate of a cost vector, without affecting the constraints it imposes on a weak classifier, to the following form:

C(i, l) = (e^η − 1) e^{η(s_l − s_1)} if l > 1, and C(i, 1) = (e^{−η} − 1) Σ_{j=2}^k e^{η(s_j − s_1)}, (5)

where s = s_t(i). With such a choice, Theorem 4 and the form of the potential guarantee that the average loss (1/m) Σ_{i=1}^m L(s_t(i)) of the states s_t(i) changes by a factor of at most κ(γ, η) every round. Hence the final loss is at most (k − 1)κ(γ, η)^T.

Variable edges. So far we have required Weak-Learner to beat random by at least a fixed amount γ > 0 in each round of the boosting game. In reality, the edge over random is larger initially, and gets smaller as the OS algorithm creates harder cost matrices.
Therefore requiring a fixed edge is either unduly pessimistic or overly optimistic. If the fixed edge is too small, not enough progress is made in the initial rounds, and if the edge is too large, Weak-Learner fails to meet the weak-learning condition in later rounds. We attempt to fix this via two approaches: prescribing a decaying sequence of edges γ_1, . . . , γ_T, or being completely flexible, aka adaptive, with respect to the edges returned by the weak-learner. In either case, we only use the edge-over-random condition (C^eor, U_γ), but with varying values of γ.

Fixed sequence of edges. With a prescribed sequence of edges γ_1, . . . , γ_T, the weak-learning condition (C^eor, U_{γ_t}) in each round t is different. We allow the weights α_1, . . . , α_T to be arbitrary, but they must be fixed in advance. All the results for uniform γ and weights α_t = 1 hold in this case as well. In particular, by the arguments leading to (5), if we want to minimize Σ_{i=1}^m Σ_{l=2}^k e^{f_t(i,l) − f_t(i,1)}, where f_t is as defined in (1), then the following strategy is optimal: in round t output the cost matrix

C(i, l) = (e^{α_t} − 1) e^{f_{t−1}(i,l) − f_{t−1}(i,1)} if l > 1, and C(i, 1) = (e^{−α_t} − 1) Σ_{j=2}^k e^{f_{t−1}(i,j) − f_{t−1}(i,1)}. (6)

This will ensure that the expression Σ_{i=1}^m Σ_{l=2}^k e^{f_t(i,l) − f_t(i,1)} changes by a factor of at most κ(γ_t, α_t) in each round. Hence the final loss will be at most (k − 1) Π_{t=1}^T κ(γ_t, α_t).

Adaptive. In the adaptive setting, we depart from the game-theoretic framework in that Weak-Learner is no longer adversarial. Further, we are no longer guaranteed to receive a certain sequence of edges. Since the choice of cost-matrix in (6) does not depend on the edges, we could fix an arbitrary set of weights α_t in advance, follow the same algorithm as before, and enjoy the same bound Π_{t=1}^T κ(γ_t, α_t). The trouble with this is that κ(γ_t, α_t) is not less than 1 unless α_t is small compared to γ_t. To ensure progress, the weight α_t must be chosen adaptively as a function of γ_t.
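The per-round factor κ from Lemma 1 and the cost matrix (6) can be sketched as follows, with the correct label in column 0 (function names are ours; as the text goes on to describe, rescaling (6) by 1/α_t and letting α_t → 0 yields an edge-independent adaptive cost matrix):

```python
import math
import numpy as np

def kappa(gamma, alpha, k):
    """Per-round factor kappa(gamma, alpha) from Lemma 1:
    1 + ((1-gamma)/k)(e^a + e^-a - 2) - (1 - e^-a) gamma."""
    return (1.0 + ((1.0 - gamma) / k) * (math.exp(alpha) + math.exp(-alpha) - 2.0)
            - (1.0 - math.exp(-alpha)) * gamma)

def cost_matrix(f_prev, alpha):
    """Cost matrix (6): (e^alpha - 1) e^{f(i,l) - f(i,1)} on incorrect
    labels l > 1, and (e^-alpha - 1) times their row sum on the correct
    label. f_prev is the m x k score matrix f_{t-1}."""
    w = np.exp(f_prev[:, 1:] - f_prev[:, :1])    # e^{f(i,l) - f(i,1)}
    C = np.empty_like(f_prev)
    C[:, 1:] = (math.exp(alpha) - 1.0) * w
    C[:, 0] = (math.exp(-alpha) - 1.0) * w.sum(axis=1)
    return C
```

Running the strategy for T rounds then bounds the exponential loss by (k − 1) Π_t κ(γ_t, α_t), so progress is made exactly when each κ(γ_t, α_t) is below 1.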
Since we do not know what edge we will receive, we choose the cost matrix as before but anticipating an infinitesimally small edge, in the spirit of [7] (and with some rescaling): C(i, l) = lim_{α→0} (1/α) C_α(i, l), where C_α(i, l) = (e^α − 1) e^{f_{t−1}(i,l) − f_{t−1}(i,1)} if l > 1 and C_α(i, 1) = (e^{−α} − 1) Σ_{j=2}^k e^{f_{t−1}(i,j) − f_{t−1}(i,1)}. The limit evaluates to

C(i, l) = e^{f_{t−1}(i,l) − f_{t−1}(i,1)} if l > 1, and C(i, 1) = −Σ_{j=2}^k e^{f_{t−1}(i,j) − f_{t−1}(i,1)}. (7)

Since Weak-Learner cooperates, we expect the edge δ_t of the returned classifier h_t on the supplied cost-matrix lim_{α→0} C_α to be more than just infinitesimal. In that case, by continuity, there are non-infinitesimal choices of the weight α_t such that the edge γ_t achieved by h_t on the cost-matrix C_{α_t} remains large enough to ensure κ(γ_t, α_t) < 1. In fact, with any choice of α_t, we get κ(γ_t, α_t) ≤ 1 − (1/2)(e^{α_t} − e^{−α_t})δ_t + (1/2)(e^{α_t} + e^{−α_t} − 2) (supplement). Tuning α_t to (1/2) ln((1 + δ_t)/(1 − δ_t)) results in κ(γ_t, α_t) ≤ √(1 − δ_t²). This algorithm is adaptive, and ensures that the loss, and hence the error, after T rounds is at most (k − 1) Π_{t=1}^T √(1 − δ_t²) ≤ (k − 1) exp{−(1/2) Σ_{t=1}^T δ_t²}.

Figure 1: Figure 1(a) plots the final test-errors of M1 (black, dashed), MH (blue, dotted) and New method (red, solid) against the maximum tree-sizes allowed as weak classifiers, on the datasets connect4, forest, letter, pendigits, poker and satimage. Figure 1(b) plots how fast the test-errors of these algorithms drop with rounds, when the maximum tree-size allowed is 5.

5 Experiments

We report preliminary experimental results on six multiclass UCI datasets of varying size.
Figure 2: Final test-errors of standard implementations of M1, MH and New method after 500 rounds of boosting, on the datasets connect4, forest, letter, pendigits, poker and satimage.

The first set of experiments was aimed at determining the overall performance of our new algorithm. We compared a standard implementation M1 of AdaBoost.M1 with C4.5 as weak learner, and the Boostexter implementation MH of AdaBoost.MH using stumps [20], with the adaptive algorithm described in Section 4, which we call New method, using a naive greedy tree-searching algorithm Greedy for the weak-learner. The size of trees was chosen to be of the same order as the tree sizes used by M1. Test errors after 500 rounds of boosting are plotted in Figure 2. The performance is comparable with M1 and far better than MH (understandably, since stumps are far weaker than trees), even though our weak-learner is very naive compared to C4.5. We next investigated how each algorithm performs with less powerful weak-classifiers, namely, decision trees whose size has been sharply limited to various pre-specified limits. Figure 1(a) shows test-error plotted as a function of tree size. As predicted by our theory, our algorithm succeeds in boosting the accuracy even when the tree size is too small to meet the stronger weak learning assumptions of the other algorithms. The differences in performance are particularly strong when using the smallest tree sizes. More insight is provided by the plots in Figure 1(b) of the rate of convergence of test error with rounds when the tree size allowed is very small (5). Both M1 and MH drive down the error for a few rounds. But since boosting keeps creating harder cost-matrices, very soon the small-tree learning algorithms are no longer able to meet the excessive requirements of M1 and MH. However, our algorithm makes more reasonable demands that are easily met by the weak learner.

References

[1] Jacob Abernethy, Peter L.
Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, pages 415–424, 2008.
[2] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[3] Alina Beygelzimer, John Langford, and Pradeep Ravikumar. Error-correcting tournaments. In Algorithmic Learning Theory: 20th International Conference, pages 247–262, 2009.
[4] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, January 1995.
[5] Günther Eibl and Karl-Peter Pfeiffer. Multiclass boosting for weak classifiers. Journal of Machine Learning Research, 6:189–210, 2005.
[6] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[7] Yoav Freund. An adaptive version of the boost by majority algorithm. Machine Learning, 43(3):293–318, June 2001.
[8] Yoav Freund and Manfred Opper. Continuous drifting games. Journal of Computer and System Sciences, pages 113–132, 2002.
[9] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148–156, 1996.
[10] Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325–332, 1996.
[11] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
[12] Trevor Hastie and Robert Tibshirani. Classification by pairwise coupling. Annals of Statistics, 26(2):451–471, 1998.
[13] V. Koltchinskii and D. Panchenko.
Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30(1), February 2002.
[14] Gunnar Rätsch and Manfred K. Warmuth. Efficient margin maximizing with boosting. Journal of Machine Learning Research, 6:2131–2152, 2005.
[15] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[16] Robert E. Schapire. Drifting games. Machine Learning, 43(3):265–291, June 2001.
[17] Robert E. Schapire. The boosting approach to machine learning: An overview. In MSRI Workshop on Nonlinear Estimation and Classification, 2002.
[18] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, October 1998.
[19] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, December 1999.
[20] Robert E. Schapire and Yoram Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168, May/June 2000.
[21] Ji Zhu, Hui Zou, Saharon Rosset, and Trevor Hastie. Multi-class AdaBoost. Statistics and Its Interface, 2:349–360, 2009.
2010
Hashing Hyperplane Queries to Near Points with Applications to Large-Scale Active Learning

Prateek Jain, Algorithms Research Group, Microsoft Research, Bangalore, India, prajain@microsoft.com
Sudheendra Vijayanarasimhan, Department of Computer Science, University of Texas at Austin, svnaras@cs.utexas.edu
Kristen Grauman, Department of Computer Science, University of Texas at Austin, grauman@cs.utexas.edu

Abstract

We consider the problem of retrieving the database points nearest to a given hyperplane query without exhaustively scanning the database. We propose two hashing-based solutions. Our first approach maps the data to two-bit binary keys that are locality-sensitive for the angle between the hyperplane normal and a database point. Our second approach embeds the data into a vector space where the Euclidean norm reflects the desired distance between the original points and hyperplane query. Both use hashing to retrieve near points in sub-linear time. Our first method's preprocessing stage is more efficient, while the second has stronger accuracy guarantees. We apply both to pool-based active learning: taking the current hyperplane classifier as a query, our algorithm identifies those points (approximately) satisfying the well-known minimal distance-to-hyperplane selection criterion. We empirically demonstrate our methods' tradeoffs, and show that they make it practical to perform active selection with millions of unlabeled points.

1 Introduction

Efficient similarity search with large databases is central to many applications of interest, such as example-based learning algorithms, content-based image or audio retrieval, and quantization-based data compression. Often the search problem is considered in the domain of point data: given a database of vectors listing some attributes of the data objects, which points are nearest to a novel query vector?
Existing algorithms provide efficient data structures for point-to-point retrieval tasks with various useful distance functions, producing either exact or approximate near neighbors while forgoing a brute force scan through all database items, e.g., [1, 2, 3, 4, 5, 6, 7]. By comparison, much less work considers how to efficiently handle instances more complex than points. In particular, little previous work addresses the hyperplane-to-point search problem: given a database of points, which are nearest to a novel hyperplane query? This problem is critical to pool-based active learning, where the goal is to request labels for those points that appear most informative. The widely used margin-based selection criterion of [8, 9, 10] seeks those points that are nearest to the current support vector machine's hyperplane decision boundary, and can substantially reduce total human annotation effort. However, for large-scale active learning, it is impractical to exhaustively apply the classifier to all unlabeled points at each round of learning; to exploit massive unlabeled pools, a fast (sub-linear time) hyperplane search method is needed. To this end, we propose two solutions for approximate hyperplane-to-point search. For each, we introduce randomized hash functions that offer query times sub-linear in the size of the database, and provide bounds for the approximation error of the neighbors retrieved. Our first approach devises a two-bit hash function that is locality-sensitive for the angle between the hyperplane normal and a database point. Our second approach embeds the inputs such that the Euclidean distance reflects the hyperplane distance, thereby making them searchable with existing approximate nearest neighbor algorithms for vector data. While the preprocessing in our first method is more efficient, our second method has stronger accuracy guarantees. We demonstrate our algorithms' significant practical impact for large-scale active learning with SVM classifiers.
Our results show that our method helps scale up active learning for realistic problems with massive unlabeled pools on the order of millions of examples. 2 Related Work We briefly review related work on approximate similarity search, subspace search methods, and pool-based active learning. Approximate near-neighbor search. For low-dimensional points, spatial decomposition and tree-based search algorithms can provide the exact neighbors in sub-linear time [1, 2]. While such methods break down for high-dimensional data, a number of approximate near neighbor methods have been proposed that work well with high-dimensional inputs. Locality-sensitive hashing (LSH) methods devise randomized hash functions that map similar points to the same hash buckets, so that only a subset of the database must be searched after hashing a novel query [3, 4, 5]. A related family of methods design Hamming space embeddings that can be indexed efficiently (e.g., [11, 12, 6]). However, in contrast to our approach, all such techniques are intended for vector/point data. A few researchers have recently examined approximate search tasks involving subspaces. In [13], a Euclidean embedding is developed such that the norm in the embedding space directly reflects the principal angle-based distance between the original subspaces. After this mapping, one can apply existing approximate near-neighbor methods designed for points (e.g., LSH). We provide a related embedding to find the points nearest to the hyperplane; however, in contrast to [13], we provide LSH bounds, and our embedding is more compact due to our proposed sampling strategy. Another method to find the nearest subspace for a point query is given in [14], though it is limited to relatively low-dimensional data due to its preprocessing time/space requirement of O(N d^2 log N) and query time of O(d^{10} log N), where N is the number of database points and d is the dimensionality of the data.
Further, unlike [13], that approach is restricted to point queries. Finally, a sub-linear time method to map a line query to its nearest points is derived in [15]. In contrast to all the above work, we propose specialized methods for the hyperplane search problem, and show that they handle high-dimensional data and large databases very efficiently. Margin-based active learning. Existing active classifier learning methods for pool-based selection generally scan all database instances before selecting which to have labeled next.¹ One well-known and effective active selection criterion for support vector machines (SVMs) is to choose points that are nearest to the current separating hyperplane [8, 9, 10]. While simple, this criterion is intuitive, has theoretical basis in terms of rapidly reducing the version space [8], and thus is widely used in practice (e.g., [17, 18, 19]). Unfortunately, even for inexpensive selection functions, very large unlabeled datasets make the cost of exhaustively searching the pool impractical. Researchers have previously attempted to cope with this issue by clustering or randomly downsampling the pool [19, 20, 21, 22]; however, such strategies provide no guarantees as to the potential loss in active selection quality. In contrast, when applying our approach for this task, we can consider orders of magnitude fewer points when making the next active label request, yet guarantee selections within a known error of the traditional exhaustive pool-based technique. Other forms of approximate SVM training. To avoid potential confusion, we note that our problem setting differs from both that considered in [23], where computational geometry insights are combined with the QP formulation for more efficient "core vector" SVM training, as well as that considered in [19], where a subset of labeled data points is selected for online LASVM training. ¹ We consider only a specific hyperplane criterion in this paper; see [16] for an active learning survey.
3 Approach We consider the following retrieval problem. Given a database D = [x_1, . . . , x_N] of N points in R^d, the goal is to retrieve the points from the database that are closest to a given hyperplane query whose normal is given by w ∈ R^d. We call this the nearest neighbor to a query hyperplane (NNQH) problem. Without loss of generality, we assume that the hyperplane passes through the origin, and that each x_i and w is unit norm. We see in later sections that these assumptions do not affect our solution. The Euclidean distance of a point x to a given hyperplane h_w parameterized by normal w is: d(h_w, x) = ‖(x^T w) w‖ = |x^T w|. (1) Thus, the goal for the NNQH problem is to identify those points x_i ∈ D that minimize |x_i^T w|. Note that this is in contrast to traditional proximity problems, e.g., nearest or farthest neighbor retrieval, where the goal is to maximize x^T w or −x^T w, respectively. Hence, existing approaches are not directly applicable to this problem. We formulate two algorithms for NNQH. Our first approach maps the data to binary keys that are locality-sensitive for the angle between the hyperplane normal and a database point, thereby permitting sub-linear time retrieval with hashing. Our second approach computes a sparse Euclidean embedding for the query hyperplane that maps the desired search task to one handled well by existing approximate nearest-point methods. In the following, we first provide necessary background on locality-sensitive hashing (LSH). The subsequent two sections describe each approach in turn, and Sec. 3.4 reviews their trade-offs. Finally, in Sec. 3.5, we explain how either method can be applied to large-scale active learning. 3.1 Background: Locality-Sensitive Hashing (LSH) Informally, LSH [3] requires randomized hash functions guaranteeing that the probability of collision of two vectors is inversely proportional to their "distance", where "distance" is defined according to the task at hand. Since similar points are assured (w.h.p.)
to fall into the same hash bucket, one need only search those database items with which a novel query collides in the hash table. Formally, let d(·, ·) be a distance function over items from a set S, and for any item p ∈ S, let B(p, r) denote the set of examples from S within radius r from p. Definition 3.1. [3] Let h_H denote a random choice of a hash function from the family H. The family H is called (r, r(1+ε), p_1, p_2)-sensitive for d(·, ·) when, for any q, p ∈ S, • if p ∈ B(q, r) then Pr[h_H(q) = h_H(p)] ≥ p_1, • if p ∉ B(q, r(1+ε)) then Pr[h_H(q) = h_H(p)] ≤ p_2. For a family of functions to be useful, it must satisfy p_1 > p_2. A k-bit LSH function computes a hash "key" by concatenating the bits returned by a random sampling of H: g(p) = [h_H^{(1)}(p), h_H^{(2)}(p), . . . , h_H^{(k)}(p)]. Note that the probability of collision for close points is thus at least p_1^k, while for dissimilar points it is at most p_2^k. During a preprocessing stage, all database points are mapped to a series of l hash tables indexed by independently constructed g_1, . . . , g_l, where each g_i is a k-bit function. Then, given a query q, an exhaustive search is carried out only on those examples in the union of the l buckets to which q hashes. These candidates contain the (r, ε)-nearest neighbors (NN) for q, meaning if q has a neighbor within radius r, then with high probability some example within radius r(1+ε) is found. In [3] an LSH scheme using projections onto single coordinates is shown to be locality-sensitive for the Hamming distance over vectors. For that hash function, ρ = log p_1 / log p_2 ≤ 1/(1+ε), and using l = N^ρ hash tables, a (1+ε)-approximate solution can be retrieved in time O(N^{1/(1+ε)}). Related formulations and LSH functions for other distances have been explored (e.g., [5, 4, 24]). Our contribution is to define two locality-sensitive hash functions for the NNQH problem.
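To ground the preprocessing/query protocol just described, the sketch below pairs a minimal generic LSH index (l tables keyed by independently drawn k-bit codes) with the exhaustive NNQH scan it is meant to replace. The class, its parameter names, and the toy data are our own illustration, not the authors' code; the concrete one-bit functions for hyperplanes come in Secs. 3.2 and 3.3.

```python
import numpy as np
from collections import defaultdict

def nnqh_brute_force(X, w):
    """Exhaustive baseline: index of the row of X minimizing |x^T w| (Eq. 1)."""
    return int(np.argmin(np.abs(X @ w)))

class LSHIndex:
    """l hash tables, each keyed by a k-bit code g(p) = [h^(1)(p), ..., h^(k)(p)]
    built from independent draws of a caller-supplied one-bit family H."""

    def __init__(self, sample_params, hash_fn, k=8, l=4, seed=0):
        rng = np.random.default_rng(seed)
        self.hash_fn = hash_fn  # hash_fn(x, params) -> 0 or 1
        self.gs = [[sample_params(rng) for _ in range(k)] for _ in range(l)]
        self.tables = [defaultdict(list) for _ in range(l)]

    def _key(self, g, x):
        return tuple(self.hash_fn(x, p) for p in g)

    def insert(self, idx, x):
        for g, table in zip(self.gs, self.tables):
            table[self._key(g, x)].append(idx)

    def query(self, q):
        # Only the union of the l buckets q falls into is scanned exhaustively.
        cand = set()
        for g, table in zip(self.gs, self.tables):
            cand.update(table.get(self._key(g, q), []))
        return cand

# Demo with the classic random-projection bit h_u(x) = sign(u^T x):
index = LSHIndex(lambda rng: rng.standard_normal(3),
                 lambda x, u: int(u @ x >= 0), k=4, l=6, seed=1)
x0 = np.array([1.0, 0.0, 0.0])
index.insert(0, x0)
assert 0 in index.query(x0)  # an identical point collides in every table
```

Any (r, r(1+ε), p_1, p_2)-sensitive family plugs in through `sample_params`/`hash_fn`, which is exactly how the hyperplane-specific functions of the next two sections would be used.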
3.2 Hyperplane Hashing based on Angle Distance (H-Hash) Recall that we want to retrieve the database vector(s) x for which |w^T x| is minimized. If the vectors are unit norm, then this means that for the "good" (close) database vectors, w and x are almost perpendicular. Let θ_{x,w} denote the angle between x and w. We define the distance d(·, ·) in Definition 3.1 to reflect how far from perpendicular w and x are: d_θ(x, w) = (θ_{x,w} − π/2)^2. (2) Consider the following two-bit function that maps two input vectors a, b ∈ R^d to {0, 1}^2: h_{u,v}(a, b) = [h_u(a), h_v(b)] = [sign(u^T a), sign(v^T b)], (3) where h_u(a) = sign(u^T a) returns 1 if u^T a ≥ 0, and 0 otherwise, and u and v are sampled independently from a standard d-dimensional Gaussian, i.e., u, v ∼ N(0, I). We define our hyperplane hash (H-Hash) function family H as: h_H(z) = h_{u,v}(z, z) if z is a database point vector, and h_H(z) = h_{u,v}(z, −z) if z is a query hyperplane vector. Next, we prove that this family of hash functions is locality-sensitive (Definition 3.1). Claim 3.2. The family H is (r, r(1+ε), 1/4 − r/π^2, 1/4 − r(1+ε)/π^2)-sensitive for the distance d_θ(·, ·), where r, ε > 0. Proof. Since the vectors u, v used by hash function h_{u,v} are sampled independently, then for a query hyperplane vector w and a database point vector x, Pr[h_H(w) = h_H(x)] = Pr[h_u(w) = h_u(x) and h_v(−w) = h_v(x)] = Pr[h_u(w) = h_u(x)] Pr[h_v(−w) = h_v(x)]. (4) Next, we use the following fact proven in [25]: Pr[sign(u^T a) = sign(u^T c)] = 1 − θ_{a,c}/π, (5) where u is sampled as defined above, and θ_{a,c} denotes the angle between the two vectors a and c. Using (4) and (5), we get: Pr[h_H(w) = h_H(x)] = (θ_{x,w}/π)(1 − θ_{x,w}/π) = 1/4 − (1/π^2)(θ_{x,w} − π/2)^2. Hence, when (θ_{x,w} − π/2)^2 ≤ r, Pr[h_H(w) = h_H(x)] ≥ 1/4 − r/π^2 = p_1. Similarly, for any ε > 0 such that (θ_{x,w} − π/2)^2 ≥ r(1+ε), Pr[h_H(w) = h_H(x)] ≤ 1/4 − r(1+ε)/π^2 = p_2. We note that unlike traditional LSH functions, ours are asymmetric.
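The two-bit function and the collision probability (θ_{x,w}/π)(1 − θ_{x,w}/π) derived in the proof of Claim 3.2 are easy to check by simulation. The function name, toy vectors, and trial count below are our own illustration, not the authors' code; for a point exactly on the hyperplane (θ = π/2, so d_θ = 0), the collision probability should sit at its maximum of 1/4.

```python
import numpy as np

def h_hash(z, u, v, is_query=False):
    """Two-bit H-Hash: h_{u,v}(z, z) for a database point, h_{u,v}(z, -z)
    for a query hyperplane, with u, v drawn i.i.d. from N(0, I)."""
    a, b = (z, -z) if is_query else (z, z)
    return (int(u @ a >= 0), int(v @ b >= 0))

# Monte Carlo check of the collision probability from the proof of Claim 3.2:
# Pr[h_H(w) = h_H(x)] = (theta/pi) * (1 - theta/pi), which equals 1/4 when
# theta_{x,w} = pi/2, i.e., when x lies on the query hyperplane.
rng = np.random.default_rng(0)
w = np.array([1.0, 0.0])   # query hyperplane normal
x = np.array([0.0, 1.0])   # database point lying on the hyperplane
trials = 20000
hits = 0
for _ in range(trials):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    hits += h_hash(w, u, v, is_query=True) == h_hash(x, u, v)
p_hat = hits / trials
assert abs(p_hat - 0.25) < 0.02  # empirical rate close to the predicted 1/4
```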
That is, to hash a database point x we use h_{u,v}(x, x), whereas to hash a query hyperplane w, we use h_{u,v}(w, −w). The purpose of the two-bit hash is to constrain the angle with respect to both w and −w, so that we do not simply retrieve examples for which we know only that x is π/2 or less away from w. With these functions in hand, we can now form hash keys by concatenating k two-bit pairs from k hash functions from H, store the database points in the hash tables, and query with a novel hyperplane to retrieve its closest points (see Sec. 3.1). The approximation guarantees and correctness of this scheme can be obtained by adapting the proof of Theorem 1 in [3] (see supplementary file). In particular, we can show that with high probability, our LSH scheme will return a point within a distance (1+ε)r, where r = min_i d_θ(x_i, w), in time O(N^ρ), where ρ = log p_1 / log p_2. As p_1 > p_2, we have ρ < 1, i.e., the approach takes sub-linear time for all values of r, ε. Furthermore, as p_1 = 1/4 − r/π^2 and p_2 = 1/4 − r(1+ε)/π^2, ρ can also be bounded as ρ ≤ (1 − log(1 − 4r/π^2)) / (1 + ε/(1 + (π^2/(4r)) log 4)). Note that this bound for ρ is dependent on r, and is more efficient for larger values of r. See the supplementary material for more discussion on the bound. 3.3 Embedded Hyperplane Hashing based on Euclidean Distance (EH-Hash) Our second approach for the NNQH problem relies on a Euclidean embedding for the hyperplane and points. It offers stronger bounds than the above, but at the expense of more preprocessing. Given a d-dimensional vector a, we compute an embedding inspired by [13] that yields a d^2-dimensional vector by vectorizing the corresponding rank-1 matrix aa^T: V(a) = vec(aa^T) = [a_1^2, a_1 a_2, . . . , a_1 a_d, a_2^2, a_2 a_3, . . . , a_d^2], (6) where a_i denotes the i-th element of a. Assuming a and b to be unit vectors, the Euclidean distance between the embeddings V(a) and −V(b) is given by ‖V(a) − (−V(b))‖^2 = 2 + 2(a^T b)^2.
Hence, minimizing the distance between the two embeddings is equivalent to minimizing |a^T b|, our intended function. Given this, we define our embedding-hyperplane hash (EH-Hash) function family E as: h_E(z) = h_u(V(z)) if z is a database point vector, and h_E(z) = h_u(−V(z)) if z is a query hyperplane vector, where h_u(z) = sign(u^T z) is a one-bit hash function parameterized by u ∼ N(0, I). Claim 3.3. The family of functions E defined above is (r, r(1+ε), (1/π) cos^{-1}(sin^2(√r)), (1/π) cos^{-1}(sin^2(√(r(1+ε)))))-sensitive for d_θ(·, ·), where r, ε > 0. Proof. Using the result of [25], for any vectors w, x ∈ R^d, Pr[sign(u^T(−V(w))) = sign(u^T V(x))] = 1 − (1/π) cos^{-1}(−V(w)^T V(x) / (‖V(w)‖ ‖V(x)‖)), (7) where u ∈ R^{d^2} is sampled from a standard d^2-variate Gaussian distribution, u ∼ N(0, I). Note that for any unit vectors a, b ∈ R^d, V(a)^T V(b) = Tr(aa^T bb^T) = (a^T b)^2 = cos^2 θ_{a,b}. Using (7) together with the definition of h_E above, given a hyperplane query w and database point x we have: Pr[h_E(w) = h_E(x)] = 1 − (1/π) cos^{-1}(−cos^2(θ_{x,w})) = cos^{-1}(cos^2(θ_{x,w}))/π. (8) Hence, when (θ_{x,w} − π/2)^2 ≤ r, Pr[h_E(w) = h_E(x)] ≥ (1/π) cos^{-1}(sin^2(√r)) = p_1, (9) and p_2 is obtained similarly. We observe that this p_1 behaves similarly to 2(1/4 − r/π^2). That is, as r varies, EH-Hash's p_1 returns values close to twice those returned by H-Hash's p_1 (see plot illustrating this in supplementary file). Hence, the factor ρ = log p_1 / log p_2 improves upon that of the previous section, remaining lower for lower values of ε, and leading to better approximation guarantees. See supplementary material for a more detailed comparison of the two bounds. On the other hand, EH-Hash's hash functions are significantly more expensive to compute. Specifically, it requires O(d^2) time, whereas H-Hash requires only O(d). To alleviate this problem, we use a form of randomized sampling when computing the hash bits for a query that reduces the time to O(1/ε′^2), for ε′ > 0.
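Two pieces of this section can be verified numerically: the embedding identity V(a)^T V(b) = (a^T b)^2 behind Claim 3.3, and the randomized sampling just mentioned for approximating an inner product at query time. The sketch below uses the full d^2-dimensional vec(aa^T) and one standard importance-weighting scheme (each draw contributes v_i y_i / (t p_i), making the estimate unbiased); the paper's exact sampling construction may scale differently, so treat this as an assumption-laden illustration with names of our own choosing.

```python
import numpy as np

def V(a):
    """EH-Hash embedding: vectorize the rank-1 matrix a a^T (d^2-dim)."""
    return np.outer(a, a).ravel()

def eh_hash_bit(z, u, is_query=False):
    """One-bit EH-Hash: sign(u^T (-V(z))) for a query, sign(u^T V(z)) for a point."""
    emb = -V(z) if is_query else V(z)
    return int(u @ emb >= 0)

def sampled_inner(v, y, t, rng):
    """Approximate v^T y from t weighted coordinate samples: index i is drawn
    with probability p_i = v_i^2 / ||v||^2, each draw contributing
    v_i * y_i / (t * p_i) so that the estimate is unbiased."""
    p = v ** 2 / (v @ v)
    idx = rng.choice(len(v), size=t, p=p)
    return float(np.sum(v[idx] * y[idx] / (t * p[idx])))

rng = np.random.default_rng(0)
a = rng.standard_normal(6); a /= np.linalg.norm(a)
b = rng.standard_normal(6); b /= np.linalg.norm(b)

# V(a)^T V(b) = Tr(a a^T b b^T) = (a^T b)^2, hence for unit-norm a, b:
# ||V(a) - (-V(b))||^2 = 2 + 2 (a^T b)^2.
assert np.isclose(V(a) @ V(b), (a @ b) ** 2)
assert np.isclose(np.linalg.norm(V(a) + V(b)) ** 2, 2 + 2 * (a @ b) ** 2)

# Sampling approximation: additive error shrinks like ||v|| ||y|| / sqrt(t).
v = rng.standard_normal(200)
y = rng.standard_normal(200)
est = sampled_inner(v, y, t=20000, rng=rng)
assert abs(est - v @ y) <= 0.1 * np.linalg.norm(v) * np.linalg.norm(y)
```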
Our method relies on the following lemma, which states that sampling a vector v according to the weights of each element leads to a good approximation to v^T y for any vector y (with constant probability). Similar sampling schemes have been used for a variety of matrix approximation problems (see [26]). Lemma 3.4. Let v ∈ R^d and define p_i = v_i^2/‖v‖^2. Construct ṽ ∈ R^d such that the i-th element is v_i with probability p_i and is 0 otherwise. Select t such elements using sampling with replacement. Then, for any y ∈ R^d, ε′ > 0, c ≥ 1, t ≥ c/ε′^2, Pr[|ṽ^T y − v^T y| ≤ ε′ ‖v‖_2 ‖y‖_2] > 1 − 1/c. (10) We defer the proof to the supplementary material. The lemma implies that at query time our hash function h_E(w) can be computed while incurring a small additive error in time O(1/ε′^2), by sampling its embedding V(w) accordingly, and then cycling through only the non-zero indices of V(w) to compute u^T(−V(w)). Note that we can substantially reduce the error in the hash function computation by sampling O(1/ε′^2) elements of the vector w and then using vec(w w̃^T) as the embedding for w. However, in this case, the computational requirements increase to O(d/ε′^2). While one could alternatively use the Johnson-Lindenstrauss (JL) lemma to reduce the dimensionality of the embedding with random projections, doing so has two major difficulties: first, the (d−1)-dimensionality of a subspace represented by a hyperplane implies the random projection dimensionality must still be large for the JL lemma to hold, and second, the projection dimension is dependent on the sum of the number of database points and query hyperplanes. The latter is problematic when fielding an arbitrary number of queries over time or storing a growing database of points—both properties that are intrinsic to our target active learning application. In contrast, our sampling method is instance-dependent and incurs very little overhead for computing the hash function. Comparison to [13]. Basri et al.
define embeddings for finding nearest subspaces [13]. In particular, they define Euclidean embeddings for affine subspace queries and database points which could be used for NNQH, although they do not specifically apply it to hyperplane-to-point search in their work. Also, their embedding is not tied to LSH bounds in terms of the distance function (2), as we have shown above. Finally, our proposed instance-specific sampling strategy offers a more compact representation with the advantages discussed above. 3.4 Recap of the Hashing Approaches To summarize, we presented two locality-sensitive hashing approaches for the NNQH problem. Our first H-Hash approach defines locality-sensitivity in the context of NNQH, and then provides suitable two-bit hash functions together with a bound on retrieval time. Our second EH-Hash approach consists of a d^2-dimensional Euclidean embedding for vectors of dimension d that in turn reduces NNQH to the Euclidean space nearest neighbor problem, for which efficient search structures (including LSH) are available. While EH-Hash has better bounds than H-Hash, its hash functions are more expensive. To mitigate the expense for high-dimensional data, we use a well-justified heuristic where we randomly sample the given query embedding, reducing the query time to linear in d. Note that both of our approaches attempt to minimize d_θ(w, x) between the retrieved x and the hyperplane w. Since that distance depends only on the angle between x and w, any scaling of the vectors does not affect our methods, and we can safely treat the provided vectors as unit norm. 3.5 Application to Large-Scale Active Learning The search algorithms introduced above can be applied for any task fitting their query/database specifications. We are especially interested in their relevance for making active learning scalable.
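Concretely, the application developed in this section pairs the hash structures with the "simple margin" criterion of [8, 9, 10] (label the pool point nearest the current SVM hyperplane). A schematic selection round is sketched below; `retrieve_candidates` stands in for any NNQH routine (e.g., a union of LSH bucket lookups), and the function, its names, and the toy data are our own illustration, not the authors' code.

```python
import numpy as np

def select_next_label(w, pool, retrieve_candidates):
    """One round of 'simple margin' active selection: among retrieved
    candidates, return the index minimizing |w^T x|, falling back to an
    exhaustive scan if the hash tables return no candidates.

    w                   : current linear-SVM weight vector (bias folded in
                          by appending a constant-1 feature to each point).
    pool                : (N, d) array of unlabeled points.
    retrieve_candidates : callable mapping w to candidate row indices,
                          e.g. the union of LSH buckets w hashes into.
    """
    cand = np.fromiter(sorted(retrieve_candidates(w)) or range(len(pool)),
                       dtype=int)
    return int(cand[np.argmin(np.abs(pool[cand] @ w))])

# Tiny demo: point 2 is closest to the hyperplane, but only {0, 1} are retrieved.
pool = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
w = np.array([1.0, -1.0])
assert select_next_label(w, pool, lambda w: {0, 1}) in (0, 1)
assert select_next_label(w, pool, lambda w: set()) == 2  # fallback scans all
```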
A practical paradox with pool-based active learning algorithms is that their intended value—to reduce learning time by choosing informative examples to label first—conflicts with the real expense of applying them to very large "unprepared" unlabeled datasets. Generally methods today are tested in somewhat canned scenarios: the implementor has a moderately sized labeled dataset, and simply withholds the labels from the learner until a given point is selected, at which point the "oracle" reveals the label. In reality, one would like to deploy an active learner on a massive truly unlabeled data pool (e.g., all documents on the Web) and let it crawl for the instances that appear most valuable for the target classification task. The problem is that a scan of millions of points is rather expensive to compute exhaustively, and thus defeats the purpose of improving overall learning efficiency. Our algorithms make it possible to benefit from both massive unlabeled collections as well as actively chosen label requests. We consider the "simple margin" selection criterion for linear SVM classifiers [8, 9, 10]. Given a hyperplane classifier and an unlabeled pool of vector data U = {x_1, . . . , x_N}, the point that minimizes the distance to the current decision boundary is selected for labeling: x* = argmin_{x_i ∈ U} |w^T x_i|. Our two NNQH solutions supply exactly the hash functions needed to rapidly identify the next point to label: first we hash the unlabeled database into tables, and then at each active learning loop, we hash the current classifier w as a query.² ² The SVM bias term is handled by appending points with a 1. Note, our approach assumes linear kernels.
Figure 1: Newsgroups results.
(a) Improvements in prediction accuracy relative to the initial classifier, averaged across all 20 categories and runs. (b) Time required to perform selection. (c) Value of |w^T x| for the selected examples. Lower is better. Both of our approximate methods (H-Hash and EH-Hash) significantly outperform the passive baseline; they are nearly as accurate as ideal exhaustive active selection, yet require 1-2 orders of magnitude less time to select an example. (Best viewed in color.)
Figure 2: CIFAR-10 results. (a)-(c) Plotted as in the figure above. Our methods compare very well with the significantly more expensive exhaustive baseline. Our EH-Hash provides more accurate selection than our H-Hash (see (c)), though requires noticeably more query time (see (b)).
4 Results We demonstrate our approach applied to large-scale active learning tasks. We compare our methods (H-Hash in Sec. 3.2 and EH-Hash in Sec. 3.3) to two baselines: 1) passive learning, where the next label request is randomly selected, and 2) exhaustive active selection, where the margin criterion in (1) is computed over all unlabeled examples in order to find the true minimum. The main goal is to show our algorithms can retrieve examples nearly as well as the exhaustive approach, but with substantially greater efficiency. Datasets and implementation details. We use three publicly available datasets. 20 Newsgroups consists of 20,000 documents from 20 newsgroup categories. We use the provided 61,118-d bag-of-words features, and a test set of 7,505 documents. CIFAR-10 [27] consists of 60,000 images from 10 categories.
It is a manually labeled subset of the 80 Million Tiny Image dataset [28], which was formed by searching the Web for all English nouns and lacks ground truth labels. We use the provided train and test splits of 50K and 10K images, respectively. Tiny-1M consists of the first 1,000,000 (unlabeled) images from [28]. For both CIFAR-10 and Tiny-1M, we use the provided 384-d GIST descriptors as features. For all datasets, we train a linear SVM in the one-vs-all setting using a randomly selected labeled set (5 examples per class), and then run active selection for 300 iterations. We average results across five such runs. We fix k = 300, N^ρ = 500, ε′ = 0.01. Newsgroups documents results. Figure 1 shows the results on 20 Newsgroups, starting with the learning curves for all four approaches (a). The active learners (exact and approximate) have the steepest curves, indicating that they are learning more effectively from the chosen labels compared to the random baseline. Both of our hashing methods perform similarly to the exhaustive selection, yet require scanning an order of magnitude fewer examples (b). Note, Random requires ∼0 time. Fig. 1(c) shows the actual values of |w^T x| for the selected examples over all iterations, categories, and runs; in line with our methods' guarantees, they select points close to those found with exhaustive search. We also observe the expected trade-off: H-Hash is more efficient, while EH-Hash provides better results (only slightly better for this smaller dataset). CIFAR-10 tiny image results. Figure 2 shows the same set of results on CIFAR-10.
The trends are mostly similar to the above, although the learning task is more difficult on this data, narrowing the margin between active and random.
Figure 3: (a) First seven examples selected per method when learning the CIFAR-10 Airplane class. (b) Improvements in prediction accuracy as a function of the total time taken, including both selection and labeling time. By minimizing both selection and labeling time, our methods provide the best accuracy per unit time.
Figure 4: Tiny-1M results. (a) Error of examples selected. (b) Time required. (c) Examples selected by EH-Hash among 1M candidates in the first nine iterations when learning the Airplane and Automobile classes.
Averaged over all classes, we happen to outperform exhaustive selection (Fig. 2(a)); this can happen since there is no guarantee that the best active choice will help test accuracy, and it also reflects the wider variation across per-class results. The boxplots in (c) more directly show the hashing methods are behaving as expected. Both (b) and (c) illustrate their tradeoffs: EH-Hash has stronger guarantees than H-Hash (and thus retrieves lower |w^T x| values), but is more expensive. Figure 3(a) shows example image selection results; both exhaustive search and our hashing methods manage to choose images useful for learning about airplanes/non-airplanes.
Figure 3(b) shows the prediction accuracy plotted against the total time taken per iteration, which includes both selection and labeling time, for both datasets. We set the labeling time per instance to 1 and 5 seconds for the Newsgroups and Tiny image datasets, respectively. (Note, however, that these could vary in practice depending on the difficulty of the instance.) These results best show the advantage of our approximate methods: accounting for both types of cost inherent to training the classifier, they outperform both exhaustive and random selection in terms of the accuracy gains per unit time. While exhaustive active selection suffers because of its large selection time, random selection suffers because it wastes expensive labeling time on irrelevant examples. Our algorithms provide the best accuracy gains by minimizing both selection and labeling time. Tiny-1M results. Finally, to demonstrate the practical capability of our hyperplane hashing approach, we perform active selection on the one million tiny image set. We initialize the classifier with 50 examples from CIFAR-10. The 1M set lacks any labels, making this a “live” test of active learning (we ourselves annotated whatever the methods selected). We use our EH-Hash method, since it offers stronger performance. Even on this massive collection, our method’s selections are very similar in quality to the exhaustive method (see Fig. 4(a)), yet require orders of magnitude less time (b). The images (c) show the selections made from this large pool during the “live” labeling test; among all one million unlabeled examples (nearly all of which likely belong to one of the other 1000s of classes) our method retrieves seemingly relevant instances. To our knowledge, this experiment exceeds any previous active selection results in the literature in terms of the scale of the unlabeled pool. Conclusions. We introduced two methods for the NNQH search problem. 
Both permit efficient large-scale search for points near to a hyperplane, and experiments with three datasets clearly demonstrate the practical value for active learning with massive unlabeled pools. For future work, we plan to further explore more accurate hash functions for our H-Hash scheme and also investigate sub-linear time methods for non-linear kernel-based active learning. This work is supported in part by DARPA CSSG, NSF EIA-0303609, and the Luce Foundation.
References
[1] J. Friedman, J. Bentley, and A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977.
[2] J. Uhlmann. Satisfying General Proximity / Similarity Queries with Metric Trees. Information Processing Letters, 40:175–179, 1991.
[3] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th Intl Conf. on Very Large Data Bases, 1999.
[4] A. Andoni and P. Indyk. Near-Optimal Hashing Algorithms for Near Neighbor Problem in High Dimensions. In FOCS, 2006.
[5] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In STOC, 2002.
[6] Y. Weiss, A. Torralba, and R. Fergus. Spectral Hashing. In NIPS, 2008.
[7] B. Kulis and K. Grauman. Kernelized Locality-Sensitive Hashing for Scalable Image Search. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2009.
[8] S. Tong and D. Koller. Support Vector Machine Active Learning with Applications to Text Classification. In Proceedings of International Conference on Machine Learning, 2000.
[9] G. Schohn and D. Cohn. Less is More: Active Learning with Support Vector Machines. In Proceedings of International Conference on Machine Learning, 2000.
[10] C. Campbell, N. Cristianini, and A. Smola. Query Learning with Large Margin Classifiers. In Proceedings of International Conference on Machine Learning, 2000.
[11] G. Shakhnarovich, P. Viola, and T. Darrell.
Fast Pose Estimation with Parameter-Sensitive Hashing. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2003.
[12] R. Salakhutdinov and G. Hinton. Semantic Hashing. In Proceedings of the SIGIR Workshop on Information Retrieval and Applications of Graphical Models, 2007.
[13] R. Basri, T. Hassner, and L. Zelnik-Manor. Approximate Nearest Subspace Search. PAMI, 2010.
[14] A. Magen. Dimensionality Reductions that Preserve Volumes and Distance to Affine Spaces, and their Algorithmic Applications. In Randomization and Approximation Techniques in Computer Science, 2002.
[15] A. Andoni, P. Indyk, R. Krauthgamer, and H. L. Nguyen. Approximate Line Nearest Neighbor in High Dimensions. In SODA, 2009.
[16] B. Settles. Active Learning Literature Survey. TR 1648, University of Wisconsin, 2009.
[17] E. Chang, S. Tong, K. Goh, and C. Chang. Support Vector Machine Concept-Dependent Active Learning for Image Retrieval. IEEE Transactions on Multimedia, 2005.
[18] M. K. Warmuth, J. Liao, G. Rätsch, M. Mathieson, S. Putta, and C. Lemmen. Active Learning with Support Vector Machines in the Drug Discovery Process. J. Chem. Inf. Comput. Sci., 43:667–673, 2003.
[19] A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast Kernel Classifiers with Online and Active Learning. Journal of Machine Learning Research (JMLR), 6:1579–1619, September 2005.
[20] N. Panda, K. Goh, and E. Chang. Active Learning in Very Large Image Databases. Journal of Multimedia Tools and Applications: Special Issue on Computer Vision Meets Databases, 31(3), December 2006.
[21] W. Zhao, J. Long, E. Zhu, and Y. Liu. A Scalable Algorithm for Graph-Based Active Learning. In Frontiers in Algorithmics, 2008.
[22] R. Segal, T. Markowitz, and W. Arnold. Fast Uncertainty Sampling for Labeling Large E-mail Corpora. In Conference on Email and Anti-Spam, 2006.
[23] I. Tsang, J. Kwok, and P.-M. Cheung. Core Vector Machines: Fast SVM Training on Very Large Data Sets.
Journal of Machine Learning Research, 6:363–392, 2005. [24] P. Indyk and N. Thaper. Fast Image Retrieval via Embeddings. In Intl Wkshp on Stat. and Comp. Theories of Vision, 2003. [25] M. Goemans and D. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. JACM, 42(6):1115–1145, 1995. [26] R. Kannan and S. Vempala. Spectral Algorithms. Foundations and Trends in Theoretical Computer Science, 4(3-4):157–288, 2009. [27] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009. [28] A. Torralba, R. Fergus, and W. T. Freeman. 80 million Tiny Images: a Large Dataset for Non-Parametric Object and Scene Recognition. PAMI, 30(11):1958–1970, 2008. 9
2010
Bootstrapping Apprenticeship Learning

Abdeslam Boularias, Department of Empirical Inference, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany, abdeslam.boularias@tuebingen.mpg.de

Brahim Chaib-Draa, Department of Computer Science, Laval University, Quebec G1V 0A6, Canada, chaib@damas.ift.ulaval.ca

Abstract

We consider the problem of apprenticeship learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is maximizing a utility function that is a linear combination of state-action features. Most IRL algorithms use a simple Monte Carlo estimation to approximate the expected feature counts under the expert's policy. In this paper, we show that the quality of the learned policies is highly sensitive to the error in estimating the feature counts. To reduce this error, we introduce a novel approach for bootstrapping the demonstration by assuming that (i) the expert is (near-)optimal, and (ii) the dynamics of the system are known. Empirical results on gridworld and car racing problems show that our approach is able to learn good policies from a small number of demonstrations.

1 Introduction

Modern robots are designed to perform complicated planning and control tasks, such as manipulating objects, navigating in outdoor environments, and driving in urban settings. Unfortunately, manually programming these tasks is almost infeasible in practice due to the large number of states. Markov Decision Processes (MDPs) provide an efficient tool for handling such tasks with a little help from an expert. The expert's help consists in simply specifying a reward function. However, in many practical problems, even specifying a reward function is not easy. In fact, it is often easier to demonstrate examples of a desired behavior than to define a reward function (Ng & Russell, 2000).
Learning policies from demonstration, a.k.a. apprenticeship learning, is a technique that has been widely used in robotics. An efficient approach to apprenticeship learning, known as Inverse Reinforcement Learning (IRL) (Ng & Russell, 2000; Abbeel & Ng, 2004), consists in recovering a reward function under which the policy demonstrated by an expert is near-optimal, rather than directly mimicking the expert's actions. The learned reward is then used for finding an optimal policy. Consequently, the expert's actions can be predicted in states that have not been encountered during the demonstration. Unfortunately, as already pointed out by Abbeel & Ng (2004), recovering a reward function is an ill-posed problem. In fact, the expert's policy can be optimal under an infinite number of reward functions. Most of the work on apprenticeship learning via IRL has focused on solving this particular problem by using different types of regularization and loss functions (Ratliff et al., 2006; Ramachandran & Amir, 2007; Syed & Schapire, 2008; Syed et al., 2008).

In this paper, we focus on another important problem occurring in IRL. IRL-based algorithms rely on the assumption that the reward function is a linear combination of state-action features. Therefore, the value function of any policy is a linear combination of the expected discounted frequencies (counts) of the state-action features. In particular, the value function of the expert's policy is approximated by a linear combination of the empirical averages of the features, estimated from the demonstration (the trajectories). In practice, this method works efficiently only if the number of examples is sufficiently large to cover all the states, or if the dynamics of the system are nearly deterministic. For tasks involving systems with stochastic dynamics and a limited number of available examples, we propose an alternative method for approximating the expected frequencies of the features under the expert's policy.
Our approach takes advantage of the fact that the expert's partially demonstrated policy is near-optimal, and generalizes the expert's policy beyond the states that appeared in the demonstration. We show that this technique can be efficiently used to improve the performance of two known IRL algorithms, namely Maximum Margin Planning (MMP) (Ratliff et al., 2006) and Linear Programming Apprenticeship Learning (LPAL) (Syed et al., 2008).

2 Preliminaries

Formally, a finite-state Markov Decision Process (MDP) is a tuple $(S, A, \{T^a\}, R, \alpha, \gamma)$, where: $S$ is a set of states; $A$ is a set of actions; $T^a$ is a transition matrix defined as $\forall s, s' \in S, a \in A: T^a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$; $R$ is a reward function ($R(s, a)$ is the reward associated with the execution of action $a$ in state $s$); $\alpha$ is the initial state distribution; and $\gamma$ is a discount factor. We denote by MDP\R a Markov Decision Process without a reward function, i.e. a tuple $(S, A, \{T^a\}, \alpha, \gamma)$. We assume that the reward function $R$ is given by a linear combination of $k$ feature vectors $f_i$ with weights $w_i$: $\forall s \in S, \forall a \in A: R(s, a) = \sum_{i=0}^{k-1} w_i f_i(s, a)$. A deterministic policy $\pi$ is a function that returns an action $\pi(s)$ for each state $s$. A stochastic policy $\pi$ is a probability distribution over the action to be executed in each state, defined as $\pi(s, a) = \Pr(a_t = a \mid s_t = s)$. The value $V(\pi)$ of a policy $\pi$ is the expected sum of discounted rewards received when following $\pi$, i.e. $V(\pi) = E[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid \alpha, \pi, T]$. An optimal policy $\pi$ is one satisfying $\pi = \arg\max_{\pi} V(\pi)$. The occupancy $\mu_\pi$ of a policy $\pi$ is the discounted state-action visit distribution, defined as $\mu_\pi(s, a) = E[\sum_{t=0}^{\infty} \gamma^t \delta_{s_t,s}\, \delta_{a_t,a} \mid \alpha, \pi, T]$, where $\delta$ is the Kronecker delta. We also use $\mu_\pi(s)$ to denote $\sum_a \mu_\pi(s, a)$.
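As a concrete numerical illustration of these definitions (a minimal sketch on a hypothetical 2-state, 2-action MDP; all numbers are invented), the occupancy measure can be accumulated by iterating the discounted state distribution, and the value then recovered as $V(\pi) = \sum_{s,a} \mu_\pi(s,a) R(s,a)$:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (all numbers invented for illustration).
S, A, gamma = 2, 2, 0.9
T = np.zeros((A, S, S))                    # T[a, s, s'] = Pr(s' | s, a)
T[0] = [[0.8, 0.2], [0.1, 0.9]]
T[1] = [[0.3, 0.7], [0.6, 0.4]]
alpha = np.array([1.0, 0.0])               # initial state distribution
pi = np.array([[0.5, 0.5], [1.0, 0.0]])    # stochastic policy pi[s, a]
R = np.array([[1.0, -0.5], [-0.5, 1.0]])   # arbitrary reward table R[s, a]

# Accumulate mu_pi(s) = sum_t gamma^t Pr(s_t = s), then split across actions.
d, mu_s = alpha.copy(), np.zeros(S)
for _ in range(1000):                      # gamma^1000 is negligible
    mu_s += d
    d = gamma * np.einsum('s,sa,ast->t', d, pi, T)
mu_sa = mu_s[:, None] * pi                 # occupancy mu_pi(s, a)

V_from_occupancy = (mu_sa * R).sum()

# Cross-check with direct policy evaluation: V = alpha^T (I - gamma P_pi)^{-1} r_pi.
P_pi = np.einsum('sa,ast->st', pi, T)
r_pi = (pi * R).sum(axis=1)
V_direct = alpha @ np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
assert abs(V_from_occupancy - V_direct) < 1e-8
assert abs(mu_sa.sum() - 1.0 / (1.0 - gamma)) < 1e-8   # total discounted mass
```

Note that the total discounted mass of any occupancy measure is $1/(1-\gamma)$, which is why $\mu_\pi$ can be read as a (scaled) visit distribution.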
The following linear constraints, known as the Bellman-flow constraints, are necessary and sufficient for defining an occupancy measure of a policy:

$$\mu_\pi(s) = \alpha(s) + \gamma \sum_{s' \in S} \sum_{a \in A} \mu_\pi(s', a)\, T^a(s', s), \qquad \sum_{a \in A} \mu_\pi(s, a) = \mu_\pi(s), \qquad \mu_\pi(s, a) \geq 0 \qquad (1)$$

Since a policy $\pi$ is well-defined by its occupancy measure $\mu_\pi$, one can use $\pi$ and $\mu_\pi$ interchangeably to denote a policy. The set of feasible occupancy measures is denoted by $G$. The frequency of a feature $f_i$ for a policy $\pi$ is given by $v_{i,\pi} = F(i, \cdot)\,\mu_\pi$, where $F$ is a $k \times |S||A|$ feature matrix such that $F(i, (s,a)) = f_i(s,a)$. Using this definition, the value of a policy $\pi$ can be written as a linear function of the frequencies: $V(\pi) = w^T F \mu_\pi = w^T v_\pi$, where $v_\pi$ is the vector of the $v_{i,\pi}$. Therefore, the value of a policy is completely determined by the frequencies (or counts) of the features $f_i$.

3 Apprenticeship Learning

3.1 Overview

The aim of apprenticeship learning is to find a policy $\pi$ that is at least as good as a policy $\pi_E$ demonstrated by an expert, i.e. $V(\pi) \geq V(\pi_E)$. The value functions of $\pi$ and $\pi_E$ cannot be directly compared unless a reward function is provided. To solve this problem, Ng & Russell (2000) proposed to first learn a reward function, assuming that the expert is optimal, and then use it to recover the expert's complete policy. However, the problem of learning a reward function given an optimal policy is ill-posed (Abbeel & Ng, 2004). In fact, a large class of reward functions, including all constant functions for instance, may lead to the same optimal policy. To overcome this problem, Abbeel & Ng (2004) did not consider recovering a reward function; instead, their algorithm returns a policy $\pi$ with a bounded loss in the value function, i.e. $\|V(\pi) - V(\pi_E)\| \leq \epsilon$, where the value is calculated by using the worst-case reward function.
This property is derived from the fact that when the frequencies of the features under two policies match, the cumulative rewards of the two policies match as well, assuming that the reward is a linear function of these features. In the next two subsections, we briefly describe two algorithms for apprenticeship learning via IRL. The first one, known as Maximum Margin Planning (MMP) (Ratliff et al., 2006), is a robust algorithm based on learning a reward function under which the expert's demonstrated actions are optimal. The second one, known as Linear Programming Apprenticeship Learning (LPAL) (Syed et al., 2008), is a fast algorithm that directly returns a policy with a bounded loss in the value.

3.2 Maximum Margin Planning

Maximum Margin Planning (MMP) returns a vector of reward weights $w$ such that the value of the expert's policy, $w^T F \mu_{\pi_E}$, is higher than the value of an alternative policy, $w^T F \mu$, by a margin that scales with the number of the expert's actions that differ from the actions of the alternative policy. This criterion is explicitly specified in the cost function minimized by the algorithm:

$$c_q(w) = \left( \max_{\mu \in G} (w^T F + l)\mu - w^T F \mu_{\pi_E} \right)^q + \frac{\lambda}{2} \|w\|^2 \qquad (2)$$

where $q \in \{1, 2\}$ defines the slack penalization, $\lambda$ is a regularization parameter, and $l$ is a deviation cost vector, which can be defined as $l(s, a) = 1 - \pi_E(s, a)$. A policy maximizing the cost-augmented reward vector $(w^T F + l)$ is almost completely different from $\pi_E$, since an additional reward $l(s, a)$ is given for the actions that are different from those of the expert. This algorithm minimizes the difference between the value divergence $w^T F \mu_{\pi_E} - w^T F \mu$ and the policy divergence $l\mu$. The cost function $c_q$ is convex, but nondifferentiable. Ratliff et al. (2006) showed that $c_q$ can be minimized by using a subgradient method. For a given reward weight $w$, a subgradient $g_w^q$ is given by:

$$g_w^q = q \left( (w^T F + l)\mu^+ - w^T F \mu_{\pi_E} \right)^{q-1} F \Delta_w \mu_{\pi_E} + \lambda w \qquad (3)$$

where $\mu^+ = \arg\max_{\mu \in G}(w^T F + l)\mu$ and $\Delta_w \mu_{\pi_E} = \mu^+ - \mu_{\pi_E}$.
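To make the update concrete, here is a minimal sketch of the subgradient method of Equations (2)-(3) with $q = 1$ on a hypothetical 4-state chain MDP; the expert, the indicator features, the step sizes, and all other numbers are invented for illustration, and the loss-augmented best response $\mu^+$ is computed by plain value iteration:

```python
import numpy as np

# Minimal sketch of the MMP subgradient method (q = 1), eqs. (2)-(3), on a
# hypothetical 4-state chain MDP. The expert, indicator features, and step
# sizes are all invented; the loss-augmented best response mu+ is found by
# plain value iteration.
S, A, gamma = 4, 2, 0.9
T = np.zeros((A, S, S))                 # action 0 = left, action 1 = right
for s in range(S):
    T[0, s, max(s - 1, 0)] = 1.0
    T[1, s, min(s + 1, S - 1)] = 1.0
alpha = np.ones(S) / S
F = np.repeat(np.eye(S), A, axis=1)     # f_i(s, a) = 1{s == i}, shape (k, S*A)

def occupancy(pol):
    """Solve the Bellman-flow constraints (1) for a deterministic policy."""
    P = np.array([T[pol[s], s] for s in range(S)])
    mu_s = np.linalg.solve(np.eye(S) - gamma * P.T, alpha)
    mu = np.zeros((S, A))
    mu[np.arange(S), pol] = mu_s
    return mu.reshape(-1)

def best_policy(reward_sa):
    """Value iteration; returns the greedy policy for an (s, a) reward table."""
    V = np.zeros(S)
    for _ in range(300):
        Q = reward_sa + gamma * np.einsum('ast,t->sa', T, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

mu_E = occupancy(np.full(S, 1))         # expert always moves right (toward s = 3)
l = np.ones((S, A)); l[:, 1] = 0.0      # deviation cost l(s, a) = 1 - pi_E(s, a)

lam, eta, w = 0.1, 0.5, np.zeros(S)
for it in range(50):
    r_aug = (w @ F).reshape(S, A) + l                       # cost-augmented reward
    mu_plus = occupancy(best_policy(r_aug))                 # mu+ in eq. (3)
    w -= eta / (1 + it) * (F @ (mu_plus - mu_E) + lam * w)  # subgradient step

assert w.argmax() == 3                  # learned reward peaks at the expert's goal
```

In this toy run the learned weight vector ends up largest on the chain's rightmost state, which is exactly the state the expert's always-right policy concentrates on.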
3.3 Linear Programming Apprenticeship Learning

Linear Programming Apprenticeship Learning (LPAL) is based on the following observation: if the reward weights are positive and sum to 1, then $V(\pi) \geq V(\pi_E) + \min_i [v_{i,\pi} - v_{i,\pi_E}]$ for any policy $\pi$. LPAL consists in finding a policy that maximizes the margin $\min_i [v_{i,\pi} - v_{i,\pi_E}]$. The maximal margin is found by solving the following linear program:

$$\begin{aligned}
\max_{v,\, \mu_\pi} \quad & v \\
\text{subject to} \quad & \forall i \in \{0, \ldots, k-1\}: \; v \leq \underbrace{\sum_{s \in S}\sum_{a \in A} \mu_\pi(s,a)\, f_i(s,a)}_{v_{i,\pi}} - \underbrace{\sum_{s \in S}\sum_{a \in A} \mu_{\pi_E}(s,a)\, f_i(s,a)}_{v_{i,\pi_E}} \\
& \mu_\pi(s) = \alpha(s) + \gamma \sum_{s' \in S}\sum_{a \in A} \mu_\pi(s', a)\, T^a(s', s), \quad \sum_{a \in A}\mu_\pi(s,a) = \mu_\pi(s), \quad \mu_\pi(s,a) \geq 0
\end{aligned} \qquad (4)$$

The last three constraints in this linear program correspond to the Bellman-flow constraints (Equation (1)) defining $G$, the feasible set of $\mu_\pi$. The learned policy $\pi$ is given by:

$$\pi(s,a) = \frac{\mu_\pi(s,a)}{\sum_{a' \in A} \mu_\pi(s, a')}$$

3.4 Approximating feature frequencies

Notice that both MMP and LPAL require the knowledge of the frequencies $v_{i,\pi_E} \stackrel{\text{def}}{=} F(i, \cdot)\,\mu_{\pi_E}$. These frequencies can be analytically calculated (using the Bellman-flow constraints) only if $\pi_E$ is completely specified. Given a set of $M$ demonstrated trajectories $t_m = (s_1^m, a_1^m, \ldots, s_H^m, a_H^m)$, the frequencies $v_{i,\pi_E}$ are estimated as:

$$\hat{v}_{i,\pi_E} = \frac{1}{M} \sum_{m=1}^{M} \sum_{t=1}^{H} \gamma^t f_i(s_t^m, a_t^m) \qquad (5)$$

There are nevertheless several problems with this approximation. First, the estimated frequencies $\hat{v}_{i,\pi_E}$ can be very different from the true ones when the demonstration trajectories are scarce. Second, the frequencies $\hat{v}_{i,\pi_E}$ are estimated for a finite horizon $H$, whereas the frequencies $v_{i,\pi}$ used in the objective functions (Equations (2) and (4)) are calculated for an infinite horizon (Equation (1)). In practice, these two values can be too different to be compared as done in these cost functions. Finally, the frequencies $v_{i,\pi_E}$ are a function of both a policy and the transition probabilities; the empirical estimate of $v_{i,\pi_E}$ does not take advantage of the known transition probabilities.
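The gap between the Monte Carlo estimate (5) and the exact frequencies can be illustrated numerically. The sketch below uses a hypothetical 3-state MDP with invented numbers, and indexes trajectories from $t = 0$ so that the discounting matches the infinite-horizon definition; it computes the exact $v_{i,\pi_E}$ from the Bellman-flow constraints and compares it to the estimator for scarce and plentiful demonstrations:

```python
import numpy as np

# Minimal sketch comparing the Monte Carlo estimator (5) with the exact
# feature frequencies on a hypothetical 3-state MDP (all numbers invented).
rng = np.random.default_rng(1)
S, A, gamma, H = 3, 2, 0.9, 50
T = rng.random((A, S, S)); T /= T.sum(axis=2, keepdims=True)
alpha = np.array([0.5, 0.3, 0.2])
pi_E = rng.random((S, A)); pi_E /= pi_E.sum(axis=1, keepdims=True)
# One indicator feature per (s, a) pair, so v is just the occupancy itself.

# Exact frequencies from the Bellman-flow constraints (1).
P = np.einsum('sa,ast->st', pi_E, T)
mu_s = np.linalg.solve(np.eye(S) - gamma * P.T, alpha)
v_exact = (mu_s[:, None] * pi_E).reshape(-1)

def mc_estimate(M):
    """Average discounted feature counts over M sampled trajectories, eq. (5)."""
    v = np.zeros(S * A)
    for _ in range(M):
        s = rng.choice(S, p=alpha)
        for t in range(H):                 # discount gamma^t, t = 0..H-1
            a = rng.choice(A, p=pi_E[s])
            v[s * A + a] += gamma ** t
            s = rng.choice(S, p=T[a, s])
    return v / M

err_few = np.abs(mc_estimate(3) - v_exact).sum()      # scarce demonstrations
err_many = np.abs(mc_estimate(500) - v_exact).sum()   # plentiful demonstrations
assert err_many < err_few    # the estimate improves with more trajectories
```

With only a handful of trajectories the total estimation error is large, which is the failure mode discussed above; the finite-horizon truncation also introduces a small bias of $\gamma^H/(1-\gamma)$ in the total mass.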
4 Reward loss in Maximum Margin Planning

[Figure 1: Reward loss in MMP with approximate frequencies $\hat{v}_{\pi_E}$. We indicate by $v_{\pi_E}$ (resp. $\hat{v}_{\pi_E}$) the linear function defined by the vector $v_{\pi_E}$ (resp. $\hat{v}_{\pi_E}$).]

To show the effect of the error in the estimated feature frequencies on the quality of the learned rewards, we analyze the distance between the vector of reward weights $\hat{w}$ returned by MMP with estimated frequencies $\hat{v}_{\pi_E} = F\hat{\mu}_{\pi_E}$, calculated from the examples by using Equation (5), and the vector $w_E$ returned by MMP with accurate frequencies $v_{\pi_E} = F\mu_{\pi_E}$, calculated by using Equation (1) with the full policy $\pi_E$. We adopt the following notation: $\Delta v_\pi = \hat{v}_{\pi_E} - v_{\pi_E}$, $\Delta w = \hat{w} - w_E$, and $V_l(w) = \max_{\mu \in G}(w^T F + l)\mu$, and we consider $q = 1$. The following proposition shows how the reward error $\Delta w$ is related to the frequency error $\Delta v_\pi$. Because the cost function of MMP is piecewise defined, one cannot find a closed-form relation between $\Delta w$ and $\Delta v_\pi$. However, we show that for any $\hat{w} \in \mathbb{R}^k$ there is a monotonically decreasing function $f$ such that for any $\epsilon \in \mathbb{R}^+$, if $\|\Delta v_\pi\|_2 < f(\epsilon)$ then $\|\Delta w\|_2 \leq \epsilon$.

Proposition 1 Let $\epsilon \in \mathbb{R}^+$. If, for all $w \in \mathbb{R}^k$ such that $\|w - \hat{w}\|_2 = \epsilon$, the following condition is verified:

$$\|\Delta v_\pi\|_2 < \frac{V_l(w) - V_l(\hat{w}) + (\hat{w} - w)^T \hat{v}_{\pi_E} + \frac{\lambda}{2}\left(\|w\|^2 - \|\hat{w}\|^2\right)}{\epsilon}$$

then $\|\Delta w\|_2 \leq \epsilon$.

Proof The condition stated in the proposition implies:

$$\|\hat{w} - w\|_2 \, \|\Delta v_\pi\|_2 < V_l(w) - V_l(\hat{w}) + (\hat{w} - w)^T \hat{v}_{\pi_E} + \frac{\lambda\left(\|w\|^2 - \|\hat{w}\|^2\right)}{2}$$
$$\Rightarrow \; (\hat{w} - w)^T \Delta v_\pi < V_l(w) - V_l(\hat{w}) + (\hat{w} - w)^T \hat{v}_{\pi_E} + \frac{\lambda\left(\|w\|^2 - \|\hat{w}\|^2\right)}{2} \quad \text{(Hölder)}$$
$$\Rightarrow \; V_l(\hat{w}) - \left( \hat{w}^T v_{\pi_E} - \frac{\lambda}{2}\|\hat{w}\|^2 \right) < V_l(w) - \left( w^T v_{\pi_E} - \frac{\lambda}{2}\|w\|^2 \right)$$

In other terms, the point $\left(\hat{w}^T v_{\pi_E} - \frac{\lambda}{2}\|\hat{w}\|^2\right)$ is closer to the surface $V_l$ than any other point $\left(w^T v_{\pi_E} - \frac{\lambda}{2}\|w\|^2\right)$, where $w$ is a point on the sphere centered at $\hat{w}$ with radius $\epsilon$. Since the function $V_l$ is convex and $\left(w_E^T v_{\pi_E} - \frac{\lambda}{2}\|w_E\|^2\right)$ is by definition the closest point to the surface $V_l$, $w_E$ must lie inside the ball centered at $\hat{w}$ with radius $\epsilon$.
Therefore, $\|w_E - \hat{w}\|_2 \leq \epsilon$ and thus $\|\Delta w\|_2 \leq \epsilon$. □

Consequently, the reward loss $\|\Delta w\|_2$ approaches zero as the error of the estimated feature frequencies $\|\Delta v_\pi\|_2$ approaches zero. A simpler bound can easily be derived given admissible heuristics on $V_l$.

Corollary Let $\underline{V}_l$ and $\overline{V}_l$ be respectively a lower and an upper bound on $V_l$; then Proposition 1 holds if $V_l(w) - V_l(\hat{w})$ is replaced by $\underline{V}_l(w) - \overline{V}_l(\hat{w})$.

Figure 1 illustrates the divergence from the optimal reward weight $w_E$ when approximate frequencies are used. The error is not a continuous function of $\Delta v_\pi$ when the cost function is not regularized, because the vector returned by MMP is always a fringe point. Informally, the error is proportional to the maximum subgradient of the function $V_l - v_{\pi_E}$ at the fringe point $w_E$.

5 Bootstrapping Maximum Margin Planning

The feature frequency error $\Delta v_\pi$ can be significantly reduced by calculating $\hat{v}_{\pi_E}$ from the known transition function by solving the flow equations (1), instead of using the Monte Carlo estimator (Equation (5)). However, this cannot be done unless the complete expert's policy $\pi_E$ is provided. Assuming that the expert's policy $\pi_E$ is optimal and deterministic, the value $w^T F \mu_{\pi_E}$ in Equation (2) can be replaced by $\max_{\mu \in G_{\pi_E}} w^T F \mu$, the value of the optimal policy, according to the current reward weight $w$, that selects the same actions as the expert in all the states that occurred in the demonstration. The cost function of the bootstrapped Maximum Margin Planning becomes:

$$c_q(w) = \left( \max_{\mu_1 \in G} (w^T F + l)\mu_1 - \max_{\mu_2 \in G_{\pi_E}} w^T F \mu_2 \right)^q + \frac{\lambda}{2}\|w\|^2 \qquad (6)$$

where $G_{\pi_E}$ is the set of vectors $\mu_\pi$ subject to the following modified Bellman-flow constraints:

$$\mu_\pi(s) = \alpha(s) + \gamma \sum_{s' \in S_e} \mu_\pi(s') \sum_{a \in A} \pi_E(s', a)\, T^a(s', s) + \gamma \sum_{s' \in S \setminus S_e} \sum_{a \in A} \mu_\pi(s', a)\, T^a(s', s),$$
$$\sum_{a \in A} \mu_\pi(s, a) = \mu_\pi(s), \qquad \mu_\pi(s, a) \geq 0 \qquad (7)$$

$S_e$ is the set of states encountered in the demonstrations, where the expert's policy is known. Unfortunately, the new cost function (Equation (6)) is not necessarily convex.
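The constrained maximization over $G_{\pi_E}$ amounts to solving the MDP while clamping the action in every demonstrated state to the expert's choice and leaving it free elsewhere. A minimal sketch using constrained value iteration (hypothetical 5-state chain MDP; the rewards, transitions, and demonstrated set $S_e$ are all invented):

```python
import numpy as np

# Minimal sketch of the inner maximization over G_{pi_E} in eqs. (6)-(7):
# value iteration on a hypothetical 5-state chain MDP in which the action is
# clamped to the expert's demonstrated choice on the seen states S_e and left
# free elsewhere. Rewards, transitions, and S_e are all invented.
S, A, gamma = 5, 2, 0.9
T = np.zeros((A, S, S))                     # action 0 = left, action 1 = right
for s in range(S):
    T[0, s, max(s - 1, 0)] = 1.0
    T[1, s, min(s + 1, S - 1)] = 1.0
reward = np.zeros((S, A)); reward[S - 1, :] = 1.0   # w^T F as an (s, a) table
seen = {0: 1, 1: 1}                         # expert demonstrated "right" in s = 0, 1

def constrained_best_response(reward_sa, seen):
    """argmax over G_{pi_E}: free action choice on unseen states only (eq. 7)."""
    V = np.zeros(S)
    for _ in range(300):
        Q = reward_sa + gamma * np.einsum('ast,t->sa', T, V)
        V = np.array([Q[s, seen[s]] if s in seen else Q[s].max()
                      for s in range(S)])
    pol = Q.argmax(axis=1)
    for s, a in seen.items():
        pol[s] = a                          # keep the expert's actions on S_e
    return pol

pol = constrained_best_response(reward, seen)
assert pol.tolist() == [1, 1, 1, 1, 1]      # clamped on S_e, greedy elsewhere
```

Here the policy agrees with the expert on the demonstrated states and greedily heads toward the rewarded state on the remaining ones, which is exactly the generalization of the expert's policy that bootstrapped MMP compares against.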
In fact, it corresponds to a margin between two convex functions: the value of the bootstrapped expert's policy, $\max_{\mu \in G_{\pi_E}} w^T F \mu$, and the value of the best alternative policy, $\max_{\mu \in G}(w^T F + l)\mu$. Yet, a locally optimal solution of this modified cost function can be found by using the same subgradient as in Equation (3), replacing $\mu_{\pi_E}$ by $\arg\max_{\mu \in G_{\pi_E}} w^T F \mu$. In practice, as we will show in the experimental analysis, the solution returned by the bootstrapped MMP outperforms the solution of MMP where the expert's frequencies are calculated without taking into account the known transition probabilities. This improvement is particularly pronounced in highly stochastic environments. The computational cost of minimizing this modified cost function is twice that of MMP, since two optimal policies are found at each iteration.

In the remainder of this section, we provide a theoretical analysis of the cost function given by Equation (6). For the sake of simplicity, we consider $q = 1$ and $\lambda = 0$.

Proposition 2 The cost function defined by Equation (6) has at most $\frac{|A|^{|S|}}{|A|^{|S_e|}}$ different local minima.

Proof If $q = 1$ and $\lambda = 0$, then the cost $c_q(w)$ corresponds to a distance between the convex and piecewise linear functions $\max_{\mu \in G}(w^T F + l)\mu$ and $\max_{\mu \in G_{\pi_E}} w^T F \mu$. Therefore, for any vector $\mu' \in G_{\pi_E}$, the function $c_q$ is monotone on the interval of $w$ where $\mu'$ is optimal, i.e. where $w^T F \mu' = \max_{\mu \in G_{\pi_E}} w^T F \mu$. Consequently, the number of local minima of the function $c_q$ is at most equal to the number of optimal vectors $\mu$ in $G_{\pi_E}$, which is upper bounded by the number of deterministic policies defined on $S \setminus S_e$, i.e. by $|A|^{|S| - |S_e|}$. □

Consequently, the number of different local minima of the function $c_q$ decreases as the number of states covered by the demonstration increases. Ultimately, the function $c_q$ becomes convex when the demonstration covers all the possible states.

Theorem 1 If there exists a reward weight vector $w^* \in \mathbb{R}^k$ such that the expert's policy $\pi_E$ is the only optimal policy under $w^*$, i.e.
$\arg\max_{\mu \in G} w^{*T} F \mu = \{\mu_{\pi_E}\}$, then there exists $\alpha > 0$ such that: (i) the expert's policy $\pi_E$ is the only optimal policy with $\alpha w^*$, and (ii) $c_q(\alpha w^*)$ is a local minimum of the function $c_q$ defined in Equation (6).

Proof The set of subgradients of the function $c_q$ at a point $w \in \mathbb{R}^k$, denoted by $\nabla_w c_q(w)$, corresponds to the vectors $F\mu' - F\mu''$, with $\mu' \in \arg\max_{\mu \in G}(w^T F + l)\mu$ and $\mu'' \in \arg\max_{\mu \in G_{\pi_E}} w^T F \mu$. For $c_q(w)$ to be a local minimum, it suffices to ensure that $\vec{0} \in \nabla_w c_q(w)$, i.e. $\exists \mu' \in \arg\max_{\mu \in G}(w^T F + l)\mu$, $\exists \mu'' \in \arg\max_{\mu \in G_{\pi_E}} w^T F \mu$ such that $F\mu' = F\mu''$. Let $w^* \in \mathbb{R}^k$ be a reward weight vector such that $\pi_E$ is the only optimal policy, and let $\epsilon = w^{*T} F \mu_{\pi_E} - w^{*T} F \mu'$ where $\mu' \in \arg\max_{\mu \in G - \{\mu_{\pi_E}\}} w^{*T} F \mu$. Then $\alpha w^{*T} F \mu_{\pi_E} - \alpha w^{*T} F \mu' = \frac{2|S_e|}{1-\gamma}$, where $\alpha = \frac{2|S_e|}{\epsilon(1-\gamma)}$. Notice that by multiplying $w^*$ by $\alpha > 0$, $\pi_E$ remains the only optimal policy, i.e. $\arg\max_{\mu \in G} \alpha w^{*T} F \mu = \{\mu_{\pi_E}\}$, and $\mu' \in \arg\max_{\mu \in G - \{\mu_{\pi_E}\}} \alpha w^{*T} F \mu$. Therefore, it suffices to show that $\mu_{\pi_E} \in \arg\max_{\mu \in G}(\alpha w^{*T} F + l)\mu$. Indeed,

$$\max_{\mu \in G - \{\mu_{\pi_E}\}} (\alpha w^{*T} F + l)\mu \;\leq\; \max_{\mu \in G - \{\mu_{\pi_E}\}} \alpha w^{*T} F \mu + \max_{\mu \in G - \{\mu_{\pi_E}\}} l\mu \;\leq\; \left( \alpha w^{*T} F \mu_{\pi_E} - \frac{2|S_e|}{1-\gamma} \right) + \frac{|S_e|}{1-\gamma} \;\leq\; \alpha w^{*T} F \mu_{\pi_E} - \frac{|S_e|}{1-\gamma},$$

therefore $\mu_{\pi_E} \in \arg\max_{\mu \in G}(\alpha w^{*T} F + l)\mu$. □

6 Bootstrapping Linear Programming Apprenticeship Learning

As with MMP, the feature frequencies in LPAL can be analytically calculated only when a complete policy $\pi_E$ of the expert is provided. Alternatively, the same error bound $V(\pi) \geq V(\pi_E) + v$ can be guaranteed by setting $v = \min_{i = 0,\ldots,k-1} \min_{\pi' \in \Pi_E} [v_{i,\pi} - v_{i,\pi'}]$, where $\Pi_E$ denotes the set of all policies that select the same actions as the expert in all the states that occurred in the demonstration, assuming $\pi_E$ is deterministic (in LPAL, $\pi_E$ is not necessarily an optimal policy). Instead of enumerating all the policies of the set $\Pi_E$ in the constraints, note that $v = \min_{i=0,\ldots,k-1} [v_{i,\pi} - v_i^E]$, where $v_i^E \stackrel{\text{def}}{=} \max_{\pi' \in \Pi_E} v_{i,\pi'}$ for each feature $i$. Therefore, LPAL can be reformulated as maximizing the margin $\min_{i=0,\ldots,k-1} [v_{i,\pi} - v_i^E]$.
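Each target $v_i^E$ can be obtained by one constrained MDP solve with reward weights $w_i = 1$ and $w_j = 0$ for $j \neq i$. A minimal sketch on a hypothetical 4-state chain MDP (the indicator features, demonstrated set $S_e$, and all numbers are invented), reusing constrained value iteration and the flow equations:

```python
import numpy as np

# Minimal sketch of the per-feature targets v_i^E = max_{pi' in Pi_E} v_{i,pi'}
# used by bootstrapped LPAL: one constrained MDP solve per feature i, with
# reward weights w_i = 1 and w_j = 0 for j != i. The hypothetical 4-state
# chain MDP, indicator features, and demonstrated set S_e are all invented.
S, A, gamma = 4, 2, 0.9
T = np.zeros((A, S, S))                     # action 0 = left, action 1 = right
for s in range(S):
    T[0, s, max(s - 1, 0)] = 1.0
    T[1, s, min(s + 1, S - 1)] = 1.0
alpha = np.ones(S) / S
seen = {0: 1}                               # expert was seen moving right in s = 0

def solve_constrained(reward_sa):
    """Best policy in Pi_E: expert's action on seen states, free elsewhere."""
    V = np.zeros(S)
    for _ in range(300):
        Q = reward_sa + gamma * np.einsum('ast,t->sa', T, V)
        V = np.array([Q[s, seen[s]] if s in seen else Q[s].max()
                      for s in range(S)])
    pol = Q.argmax(axis=1)
    for s, a in seen.items():
        pol[s] = a
    return pol

def state_frequency(pol, i):
    """v_{i,pol} for the indicator feature f_i(s, a) = 1{s == i}, via eq. (1)."""
    P = np.array([T[pol[s], s] for s in range(S)])
    mu_s = np.linalg.solve(np.eye(S) - gamma * P.T, alpha)
    return mu_s[i]

def indicator_reward(i):
    r = np.zeros((S, A)); r[i, :] = 1.0
    return r

vE = np.array([state_frequency(solve_constrained(indicator_reward(i)), i)
               for i in range(S)])

# The clamp matters: without it, an always-left policy could sit in s = 0 forever.
unconstrained = state_frequency(np.zeros(S, dtype=int), 0)
assert vE[0] < unconstrained
assert vE[3] > vE[0]          # feature 3 is unaffected by the clamp at s = 0
```

The final assertion illustrates why $v_i^E$ differs from the unconstrained maximum: restricting to $\Pi_E$ lowers the achievable frequency of features the expert's demonstrated actions move away from.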
The maximal margin is found by solving the following linear program:

$$\begin{aligned}
\max_{v,\, \mu_\pi} \quad & v \\
\text{subject to} \quad & \forall i \in \{0, \ldots, k-1\}: \; v \leq \underbrace{\sum_{s \in S}\sum_{a \in A}\mu_\pi(s,a)\, f_i(s,a)}_{v_{i,\pi}} - \underbrace{\sum_{s \in S}\sum_{a \in A}\mu_{i,\pi'}(s,a)\, f_i(s,a)}_{v_i^E} \\
& \mu_\pi(s) = \alpha(s) + \gamma \sum_{s' \in S}\sum_{a \in A}\mu_\pi(s',a)\, T^a(s', s), \quad \sum_{a \in A}\mu_\pi(s,a) = \mu_\pi(s), \quad \mu_\pi(s,a) \geq 0
\end{aligned}$$

where the values $v_i^E$ are found by solving $k$ separate optimization problems ($k$ is the number of features). For each feature $i$, $v_i^E$ is the value of the optimal policy in the set $\Pi_E$ under the reward weights $w$ defined as $w_i = 1$ and $w_j = 0$ for all $j \neq i$.

7 Experimental Results

To validate our approach, we experimented on two simulated navigation problems: a gridworld and two racetrack domains, taken from (Boularias & Chaib-draa, 2010). While these are not meant to be challenging tasks, they allow us to compare our approach with other methods of apprenticeship learning, namely MMP and LPAL with Monte Carlo estimation, and a simple classification algorithm where the action in a given state is selected by performing a majority vote over the k-nearest neighbor states where the expert's action is known. For each state, the distance k is gradually increased until at least one known state is encountered. The distance between two states corresponds to the shortest path between them with a positive probability.

7.1 Gridworld

We consider 16 × 16 and 24 × 24 gridworlds. The state corresponds to the location of the agent on the grid. The agent has four actions for moving in one of the four directions of the compass. The actions succeed with probability 0.9. The gridworld is divided into non-overlapping regions, and the reward varies depending on the region in which the agent is located. For each region $i$, there is a feature $f_i$, where $f_i(s)$ indicates whether state $s$ is in region $i$. The expert's policy $\pi_E$ corresponds to the optimal deterministic policy found by value iteration.
In all our experiments on gridworlds, we used only 10 demonstration trajectories, which is a very small number compared to other methods (e.g., Neu & Szepesvári (2007)). The duration of the trajectories is 50 time-steps.

Table 1: Gridworld average reward results

Size      Features   Expert   k-NN     MMP + MC   MMP + Bootstrap   LPAL + MC   LPAL + Bootstrap
16 × 16       16     0.4672   0.4635    0.0000        0.4678          0.0380         0.1572
16 × 16       64     0.5281   0.5198    0.0000        0.5252          0.0255         0.4351
16 × 16      256     0.3988   0.4062    0.0537        0.3828          0.0555         0.1706
24 × 24       64     0.5210   0.6334    0.0000        0.5217          0.0149         0.2767
24 × 24      144     0.5916   0.5876    0.0122        0.5252          0.0400         0.4432
24 × 24      576     0.3102   0.2814    0.0974        0.0514          0.0439         0.0349

Table 1 shows the average reward per step of the learned policy, averaged over $10^3$ independent trials of the same duration as the demonstration trajectories. Our first observation is that bootstrapped MMP learned policies just as good as the expert's policy, while both MMP and LPAL using the Monte Carlo (MC) estimator remarkably failed to collect any reward. This is due to the fact that we used a very small number of demonstrations (10 × 50 time-steps) compared to the size of these problems. Note that this problem is not specific to MMP or LPAL; any other algorithm using the same approximation method would produce similar results. The second observation is that the values of the policies learned by bootstrapped LPAL were between the values of LPAL with Monte Carlo and the optimal ones. In fact, the policy learned by bootstrapped LPAL is one that minimizes the difference between the expected frequency of a feature under this policy and the maximal one among all the policies that resemble the expert's policy. Therefore, the learned policy maximizes the frequency of a feature that is not necessarily a good one (i.e., one with a high reward weight). We also notice that the performance of all the tested algorithms was low when 576 features were used.
In this case, every feature takes a non-null weight in one state only. Therefore, the demonstrations did not provide enough information about the rewards of the states that were not visited by the expert. Finally, we remark that k-NN performed on par with the expert in this experiment. In fact, since there are no obstacles on the grid, neighboring states often have similar optimal actions.

7.2 Racetrack

We implemented a simplified car race simulator; a detailed description of the corresponding racetracks is provided in (Boularias & Chaib-draa, 2010). The states correspond to the position of the car on the racetrack and its velocity. For racetrack (1), the car always starts from the same initial position, and the duration of each demonstration trajectory is 20 time-steps. For racetrack (2), the car starts at a random position, and the length of each trajectory is 40 time-steps. A high reward is given for reaching the finish line, a low cost is associated with each movement, and a high cost is associated with driving off-road (or hitting an obstacle). Figure 2 (a-f) shows the average reward per step of the learned policies, the average proportion of off-road steps, and the average number of steps before reaching the finish line, as a function of the number of trajectories in the demonstration. We first notice that k-NN performed poorly; this is principally caused by the effect of driving off-road on both the cumulative reward and the velocity of the car. In this context, neighboring states do not necessarily share the same optimal action. Contrary to the gridworld experiments, MMP with Monte Carlo achieved good performance on racetrack (1). In fact, by fixing the initial state, the demonstration covers most of the reachable states, and the feature frequencies are accurately estimated from the demonstration. On racetrack (2), however, MMP with MC was unable to learn a good policy because all the states were reachable from the initial distribution.
Similarly, LPAL with both MC and bootstrapping failed to achieve good results on racetracks (1) and (2). This is due to the fact that LPAL tries to maximize the frequency of features that are not necessarily associated with a high reward, such as hitting obstacles. Finally, we notice the nearly optimal performance of the bootstrapped MMP on both racetracks (1) and (2).

[Figure 2: Racetrack results. Panels: (a) average reward in racetrack 1; (b) average number of steps in racetrack 1; (c) average number of off-roads in racetrack 1; (d) average reward in racetrack 2; (e) average number of steps in racetrack 2; (f) average number of off-roads in racetrack 2. Each panel plots the measure against the number of trajectories in the demonstration, for the expert, MMP + MC, MMP + Bootstrapping, LPAL + MC, LPAL + Bootstrapping, and k-NN.]

8 Conclusion and Future Work

The main question of apprenticeship learning is how to generalize the expert's policy to states that have not been encountered during the demonstration. Inverse Reinforcement Learning (IRL) provides an efficient answer, which consists in first learning a reward function that explains the observed behavior, and then using it for the generalization. A strong assumption considered in IRL-based algorithms is that the reward is a linear function of state-action features, and that the frequencies of these features can be estimated from a few demonstrations even if these demonstrations cover only a small part of the state space. In this paper, we showed that this assumption does not hold in highly stochastic systems. We also showed that this problem can be solved by modifying the cost function so that the value of the learned policy is compared to the exact value of a generalized expert's policy. We also provided theoretical insights on the modified cost function, showing that it admits the expert's true reward as a locally optimal solution, under mild conditions. The empirical analysis confirmed the superior performance of bootstrapped MMP in particular. These promising results push us to further investigate the theoretical properties of the modified cost function. As future work, we mainly aim to compare this approach with the one proposed by Ratliff et al. (2007), where the base features are boosted by using a classifier.

References

Abbeel, Pieter and Ng, Andrew Y. Apprenticeship Learning via Inverse Reinforcement Learning. In Proceedings of the Twenty-first International Conference on Machine Learning (ICML'04), pp. 1–8, 2004.

Boularias, Abdeslam and Chaib-draa, Brahim. Apprenticeship Learning via Soft Local Homomorphisms. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA'10), pp. 2971–2976, 2010.

Neu, Gergely and Szepesvári, Csaba. Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods. In Conference on Uncertainty in Artificial Intelligence (UAI'07), pp. 295–302, 2007.

Ng, Andrew and Russell, Stuart. Algorithms for Inverse Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML'00), pp.
663–670, 2000.

Ramachandran, Deepak and Amir, Eyal. Bayesian Inverse Reinforcement Learning. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07), pp. 2586–2591, 2007.

Ratliff, N., Bagnell, J., and Zinkevich, M. Maximum Margin Planning. In Proceedings of the Twenty-third International Conference on Machine Learning (ICML'06), pp. 729–736, 2006.

Ratliff, Nathan, Bradley, David, Bagnell, J. Andrew, and Chestnutt, Joel. Boosting Structured Prediction for Imitation Learning. In Advances in Neural Information Processing Systems 19 (NIPS'07), pp. 1153–1160, 2007.

Syed, Umar and Schapire, Robert. A Game-Theoretic Approach to Apprenticeship Learning. In Advances in Neural Information Processing Systems 20 (NIPS'08), pp. 1449–1456, 2008.

Syed, Umar, Bowling, Michael, and Schapire, Robert E. Apprenticeship Learning using Linear Programming. In Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML'08), pp. 1032–1039, 2008.
Co-regularization Based Semi-supervised Domain Adaptation

Hal Daumé III, Department of Computer Science, University of Maryland CP, MD, USA, hal@umiacs.umd.edu

Abhishek Kumar, Department of Computer Science, University of Maryland CP, MD, USA, abhishek@umiacs.umd.edu

Avishek Saha, School of Computing, University of Utah, UT, USA, avishek@cs.utah.edu

Abstract

This paper presents a co-regularization based approach to semi-supervised domain adaptation. Our proposed approach (EA++) builds on the notion of augmented space (introduced in EASYADAPT (EA) [1]) and harnesses unlabeled data in the target domain to further assist the transfer of information from source to target. This semi-supervised approach to domain adaptation is extremely simple to implement and can be applied as a pre-processing step to any supervised learner. Our theoretical analysis (in terms of Rademacher complexity) of EA and EA++ shows that the hypothesis class of EA++ has lower complexity (compared to EA) and hence results in tighter generalization bounds. Experimental results on sentiment analysis tasks reinforce our theoretical findings and demonstrate the efficacy of the proposed method when compared to EA as well as a few other representative baseline approaches.

1 Introduction

A domain adaptation approach for NLP tasks, termed EASYADAPT (EA), augments the source domain feature space using features from labeled data in the target domain [1]. EA is simple, easy to extend, implementable as a preprocessing step, and, most importantly, agnostic of the underlying classifier. However, EA requires labeled data in both source and target, and hence applies to fully supervised domain adaptation settings only. In this paper,¹ we propose a semi-supervised² approach to leverage unlabeled data for EASYADAPT (which we call EA++) and theoretically, as well as empirically, demonstrate its superior performance over EA.
There exists prior work on supervised domain adaptation (and multi-task learning) that can be related to EASYADAPT. An algorithm for multi-task learning using shared parameters was proposed for multi-task regularization [3], wherein each task parameter is represented as the sum of a mean parameter (that stays the same for all tasks) and its deviation from this mean. SVMs were used as the base classifiers and the algorithm was formulated in the standard SVM dual optimization setting. Subsequently, this framework was extended to the online multi-domain setting in [4]. Prior work on semi-supervised approaches to domain adaptation also exists in the literature. Extraction of specific features from the available dataset was proposed [5, 6] to facilitate the task of domain adaptation. Co-adaptation [7], a combination of co-training and domain adaptation, can also be considered a semi-supervised approach to domain adaptation. A semi-supervised EM algorithm for domain adaptation was proposed in [8]. Similar to graph-based semi-supervised approaches, a label propagation method was proposed [9] to facilitate domain adaptation. Domain Adaptation Machine (DAM) [10] is a semi-supervised extension of SVMs for domain adaptation and presents extensive empirical results. Nevertheless, in almost all of the above cases, the proposed methods either use specifics of the datasets or are customized for some particular base classifier, and hence it is not clear how they can be extended to other existing classifiers.
¹ A preliminary version [2] of this work appeared in the DANLP workshop at ACL 2010.
² We define supervised domain adaptation as having labeled data in both source and target, and unsupervised domain adaptation as having labeled data in only the source. In semi-supervised domain adaptation, we also have access to both labeled and unlabeled data in the target.
As mentioned earlier, EA is remarkably general in the sense that it can be used as a pre-processing step in conjunction with any base classifier. However, one of the prime limitations of EA is its inability to leverage unlabeled data. Given its simplicity and generality, it would be interesting to extend EA to semi-supervised settings. In this paper, we propose EA++, a co-regularization based semi-supervised extension to EA. We also present Rademacher complexity based generalization bounds for EA and EA++. Our generalization bounds also apply to the approach proposed in [3] in the domain adaptation setting, where we are only concerned with the error on the target domain. The closest to our work is a recent paper [11] that theoretically analyzes EASYADAPT. That paper investigates the necessity of combining supervised and unsupervised domain adaptation (which the authors refer to as labeled and unlabeled adaptation frameworks, respectively) and analyzes the combination using mistake bounds (which are limited to perceptron-based online scenarios). In addition, their work points out that EASYADAPT is limited to only supervised domain adaptation. In contrast, our work extends EASYADAPT to semi-supervised settings and presents generalization-bound-based theoretical analysis which specifically demonstrates why EA++ is better than EA. 2 Background In this section, we introduce notation and provide a brief overview of EASYADAPT [1]. 2.1 Problem Setup and Notations Let X ⊂ R^d denote the instance space and Y = {−1, +1} denote the label space. Let D_s(x, y) be the source distribution and D_t(x, y) be the target distribution. We have a set of source labeled examples L_s (∼ D_s(x, y)) and a set of target labeled examples L_t (∼ D_t(x, y)), where |L_s| = l_s ≫ |L_t| = l_t. We also have target unlabeled data denoted by U_t (∼ D_t(x)), where |U_t| = u_t. Our goal is to learn a hypothesis h : X → Y having low expected error with respect to the target domain.
In this paper, we consider linear hypotheses only. However, the proposed techniques extend to non-linear hypotheses, as mentioned in [1]. Source and target empirical errors for hypothesis h are denoted by ε̂_s(h, f_s) and ε̂_t(h, f_t) respectively, where f_s and f_t are the true source and target labeling functions. Similarly, the corresponding expected errors are denoted by ε_s(h, f_s) and ε_t(h, f_t). We will use the shorthand notations ε̂_s, ε̂_t, ε_s and ε_t wherever the intent is clear from context. 2.2 EasyAdapt (EA) Let R^d denote the original space. EA operates in an augmented space denoted by X̆ ⊂ R^{3d} (for a single pair of source and target domains). For k domains, the augmented space blows up to R^{(k+1)d}. The augmented feature maps Φ_s, Φ_t : X → X̆ for source and target domains are defined as Φ_s(x) = ⟨x, x, 0⟩ and Φ_t(x) = ⟨x, 0, x⟩, where x and 0 are vectors in R^d, and 0 denotes a zero vector of dimension d. The first d-dimensional segment corresponds to commonality between source and target, the second d-dimensional segment corresponds to the source domain, while the last segment corresponds to the target domain. Source and target domain examples are transformed using these feature maps, and the augmented features so constructed are passed on to the underlying supervised classifier. One of the most appealing properties of EASYADAPT is that it is agnostic of the underlying supervised classifier being used to learn in the augmented space. Almost any standard supervised learning approach (e.g., SVMs, perceptrons) can be used to learn a linear hypothesis h̆ ∈ R^{3d} in the augmented space. Let h̆ = ⟨g_c, g_s, g_t⟩, where each of g_c, g_s, g_t is of dimension d, and they represent the common, source-specific and target-specific components of h̆, respectively. During prediction on target data, the incoming target sample x is transformed to obtain Φ_t(x) and h̆ is applied to this transformed sample. This is equivalent to applying (g_c + g_t) to x.
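As a concrete illustration, the two feature maps and the prediction-time identity (applying h̆ to Φ_t(x) is the same as applying (g_c + g_t) to x) can be sketched in a few lines of numpy; the function and variable names below are our own, not from the paper:

```python
import numpy as np

def phi_s(x):
    """EA source map: <x, x, 0> in R^{3d}."""
    return np.concatenate([x, x, np.zeros_like(x)])

def phi_t(x):
    """EA target map: <x, 0, x> in R^{3d}."""
    return np.concatenate([x, np.zeros_like(x), x])

# Any linear hypothesis in the augmented space splits as h = <g_c, g_s, g_t>.
rng = np.random.default_rng(0)
d = 4
g_c, g_s, g_t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
h = np.concatenate([g_c, g_s, g_t])

x = rng.normal(size=d)
# Applying h to an augmented target point equals applying (g_c + g_t) to x.
assert np.allclose(h @ phi_t(x), (g_c + g_t) @ x)
# Likewise, source predictions reduce to (g_c + g_s) . x.
assert np.allclose(h @ phi_s(x), (g_c + g_s) @ x)
```

The same sketch extends to k domains by concatenating k + 1 segments instead of three.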
An intuitive insight into why this simple algorithm works so well in practice and outperforms most state-of-the-art algorithms is given in [1]. Briefly, EA can be thought of as simultaneously training two hypotheses: h_s = (g_c + g_s) for the source domain and h_t = (g_c + g_t) for the target domain. The commonality between the domains is represented by g_c, whereas g_s and g_t capture the idiosyncrasies of the source and target domain, respectively. 3 EA++: EA using unlabeled data As discussed in the previous section, the EASYADAPT algorithm is attractive because it performs very well empirically and can be used in conjunction with any underlying supervised linear classifier. One drawback of EASYADAPT is its inability to leverage unlabeled target data, which is usually available in large quantities in most practical scenarios. In this section, we extend EA to semi-supervised settings while maintaining the desirable classifier-agnostic property. 3.1 Motivation In the multi-view approach to semi-supervised learning [12], different hypotheses are learned using different views of the dataset. Thereafter, unlabeled data is utilized to co-regularize these learned hypotheses by making them agree on unlabeled samples. In domain adaptation, the source and target data come from two different distributions. However, if the source and target domains are reasonably close, we can employ a similar form of regularization using unlabeled data. A prior co-regularization based idea to harness unlabeled data in domain adaptation tasks demonstrated improved empirical results [10]. However, that technique applies only to the particular base classifier it considers and hence does not extend to other supervised classifiers. 3.2 EA++: EASYADAPT with unlabeled data In our proposed semi-supervised approach, the source and target hypotheses are made to agree on unlabeled data. We refer to this algorithm as EA++. Recall that EASYADAPT learns a linear hypothesis h̆ ∈ R^{3d} in the augmented space.
The hypothesis h̆ contains common, source-specific and target-specific sub-hypotheses and is expressed as h̆ = ⟨g_c, g_s, g_t⟩. In the original space (ref. Section 2.2), this is equivalent to learning a source-specific hypothesis h_s = (g_c + g_s) and a target-specific hypothesis h_t = (g_c + g_t). In EA++, we want the source hypothesis h_s and the target hypothesis h_t to agree on the unlabeled data. For an unlabeled target sample x_i ∈ U_t ⊂ R^d, the goal of EA++ is to make the predictions of h_s and h_t on x_i agree with each other. Formally, it aims to achieve the following condition:
h_s · x_i ≈ h_t · x_i ⟺ (g_c + g_s) · x_i ≈ (g_c + g_t) · x_i ⟺ (g_s − g_t) · x_i ≈ 0 ⟺ ⟨g_c, g_s, g_t⟩ · ⟨0, x_i, −x_i⟩ ≈ 0. (3.1)
The above expression leads to the definition of a new feature map Φ_u : X → X̆ for unlabeled data, given by Φ_u(x) = ⟨0, x, −x⟩. Every unlabeled target sample is transformed using the map Φ_u(·). The augmented feature space that results from the application of the three feature maps Φ_s(·), Φ_t(·) and Φ_u(·) on source labeled samples, target labeled samples and target unlabeled samples is summarized in Figure 1(a). As shown in Eq. 3.1, during the training phase, EA++ assigns a predicted value close to 0 for each unlabeled sample. However, it is worth noting that during the test phase, EA++ predicts labels from two classes: +1 and −1. This warrants further exposition of the implementation specifics, which is deferred until the next subsection.
[Figure 1: (a) Diagrammatic representation of feature augmentation in EA and EA++; (b) loss functions for class +1, class −1 and their summation.]
3.3 Implementation In this section, we present implementation-specific details of EA++. For concreteness, we consider SVM as the base supervised learner. However, these details hold for other supervised linear classifiers. In the dual form of the SVM optimization function, the labels are multiplied with the features.
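The unlabeled-data map and the chain of equivalences in Eq. 3.1 can be checked numerically; this is a minimal numpy sketch with hypothetical names:

```python
import numpy as np

def phi_u(x):
    """EA++ map for unlabeled target data: <0, x, -x>."""
    return np.concatenate([np.zeros_like(x), x, -x])

rng = np.random.default_rng(1)
d = 5
g_c, g_s, g_t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
h = np.concatenate([g_c, g_s, g_t])
x = rng.normal(size=d)

# <g_c, g_s, g_t> . <0, x, -x> = (g_s - g_t) . x, so driving this score
# to 0 forces h_s and h_t to agree on x (the condition of Eq. 3.1).
assert np.allclose(h @ phi_u(x), (g_s - g_t) @ x)
# Equivalently, it is the gap between the two domain-specific predictions.
assert np.allclose((g_c + g_s) @ x - (g_c + g_t) @ x, h @ phi_u(x))
```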
Since we want the predicted labels for unlabeled data to be 0 (according to Eq. 3.1), multiplication by zero would make the unlabeled samples ineffective in the dual form of the cost function. To avoid this, we create as many copies of Φ_u(x) as there are labels and assign each label to one copy of Φ_u(x). For the case of binary classification, we create two copies of every augmented unlabeled sample, and assign the +1 label to one copy and −1 to the other. The learner attempts to balance the loss of the two copies, and thereby tries to make the prediction on each unlabeled sample equal to 0. Figure 1(b) shows the curves of the hinge loss for class +1, class −1 and their summation. The effective loss for each unlabeled sample is similar to the sum of the losses for the +1 and −1 classes (shown as curve (c) in Figure 1(b)). 4 Generalization Bounds In this section, we present Rademacher complexity based generalization bounds for EA and EA++. First, we define hypothesis classes for EA and EA++ using an alternate formulation. Second, we present a theorem (Theorem 4.1) which relates empirical and expected error for the general case and hence applies to both the source and target domains. Third, we prove Theorem 4.2, which relates the expected target error to the expected source error. Fourth, we present Theorem 4.3, which combines Theorem 4.1 and Theorem 4.2 so as to relate the expected target error to the empirical errors in source and target (which is the main goal of the generalization bounds presented in this paper). Finally, all that remains is to bound the Rademacher complexity of the various hypothesis classes. 4.1 Define Hypothesis Classes for EA and EA++ Our goal now is to define the hypothesis classes for EA and EA++ so as to make the theoretical analysis feasible. Both EA and EA++ train hypotheses in the augmented space X̆ ⊂ R^{3d}.
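Before moving on, the two-copies construction of Section 3.3 can be sanity-checked numerically. The sketch below (plain numpy, illustrative only) sums the hinge losses of the +1 copy and the −1 copy and confirms that the combined loss is minimized when the prediction on an unlabeled sample stays near 0:

```python
import numpy as np

def hinge(y, score):
    """Standard hinge loss max(0, 1 - y * score)."""
    return np.maximum(0.0, 1.0 - y * score)

scores = np.linspace(-3, 3, 601)
# Effective loss of one unlabeled sample: its +1 copy plus its -1 copy.
eff = hinge(+1, scores) + hinge(-1, scores)

# The summed loss is minimal (and flat) on [-1, 1] and grows linearly
# outside, so the learner is pushed to keep |h . phi_u(x)| small.
assert np.isclose(eff.min(), 2.0)
assert eff[0] > eff[300]    # score -3 is penalized more than score 0
assert eff[-1] > eff[300]   # score +3 is penalized more than score 0
```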
The augmented hypothesis h̆ is trained using data from both domains, and the three d-dimensional sub-hypotheses ⟨g_c, g_s, g_t⟩ are treated differently for source and target data. We use an alternate formulation of the hypothesis classes and work in the original space X ⊂ R^d. As discussed briefly in Section 2.2, EA can be thought of as simultaneously training two hypotheses h_s = (g_c + g_s) and h_t = (g_c + g_t) for the source and target domains, respectively. We consider the case when the underlying supervised classifier in the augmented space uses a squared L2-norm regularizer of the form ‖h̆‖² (as used in SVM). This is equivalent to imposing the regularizer (‖g_c‖² + ‖g_s‖² + ‖g_t‖²) = (‖g_c‖² + ‖h_s − g_c‖² + ‖h_t − g_c‖²). Differentiating this regularizer w.r.t. g_c gives g_c = (h_s + h_t)/3 at the minimum, and the regularizer reduces to (1/3)(‖h_s‖² + ‖h_t‖² + ‖h_s − h_t‖²). Thus, EA can be thought of as minimizing the sum of the empirical source error on h_s, the empirical target error on h_t, and this regularizer. The cost function Q_EA(h_1, h_2) can now be written as:
Q_EA(h_1, h_2) = α ε̂_s(h_1) + (1 − α) ε̂_t(h_2) + λ_1‖h_1‖² + λ_2‖h_2‖² + λ‖h_1 − h_2‖², with (h_s, h_t) = argmin_{h_1,h_2} Q_EA. (4.1)
The EA algorithm minimizes this cost function over h_1 and h_2 jointly to obtain h_s and h_t. The EA++ algorithm uses target unlabeled data, and encourages h_s and h_t to agree on unlabeled samples (Eq. 3.1). This can be thought of as adding a regularizer of the form Σ_{i∈U_t}(h_s(x_i) − h_t(x_i))² to the cost function. The cost function for EA++ (denoted Q_++(h_1, h_2)) can then be written as:
Q_++(h_1, h_2) = α ε̂_s(h_1) + (1 − α) ε̂_t(h_2) + λ_1‖h_1‖² + λ_2‖h_2‖² + λ‖h_1 − h_2‖² + λ_u Σ_{i∈U_t} (h_1(x_i) − h_2(x_i))². (4.2)
Both EA and EA++ give equal weights to the source and target empirical errors, so α turns out to be 0.5. We use hyperparameters λ_1, λ_2, λ, and λ_u in the cost functions to make them more general.
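The closed-form minimizer g_c = (h_s + h_t)/3 and the reduced regularizer (1/3)(‖h_s‖² + ‖h_t‖² + ‖h_s − h_t‖²) are easy to verify numerically; a small sketch on random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
h_s, h_t = rng.normal(size=d), rng.normal(size=d)

def reg(g_c):
    """||g_c||^2 + ||h_s - g_c||^2 + ||h_t - g_c||^2."""
    return g_c @ g_c + (h_s - g_c) @ (h_s - g_c) + (h_t - g_c) @ (h_t - g_c)

g_star = (h_s + h_t) / 3.0
# At the minimizer, the value matches (1/3)(||h_s||^2 + ||h_t||^2 + ||h_s - h_t||^2).
reduced = (h_s @ h_s + h_t @ h_t + (h_s - h_t) @ (h_s - h_t)) / 3.0
assert np.isclose(reg(g_star), reduced)
# g_star really is the minimizer: any perturbation increases the value.
for _ in range(5):
    assert reg(g_star + 0.1 * rng.normal(size=d)) > reg(g_star)
```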
However, as explained earlier, EA implicitly sets all these hyperparameters (λ_1, λ_2, λ) to the same value (which will be 0.5 · (1/3) = 1/6 in our case, since the weights in the entire cost function are multiplied by α = 0.5). The hyperparameter for unlabeled data (λ_u) is 0.5 in EA++. We assume that the loss L(y, h · x) is bounded by 1 for the zero hypothesis h = 0. This is true for many popular loss functions, including the square loss and the hinge loss when y ∈ {−1, +1}. One possible way [13] of defining the hypothesis classes is to substitute the trivial hypotheses h_1 = h_2 = 0 in both cost functions, which makes all regularizers and co-regularizers equal to zero and thus bounds the cost functions Q_EA and Q_++. This gives Q_EA(0, 0) ≤ 1 and Q_++(0, 0) ≤ 1, since ε̂_s(0), ε̂_t(0) ≤ 1. Without loss of generality, we also assume that the final source and target hypotheses can only reduce the cost function as compared to the zero hypotheses. Hence, the final hypothesis pair (h_s, h_t) that minimizes the cost functions is contained in the following paired hypothesis classes for EA and EA++:
H := {(h_1, h_2) : λ_1‖h_1‖² + λ_2‖h_2‖² + λ‖h_1 − h_2‖² ≤ 1}
H_++ := {(h_1, h_2) : λ_1‖h_1‖² + λ_2‖h_2‖² + λ‖h_1 − h_2‖² + λ_u Σ_{i∈U_t} (h_1(x_i) − h_2(x_i))² ≤ 1} (4.3)
The source hypothesis class for EA is the set of all h_1 such that the pair (h_1, h_2) is in H. Similarly, the target hypothesis class for EA is the set of all h_2 such that the pair (h_1, h_2) is in H. Consequently, the source and target hypothesis classes for EA can be defined as:
J^s_EA := {h_1 : X → R, (h_1, h_2) ∈ H} and J^t_EA := {h_2 : X → R, (h_1, h_2) ∈ H} (4.4)
Similarly, the source and target hypothesis classes for EA++ are defined as:
J^s_++ := {h_1 : X → R, (h_1, h_2) ∈ H_++} and J^t_++ := {h_2 : X → R, (h_1, h_2) ∈ H_++} (4.5)
Furthermore, we assume that our hypothesis class is comprised of real-valued functions over an RKHS with reproducing kernel k(·, ·), k : X × X → R.
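Since the co-regularizer in Eq. 4.3 is a sum of squares, Q_++ dominates Q_EA pointwise, so H_++ is contained in H. The sketch below (illustrative hyperparameter values and sizes, not the paper's) checks this containment on random hypothesis pairs:

```python
import numpy as np

rng = np.random.default_rng(3)
d, u = 4, 10
lam1, lam2, lam, lam_u = 0.5, 0.5, 0.5, 0.5
Xu = rng.normal(size=(u, d))  # unlabeled target samples (hypothetical)

def q_ea(h1, h2):
    """Constraint functional defining H (Eq. 4.3, first line)."""
    return lam1 * h1 @ h1 + lam2 * h2 @ h2 + lam * (h1 - h2) @ (h1 - h2)

def q_pp(h1, h2):
    """Constraint functional defining H_++ (Eq. 4.3, second line)."""
    return q_ea(h1, h2) + lam_u * np.sum((Xu @ (h1 - h2)) ** 2)

for _ in range(100):
    h1, h2 = rng.normal(size=d), rng.normal(size=d)
    # The co-regularizer is nonnegative, so Q_++ >= Q_EA pointwise...
    assert q_pp(h1, h2) >= q_ea(h1, h2)
    # ...hence membership in H_++ implies membership in H.
    if q_pp(h1, h2) <= 1.0:
        assert q_ea(h1, h2) <= 1.0
```

This containment is exactly why the EA++ classes in Section 4.5 come out no more complex than the EA ones.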
Let us define the kernel matrix and partition it corresponding to source labeled, target labeled and target unlabeled data as shown below:
K = [ A_{s×s}  C_{s×t}  D_{s×u} ;  C′_{t×s}  B_{t×t}  E_{t×u} ;  D′_{u×s}  E′_{u×t}  F_{u×u} ], (4.6)
where 's', 't' and 'u' indicate terms corresponding to source labeled, target labeled and target unlabeled data, respectively. 4.2 Relate empirical and expected error (for both source and target) Having defined the hypothesis classes, we now proceed to obtain generalization bounds for EA and EA++. We have the following standard generalization bound based on the Rademacher complexity of a hypothesis class [13]. Theorem 4.1. Suppose the uniform Lipschitz condition holds for L : Y² → [0, 1], i.e., |L(ŷ_1, y) − L(ŷ_2, y)| ≤ M|ŷ_1 − ŷ_2|, where y, ŷ_1, ŷ_2 ∈ Y and ŷ_1 ≠ ŷ_2. Then for any δ ∈ (0, 1) and for m samples (X_1, Y_1), (X_2, Y_2), …, (X_m, Y_m) drawn i.i.d. from distribution D, we have with probability at least (1 − δ) over random draws of samples,
ε(f) ≤ ε̂(f) + 2M R̂_m(F) + (1/√m)(2 + 3√(ln(2/δ)/2)),
where f ∈ F is the class of functions mapping X → Y, and R̂_m(F) is the empirical Rademacher complexity of F, defined as R̂_m(F) := E_σ[sup_{f∈F} |(2/m) Σ^m_{i=1} σ_i f(X_i)|]. If we can bound the complexity of the hypothesis classes J^s_EA and J^t_EA, we will have a uniform convergence bound on the difference of expected and empirical errors (|ε_t(h) − ε̂_t(h)| and |ε_s(h) − ε̂_s(h)|) using Theorem 4.1. However, in the domain adaptation setting, we are also interested in bounds that relate the expected target error to the total empirical error on source and target samples. The following sections aim to achieve this goal. 4.3 Relate source expected error and target expected error The following theorem provides a bound on the difference of the expected target error and the expected source error. The bound is in terms of η_s := ε_s(f_s, f_t), ν_s := ε_s(h*_t, f_t) and ν_t := ε_t(h*_t, f_t), where f_s and f_t are the source and target labeling functions, and h*_t is the optimal target hypothesis in the target hypothesis class.
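The partition of Eq. 4.6 amounts to slicing the Gram matrix of the stacked source labeled, target labeled, and target unlabeled samples; a sketch with a linear kernel and made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
d, ls, lt, ut = 3, 5, 4, 6
Xs = rng.normal(size=(ls, d))  # source labeled
Xt = rng.normal(size=(lt, d))  # target labeled
Xu = rng.normal(size=(ut, d))  # target unlabeled

X = np.vstack([Xs, Xt, Xu])
K = X @ X.T  # linear kernel here; any PSD kernel gives the same layout

# Slice out the blocks named in Eq. 4.6.
A = K[:ls, :ls]                   # source vs source
B = K[ls:ls + lt, ls:ls + lt]     # target labeled vs target labeled
E = K[ls:ls + lt, ls + lt:]       # target labeled vs target unlabeled
F = K[ls + lt:, ls + lt:]         # target unlabeled vs target unlabeled

assert A.shape == (ls, ls) and B.shape == (lt, lt)
assert E.shape == (lt, ut) and F.shape == (ut, ut)
assert np.allclose(K, K.T)        # a Gram matrix is symmetric
```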
It also uses the d_{H∆H}(D_s, D_t) distance [14], which is defined as sup_{h_1,h_2∈H} 2|ε_s(h_1, h_2) − ε_t(h_1, h_2)|. The d_{H∆H} distance measures the distance between two distributions using a hypothesis-class-specific distance measure. If the two domains are close to each other, η_s and d_{H∆H}(D_s, D_t) are expected to be small. On the contrary, if the domains are far apart, these terms will be big and the use of extra source samples may not help in learning a better target hypothesis. These two terms also represent the notion of adaptability in our case. Theorem 4.2. Suppose the loss function is M-Lipschitz as defined in Theorem 4.1, and obeys the triangle inequality. For any two source and target hypotheses h_s, h_t (which belong to different hypothesis classes), we have
ε_t(h_t, f_t) − ε_s(h_s, f_s) ≤ M‖h_t − h_s‖ E_s[√k(x, x)] + (1/2) d_{H_t∆H_t}(D_s, D_t) + η_s + ν_s + ν_t,
where H_t is the target hypothesis class, and k(·, ·) is the reproducing kernel for the RKHS. η_s, ν_s, and ν_t are defined as above. Proof. Please see Appendix A in the supplement. 4.4 Relate target expected error with source and target empirical errors EA and EA++ learn source and target hypotheses jointly, so the empirical error in one domain is expected to have an effect on the generalization error in the other domain. In this section, we aim to bound the target expected error in terms of the source and target empirical errors. The following theorem achieves this goal. Theorem 4.3. Under the assumptions and definitions used in Theorem 4.1 and Theorem 4.2, with probability at least 1 − δ we have
ε_t(h_t, f_t) ≤ (1/2)(ε̂_s(h_s, f_s) + ε̂_t(h_t, f_t)) + (1/2)(2M R̂_m(H_s) + 2M R̂_m(H_t)) + (1/2)(1/√l_s + 1/√l_t)(2 + 3√(ln(2/δ)/2)) + (1/2) M‖h_t − h_s‖ E_s[√k(x, x)] + (1/4) d_{H_t∆H_t}(D_s, D_t) + (1/2)(η_s + ν_s + ν_t)
for any h_s and h_t. H_s and H_t are the source hypothesis class and the target hypothesis class, respectively. Proof. We first use Theorem 4.1 to bound (ε_t(h_t) − ε̂_t(h_t)) and (ε_s(h_s) − ε̂_s(h_s)).
The theorem then follows directly by combining these two bounds with Theorem 4.2. This bound provides a better understanding of how the target expected error is governed by both source and target empirical errors and by the hypothesis class complexities. This behavior is expected, since both EA and EA++ learn source and target hypotheses jointly. We also note that the bound in Theorem 4.3 depends on ‖h_s − h_t‖, which might give the impression that the best possible thing to do is to make the source and target hypotheses equal. However, due to the joint learning of source and target hypotheses (by optimizing the cost function of Eq. 4.1), making the source and target hypotheses close will increase the source empirical error, thus loosening the bound of Theorem 4.3. Noticing that ‖h_s − h_t‖² ≤ 1/λ for both EA and EA++, the bound can be made independent of ‖h_s − h_t‖, although at a sacrifice in tightness. We note that Theorem 4.1 can also be used to bound the target generalization error of EA and EA++ in terms of only the target empirical error. However, if the number of labeled target samples is extremely low, this bound can be loose due to the inverse dependency on the number of target samples. Theorem 4.3 bounds the target expected error using the averages of empirical errors, Rademacher complexities, and sample-dependent terms. If the domains are reasonably close and the number of labeled source samples is much higher than the number of target samples, this can provide a tighter bound compared to Theorem 4.1. Finally, we need the Rademacher complexities of the source and target hypothesis classes (for both EA and EA++) to be able to use Theorem 4.3; these are provided in the next sections. 4.5 Bound the Complexity of EA and EA++ Hypothesis Classes The following theorems bound the Rademacher complexity of the target hypothesis classes for EA and EA++. 4.5.1 EASYADAPT (EA) Theorem 4.4. For the hypothesis class J^t_EA defined in Eq.
4.4, we have
(1/(4√2)) · (2C^t_EA / l_t) ≤ R̂_m(J^t_EA) ≤ 2C^t_EA / l_t,
where R̂_m(J^t_EA) = E_σ[sup_{h_2∈J^t_EA} |Σ_i σ_i h_2(x_i)|], (C^t_EA)² = tr(B) / (λ_2 + (1/λ_1 + 1/λ)^{−1}), and B is the kernel sub-matrix defined as in Eq. 4.6. Proof. Please see Appendix B in the supplement. The complexity of the target class decreases with an increase in the values of the hyperparameters. It decreases more rapidly with changes in λ_2 compared to λ and λ_1, which is expected, since λ_2 is the hyperparameter directly influencing the target hypothesis. The kernel block sub-matrix corresponding to the source samples does not appear in the bound. This result, in conjunction with Theorem 4.1, gives a bound on the target generalization error. To be able to use the bound of Theorem 4.3, we need the Rademacher complexity of the source hypothesis class. Due to the symmetry of the paired hypothesis class (Eq. 4.3) in h_1 and h_2 up to scalar parameters, the complexity of the source hypothesis class can be similarly bounded by (1/(4√2)) · (2C^s_EA / l_s) ≤ R̂_m(J^s_EA) ≤ 2C^s_EA / l_s, where (C^s_EA)² = tr(A) / (λ_1 + (1/λ_2 + 1/λ)^{−1}), and A is the kernel block sub-matrix corresponding to the source samples. 4.5.2 EASYADAPT++ (EA++) Theorem 4.5. For the hypothesis class J^t_++ defined in Eq. 4.5, we have
(1/(4√2)) · (2C^t_++ / l_t) ≤ R̂_m(J^t_++) ≤ 2C^t_++ / l_t,
where R̂_m(J^t_++) = E_σ[sup_{h_2∈J^t_++} |Σ_i σ_i h_2(x_i)|] and (C^t_++)² = tr(B) / (λ_2 + (1/λ_1 + 1/λ)^{−1}) − λ_u (λ_1 / (λλ_1 + λλ_2 + λ_1λ_2))² tr(E(I + kF)^{−1}E′), where k = λ_u(λ_1 + λ_2) / (λλ_1 + λλ_2 + λ_1λ_2). Proof. Please see Appendix C in the supplement. The second term in (C^t_++)² is always positive, since the trace of a positive definite matrix is positive. So, the unlabeled data results in a reduction of complexity over the labeled-data case (Theorem 4.4). The trace term in the reduction can also be written as Σ_i ‖E_i‖²_{(I+kF)^{−1}}, where E_i is the i-th column of matrix E and ‖·‖²_Z is the norm induced by a positive definite matrix Z.
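The complexity reduction of Theorem 4.5 can be evaluated numerically: since (I + kF) is positive definite, the trace term is nonnegative and (C^t_++)² ≤ (C^t_EA)². A sketch with a linear kernel and illustrative hyperparameter values (all set to 0.5 here, purely for demonstration):

```python
import numpy as np

rng = np.random.default_rng(5)
lt, ut = 4, 6
lam1 = lam2 = lam = lam_u = 0.5

Xt = rng.normal(size=(lt, 3))  # target labeled samples
Xu = rng.normal(size=(ut, 3))  # target unlabeled samples
B = Xt @ Xt.T                  # blocks of the Gram matrix (Eq. 4.6)
E = Xt @ Xu.T
F = Xu @ Xu.T

denom = lam * lam1 + lam * lam2 + lam1 * lam2
k = lam_u * (lam1 + lam2) / denom
reduction = (lam_u * (lam1 / denom) ** 2
             * np.trace(E @ np.linalg.inv(np.eye(ut) + k * F) @ E.T))

c2_ea = np.trace(B) / (lam2 + 1.0 / (1.0 / lam1 + 1.0 / lam))  # (C^t_EA)^2
c2_pp = c2_ea - reduction                                      # (C^t_++)^2

# (I + kF) is positive definite, so the reduction is nonnegative and the
# EA++ target class is no more complex than the EA one.
assert reduction >= 0.0
assert c2_pp <= c2_ea
```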
Since E_i is the vector of inner products of the i-th target sample with all unlabeled samples, this means that the reduction in complexity is proportional to the similarity between the target unlabeled samples and the target labeled samples. This result, in conjunction with Theorem 4.1, gives a bound on the target generalization error in terms of the target empirical error. To be able to use the bound of Theorem 4.3, we need the Rademacher complexity of the source hypothesis class too. Again, as in the case of EA, using the symmetry of the paired hypothesis class H_++ (Eq. 4.3) in h_1 and h_2 up to scalar parameters, the complexity of the source hypothesis class can be similarly bounded by (1/(4√2)) · (2C^s_++ / l_s) ≤ R̂_m(J^s_++) ≤ 2C^s_++ / l_s, where (C^s_++)² = tr(A) / (λ_1 + (1/λ_2 + 1/λ)^{−1}) − λ_u (λ_2 / (λλ_1 + λλ_2 + λ_1λ_2))² tr(D(I + kF)^{−1}D′), and k is defined as in Theorem 4.5. The trace term can again be interpreted as before, which implies that the reduction in source class complexity is proportional to the similarity between the source labeled samples and the target unlabeled samples. 5 Experiments We follow experimental setups similar to [1] but report our empirical results for the task of sentiment classification using the SENTIMENT data provided by [15]. Sentiment classification here is a binary task: classifying a review as positive or negative, for user reviews of eight product types (apparel, books, DVD, electronics, kitchen, music, video, and other) collected from amazon.com. We quantify the domain divergences in terms of the A-distance [16], which is computed [17] from finite samples of the source and target domains using the proxy A-distance [16]. For our experiments, we consider the following domain pairs: (a) DVD→BOOKS (proxy A-distance = 0.7616) and (b) KITCHEN→APPAREL (proxy A-distance = 0.0459). As in [1], we use an averaged perceptron classifier from the Megam framework (implementation due to [18]) for all the aforementioned tasks.
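The proxy A-distance cited here is typically computed as 2(1 − 2ε), where ε is the test error of a classifier trained to separate source samples from target samples [16, 17]. The sketch below illustrates the idea on synthetic Gaussian stand-ins; the data, the nearest-centroid domain classifier, and all names are our own simplifications, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical stand-ins for two review domains after feature extraction.
src = rng.normal(loc=0.0, size=(200, 10))
tgt_far = rng.normal(loc=2.0, size=(200, 10))   # well-separated domain
tgt_near = rng.normal(loc=0.1, size=(200, 10))  # nearly identical domain

def proxy_a_distance(X1, X2):
    """2(1 - 2*err) of a domain classifier (nearest-centroid here)."""
    n1, n2 = len(X1) // 2, len(X2) // 2
    c1, c2 = X1[:n1].mean(axis=0), X2[:n2].mean(axis=0)  # train halves
    test = np.vstack([X1[n1:], X2[n2:]])                 # held-out halves
    labels = np.array([0] * (len(X1) - n1) + [1] * (len(X2) - n2))
    pred = (np.linalg.norm(test - c2, axis=1)
            < np.linalg.norm(test - c1, axis=1)).astype(int)
    err = np.mean(pred != labels)
    return 2.0 * (1.0 - 2.0 * err)

# Distant domains yield a larger proxy A-distance than near-identical ones,
# matching the DVD->BOOKS vs KITCHEN->APPAREL contrast reported above.
assert proxy_a_distance(src, tgt_far) > proxy_a_distance(src, tgt_near)
```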
The training sample size varies from 1k to 16k. In all cases, the amount of unlabeled target data is equal to the total amount of labeled source and target data. We compare the empirical performance of EA++ with a few other baselines, namely: (a) SOURCEONLY (classifier trained on source labeled samples), (b) TARGETONLY-FULL (classifier trained on the same number of target labeled samples as the number of source labeled samples in SOURCEONLY), (c) TARGETONLY (classifier trained on a small amount of target labeled samples, roughly one-tenth of the amount of source labeled samples in SOURCEONLY), (d) ALL (classifier trained on the combined labeled samples of SOURCEONLY and TARGETONLY), (e) EA (classifier trained in the augmented feature space on the same input training set as ALL), (f) EA++ (classifier trained in the augmented feature space on the same input training set as EA plus an equal amount of unlabeled target data). All these approaches were tested on the entire amount of available target test data. Figure 2 presents the learning curves for (a) SOURCEONLY, (b) TARGETONLY-FULL, (c) TARGETONLY, (d) ALL, (e) EA, and (f) EA++ (EA with unlabeled data). The x-axis represents the number of training samples on which the predictor has been trained.
[Figure 2: Test accuracy (error rate vs. number of samples) of SOURCEONLY, TARGETONLY-FULL, TARGETONLY, ALL, EA, and EA++ (with unlabeled data) for (a) DVD→BOOKS (proxy A-distance = 0.7616) and (b) KITCHEN→APPAREL (proxy A-distance = 0.0459).]
At this point, we note that the number of training samples varies depending on the particular approach being used. For SOURCEONLY, TARGETONLY-FULL and TARGETONLY, it is just the corresponding number of labeled source or target samples, respectively. For ALL and EA, it is the sum of the labeled source and target samples.
For EA++, the x-value plotted denotes the amount of unlabeled target data used (in addition to an equal amount of source+target labeled data, as in ALL or EA). We plot this number for EA++ to compare its improvement over EA when using an additional (and equal) amount of unlabeled target data. This accounts for the different x-values plotted for the different curves. In all cases, the y-axis denotes the error rate. As can be seen, in both cases EA++ outperforms EASYADAPT. For DVD→BOOKS, the domains are far apart, as indicated by the high proxy A-distance. Hence, TARGETONLY-FULL achieves the best performance, and EA++ almost catches up for large amounts of training data. Across sample sizes, EA++ gives relative improvements in the range of 4.36%–9.14% compared to EA. The domains KITCHEN and APPAREL can be considered reasonably close due to their low domain divergence. Hence, this domain pair is more amenable to domain adaptation, as demonstrated by the fact that the other approaches (SOURCEONLY, TARGETONLY, ALL) perform better than or at least as well as TARGETONLY-FULL. However, as before, EA++ once again outperforms all these approaches, including TARGETONLY-FULL. Due to the closeness of the two domains, the additional unlabeled data helps EA++ outperform TARGETONLY-FULL. We also note that EA performs poorly in some cases, which corroborates prior experimental results [1]. For this dataset, EA++ yields relative improvements in the range of 14.08%–39.29% over EA across the sample sizes experimented with. Similar trends were observed for other tasks and datasets (see Figure 3 of [2]). 6 Conclusions We proposed a semi-supervised extension to an existing domain adaptation technique (EA). Our approach, EA++, leverages unlabeled data to improve the performance of EA. With this extension, EA++ applies to both fully supervised and semi-supervised domain adaptation settings.
We have formulated EA and EA++ in terms of co-regularization, an idea that originated in the context of multi-view learning [13, 19]. Our formulation also bears resemblance to existing work [20] in the semi-supervised learning (SSL) literature, which has been studied extensively in [21, 22, 23]. The difference is that while in SSL one tries to make the two views agree on unlabeled data, in domain adaptation the aim is to make the two hypotheses in source and target agree. Using our formulation, we have presented a theoretical analysis of the superior performance of EA++ as compared to EA. Our empirical results further confirm the theoretical findings. EA++ can also be extended to multiple-source settings: if we have k sources and a single target domain, then we can introduce a co-regularizer for each source-target pair. Due to space constraints, we defer details to a full version. References [1] Hal Daumé III. Frustratingly easy domain adaptation. In ACL'07, pages 256–263, Prague, Czech Republic, June 2007. [2] Hal Daumé III, Abhishek Kumar, and Avishek Saha. Frustratingly easy semi-supervised domain adaptation. In ACL 2010 Workshop on Domain Adaptation for Natural Language Processing (DANLP), pages 53–59, Uppsala, Sweden, July 2010. [3] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In KDD'04, pages 109–117, Seattle, WA, USA, August 2004. [4] Mark Dredze, Alex Kulesza, and Koby Crammer. Multi-domain learning by confidence-weighted parameter combination. Machine Learning, 79(1-2):123–149, 2010. [5] Andrew Arnold and William W. Cohen. Intra-document structural frequency features for semi-supervised domain adaptation. In CIKM'08, pages 1291–1300, Napa Valley, California, USA, October 2008. [6] John Blitzer, Ryan McDonald, and Fernando Pereira. Domain adaptation with structural correspondence learning. In EMNLP'06, pages 120–128, Sydney, Australia, July 2006. [7] Gokhan Tur.
Co-adaptation: Adaptive co-training for semi-supervised learning. In ICASSP’09, pages 3721–3724, Taipei, Taiwan, April 2009. [8] Wenyuan Dai, Gui-Rong Xue, Qiang Yang, and Yong Yu. Transferring Naive Bayes classifiers for text classification. In AAAI’07, pages 540–545, Vancouver, B.C., July 2007. [9] Dikan Xing, Wenyuan Dai, Gui-Rong Xue, and Yong Yu. Bridged refinement for transfer learning. In PKDD’07, pages 324–335, Warsaw, Poland, September 2007. [10] Lixin Duan, Ivor W. Tsang, Dong Xu, and Tat-Seng Chua. Domain adaptation from multiple sources via auxiliary classifiers. In ICML’09, pages 289–296, Montreal, Quebec, June 2009. [11] Ming-Wei Chang, Michael Connor, and Dan Roth. The necessity of combining adaptation methods. In EMNLP’10, pages 767–777, Cambridge, MA, October 2010. [12] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. A co-regularization approach to semi-supervised learning with multiple views. In ICML Workshop on Learning with Multiple Views, pages 824–831, Bonn, Germany, August 2005. [13] D. S. Rosenberg and P. L. Bartlett. The Rademacher complexity of co-regularized kernel classes. In AISTATS’07, pages 396–403, San Juan, Puerto Rico, March 2007. [14] John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In NIPS’07, pages 129–136, Vancouver, B.C., December 2007. [15] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL’07, pages 440–447, Prague, Czech Republic, June 2007. [16] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In NIPS’06, pages 137–144, Vancouver, B.C., December 2006. [17] Piyush Rai, Avishek Saha, Hal Daum´e III, and Suresh Venkatasubramanian. Domain adaptation meets active learning. In NAACL 2010 Workshop on Active Learning for NLP (ALNLP), pages 27–32, Los Angeles, USA, June 2010. 
[18] Hal Daum´e III. Notes on CG and LM-BFGS optimization of logistic regression. August 2004. [19] Vikas Sindhwani and David S. Rosenberg. An RKHS for multi-view learning and manifold co-regularization. In ICML’08, pages 976–983, Helsinki, Finland, June 2008. [20] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In COLT’98, pages 92–100, New York, NY, USA, July 1998. ACM. [21] Maria-Florina Balcan and Avrim Blum. A PAC-style model for learning from labeled and unlabeled data. In COLT’05, pages 111–126, Bertinoro, Italy, June 2005. [22] Maria-Florina Balcan and Avrim Blum. A discriminative model for semi-supervised learning. J. ACM, 57(3), 2010. [23] Karthik Sridharan and Sham M. Kakade. An information theoretic framework for multi-view learning. In COLT’08, pages 403–414, Helsinki, Finland, June 2008. 9
Structured sparsity-inducing norms through submodular functions

Francis Bach
INRIA - Willow project-team
Laboratoire d'Informatique de l'Ecole Normale Supérieure
Paris, France
francis.bach@ens.fr

Abstract

Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the ℓ1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.

1 Introduction

The concept of parsimony is central in many scientific domains. In the context of statistics, signal processing or machine learning, it takes the form of variable or feature selection problems, and is commonly used in two situations. First, to make the model or the prediction more interpretable or cheaper to use, i.e., even if the underlying problem does not admit sparse solutions, one looks for the best sparse approximation. Second, sparsity can also be used given prior knowledge that the model should be sparse.
In these two situations, reducing parsimony to finding models with low cardinality turns out to be limiting, and structured parsimony has emerged as a fruitful practical extension, with applications to image processing, text processing or bioinformatics (see, e.g., [1, 2, 3, 4, 5, 6, 7] and Section 4). For example, in [4], structured sparsity is used to encode prior knowledge regarding network relationships between genes, while in [6], it is used as an alternative to structured nonparametric Bayesian process based priors for topic models. Most of the work based on convex optimization and the design of dedicated sparsity-inducing norms has focused mainly on the specific allowed set of sparsity patterns [1, 2, 4, 6]: if w ∈ R^p denotes the predictor we aim to estimate, and Supp(w) denotes its support, then these norms are designed so that penalizing with them only leads to supports from a given family of allowed patterns. In this paper, we instead follow the approach of [8, 3] and consider specific penalty functions F(Supp(w)) of the support set, which go beyond the cardinality function, but are not limited or designed to only forbid certain sparsity patterns. As shown in Section 6.2, these may also lead to restricted sets of supports, but their interpretation in terms of an explicit penalty on the support leads to additional insights into the behavior of structured sparsity-inducing norms (see, e.g., Section 4.1). While direct greedy approaches (i.e., forward selection) to the problem are considered in [8, 3], we provide convex relaxations to the function w ↦ F(Supp(w)), which extend the traditional link between the ℓ1-norm and the cardinality function. This is done for a particular ensemble of set-functions F, namely nondecreasing submodular functions.
Submodular functions may be seen as the set-function equivalent of convex functions, and exhibit many interesting properties that we review in Section 2; see [9] for a tutorial on submodular analysis and [10, 11] for other applications to machine learning. This paper makes the following contributions:

− We make explicit links between submodularity and sparsity by showing that the convex envelope of the function w ↦ F(Supp(w)) on the ℓ∞-ball may be readily obtained from the Lovász extension of the submodular function (Section 3).
− We provide generic algorithmic tools, i.e., subgradients and proximal operators (Section 5), as well as theoretical guarantees, i.e., conditions for support recovery or high-dimensional inference (Section 6), that extend classical results for the ℓ1-norm and show that many norms may be tackled by the exact same analysis and algorithms.
− By selecting specific submodular functions in Section 4, we recover and give a new interpretation to known norms, such as those based on rank statistics or grouped norms with potentially overlapping groups [1, 2, 7], and we define new norms, in particular ones that can be used as non-factorial priors for supervised learning (Section 4). These are illustrated on simulation experiments in Section 7, where they outperform related greedy approaches [3].

Notation. For w ∈ R^p, Supp(w) ⊂ V = {1, . . . , p} denotes the support of w, defined as Supp(w) = {j ∈ V, wj ≠ 0}. For w ∈ R^p and q ∈ [1, ∞], we denote by ∥w∥q the ℓq-norm of w. We denote by |w| ∈ R^p the vector of absolute values of the components of w. Moreover, given a vector w and a matrix Q, wA and QAA are the corresponding subvector and submatrix of w and Q. Finally, for w ∈ R^p and A ⊂ V, $w(A) = \sum_{k \in A} w_k$ (this defines a modular set-function).

2 Review of submodular function theory

Throughout this paper, we consider a nondecreasing submodular function F defined on the power set 2^V of V = {1, . . . , p}, i.e., such that:

∀A, B ⊂ V, F(A) + F(B) ⩾ F(A ∪ B) + F(A ∩ B), (submodularity)
∀A, B ⊂ V, A ⊂ B ⇒ F(A) ⩽ F(B). (monotonicity)

Moreover, we assume that F(∅) = 0. These set-functions are often referred to as polymatroid set-functions [12, 13]. Also, without loss of generality, we may assume that F is strictly positive on singletons, i.e., for all k ∈ V, F({k}) > 0. Indeed, if F({k}) = 0, then by submodularity and monotonicity, if A ∋ k, F(A) = F(A\{k}), and thus we can simply consider V\{k} instead of V. Classical examples are the cardinality function (which will lead to the ℓ1-norm) and, given a partition of V into B1 ∪ · · · ∪ Bk = V, the set-function A ↦ F(A) which is equal to the number of groups B1, . . . , Bk with non-empty intersection with A (which will lead to the grouped ℓ1/ℓ∞-norm [1, 14]).

Lovász extension. Given any set-function F, one can define its Lovász extension f : R^p_+ → R as follows: given w ∈ R^p_+, we can order the components of w in decreasing order wj1 ⩾ · · · ⩾ wjp ⩾ 0; the value f(w) is then defined as

$f(w) = \sum_{k=1}^p w_{j_k}\,[F(\{j_1, \dots, j_k\}) - F(\{j_1, \dots, j_{k-1}\})].$ (1)

The Lovász extension f is always piecewise-linear, and when F is submodular, it is also convex (see, e.g., [12, 9]). Moreover, for all δ ∈ {0, 1}^p, f(δ) = F(Supp(δ)): f is indeed an extension from vectors in {0, 1}^p (which can be identified with indicator vectors of sets) to all vectors in R^p_+. Moreover, it turns out that minimizing F over subsets, i.e., minimizing f over {0, 1}^p, is equivalent to minimizing f over [0, 1]^p [13].

Submodular polyhedron and greedy algorithm. We denote by P the submodular polyhedron [12], defined as the set of s ∈ R^p_+ such that for all A ⊂ V, s(A) ⩽ F(A), i.e., P = {s ∈ R^p_+, ∀A ⊂ V, s(A) ⩽ F(A)}, where we use the notation $s(A) = \sum_{k \in A} s_k$.
Figure 1: Polyhedral unit ball, for 4 different submodular functions (two variables), with different stable inseparable sets leading to different sets of extreme points; changing values of F may make some of the extreme points disappear. From left to right: F(A) = |A|^{1/2} (all possible extreme points), F(A) = |A| (leading to the ℓ1-norm), F(A) = min{|A|, 1} (leading to the ℓ∞-norm), F(A) = ½·1{A∩{2}≠∅} + 1{A≠∅} (leading to the structured norm Ω(w) = ½|w2| + ∥w∥∞).

One important result in submodular analysis is that if F is a nondecreasing submodular function, then we have a representation of f as a maximum of linear functions [12, 9], i.e., for all w ∈ R^p_+,

$f(w) = \max_{s \in P} w^\top s.$ (2)

Instead of solving a linear program with p + 2^p constraints, a solution s may then be obtained by the following "greedy algorithm": order the components of w in decreasing order wj1 ⩾ · · · ⩾ wjp, and then take, for all k ∈ {1, . . . , p}, sjk = F({j1, . . . , jk}) − F({j1, . . . , jk−1}).

Stable sets. A set A is said to be stable if it cannot be augmented without increasing F, i.e., if for all sets B ⊃ A, B ≠ A ⇒ F(B) > F(A). If F is strictly increasing (such as for the cardinality), then all sets are stable. The set of stable sets is closed under intersection [13], and will correspond to the set of allowed sparsity patterns (see Section 6.2).

Separable sets. A set A is separable if we can find a partition of A into A = B1 ∪ · · · ∪ Bk such that F(A) = F(B1) + · · · + F(Bk). A set A is inseparable if it is not separable. As shown in [13], the submodular polytope P has full dimension p as soon as F is strictly positive on all singletons, and its faces are exactly the sets {sk = 0} for k ∈ V and {s(A) = F(A)} for stable and inseparable sets A. We denote by T the set of such sets. This implies that P = {s ∈ R^p_+, ∀A ∈ T, s(A) ⩽ F(A)}.
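Equation (2) and the greedy algorithm can be checked by brute force on a small example; the sketch below (our illustration, assuming the choice F(A) = |A|^{1/2} from Figure 1) enumerates every ordering, verifies that each greedy vector lies in the polyhedron P, and that the decreasing-order one attains the maximum of w⊤s:

```python
from itertools import permutations, combinations

p = 4
F = lambda A: len(A) ** 0.5       # nondecreasing submodular set-function
w = [0.9, 0.1, 0.5, 0.3]          # a point in R^p_+

def greedy_vertex(order):
    # s_{j_k} = F({j_1..j_k}) - F({j_1..j_{k-1}}) for the given ordering
    s, prefix, prev = [0.0] * p, set(), 0.0
    for j in order:
        prefix.add(j)
        cur = F(frozenset(prefix))
        s[j] = cur - prev
        prev = cur
    return s

subsets = [frozenset(c) for r in range(p + 1) for c in combinations(range(p), r)]
best = float("-inf")
for order in permutations(range(p)):
    s = greedy_vertex(order)
    # every greedy vector satisfies s(A) <= F(A) for all A, i.e. s is in P
    assert all(sum(s[j] for j in A) <= F(A) + 1e-12 for A in subsets)
    best = max(best, sum(si * wi for si, wi in zip(s, w)))

# the decreasing-order greedy vector attains max_{s in P} w^T s
s_star = greedy_vertex(sorted(range(p), key=lambda j: -w[j]))
assert abs(best - sum(si * wi for si, wi in zip(s_star, w))) < 1e-10
```

For submodular F the maximum over all greedy vectors coincides with the maximum over the whole polyhedron, which is what the final assertion exercises.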
These stable inseparable sets will play a role when describing extreme points of unit balls of our new norms (Section 3) and for deriving concentration inequalities in Section 6.3. For the cardinality function, stable and inseparable sets are singletons.

3 Definition and properties of structured norms

We define the function Ω(w) = f(|w|), where |w| is the vector in R^p composed of the absolute values of w and f is the Lovász extension of F. We have the following properties (see proof in [15]), which show that we indeed define a norm and that it is the desired convex envelope:

Proposition 1 (Convex envelope, dual norm) Assume that the set-function F is submodular, nondecreasing, and strictly positive for all singletons. Define Ω : w ↦ f(|w|). Then: (i) Ω is a norm on R^p, (ii) Ω is the convex envelope of the function g : w ↦ F(Supp(w)) on the unit ℓ∞-ball, (iii) the dual norm (see, e.g., [16]) of Ω is equal to $\Omega^*(s) = \max_{A \subset V} \frac{\|s_A\|_1}{F(A)} = \max_{A \in \mathcal{T}} \frac{\|s_A\|_1}{F(A)}$.

We provide examples of submodular set-functions and norms in Section 4, where we go from set-functions to norms, and vice versa. From the definition of the Lovász extension in Eq. (1), we see that Ω is a polyhedral norm (i.e., its unit ball is a polyhedron). The following proposition gives the set of extreme points of the unit ball (see proof in [15] and examples in Figure 1):

Proposition 2 (Extreme points of unit ball) The extreme points of the unit ball of Ω are the vectors $\frac{1}{F(A)} s$, with s ∈ {−1, 0, 1}^p, Supp(s) = A and A a stable inseparable set.

This proposition shows that, depending on the number and cardinality of the inseparable stable sets, we can go from 2p extreme points (only singletons) to 3^p − 1 (all possible sign vectors). We show in Figure 1 examples of balls for p = 2, as well as sets of extreme points. These extreme points will play a role in the concentration inequalities derived in Section 6.
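Proposition 1 (iii) can likewise be probed numerically; the following sketch (our illustration, again for the cardinality-based choice F(A) = |A|^{1/2}) evaluates the dual norm by enumerating subsets and checks the generalized Hölder inequality |⟨s, w⟩| ⩽ Ω(w) Ω*(s) implied by duality:

```python
from itertools import combinations
import random

p = 4
F = lambda A: len(A) ** 0.5

def omega(w):
    # Omega(w) = f(|w|); since F depends only on |A|, Eq. (1) reduces to
    # rank statistics of |w| (see Section 4.3)
    a = sorted((abs(x) for x in w), reverse=True)
    return sum(a[k] * (F(range(k + 1)) - F(range(k))) for k in range(p))

def omega_dual(s):
    # Proposition 1 (iii): Omega*(s) = max_{A nonempty} ||s_A||_1 / F(A)
    subsets = (c for r in range(1, p + 1) for c in combinations(range(p), r))
    return max(sum(abs(s[j]) for j in A) / F(A) for A in subsets)

random.seed(0)
for _ in range(100):
    w = [random.uniform(-1, 1) for _ in range(p)]
    s = [random.uniform(-1, 1) for _ in range(p)]
    dot = sum(si * wi for si, wi in zip(s, w))
    assert abs(dot) <= omega(w) * omega_dual(s) + 1e-10
```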
Figure 2: Sequence and groups: (left) groups for contiguous patterns, (right) groups for penalizing the number of jumps in the indicator vector sequence.

Figure 3: Regularization path for a penalized least-squares problem; each panel plots weights against log(λ) (black: variables that should be active, red: variables that should be left out). From left to right: ℓ1-norm penalization (a wrong variable is included with the correct ones), polyhedral norm for rectangles in 2D, with zoom (all variables come in together), mix of the two norms (correct behavior).

4 Examples of nondecreasing submodular functions

We consider three main types of submodular functions with potential applications to regularization for supervised learning. Some existing norms are shown to be examples of our framework (Section 4.1, Section 4.3), while other novel norms are designed from specific submodular functions (Section 4.2). Other examples of submodular functions, in particular in terms of matroids and entropies, may be found in [12, 10, 11] and could also lead to interesting new norms. Note that set covers, which are common examples of submodular functions, are subcases of the set-functions defined in Section 4.1 (see, e.g., [9]).

4.1 Norms defined with non-overlapping or overlapping groups

We consider grouped norms defined with potentially overlapping groups [1, 2], i.e., $\Omega(w) = \sum_{G \subset V} d(G)\, \|w_G\|_\infty$, where d is a nonnegative set-function (with potentially d(G) = 0 when G should not be considered in the norm).
It is a norm as soon as $\bigcup_{G:\, d(G) > 0} G = V$, and it corresponds to the nondecreasing submodular function $F(A) = \sum_{G \cap A \neq \emptyset} d(G)$. In the case where ℓ∞-norms are replaced by ℓ2-norms, [2] has shown that the set of allowed sparsity patterns consists of intersections of complements of groups G with strictly positive weights. These sets happen to be the stable sets of the corresponding submodular function; thus the analysis provided in Section 6.2 extends the result of [2] to the new case of ℓ∞-norms. However, in our situation, we can give a reinterpretation through a submodular function that counts the number of times the support A intersects groups G with nonzero weights. This goes beyond restricting the set of allowed sparsity patterns to stable sets. We show later in this section some insights gained by this reinterpretation. We now give some examples of norms, with various topologies of groups.

Hierarchical norms. Hierarchical norms defined on directed acyclic graphs [1, 5, 6] correspond to the set-function F(A) which is the cardinality of the union of ancestors of elements in A. These have been applied to bioinformatics [5], computer vision and topic models [6].

Norms defined on grids. If we assume that the p variables are organized on a 1D, 2D or 3D grid, [2] considers norms based on overlapping groups leading to stable sets equal to rectangular or convex shapes, with applications in computer vision [17]. For example, for the groups defined in the left side of Figure 2 (with unit weights), we have F(A) = p − 2 + range(A) if A ≠ ∅ and F(∅) = 0 (the range of A is equal to max(A) − min(A) + 1). From empty sets to non-empty sets, there is a gap of p − 1, which is larger than the differences among non-empty sets. This leads to the undesired behavior, already observed by [2], of adding all variables in one step, rather than gradually, when the regularization parameter decreases in a regularized optimization problem.
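The identity F(A) = p − 2 + range(A) can be verified exhaustively for small p. Since Figure 2 is not reproduced here, the sketch below assumes the standard choice of groups for contiguous patterns from [2], namely all prefixes {1, …, k} (k < p) and suffixes {k, …, p} (k > 1) with unit weights; this group choice is our assumption:

```python
from itertools import combinations

p = 6
prefixes = [frozenset(range(k + 1)) for k in range(p - 1)]  # {0..k}, k < p-1
suffixes = [frozenset(range(k, p)) for k in range(1, p)]    # {k..p-1}, k >= 1
groups = prefixes + suffixes

def F(A):
    # number of groups intersecting A (unit weights d(G) = 1)
    return sum(1 for G in groups if G & A)

for r in range(1, p + 1):
    for c in combinations(range(p), r):
        A = frozenset(c)
        rng = max(A) - min(A) + 1  # range(A)
        assert F(A) == p - 2 + rng
```

The assertion confirms that the group count depends on A only through its range, which is why complements of unions of such groups (the stable sets) are exactly the contiguous intervals.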
In order to counterbalance this effect, adding a constant times the cardinality function has the effect of making the first gap relatively smaller. This corresponds to adding a constant times the ℓ1-norm and, as shown in Figure 3, solves the problem of having all variables come in together. All patterns are then allowed, but contiguous ones are encouraged rather than forced.

Another interesting new norm may be defined from the groups in the right side of Figure 2. Indeed, it corresponds to the function F(A) equal to |A| plus the number of intervals of A. Note that this also favors contiguous patterns but is not limited to selecting a single interval (unlike the norm obtained from the groups in the left side of Figure 2). Note that it is to be contrasted with the total variation (a.k.a. fused Lasso penalty [18]), which is a relaxation of the number of jumps in a vector w rather than in its support. In 2D or 3D, this extends to the notions of perimeter and area, but we do not pursue such extensions here.

4.2 Spectral functions of submatrices

Given a positive semidefinite matrix Q ∈ R^{p×p} and a real-valued function h from R+ to R, one may define tr[h(Q)] as $\sum_{i=1}^p h(\lambda_i)$, where λ1, . . . , λp are the (nonnegative) eigenvalues of Q [19]. We can thus define the set-function F(A) = tr h(QAA) for A ⊂ V. The functions h(λ) = log(λ + t) for t ⩾ 0 lead to submodular functions, as they correspond to entropies of Gaussian random variables (see, e.g., [12, 9]). Thus, since for q ∈ (0, 1), $\lambda^q = \frac{q \sin(q\pi)}{\pi} \int_0^\infty \log(1 + \lambda/t)\, t^{q-1}\, dt$ (see, e.g., [20]), the functions h(λ) = λ^q for q ∈ (0, 1] are positive linear combinations of functions that lead to nondecreasing submodular functions. Thus, they are also nondecreasing submodular functions and, to the best of our knowledge, provide novel examples of such functions. In the context of supervised learning from a design matrix X ∈ R^{n×p}, we naturally use Q = X⊤X.
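Submodularity and monotonicity of such spectral set-functions can be checked by brute force on a small random design; this is a numerical sketch (ours, not from the paper) for F(A) = tr(X_A⊤X_A)^{1/2}, the choice used later in the experiments:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 10, 5
X = rng.standard_normal((n, p))

def F(A):
    # F(A) = tr h(Q_AA) with Q = X^T X and h(lambda) = lambda^{1/2}
    cols = sorted(A)
    if not cols:
        return 0.0
    Q = X[:, cols].T @ X[:, cols]
    return float(np.sum(np.sqrt(np.maximum(np.linalg.eigvalsh(Q), 0.0))))

subsets = [frozenset(c) for r in range(p + 1) for c in combinations(range(p), r)]
eps = 1e-8
for A in subsets:
    for B in subsets:
        assert F(A | B) + F(A & B) <= F(A) + F(B) + eps  # submodularity
        if A <= B:
            assert F(A) <= F(B) + eps                    # monotonicity
```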
If h is linear, then $F(A) = \mathrm{tr}\, X_A^\top X_A = \sum_{k \in A} X_k^\top X_k$ (where XA denotes the submatrix of X with columns in A) and we obtain a weighted cardinality function, and hence a weighted ℓ1-norm, which is a factorial prior, i.e., a sum of terms depending on each variable independently. In a frequentist setting, the Mallows CL penalty [21] depends on the degrees of freedom, of the form $\mathrm{tr}\, X_A^\top X_A (X_A^\top X_A + \lambda I)^{-1}$. This is a non-factorial prior, but unfortunately it does not lead to a submodular function. In a Bayesian context however, it is shown by [22] that penalties of the form $\log \det(X_A^\top X_A + \lambda I)$ (which do lead to submodular functions) correspond to marginal likelihoods associated with the set A and have good behavior when used within a non-convex framework. This highlights the need for non-factorial priors that are sub-linear functions of the eigenvalues of $X_A^\top X_A$, which is exactly what nondecreasing submodular functions of submatrices provide. We do not pursue the extensive evaluation of non-factorial convex priors in this paper, but provide in simulations examples with $F(A) = \mathrm{tr}(X_A^\top X_A)^{1/2}$ (which is equal to the trace norm of XA [16]).

4.3 Functions of cardinality

For F(A) = h(|A|), where h is nondecreasing and concave with h(0) = 0, Eq. (1) shows that Ω(w) is defined from the rank statistics of |w| ∈ R^p_+, i.e., if |w(1)| ⩾ |w(2)| ⩾ · · · ⩾ |w(p)|, then $\Omega(w) = \sum_{k=1}^p [h(k) - h(k-1)]\, |w_{(k)}|$. This includes the sum of the q largest elements, and might lead to interesting new norms for unstructured variable selection, but this is not pursued here. However, the algorithms and analysis presented in Section 5 and Section 6 apply to this case.

5 Convex analysis and optimization

In this section we provide algorithmic tools related to optimization problems based on regularization by our novel sparsity-inducing norms.
Note that since these norms are polyhedral norms whose unit balls have a potentially exponential number of vertices or faces, regular linear programming toolboxes may not be used.

Subgradient. From $\Omega(w) = \max_{s \in P} s^\top |w|$ and the greedy algorithm¹ presented in Section 2, one can easily get in polynomial time one subgradient as one of the maximizers s. This allows the use of subgradient descent, with, as shown in Figure 4, slow convergence compared to proximal methods.

Proximal operator. Given regularized problems of the form $\min_{w \in \mathbb{R}^p} L(w) + \lambda \Omega(w)$, where L is differentiable with Lipschitz-continuous gradient, proximal methods have been shown to be particularly efficient first-order methods (see, e.g., [23]). In this paper, we consider the method "ISTA" and its accelerated variant "FISTA" [23], which are compared in Figure 4.

¹The greedy algorithm to find extreme points of the submodular polyhedron should not be confused with the greedy algorithm (e.g., forward selection) that we consider in Section 7.

To apply these methods, it suffices to be able to solve efficiently problems of the form $\min_{w \in \mathbb{R}^p} \frac{1}{2} \|w - z\|_2^2 + \lambda \Omega(w)$. In the case of the ℓ1-norm, this reduces to soft thresholding of z; the following proposition (see proof in [15]) shows that the general case is equivalent to a particular algorithm for submodular function minimization, namely the minimum-norm-point algorithm, which has no complexity bound but is empirically faster than algorithms with such bounds [12]:

Proposition 3 (Proximal operator) Let z ∈ R^p and λ > 0. Minimizing $\frac{1}{2} \|w - z\|_2^2 + \lambda \Omega(w)$ is equivalent to finding the minimum of the submodular function A ↦ λF(A) − |z|(A) with the minimum-norm-point algorithm.

In [15], it is shown how a solution for one problem may be obtained from a solution to the other problem.
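For intuition, here is a minimal ISTA sketch (ours, not the paper's implementation) for the special case F(A) = |A|, where Ω is the ℓ1-norm and the proximal operator reduces to plain soft thresholding; for a general nondecreasing submodular F, the proximal step would instead invoke a minimum-norm-point solver (Proposition 3), which we do not reproduce:

```python
def soft_threshold(z, t):
    # prox of t * ||.||_1 applied componentwise
    return [max(abs(zi) - t, 0.0) * (1.0 if zi >= 0 else -1.0) for zi in z]

def ista(grad_L, lip, lam, w0, iters=200):
    # ISTA: gradient step on L with step 1/lip, then prox of (lam/lip)*Omega
    w = list(w0)
    for _ in range(iters):
        z = [wi - gi / lip for wi, gi in zip(w, grad_L(w))]
        w = soft_threshold(z, lam / lip)
    return w

# tiny separable least-squares example: L(w) = 0.5*(w1 - 1)^2 + 0.5*(w2 - 0.1)^2
grad = lambda w: [w[0] - 1.0, w[1] - 0.1]
w = ista(grad, lip=1.0, lam=0.3, w0=[0.0, 0.0])
# closed-form solution: soft(1.0, 0.3) = 0.7 and soft(0.1, 0.3) = 0
assert abs(w[0] - 0.7) < 1e-9 and w[1] == 0.0
```

Replacing `soft_threshold` with a submodular proximal solver and adding FISTA's momentum term gives the accelerated variant compared in Figure 4.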
Moreover, any algorithm for minimizing submodular functions allows one to obtain directly the support of the unique solution of the proximal problem, and with a sequence of submodular function minimizations, the full solution may also be obtained. Similar links between convex optimization and minimization of submodular functions have been considered before (see, e.g., [24]). However, these are dedicated to symmetric submodular functions (such as the ones obtained from graph cuts) and are thus not directly applicable to our situation of nondecreasing submodular functions. Finally, note that using the minimum-norm-point algorithm leads to a generic method that can be applied to any submodular function F, but it may be rather inefficient for simpler subcases (e.g., the ℓ1/ℓ∞-norm, tree-structured groups [6], or general overlapping groups [7]).

6 Sparsity-inducing properties

In this section, we consider a fixed design matrix X ∈ R^{n×p} and a vector of random responses y ∈ R^n. Given λ > 0, we define ŵ as a minimizer of the regularized least-squares cost:

$\min_{w \in \mathbb{R}^p} \frac{1}{2n} \|y - Xw\|_2^2 + \lambda \Omega(w).$ (3)

We study the sparsity-inducing properties of solutions of Eq. (3), i.e., we determine in Section 6.2 which patterns are allowed and in Section 6.3 which sufficient conditions lead to correct estimation. Like recent analyses of sparsity-inducing norms [25], the analysis provided in this section relies heavily on decomposability properties of our norm Ω.

6.1 Decomposability

For a subset J of V, we denote by FJ : 2^J → R the restriction of F to J, defined for A ⊂ J by FJ(A) = F(A), and by F^J : 2^{J^c} → R the contraction of F by J, defined for A ⊂ J^c by F^J(A) = F(A ∪ J) − F(J). These two functions are submodular and nondecreasing as soon as F is (see, e.g., [12]). We denote by ΩJ the norm on R^J defined through the submodular function FJ, and by Ω^J the pseudo-norm defined on R^{J^c} through F^J (as shown in Proposition 4, it is a norm only when J is a stable set).
Note that Ω_{J^c} (a norm on R^{J^c}) is in general different from Ω^J. Moreover, ΩJ(wJ) is actually equal to Ω(w̃), where w̃J = wJ and w̃Jc = 0, i.e., it is the restriction of Ω to J. We can now state the following decomposition properties, which show that under certain circumstances, we can decompose the norm Ω on subsets J and their complements:

Proposition 4 (Decomposition) Given J ⊂ V and ΩJ and Ω^J defined as above, we have:
(i) ∀w ∈ R^p, Ω(w) ⩾ ΩJ(wJ) + Ω^J(wJc),
(ii) ∀w ∈ R^p, if min_{j∈J} |wj| ⩾ max_{j∈Jc} |wj|, then Ω(w) = ΩJ(wJ) + Ω^J(wJc),
(iii) Ω^J is a norm on R^{J^c} if and only if J is a stable set.

6.2 Sparsity patterns

In this section, we do not make any assumption regarding the correct specification of the linear model. We show that with probability one, only stable support sets may be obtained (see proof in [15]). For simplicity, we assume invertibility of X⊤X, which forbids the high-dimensional situation p ⩾ n we consider in Section 6.3, but we could consider assumptions similar to the ones used in [2].

Proposition 5 (Stable sparsity patterns) Assume y ∈ R^n has an absolutely continuous density with respect to the Lebesgue measure and that X⊤X is invertible. Then the minimizer ŵ of Eq. (3) is unique and, with probability one, its support Supp(ŵ) is a stable set.

6.3 High-dimensional inference

We now assume that the linear model is well-specified and extend results from [26] for sufficient support recovery conditions and from [25] for estimation consistency. As seen in Proposition 4, the norm Ω is decomposable, and we use this property extensively in this section. We define $\rho(J) = \min_{B \subset J^c} \frac{F(B \cup J) - F(J)}{F(B)}$; by submodularity and monotonicity of F, ρ(J) always lies between zero and one, and, as soon as J is stable, it is strictly positive (for the ℓ1-norm, ρ(J) = 1). Moreover, we denote by $c(J) = \sup_{w \in \mathbb{R}^p} \Omega_J(w_J) / \|w_J\|_2$ the equivalence constant between the norm ΩJ and the ℓ2-norm. We always have $c(J) \leq |J|^{1/2} \max_{k \in V} F(\{k\})$ (with equality for the ℓ1-norm).
The following propositions allow us to recover and extend well-known results for the ℓ1-norm: Propositions 6 and 8 extend results based on support recovery conditions [26], while Propositions 7 and 8 extend results based on restricted eigenvalue conditions (see, e.g., [25]). We can also recover results for the ℓ1/ℓ∞-norm [14]. As shown in [15], the proof techniques are similar and are adapted through the decomposition properties from Proposition 4.

Proposition 6 (Support recovery) Assume that y = Xw* + σε, where ε is a standard multivariate normal vector. Let $Q = \frac{1}{n} X^\top X \in \mathbb{R}^{p \times p}$. Denote by J the smallest stable set containing the support Supp(w*) of w*. Define $\nu = \min_{j:\, w^*_j \neq 0} |w^*_j| > 0$, assume $\kappa = \lambda_{\min}(Q_{JJ}) > 0$, and that for η > 0, $(\Omega^J)^* \big[ (\Omega_J(Q_{JJ}^{-1} Q_{Jj}))_{j \in J^c} \big] \leq 1 - \eta$. Then, if $\lambda \leq \frac{\kappa \nu}{2 c(J)}$, the minimizer ŵ is unique and has support equal to J, with probability larger than $1 - 3 P\big( \Omega^*(z) > \frac{\lambda \eta \rho(J) \sqrt{n}}{2\sigma} \big)$, where z is a multivariate normal vector with covariance matrix Q.

Proposition 7 (Consistency) Assume that y = Xw* + σε, where ε is a standard multivariate normal vector. Let $Q = \frac{1}{n} X^\top X \in \mathbb{R}^{p \times p}$. Denote by J the smallest stable set containing the support Supp(w*) of w*. Assume that for all ∆ such that $\Omega^J(\Delta_{J^c}) \leq 3\, \Omega_J(\Delta_J)$, we have $\Delta^\top Q \Delta \geq \kappa \|\Delta_J\|_2^2$. Then we have $\Omega(\hat{w} - w^*) \leq \frac{24\, c(J)^2 \lambda}{\kappa\, \rho(J)^2}$ and $\frac{1}{n} \|X\hat{w} - Xw^*\|_2^2 \leq \frac{36\, c(J)^2 \lambda^2}{\kappa\, \rho(J)^2}$, with probability larger than $1 - P\big( \Omega^*(z) > \frac{\lambda \rho(J) \sqrt{n}}{2\sigma} \big)$, where z is a multivariate normal vector with covariance matrix Q.

Proposition 8 (Concentration inequalities) Let z be a normal variable with covariance matrix Q. Let T be the set of stable inseparable sets. Then $P(\Omega^*(z) > t) \leq \sum_{A \in \mathcal{T}} 2^{|A|} \exp\big( -\frac{t^2 F(A)^2 / 2}{1^\top Q_{AA} 1} \big)$.

7 Experiments

We provide illustrations of some of the results presented in the paper on toy examples. We consider the regularized least-squares problem of Eq. (3), with data generated as follows: given p, n, k, the design matrix X ∈ R^{n×p} is a matrix of i.i.d. Gaussian components, normalized to have unit ℓ2-norm columns.
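The full synthetic protocol of Section 7 (unit-norm columns, a random k-sparse Gaussian weight vector, and noise scaled to unit signal-to-noise ratio) can be sketched as follows; this is our reconstruction, not the author's code:

```python
import numpy as np

def make_data(n, p, k, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    X /= np.linalg.norm(X, axis=0)          # unit l2-norm columns
    J = rng.choice(p, size=k, replace=False) # random support of cardinality k
    w_star = np.zeros(p)
    w_star[J] = rng.standard_normal(k)
    eps = rng.standard_normal(n)
    # y = X w* + n^{-1/2} ||X w*||_2 * eps: unit signal-to-noise ratio
    y = X @ w_star + n ** -0.5 * np.linalg.norm(X @ w_star) * eps
    return X, y, w_star

X, y, w_star = make_data(n=20, p=120, k=40)  # the paper's high-dimensional setting
assert np.allclose(np.linalg.norm(X, axis=0), 1.0)
assert np.count_nonzero(w_star) == 40
```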
A set J of cardinality k is chosen at random, the weights w*_J are sampled from a standard multivariate Gaussian distribution, and w*_{J^c} = 0. We then take $y = Xw^* + n^{-1/2} \|Xw^*\|_2\, \varepsilon$, where ε is a standard Gaussian vector (this corresponds to a unit signal-to-noise ratio).

Proximal methods vs. subgradient descent. For the submodular function F(A) = |A|^{1/2} (a simple submodular function beyond the cardinality), we compare the three optimization algorithms described in Section 5 (subgradient descent and two proximal methods, ISTA and its accelerated version FISTA [23]) for p = n = 1000, k = 100 and λ = 0.1. Other settings and other set-functions lead to results similar to the ones presented in Figure 4: FISTA is faster than ISTA, and much faster than subgradient descent.

Relaxation of the combinatorial optimization problem. We compare three strategies for solving the combinatorial optimization problem $\min_{w \in \mathbb{R}^p} \frac{1}{2n} \|y - Xw\|_2^2 + \lambda F(\mathrm{Supp}(w))$ with $F(A) = \mathrm{tr}(X_A^\top X_A)^{1/2}$: the approach based on our sparsity-inducing norms, the simpler greedy (forward selection) approach proposed in [8, 3], and thresholding of the ordinary least-squares estimate. For all methods, we try all possible regularization parameters. We see in the right plots of Figure 4 that for hard cases (middle plot) convex optimization techniques perform better than the other approaches, while for easier cases with more observations (right plot), they do as well as greedy approaches.

Figure 4: (Left) Comparison of iterative optimization algorithms (value of objective function vs. running time, for FISTA, ISTA and subgradient descent). (Middle/Right) Relaxation of the combinatorial optimization problem, showing residual error $\frac{1}{n}\|y - X\hat{w}\|_2^2$ vs. penalty F(Supp(ŵ)) for thresholded OLS, greedy and submodular approaches: (middle) high-dimensional case (p = 120, n = 20, k = 40), (right) lower-dimensional case (p = 120, n = 120, k = 40).

Table 1: Normalized mean-square prediction errors ∥Xŵ − Xw*∥²₂/n (multiplied by 100) with optimal regularization parameters (averaged over 50 replications, with standard deviations divided by √50). The performance of the submodular method is shown; then the differences from all other methods to this one are computed, and shown in bold when they are significantly greater than zero, as measured by a paired t-test at level 5% (i.e., when the submodular method is significantly better).

p    n    k  | submodular  | ℓ2 vs. submod. | ℓ1 vs. submod. | greedy vs. submod.
120  120  80 | 40.8 ± 0.8  | -2.6 ± 0.5     |  0.6 ± 0.0     | 21.8 ± 0.9
120  120  40 | 35.9 ± 0.8  |  2.4 ± 0.4     |  0.3 ± 0.0     | 15.8 ± 1.0
120  120  20 | 29.0 ± 1.0  |  9.4 ± 0.5     | -0.1 ± 0.0     |  6.7 ± 0.9
120  120  10 | 20.4 ± 1.0  | 17.5 ± 0.5     | -0.2 ± 0.0     | -2.8 ± 0.8
120  120   6 | 15.4 ± 0.9  | 22.7 ± 0.5     | -0.2 ± 0.0     | -5.3 ± 0.8
120  120   4 | 11.7 ± 0.9  | 26.3 ± 0.5     | -0.1 ± 0.0     | -6.0 ± 0.8
120   20  80 | 46.8 ± 2.1  | -0.6 ± 0.5     |  3.0 ± 0.9     | 22.9 ± 2.3
120   20  40 | 47.9 ± 1.9  | -0.3 ± 0.5     |  3.5 ± 0.9     | 23.7 ± 2.0
120   20  20 | 49.4 ± 2.0  |  0.4 ± 0.5     |  2.2 ± 0.8     | 23.5 ± 2.1
120   20  10 | 49.2 ± 2.0  |  0.0 ± 0.6     |  1.0 ± 0.8     | 20.3 ± 2.6
120   20   6 | 43.5 ± 2.0  |  3.5 ± 0.8     |  0.9 ± 0.6     | 24.4 ± 3.0
120   20   4 | 41.0 ± 2.1  |  4.8 ± 0.7     | -1.3 ± 0.5     | 25.1 ± 3.5

Non-factorial priors for variable selection. We now focus on predictive performance and compare our new norm with $F(A) = \mathrm{tr}(X_A^\top X_A)^{1/2}$ to greedy approaches [3] and to regularization by the ℓ1 or ℓ2 norms. As shown in Table 1, the new norm based on non-factorial priors is more robust than the ℓ1-norm to a lower number of observations n and to a larger cardinality of support k.

8 Conclusions

We have presented a family of sparsity-inducing norms dedicated to incorporating prior knowledge or structural constraints on the support of linear predictors.
We have provided a set of common algorithms and theoretical results, as well as simulations on synthetic examples illustrating the good behavior of these norms. Several avenues are worth investigating: first, we could follow current practice in sparse methods, e.g., by considering related adapted concave penalties to enhance sparsity-inducing norms, or by extending some of the concepts for norms of matrices, with potential applications in matrix factorization or multi-task learning (see, e.g., [27] for application of submodular functions to dictionary learning). Second, links between submodularity and sparsity could be studied further, in particular by considering submodular relaxations of other combinatorial functions, or studying links with other polyhedral norms such as the total variation, which are known to be similarly associated with symmetric submodular set-functions such as graph cuts [24]. Acknowledgements. This paper was partially supported by the Agence Nationale de la Recherche (MGA Project) and the European Research Council (SIERRA Project). The author would like to thank Edouard Grave, Rodolphe Jenatton, Armand Joulin, Julien Mairal and Guillaume Obozinski for discussions related to this work. 8 References [1] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A):3468–3497, 2009. [2] R. Jenatton, J.Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, arXiv:0904.3523, 2009. [3] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proc. ICML, 2009. [4] L. Jacob, G. Obozinski, and J.-P. Vert. Group Lasso with overlaps and graph Lasso. In Proc. ICML, 2009. [5] S. Kim and E. Xing. Tree-guided group Lasso for multi-task regression with structured sparsity. In Proc. ICML, 2010. [6] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In Proc. 
ICML, 2010. [7] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. In Adv. NIPS, 2010. [8] J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. IEEE Transactions on Information Theory, 52(9):4036–4048, 2006. [9] F. Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical Report 00527714, HAL, 2010. [10] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proc. UAI, 2005. [11] Y. Kawahara, K. Nagano, K. Tsuda, and J.A. Bilmes. Submodularity cuts and applications. In Adv. NIPS, 2009. [12] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005. [13] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In Combinatorial optimization - Eureka, you shrink!, pages 11–26. Springer, 2003. [14] S. Negahban and M. J. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of ℓ1-ℓ∞-regularization. In Adv. NIPS, 2008. [15] F. Bach. Structured sparsity-inducing norms through submodular functions. Technical Report 00511310, HAL, 2010. [16] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004. [17] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In Proc. AISTATS, 2009. [18] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. J. Roy. Stat. Soc. B, 67(1):91–108, 2005. [19] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge Univ. Press, 1990. [20] T. Ando. Concavity of certain maps on positive definite matrices and applications to Hadamard products. Linear Algebra and its Applications, 26:203–241, 1979. [21] C. L. Mallows. Some comments on Cp. Technometrics, 15(4):661–675, 1973. [22] D. Wipf and S. Nagarajan. Sparse estimation using general likelihoods and non-factorial priors. In Adv. NIPS, 2009. [23] A. Beck and M. Teboulle.
A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009. [24] A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3):288–307, 2009. [25] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Adv. NIPS, 2009. [26] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006. [27] A. Krause and V. Cevher. Submodular dictionary selection for sparse representation. In Proc. ICML, 2010.
Emergence of Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for Computing Object Approaches: Dynamics, Peaks, & Fits Matthias S. Keil∗ Department of Basic Psychology University of Barcelona E-08035 Barcelona, Spain matskeil@ub.edu Abstract Many species show avoidance reactions in response to looming object approaches. In locusts, the corresponding escape behavior correlates with the activity of the lobula giant movement detector (LGMD) neuron. During an object approach, its firing rate was reported to increase gradually until a peak is reached, and then to decline quickly. The η-function predicts that the LGMD activity is a product between an exponential function of angular size exp(−Θ) and angular velocity ˙Θ, and that peak activity is reached before time-to-contact (ttc). The η-function has become the prevailing LGMD model because it reproduces many experimental observations, and even experimental evidence for the multiplicative operation was reported. Several inconsistencies remain unresolved, though. Here we address these issues with a new model (ψ-model), which explicitly connects Θ and ˙Θ to biophysical quantities. The ψ-model avoids biophysical problems associated with implementing exp(·), implements the multiplicative operation of η via divisive inhibition, and explains why activity peaks could occur after ttc. It consistently predicts response features of the LGMD, and provides excellent fits to published experimental data, with goodness-of-fit measures comparable to corresponding fits with the η-function. 1 Introduction: τ and η Collision-sensitive neurons were reported in species as different as monkeys [5, 4], pigeons [36, 34], frogs [16, 20], and insects [33, 26, 27, 10, 38]. This indicates a high ecological relevance, and raises the question of how neurons compute a signal that eventually triggers corresponding movement patterns (e.g. escape behavior or interceptive actions). Here, we will focus on visual stimulation.
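The peak property quoted in the abstract is easy to verify numerically. The sketch below is our own illustration (the values of l, v, tc and α are arbitrary choices, not values from the paper): for η = ˙Θ exp(−αΘ) under the looming geometry Θ(t) = 2 arctan[l/(x0 − vt)] used later in the paper, the peak falls at visual angle 2 arctan(1/α), i.e. at t̂ = tc − αl/|v|, before contact.

```python
import numpy as np

# Numerical check (our own sketch, illustrative parameters) that the
# eta-function peaks before time-to-contact, at angle 2*arctan(1/alpha).
l, v, alpha = 0.04, 2.0, 3.0       # half-size [m], approach speed [m/s], const.
tc = 0.5                           # time of contact [s]
x0 = v * tc                        # starting distance so that contact is at tc

t = np.arange(0.0, tc, 1e-5)       # dense time grid up to (but excluding) tc
x = x0 - v * t                     # remaining distance to the eye
theta = 2.0 * np.arctan(l / x)     # visual angle [rad]
theta_dot = 2.0 * l * v / (x**2 + l**2)   # angular velocity [rad/s]
eta = theta_dot * np.exp(-alpha * theta)

t_peak = t[np.argmax(eta)]
print(t_peak, tc - alpha * l / v)  # peak lies close to tc - alpha*l/|v|
```

Shrinking α moves the peak toward contact linearly in l/|v|, which is the linearity property the paper returns to in section 3.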
Consider, for simplicity, a circular object (diameter 2l), which approaches the eye on a collision course with constant velocity v. If we do not have any a priori knowledge about the object in question (e.g. its typical size or speed), then we will be able to access only two information sources. These information sources can be measured at the retina and are called optical variables (OVs). The first is the visual angle Θ, which can be derived from the number of stimulated photoreceptors (spatial contrast). The second is its rate of change dΘ(t)/dt ≡ ˙Θ(t). Angular velocity ˙Θ is related to temporal contrast. How should we combine Θ and ˙Θ in order to track an imminent collision? The perhaps simplest combination is τ(t) ≡ Θ(t)/˙Θ(t) [13, 18]. If the object hits us at time tc, then τ(t) ≈ tc − t will give us a running estimate of the time that is left until contact1. (∗Also: www.ir3c.ub.edu, Research Institute for Brain, Cognition, and Behaviour (IR3C), Edifici de Ponent, Campus Mundet, Universitat de Barcelona, Passeig Vall d'Hebron, 171, E-08035 Barcelona.) Moreover, we do not need to know anything about the approaching object: The ttc estimation computed by τ is practically independent of object size and velocity. Neurons with τ-like responses were indeed identified in the nucleus rotundus of the pigeon brain [34]. In humans, only fast interceptive actions seem to rely exclusively on τ [37, 35]. Accurate ttc estimation, however, seems to involve further mechanisms (rate of disparity change [31]). Another function of OVs with biological relevance is η ≡ ˙Θ exp(−αΘ), with α = const. [10]. While η-type neurons were found again in pigeons [34] and bullfrogs [20], most data were gathered from the LGMD2 in locusts (e.g. [10, 9, 7, 23]). The η-function is a phenomenological model for the LGMD, and implies three principal hypotheses: (i) An implementation of an exponential function exp(·).
Exponentiation is thought to take place in the LGMD axon, via active membrane conductances [8]. Experimental data, though, seem to favor a third-power law rather than exp(·). (ii) The LGMD carries out biophysical computations for implementing the multiplicative operation. It has been suggested that multiplication is done within the LGMD itself, by subtracting the logarithmically encoded variables log ˙Θ − αΘ [10, 8]. (iii) The peak of the η-function occurs before ttc, at visual angle Θ(ˆt) = 2 arctan(1/α) [9]. The peak can nevertheless follow ttc for certain stimulus configurations (e.g. l/|v| ⪅ 5 ms). In principle, ˆt > tc can be accounted for by η(t + δ) with a fixed delay δ < 0 (e.g. −27 ms). But other researchers observed that LGMD activity continues to rise after ttc even for l/|v| ⪆ 5 ms [28]. These discrepancies remain unexplained so far [29], but stimulation dynamics perhaps plays a role. We will address these three issues by comparing the novel function “ψ” with the η-function.

2 LGMD computations with the ψ-function: No multiplication, no exponentiation

A circular object which starts its approach at distance x0 and with speed v projects a visual angle Θ(t) = 2 arctan[l/(x0 − vt)] on the retina [34, 9]. The kinematics is hence entirely specified by the half-size-to-velocity ratio l/|v|, and x0. Furthermore, ˙Θ(t) = 2lv/((x0 − vt)^2 + l^2). In order to define ψ, we first consider the LGMD neuron as an RC circuit with membrane potential3 V [17]:

Cm dV/dt = β (Vrest − V) + gexc (Vexc − V) + ginh (Vinh − V)   (1)

Cm = membrane capacity4; β ≡ 1/Rm denotes leakage conductance across the cell membrane (Rm: membrane resistance); gexc and ginh are excitatory and inhibitory inputs. Each conductance gi (i = exc, inh) can drive the membrane potential to its associated reversal potential Vi (usually Vinh ≤ Vexc). Shunting inhibition means Vinh = Vrest. Shunting inhibition lurks “silently” because it becomes effective only if the neuron is driven away from its resting potential.
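Equation 1 is a linear ODE and integrates with a plain forward-Euler step. The following sketch is our own illustration with arbitrary unit parameters (only Cm = 1 follows the paper's simulations); it shows the shunting property: with Vinh = Vrest, the inhibitory conductance injects no current at rest, yet it divisively pulls the steady state back toward Vrest.

```python
# Forward-Euler sketch of the RC circuit in equation (1); parameter values
# are arbitrary illustrations (Cm = 1 as in the paper's simulations).
Cm, beta = 1.0, 1.0                    # membrane capacity, leak conductance
V_rest, V_exc, V_inh = 0.0, 1.0, 0.0   # shunting inhibition: V_inh = V_rest
g_exc, g_inh = 0.5, 2.0                # constant synaptic conductances
dt, n_steps = 1e-3, 10_000

V = V_rest
for _ in range(n_steps):
    dV = (beta * (V_rest - V) + g_exc * (V_exc - V) + g_inh * (V_inh - V)) / Cm
    V += dt * dV

# Conductance-weighted steady state: shunting g_inh adds no current of its
# own at V = V_rest, but enlarges the denominator, i.e. it acts divisively.
V_eq = (beta * V_rest + g_exc * V_exc + g_inh * V_inh) / (beta + g_exc + g_inh)
print(V, V_eq)   # without g_inh the steady state would be g_exc/(beta+g_exc)
```

This is exactly the divisive-inhibition mechanism the abstract credits for replacing η's explicit multiplication.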
With synaptic input, the neuron decays into its equilibrium state

V∞ ≡ (Vrest β + Vexc gexc + Vinh ginh) / (β + gexc + ginh)   (2)

according to V(t) = V∞(1 − exp(−t/τm)). Without external input, V(t ≫ 1) → Vrest. The time scale is set by τm. Without synaptic input, τm ≡ Cm/β. Slowly varying inputs gexc, ginh > 0 modify the time scale to approximately τm/(1 + (gexc + ginh)/β). For highly dynamic inputs, such as in the late phase of the object approach, the time scale gets dynamical as well. The ψ-model assigns synaptic inputs5

gexc(t) = ˙ϑ(t),  ˙ϑ(t) = ζ1 ˙ϑ(t − ∆tstim) + (1 − ζ1) ˙Θ(t)   (3a)
ginh(t) = [γϑ(t)]^e,  ϑ(t) = ζ0 ϑ(t − ∆tstim) + (1 − ζ0) Θ(t)   (3b)

Footnotes: 1 This linear approximation gets worse with increasing Θ, but turns out to work well until shortly before ttc (τ adopts a minimum at tc − 0.428978 · l/|v|). 2 LGMD activity is usually monitored via its postsynaptic neuron, the Descending Contralateral Movement Detector (DCMD) neuron. This represents no problem, as LGMD spikes follow DCMD spikes 1:1 under visual stimulation [22] from 300 Hz [21] to at least 400 Hz [24]. 3 Here we assume that the membrane potential serves as a predictor for the LGMD's mean firing rate. 4 Set to unity for all simulations. 5 LGMD also receives inhibition from a laterally acting network [21]. The η-function considers only direct feedforward inhibition [22, 6], and so do we.

Figure 1 (panel (a) “discretized optical variables”: time [ms] vs. log Θ(t), Θ ∈ [7.63°, 180.00°[, temporal resolution ∆tstim = 1.0 ms; panel (b) “ψ versus η”: time [ms] vs. amplitude, with l/|v| = 20.00 ms, β = 1.00, γ = 7.50, e = 3.00, ζ0 = 0.90, ζ1 = 0.99, nrelax = 25): (a) The continuous visual angle of an approaching object is shown along with its discretized version. Discretization transforms angular velocity from a continuous variable into a series of “spikes” (rescaled). (b) The ψ function with the inputs shown in (a), with nrelax = 25 relaxation time steps. Its peak occurs tmax = 56 ms before ttc (tc = 300 ms). An η function (α = 3.29) that was fitted to ψ shows good agreement. For continuous optical variables, the peak would occur 4 ms earlier, and η would have α = 4.44 with R² = 1. For nrelax = 10, ψ is farther away from its equilibrium at V∞, and its peak moves 19 ms closer to ttc.

Figure 2 (panels (a) “different nrelax” and (b) “different ∆tstim”; axes l/|v| [ms] vs. tmax [ms]; tc = 500 ms, diameter 12.0 cm, ∆tstim = 1.00 ms, dt = 10.00 µs; line fits in (a): nrelax = 50 ⇝ α = 4.66, R² = 0.99; nrelax = 25 ⇝ α = 3.91, R² = 1.00; nrelax = 0 ⇝ α = 1.15, R² = 0.99): The figures plot the relative time tmax ≡ tc − ˆt of the response peak of ψ, V(ˆt), as a function of the half-size-to-velocity ratio (points). Line fits with slope α and intercept δ were added (lines). The predicted linear relationship in all cases is consistent with experimental evidence [9]. (a) The stimulus time scale is held constant at ∆tstim = 1 ms, and several LGMD time scales are defined by nrelax (= number of intercalated relaxation steps for each integration time step). Bigger values of nrelax move V(t) closer to its equilibrium V∞(t), implying higher slopes α in turn. (b) The LGMD time scale is fixed at nrelax = 25, and ∆tstim is manipulated. Because of the discretization of optical variables (OVs) in our simulation, increasing ∆tstim translates to an overall smaller number of jumps in OVs, but each with higher amplitude.

Thus, we say ψ(t) ≡ V(t) if and only if gexc and ginh are defined as in equation 3. The time scale of stimulation is defined by ∆tstim (by default 1 ms). The variables ϑ and ˙ϑ are lowpass filtered angular size and rate of expansion, respectively.
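A minimal sketch (our own) of discretized optical variables driving the recursions ϑ(t) = ζ0ϑ(t − ∆tstim) + (1 − ζ0)Θ(t) and ˙ϑ(t) = ζ1˙ϑ(t − ∆tstim) + (1 − ζ1)˙Θ(t); the memory constants match the values quoted in figure 1, while the object geometry (l, v, tc) is an illustrative choice giving l/|v| = 20 ms as in that figure:

```python
import numpy as np

# Sketch (ours) of discretized optical variables feeding the recursive
# lowpass filters of equations (3a)/(3b). Geometry is illustrative;
# zeta0/zeta1 follow the values quoted in figure 1.
l, v = 0.04, 2.0                  # half-size [m], speed [m/s] -> l/|v| = 20 ms
tc = 0.3                          # contact time [s]
x0 = v * tc
dt_stim = 1e-3                    # stimulus time scale, Delta t_stim = 1 ms
zeta0, zeta1 = 0.90, 0.99         # memory constants (no filtering if zero)

t = np.arange(0.0, tc, dt_stim)
theta = np.degrees(2.0 * np.arctan(l / (x0 - v * t)))   # visual angle [deg]
theta_disc = np.floor(theta)                            # discrete jumps
theta_dot = np.diff(theta_disc, prepend=theta_disc[0]) / dt_stim  # "spikes"

# recursive lowpass filtering of angular size and rate of expansion
vartheta = np.zeros_like(theta_disc)
vartheta_dot = np.zeros_like(theta_dot)
for i in range(1, len(t)):
    vartheta[i] = zeta0 * vartheta[i - 1] + (1 - zeta0) * theta_disc[i]
    vartheta_dot[i] = zeta1 * vartheta_dot[i - 1] + (1 - zeta1) * theta_dot[i]
```

With these values Θ starts at about 7.6°, consistent with the range Θ ∈ [7.63°, 180°[ quoted in figure 1, and the floor-based quantization stands in for the screen/ommatidia discretization described in section 3.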
The amount of filtering is defined by memory constants ζ0 and ζ1 (no filtering if zero). The idea is to continue generating synaptic input after ttc, where Θ(t > tc) = const and thus ˙Θ(t > tc) = 0. Inhibition is first weighted by γ, and then potentiated by the exponent e. Hodgkin-Huxley potentiates gating variables n, m ∈ [0, 1] instead (potassium ∝ n^4, sodium ∝ m^3, [12]) and multiplies them with conductances. Gabbiani and co-workers found that the function which transforms membrane potential to firing rate is better described by a power function with e = 3 than by exp(·) (Figure 4d in [8]).

3 Dynamics of the ψ-function

Discretization. In a typical experiment, a monitor is placed a short distance away from the insect's eye, and an approaching object is displayed. Computer screens have a fixed spatial resolution, and as a consequence size increments of the displayed object proceed in discrete jumps. The locust retina is furthermore composed of a discrete array of ommatidia units. We can therefore expect a corresponding step-wise increment of Θ with time, although optical and neuronal filtering may smooth Θ to some extent again, resulting in ϑ (figure 1). Discretization renders ˙Θ discontinuous, which again will be alleviated in ˙ϑ. For simulating the dynamics of ψ, we discretized angular size with floor(Θ), and ˙Θ(t) ≈ [Θ(t + ∆tstim) − Θ(t)]/∆tstim. Discretized optical variables (OVs) were re-normalized to match the range of the original (i.e. continuous) OVs. To peak, or not to peak? Rind & Simmons reject the hypothesis that the activity peak signals impending collision on the grounds of two arguments [28]: (i) If Θ(t + ∆tstim) − Θ(t) ⪆ 3° in consecutively displayed stimulus frames, the illusion of an object approach would be lost. Such stimulation would rather be perceived as a sequence of rapidly appearing (but static) objects, causing reduced responses.
(ii) After the last stimulation frame has been displayed (that is, Θ = const), LGMD responses keep on building up beyond ttc. This behavior clearly depends on l/|v|, also according to their own data (e.g. Figure 4 in [26]): Response build-up after ttc is typically observed for sufficiently small values of l/|v|. Input into ψ in situations where Θ = const and ˙Θ = 0 is accommodated by ϑ and ˙ϑ, respectively. We simulated (i) by setting ∆tstim = 5 ms, thus producing larger and more infrequent jumps in discrete OVs than with ∆tstim = 1 ms (default). As a consequence, ϑ(t) grows more slowly (delayed build-up of inhibition), and the peak occurs later (tmax ≡ tc − ˆt = 10 ms with everything else identical to figure 1b). The peak amplitude ˆV = V(ˆt) decreases nearly sixfold with respect to the default. Our model thus predicts the reduced responses observed by Rind & Simmons [28]. Linearity. The time of peak firing rate is linearly related to l/|v| [10, 9]. The η-function is consistent with this experimental evidence: ˆt = tc − αl/|v| + δ (e.g. α = 4.7, δ = −27 ms). The ψ-function reproduces this relationship as well (figure 2), where α depends critically on the time scale of biophysical processes in the LGMD. We studied the impact of this time scale by choosing 10 µs for the numerical integration of equation 1 (algorithm: 4th-order Runge-Kutta). Apart from improving the numerical stability of the integration algorithm, ψ is far from its equilibrium V∞(t) in every moment t, given the stimulation time scale ∆tstim = 1 ms6. Now, at each value of Θ(t) and ˙Θ(t), respectively, we intercalated nrelax iterations for integrating ψ. Each iteration takes V(t) asymptotically closer to V∞(t), and V(t) → V∞(t) for nrelax ≫ 1. If the internal processes in the LGMD cannot keep up with stimulation (nrelax = 0), we obtain slope values that underestimate the experimentally found values (figure 2a).
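The role of the intercalated relaxation steps can be isolated in a few lines (our own illustration, with the synaptic inputs frozen at one stimulus frame and arbitrary conductance values): every additional Euler iteration pulls V closer to the equilibrium V∞ of equation 2, which is the regime in which the fitted slope α comes out near its experimental value.

```python
# Isolated sketch (ours) of the intercalated-relaxation idea: with inputs
# frozen, each extra Euler iteration pulls V(t) closer to the equilibrium
# V_inf of equation (2). All parameter values are illustrative.
beta, g_exc, g_inh = 1.0, 2.0, 4.0
V_rest, V_exc, V_inh = 0.0, 1.0, -0.001
dt = 1e-2                                  # integration time step

V_inf = (beta * V_rest + g_exc * V_exc + g_inh * V_inh) / (beta + g_exc + g_inh)

def relax(V, n_relax):
    """1 + n_relax forward-Euler iterations with frozen synaptic inputs."""
    for _ in range(1 + n_relax):
        V += dt * (beta * (V_rest - V) + g_exc * (V_exc - V) + g_inh * (V_inh - V))
    return V

err_0 = abs(relax(0.0, 0) - V_inf)     # n_relax = 0: still far from V_inf
err_25 = abs(relax(0.0, 25) - V_inf)   # n_relax = 25: much closer
print(err_0, err_25)
```

Because the error shrinks by a constant factor per iteration, the choice of nrelax sets how closely V(t) shadows V∞(t) during stimulation, mirroring the slope differences in figure 2a.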
In contrast, for nrelax ⪆ 25 we get an excellent agreement with the experimentally determined α. This means that – under the reported experimental stimulation conditions (e.g. [9]) – the LGMD would operate relatively close to its steady state7. Now we fix nrelax at 25 and manipulate ∆tstim instead (figure 2b). The default value ∆tstim = 1 ms corresponds to α = 3.91. Slightly bigger values of ∆tstim (2.5 ms and 5 ms) underestimate the experimental α. In addition, the line fits then also return smaller intercept values. We see tmax < 0 up to l/|v| ≈ 13.5 ms – LGMD activity peaks after ttc! Or, in other words, LGMD activity continues to increase after ttc. In the limit, where stimulus dynamics is extremely fast, and LGMD processes are kept far from equilibrium at each instant of the approach, α gets very small. As a consequence, tmax gets largely independent of l/|v|: the activity peak would cling to tmax although we varied l/|v|.

4 Freeze! Experimental data versus steady state of “psi”

In the previous section, experimentally plausible values for α were obtained if ψ is close to equilibrium at each instant of time during stimulation. In this section we will thus introduce a steady-state version of ψ (i.e. equation 2 with Vrest = 0, Vexc = 1, and equations 3 plugged in),

ψ∞(t) ≡ ( ˙Θ(t) + Vinh [γΘ(t)]^e ) / ( β + ˙Θ(t) + [γΘ(t)]^e )   (4)

(here we use continuous versions of angular size and rate of expansion). The ψ∞-function makes life easier when it comes to fitting experimental data. However, it has its limitations, because we brushed the whole dynamics of ψ under the carpet. Figure 3 illustrates how the linear relationship (= “linearity”) between tmax ≡ tc − ˆt and l/|v| is influenced by changes in parameter values. Changing any of the values of e, β, γ predominantly causes variation in line slopes. The smallest slope changes are obtained by varying Vinh (data not shown; we checked Vinh = 0, −0.001, −0.01, −0.1). For Vinh ⪅ −0.01, linearity gets slightly compromised, as the slope increases with l/|v| (e.g. Vinh = −1 ⇝ α ∈ [4.2, 4.7]). In order to get a notion of how well the shape of ψ∞(t) matches η(t), we computed time-averaged difference measures between normalized versions of both functions (details: figures 3 & 4). Bigger values of β match η better at smaller, but worse at bigger, values of l/|v| (figure 4a). Smaller β cause less variation across l/|v|. As to variation of e, overall curve shapes seem to be best aligned with e = 3 to e = 4 (figure 4b). Furthermore, better matches between ψ∞(t) and η(t) correspond to bigger values of γ (figure 4c). And finally, Vinh marches again to a different tune (data not shown). Vinh = −0.1 leads to the best agreement (≈ 0.04 across l/|v|) of all Vinh, quite different from the other considered values. For the rest, ψ∞(t) and η(t) align the same (all have maximum 0.094), despite covering different orders of magnitude with Vinh = 0, −0.001, −0.01.

Footnotes: 6 Assuming one ∆tstim for each integration time step. This means that by default stimulation and biophysical dynamics will proceed at identical time scales. 7 Notice that at this moment we can only make relative statements – we do not have data at hand for defining absolute time scales.

Figure 3 (axes l/|v| [ms] vs. tmax [ms]; tc = 500 ms, v = 2.00 m/s; panels: (a) β varies (β = 10, 5, 2.5, 1; norm. |η − ψ∞| = 0.020...0.128; norm. rmse = 0.058...0.153; correlation (β, α) = −0.90, n = 4), (b) e varies (e = 5, 4, 3, 2.5; norm. |η − ψ∞| = 0.009...0.114; norm. rmse = 0.014...0.160; correlation (e, α) = 0.98, n = 4), (c) γ varies (γ = 5, 2.5, 1, 0.5, 0.25; norm. |η − ψ∞| = 0.043...0.241; norm. rmse = 0.085...0.315; correlation (γ, α) = 1.00, n = 5); fixed values where not varied: β = 2.50, γ = 3.50, e = 3.00, Vinh = −0.001): Each curve shows how the peak ˆψ∞ ≡ ψ∞(ˆt) depends on the half-size-to-velocity ratio. In each display, one parameter of ψ∞ is varied (legend), while the others are held constant (figure title). Line slopes vary according to parameter values. Symbol sizes are scaled according to rmse (see also figure 4). Rmse was calculated between normalized ψ∞(t) & normalized η(t) (i.e. both functions ∈ [0, 1], with original minimum and maximum indicated by the textbox). To this end, the peak of the η-function was placed at tc, by choosing, at each parameter value, α = |v| · (tc − ˆt)/l (for determining correlation, the mean value of α was taken across l/|v|).

Figure 4 (panels as in figure 3: (a) β varies, (b) e varies, (c) γ varies; axes l/|v| [ms] vs. mean_t |η(t) − ψ∞(t)| for normalized η, ψ∞): This figure complements figure 3. It visualizes the time-averaged absolute difference between normalized ψ∞(t) & normalized η(t). For η, its value of α was chosen such that the maxima of both functions coincide. Although not being a fit, it gives a rough estimate of how the shapes of both curves deviate from each other. The maximum possible difference would be one.

Figure 5 (panels: (a) ˙Θ = 126°/s, (b) ˙Θ = 63°/s): The original data (legend label “HaGaLa95”) were resampled from ref. [10] and show DCMD responses to an object approach with ˙Θ = const. Thus, Θ increases linearly with time. The η-function (fitting function: Aη(t + δ) + o) and ψ∞ (fitting function: Aψ∞(t) + o) were fitted to these data: (a) (Figure 3 Di in [10]) Good fits for ψ∞ are obtained with e = 5 or higher (e = 3 ⇝ R² = 0.35 and rmse = 0.644; e = 4 ⇝ R² = 0.45 and rmse = 0.592). “Psi” adopts a sigmoid-like curve form which (subjectively) appears to fit the original data better than η. (b) (Figure 3 Dii in [10]) “Psi” yields an excellent fit for e = 3.

Figure 6 (panels: (a) spike trace, (b) α versus β; legend details of (a): stimulus Θ(t) with l/|v| = 30 ms, ttc = 5.00 s, tmax = 107 ms; ψ∞ fit: R² = 0.95, rmse = 0.004, 3 coefficients ⇝ β = 2.22, γ = 0.70, e = 3.00, Vinh = −0.001, A = 0.07, o = 0.02, δ = 0.00 ms; η fit: R² = 1.00, rmse = 0.001 ⇝ α = 3.30, A = 0.08, o = 0.0, δ = −10.5 ms): (a) DCMD activity in response to a black square (l/|v| = 30 ms, legend label “e011pos14”, ref. [30]) approaching the eye center of a gregarious locust (final visual angle 50°). Data show the first stimulation, so habituation is minimal. The spike trace (sampled at 10^4 Hz) was full-wave rectified, lowpass filtered, and sub-sampled to 1 ms resolution. Firing rate was estimated with Savitzky-Golay filtering (“sgolay”). The fits of the η-function (Aη(t + δ) + o; 4 coefficients) and the ψ∞-function (Aψ∞(t) with fixed e, o, δ, Vinh; 3 coefficients) both provide excellent fits to firing rate. (b) Fitting coefficient α (→ η-function) inversely correlates with β (→ ψ∞) when fitting firing rates of another 5 trials as just described (continuous line = line fit to the data points). Similar correlation values would be obtained if e is fixed at values e = 2.5, 4, 5 ⇝ c = −0.95, −0.96, −0.91. If o was determined by the fitting algorithm, then c = −0.70. No clear correlations with α were obtained for γ.

Decelerating approach. Hatsopoulos et al.
[10] recorded DCMD activity in response to an approaching object which projected image edges on the retina moving at constant velocity: ˙Θ = const. implies Θ(t) = Θ0 + ˙Θt. This “linear approach” is perceived as if the object were getting increasingly slower. But what appears to be a relatively unnatural movement pattern serves as a test for the functions η & ψ∞. Figure 5 illustrates that ψ∞ passes the test, and consistently predicts that activity sharply rises in the initial approach phase, and subsequently declines (η passed this test already in the year 1995). Spike traces. We re-sampled about 30 curves obtained from LGMD recordings from a variety of publications, and fitted η- & ψ∞-functions. We cannot show the results here, but in terms of goodness-of-fit measures, both functions are in the same ballpark. Instead, figure 6a shows a representative example [30]. When α and β are plotted against each other for five trials, we see a strong inverse correlation (figure 6b). Although five data points are by no means a firm statistical sample, the strong correlation could indicate that β and α play similar roles in both functions. Biophysically, β is the leakage conductance, which determines the (passive) membrane time constant τm ∝ 1/β of the neuron. Voltage drops within τm to exp(−1) times its initial value. Bigger values of β mean shorter τm (i.e., “faster neurons”). Getting back to η, this would suggest α ∝ τm, such that higher (absolute) values for α would possibly indicate a slower dynamic of the underlying processes.

5 Discussion (“The Good, the Bad, and the Ugly”)

Up to now, mainly two classes of LGMD models existed: the phenomenological η-function on the one hand, and computational models with neuronal layers presynaptic to the LGMD on the other (e.g. [25, 15]; real-world video sequences & robotics: e.g. [3, 14, 32, 2]).
Computational models predict that LGMD response features originate from excitatory and inhibitory interactions in – and between – presynaptic neuronal layers. Put differently, non-linear operations are generated in the presynaptic network, and can be a function of many (model) parameters (e.g. synaptic weights, time constants, etc.). In contrast, the η-function assigns concrete nonlinear operations to the LGMD [7]. The η-function is accessible to mathematical analysis, whereas computational models have to be probed with videos or artificial stimulus sequences. The η-function is vague about biophysical parameters, whereas (good) computational models need to be precise at each (model) parameter value. The η-function establishes a clear link between physical stimulus attributes and LGMD activity: it postulates what is to be computed from the optical variables (OVs). But in computational models, such a clear understanding of LGMD inputs cannot always be expected: presynaptic processing may strongly transform OVs. The ψ-function thus represents an intermediate model class: it takes OVs as input, and connects them with biophysical parameters of the LGMD. For the neurophysiologist, the situation could hardly be any better. Psi implements the multiplicative operation of the η-function by shunting inhibition (equation 1: Vexc ≈ Vrest and Vinh ≈ Vrest). The η-function fits ψ very well according to our dynamical simulations (figure 1), and satisfactorily by the approximate criterion of figure 4. We can conclude that ψ implements the η-function in a biophysically plausible way. However, ψ explicitly specifies neither η's multiplicative operation nor its exponential function exp(·). Instead we have an interaction between shunting inhibition and a power law (·)^e, with e ≈ 3. So what about power laws in neurons? Because of e > 1, we have an expansive nonlinearity.
Expansive power-law nonlinearities are well established in phenomenological models of simple cells of the primate visual cortex [1, 11]. Such models approximate a simple cell's instantaneous firing rate r from linear filtering of a stimulus (say Y) by r ∝ ([Y]+)^e, where [·]+ sets all negative values to zero and lets all positive values pass. Although experimental evidence favors linear thresholding operations like r ∝ [Y − Ythres]+, neuronal responses can behave according to power-law functions if Y includes stimulus-independent noise [19]. Given this evidence, the power-law function of the inhibitory input into ψ could possibly be interpreted as a phenomenological description of presynaptic processes. The power law would also be the critical feature by means of which the neurophysiologist could distinguish between the η-function and ψ. A study of Gabbiani et al. aimed to provide direct evidence for a neuronal implementation of the η-function [8]. Consequently, the study would be evidence for a biophysical implementation of “direct” multiplication via log ˙Θ − αΘ. Their experimental evidence fell somewhat short in the last part, where “exponentiation through active membrane conductances” should invert logarithmic encoding. Specifically, the authors observed that “In 7 out of 10 neurons, a third-order power law best described the data” (sixth-order in one animal). Alea iacta est. Acknowledgments MSK would like to thank Stephen M. Rogers for kindly providing the recording data for compiling figure 6. MSK furthermore acknowledges support from the Spanish Government, via the Ramón y Cajal program and the research grant DPI2010-21513. References [1] D.G. Albrecht and D.B. Hamilton, Striate cortex of monkey and cat: contrast response function, Journal of Neurophysiology 48 (1982), 217–237. [2] S. Bermudez i Badia, U. Bernardet, and P.F.M.J.
Verschure, Non-linear neuronal responses as an emergent property of afferent networks: A case study of the locust lobula giant movement detector, PLoS Computational Biology 6 (2010), no. 3, e1000701. [3] M. Blanchard, F.C. Rind, and F.M.J. Verschure, Collision avoidance using a model of locust LGMD neuron, Robotics and Autonomous Systems 30 (2000), 17–38. [4] D.F. Cooke and M.S.A. Graziano, Super-flinchers and nerves of steel: Defensive movements altered by chemical manipulation of a cortical motor area, Neuron 43 (2004), no. 4, 585–593. [5] L. Fogassi, V. Gallese, L. Fadiga, G. Luppino, M. Matelli, and G. Rizzolatti, Coding of peripersonal space in inferior premotor cortex (area F4), Journal of Neurophysiology 76 (1996), 141–157. [6] F. Gabbiani, I. Cohen, and G. Laurent, Time-dependent activation of feed-forward inhibition in a looming sensitive neuron, Journal of Neurophysiology 94 (2005), 2150–2161. [7] F. Gabbiani, H.G. Krapp, N. Hatsopoulos, C.H. Mo, C. Koch, and G. Laurent, Multiplication and stimulus invariance in a looming-sensitive neuron, Journal of Physiology - Paris 98 (2004), 19–34. [8] F. Gabbiani, H.G. Krapp, C. Koch, and G. Laurent, Multiplicative computation in a visual neuron sensitive to looming, Nature 420 (2002), 320–324. [9] F. Gabbiani, H.G. Krapp, and G. Laurent, Computation of object approach by a wide-field, motion-sensitive neuron, Journal of Neuroscience 19 (1999), no. 3, 1122–1141. [10] N. Hatsopoulos, F. Gabbiani, and G. Laurent, Elementary computation of object approach by a wide-field visual neuron, Science 270 (1995), 1000–1003. [11] D.J. Heeger, Modeling simple-cell direction selectivity with normalized, half-squared, linear operators, Journal of Neurophysiology 70 (1993), 1885–1898. [12] A.L. Hodgkin and A.F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, Journal of Physiology 117 (1952), 500–544. [13] F. Hoyle, The black cloud, Penguin Books, London, 1957.
[14] M.S. Keil, E. Roca-Morena, and A. Rodríguez-Vázquez, A neural model of the locust visual system for detection of object approaches with real-world scenes, Proceedings of the Fourth IASTED International Conference (Marbella, Spain), vol. 5119, 6-8 September 2004, pp. 340–345. [15] M.S. Keil and A. Rodríguez-Vázquez, Towards a computational approach for collision avoidance with real-world scenes, Proceedings of SPIE: Bioengineered and Bioinspired Systems (Maspalomas, Gran Canaria, Canary Islands, Spain) (A. Rodríguez-Vázquez, D. Abbot, and R. Carmona, eds.), vol. 5119, SPIE - The International Society for Optical Engineering, 19-21 May 2003, pp. 285–296. [16] J.G. King, J.Y. Lettvin, and E.R. Gruberg, Selective, unilateral, reversible loss of behavioral responses to looming stimuli after injection of tetrodotoxin or cadmium chloride into the frog optic nerve, Brain Research 841 (1999), no. 1-2, 20–26. [17] C. Koch, Biophysics of computation: information processing in single neurons, Oxford University Press, New York, 1999. [18] D.N. Lee, A theory of visual control of braking based on information about time-to-collision, Perception 5 (1976), 437–459. [19] K.D. Miller and T.W. Troyer, Neural noise can explain expansive, power-law nonlinearities in neuronal response functions, Journal of Neurophysiology 87 (2002), 653–659. [20] H. Nakagawa and K. Hongjian, Collision-sensitive neurons in the optic tectum of the bullfrog, Rana catesbeiana, Journal of Neurophysiology 104 (2010), no. 5, 2487–2499. [21] M. O'Shea and C.H.F. Rowell, Protection from habituation by lateral inhibition, Nature 254 (1975), 53–55. [22] M. O'Shea and J.L.D. Williams, The anatomy and output connection of a locust visual interneurone: the lobula giant movement detector (LGMD) neurone, Journal of Comparative Physiology 91 (1974), 257–266. [23] S. Peron and F. Gabbiani, Spike frequency adaptation mediates looming stimulus selectivity, Nature Neuroscience 12 (2009), no. 3, 318–326.
[24] F.C. Rind, A chemical synapse between two motion detecting neurones in the locust brain, Journal of Experimental Biology 110 (1984), 143–167. [25] F.C. Rind and D.I. Bramwell, Neural network based on the input organization of an identified neuron signaling implending collision, Journal of Neurophysiology 75 (1996), no. 3, 967–985. 8 [26] F.C. Rind and P.J. Simmons, Orthopteran DCMD neuron: a reevaluation of responses to moving objects. I. Selective responses to approaching objects, Journal of Neurophysiology 68 (1992), no. 5, 1654–1666. [27] , Orthopteran DCMD neuron: a reevaluation of responses to moving objects. II. Critical cues for detecting approaching objects, Journal of Neurophysiology 68 (1992), no. 5, 1667–1682. [28] , Signaling of object approach by the dcmd neuron of the locust, Journal of Neurophysiology 77 (1997), 1029–1033. [29] , Reply, Trends in Neuroscience 22 (1999), no. 5, 438. [30] S.M. Roger, G.W.J. Harston, F. Kilburn-Toppin, T. Matheson, M. Burrows, F. Gabbiani, and H.G. Krapp, Spatiotemporal receptive field properties of a looming-sensitive neuron in solitarious and gregarious phases of desert locust, Journal of Neurophysiology 103 (2010), 779–792. [31] S.K. Rushton and J.P. Wann, Weighted combination of size and disparity: a computational model for timing ball catch, Nature Neuroscience 2 (1999), no. 2, 186–190. [32] Yue. S., Rind. F.C., M.S. Keil, J. Cuadri, and R. Stafford, A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment, Neurocomputing 69 (2006), 1591–1598. [33] G.R. Schlotterer, Response of the locust descending movement detector neuron to rapidly approaching and withdrawing visual stimuli, Canadian Journal of Zoology 55 (1977), 1372–1376. [34] H. Sun and B.J. Frost, Computation of different optical variables of looming objects in pigeon nucleus rotundus neurons, Nature Neuroscience 1 (1998), no. 4, 296–303. [35] J.R. 
Tresilian, Visually timed action: time-out for ’tau’?, Trends in Cognitive Sciences 3 (1999), no. 8, 1999. [36] Y. Wang and B.J. Frost, Time to collision is signalled by neurons in the nucleus rotundus of pigeons, Nature 356 (1992), 236–238. [37] J.P. Wann, Anticipating arrival: is the tau-margin a specious theory?, Journal of Experimental Psychology and Human Perceptual Performance 22 (1979), 1031–1048. [38] M. Wicklein and N.J. Strausfeld, Organization and significance of neurons that detect change of visual depth in the hawk moth manduca sexta, The Journal of Comparative Neurology 424 (2000), no. 2, 356– 376. 9
2011
1
4,146
Rapid Deformable Object Detection using Dual-Tree Branch-and-Bound Iasonas Kokkinos Center for Visual Computing Ecole Centrale de Paris iasonas.kokkinos@ecp.fr Abstract In this work we use Branch-and-Bound (BB) to efficiently detect objects with deformable part models. Instead of evaluating the classifier score exhaustively over image locations and scales, we use BB to focus on promising image locations. The core problem is to compute bounds that accommodate part deformations; for this we adapt the Dual Tree data structure [7] to our problem. We evaluate our approach using Mixtures of Deformable Part Models [4]. We obtain exactly the same results but are 10-20 times faster on average. We also develop a multiple-object detection variation of the system, where hypotheses for 20 categories are inserted in a common priority queue. For the problem of finding the strongest category in an image this results in a 100-fold speedup. 1 Introduction Deformable Part Models (DPMs) deliver state-of-the-art object detection results [4] on challenging benchmarks when trained discriminatively, and have become a standard in object recognition research. At the heart of these models lies the optimization of a merit function (the classifier score) with respect to the part displacements and the global object pose. In this work we take the classifier for granted, using the models of [4], and focus on the optimization problem. The most common detection algorithm used in conjunction with DPMs relies on Generalized Distance Transforms (GDTs) [5], whose complexity is linear in the image size. Despite its amazing efficiency, this algorithm still needs to first evaluate the score everywhere before picking its maxima. In this work we use Branch-and-Bound in conjunction with part-based models. For this we exploit the Dual Tree (DT) data structure [7], developed originally to accelerate operations related to Kernel Density Estimation (KDE).
We use DTs to provide the bounds required by Branch-and-Bound. Our method is fairly generic; it applies to any star-shaped graphical model involving continuous variables, and pairwise potentials expressed as separable, decreasing binary potential kernels. We evaluate our technique using the mixture-of-deformable part models of [4]. Our algorithm delivers exactly the same results, but is 15-30 times faster. We also develop a multiple-object detection variation of the system, where all object hypotheses are inserted in the same priority queue. If our task is to find the best (or k-best) object hypotheses in an image this can result in a 100-fold speedup. 2 Previous Work on Efficient Detection Cascaded object detection [20] has led to a proliferation of vision applications, but far less work exists to deal with part-based models. The combinatorics of matching have been extensively studied for rigid objects [8], while [17] used A* for detecting object instances. For categories, recent works [1, 10, 11, 19, 6, 18, 15] have focused on reducing the high-dimensional pose search space during detection by initially simplifying the cost function being optimized, mostly using ideas similar to A* and coarse-to-fine processing. In the recent work of [4], thresholds pre-computed on the training set are used to prune computation and result in substantial speedups compared to GDTs. Branch-and-bound (BB) prioritizes the search of promising image areas, as indicated by an upper bound on the classifier's score. A particularly influential work has been the Efficient Subwindow Search (ESS) technique of [12], where an upper bound of a bag-of-words classifier score delivers the bounds required by BB. Later [16] combined Graph-Cuts with BB for object segmentation, while in [13] a general cascade system was devised for efficient detection with a nonlinear classifier.
Our work is positioned with respect to these works as follows: unlike existing BB works [16, 12, 15], we use the DPM cost and thereby accommodate parts in a rigorous energy minimization framework. And unlike the pruning-based works [1, 6, 4, 18], we do not make any approximations or assumptions about when it is legitimate to stop computation; our method is exact. We obtain the bound required by BB from Dual Trees. To the best of our knowledge, Dual Trees have been minimally used in object detection; we are only aware of the work in [9], which used DTs to efficiently generate particles for Nonparametric Belief Propagation. Here we show that DTs can be used for part-based detection, which is related conceptually, but entirely different technically. 3 Preliminaries We first describe the cost function used in DPMs, then outline the limitations of GDT-based detection, and finally present the concepts of Dual Trees relevant to our setting. Due to lack of space we refer to [2, 4] for further details on DPMs and to [7, 14] for Dual Trees. 3.1 Merit function for DPMs We consider a star-shaped graphical model consisting of a set of $P + 1$ nodes $\{n_0, \ldots, n_P\}$; $n_0$ is called the root and the part nodes $n_1, \ldots, n_P$ are connected to the root. Each node $p$ has a unary observation potential $U_p(x)$, indicating the fidelity of the image at $x$ to the node; e.g. in [2] $U_p(x)$ is the inner product of a HOG feature at $x$ with a discriminant $w_p$ for $p$. The location $x_p = (h_p, v_p)$ of part $p$ is constrained with respect to the root location $x_0 = (h_0, v_0)$ in terms of a quadratic binary potential $B_p(x_p, x_0)$ of the form: $B_p(x_p, x_0) = -(x_p - x_0 - \mu_p)^T I_p (x_p - x_0 - \mu_p) = -(h_p - h_0 - \eta_p)^2 H_p - (v_p - v_0 - \nu_p)^2 V_p$, where $I_p = \mathrm{diag}(H_p, V_p)$ is a diagonal precision matrix and $\mu_p = (\eta_p, \nu_p)$ is the nominal difference of root-part locations. We will freely alternate between the vector $x$ and its horizontal/vertical $h/v$ coordinates.
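As a concrete illustration of the quadratic binary potential defined above, the following sketch evaluates $B_p$ directly; the function name and all numeric values are hypothetical, chosen only for illustration.

```python
# Direct evaluation of the quadratic binary potential
# B_p(x_p, x_0) = -(h_p - h_0 - eta_p)^2 H_p - (v_p - v_0 - nu_p)^2 V_p.
# Function name and all numbers are hypothetical.

def binary_potential(xp, x0, mu, precision):
    """Deformation score of part location xp given root location x0.

    xp, x0: (h, v) locations; mu = (eta, nu): nominal part-root offset;
    precision = (H, V): diagonal precision (deformation stiffness).
    """
    (hp, vp), (h0, v0) = xp, x0
    eta, nu = mu
    H, V = precision
    return -(hp - h0 - eta) ** 2 * H - (vp - v0 - nu) ** 2 * V

# Zero (maximal) when the part sits at its nominal offset from the root,
# quadratically worse as it deviates.
print(binary_potential((12, 7), (10, 5), (2, 2), (1.0, 1.0)))  # 0.0
print(binary_potential((13, 7), (10, 5), (2, 2), (1.0, 1.0)))  # -1.0
```

The potential peaks at zero when the part sits exactly at its nominal offset $\mu_p$ from the root, and falls off quadratically at a rate set by the precisions $H_p, V_p$.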
Moreover we consider $\eta_0 = 0$, $\nu_0 = 0$ and $H_0, V_0$ large enough so that $B_0(x_p, x_0)$ will be zero for $x_p = x_0$ and practically infinite elsewhere. If the root is at $x_0$ the merit for part $p$ being at $x_p$ is given by $m_p(x_p, x_0) = U_p(x_p) + B_p(x_p, x_0)$; summing over $p$ gives the score $\sum_p m_p(x_p, x_0)$ of a root-and-parts configuration $X = (x_0, \ldots, x_P)$. The detector score at point $x$ is obtained by maximizing over those $X$ with $x_0 = x$; this amounts to computing: $S(x) \doteq \sum_{p=0}^{P} \max_{x_p} m_p(x_p, x) = \sum_{p=0}^{P} \max_{x_p} \left[ U_p(x_p) - (h_p - h - \eta_p)^2 H_p - (v_p - v - \nu_p)^2 V_p \right]. \quad (1)$ A GDT can be used to maximize each summand in Eq. 1 jointly for all values of $x_0$ in time $O(N)$, where $N$ is the number of possible locations. This is dramatically faster than the naive $O(N^2)$ computation. For a $P$-part model, complexity decreases from $O(N^2 P)$ to $O(N P)$. Still, the $N$ factor can make things slow for large images. If we know that a certain threshold will be used for detection, e.g. $-1$ for a classifier trained with SVMs, the GDT-based approach turns out to be wasteful, as it treats all image locations equally, even those where we can quickly realize that the classifier score cannot exceed this threshold. This is illustrated in Fig. 1: in (a) we show the part-root configuration that gives the maximum score, and in (b) the score of a bicycle model from [4] over the whole image domain. Our approach speeds up detection by upper bounding the score of the detector within intervals of $x$ while using low-cost operations. This allows us to use a prioritized search strategy that can refine these bounds on promising intervals, while postponing the exploration of less promising intervals. This is demonstrated in Fig. 1(c,d), where we show as heat maps the upper bounds of the intervals visited by BB: parts of the image where the heat maps are more fine-grained correspond to image locations that seemed promising. If our goal is to maximize $S(x)$, BB discards a huge amount of computation, as shown in (c); even with a more conservative criterion, i.e. finding all $x : S(x) > -1$ (d), a large part of the image domain is effectively ignored and the algorithm obtains refined bounds only around 'interesting' image locations. Figure 1: (a) Input and detection result; (b) detector score $S(x)$; (c) BB for $\arg\max_x S(x)$; (d) BB for $S(x) \geq -1$. Motivation for the Branch-and-Bound (BB) approach: standard part-based models evaluate a classifier's score $S(x)$ over the whole image domain. Typically only a tiny portion of the image domain should be positive: in (b) we draw a black contour around $\{x : S(x) > -1\}$ for an SVM-based classifier. BB ignores large intervals with low $S(x)$ by upper bounding their values, and postponing their 'exploration' in favor of more promising ones. In (c) we show as heat maps the upper bounds of the intervals visited by BB until the strongest location was explored, and in (d) of the intervals visited until all locations $x$ with $S(x) > -1$ were explored. 3.2 Dual Trees: Data Structures for Set-Set Interactions The main technical challenge is to efficiently compute upper bounds for a model involving deformable parts; our main contribution consists in realizing that this can be accomplished with the Dual Tree data structure of [7]. We now give a high-level description of Dual Trees, leaving concrete aspects for their adaptation to the detection problem; we assume the reader is familiar with KD-trees. Dual Trees were developed to efficiently evaluate expressions of the form: $P(x_j) = \sum_{i=1}^{N} w_i K(x_j, x_i), \quad x_i \in X_S,\ i = 1, \ldots, N, \quad x_j \in X_D,\ j = 1, \ldots, M \quad (2)$ where $K(\cdot, \cdot)$ is a separable, decreasing kernel, e.g. a Gaussian with diagonal covariance. We refer to $X_S$ as 'source' terms, and to $X_D$ as 'domain' terms, the idea being that the source points $X_S$ generate a 'field' $P$, which we want to evaluate at the domain locations $X_D$. Naively performing the computation in Eq.
2 considers all source-domain interactions and takes $NM$ operations. The Dual Tree algorithm efficiently computes this sum by using two KD-trees, one ($S$) for the source locations $X_S$ and another ($D$) for the domain locations $X_D$. This allows for substantial speedups when computing Eq. 2 for all domain points, as illustrated in Fig. 2: if a 'chunk' of source points cannot affect a 'chunk' of domain points, we skip computing their domain-source point interactions. 4 DPM optimization using Dual-Tree Branch and Bound Branch and Bound (BB) is a maximization algorithm for non-parametric, non-convex or even non-differentiable functions. BB searches for the interval containing the function's maximum using a prioritized search strategy; the priority of an interval is determined by the function's upper bound within it. Starting from an interval containing the whole function domain, BB increasingly narrows down to the solution: at each step an interval of solutions is popped from a priority queue, split into sub-intervals (Branch), and a new upper bound for those intervals is computed (Bound). These intervals are then inserted in the priority queue and the process repeats until a singleton interval is popped. If the bound is tight for singletons, the first singleton will be the function's global maximum. Figure 2: Left: Dual Trees efficiently deal with the interaction of 'source' (red) and 'domain' points (blue), using easily computable bounds. For instance points lying in square 6 cannot have a large effect on points in square A, therefore we do not need to go to a finer level of resolution to exactly estimate their interactions. Right: illustration of the terms involved in the geometric bound computations of Eq. 10. Coming to our case, the DPM criterion developed in Sec. 3.1 is a sum of scores of the form: $s_p(x_0) = \max_{x_p} m_p(x_p, x_0) = \max_{(h_p, v_p)} U_p(h_p, v_p) - (h_p - h_0 - \eta_p)^2 H_p - (v_p - v_0 - \nu_p)^2 V_p.$
(3) Using Dual Tree terminology, the 'source points' correspond to part locations $x_p$, i.e. $X_{S_p} = \{x_p\}$, and the 'domain points' to object locations $x_0$, i.e. $X_D = \{x_0\}$. Dual Trees allow us to efficiently derive bounds for $s_p(x_0)$, $x_0 \in X_D$, the scores that a set of object locations can have due to a set of part $p$ locations. Once these are formed, we add over parts to bound the score $S(x_0) = \sum_p s_p(x_0)$, $x_0 \in X_D$. This provides the bound needed by Branch-and-Bound (BB). We now present our approach through a series of intermediate problems. These may be amenable to simpler solutions, but the more complex solutions discussed finally lead to our algorithm. 4.1 Maximization for One Domain Point We first introduce notation: we index the source/domain points in $X_S/X_D$ using $i/j$ respectively. We denote by $w^p_i = U_p(x_i)$ the unary potential of part $p$ at location $x_i$. We shift the unary scores by the nominal offsets $\mu$, which gives new source locations: $x_i \to x_i - \mu_p$, $(h_i, v_i) \to (h_i - \eta_p, v_i - \nu_p)$. Finally, we drop $p$ from $m_p$, $H_p$ and $V_p$ unless necessary. We can now write Eq. 3 as: $m(h_0, v_0) = \max_{i \in S_p} w_i - H(h_i - h_0)^2 - V(v_i - v_0)^2. \quad (4)$ To evaluate Eq. 4 at $(h_0, v_0)$ we use prioritized search over intervals of $i \in S_p$, starting from $S_p$ and gradually narrowing down to the best $i$. To prioritize intervals we use a KD-tree for the source points $x_i \in X_{S_p}$ to quickly compute bounds of Eq. 4. Specifically, if $S_n$ is the set of children of the $n$-th node of the KD-tree for $S_p$, consider the subproblem: $m_n(h_0, v_0) = \max_{i \in S_n} w_i - H(h_i - h_0)^2 - V(v_i - v_0)^2 = \max_{i \in S_n} w_i + G_i, \quad (5)$ where $G_i \doteq -H(h_i - h_0)^2 - V(v_i - v_0)^2$ stands for the geometric part of Eq. 5. We know that for all points $(h_i, v_i)$ within $S_n$ we have $h_i \in [l_n, r_n]$ and $v_i \in [b_n, t_n]$, where $l, r, b, t$ are the left, right, bottom, top axes defining $n$'s bounding box, $B_n$.
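The KD-tree bookkeeping used in this section (each node storing the bounding box $[l_n, r_n] \times [b_n, t_n]$ of the source points below it, together with the largest unary weight among them) can be sketched as follows; this is a toy illustration under assumed data, not the authors' implementation.

```python
# Toy 2-D KD-tree over weighted source points (an illustrative sketch, not the
# authors' implementation). Each node stores the bounding box [l, r] x [b, t]
# of the points below it and the largest unary weight w* among them, which is
# exactly what the bounding step of Sec. 4.1 consumes.

class Node:
    def __init__(self, pts):
        hs = [p[0] for p in pts]
        vs = [p[1] for p in pts]
        self.box = (min(hs), max(hs), min(vs), max(vs))  # (l, r, b, t)
        self.wmax = max(p[2] for p in pts)                # best unary weight
        self.left = self.right = None

def build(pts, depth=0):
    node = Node(pts)
    if len(pts) > 1:                       # split on alternating h/v axes
        axis = depth % 2
        pts = sorted(pts, key=lambda p: p[axis])
        mid = len(pts) // 2
        node.left = build(pts[:mid], depth + 1)
        node.right = build(pts[mid:], depth + 1)
    return node

# Source points as (h, v, weight) triples (made-up values).
src = [(0, 0, 1.0), (4, 1, 3.0), (2, 5, 2.0), (7, 6, 0.5)]
root = build(src)
print(root.box)   # (0, 7, 0, 6)
print(root.wmax)  # 3.0
```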
We can then bound $G_i$ within $S_n$ as follows: $\overline{G}_n = -H \max(\lceil l - h_0 \rceil, \lceil h_0 - r \rceil)^2 - V \max(\lceil b - v_0 \rceil, \lceil v_0 - t \rceil)^2 \quad (6)$ $\underline{G}_n = -H \max(|l - h_0|, |h_0 - r|)^2 - V \max(|b - v_0|, |v_0 - t|)^2, \quad (7)$ where $\lceil \cdot \rceil = \max(\cdot, 0)$, and $\overline{G}_n \geq G_i \geq \underline{G}_n\ \forall i \in S_n$. The upper bound is zero inside $B_n$ and uses the boundaries of $B_n$ that lie closest to $(h_0, v_0)$ when $(h_0, v_0)$ is outside $B_n$. The lower bound uses the distance from $(h_0, v_0)$ to the furthest point within $B_n$. Regarding the $w_i$ term in Eq. 5, for both bounds we can use the value $w_j$, $j = \arg\max_{i \in S_n} w_i$. This is clearly suited for the upper bound. For the lower bound, since $G_i \geq \underline{G}_n\ \forall i \in S_n$, we have $\max_{i \in S_n} w_i + G_i \geq w_j + G_j \geq w_j + \underline{G}_n$. So $w_j + \underline{G}_n$ provides a proper lower bound for $\max_{i \in S_n} w_i + G_i$. Summing up, we bound Eq. 5 as: $w_j + \overline{G}_n \geq m_n(h_0, v_0) \geq w_j + \underline{G}_n$. Figure 3: Supporter pruning: source nodes $\{m, n, o\}$ are among the possible supporters of domain-node $l$. Their upper and lower bounds (shown as numbers to the right of each node) are used to prune them. Here, the upper bound for $n$ (3) is smaller than the maximal lower bound among supporters (4, from $o$): this implies the upper bound of $n$'s children's contributions to $l$'s children (shown here for $l_1$) will not surpass the lower bound of $o$'s children. We can thus safely remove $n$ from the supporters. We can use the upper bound in a prioritized search for the maximum of $m(h_0, v_0)$, as described in Table 1. Starting with the root of the KD-tree we expand its children nodes, estimate their priorities (upper bounds), and insert them in a priority queue. The search stops when the first leaf node is popped; this provides the maximizer, as its upper and lower bounds coincide and all other elements waiting in the queue have smaller upper bounds. The lower bound is useful in Sec. 4.2.
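A minimal sketch of the point-to-box bounds of Eqs. 6-7, using the clamped closest-edge distance for the upper bound and the furthest-edge distance for the lower bound; the query point, box, and precisions are made up for illustration.

```python
# Sketch of the point-to-box bounds (Eqs. 6-7) on the geometric term
# G_i = -H (h_i - h0)^2 - V (v_i - v0)^2 over all points in a box
# [l, r] x [b, t]. Query point, box, and precisions are made up.

def geom_bounds(h0, v0, box, H, V):
    l, r, b, t = box
    clamp = lambda z: max(z, 0.0)
    # Upper bound: distance to the closest point of the box (zero inside).
    dh = max(clamp(l - h0), clamp(h0 - r))
    dv = max(clamp(b - v0), clamp(v0 - t))
    upper = -(H * dh ** 2 + V * dv ** 2)
    # Lower bound: distance to the furthest point of the box.
    Dh = max(abs(l - h0), abs(h0 - r))
    Dv = max(abs(b - v0), abs(v0 - t))
    lower = -(H * Dh ** 2 + V * Dv ** 2)
    return upper, lower

# A query to the right of the box: closest edge is 2 away horizontally,
# the furthest corner is 12 away horizontally and 6 vertically.
print(geom_bounds(12, 4, (0, 10, 0, 10), 1.0, 1.0))  # (-4.0, -180.0)
```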
4.2 Maximization for All Domain Points Having described how KD-trees provide bounds in the single domain point case, we now describe how Dual Trees can speed up this operation when treating multiple domain points simultaneously. Specifically, we consider the following maximization problem: $x^* = \arg\max_{x \in X_D} m(x) = \arg\max_{j \in D} \max_{i \in S} w_i - H(h_i - h_j)^2 - V(v_i - v_j)^2, \quad (8)$ where $X_D/D$ is the set of domain points/indices and $S$ are the source indices. The previous algorithm could deliver $x^*$ by computing $m(x)$ repeatedly for each $x \in X_D$ and picking the maximizer. But this would repeat similar checks for neighboring domain points, which can instead be done jointly. For this, as in the original Dual Tree work, we build a second KD-tree for the domain points (the 'Domain tree', as opposed to the 'Source tree'). The nodes in the Domain tree ('domain-nodes') correspond to intervals of domain points that are processed jointly. This saves repetitions of similar bounding operations, and quickly discards large domain areas with poor bounds. For the bounding operations, as in Sec. 4.1 we consider the effect of source points contained in a node $S_n$ of the Source tree. The difference is that now we bound the maximum of this quantity over domain points contained in a domain-node $D_l$. Specifically, we consider the quantity: $m_{l,n} = \max_{j \in D_l} \max_{i \in S_n} w_i - H(h_i - h_j)^2 - V(v_i - v_j)^2 \quad (9)$ Bounding $G_{i,j} = -H(h_i - h_j)^2 - V(v_i - v_j)^2$ involves two 2D intervals, one for the domain-node $l$ and one for the source-node $n$. If the interval for node $n$ is centered at $(h_n, v_n)$ and has dimensions $d_{h,n}, d_{v,n}$, we use $\bar{d}_h = \frac{1}{2}(d_{h,l} + d_{h,n})$, $\bar{d}_v = \frac{1}{2}(d_{v,l} + d_{v,n})$ and write: $\overline{G}_{l,n} = -H \max(\lceil h_n - h_l - \bar{d}_h \rceil, \lceil h_l - h_n - \bar{d}_h \rceil)^2 - V \max(\lceil v_n - v_l - \bar{d}_v \rceil, \lceil v_l - v_n - \bar{d}_v \rceil)^2$ $\underline{G}_{l,n} = -H \max(h_n - h_l + \bar{d}_h, h_l - h_n + \bar{d}_h)^2 - V \max(v_n - v_l + \bar{d}_v, v_l - v_n + \bar{d}_v)^2$
The lower bound uses the furthest points of the two boxes. As in Sec. 4.1, we use $w^*_n = \max_{i \in S_n} w_i$ for the first term in Eq. 9, and bound $m_{l,n}$ as follows: $\underline{G}_{l,n} + w^*_n \leq m_{l,n} \leq \overline{G}_{l,n} + w^*_n. \quad (10)$ This expression bounds the maximal value $m(x)$ that a point $x$ in domain-node $l$ can have using contributions from points in source-node $n$. Our initial goal was to find the maximum using all possible source point contributions. We now describe a recursive approach to limit the set of source-nodes considered, in a manner inspired by the 'multi-recursion' approach of [7]. For this, we associate every domain-node $l$ with a set $S_l$ of 'supporter' source-nodes that can yield the maximal contribution to points in $l$. We start by associating the root node of the Domain tree with the root node of the Source tree, which means that all domain-source point interactions are originally considered. We then recursively increase the 'resolution' of the Domain tree in parallel with the 'resolution' of the Source tree. More specifically, to determine the supporters for a child $m$ of domain-node $l$ we consider only the children of the source-nodes in $S_l$; formally, denoting by $pa$ and $ch$ the parent and child operations respectively, we have $S_m \subset \cup_{n \in S_{pa(m)}} \{ch(n)\}$. Our goal is to reduce computation by keeping $S_m$ small. This is achieved by pruning based on both the lower and upper bounds derived above. The main observation is that when we go from parents to children we decrease the number of source/domain points; this tightens the bounds, i.e. makes the upper bounds less optimistic and the lower bounds more optimistic. Denoting the maximal lower bound for contributions to parent node $l$ by $\underline{G}_l = \max_{n \in S_l} \underline{G}_{l,n}$, this means that $\underline{G}_k \geq \underline{G}_l$ if $pa(k) = l$; on the flip side, $\overline{G}_{k,q} \leq \overline{G}_{l,n}$ if $pa(k) = l$, $pa(q) = n$. This means that if for source-node $n$ at the parent level $\overline{G}_{l,n} < \underline{G}_l$, then at the children level the children of $n$ will contribute something worse than $\underline{G}_m$, the lower bound on $l$'s child score.
We therefore do not need to keep $n$ among $S_l$: its children's contribution will certainly be worse than the best contribution from other nodes' children. Based on this observation we can reduce the set of supporters while guaranteeing optimality. Pseudocode summarizing this algorithm is provided in Table 1. The bounds in Eq. 10 are used in a prioritized search algorithm for the maximum of $m(x)$ over $x$. The algorithm uses a priority queue for Domain tree nodes, initialized with the root of the Domain tree (i.e. the whole range of possible locations $x$). At each iteration we pop a Domain tree node from the queue and compute upper bounds and supporters for its children, which are then pushed into the priority queue. The first leaf node that is popped contains the best domain location: its upper bound equals its lower bound, and all other nodes in the priority queue have smaller upper bounds, therefore they cannot result in a better solution. 4.3 Maximization over All Domain Points and Multiple Parts: Branch and Bound for DPMs The algorithm we described in the previous subsection is essentially a Branch-and-Bound (BB) algorithm for the maximization of a merit function $x^* = \arg\max_{x_0} m(x_0) = \arg\max_{(h_0, v_0)} \max_{i \in S_p} w_i - H(h_i - h_0)^2 - V(v_i - v_0)^2 \quad (11)$ corresponding to a DPM with a single part $p$. To see this, recall that at each step BB pops a domain of the function being maximized from the priority queue, splits it into subdomains (Branch), and computes a new upper bound for the subdomains (Bound). In our case Branching amounts to considering the two descendants of the domain node being popped, while Bounding amounts to taking the maximum of the upper bounds of the domain node's supporters. The single-part DPM optimization problem is rather trivial, but adapting the technique to the multi-part case is now easy. For this, we rewrite Eq. 1 in a convenient form as: $m(h_0, v_0) = \sum_{p=0}^{P} \max_{i \in S} w_{p,i} - H_p(h^p_i - h_0)^2 - V_p(v^p_i - v_0)^2 \quad (12)$ using the conventions we used in Eq. 4.
Namely, we only consider using points in $S$ for object parts, and subtract $\mu_p$ from $(h_i, v_i)$ to yield simple quadratic forms; since $\mu_p$ is part-dependent, we now have a $p$ superscript for $h_i, v_i$. Further, we have in general different $H, V$ variables for different parts, so we brought back the $p$ subscript for these. Finally, $w_{p,i}$ depends on $p$, since the same image point will give different unary potentials for different object parts. From this form we realize that computing the upper bound of $m(x)$ within a range of values of $x$, as required by Branch-and-Bound, is as easy as it was for the single terms in the previous section. Specifically, we have $m(x) = \sum_{p=0}^{P} m_p(x)$, where $m_p$ are the individual part contributions; since $\max_x \sum_{p=0}^{P} m_p(x) \leq \sum_{p=0}^{P} \max_x m_p(x)$, we can separately upper bound the individual part contributions and sum them up to get an overall upper bound. Pseudocode describing the maximization algorithm is provided in Table 1. Note that each part has its own KD-tree (SourcT[p]): we build a separate Source tree per part using the part-specific coordinates $(h^p, v^p)$ and weights $w_{p,i}$. Each part's contribution to the score is computed using the supporters it lends to the node; the total bound is obtained by summing the individual part bounds.
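To make the bound-and-sum strategy concrete, here is a rough best-first Branch-and-Bound loop for a toy 1-D multi-part model. For brevity the per-part interval bound is computed by brute force rather than with dual trees, so the sketch only illustrates the branch/bound/priority-queue control flow; all names and values are illustrative.

```python
import heapq

def part_upper_bound(lo, hi, part):
    # Upper bound of one part's contribution max_i (w[i] - H * (i - x)^2)
    # over all root locations x in the interval [lo, hi]. Brute force over i;
    # a dual-tree implementation would bound this without touching every i.
    w, H = part
    return max(wi - H * max(lo - i, i - hi, 0) ** 2 for i, wi in enumerate(w))

def bb_max(parts, n):
    # Best-first Branch-and-Bound: pop the interval with the largest upper
    # bound, split it (Branch), re-bound the halves (Bound), and stop at the
    # first popped singleton, where the bound is tight.
    ub = sum(part_upper_bound(0, n - 1, p) for p in parts)
    heap = [(-ub, 0, n - 1)]
    while heap:
        neg_ub, lo, hi = heapq.heappop(heap)
        if lo == hi:
            return lo, -neg_ub
        mid = (lo + hi) // 2
        for a, b in ((lo, mid), (mid + 1, hi)):
            ub = sum(part_upper_bound(a, b, p) for p in parts)
            heapq.heappush(heap, (-ub, a, b))

# Toy 2-part model on 6 root locations: (unary weights, deformation stiffness).
parts = [([0, 3, 1, 0, 5, 0], 1.0), ([1, 0, 4, 0, 0, 2], 1.0)]
print(bb_max(parts, 6))  # (3, 7.0): root location 3 scores 4.0 + 3.0
```

Because a singleton's bound equals its true score, the first singleton popped is guaranteed to be the global maximizer, exactly as argued for the algorithms of Table 1.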
Single Domain Point
IN: ST, x {Source Tree, location x}
OUT: arg max_{x_i in ST} m(x, x_i)
  Push(S, ST.root)
  while 1 do
    Pop(S, popped)
    if popped.UB = popped.LB then return popped end if
    for side = [Left, Right] do
      child = popped.side
      child.UB = BoundU(x, child)
      child.LB = BoundL(x, child)
      Push(S, child)
    end for
  end while

Multiple Domain Points
IN: ST, DT {Source/Domain Tree}
OUT: arg max_{x in DT} max_{i in ST} m(x, x_i)
  Seed = DT.root; Seed.supporters = ST.root
  Push(S, Seed)
  while 1 do
    Pop(S, popped)
    if popped.UB = popped.LB then return popped end if
    for side = [Left, Right] do
      child = popped.side
      supp = Descend(popped.supp)
      UB, supc = Bound(child, supp, DT, ST)
      child.UB = UB; child.supc = supc
      Push(S, child)
    end for
  end while

Multiple Domain Points, Multiple Parts
IN: ST[P], DT {P Source Trees, Domain Tree}
OUT: arg max_{x in DT} sum_p max_i m(x, x_{p,i})
  Seed = DT.root
  for p = 1 to P do Seed.supporters[p] = ST[p].root end for
  Push(S, Seed)
  while 1 do
    Pop(S, popped)
    if popped.UB = popped.LB then return popped end if
    for side = [Left, Right] do
      child = popped.side; UB = 0
      for part = 1 to P do
        supp = Descend(popped.supp[part])
        UP, s = Bound(child, supp, DT, ST[part])
        child.supp[part] = s
        UB = UB + UP
      end for
      child.UB = UB
      Push(S, child)
    end for
  end while

Bounding Routine
IN: child, supporters, DT, ST
OUT: supch, MaxLB {chosen supporters, max lower bound}
  for n in supporters do
    UB[n] = BoundU(DT.node[child], ST.node[n])
    LB[n] = BoundL(DT.node[child], ST.node[n])
  end for
  MaxLB = max(LB)
  supch = supporters(find(UB > MaxLB))
  return supch, MaxLB

Table 1: Pseudocode for the algorithms presented in Section 4.

5 Results - Application to Deformable Object Detection To estimate the merit of BB we first compare with the mixtures-of-DPMs developed and distributed by [3]. We directly extend the Branch-and-Bound technique that we developed for a single DPM to deal with multiple scales and mixtures ('ORs') of DPMs [4, 21], by inserting all object hypotheses into the same queue.
To detect multiple instances of objects at multiple scales we continue BB after getting the best scoring object hypothesis. As termination criterion we choose to stop when we pop an interval whose upper bound is below a fixed threshold. Our technique delivers essentially the same results as [4]. One minuscule difference is that BB uses floating point arithmetic for the part locations, while in GDT they are necessarily processed at integer resolution; other than that the results are identical. We therefore do not provide any detection performance curves, but only timing results. Coming to time efficiency, in Fig. 4(a) we compare the results of the original DPM mixture model and our implementation. We use 2000 images from the Pascal dataset and a mix of models for different object classes (gains vary per category). We consider the standard detection scenario where we want to detect all objects in an image having score above a certain threshold. We show how the threshold affects the speedup we obtain; for a conservative threshold the speedup is typically tenfold, but as we become more aggressive it doubles. Figure 4 (log-scale speedup vs. image rank): (a) single-object speedup of Branch and Bound compared to GDTs on images from the Pascal dataset, for thresholds t = -0.4 to -1.0; (b, c) multi-object speedup (M objects, 1-best; 20 objects, k-best); (d) speedup due to the front-end computation of the unary potentials. Please see text for details. As a second application, we consider the problem of identifying the 'dominant' object present in the image, i.e. the category that gives the largest score.
Typically, simpler models such as bag-of-words classifiers are applied to this problem, based on the understanding that part-based models can be time-consuming, and that therefore applying a large set of models to an image would be impractical. Our claim is that Branch-and-Bound allows us to pursue a different approach, where in fact having more object categories can increase the speed of detection, if we leave the unary potential computation aside. Specifically, our approach can be directly extended to the multiple-object detection setting; as long as the scores computed by different object categories are commensurate, they can all be inserted in the same priority queue. In our experiments we observed that we can get a response faster by introducing more models. The reason for this is that including in our object repertoire a model giving a large score helps BB stop; otherwise BB keeps searching for another object. In plots (b) and (c) of Fig. 4 we show systematic results on the Pascal dataset. We compare the time that would be required by GDT to perform detection of all the object categories considered in Pascal to that of a system simultaneously exploring all models. In (b) we show how finding the first-best result is accelerated as the number of objects (M) increases, while in (c) we show how increasing the 'k' in 'k-best' affects the speedup. For small values of k the gains become more pronounced. Of course, if we use a fixed threshold the speedup does not change when compared to plot (a), since essentially the objects do not 'interact' in any way (we do not use non-maximum suppression). But as we turn to the best-first problem, the speedup becomes dramatic, ranging up to the order of a hundred times. We note that the timings refer to the 'message passing' part implemented with GDT and not the computation of unary potentials, which is common for both models and is currently the bottleneck.
Even though it is tangential to our contribution in this paper, we mention that, as shown in plot (d), we compute unary potentials approximately five times faster than the single-threaded convolution provided by [3] by exploiting Matlab's optimized matrix multiplication routines. 6 Conclusions In this work we have introduced Dual-Tree Branch-and-Bound for efficient part-based detection. We have used Dual Trees to compute upper bounds on the cost function of a part-based model and thereby derived a Branch-and-Bound algorithm for detection. Our algorithm is exact and makes no approximations, delivering identical results with the DPMs used in [4], but typically in 10-15 times less time. Further, we have shown that the flexibility of prioritized search allows us to consider new tasks, such as multiple-object detection, which yielded further speedups. The main challenge for future work will be to reduce the unary term computation cost; we intend to use BB for this task too. 7 Acknowledgements We are grateful to the authors of [3, 12, 9] for making their code available, and to the reviewers for constructive feedback. This work was funded by grant ANR-10-JCJC-0205. 8 References [1] Y. Chen, L. Zhu, C. Lin, A. L. Yuille, and H. Zhang. Rapid inference on a novel and/or graph for object detection, segmentation and parsing. In NIPS, 2007. [2] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008. [3] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models, release 4. http://www.cs.brown.edu/~pff/latent-release4/. [4] P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. Cascade object detection with deformable part models. In CVPR, 2010. [5] P. F. Felzenszwalb and D. P. Huttenlocher. Distance transforms of sampled functions. Technical report, Cornell CS, 2004. [6] V. Ferrari, M. J. Marin-Jimenez, and A. Zisserman.
Progressive search space reduction for human pose estimation. In CVPR, 2008. [7] A. G. Gray and A. W. Moore. Nonparametric density estimation: Toward computational tractability. In SIAM International Conference on Data Mining, 2003. [8] E. Grimson. Object Recognition by Computer. MIT Press, 1991. [9] A. T. Ihler, E. B. Sudderth, W. T. Freeman, and A. S. Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In NIPS, 2003. [10] I. Kokkinos and A. Yuille. HOP: Hierarchical Object Parsing. In CVPR, 2009. [11] I. Kokkinos and A. L. Yuille. Inference and learning with hierarchical shape models. International Journal of Computer Vision, 93(2):201–225, 2011. [12] C. Lampert, M. Blaschko, and T. Hofmann. Beyond sliding windows: Object localization by efficient subwindow search. In CVPR, 2008. [13] C. H. Lampert. An efficient divide-and-conquer cascade for nonlinear object detection. In CVPR, 2010. [14] D. Lee, A. G. Gray, and A. W. Moore. Dual-tree fast Gauss transforms. In NIPS, 2005. [15] A. Lehmann, B. Leibe, and L. V. Gool. Fast PRISM: Branch and Bound Hough Transform for Object Class Detection. International Journal of Computer Vision, 94(2):175–197, 2011. [16] V. Lempitsky, A. Blake, and C. Rother. Image segmentation by branch-and-mincut. In ECCV, 2008. [17] P. Moreels, M. Maire, and P. Perona. Recognition by probabilistic hypothesis construction. In ECCV, 2004. [18] M. Pedersoli, A. Vedaldi, and J. González. A coarse-to-fine approach for fast deformable object detection. In CVPR, 2011. [19] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV, 2010. [20] P. Viola and M. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. In CVPR, 2001. [21] S. C. Zhu and D. Mumford. Quest for a Stochastic Grammar of Images. Foundations and Trends in Computer Graphics and Vision, 2(4):259–362, 2007.
Fast and Accurate k-Means for Large Datasets Michael Shindler School of EECS Oregon State University shindler@eecs.oregonstate.edu Alex Wong Department of Computer Science UC Los Angeles alexw@seas.ucla.edu Adam Meyerson Google, Inc. Mountain View, CA awmeyerson@google.com Abstract Clustering is a popular problem with many applications. We consider the k-means problem in the situation where the data is too large to be stored in main memory and must be accessed sequentially, such as from a disk, and where we must use as little memory as possible. Our algorithm is based on recent theoretical results, with significant improvements to make it practical. Our approach greatly simplifies a recently developed algorithm, both in design and in analysis, and eliminates large constant factors in the approximation guarantee, the memory requirements, and the running time. We then incorporate approximate nearest neighbor search to compute k-means in o(nk) time (where n is the number of data points; note that computing the cost, given a solution, takes Θ(nk) time). We show that our algorithm compares favorably to existing algorithms, both theoretically and experimentally, thus providing state-of-the-art performance in both theory and practice. 1 Introduction We design improved algorithms for Euclidean k-means in the streaming model. In the k-means problem, we are given a set of n points in space. Our goal is to select k points in this space to designate as facilities (sometimes called centers or means); the overall cost of the solution is the sum of the squared distances from each point to its nearest facility. The goal is to minimize this cost; unfortunately, the problem is NP-Hard to optimize, although both heuristic [21] and approximation algorithm techniques [20, 25, 7] exist. In the streaming model, we require that the point set be read sequentially, and that our algorithm stores very few points at any given time.
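As a point of reference, the Θ(nk) cost evaluation mentioned in the abstract can be sketched as follows (a minimal illustration; the function name is ours, not the authors'):

```python
def kmeans_cost(points, facilities):
    """Evaluate the k-means objective from the text: the sum of squared
    Euclidean distances from each point to its nearest facility.
    Takes Theta(nk) distance computations for n points, k facilities."""
    total = 0.0
    for p in points:
        total += min(sum((pi - fi) ** 2 for pi, fi in zip(p, f))
                     for f in facilities)
    return total
```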
Many problems which are easy to solve in the standard batch-processing model require more complex techniques in the streaming model (a survey of streaming results is available [3]); nonetheless there are a number of existing streaming approximations for Euclidean k-means. We present a new algorithm for the problem based on [9] with several significant improvements; we are able to prove a faster worst-case running time and a better approximation factor. In addition, we compare our algorithm empirically with the previous state-of-the-art results of [2] and [4] on publicly available large data sets. Our algorithm outperforms them both. The notion of clustering has widespread applicability, such as in data mining, pattern recognition, compression, and machine learning. The k-means objective is one of the most popular formalisms, and in particular Lloyd's algorithm [21] has significant usage [5, 7, 19, 22, 23, 25, 27, 28]. Many of the applications for k-means have experienced a growth in data that has overtaken the amount of memory typically available to a computer. This is expressed in the streaming model, where an algorithm must make one (or very few) passes through the data, reflecting cases where random access to the data is unavailable, such as a very large file on a hard disk. Note that the data size, despite being large, is still finite. Our algorithm is based on the recent work of [9]. They "guess" the cost of the optimum, then run the online facility location algorithm of [24] until either the total cost of the solution exceeds a constant times the guess or the total number of facilities exceeds some computed value κ. They then declare the end of a phase, increase the guess, consolidate the facilities via matching, and continue with the next point. When the stream has been exhausted, the algorithm has some κ facilities, which are then consolidated down to k.
They then run a ball k-means step (similar to [25]) by maintaining samples of the points assigned to each facility and moving the facilities to the centers of mass of these samples. The algorithm uses O(k log n) memory, runs in O(nk log n) time, and obtains an O(1) worst-case approximation. Provided that the original data set was σ-separable (see Section 1.2 for the definition), they use ball k-means to improve the approximation factor to 1 + O(σ²). From a practical standpoint, the main issue with [9] is that the constants hidden in the asymptotic notation are quite large. The approximation factor is in the hundreds, and the O(k log n) memory requirement has sufficiently high constants that there are actually more than n facilities for many of the data sets analyzed in previous papers. Further, these constants are encoded into the algorithm itself, making it difficult to argue that the performance should improve for non-worst-case inputs. 1.1 Our Contributions We substantially simplify the algorithm of [9]. We improve the manner by which the algorithm determines a better facility cost as the stream is processed, removing unnecessary checks and allowing the user to parametrize what remains. We show that our changes result in a better approximation guarantee than the previous work. We also develop a variant that computes a solution in o(nk) time and show experimentally that both algorithms outperform previous techniques. We remove the end-of-phase condition based on the total cost, ending phases only when the number of facilities exceeds κ. While we require κ ∈ Ω(k log n), we do not require any particular constants in the expression (in fact we will use κ = k log n in our experiments). We also simplify the transition between phases, observing that it is quite simple to bound the number of phases by log OPT (where OPT is the optimum k-means cost), and that in practice this number of phases is usually quite a bit less than n.
We show that despite our modifications, the worst-case approximation factor is still constant. Our proof is based on a much tighter bound on the cost incurred per phase, along with a more flexible definition of the "critical phase" by which the algorithm should terminate. Our proofs establish that the algorithm converges for any κ > k; of course, there are inherent tradeoffs between κ and the approximation bound. For appropriately chosen constants our approximation factor will be roughly 17, substantially less than the factor claimed in [9] prior to the ball k-means step. In addition, we apply approximate nearest-neighbor algorithms to compute the facility assignment of each point. The running time of our algorithm is dominated by repeated nearest-neighbor calculations, and an appropriate technique can change our running time from Θ(nk log n) to Θ(n(log k + log log n)), an improvement for most values of k. Of course, this hurts our accuracy somewhat, but we are able to show that we take only a constant-factor loss in approximation. Note that our final running time is actually faster than the Θ(nk) time needed to compute the k-means cost of a given set of facilities! In addition to our theoretical improvements, we perform a number of empirical tests using realistic data. This allows us to compare our algorithm to previous streaming k-means results [4, 2]. 1.2 Previous Work A simple local search heuristic for the k-means problem was proposed in 1957 by Lloyd [21]. The algorithm begins with k arbitrarily chosen points as facilities. At each stage, it allocates the points into clusters (each point assigned to its closest facility) and then computes the center of mass for each cluster. These centers become the new facilities for the next phase, and the process repeats until it is stable. Unfortunately, Lloyd's algorithm has no provable approximation bound, and arbitrarily bad examples exist. Furthermore, the worst-case running time is exponential [29].
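The iteration just described can be sketched as follows (a minimal illustration of Lloyd's heuristic; the function name and the iteration cap are ours):

```python
def lloyd(points, init_facilities, max_iters=100):
    """Lloyd's local-search heuristic as described in the text: assign
    each point to its closest facility, move each facility to its
    cluster's center of mass, and repeat until stable. No approximation
    guarantee is implied."""
    facilities = [tuple(f) for f in init_facilities]
    for _ in range(max_iters):
        clusters = {f: [] for f in facilities}
        for p in points:
            nearest = min(facilities,
                          key=lambda f: sum((pi - fi) ** 2
                                            for pi, fi in zip(p, f)))
            clusters[nearest].append(p)
        # center of mass per cluster; keep a facility with no points as-is
        new_facilities = [
            tuple(sum(coord) / len(pts) for coord in zip(*pts)) if pts else f
            for f, pts in clusters.items()
        ]
        if new_facilities == facilities:   # stable: local optimum reached
            break
        facilities = new_facilities
    return facilities
```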
Despite these drawbacks, Lloyd's algorithm (frequently known simply as k-means) remains common in practice. The best polynomial-time approximation for k-means is by Kanungo, Mount, Netanyahu, Piatko, Silverman, and Wu [20]. Their algorithm uses local search (similar to the k-median algorithm of [8]), and is a 9+ε approximation. However, Lloyd's observed runtime is superior, and this is a high priority for real applications. Ostrovsky, Rabani, Schulman and Swamy [25] observed that the value of k is typically selected such that the data is "well-clusterable" rather than being arbitrary. They defined the notion of σ-separability, where the input to k-means is said to be σ-separable if reducing the number of facilities from k to k−1 would increase the cost of the optimum solution by a factor of 1/σ². They designed an algorithm with approximation ratio 1 + O(σ²). Subsequently, Arthur and Vassilvitskii [7] showed that the same procedure produces an O(log k) approximation for arbitrary instances of k-means. There are two basic approaches to the streaming version of the k-means problem. Our approach is based on solving k-means as we go (thus at each point in the algorithm, our memory contains a current set of facilities). This type of approach was pioneered in 2000 by Guha, Mishra, Motwani, and O'Callaghan [17]. Their algorithm reads the data in blocks, clustering each using some non-streaming approximation, and then gradually merges these blocks when enough of them arrive. An improved result for k-median was given by Charikar, O'Callaghan, and Panigrahy in 2003 [11], producing an O(1) approximation using O(k log² n) space. Their work was based on guessing a lower bound on the optimum k-median cost and running O(log n) parallel versions of the online facility location algorithm of Meyerson [24] with facility cost based on the guessed lower bound.
When these parallel calls exceeded the approximation bounds, they would be terminated and the guessed lower bound on the optimum k-median cost would increase. The recent paper of Braverman, Meyerson, Ostrovsky, Roytman, Shindler, and Tagiku [9] extended the result of [11] to k-means and improved the space bound to O(k log n) by proving high-probability bounds on the performance of online facility location. This result also added a ball k-means step (as in [25]) to substantially improve the approximation factor under the assumption that the original data was σ-separable. Another recent result for streaming k-means, due to Ailon, Jaiswal, and Monteleoni [4], is based on a divide and conquer approach, similar to the k-median algorithm of Guha, Meyerson, Mishra, Motwani, and O'Callaghan [16]. It uses the result of Arthur and Vassilvitskii [7] as a subroutine, finding 3k log k centers for each block. Their experiments showed that this algorithm is an improvement over an online variant of Lloyd's algorithm and was comparable to the batch version of Lloyd's. The other approach to streaming k-means is based on coresets: selecting a weighted subset of the original input points such that any k-means solution on the subset has roughly the same cost as on the original point set. At any point in the algorithm, the memory should contain a weighted representative sample of the points. This approach was first used in a non-streaming setting for a variety of clustering problems by Badoiu, Har-Peled, and Indyk [10], and in the streaming setting by Har-Peled and Mazumdar [18]; the time and memory bounds were subsequently improved through a series of papers [14, 13], with the current best theoretical bounds by Chen [12]. A practical implementation of the coreset paradigm is due to Ackermann, Lammersen, Martens, Raupach, Sohler, and Swierkot [2]. Their approach was shown empirically to be fast and accurate on a variety of benchmarks.
2 Algorithm and Theory Both our algorithm and that of [9] are based on the online facility location algorithm of [24]. For the facility location problem, the number of clusters is not part of the input (as it is for k-means), but rather a facility cost is given; an algorithm to solve this problem may have as many clusters as it desires in its output, simply by denoting some point as a facility. The solution cost is then the sum of the resulting k-means cost ("service cost") and the total paid for facilities. Our algorithm runs the online facility location algorithm of [24] with a small facility cost until we have more than κ ∈ Θ(k log n) facilities. It then increases the facility cost, re-evaluates the current facilities, and continues with the stream. This repeats until the entire stream is read. The details of the algorithm are given as Algorithm 1. The major differences between our algorithm and that of [9] are as follows. We ignore the overall service cost in determining when to end a phase and raise our facility cost f. Further, the number of facilities which must open to end a phase can be any κ ∈ Θ(k log n); the constants do not depend directly on the competitive ratio of online facility location (as they did in [9]). Finally, we omit the somewhat complicated end-of-phase analysis of [9], which used matching to guarantee that the number of facilities decreased substantially with each phase and allowed bounding the number of phases by n/(k log n). We observe that our number of phases will be bounded by log_β OPT; while this is not technically bounded in terms of n, in practice this term should be smaller than the linear number of phases implied in previous work.
Algorithm 1 Fast streaming k-means (data stream, k, κ, β)
1: Initialize f = 1/(k(1 + log n)) and an empty set K
2: while some portion of the stream remains unread do
3:   while |K| ≤ κ = Θ(k log n) and some portion of the stream is unread do
4:     Read the next point x from the stream
5:     Measure δ = min_{y∈K} d(x, y)²
6:     if a probability δ/f event occurs then
7:       set K ← K ∪ {x}
8:     else
9:       assign x to its closest facility in K
10:   if stream not exhausted then
11:     while |K| > κ do
12:       Set f ← βf
13:       Move each x ∈ K to the center of mass of its points
14:       Let w_x be the number of points assigned to x ∈ K
15:       Initialize K̂ containing the first facility from K
16:       for each x ∈ K do
17:         Measure δ = min_{y∈K̂} d(x, y)²
18:         if a probability w_x δ/f event occurs then
19:           set K̂ ← K̂ ∪ {x}
20:         else
21:           assign x to its closest facility in K̂
22:       Set K ← K̂
23:   else
24:     Run batch k-means algorithm on the weighted points K
25:     Perform ball k-means (as per [9]) on the resulting set of clusters
We will give a theoretical analysis of our modified algorithm to obtain a constant approximation bound. Our constant is substantially smaller than those implicit in [9], with most of the loss occurring in the final non-streaming k-means algorithm that consolidates κ means down to k. The analysis will follow from the theorems stated below; proofs of these theorems are deferred to the appendix. Theorem 1. Suppose that our algorithm completes the data stream when the facility cost is f. Then the overall solution prior to the final re-clustering has expected service cost at most O(κf), and the probability of being within 1+ε of the expected service cost is at least 1 − 1/poly(n). Theorem 2. With probability at least 1 − 1/poly(n), the algorithm will either halt with f ≤ Θ(C*/κ)β, where C* is the optimum k-means cost, or it will halt within one phase of exceeding this value. Furthermore, for large values of κ and β, the hidden constant in Θ(C*/κ) approaches 4.
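Algorithm 1's streaming loop can be sketched in a few lines of Python. This is an illustrative sketch under our own simplifications, not the authors' code: facilities are not moved to their clusters' centers of mass, only one-dimensional point weights are tracked, and the final reduction to k means (lines 24-25) is omitted.

```python
import math
import random

def streaming_kmeans_sketch(stream, k, n, beta=4.0, seed=0):
    """Online facility location with facility cost f, raising f by a
    factor beta and re-clustering the weighted facilities whenever
    their number exceeds kappa ~ k log n, as in Algorithm 1's
    streaming phase. Returns a dict mapping facilities to weights."""
    rng = random.Random(seed)
    kappa = max(int(k * math.log(n)), k + 1)
    f = 1.0 / (k * (1 + math.log(n)))
    d2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    facilities = {}                              # facility point -> weight
    for x in stream:
        if facilities:
            delta = min(d2(x, y) for y in facilities)
            if rng.random() < min(delta / f, 1.0):
                facilities[x] = 1                # open a facility at x
            else:                                # assign to nearest facility
                y = min(facilities, key=lambda y: d2(x, y))
                facilities[y] += 1
        else:
            facilities[x] = 1
        while len(facilities) > kappa:           # end of phase
            f *= beta                            # raise the facility cost
            old = list(facilities.items())
            facilities = dict(old[:1])           # seed with the first facility
            for y, w in old[1:]:                 # re-run OFL on weighted points
                delta = min(d2(y, z) for z in facilities)
                if rng.random() < min(w * delta / f, 1.0):
                    facilities[y] = w
                else:
                    z = min(facilities, key=lambda z: d2(y, z))
                    facilities[z] += w
    return facilities
```

Note that total weight is conserved: each stream point contributes exactly one unit, whether it opens a facility or is assigned to one.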
Note that while the worst-case bound of roughly 4 proven here may not seem particularly strong, unlike the previous work of [9], the worst-case performance is not directly encoded into the algorithm. In practice, we would expect the performance of online facility location to be substantially better than worst-case (in fact, if the ordering of points in the stream is non-adversarial there is a proof to this effect in [24]); in addition, the analysis assumes that distances add (i.e. the triangle inequality is tight), which will not be true in practice (especially for points in low-dimensional space). We also assumed that using more than k facilities does not substantially help the optimum service cost (also unlikely to be true for real data). Combining these, it would be unsurprising if our service cost was actually better than optimum at the end of the data stream (of course, we used many more facilities than optimum, so it is not precisely a fair comparison). The following theorem summarizes the worst-case performance of the algorithm; its proof is direct from Theorems 1 and 2. Theorem 3. The cost of our algorithm's final κ-means solution is at most O(C*), where C* is the cost of the optimum k-means solution, with probability 1 − 1/poly(n). If κ is a large constant times k log n and β > 2 is fairly large, then the cost of our algorithm's solution will approach C* · 4β/(β−1); the extra β factor is due to "overshooting" the best facility cost f. We note that if we run the streaming part of the algorithm M times in parallel, we can take the solution with the smallest final facility cost. This improves the approximation factor to roughly 4β^(1+1/M)/(β−1), which approaches 4 in the limit. Of course, increasing κ can substantially increase the memory requirement and increasing M can increase both memory and running time requirements. When the algorithm terminates, we have a set of κ weighted means which we must reduce to k means.
A theoretically sound approach involves mapping these means back to randomly selected points from the original set (these can be maintained in a streaming manner) and then approximating k-means on κ points using a non-streaming algorithm. The overall approximation ratio will be twice the ratio established by our algorithm (we lose a factor of two by mapping back to the original points) plus the approximation ratio for the non-streaming algorithm. If we use the algorithm of [20] along with a large κ, we will get an approximation factor of twice 4 plus 9+ε, for roughly 17. Ball k-means can then reduce the approximation factor to 1 + O(σ²) if the inputs were σ-separable (as in [25] and [9]; the hidden constant will be reduced by our more accurate algorithm). 3 Approximate Nearest Neighbor The most time-consuming step in our algorithm is measuring δ in lines 5 and 17. This requires as many as κ distance computations; there are a number of results enabling fast computation of approximate nearest neighbors, and applying these results will improve our running time. If we can assume that errors in nearest neighbor computation are independent from one point to the next (and that the expected result is good), our analysis from the previous section applies. Unfortunately, many of the algorithms construct a random data structure to store the facilities, then use this structure to resolve all queries; this type of approach implies that errors are not independent from one query to the next. Nonetheless we can obtain a constant approximation for sufficiently large choices of β. For our empirical result, we will use a very simple approximate nearest-neighbor algorithm based on random projection. This has reasonable performance in expectation, but is not independent from one step to the next. While the theoretical results from this particular approach are not very strong, it works very well in our experiments.
For this implementation, a vector w is created, with each of the d dimensions chosen independently and uniformly at random from [0,1). We store our facilities sorted by their inner product with w. When a new point x arrives, instead of taking O(κ) time to determine its (exact) nearest neighbor, we instead use O(log κ) time to find the two facilities that x · w is between. We determine the (exact) closer of these two facilities; this determines the value of δ in lines 5 and 17 and the "closest" facility in lines 9 and 21. Theorem 4. If our approximate nearest neighbor computation finds a facility with distance at most ν times the distance to the closest facility in expectation, then the approximation ratio increases by a constant factor. We defer explanation of how we form the stronger theoretical result to the appendix. 4 Empirical Evaluation A comparison of algorithms on real data sets gives a great deal of insight as to their relative performance. Real data is not worst-case, implying that neither the asymptotic performance nor the running-time bounds claimed in theoretical results are necessarily tight. Of course, empirical evaluation depends heavily on the data sets selected for the experiments. We selected data sets which have been used previously to demonstrate streaming algorithms. A number of the data sets analyzed in previous work were not particularly large, probably so that batch-processing algorithms would terminate quickly on those inputs. The main motivation for streaming is very large data sets, so we are more interested in sets that might be difficult to fit in main memory and focused on the largest examples. We looked to [2], and used the two biggest data sets they considered. These were the BigCross dataset and the Census1990 dataset. All the other data sets in [2, 4] were either subsets of these or were well under a half million points. A necessary input for each of these algorithms is the desired number of clusters.
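The projection scheme just described can be sketched as follows (the class and method names are ours, for illustration; this is not the authors' implementation):

```python
import bisect
import random

class ProjectionNN:
    """Random-projection approximate nearest neighbor from the text:
    facilities are kept sorted by inner product with a random vector w
    drawn uniformly from [0,1)^d; a query binary-searches for the two
    facilities whose projections bracket the query's projection and
    returns the (exactly) closer of the two."""
    def __init__(self, facilities, dim, seed=0):
        rng = random.Random(seed)
        self.w = [rng.random() for _ in range(dim)]
        self.items = sorted((self._proj(p), p) for p in facilities)
        self.keys = [key for key, _ in self.items]
    def _proj(self, p):
        return sum(pi * wi for pi, wi in zip(p, self.w))
    def query(self, x):
        # O(log kappa) search for the bracketing pair, then exact check
        i = bisect.bisect_left(self.keys, self._proj(x))
        cands = [p for _, p in self.items[max(i - 1, 0):i + 1]]
        return min(cands,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))
```

As the text notes, the returned facility is only approximately nearest; the same structure serves the same queries, so its errors are not independent across points.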
Previous work chose k seemingly arbitrarily; typical values were of the form {5, 10, 15, 20, 25}. While this input provides a well-defined geometry problem, it fails to capture any information about how k-means is used in practice and need not lead to separable data. (The BigCross dataset is 11,620,300 points in 57-dimensional space; it is available from [1]. The Census1990 dataset is 2,458,285 points in 68 dimensions; it is available from [15].) Instead, we want to select k such that the best k-means solution is much cheaper than the best (k−1)-means solution. Since k-means is NP-Hard, we cannot solve large instances to optimality. For the Census dataset we ran several iterations of the algorithm of [25] for each of many values of k. We took the best observed cost for each value of k, and found the four values of k minimizing the ratio of k-means cost to (k−1)-means cost. This was not possible for the larger BigCross dataset. Instead, we ran a modified version of our algorithm; at the end of a phase, it adjusts the facility cost and restarts the stream. This avoids the problem of compounding the approximation factor at the end of a phase. As with Census, we ran this for consecutive values of k and chose the best ratios of observed values; we chose two, rather than four, so that we could finish our experiments in a reasonable amount of time. Our approach to selecting k is closer to what is done in practice, and is more likely to yield meaningful results. We do not compare to the algorithm of [9]. First, its memory is not configurable, so it does not fit into the common baseline that we will define shortly. Second, the memory requirements and runtime, while asymptotically nice, have large leading constants that make it impractical. In fact, it was an attempt to implement this algorithm that initially motivated the work on this paper. 4.1 Implementation Discussion The divide and conquer ("D&C") algorithm [4] can use its available memory in two possible ways.
First, it can use the entire amount to read from the stream, writing the results of computing their 3k log k means to disk; when the stream is exhausted, this file is treated as a stream, until an iteration produces a file that fits entirely into main memory. Alternately, the available memory could be partitioned into layers; the first layer would be filled by reading from the stream, and the weighted facilities produced would be stored in the second. When any layer is full, it can be clustered and the result placed in a higher layer, replacing the use of files and disk. Upon completion of the stream, any remaining points are gathered and clustered to produce k final means. When larger amounts of memory are available, the latter method is preferred. With smaller amounts, however, this is not always possible, and when it is possible, it can produce worse actual running times than a disk-based approach. As our goal is to judge streaming algorithms under low memory conditions, we used the first approach, which is more fitting to such a constraint. Each algorithm was programmed in C/C++, compiled with g++, and run under Ubuntu Linux (10.04 LTS) on an HP Pavilion p6520f Desktop PC, with an AMD Athlon II X4 635 Processor running at 2.9 GHz and with 6 GB main memory (although nowhere near the entirety of this was used by any algorithm). For StreamKM++, the authors' implementation [2], also in C, was used instead. With all algorithms, the reported cost is determined by taking the resulting k facilities and computing the k-means cost across the entire dataset. The time to compute this cost is not included in the reported running times of the algorithms. Each test case was run 10 times and the average costs and running times were reported. 4.2 Experimental Design Our goal is to compare the algorithms at a common basepoint.
Instead of just comparing for the same dataset and cluster count, we further constrained each algorithm to use the same amount of memory (in terms of number of points stored in random access). The memory constraints were chosen to reflect the usage of small amounts of memory that are close to the algorithm designers' specifications, where possible. Ailon et al. [4] suggest √(nk) memory for the batch process; this memory availability is marked in the charts by an asterisk. The suggestion from [2] for a coreset of size 200k was not run for all algorithms, as the amount of memory necessary for computing a coreset of this size is much larger than the other cases, and our goal is to compare the algorithms at a small memory limit. This does produce a drop in solution quality compared to running the algorithm at their suggested parameters, although their approach remains competitive. Finally, our algorithm suggests memory of κ = k log n or a small constant times the same. In each case, the memory constraint dictates the parameters; for the divide and conquer algorithm, this is simply the batch size. The coreset size is also dictated by the available memory. Our algorithm is a little more parametrizable; when M memory is available, we allowed κ = M/5 and each facility to have four samples. (Code for our algorithms is available at http://web.engr.oregonstate.edu/~shindler/.)
[Figures 1-8: bar charts of k-means cost and running time versus available memory for each algorithm. Figure 1: Census Data, k=8, cost. Figure 2: Census Data, k=8, time. Figure 3: Census Data, k=12, cost. Figure 4: Census Data, k=12, time. Figure 5: BigCross Data, k=13, cost. Figure 6: BigCross Data, k=13, time. Figure 7: BigCross Data, k=24, cost. Figure 8: BigCross Data, k=24, time.]
4.3 Discussion of Results We see that our algorithms are much faster than the D&C algorithm, while having a comparable (and often better) solution quality. We find that we compare well to StreamKM++ in average results, with a closer standard deviation and a better sketch of the data produced. Furthermore, our algorithm stands to gain the most from improved solutions to batch k-means, due to the better representative sample present after the stream is processed. The prohibitively high running time of the divide-and-conquer algorithm [4] is due to the many repeated instances of running their k-means# algorithm on each batch of the given size. For sufficiently large memory, this is not problematic, as very few batches will need this treatment. Unfortunately, with very small locally available memory, there will be an immense number of repeated calls, and the overall running time will suffer greatly. In particular, the observed running time was much worse than the other approaches. For the Census dataset, k = 12 case, for example, the slowest run of our algorithm (20 minutes) and the fastest run of the D&C algorithm (125 minutes) occurred at the same case. It is because of this discrepancy that we present the chart of algorithm running times as a log-plot.
Furthermore, due to the prohibitively high running time on the smaller data set, we omitted the divide-and-conquer algorithm from the experiment with the larger set. The decline in accuracy for StreamKM++ at very low memory can be partially explained by the Θ(k² log⁸ n) points' worth of memory needed for a strong guarantee in previous theory work [12]. However, the fact that the algorithm is able to achieve a good approximation in practice while using far less than that amount of memory suggests that improved provable bounds for coreset algorithms may be on the horizon. We should note that the performance of the algorithm declines sharply as the memory difference with the authors' specification grows, but gains accuracy as the memory grows. All three algorithms can be described as computing a weighted sketch of the data, and then solving k-means on that sketch. The final approximation ratios can be described as α(1 + ε), where α is the loss from the final batch algorithm. The coreset ε is a direct function of the memory allowed to the algorithm, and can be made arbitrarily small. However, the memory needed to provably reduce ε to a small constant is quite substantial, and while StreamKM++ does produce a good resulting clustering, it is not immediately clear that the discovery of better batch k-means algorithms would improve their solution quality. Our algorithm's ε represents the ratio of the cost of our κ-means solution to the cost of the optimum k-means solution. The provable value is a large constant, but since κ is much larger than k, we would expect better performance in practice, and we observe this effect in our experiments. For our algorithm, the observed value of 1+ε has been typically between 1 and 3, whereas the D&C approach did not yield one better than 24, and was high (low thousands) for the very low memory conditions. The coreset algorithm was the worst, with even the best values in the mid ten figures (tens to hundreds of billions).
The low ratio for our algorithm also suggests that our κ facilities form a good sketch of the overall data, and thus our observed accuracy can be expected to improve as more accurate batch k-means algorithms are discovered. Acknowledgments We are grateful to Christian Sohler's research group for providing their code for the StreamKM++ algorithm. We also thank Jennifer Wortman Vaughan, Thomas G. Dietterich, Daniel Sheldon, Andrea Vattani, and Christian Sohler for helpful feedback on drafts of this paper. This work was done while all the authors were at UCLA; at that time, Adam Meyerson and Michael Shindler were partially supported by NSF CIF Grant CCF-1016540. References [1] http://www.cs.uni-paderborn.de/en/fachgebiete/ag-bloemer/research/clustering/streamkmpp. [2] Marcel R. Ackermann, Christian Lammersen, Marcus Martens, Christoph Raupach, Christian Sohler, and Kamil Swierkot. StreamKM++: A clustering algorithm for data streams. In ALENEX, 2010. [3] Charu C. Aggarwal, editor. Data Streams: Models and Algorithms. Springer, 2007. [4] Nir Ailon, Ragesh Jaiswal, and Claire Monteleoni. Streaming k-means approximation. In NIPS, 2009. [5] Khaled Alsabti, Sanjay Ranka, and Vineet Singh. An efficient k-means clustering algorithm. In HPDM, 1998. [6] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, January 2008. [7] David Arthur and Sergei Vassilvitskii. k-means++: The Advantages of Careful Seeding. In SODA, 2007. [8] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristic for k-median and facility location problems. In STOC, 2001. [9] Vladimir Braverman, Adam Meyerson, Rafail Ostrovsky, Alan Roytman, Michael Shindler, and Brian Tagiku. Streaming k-means on Well-Clusterable Data. In SODA, 2011. [10] Mihai Badoiu, Sariel Har-Peled, and Piotr Indyk. Approximate clustering via core-sets. In STOC, 2002.
[11] Moses Charikar, Liadan O'Callaghan, and Rina Panigrahy. Better streaming algorithms for clustering problems. In STOC, 2003. [12] Ke Chen. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM J. Comput., 2009. [13] Dan Feldman, Morteza Monemizadeh, and Christian Sohler. A PTAS for k-means clustering based on weak coresets. In SCG, 2007. [14] Gereon Frahling and Christian Sohler. Coresets in dynamic geometric data streams. In STOC, 2005. [15] A. Frank and A. Asuncion. UCI machine learning repository, 2010. [16] Sudipto Guha, Adam Meyerson, Nina Mishra, Rajeev Motwani, and Liadan O'Callaghan. Clustering data streams: Theory and practice. In TKDE, 2003. [17] Sudipto Guha, Nina Mishra, Rajeev Motwani, and Liadan O'Callaghan. Clustering data streams. In FOCS, 2000. [18] Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In STOC, 2004. [19] Anil Kumar Jain, M. Narasimha Murty, and Patrick Joseph Flynn. Data clustering: a review. ACM Computing Surveys, 31(3), September 1999. [20] Tapas Kanungo, David Mount, Nathan Netanyahu, Christine Piatko, Ruth Silverman, and Angela Wu. A local search approximation algorithm for k-means clustering. In SCG, 2002. [21] Stuart Lloyd. Least Squares Quantization in PCM. In Special issue on quantization, IEEE Transactions on Information Theory, 1982. [22] James MacQueen. Some methods for classification and analysis of multivariate observations. In Berkeley Symposium on Mathematical Statistics and Probability, 1967. [23] Joel Max. Quantizing for minimum distortion. IEEE Transactions on Information Theory, 1960. [24] Adam Meyerson. Online facility location. In FOCS, 2001. [25] Rafail Ostrovsky, Yuval Rabani, Leonard Schulman, and Chaitanya Swamy. The Effectiveness of Lloyd-Type Methods for the k-Means Problem. In FOCS, 2006. [26] Rina Panigrahy. Entropy based nearest neighbor search in high dimensions. In SODA, 2006. [27] Dan Pelleg and Andrew Moore.
Accelerating exact k-means algorithms with geometric reasoning. In KDD, 1999. [28] Steven J. Phillips. Acceleration of k-means and related clustering problems. In ALENEX, 2002. [29] Andrea Vattani. k-means requires exponentially many iterations even in the plane. Discrete & Computational Geometry, June 2011.
Message-Passing for Approximate MAP Inference with Latent Variables Jiarong Jiang Dept. of Computer Science University of Maryland, CP jiarong@umiacs.umd.edu Piyush Rai School of Computing University of Utah piyush@cs.utah.edu Hal Daumé III Dept. of Computer Science University of Maryland, CP hal@umiacs.umd.edu Abstract We consider a general inference setting for discrete probabilistic graphical models where we seek maximum a posteriori (MAP) estimates for a subset of the random variables (max nodes), marginalizing over the rest (sum nodes). We present a hybrid message-passing algorithm to accomplish this. The hybrid algorithm passes a mix of sum and max messages depending on the type of source node (sum or max). We derive our algorithm by showing that it falls out as the solution of a particular relaxation of a variational framework. We further show that the Expectation Maximization algorithm can be seen as an approximation to our algorithm. Experimental results on synthetic and real-world datasets, against several baselines, demonstrate the efficacy of our proposed algorithm. 1 Introduction Probabilistic graphical models provide a compact and principled representation for capturing complex statistical dependencies among a set of random variables. In this paper, we consider the general maximum a posteriori (MAP) problem in which we want to maximize over a subset of the variables (max nodes, denoted X), marginalizing out the rest (sum nodes, denoted Z). This problem is termed the Marginal-MAP problem. A typical example is the minimum Bayes risk (MBR) problem [1], where the goal is to find an assignment x̂ which minimizes the expected loss ℓ(x̂, x) with respect to some (usually unknown) truth x. Since x is latent, we need to marginalize it out before optimizing with respect to x̂.
Although the specific problems of estimating marginals and estimating MAP individually have been studied extensively [2, 3, 4], similar developments for the more general problem of simultaneous marginal and MAP estimation are lacking. More recently, [5] proposed a method based on optimizing a variational objective on specific graph structures; it is a development concurrent with the method we propose in this paper (please refer to the supplementary material for further details and other related work). This problem is fundamentally difficult. As mentioned in [6, 7], even for a tree-structured model, we cannot solve the Marginal-MAP problem exactly in poly-time unless P = NP. Moreover, it has been shown [8] that even if a joint distribution p(x, z) belongs to the exponential family, the corresponding marginal distribution p(x) = Σ_z p(x, z) is in general not in the exponential family (with a very short list of exceptions, such as Gaussian random fields). This means that we cannot directly apply algorithms for MAP inference to our task. Motivated by this problem, we propose a hybrid message-passing algorithm which is both intuitive and justified according to variational principles. Our hybrid message-passing algorithm uses a mix of sum and max messages, with the message type depending on the source node type. Experimental results on chain- and grid-structured synthetic data sets and another real-world dataset show that our hybrid message-passing algorithm compares favorably with standard sum-product, standard max-product, and the Expectation-Maximization algorithm, which iteratively provides MAP and marginal estimates. Our estimates can be further improved by a few steps of local search [6]. Therefore, using the solution found by our hybrid algorithm to initialize a local search algorithm greatly improves both accuracy and convergence speed compared to the greedy stochastic search method described in [6]. We also give an example in Sec.
5 of how our algorithm can also be used to solve other practical problems that can be cast in the Marginal-MAP framework. In particular, the minimum Bayes risk [9] problem for decomposable loss functions can be readily solved under this framework. 2 Problem Setting In our setting, the nodes in a graphical model with discrete random variables are divided into two sets: max and sum nodes. We denote a graph G = (V, E), V = X ∪ Z, where X is the set of nodes for which we want to compute the MAP assignment (max nodes), and Z is the set of nodes for which we need the marginals (sum nodes). Let x = {x_1, …, x_m} (x_s ∈ X_s) and z = {z_1, …, z_n} (z_s ∈ Z_s) be the random variables associated with the nodes in X and Z respectively. The exponential family distribution p over these random variables is defined as follows: p_θ(x, z) = exp[⟨θ, φ(x, z)⟩ − A(θ)], where φ(x, z) is the sufficient statistics of the enumeration of all node assignments, and θ is the vector of canonical or exponential parameters. A(θ) = log Σ_{x,z} exp[⟨θ, φ(x, z)⟩] is the log-partition function. In this paper, we consider only pairwise node interactions and use the standard overcomplete representation of the sufficient statistics [10] (defined via the indicator function I later). The general MAP problem can be formalized as the following maximization problem:

x* = arg max_x Σ_z p_θ(x, z)   (1)

with corresponding marginal probabilities of the z nodes, given x*:

p(z_s | x*) = Σ_{z \ {z_s}} p(z | x*),  s = 1, …, n   (2)

Before proceeding, we introduce some notation for clarity of exposition. Subscripts s, u, t, etc. denote nodes in the graphical model. z_s and x_s are sum and max random variables respectively, associated with some node s. v_s can be either a sum (z_s) or a max (x_s) random variable, associated with some node s. N(s) is the set of neighbors of node s. X_s, Z_s, V_s are the state spaces from which x_s, z_s, v_s take values.
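On a model small enough to enumerate, the objective in Eqs. (1)–(2) can be evaluated by brute force. The following Python sketch (illustrative only, not from the paper; all potentials are made-up values) sums out the sum nodes to score each max-node assignment on a tiny binary chain:

```python
import itertools
import math

# Toy pairwise model on a 4-node chain 0-1-2-3, binary states.
# theta_node[s][i] and theta_edge[(s,t)][i][j] play the role of the
# canonical parameters; the values are arbitrary illustrations.
theta_node = [[0.2, -0.1], [0.0, 0.3], [-0.4, 0.1], [0.5, 0.2]]
edges = [(0, 1), (1, 2), (2, 3)]
theta_edge = {e: [[0.3, -0.2], [-0.2, 0.3]] for e in edges}

max_nodes = [0, 1]   # X: nodes to maximize over
sum_nodes = [2, 3]   # Z: nodes to marginalize out

def score(assign):
    s = sum(theta_node[v][assign[v]] for v in range(4))
    return s + sum(theta_edge[(a, b)][assign[a]][assign[b]] for a, b in edges)

def marginal_map():
    best_x, best_val = None, -math.inf
    for x in itertools.product([0, 1], repeat=len(max_nodes)):
        # p(x) up to a constant: sum over all completions of the sum nodes
        total = 0.0
        for z in itertools.product([0, 1], repeat=len(sum_nodes)):
            assign = dict(zip(max_nodes, x))
            assign.update(zip(sum_nodes, z))
            total += math.exp(score(assign))
        if total > best_val:
            best_x, best_val = x, total
    return best_x

print(marginal_map())
```

This enumeration is exponential in |Z| and |X|, which is exactly why the paper develops message-passing approximations; the sketch serves only as a reference oracle on toy instances.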
2.1 Message Passing Algorithms The sum-product and max-product algorithms are standard message-passing algorithms for inferring marginal and MAP estimates, respectively, in probabilistic graphical models. The idea is to store a belief state associated with each node and to iteratively pass messages between adjacent nodes, which are used to update the belief states. It is known [11] that these algorithms are guaranteed to converge to the exact solution on trees or polytrees. On loopy graphs, they are no longer guaranteed to converge, but they can still provide good estimates when they do converge [12]. In the standard sum-product algorithm, the message M_ts passed from node t to one of its neighbors s is as follows:

M_ts(v_s) ← κ Σ_{v'_t ∈ V_t} { exp[θ_st(v_s, v'_t) + θ_t(v'_t)] ∏_{u ∈ N(t)\s} M_ut(v'_t) }   (3)

where κ is a normalization constant. When the messages converge, i.e. {M_ts, M_st} does not change for any pair of nodes s and t, the belief (pseudomarginal distribution) for node s is given by µ_s(v_s) = κ exp{θ_s(v_s)} ∏_{t ∈ N(s)} M_ts(v_s). The outgoing messages for the max-product algorithm have the same form but with a maximization instead of the summation in Eq. (3). After convergence, the MAP assignment for each node is the assignment with the highest max-marginal probability. On loopy graphs, the tree-reweighted sum and max product algorithms [13, 14] can help find upper bounds for the marginal or MAP problem. They decompose the loopy graph into several spanning trees and reweight the messages by the edge appearance probabilities. 2.2 Local Search Algorithm Eq (1) can be viewed as performing variable elimination for the z nodes first, followed by a maximization over x. Its maximization step may be performed using heuristic search techniques [7, 6]. Eq (2) can be computed by running standard sum-product over z, given the MAP assignment x*. In [6], the assignment for the MAP nodes is found by greedily searching the best neighboring assignments which differ on only one node.
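As an illustration of the update in Eq. (3), here is a minimal Python sketch of sum-product on a small binary chain. It is not the paper's implementation; the potentials are arbitrary illustrative values, and on a tree the resulting beliefs are the exact marginals:

```python
import math

# Sum-product (Eq. 3) on a binary chain 0-1-2-3 with illustrative potentials.
theta_node = [[0.2, -0.1], [0.0, 0.3], [-0.4, 0.1], [0.5, 0.2]]
edges = [(0, 1), (1, 2), (2, 3)]
theta_edge = {e: [[0.3, -0.2], [-0.2, 0.3]] for e in edges}
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def edge_theta(s, t, vs, vt):
    # pairwise potentials are stored once per undirected edge
    return (theta_edge[(s, t)][vs][vt] if (s, t) in theta_edge
            else theta_edge[(t, s)][vt][vs])

# msg[(t, s)][v_s]: message from node t to neighbor s, as in Eq. (3)
msg = {(t, s): [1.0, 1.0] for s in nbrs for t in nbrs[s]}
for _ in range(10):  # more sweeps than the chain diameter: converges on a tree
    for (t, s) in list(msg):
        m = [sum(math.exp(edge_theta(s, t, vs, vt) + theta_node[t][vt])
                 * math.prod(msg[(u, t)][vt] for u in nbrs[t] if u != s)
                 for vt in (0, 1))
             for vs in (0, 1)]
        z = sum(m)
        msg[(t, s)] = [mi / z for mi in m]  # normalize for numerical stability

def belief(s):
    # node belief mu_s; equals the exact marginal on a tree after convergence
    b = [math.exp(theta_node[s][v]) * math.prod(msg[(t, s)][v] for t in nbrs[s])
         for v in (0, 1)]
    z = sum(b)
    return [bi / z for bi in b]

print(belief(0))
```

Replacing the inner `sum(...)` over `vt` with `max(...)` gives the max-product variant mentioned above.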
However, the hybrid algorithm we propose allows simultaneously approximating both Eq (1) and Eq (2). 3 HYBRID MESSAGE PASSING In our setting, we wish to compute MAP estimates for one set of nodes and marginals for the rest. One possible approach is to run the standard sum/max product algorithms over the graph, and find the most likely assignment for each max node according to the maximum of the sum or max marginals¹. These naïve approaches have their own shortcomings; for example, although using standard max-product may perform reasonably when there are many max nodes, it inevitably ignores the effect of the sum nodes, which should ideally be summed over. This is analogous to the difference between EM for Gaussian mixture models and K-means. (See Sec. 6.) 3.1 ALGORITHM We now present a hybrid message-passing algorithm which passes sum-style or max-style messages based on the type of node from which the message originates. In the hybrid message-passing algorithm, a sum node sends sum messages to its neighbors and a max node sends max messages. The type of message passed depends on the type of the source node, not the destination node. More specifically, the outgoing messages from a source node are as follows:

• Message from sum node t to any neighbor s:
M_ts(v_s) ← κ_1 Σ_{z'_t ∈ Z_t} { exp[θ_st(v_s, z'_t) + θ_t(z'_t)] ∏_{u ∈ N(t)\s} M_ut(z'_t) }   (4)

• Message from max node t to any neighbor s:
M_ts(v_s) ← κ_2 max_{x'_t ∈ X_t} { exp[θ_st(v_s, x'_t) + θ_t(x'_t)] ∏_{u ∈ N(t)\s} M_ut(x'_t) }   (5)

where κ_1, κ_2 are normalization constants. Algorithm 1 shows the procedure for hybrid message-passing.

Algorithm 1 Hybrid Message-Passing Algorithm
Inputs: Graph G = (V, E), V = X ∪ Z, potentials θ_s, s ∈ V and θ_st, (s, t) ∈ E.
1. Initialize the messages to some arbitrary value.
2. For each node s ∈ V in G, do the following until the messages converge (or the maximum number of iterations is reached):
• If s ∈ X, update messages by Eq. (5).
• If s ∈ Z, update messages by Eq. (4).
3. Compute the local belief for each node s.
µ_s(v_s) = κ exp{θ_s(v_s)} ∏_{t ∈ N(s)} M_ts(v_s)
4. For all x_s ∈ X, return arg max_{x_s ∈ X_s} µ_s(x_s).
5. For all z_s ∈ Z, return µ_s(z_s).
When there is only a single type of node in the graph, the hybrid algorithm reduces to the standard max- or sum-product algorithm. Otherwise, it passes the different messages simultaneously and gives an approximation to the MAP assignment on the max nodes as well as the marginals on the sum nodes. On loopy graphs, we can also apply this scheme to pass hybrid tree-reweighted messages between nodes to obtain marginal and MAP estimates. (See Appendix C of the supplementary material.) ¹Running the standard sum-product algorithm and choosing the maximum likelihood assignment for the max nodes is also called maximum marginal decoding [15, 16]. 3.2 VARIATIONAL DERIVATION In this section, we show that the Marginal-MAP problem can be framed under a variational framework, and that the hybrid message passing algorithm turns out to be a solution of it (a detailed derivation is in Appendix A of the supplementary material). To see this, we construct a new graph G_x̄ with the x_s assignments fixed to be x̄ ∈ X = X_1 × · · · × X_m, so the log-partition function A(θ_x̄) of the graph G_x̄ is

A(θ_x̄) = log Σ_z p(x̄, z) + A(θ) = log p(x̄) + const   (6)

As the constant depends only on the log-partition function of the original graph and does not vary with different assignments of the MAP nodes, A(θ_x̄) exactly estimates the log-likelihood of the assignment x̄. Therefore arg max_{x̄ ∈ X} log p(x̄) = arg max_{x̄ ∈ X} A(θ_x̄). Moreover, A(θ_x̄) can be approximated by the following [10]:

A(θ_x̄) ≈ sup_{µ ∈ M(G_x̄)} ⟨θ, µ⟩ + H_Bethe(µ)   (7)

where M(G_x̄) is the following marginal polytope of the graph G_x̄:

M(G_x̄) = { µ | µ_s(z_s), µ_st(v_s, v_t): marginals with x̄ fixed to its assignment; µ_s(x_s) = 1 if x_s = x̄_s, and 0 otherwise }   (8)

Recall that v_s stands for x_s or z_s.
H_Bethe(µ) is the Bethe entropy of the graph:

H_Bethe(µ) = Σ_s H_s(µ_s) − Σ_{(s,t) ∈ E} I_st(µ_st),  H_s(µ_s) = −Σ_{v_s ∈ V_s} µ_s(v_s) log µ_s(v_s)   (9)

I_st(µ_st) = Σ_{(v_s, v_t) ∈ V_s × V_t} µ_st(v_s, v_t) log [µ_st(v_s, v_t) / (µ_s(v_s) µ_t(v_t))]

For readability, we use µ_sum, µ_max to subsume the node and pairwise marginals for sum/max nodes, and µ_sum→max, µ_max→sum are the pairwise marginals for edges between different types of nodes. The direction here is used to be consistent with the distinction of the constraints as well as of the messages. Solving the Marginal-MAP problem is therefore equivalent to solving the following optimization problem:

max_{x̄ ∈ X} sup_{µ_other ∈ M(G_x̄)} ⟨θ, µ⟩ + H_Bethe(µ) ≈ sup_{µ_max ∈ M_x̄} sup_{µ_other ∈ M(G_x̄)} ⟨θ, µ⟩ + H_Bethe(µ)   (10)

µ_other contains all node/pairwise marginals other than µ_max. The Bethe entropy terms can be written as (H is the entropy and I the mutual information)

H_Bethe(µ) = H_µmax + H_µsum − I_µmax→µmax − I_µsum→µsum − I_µmax→µsum − I_µsum→µmax

If we force µ to satisfy the second condition in (8), the entropy of the max nodes H_µmax = H_s(µ_s) = 0, ∀s ∈ X, and the mutual information between max nodes I_µmax→µmax = I_st(x_s, x_t) = 0, ∀s, t ∈ X. For the mutual information between different types of nodes, we can either force x_s to have integral solutions, relax x_s to have non-integral solutions, or relax x_s in one direction². In practice, we relax the mutual information on the messages from sum nodes to max nodes, so the mutual information in the other direction is I_µmax→µsum = I_st(x_s, z_t) = Σ_{(x_s, z_t) ∈ X_s × Z_t} µ_st(x_s, z_t) log [µ_st(x_s, z_t) / (µ_s(x_s) µ_t(z_t))] = Σ_{z_t ∈ Z_t} µ_st(x*, z_t) log [µ_st(x*, z_t) / (µ_s(x*) µ_t(z_t))] = 0, ∀s ∈ X, t ∈ Z, where x* is the assigned state of x at node s. Finally, we require only that the sum nodes satisfy normalization and marginalization conditions; the entropy for sum nodes, the mutual information between sum nodes, and that from sum nodes to max nodes can be nonzero.
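As a numerical aside on Eq. (9): on a tree, the Bethe entropy evaluated at the exact node and pairwise marginals coincides with the true joint entropy, which makes the approximation in Eq. (7) exact there. A short Python sketch checking this on a toy chain (illustrative parameters, not from the paper):

```python
import itertools
import math

# Bethe entropy (Eq. 9) from node and pairwise marginals of a 3-node binary
# chain; on a tree it equals the exact entropy. Parameters are illustrative.
theta_node = [[0.4, -0.2], [0.0, 0.5], [-0.3, 0.1]]
edges = [(0, 1), (1, 2)]
theta_edge = {e: [[0.2, -0.1], [-0.1, 0.2]] for e in edges}

states = list(itertools.product([0, 1], repeat=3))
w = [math.exp(sum(theta_node[v][a[v]] for v in range(3))
              + sum(theta_edge[e][a[e[0]]][a[e[1]]] for e in edges))
     for a in states]
Z = sum(w)
p = {a: wi / Z for a, wi in zip(states, w)}

# exact node and pairwise marginals by enumeration
mu_s = [[sum(pa for a, pa in p.items() if a[v] == i) for i in (0, 1)]
        for v in range(3)]
mu_st = {e: [[sum(pa for a, pa in p.items() if a[e[0]] == i and a[e[1]] == j)
              for j in (0, 1)] for i in (0, 1)] for e in edges}

def bethe_entropy(mu_s, mu_st):
    # H_Bethe = sum of node entropies minus sum of edge mutual informations
    H = -sum(m * math.log(m) for mus in mu_s for m in mus if m > 0)
    I = sum(mu_st[(s, t)][i][j]
            * math.log(mu_st[(s, t)][i][j] / (mu_s[s][i] * mu_s[t][j]))
            for (s, t) in edges for i in (0, 1) for j in (0, 1)
            if mu_st[(s, t)][i][j] > 0)
    return H - I

H_exact = -sum(pa * math.log(pa) for pa in p.values())
print(abs(bethe_entropy(mu_s, mu_st) - H_exact))  # ~0 on a tree
```

On loopy graphs this equality fails, which is why Eq. (7) is only an approximation in general.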
The above process relaxes the polytope M(G_x̄) to M_x̄ × L_z(G_x̄), where

L_z(G_x̄) = { µ ≥ 0 | Σ_{z_s} µ_s(z_s) = 1; µ_s(x_s) = 1 iff x_s = x̄_s; Σ_{z_t} µ_st(v_s, z_t) = µ_s(v_s); Σ_{z_s} µ_st(z_s, v_t) = µ_t(v_t); µ_st(x_s, z_t) = µ_t(z_t) iff x_s = x̄_s; µ_st(x_s, x_t) = 1 iff x_s = x̄_s, x_t = x̄_t }

²This results in four different relaxations for the different combinations of message types, and the hybrid algorithm performed empirically the best. This analysis results in the following optimization problem:

sup_{µ_max ∈ M_x̄} sup_{µ_others ∈ M(G_x̄)} ⟨θ, µ⟩ + H(µ_sum) − I(µ_sum→sum) − I(µ_sum→max)

Further relaxing µ_x̄s to have non-integral solutions, define

L(G) = { µ ≥ 0 | Σ_{v_s} µ_s(v_s) = 1; Σ_{v_t} µ_st(v_s, v_t) = µ_s(v_s); Σ_{v_s} µ_st(v_s, v_t) = µ_t(v_t) }

Finally, we get

sup_{µ ∈ L(G)} ⟨µ, θ⟩ + H(µ_sum) − I(µ_sum→sum) − I(µ_sum→max)   (11)

So M_x̄ × M_z(G_x̄) ⊆ M_x̄ × L_z(G_x̄) ⊆ L(G). Unfortunately, M_x̄ × M_z(G_x̄) is not guaranteed to be convex, and we can only obtain an approximate solution to the problem defined in Eq (11). Taking the Lagrangian formulation, for an x node, the partial derivative of the Lagrangian with respect to µ_s(x_s), s ∈ X, keeps the same form as in the max-product derivation [10], and the situation is identical for µ_s(z_s), s ∈ Z, and for the pairwise pseudo-marginals, so the hybrid message-passing algorithm provides a solution to Eq (11) (see Appendix A of the supplementary material for a detailed derivation). 4 Expectation Maximization Another plausible approach to solving the Marginal-MAP problem is the Expectation Maximization (EM) algorithm [17], typically used for maximum likelihood parameter estimation in latent variable models. In our setting, the variables Z correspond to the latent variables. We now show one way of approaching this problem by applying the sum-product and max-product algorithms in the E and M steps respectively. To see this, let us first define³:

F(p̃, x) = E_p̃[log p(x, z)] + H(p̃(z))   (12)

where H(p̃) = −E_p̃[log p̃(z)].
Then EM can be interpreted as a joint maximization of the function F [18]: at iteration t, in the E-step, p̃^(t) is set to the p̃ that maximizes F(p̃, x^(t−1)), and in the M-step, x^(t) is the x that maximizes F(p̃^(t), x). Given F, the following two properties⁴ show that jointly maximizing the function F is equivalent to maximizing the objective function p(x) = Σ_z p(x, z). 1. With the value of x fixed in the function F, the unique solution to maximizing F(p̃, x) is given by p̃(z) = p(z|x). 2. If p̃(z) = p(z|x), then F(p̃, x) = log p(x) = log Σ_z p(x, z). 4.1 Expectation Maximization via Message Passing We can now derive the EM algorithm for solving the Marginal-MAP problem by jointly maximizing the function F. In the E-step, we need to estimate p̃(z) = p(z|x) given x. This can be done by fixing the x values at their MAP assignments and running the sum-product algorithm over the resulting graph. The M-step works by maximizing E_{p_θ(z | x̄)} log p_θ(x, z), where x̄ is the assignment given by the previous M-step. This is equivalent to maximizing E_{z ∼ p_θ(z | x̄)} log p_θ(x | z), as the log p_θ(z) term in the maximization is independent of x. max_x E_{z ∼ p_θ(z | x̄)} log p_θ(x | z) = max_x Σ_z p(z | x̄)⟨θ, φ(x, z)⟩, which in the overcomplete representation [10] can be approximated by

Σ_{s ∈ X, i} [ θ_{s;i} + Σ_{t ∈ Z, j} µ_{t;j} θ_{st;ij} ] I_{s;i}(x_s) + Σ_{(s,t) ∈ E, s,t ∈ X} Σ_{(i,j)} θ_{st;ij} I_{st;ij}(x_s, x_t) + C

where C subsumes the terms irrelevant to the maximization over x, and µ_t is the pseudo-marginal of node t given x̄⁵. The M-step then amounts to running the max-product algorithm with the potentials on the x nodes modified according to Eq. (13). Summarizing, the EM algorithm for solving marginal-MAP estimation can be interpreted as follows: • E-step: Fix x_s to the MAP assignment value from iteration (k − 1) and run sum-product to get the beliefs on the sum nodes z_s, say µ_t, t ∈ Z. ³By directly applying Jensen's inequality to the objective function max_x log Σ_z p(x, z). ⁴The proofs are straightforward following Lemmas 1 and 2 in [18], pages 4-5.
More details are in Appendix B of the supplementary material. ⁵A detailed derivation is in Appendix B.4 of the supplementary material. • M-step: Build a new graph G̃ = (Ṽ, Ẽ) containing only the max nodes: Ṽ = X and Ẽ = {(s, t) | ∀(s, t) ∈ E, s, t ∈ X}. For each max node s in the graph, set its potential to θ̃_{s;i} = θ_{s;i} + Σ_j θ_{st;ij} µ_{t;j}, where t ∈ Z and (s, t) ∈ E, and θ̃_{st;ij} = θ_{st;ij} ∀(s, t) ∈ Ẽ. Run max-product over this new graph and update the MAP assignment. 4.2 Relationship with the Hybrid Algorithm Apart from the fact that the hybrid algorithm passes the different messages simultaneously while EM does it iteratively, to see the connection with the hybrid algorithm, let us first consider the message passed in the E-step at iteration k. The x_s are fixed at the last assignment, which maximized the message at iteration k − 1, denoted x* here. The M_ut^(k−1) are the messages computed at iteration k − 1:

M_ts^(k)(z_s) = κ_1 { exp[θ_st(z_s, x*_t) + θ_t(x*_t)] ∏_{u ∈ N(t)\s} M_ut^(k−1)(x*_t) }   (13)

Now assume there exists an iterative algorithm which, at each iteration, computes the messages used in both steps of the message-passing variant of the EM algorithm, denoted M̃_ts. Eq (13) then becomes

M̃_ts^(k)(z_s) = κ_1 max_{x'_t} { exp[θ_st(z_s, x'_t) + θ_t(x'_t)] ∏_{u ∈ N(t)\s} M̃_ut^(k−1)(x'_t) }

So the max nodes (the x's) should pass max messages to their neighbors (the z's), which is what the hybrid message-passing algorithm does. In the M-step of EM (as discussed in Sec. 4), all the sum nodes t are removed from the graph and the parameters of the adjacent max nodes are modified as θ_{s;i} = θ_{s;i} + Σ_j θ_{st;ij} µ_{t;j}. µ_t is computed by sum-product in the E-step of iteration k, and these sum messages are used (in the form of the marginals µ_t) in the subsequent M-step (with the sum nodes removed). However, a max node may prefer different assignments according to different neighboring nodes.
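To make the E/M alternation of Sec. 4.1 concrete, here is a toy Python sketch (illustrative parameters, not the paper's implementation); since the model is tiny, exact enumeration stands in for the sum-product and max-product calls, which is equivalent at this scale:

```python
import itertools
import math

# EM for the Marginal-MAP problem on a toy 4-node binary chain; enumeration
# replaces the message-passing subroutines for clarity.
theta_node = [[0.2, -0.1], [0.0, 0.3], [-0.4, 0.1], [0.5, 0.2]]
edges = [(0, 1), (1, 2), (2, 3)]
theta_edge = {e: [[0.3, -0.2], [-0.2, 0.3]] for e in edges}
max_nodes, sum_nodes = [0, 1], [2, 3]

def score(assign):
    s = sum(theta_node[v][assign[v]] for v in range(4))
    return s + sum(theta_edge[(a, b)][assign[a]][assign[b]] for a, b in edges)

def e_step(x):
    # p(z | x) over the sum nodes, by enumeration (stands in for sum-product)
    w = {}
    for z in itertools.product([0, 1], repeat=2):
        w[z] = math.exp(score({0: x[0], 1: x[1], 2: z[0], 3: z[1]}))
    Z = sum(w.values())
    return {z: wz / Z for z, wz in w.items()}

def m_step(q):
    # maximize E_{z~q} log p(x, z) over x (stands in for max-product with
    # the modified potentials theta~_s of the M-step bullet above)
    def obj(x):
        return sum(qz * score({0: x[0], 1: x[1], 2: z[0], 3: z[1]})
                   for z, qz in q.items())
    return max(itertools.product([0, 1], repeat=2), key=obj)

x = (0, 0)                      # initial hard assignment
for _ in range(10):
    x = m_step(e_step(x))
print(x)
```

On this particular instance and initialization, the loop reaches a fixed point at its starting assignment, even though exhaustive maximization of Σ_z p(x, z) prefers a different one; the hard M-step decisions are what freeze the iteration, which is exactly the failure mode discussed next.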
With such uncertainties, especially during the first few iterations, it is very likely that making hard decisions will lead directly to bad local optima. In comparison, the hybrid message passing algorithm passes mixed messages instead of making deterministic assignments in each iteration. 5 MBR Decoding Most work on finding "best" solutions in graphical models focuses on the MAP estimation problem: find the x that maximizes p_θ(x). In many practical applications, one wishes to find an x that minimizes some risk, parameterized by a given loss function. This is the minimum Bayes risk (MBR) setting, which has proven useful in a number of domains, such as speech recognition [9], natural language parsing [19, 20], and machine translation [1]. We are given a loss function ℓ(x, x̂) which measures the loss of x̂ assuming x is the truth. We assume losses are non-negative. Given this loss function, the minimum Bayes risk solution is the minimizer of Eq (14):

MBR_θ = arg min_x̂ E_{x ∼ p}[ℓ(x, x̂)] = arg min_x̂ Σ_x p(x) ℓ(x, x̂)   (14)

We now assume that ℓ decomposes over the structure of x. In particular, suppose that ℓ(x, x̂) = Σ_{c ∈ C} ℓ(x_c, x̂_c), where C is some set of cliques in x, and x_c denotes the variables associated with that clique. For example, for Hamming loss, the cliques are simply the set of pairs of vertices of the form (x_i, x̂_i), and the loss simply counts the number of disagreements. Such decomposability is widely assumed in structured prediction algorithms [21, 22]. Assume ℓ_c(x, x') ≤ L ∀c, x, x'. Therefore ℓ(x, x') ≤ |C|L. We can then expand Eq (14) into the following:

MBR_θ = arg min_x̂ Σ_x p(x) ℓ(x, x̂) = arg max_x̂ Σ_x p(x)(|C|L − ℓ(x, x̂)) = arg max_x̂ Σ_x exp[ ⟨θ, φ(x)⟩ + log Σ_{c ∈ C} [L − ℓ(x_c, x̂_c)] − A(θ) ]

This resulting expression has exactly the same form as the MAP-with-marginals problem, where x is the variable being marginalized and x̂ the variable being maximized. Fig.
1 shows a simple example of transforming a MAP lattice problem into an MBR problem under Hamming loss. Therefore, we can apply our hybrid algorithm to solve the MBR problem.

Figure 1: The Augmented Model For Solving The MBR Problem Under Hamming Loss over a 6-node simple lattice
Figure 2: Comparison of Various Algorithms For Marginals on 10-Node Chain Graph [average KL-divergence on sum nodes vs. % of sum nodes; curves: max product, sum product, hybrid message passing]

6 EXPERIMENTS We perform experiments on synthetic datasets as well as a real-world protein side-chain prediction dataset [23], and compare our hybrid message-passing algorithm (both its standard belief propagation and its tree-reweighted belief propagation (TRBP) versions) against a number of baselines, such as the standard sum/max product based MAP estimates, EM, TRBP, and the greedy local search algorithm proposed in [6]. 6.1 Synthetic Data For synthetic data, we first take a 10-node chain graph with varying splits of sum vs. max nodes and random potentials. Each node can take one of two states (0/1). The node and edge potentials are drawn from UNIFORM(0,1), and we randomly pick nodes in the graph to be sum or max nodes. For this small graph, the true assignment is computable by explicitly maximizing p(x) = Σ_z p(x, z) = (1/Z) Σ_z ∏_{s ∈ V} ψ_s(v_s) ∏_{(s,t) ∈ E} ψ_st(v_s, v_t), where Z is a normalization constant and ψ_s(v_s) = exp θ_s(v_s). First, we compare the various algorithms on the MAP assignments. Assume that the aforementioned maximization gives the assignment x* = (x*_1, …, x*_n) and some algorithm gives the approximate assignment x = (x_1, …, x_n). The metrics we use here are the 0/1 loss and the Hamming loss.
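A miniature version of this setup can be reproduced in a few lines. The Python sketch below (illustrative only; potentials are made up and this is not the authors' code) implements the hybrid updates of Eqs. (4)–(5) on a short binary chain; as noted in Sec. 3.1, making every node a max (resp. sum) node recovers standard max-product (resp. sum-product):

```python
import math

# Sketch of Algorithm 1 (hybrid message passing) on a binary chain 0-1-2-3
# with illustrative potentials.
theta_node = [[0.2, -0.1], [0.0, 0.3], [-0.4, 0.1], [0.5, 0.2]]
edges = [(0, 1), (1, 2), (2, 3)]
theta_edge = {e: [[0.3, -0.2], [-0.2, 0.3]] for e in edges}
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def edge_theta(s, t, vs, vt):
    return (theta_edge[(s, t)][vs][vt] if (s, t) in theta_edge
            else theta_edge[(t, s)][vt][vs])

def hybrid_bp(max_nodes, iters=20):
    msg = {(t, s): [1.0, 1.0] for s in nbrs for t in nbrs[s]}
    for _ in range(iters):
        for (t, s) in list(msg):
            terms = [[math.exp(edge_theta(s, t, vs, vt) + theta_node[t][vt])
                      * math.prod(msg[(u, t)][vt] for u in nbrs[t] if u != s)
                      for vt in (0, 1)] for vs in (0, 1)]
            # max-style message if the source node t is a max node (Eq. 5),
            # sum-style otherwise (Eq. 4)
            m = [max(row) if t in max_nodes else sum(row) for row in terms]
            z = sum(m)
            msg[(t, s)] = [mi / z for mi in m]
    beliefs = {}
    for s in nbrs:
        b = [math.exp(theta_node[s][v]) * math.prod(msg[(t, s)][v] for t in nbrs[s])
             for v in (0, 1)]
        z = sum(b)
        beliefs[s] = [bi / z for bi in b]
    # MAP guess for max nodes, pseudomarginals for the rest
    return {s: max((0, 1), key=lambda v: beliefs[s][v]) for s in max_nodes}, beliefs

print(hybrid_bp(max_nodes={0, 1}))
```

With a mixed split such as `max_nodes={0, 1}`, the decoded max-node states and the sum-node pseudomarginals are the hybrid approximations evaluated in these experiments.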
Figure 3: Comparison of Various Algorithms For MAP Estimates on 10-Node Chain Graph: 0-1 Loss (Left), Hamming Loss (Right) [error rate vs. % of sum nodes; curves: max, sum, hybrid, EM, max+local search, sum+local search, hybrid+local search]

Fig. 3 shows the loss on the assignment of the max nodes. In the figure, as the number of sum nodes goes up, the accuracy of the standard sum-product based estimation (sum) gets better, whereas the accuracy of the standard max-product based estimation (max) worsens. However, our hybrid message-passing algorithm (hybrid) results, on average, in the lowest loss compared to the other baselines, with running times similar to the sum/max product algorithms. We also compare a stochastic greedy search approach described in [6], initialized by the results of the sum/max/hybrid algorithms (sum/max/hybrid+local search). As shown in [6], local search with sum-product initialization empirically performs better than with max-product, so later on we only compare the results with local search using sum-product initialization (LS).
Best of the three initialization methods, starting from the hybrid algorithm's results, the search algorithm can find the local optimum in very few steps, and this often happened to be the global optimum as well. In particular, it takes only 1 or 2 steps of search in the 10-node chain case and 1 to 3 steps in the 50-node tree case.

Figure 4: Approximate Log-Partition Function Scores on a 50-Node Tree (Left) and an 8×10 Grid (Right) Graph, Normalized by the Result of the Hybrid Algorithm [relative likelihood vs. % of sum nodes; tree curves: max, sum, hybrid, LS, hybrid+LS; grid curves: TR-max, TR-sum, TR-hybrid, LS, TR-hybrid+LS]

Next, we experiment with marginal estimation. Fig. 2 shows the mean KL-divergence of the marginals for the three message-passing algorithms (each averaged over 100 random experiments) compared to the true marginals of p(z|x). The greedy search of [6] is not included since it only provides MAP estimates, not marginals. The x-axis shows the percentage of sum nodes in the graph. Just as in the MAP case, our hybrid method consistently produces the smallest KL-divergence compared to the others. When the computation of the truth is intractable, the log-likelihood of the assignment can be approximated by the log-partition function with the Bethe approximation according to Sec. 3.2. Note that this is exact on trees. Here, we use a 50-node tree with binary node states and an 8 × 10 grid with varying numbers of states 1 ≤ |Y_s| ≤ 20. On the grid graph, we apply tree-reweighted sum or max product [14, 13], and our hybrid version based on TRBP.
For the edge appearance probability in TRBP, we apply a common approach that uses a greedy algorithm to find spanning trees with as many uncovered edges as possible until all the edges in the graph are covered at least once. Even though the message-passing algorithms are not guaranteed to converge on loopy graphs, we can still compare the best result they provide after a certain number of iterations. Fig. 4 presents the results. In the tree case, as expected, using the hybrid message-passing algorithm's result to initialize the local search algorithm performs the best. On the grid graph, the local search algorithm initialized by the sum-product results works well when there are few max nodes, but the search space grows exponentially with the number of max nodes, so it takes hundreds of steps to find the optimum. On the other hand, because the hybrid TRBP starts in a good area, it consistently achieves the highest likelihood among all four algorithms with fewer extra steps. 6.2 Real-world Data

Table 1: Accuracy on the 1st and the 1st & 2nd angles

χ1          | ALL    | SURFACE | CORE
sum product | 0.7900 | 0.7564  | 0.8325
max product | 0.7900 | 0.7555  | 0.8336
hybrid      | 0.7910 | 0.7573  | 0.8336
TRBP        | 0.7942 | 0.7608  | 0.8364
hybrid TRBP | 0.7950 | 0.7626  | 0.8359

χ1 ∧ χ2     | ALL    | SURFACE | CORE
sum product | 0.6482 | 0.6069  | 0.7005
max product | 0.6512 | 0.6064  | 0.7078
hybrid      | 0.6485 | 0.6051  | 0.7033
TRBP        | 0.6592 | 0.6112  | 0.7174
hybrid TRBP | 0.6597 | 0.6140  | 0.7186

We then experiment with the protein side-chain prediction dataset [23, 24], which consists of a set of protein structures for which we need to find the lowest-energy assignment of rotamer residues. There are two sets of residues: core residues and surface residues. The core residues are those connected to more than 19 other residues, and the surface residues are the others. Since the MAP accuracies are usually lower on the surface residues than on the core residues [24], we choose the surface residues to be max nodes and the core residues to be sum nodes.
The ground truth is given by the maximum likelihood assignment of the residues, so we do not expect better results on the core nodes; rather, we hope that any improvement in accuracy on the surface nodes can make up for the loss on the core nodes and thus give better performance overall. As shown in Table 1, the improvements of the hybrid methods on the surface nodes exceed the loss on the core nodes, thus improving the overall performance.

References

[1] Shankar Kumar and William Byrne. Minimum Bayes-risk decoding for statistical machine translation. In HLT-NAACL, 2004.
[2] David Sontag and Tommi Jaakkola. New outer bounds on the marginal polytope. In Advances in Neural Information Processing Systems, 2007.
[3] Amir Globerson and Tommi Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, 2007.
[4] Pradeep Ravikumar, Alekh Agarwal, and Martin J. Wainwright. Message-passing for graph-structured linear programs: proximal projections, convergence and rounding schemes. In ICML, 2008.
[5] Qiang Liu and Alexander Ihler. Variational algorithms for marginal MAP. In UAI, 2011.
[6] James D. Park. MAP complexity results and approximation methods. In UAI, 2002.
[7] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[8] Shaul K. Bar-Lev, Daoud Bshouty, Peter Enis, Gerard Letac, I-Li Lu, and Donald Richards. The diagonal multivariate natural exponential families and their classification. Journal of Theoretical Probability, pages 883–929, 1994.
[9] Vaibhava Goel and William J. Byrne. Minimum Bayes-risk automatic speech recognition. Computer Speech and Language, 14(2), 2000.
[10] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 2008.
[11] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[12] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In NIPS, 2000.
[13] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Exact MAP estimates by tree agreement. In NIPS, 2002.
[14] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. In AISTATS, 2003.
[15] Mark Johnson. Why doesn't EM find good HMM POS-taggers? In EMNLP, pages 296–305, 2007.
[16] Pradeep Ravikumar, Martin J. Wainwright, and Alekh Agarwal. Message-passing for graph-structured linear programs: Proximal methods and rounding schemes, 2008.
[17] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 1977.
[18] Radford M. Neal and Geoffrey E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355–368, 1999.
[19] Slav Petrov and Dan Klein. Discriminative log-linear grammars with latent variables. In NIPS, 2008.
[20] Ivan Titov and James Henderson. A latent variable model for generative dependency parsing. In IWPT, 2007.
[21] Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. Learning structured prediction models: a large margin approach. 2004.
[22] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
[23] Chen Yanover, Talya Meltzer, and Yair Weiss. Linear programming relaxations and belief propagation – an empirical study. Journal of Machine Learning Research, 7:1887–1907, 2006.
[24] Chen Yanover, Ora Schueler-Furman, and Yair Weiss. Minimizing and learning energy functions for side-chain prediction.
In RECOMB, 2007.
A More Powerful Two-Sample Test in High Dimensions using Random Projection Miles E. Lopes1 Laurent Jacob1 Martin J. Wainwright1,2 Departments of Statistics1 and EECS2 University of California, Berkeley Berkeley, CA 94720-3860 {mlopes,laurent,wainwrig}@stat.berkeley.edu Abstract We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Our contribution is a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T 2 statistic. Working within a high-dimensional framework that allows (p, n) →∞, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from simulated data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure with comparisons on a high-dimensional gene expression dataset involving the discrimination of different types of cancer. 1 Introduction Two-sample hypothesis tests are concerned with the question of whether two samples of data are generated from the same distribution. Such tests are among the most widely used inference procedures in treatment-control studies in science and engineering [1]. Application domains such as molecular biology and fMRI have stimulated considerable interest in detecting shifts between distributions in the high-dimensional setting, where the two samples of data {X1, . . . , Xn1} and {Y1, . . . , Yn2} are subsets of Rp, and n1, n2 ≪p [e.g., 2–5]. 
In transcriptomics, for instance, p gene expression measures on the order of hundreds or thousands may be used to investigate differences between two biological conditions, and it is often difficult to obtain sample sizes n1 and n2 larger than several dozen in each condition. In high-dimensional situations such as these, classical methods may be ineffective, or not applicable at all. Accordingly, there has been growing interest in developing testing procedures that are better suited to deal with the effects of dimension [e.g., 6–10]. A fundamental instance of the general two-sample problem is the two-sample test of means with Gaussian data. In this case, two independent sets of samples {X1, . . . , Xn1} and {Y1, . . . , Yn2} are generated in an i.i.d. manner from p-dimensional multivariate normal distributions N(µ1, Σ) and N(µ2, Σ) respectively, where the mean vectors µ1, µ2 ∈ Rp and covariance matrix Σ ≻ 0 are all fixed and unknown. The hypothesis testing problem of interest is

$$H_0: \mu_1 = \mu_2 \quad \text{versus} \quad H_1: \mu_1 \ne \mu_2. \qquad (1)$$

The most well-known test statistic for this problem is the Hotelling T² statistic, defined by

$$T^2 := \frac{n_1 n_2}{n_1 + n_2}\,(\bar X - \bar Y)^\top \hat\Sigma^{-1}(\bar X - \bar Y), \qquad (2)$$

where $\bar X := \frac{1}{n_1}\sum_{j=1}^{n_1} X_j$ and $\bar Y := \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j$ are the sample means, and $\hat\Sigma$ is the pooled sample covariance matrix, given by

$$\hat\Sigma := \frac{1}{n}\sum_{j=1}^{n_1}(X_j - \bar X)(X_j - \bar X)^\top + \frac{1}{n}\sum_{j=1}^{n_2}(Y_j - \bar Y)(Y_j - \bar Y)^\top,$$

with n := n1 + n2 − 2. When p > n, the matrix $\hat\Sigma$ is singular, and the Hotelling test is not well-defined. Even when p ≤ n, the Hotelling test is known to perform poorly if p is nearly as large as n. This behavior was demonstrated in a seminal paper of Bai and Saranadasa [6] (or BS for short), who studied the performance of the Hotelling test under (p, n) → ∞ with p/n → 1 − ε, and showed that the asymptotic power of the test suffers for small values of ε > 0. In subsequent years, a number of improvements on the Hotelling test in the high-dimensional setting have been proposed [e.g., 6–9].
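As a concrete illustration, the Hotelling statistic of Eq. (2) takes only a few lines to compute. The following is a minimal sketch in Python/NumPy (the function name is ours, not from the paper):

```python
import numpy as np

def hotelling_t2(X, Y):
    """Hotelling T^2 statistic of Eq. (2); X is n1 x p, Y is n2 x p.
    Requires p <= n1 + n2 - 2 so the pooled covariance is invertible."""
    n1, n2 = X.shape[0], Y.shape[0]
    n = n1 + n2 - 2
    diff = X.mean(axis=0) - Y.mean(axis=0)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    S = (Xc.T @ Xc + Yc.T @ Yc) / n            # pooled sample covariance
    return n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(S, diff)
```

Note that the statistic is symmetric in the two samples and is nonnegative whenever the pooled covariance is positive definite.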
In this paper, we propose a new test statistic for the two-sample test of means with multivariate normal data, applicable when p ≥ n/2. We provide an explicit asymptotic power function for our test with (p, n) → ∞, and show that under certain conditions, our test has greater asymptotic power than other state-of-the-art tests. These comparison results are valid with p/n tending to a positive constant or infinity. In addition to its advantage in terms of asymptotic power, our procedure specifies exact level-α critical values for multivariate normal data, whereas competing procedures offer only approximate level-α critical values. Furthermore, our experiments in Section 4 suggest that the critical values of our test may also be more robust than those of competing tests. Lastly, the computational cost of our procedure is modest in the n < p setting, being of order O(n²p). The remainder of this paper is organized as follows. In Section 2, we provide background on hypothesis testing and describe our testing procedure. Section 3 is devoted to a number of theoretical results about its performance. Theorem 1 in Section 3.1 provides an asymptotic power function, and Theorems 2 and 3 in Sections 3.3 and 3.4 give sufficient conditions for achieving greater power than state-of-the-art tests in the sense of asymptotic relative efficiency. In Section 4 we provide performance comparisons with ROC curves on synthetic data against a broader collection of methods, including some recent kernel-based and non-parametric approaches such as MMD [11], KFDA [12], and TreeRank [10]. Lastly, we study a high-dimensional gene expression dataset involving the discrimination of different cancer types, demonstrating that our test's false positive rate is reliable in practice. We refer the reader to the preprint [13] for proofs of our theoretical results.

Notation.
Let δ := µ1 − µ2 denote the shift vector between the distributions N(µ1, Σ) and N(µ2, Σ), and define the ordered pair of parameters θ := (δ, Σ). Let $z_{1-\alpha}$ denote the 1 − α quantile of the standard normal distribution, and let Φ be its cumulative distribution function. If A is a matrix in $\mathbb{R}^{p \times p}$, let $|||A|||_2$ denote its spectral norm (maximum singular value), and define the Frobenius norm $|||A|||_F := \sqrt{\sum_{i,j} A_{ij}^2}$. When all the eigenvalues of A are real, we denote them by $\lambda_{\min}(A) = \lambda_p(A) \le \cdots \le \lambda_1(A) = \lambda_{\max}(A)$. For a positive-definite covariance matrix Σ, let $D_\sigma := \mathrm{diag}(\Sigma)$, and define the associated correlation matrix $R := D_\sigma^{-1/2} \Sigma D_\sigma^{-1/2}$. We use the notation f(n) ≲ g(n) if there is some absolute constant c such that the inequality f(n) ≤ c g(n) holds for all large n. If both f(n) ≲ g(n) and g(n) ≲ f(n) hold, then we write f(n) ≍ g(n). The notation f(n) = o(g(n)) means f(n)/g(n) → 0 as n → ∞.

2 Background and random projection method

For the remainder of the paper, we retain the set-up for the two-sample test of means (1) with Gaussian data, assuming throughout that p ≥ n/2 and n = n1 + n2 − 2.

Review of hypothesis testing terminology. The primary focus of our results will be on the comparison of power between test statistics, and here we give precise meaning to this notion. When testing a null hypothesis H0 versus an alternative hypothesis H1, a procedure based on a test statistic T specifies a critical value, such that H0 is rejected if T exceeds that critical value, and H0 is accepted otherwise. The chosen critical value fixes a trade-off between the risk of rejecting H0 when H0 actually holds, and the risk of accepting H0 when H1 holds. The former error is referred to as a type I error, and the latter as a type II error. A test is said to have level α if the probability of committing a type I error is at most α. Finally, at a given level α, the power of a test is the probability of rejecting H0 under H1, i.e., 1 minus the probability of a type II error.
When evaluating testing procedures at a given level α, we seek to identify the one with the greatest power.

Past work. The Hotelling T² statistic (2) discriminates between the hypotheses H0 and H1 by providing an estimate of the "statistical distance" separating the distributions N(µ1, Σ) and N(µ2, Σ). More specifically, the Hotelling statistic is essentially an estimate of the Kullback-Leibler (KL) divergence

$$D_{\mathrm{KL}}\big(N(\mu_1, \Sigma)\,\|\,N(\mu_2, \Sigma)\big) = \tfrac{1}{2}\,\delta^\top \Sigma^{-1} \delta,$$

where δ := µ1 − µ2. Due to the fact that the pooled sample covariance matrix $\hat\Sigma$ in the definition of T² is not invertible when p > n, several recent procedures have offered substitutes for the Hotelling statistic in the high-dimensional setting: Bai and Saranadasa [6], Srivastava and Du [7, 8], and Chen and Qin [9], hereafter BS, SD, and CQ respectively. Up to now, the route toward circumventing this difficulty has been to form an estimate of Σ that is diagonal, and hence easily invertible. We shall see later that this limited use of covariance structure sacrifices power when the data exhibit non-trivial correlation. In this regard, our procedure is motivated by the idea that covariance structure may be used more effectively by testing with projected samples in a space of lower dimension.

Intuition for random projection. To provide some further intuition for our method, it is possible to consider problem (1) in terms of a competition between the dimension p and the statistical distance separating H0 and H1. On one hand, the accumulation of variance from a large number of variables makes it difficult to discriminate between the hypotheses, and thus it is desirable to reduce the dimension of the data. On the other hand, most methods for reducing dimension will also bring H0 and H1 "closer together," making them harder to distinguish. Mindful of the fact that the Hotelling test measures the separation of H0 and H1 in terms of $\delta^\top \Sigma^{-1} \delta$, we see that the statistical distance is driven by the Euclidean length of δ.
Consequently, we seek to transform the data in such a way that the dimension is reduced, while the length of the shift δ is mostly preserved upon passing to the transformed distributions. From this geometric point of view, it is natural to exploit the fact that random projections can simultaneously reduce dimension and approximately preserve lengths with high probability [14]. The use of projection-based test statistics has been considered previously in Jacob et al. [15], Clémençon et al. [10], and Cuesta-Albertos et al. [16]. At a high level, our method can be viewed as a two-step procedure. First, a single random projection is drawn, and is used to map the samples from the high-dimensional space Rp to a low-dimensional space¹ Rk, with k := ⌊n/2⌋. Second, the Hotelling T² test is applied to a new hypothesis testing problem, H0,proj versus H1,proj, in the projected space. A decision is then pulled back to the original problem by simply rejecting H0 whenever the Hotelling test rejects H0,proj.

Formal testing procedure. Let $P_k^\top \in \mathbb{R}^{k \times p}$ denote a random projection with i.i.d. N(0, 1) entries, drawn independently of the data, where k = ⌊n/2⌋. Conditioning on the drawn matrix $P_k^\top$, the projected samples $\{P_k^\top X_1, \ldots, P_k^\top X_{n_1}\}$ and $\{P_k^\top Y_1, \ldots, P_k^\top Y_{n_2}\}$ are distributed i.i.d. according to $N(P_k^\top \mu_i, P_k^\top \Sigma P_k)$ respectively, with i = 1, 2. Since n ≥ k, the projected data satisfy the usual conditions [17, p. 211] for applying the Hotelling T² procedure to the following new two-sample problem in the projected space Rk:

$$H_{0,\mathrm{proj}}: P_k^\top \mu_1 = P_k^\top \mu_2 \quad \text{versus} \quad H_{1,\mathrm{proj}}: P_k^\top \mu_1 \ne P_k^\top \mu_2. \qquad (3)$$

For this projected problem, the Hotelling test statistic takes the form²

$$T_k^2 := \frac{n_1 n_2}{n_1 + n_2}\,(\bar X - \bar Y)^\top P_k (P_k^\top \hat\Sigma P_k)^{-1} P_k^\top (\bar X - \bar Y),$$

where $\bar X$, $\bar Y$, and $\hat\Sigma$ are as defined in Section 1. Lastly, define the critical value $t_\alpha := \frac{kn}{n-k+1} F^*_{k,n-k+1}(\alpha)$, where $F^*_{k,n-k+1}(\alpha)$ is the upper α quantile of the $F_{k,n-k+1}$ distribution [17].
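Putting the projected statistic and its critical value together, the whole test is only a few lines. The following is a minimal sketch in Python (NumPy, with SciPy for the F quantile); the function name is ours, and this is an illustration rather than the authors' implementation:

```python
import numpy as np
from scipy.stats import f

def projected_hotelling_test(X, Y, alpha=0.05, rng=None):
    """Sketch of the projected Hotelling test: project once with a single
    Gaussian matrix, then run the classical T^2 test in R^k.
    X is n1 x p, Y is n2 x p; returns True iff H0 is rejected at level alpha."""
    rng = np.random.default_rng() if rng is None else rng
    n1, n2, p = X.shape[0], Y.shape[0], X.shape[1]
    n = n1 + n2 - 2
    k = n // 2
    P = rng.standard_normal((p, k))            # projection P_k, i.i.d. N(0, 1)
    Xp, Yp = X @ P, Y @ P                      # projected samples in R^k
    diff = Xp.mean(axis=0) - Yp.mean(axis=0)
    Xc, Yc = Xp - Xp.mean(axis=0), Yp - Yp.mean(axis=0)
    S = (Xc.T @ Xc + Yc.T @ Yc) / n            # pooled covariance of projected data
    t2k = n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    # critical value t_alpha = (kn / (n - k + 1)) * upper-alpha F_{k, n-k+1} quantile
    t_alpha = k * n / (n - k + 1) * f.ppf(1 - alpha, k, n - k + 1)
    return t2k >= t_alpha
```

A single projection is drawn per test; averaging over many projections would change the null distribution and invalidate the exact F-based critical value.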
It is a basic fact about the classical Hotelling test that rejecting H0,proj when $T_k^2 \ge t_\alpha$ is a level-α test for the projected problem (3) (e.g., see Muirhead [17, p. 217]). Inspection of the formula for $T_k^2$ shows that its distribution is the same under both H0 and H0,proj. Therefore, rejecting the original H0 when $T_k^2 \ge t_\alpha$ is also a level-α test for the original problem (1). Accordingly, we define this as the condition for rejecting H0 at level α in our procedure for (1). We summarize our procedure below.

1. Generate a single random matrix $P_k^\top$ with i.i.d. N(0, 1) entries.
2. Compute $T_k^2$, using $P_k^\top$ and the two sets of samples.
3. If $T_k^2 \ge t_\alpha$, reject H0; otherwise accept H0.

(⋆) Projected Hotelling test at level α for problem (1).

¹The choice of projected dimension k = ⌊n/2⌋ is explained in the preprint [13].
²Note that $P_k^\top \hat\Sigma P_k$ is invertible with probability 1 when $P_k^\top$ has i.i.d. N(0, 1) entries.

3 Main results and their consequences

This section is devoted to the statement and discussion of our main theoretical results, including a characterization of the asymptotic power function of our test (Theorem 1), and comparisons of asymptotic relative efficiency with state-of-the-art tests proposed in past work (Theorems 2 and 3).

3.1 Asymptotic power function

As is standard in high-dimensional asymptotics, we will consider a sequence of hypothesis testing problems indexed by n, allowing the dimension p, mean vectors µ1 and µ2, and covariance matrix Σ to implicitly vary as functions of n, with n → ∞. We also make another type of asymptotic assumption, known as a local alternative [18, p. 193], which is commonplace in hypothesis testing. The idea behind a local alternative assumption is that if the difficulty of discriminating between H0 and H1 is "held fixed" with respect to n, then it is often the case that most testing procedures have power tending to 1 under H1 as n → ∞.
In such a situation, it is not possible to tell if one test has greater asymptotic power than another. Consequently, it is standard to derive asymptotic power results under the extra condition that H0 and H1 become harder to distinguish as n grows. This theoretical device aids in identifying the conditions under which one test is more powerful than another. The following local alternative (A1) and balancing assumption (A2) are similar to those used in previous works [6–9] on problem (1). In particular, condition (A1) means that the KL-divergence between N(µ1, Σ) and N(µ2, Σ) tends to 0 as n → ∞.

(A1) Suppose that $\delta^\top \Sigma^{-1} \delta = o(1)$.
(A2) Let there be a constant b ∈ (0, 1) such that n1/n → b.

To set the notation for Theorem 1, it is important to notice that each time the procedure (⋆) is implemented, a draw of $P_k^\top$ induces a new test statistic $T_k^2$. To make this dependence clear, recall θ := (δ, Σ), and let $\beta(\theta; P_k^\top)$ denote the exact (non-asymptotic) power function of our level-α test for problem (1), induced by a draw of $P_k^\top$, as in (⋆). Another key quantity that depends on $P_k^\top$ is the KL-divergence between the projected sampling distributions $N(P_k^\top \mu_1, P_k^\top \Sigma P_k)$ and $N(P_k^\top \mu_2, P_k^\top \Sigma P_k)$. We denote this divergence by $\tfrac{1}{2}\Delta_k^2$, and a simple calculation shows that

$$\tfrac{1}{2}\Delta_k^2 = \tfrac{1}{2}\,\delta^\top P_k (P_k^\top \Sigma P_k)^{-1} P_k^\top \delta.$$

Theorem 1. Under conditions (A1) and (A2), for almost all sequences of projections $P_k^\top$,

$$\beta(\theta; P_k^\top) - \Phi\Big({-z_{1-\alpha}} + \tfrac{b(1-b)}{\sqrt{2}}\,\sqrt{n}\,\Delta_k^2\Big) \to 0 \quad \text{as } n \to \infty. \qquad (4)$$

Remarks. Note that if $\Delta_k^2 = 0$, e.g., under H0, then Φ(−z1−α + 0) = α, which corresponds to blind guessing at level α. Consequently, the second term $\tfrac{b(1-b)}{\sqrt 2}\sqrt n\,\Delta_k^2$ determines the advantage of our procedure over blind guessing. Since $\Delta_k^2$ is proportional to the KL-divergence between the projected sampling distributions, these observations conform to the intuition from Section 2 that the KL-divergence measures the discrepancy between H0 and H1.
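The asymptotic power function in (4) is straightforward to evaluate numerically. A minimal sketch (the function name is ours; SciPy supplies Φ and its inverse):

```python
import numpy as np
from scipy.stats import norm

def asymptotic_power(delta2_k, n, b=0.5, alpha=0.05):
    """Theorem 1's asymptotic power:
    Phi(-z_{1-alpha} + (b(1-b)/sqrt(2)) * sqrt(n) * Delta_k^2)."""
    z = norm.ppf(1 - alpha)                          # z_{1-alpha}
    return norm.cdf(-z + b * (1 - b) / np.sqrt(2) * np.sqrt(n) * delta2_k)
```

Setting $\Delta_k^2 = 0$ recovers the blind-guessing power α, and the power increases monotonically in $\Delta_k^2$, matching the remarks above.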
3.2 Asymptotic relative efficiency (ARE)

Having derived an asymptotic power function for our test in Theorem 1, we are now in a position to provide sufficient conditions for achieving greater power than two other recent procedures for problem (1): Srivastava and Du [7, 8] (SD), and Chen and Qin [9] (CQ). To the best of our knowledge, these works represent the state of the art³ among tests for problem (1) with a known asymptotic power function under (p, n) → ∞. From Theorem 1, the asymptotic power function of our random projection-based test at level α is

$$\beta_{\mathrm{RP}}(\theta; P_k^\top) := \Phi\Big({-z_{1-\alpha}} + \tfrac{b(1-b)}{\sqrt 2}\,\sqrt n\,\Delta_k^2\Big). \qquad (5)$$

The asymptotic power functions for the CQ and SD testing procedures at level α are

$$\beta_{\mathrm{CQ}}(\theta) := \Phi\Big({-z_{1-\alpha}} + \tfrac{b(1-b)}{\sqrt 2}\,\tfrac{n \|\delta\|_2^2}{|||\Sigma|||_F}\Big), \quad \text{and} \quad \beta_{\mathrm{SD}}(\theta) := \Phi\Big({-z_{1-\alpha}} + \tfrac{b(1-b)}{\sqrt 2}\,\tfrac{n\,\delta^\top D_\sigma^{-1} \delta}{|||R|||_F}\Big).$$

Recall that $D_\sigma := \mathrm{diag}(\Sigma)$, and R denotes the correlation matrix associated with Σ. The functions βCQ and βSD are derived under local alternatives and asymptotic assumptions that are similar to the ones used here to obtain βRP. In particular, all three functions can be obtained allowing p/n to tend to an arbitrary positive constant or infinity. A standard method of comparing asymptotic power functions under local alternatives is through the concept of asymptotic relative efficiency (ARE) (e.g., see van der Vaart [18, p. 192]). Since Φ is monotone increasing, the term added to −z1−α inside the Φ functions above controls the power. To compare power between tests, the ARE is simply defined via the ratio of such terms. More explicitly, we define

$$\mathrm{ARE}(\beta_{\mathrm{CQ}}; \beta_{\mathrm{RP}}) := \Big(\tfrac{n \|\delta\|_2^2 / |||\Sigma|||_F}{\sqrt n\,\Delta_k^2}\Big)^2, \quad \text{and} \quad \mathrm{ARE}(\beta_{\mathrm{SD}}; \beta_{\mathrm{RP}}) := \Big(\tfrac{n\,\delta^\top D_\sigma^{-1} \delta / |||R|||_F}{\sqrt n\,\Delta_k^2}\Big)^2.$$

Whenever the ARE is less than 1, our procedure is considered to have greater asymptotic power than the competing test, with our advantage being greater for smaller values of the ARE. Consequently, we seek sufficient conditions in Theorems 2 and 3 for ensuring that the ARE is small.
In the present context, the analysis of ARE is complicated by the fact that the ARE varies with n and depends on a random draw of $P_k^\top$ through $\Delta_k^2$. Moreover, the quantity $\Delta_k^2$, and hence the ARE, are affected by the orientation of δ with respect to the eigenvectors of Σ. In order to consider an average-case scenario, where no single orientation of δ is of particular importance, we place a prior on the unit vector δ/∥δ∥2, and assume that it is uniformly distributed on the unit sphere of Rp. We emphasize that our procedure (⋆) does not rely on this assumption, and that it is only a device for making an average-case comparison. Therefore, to be clear about the meaning of Theorems 2 and 3, we regard the ARE as a function of two random objects, $P_k^\top$ and δ/∥δ∥2, and our probability statements are made with this understanding. We complete the preparation for our comparison theorems by isolating four assumptions with n → ∞.

(A3) The vector δ/∥δ∥2 is uniformly distributed on the p-dimensional unit sphere, independent of $P_k^\top$.
(A4) There is a constant a ∈ [0, 1) such that k/p → a.
(A5) The ratio $\tfrac{1}{\sqrt k}\,\mathrm{tr}(\Sigma)/(p\,\lambda_{\min}(\Sigma)) = o(1)$.
(A6) The matrix $D_\sigma = \mathrm{diag}(\Sigma)$ satisfies $|||D_\sigma^{-1}|||_2 / \mathrm{tr}(D_\sigma^{-1}) = o(1)$.

3.3 Comparison with Chen and Qin [9]

The next result compares the asymptotic power of our projection-based test with that of Chen and Qin [9]. The choice of ε1 = 1 below (and in Theorem 3) is the reference for equal asymptotic performance, with smaller values of ε1 corresponding to better performance of random projection.

Theorem 2. Assume conditions (A3), (A4), and (A5). Fix a number ε1 > 0, and let c(ε1) be any constant strictly greater than $\tfrac{4}{\epsilon_1 (1-\sqrt a)^4}$. If the inequality

$$n \ge c(\epsilon_1)\,\frac{\mathrm{tr}(\Sigma)^2}{|||\Sigma|||_F^2} \qquad (6)$$

holds for all large n, then $\mathbb{P}[\mathrm{ARE}(\beta_{\mathrm{CQ}}; \beta_{\mathrm{RP}}) \le \epsilon_1] \to 1$ as n → ∞.

Interpretation. To interpret the result, note that Jensen's inequality implies that for any choice of Σ, we have $1 \le \mathrm{tr}(\Sigma)^2 / |||\Sigma|||_F^2 \le p$.
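The ratio $\mathrm{tr}(\Sigma)^2 / |||\Sigma|||_F^2$ and its Jensen bounds are easy to check numerically from the spectrum alone. The sketch below (our function name) also builds the two spectra used in the simulations of Section 4 — equally spaced values raised to the power 20 (fast decay) or 5 (slow decay):

```python
import numpy as np

def effective_dimension(eigvals):
    """tr(Sigma)^2 / |||Sigma|||_F^2, computed from the spectrum of Sigma."""
    eigvals = np.asarray(eigvals, dtype=float)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

p = 200
base = np.linspace(0.01, 1, p)
fast, slow = base ** 20, base ** 5   # fast / slow spectrum decay (Section 4)
# Jensen's bounds: 1 <= effective_dimension <= p, with the upper bound
# attained when all eigenvalues are equal; faster decay gives a smaller value.
```

In particular, faster spectral decay yields a smaller effective dimension, making condition (6) easier to satisfy.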
As such, it is reasonable to interpret this ratio as a measure of the effective dimension of the covariance structure.³ The message of Theorem 2 is that as long as the sample size n exceeds the effective dimension, our projection-based test is asymptotically superior to CQ. The ratio $\mathrm{tr}(\Sigma)^2 / |||\Sigma|||_F^2$ can also be viewed as measuring the decay rate of the spectrum of Σ, with $\mathrm{tr}(\Sigma)^2 / |||\Sigma|||_F^2 \ll p$ indicating rapid decay. This condition means that the data has low variance in "most" directions in Rp, and so projecting onto a random set of k directions will likely map the data into a low-variance subspace in which it is harder for chance variation to explain away the correct hypothesis, thereby resulting in greater power.

³Two other high-dimensional tests have been proposed in older works [6, 19, 20] that lead to the asymptotic power function βCQ, but under more restrictive assumptions.

3.4 Comparison with Srivastava and Du [7, 8]

We now turn to comparison of asymptotic power with the test of Srivastava and Du (SD).

Theorem 3. In addition to the conditions of Theorem 2, assume that condition (A6) holds. Fix a number ε1 > 0, and let c(ε1) be any constant strictly greater than $\tfrac{4}{\epsilon_1 (1-\sqrt a)^4}$. If the inequality

$$n \ge c(\epsilon_1)\left(\frac{\mathrm{tr}(\Sigma)}{p}\right)^2 \left(\frac{\mathrm{tr}(D_\sigma^{-1})}{|||R|||_F}\right)^2 \qquad (7)$$

holds for all large n, then $\mathbb{P}[\mathrm{ARE}(\beta_{\mathrm{SD}}; \beta_{\mathrm{RP}}) \le \epsilon_1] \to 1$ as n → ∞.

Interpretation. Unlike the comparison with the CQ test, the correlation matrix R plays a large role in determining the relative efficiency between our procedure and the SD test. The correlation matrix enters in two different ways. First, the Frobenius norm $|||R|||_F$ is larger when the data variables are more correlated. Second, correlation mitigates the growth of $\mathrm{tr}(D_\sigma^{-1})$, since this trace is largest when Σ is nearly diagonal and has a large number of small eigenvalues. Inspection of the SD test statistic in [7] shows that it does not make any essential use of correlation.
By contrast, our $T_k^2$ statistic does take correlation into account, and so it is understandable that correlated data enhance the performance of our test relative to SD. As a simple example, let ρ ∈ (0, 1) and consider a highly correlated situation where every variable has correlation ρ with all other variables. Then $R = (1 - \rho) I_{p \times p} + \rho \mathbf{1}\mathbf{1}^\top$, where $\mathbf{1} \in \mathbb{R}^p$ is the all-ones vector. We may also let Σ = R for simplicity. In this case, we see that $|||R|||_F^2 = p + 2\binom{p}{2}\rho^2 \gtrsim p^2$, and $\mathrm{tr}(D_\sigma^{-1})^2 = \mathrm{tr}(I_{p \times p})^2 = p^2$. This implies $\mathrm{tr}(D_\sigma^{-1})^2 / |||R|||_F^2 \lesssim 1$ and tr(Σ)/p = 1, and then the sufficient condition (7) for outperforming SD is easily satisfied in terms of rates. We could even let the correlation ρ decay at a rate of $n^{-q}$ with q ∈ (0, 1/2), and (7) would still be satisfied for large enough n. More generally, it is not necessary to use specially constructed covariance matrices Σ to demonstrate the superior performance of our method. Section 4 illustrates simulations involving randomly selected covariance matrices where $T_k^2$ is more powerful than SD. Conversely, it is possible to show that condition (7) requires non-trivial correlation. To see this, first note that in the complete absence of correlation, we have $|||R|||_F^2 = |||I_{p \times p}|||_F^2 = p$. Jensen's inequality implies that $\mathrm{tr}(D_\sigma^{-1}) \ge p^2 / \mathrm{tr}(D_\sigma) = p^2 / \mathrm{tr}(\Sigma)$, and so

$$\left(\frac{\mathrm{tr}(\Sigma)}{p}\right)^2 \left(\frac{\mathrm{tr}(D_\sigma^{-1})}{|||R|||_F}\right)^2 \ge p.$$

Altogether, this shows that if the data exhibit very low correlation, then (7) cannot hold when p grows faster than n. This will be illustrated in the simulations of Section 4.

4 Performance comparisons on real and synthetic data

In this section, we compare our procedure to state-of-the-art methods on real and synthetic data, illustrating the effects of the different factors involved in Theorems 2 and 3.

Comparison on synthetic data.
In order to validate the consequences of our theory and compare against other methods in a controlled fashion, we performed simulations in four settings: slow/fast spectrum decay, and diagonal/random covariance structure. To consider two rates of spectrum decay, we selected p equally spaced values between 0.01 and 1, and raised them to the power 20 for fast decay and to the power 5 for slow decay. Random covariance structure was generated by specifying the eigenvectors of Σ as the column vectors of the orthogonal factor of a QR decomposition of a p × p matrix with i.i.d. N(0, 1) entries. In all cases, we sampled n1 = n2 = 50 data points from two multivariate normal distributions in p = 200 dimensions, and repeated the process 500 times with δ = 0 for H0, and 500 times with ∥δ∥2 = 1 for H1. In the case of H1, δ was drawn uniformly from the unit sphere, as in Theorems 2 and 3. We fixed the total amount of variance by setting |||Σ|||F = 50 in all cases. In addition to our random projection (RP)-based test, we implemented the methods of BS [6], SD [7], and CQ [9], all of which are designed specifically for problem (1) in the high-dimensional setting. For the sake of completeness, we also compare against recent non-parametric procedures for the general two-sample problem based on kernel methods, MMD [11] and KFDA [12], as well as area-under-curve maximization, TreeRank [10]. The ROC curves from our simulations are displayed in the left block of four panels in Figure 1. These curves bear out the results of Theorems 2 and 3 in several ways. First, notice that fast spectral decay improves the performance of our test relative to CQ, as expected from Theorem 2. If we set a = 0 and ε1 = 1 in Theorem 2, then condition (6) for outperforming CQ is approximately n ≥ 75 in the case of fast decay. Given that n = 50 + 50 − 2 = 98, the advantage of our method over CQ in panels (b) and (d) is consistent with condition (6) being satisfied.
In the case of slow decay, the same settings of a and ε1 indicate that n ≥ 246 is sufficient for outperforming CQ. Since the ROC curve of our method is roughly the same as that of CQ in panels (a) and (c) (where again n = 98), our condition (6) is somewhat conservative for slow decay at the finite-sample level. To study the consequences of Theorem 3, observe that when the covariance matrix Σ is generated randomly, the amount of correlation is much larger than in the idealized case where Σ is diagonal. Specifically, for a fixed value of tr(Σ), the quantity $\mathrm{tr}(D_\sigma^{-1}) / |||R|||_F$ is much smaller in the presence of correlation. Consequently, when comparing (a) with (c), and (b) with (d), we see that correlation improves the performance of our test relative to SD, as expected from the bound in Theorem 3. More generally, the ROC curves illustrate that our method has an overall advantage over BS, CQ, KFDA, and MMD. Note that KFDA and MMD are not designed specifically for the n ≪ p regime. In the case of zero correlation, it is notable that the TreeRank procedure displays a superior ROC curve to our method, given that it also employs a dimension reduction strategy.
Figure 1: Left and middle panels: ROC curves of several test statistics for two different choices of correlation structure and decay rate. (a) Diagonal covariance, slow decay; (b) diagonal covariance, fast decay; (c) random covariance, slow decay; (d) random covariance, fast decay. Right panels: (e) false positive rate against p-value threshold on the gene expression experiment of Section 4 for RP (⋆), BS, CQ, SD, and the enrichment test; (f) zoom on the p-value < 0.1 region.

Comparison on high-dimensional gene expression data. The ability to identify gene sets having different expression between two types of conditions, e.g., benign and malignant forms of a disease, is of great value in many areas of biomedical research. Accordingly, there is considerable motivation to study our procedure in the context of detecting differential expression of p genes between two small groups of patients of sizes n1 and n2.
To compare the performance of our $T_k^2$ statistic against the competitors CQ and SD in this type of application, we constructed a collection of 1680 distinct two-sample problems in the following manner, using data from three genomic studies of ovarian [21], myeloma [22], and colorectal [23] cancers. First, we randomly split the 3 datasets respectively into 6, 4, and 6 groups of approximately 50 patients. Next, we considered pairwise comparisons between all sets of patients on each of 14 biologically meaningful gene sets from the canonical pathways of MSigDB [24], with each gene set containing between 75 and 128 genes. Since n1 ≃ n2 ≃ 50 for all patient sets, our collection of two-sample problems is genuinely high-dimensional. Specifically, we have $14 \times \big(\binom{6}{2} + \binom{4}{2} + \binom{6}{2}\big) = 504$ problems under H0 and $14 \times (6 \cdot 4 + 6 \cdot 4 + 6 \cdot 6) = 1176$ problems under H1, assuming that every gene set was differentially expressed between two sets of patients with different cancers, and that no gene set was differentially expressed between two sets of patients with the same cancer type.⁴ A natural performance measure for comparing test statistics is the actual false positive rate (FPR) as a function of the nominal level α. When testing at level α, the actual FPR should be as close to α as possible, but differences may occur if the distribution of the test statistic under H0 is not known exactly (as is the case in practice). Figure 1 (e) shows that the curve for our procedure is closer to the optimal diagonal line for most values of α than the competing curves. Furthermore, the lower-left corner of Figure 1 (e) is of particular importance, as practitioners are usually only interested in p-values lower than 10⁻¹. Figure 1 (f) is a zoomed plot of this region and shows that the SD and CQ tests commit too many false positives at low thresholds. Again, in this regime, our procedure is closer to the diagonal and safely commits fewer than the allowed number of false positives.
For example, when thresholding p-values at 0.01, SD has an actual FPR of 0.03, and an even more excessive FPR of 0.02 when thresholding at 0.001 (twenty times the nominal level). The tests of CQ and BS are no better. The same thresholds on the p-values of our test lead to false positive rates of 0.008 and 0, respectively. Turning to ROC curves, the samples arising from different cancer types are dissimilar enough that BS, CQ, SD, and our method all obtain perfect ROC curves (no H1 case has a larger p-value than any H0 case). We also note that the hypergeometric test-based (HG) enrichment analysis often used by experimentalists on this problem [25] gives a suboptimal area-under-curve of 0.989.

5 Conclusion

We have proposed a novel testing procedure for the two-sample test of means in high dimensions. This procedure can be implemented in a simple manner by first projecting a dataset with a single randomly drawn matrix, and then applying the standard Hotelling T2 test in the projected space. In addition to obtaining the asymptotic power of this test, we have provided interpretable conditions on the covariance matrix Σ for achieving greater power than competing tests in the sense of asymptotic relative efficiency. Specifically, our theoretical comparisons show that our test is well suited to interesting regimes where most of the variance in the data can be captured in a relatively small number of variables, or where the variables are highly correlated. Furthermore, in the realistic case of (n, p) = (98, 200), these regimes were shown to correspond to favorable performance of our test against several competitors in ROC curve comparisons on simulated data. Finally, we showed on real gene expression data that our procedure was more reliable than competitors in terms of its false positive rate. Extensions of this work may include more refined applications of random projection to high-dimensional testing problems.

Acknowledgements.
The authors thank Sandrine Dudoit, Anne Biton, and Peter Bickel for helpful discussions. MEL gratefully acknowledges the support of the DOE CSGF Fellowship under grant number DE-FG02-97ER25308, and LJ the support of Stand Up to Cancer. MJW was partially supported by NSF grant DMS-0907632.

4 Although this assumption could be violated by the existence of various cancer subtypes, or technical differences between original tissue samples, our initial step of randomly splitting the three cancer datasets into subsets guards against these effects.

References
[1] E. L. Lehmann and J. P. Romano. Testing statistical hypotheses. Springer Texts in Statistics. Springer, New York, third edition, 2005.
[2] Y. Lu, P. Liu, P. Xiao, and H. Deng. Hotelling's T2 multivariate profiling for detecting differential expression in microarrays. Bioinformatics, 21(14):3105–3113, Jul 2005.
[3] J. J. Goeman and P. Bühlmann. Analyzing gene expression data in terms of gene sets: methodological issues. Bioinformatics, 23(8):980–987, Apr 2007.
[4] D. Van De Ville, T. Blu, and M. Unser. Integrated wavelet processing and spatial statistical testing of fMRI data. Neuroimage, 23(4):1472–1485, 2004.
[5] U. Ruttimann et al. Statistical analysis of functional MRI data in the wavelet domain. IEEE Transactions on Medical Imaging, 17(2):142–154, 1998.
[6] Z. Bai and H. Saranadasa. Effect of high dimension: by an example of a two sample problem. Statistica Sinica, 6:311–329, 1996.
[7] M. S. Srivastava and M. Du. A test for the mean vector with fewer observations than the dimension. Journal of Multivariate Analysis, 99:386–402, 2008.
[8] M. S. Srivastava. A test for the mean with fewer observations than the dimension under non-normality. Journal of Multivariate Analysis, 100:518–532, 2009.
[9] S. X. Chen and Y. L. Qin. A two-sample test for high-dimensional data with applications to gene-set testing. Annals of Statistics, 38(2):808–835, Feb 2010.
[10] S. Clémençon, M. Depecker, and N. Vayatis.
AUC optimization and the two-sample problem. In Advances in Neural Information Processing Systems (NIPS 2009), 2009.
[11] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, Cambridge, MA, 2007.
[12] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, NIPS. MIT Press, 2007.
[13] M. E. Lopes, L. J. Jacob, and M. J. Wainwright. A more powerful two-sample test in high dimensions using random projection. Technical Report arXiv:1108.2401, 2011.
[14] S. S. Vempala. The Random Projection Method. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, 2004.
[15] L. Jacob, P. Neuvial, and S. Dudoit. Gains in power from structured two-sample tests of means on graphs. Technical Report arXiv:q-bio/1009.5173v1, 2010.
[16] J. A. Cuesta-Albertos, E. Del Barrio, R. Fraiman, and C. Matrán. The random projection method in goodness of fit for functional data. Computational Statistics & Data Analysis, 51(10):4814–4831, 2007.
[17] R. J. Muirhead. Aspects of Multivariate Statistical Theory. John Wiley & Sons, Inc., 1982.
[18] A. W. van der Vaart. Asymptotic Statistics. Cambridge, 2007.
[19] A. P. Dempster. A high dimensional two sample significance test. Annals of Mathematical Statistics, 29(4):995–1010, 1958.
[20] A. P. Dempster. A significance test for the separation of two highly multivariate small samples. Biometrics, 16(1):41–50, 1960.
[21] R. W. Tothill et al. Novel molecular subtypes of serous and endometrioid ovarian cancer linked to clinical outcome. Clin Cancer Res, 14(16):5198–5208, Aug 2008.
[22] J. Moreaux et al.
A high-risk signature for patients with multiple myeloma established from the molecular classification of human myeloma cell lines. Haematologica, 96(4):574–582, Apr 2011.
[23] R. N. Jorissen et al. Metastasis-associated gene expression changes predict poor outcomes in patients with Dukes stage B and C colorectal cancer. Clin Cancer Res, 15(24):7642–7651, Dec 2009.
[24] A. Subramanian et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. USA, 102(43):15545–15550, Oct 2005.
[25] T. Beissbarth and T. P. Speed. GOstat: find statistically overrepresented gene ontologies within a group of genes. Bioinformatics, 20(9):1464–1465, Jun 2004.
Kernel Bayes' Rule

Kenji Fukumizu, The Institute of Statistical Mathematics, Tokyo, fukumizu@ism.ac.jp
Le Song, College of Computing, Georgia Institute of Technology, lsong@cc.gatech.edu
Arthur Gretton, Gatsby Unit, UCL; MPI for Intelligent Systems, arthur.gretton@gmail.com

Abstract

A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.

1 Introduction

Kernel methods have long provided powerful tools for generalizing linear statistical approaches to nonlinear settings, through an embedding of the sample to a high-dimensional feature space, namely a reproducing kernel Hilbert space (RKHS) [16]. The inner product between feature mappings need never be computed explicitly, but is given by a positive definite kernel function, which permits efficient computation without the need to deal explicitly with the feature representation. More recently, the mean of the RKHS feature map has been used to represent probability distributions, rather than mapping single points: we will refer to these representations of probability distributions as kernel means. With an appropriate choice of kernel, the feature mapping becomes rich enough that its expectation uniquely identifies the distribution: the associated RKHSs are termed characteristic [6, 7, 22].
Kernel means in characteristic RKHSs have been applied successfully in a number of statistical tasks, including the two-sample problem [9], independence tests [10], and conditional independence tests [8]. An advantage of the kernel approach is that these tests apply immediately to any domain on which kernels may be defined. We propose a general nonparametric framework for Bayesian inference, expressed entirely in terms of kernel means. The goal of Bayesian inference is to find the posterior of x given observation y:

q(x|y) = \frac{p(y|x)\,\pi(x)}{q_Y(y)}, \qquad q_Y(y) = \int p(y|x)\,\pi(x)\,d\mu_X(x), \quad (1)

where π(x) and p(y|x) are respectively the density function of the prior, and the conditional density or likelihood of y given x. In our framework, the posterior, prior, and likelihood are all expressed as kernel means: the update from prior to posterior is called the Kernel Bayes' Rule (KBR). To implement KBR, the kernel means are learned nonparametrically from training data: the prior and likelihood means are expressed in terms of samples from the prior and joint probabilities, and the posterior as a kernel mean of a weighted sample. The resulting updates are straightforward matrix operations. This leads to the main advantage of the KBR approach: in the absence of a specific parametric model or an analytic form for the prior and likelihood densities, we can still perform Bayesian inference by making sufficient observations on the system. Alternatively, we may have a parametric model, but it might be complex and require time-consuming sampling techniques for inference. By contrast, KBR is simple to implement, and is amenable to well-established approximation techniques which yield an overall computational cost linear in the training sample size [5]. We further establish the rate of consistency of the estimated posterior kernel mean to the true posterior, as a function of training sample size.
The proposed kernel realization of Bayes' rule is an extension of the approach used in [20] for state-space models. This earlier work applies a heuristic, however, in which the kernel mean of the previous hidden state and the observation are assumed to combine additively to update the hidden state estimate. More recently, a method for belief propagation using kernel means was proposed [18, 19]: unlike the present work, this directly estimates conditional densities, assuming the prior to be uniform. An alternative to kernel means would be to use nonparametric density estimates. Classical approaches include finite distribution estimates on a partitioned domain or kernel density estimation, which perform poorly on high-dimensional data. Alternatively, direct estimates of the density ratio may be used in estimating the conditional p.d.f. [24]. By contrast with density estimation approaches, KBR makes it easy to compute posterior expectations (as an RKHS inner product) and to perform conditioning and marginalization, without requiring numerical integration.

2 Kernel expression of Bayes' rule

2.1 Positive definite kernel and probabilities

We begin with a review of some basic concepts and tools concerning statistics on RKHSs [1, 3, 6, 7]. Given a set Ω, an (R-valued) positive definite kernel k on Ω is a symmetric function k : Ω × Ω → R such that \sum_{i,j=1}^n c_i c_j k(x_i, x_j) \geq 0 for arbitrary points x_1, \ldots, x_n in Ω and real numbers c_1, \ldots, c_n. It is known [1] that a positive definite kernel on Ω uniquely defines a Hilbert space H (the RKHS) consisting of functions on Ω, in which ⟨f, k(·, x)⟩ = f(x) for any x ∈ Ω and f ∈ H (the reproducing property). Let (X, B_X, µ_X) and (Y, B_Y, µ_Y) be measure spaces, and let (X, Y) be a random variable on X × Y with probability P. Throughout this paper, it is assumed that positive definite kernels on the measurable spaces are measurable and bounded, where boundedness is defined as \sup_{x∈Ω} k(x, x) < ∞.
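The positive definiteness condition above can be checked numerically: a Gram matrix built from any point set must be symmetric with nonnegative eigenvalues. A small sketch with the Gaussian kernel (helper name and sizes are illustrative):

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(D2, 0.0) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
Z = rng.standard_normal((20, 3))
K = gaussian_gram(Z, sigma=1.0)
# symmetry and positive semidefiniteness: c^T K c >= 0 for all c
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-8
```

The `np.maximum(D2, 0.0)` guard only removes tiny negative values produced by floating-point cancellation; mathematically the squared distances are nonnegative.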
Let k_X be a positive definite kernel on a measurable space (X, B_X), with RKHS H_X. The kernel mean m_X of X on H_X is defined as the mean of the H_X-valued random variable k_X(·, X), namely

m_X = \int k_X(\cdot, x)\, dP_X(x). \quad (2)

For notational simplicity, the dependence on k_X in m_X is not shown. Since the kernel mean depends only on the distribution of X (and the kernel), it may also be written m_{P_X}; we will use whichever of these equivalent notations is clearest in each context. From the reproducing property, we have

⟨f, m_X⟩ = E[f(X)] \quad (\forall f \in H_X). \quad (3)

Let k_X and k_Y be positive definite kernels on X and Y with respective RKHSs H_X and H_Y. The (uncentered) covariance operator C_{YX} : H_X → H_Y is defined by the relation

⟨g, C_{YX} f⟩_{H_Y} = E[f(X)\,g(Y)] \;\bigl(= ⟨g ⊗ f, m_{(YX)}⟩_{H_Y ⊗ H_X}\bigr) \quad (\forall f \in H_X,\; g \in H_Y).

It should be noted that C_{YX} is identified with the mean m_{(YX)} in the tensor product space H_Y ⊗ H_X, which is given by the product kernel k_Y k_X [1]. The identification is standard: the tensor product is isomorphic to the space of linear maps by the correspondence ψ ⊗ φ ↔ [h ↦ ψ⟨φ, h⟩]. We also define C_{XX} : H_X → H_X by ⟨f_2, C_{XX} f_1⟩ = E[f_2(X) f_1(X)] for any f_1, f_2 ∈ H_X. We next introduce the notion of a characteristic RKHS, which is essential when using kernels to manipulate probability measures. A bounded measurable positive definite kernel k is called characteristic if E_{X∼P}[k(·, X)] = E_{X′∼Q}[k(·, X′)] implies P = Q: probabilities are uniquely determined by their kernel means [7, 22]. With this property, problems of statistical inference can be cast in terms of inference on the kernel means. A widely used characteristic kernel on R^m is the Gaussian kernel, \exp(−\|x − y\|^2/(2σ^2)). Empirical estimates of the kernel mean and covariance operator are straightforward to obtain. Given an i.i.d. sample (X_1, Y_1), . . .
, (X_n, Y_n) with law P, the empirical kernel mean and covariance operator are respectively

\hat m_X^{(n)} = \frac{1}{n} \sum_{i=1}^n k_X(\cdot, X_i), \qquad \hat C_{YX}^{(n)} = \frac{1}{n} \sum_{i=1}^n k_Y(\cdot, Y_i) ⊗ k_X(\cdot, X_i),

where \hat C_{YX}^{(n)} is written in the tensor product form. These are known to be \sqrt{n}-consistent in norm.

2.2 Kernel Bayes' rule

We now derive the kernel mean implementation of Bayes' rule. Let Π be a prior distribution on X with p.d.f. π(x). In the following, Q and Q_Y denote the probabilities with p.d.f. q(x, y) = p(y|x)π(x) and q_Y(y) in Eq. (1), respectively. Our goal is to obtain an estimator of the kernel mean of the posterior, m_{Q_X|y} = \int k_X(\cdot, x)\, q(x|y)\, dµ_X(x). The following theorem is fundamental in manipulating conditional probabilities with positive definite kernels.

Theorem 1 ([6]). If E[g(Y)|X = ·] ∈ H_X holds for g ∈ H_Y, then C_{XX} E[g(Y)|X = ·] = C_{XY} g.

If C_{XX} is injective, the above relation can be expressed as

E[g(Y)|X = ·] = C_{XX}^{-1} C_{XY}\, g. \quad (4)

Using Eq. (4), we can obtain an expression for the kernel mean of Q_Y.

Theorem 2 ([20]). Assume C_{XX} is injective, and let m_Π and m_{Q_Y} be the kernel means of Π in H_X and Q_Y in H_Y, respectively. If m_Π ∈ R(C_{XX}) and E[g(Y)|X = ·] ∈ H_X for any g ∈ H_Y, then

m_{Q_Y} = C_{YX} C_{XX}^{-1} m_Π. \quad (5)

As discussed in [20], the operator C_{YX} C_{XX}^{-1} implements forward filtering of the prior π with the conditional density p(y|x), as in Eq. (1). Note, however, that the assumptions E[g(Y)|X = ·] ∈ H_X and injectivity of C_{XX} may not hold in general; we can easily provide counterexamples. In the following, we nonetheless derive a population expression of Bayes' rule under these strong assumptions, use it as a prototype for an empirical estimator expressed in terms of Gram matrices, and prove its consistency subject to appropriate smoothness conditions on the distributions. In deriving the kernel realization of Bayes' rule, we will also use Theorem 2 to obtain a kernel mean representation of the joint probability Q:

m_Q = C_{(YX)X} C_{XX}^{-1} m_Π \; ∈ H_Y ⊗ H_X.
(6)

In the above equation, C_{(YX)X} is the covariance operator from H_X to H_Y ⊗ H_X with p.d.f. \tilde p((y, x), x′) = p(x, y)\, δ_x(x′), where δ_x(x′) is the point measure at x. In many applications of Bayesian inference, the probability conditioned on a particular value should be computed. By plugging the point measure at x into Π in Eq. (5), we have the population expression

E[k_Y(·, Y)|X = x] = C_{YX} C_{XX}^{-1} k_X(·, x), \quad (7)

which was used by [20, 18, 19] as the kernel mean of the conditional probability p(y|x). Let (Z, W) be a random variable on X × Y with law Q. Replacing P by Q and x by y in Eq. (7), we obtain

E[k_X(·, Z)|W = y] = C_{ZW} C_{WW}^{-1} k_Y(·, y). \quad (8)

This is exactly the kernel mean of the posterior which we want to obtain. The next step is to derive the covariance operators in Eq. (8). Recalling that the mean m_Q = m_{(ZW)} ∈ H_X ⊗ H_Y can be identified with the covariance operator C_{ZW} : H_Y → H_X, and m_{(WW)} ∈ H_Y ⊗ H_Y with C_{WW}, we use Eq. (6) to obtain the operators in Eq. (8), and thus the kernel mean expression of Bayes' rule. The above argument can be rigorously implemented for empirical estimates of the kernel means and covariances. Let (X_1, Y_1), . . . , (X_n, Y_n) be an i.i.d. sample with law P, and assume a consistent estimator for m_Π given by

\hat m_Π^{(ℓ)} = \sum_{j=1}^{ℓ} γ_j\, k_X(·, U_j),

where U_1, . . . , U_ℓ is the sample that defines the estimator (which need not be generated by Π), and γ_j are the weights. Negative values are allowed for γ_j. The empirical estimators for C_{ZW} and C_{WW} are identified with \hat m_{(ZW)} and \hat m_{(WW)}, respectively. From Eq. (6), they are given by

\hat m_Q = \hat m_{(ZW)} = \hat C^{(n)}_{(YX)X} \bigl(\hat C^{(n)}_{XX} + ε_n I\bigr)^{-1} \hat m^{(ℓ)}_Π, \qquad \hat m_{(WW)} = \hat C^{(n)}_{(YY)X} \bigl(\hat C^{(n)}_{XX} + ε_n I\bigr)^{-1} \hat m^{(ℓ)}_Π,

where I is the identity and ε_n is the coefficient of Tikhonov regularization for operator inversion. The next two propositions express these estimators using Gram matrices. The proofs are simple matrix manipulations and are given in the Supplementary material.
In the following, G_X and G_Y denote the Gram matrices (k_X(X_i, X_j)) and (k_Y(Y_i, Y_j)), respectively.

Input: (i) {(X_i, Y_i)}_{i=1}^n: sample to express P. (ii) {(U_j, γ_j)}_{j=1}^ℓ: weighted sample to express the kernel mean of the prior \hat m_Π. (iii) ε_n, δ_n: regularization constants.
Computation:
1. Compute the Gram matrices G_X = (k_X(X_i, X_j)), G_Y = (k_Y(Y_i, Y_j)), and the vector \hat m_Π = (\sum_{j=1}^ℓ γ_j k_X(X_i, U_j))_{i=1}^n ∈ R^n.
2. Compute \hat µ = n (G_X + n ε_n I_n)^{-1} \hat m_Π.
3. Compute R_{X|Y} = Λ G_Y ((Λ G_Y)^2 + δ_n I_n)^{-1} Λ, where Λ = Diag(\hat µ).
Output: the n × n matrix R_{X|Y}. Given conditioning value y, the kernel mean of the posterior q(x|y) is estimated by the weighted sample {(X_i, w_i)}_{i=1}^n with w = R_{X|Y} k_Y(y), where k_Y(y) = (k_Y(Y_i, y))_{i=1}^n.

Figure 1: Kernel Bayes' Rule Algorithm

Proposition 3. The Gram matrix expressions of \hat C_{ZW} and \hat C_{WW} are given by \hat C_{ZW} = \sum_{i=1}^n \hat µ_i\, k_X(·, X_i) ⊗ k_Y(·, Y_i) and \hat C_{WW} = \sum_{i=1}^n \hat µ_i\, k_Y(·, Y_i) ⊗ k_Y(·, Y_i), respectively, where the common coefficient vector \hat µ ∈ R^n is

\hat µ = n (G_X + n ε_n I_n)^{-1} \hat m_Π, \qquad \hat m_{Π,i} = \hat m_Π(X_i) = \sum_{j=1}^ℓ γ_j k_X(X_i, U_j). \quad (9)

Prop. 3 implies that the probabilities Q and Q_Y are estimated by the weighted samples {((X_i, Y_i), \hat µ_i)}_{i=1}^n and {(Y_i, \hat µ_i)}_{i=1}^n, respectively, with common weights. Since the weights \hat µ_i may be negative, we use another type of Tikhonov regularization in computing Eq. (8),

\hat m_{Q_X|y} := \hat C_{ZW} \bigl(\hat C_{WW}^2 + δ_n I\bigr)^{-1} \hat C_{WW}\, k_Y(·, y). \quad (10)

Proposition 4. For any y ∈ Y, the Gram matrix expression of \hat m_{Q_X|y} is given by

\hat m_{Q_X|y} = k_X^T R_{X|Y}\, k_Y(y), \qquad R_{X|Y} := Λ G_Y ((Λ G_Y)^2 + δ_n I_n)^{-1} Λ, \quad (11)

where Λ = Diag(\hat µ) is a diagonal matrix with elements \hat µ_i given by Eq. (9), k_X = (k_X(·, X_1), . . . , k_X(·, X_n))^T ∈ H_X^n, and k_Y = (k_Y(·, Y_1), . . . , k_Y(·, Y_n))^T ∈ H_Y^n.

We call Eq. (10) or (11) the kernel Bayes' rule (KBR): i.e., the expression of Bayes' rule entirely in terms of kernel means. The algorithm to implement KBR is summarized in Fig. 1. If our aim is to estimate E[f(Z)|W = y], that is, the expectation of a function f ∈ H_X with respect to the posterior, then based on Eq.
(3), an estimator is given by

⟨f, \hat m_{Q_X|y}⟩_{H_X} = f_X^T R_{X|Y}\, k_Y(y), \quad (12)

where f_X = (f(X_1), . . . , f(X_n))^T ∈ R^n. In using a weighted sample to represent the posterior, KBR has some similarity to Monte Carlo methods such as importance sampling and sequential Monte Carlo ([4]). The KBR method, however, does not generate samples from the posterior, but updates the weights of a sample via matrix operations. We will provide experimental comparisons between KBR and sampling methods in Sec. 4.1.

2.3 Consistency of KBR estimator

We now demonstrate the consistency of the KBR estimator in Eq. (12). We show only the best rate that can be derived under the assumptions, and leave more detailed discussions and proofs to the Supplementary material. We assume that the sample size ℓ = ℓ_n for the prior goes to infinity as the sample size n for the likelihood goes to infinity, and that \hat m_Π^{(ℓ_n)} is n^α-consistent. In the theoretical results, we assume all Hilbert spaces are separable. In the following, R(A) denotes the range of A.

Theorem 5. Let f ∈ H_X, let (Z, W) be a random vector on X × Y such that its law is Q with p.d.f. p(y|x)π(x), and let \hat m_Π^{(ℓ_n)} be an estimator of m_Π such that \|\hat m_Π^{(ℓ_n)} − m_Π\|_{H_X} = O_p(n^{−α}) as n → ∞ for some 0 < α ≤ 1/2. Assume that π/p_X ∈ R(C_{XX}^{1/2}), where p_X is the p.d.f. of P_X, and E[f(Z)|W = ·] ∈ R(C_{WW}^2). For ε_n = n^{−2α/3} and δ_n = n^{−8α/27}, we have for any y ∈ Y

f_X^T R_{X|Y}\, k_Y(y) − E[f(Z)|W = y] = O_p(n^{−8α/27}), \quad (n → ∞),

where f_X^T R_{X|Y} k_Y(y) is the estimator of E[f(Z)|W = y] given by Eq. (12).

The condition π/p_X ∈ R(C_{XX}^{1/2}) requires the prior to be smooth. If ℓ_n = n, and if \hat m_Π^{(n)} is a direct empirical kernel mean with an i.i.d. sample of size n from Π, typically α = 1/2 and the theorem implies n^{4/27}-consistency. While this might seem to be a slow rate, in practice the convergence may be much faster than the above theoretical guarantee.
3 Bayesian inference with Kernel Bayes' Rule

In Bayesian inference, tasks of interest include finding properties of the posterior (MAP value, moments), and computing the expectation of a function under the posterior. We now demonstrate the use of the kernel mean obtained via KBR in solving these problems. First, we have already seen from Theorem 5 that we may obtain a consistent estimator of the expectation of some f ∈ H_X under the posterior. This covers a wide class of functions when characteristic kernels are used (see also the experiments in Sec. 4.1). Next, regarding a point estimate of x, [20] proposes to use the preimage

\hat x = \arg\min_x \|k_X(·, x) − k_X^T R_{X|Y}\, k_Y(y)\|^2_{H_X},

which represents the posterior mean most effectively by one point. We use this approach in the present paper where point estimates are considered. In the case of the Gaussian kernel, a fixed-point method can be used to sequentially optimize x [13]. In KBR the prior and likelihood are expressed in terms of samples. Thus, unlike many methods for Bayesian inference, exact knowledge of their densities is not needed once samples are obtained. The following are typical situations where the KBR approach is advantageous:

• The relation among variables is difficult to realize with a simple parametric model, but we can obtain samples of the variables (e.g., the nonparametric state-space model in Sec. 3).
• The p.d.f. of the prior and/or likelihood is hard to obtain explicitly, but sampling is possible: (a) In population genetics, branching processes are used for the likelihood to model the split of species, for which the explicit density is hard to obtain. Approximate Bayesian Computation (ABC) is a popular sampling method in these situations [25, 12, 17]. (b) In nonparametric Bayesian inference (e.g., [14]), the prior is typically given in the form of a process without a density. The KBR approach can give alternative ways of Bayesian computation for these problems.
We will show some experimental comparisons between the KBR approach and ABC in Sec. 4.2.

• If a standard sampling method such as MCMC or sequential MC is applicable, the computation given y may be time-consuming, and real-time applications may not be feasible. Using KBR, the expectation of the posterior given y is obtained simply by the inner product as in Eq. (12), once f_X^T R_{X|Y} has been computed.

The KBR approach nonetheless has a weakness common to other nonparametric methods: if a new data point appears far from the training sample, the reliability of the output will be low. Thus, we need sufficient diversity in the training sample to reliably estimate the posterior. In KBR computation, Gram matrix inversion is necessary, which would cost O(n^3) for sample size n if attempted directly. Substantial cost reductions can be achieved by low-rank matrix approximations such as the incomplete Cholesky decomposition [5], which approximates a Gram matrix in the form ΓΓ^T with an n × r matrix Γ. Computing Γ costs O(nr^2), and with the Woodbury identity, the KBR can be approximately computed at cost O(nr^2). Kernel choice or model selection is key to the effectiveness of KBR, as in other kernel methods. KBR involves three model parameters: the kernel (or its parameters), and the regularization parameters ε_n and δ_n. The strategy for parameter selection depends on how the posterior is to be used in the inference problem. If it is applied in a supervised setting, we can use standard cross-validation (CV). A more general approach requires constructing a related supervised problem. Suppose the prior is given by the marginal P_X of P. The posterior density q(x|y) averaged with respect to P_Y is then equal to the marginal density p_X. We are then able to compare the discrepancy between the kernel mean of P_X and the average of the estimators \hat Q_{X|y=Y_i} over Y_i. This leads to a K-fold CV approach. Namely, for a partition of {1, . . .
, n} into K disjoint subsets {T_a}_{a=1}^K, let \hat m^{[−a]}_{Q_X|y} be the kernel mean of the posterior estimated with the data {(X_i, Y_i)}_{i∉T_a}, and \hat m^{[−a]}_X the prior mean estimated with the data {X_i}_{i∉T_a}. We use

\sum_{a=1}^K \Bigl\| \frac{1}{|T_a|} \sum_{j∈T_a} \hat m^{[−a]}_{Q_X|y=Y_j} − \hat m^{[a]}_X \Bigr\|^2_{H_X}

for CV, where \hat m^{[a]}_X = \frac{1}{|T_a|} \sum_{j∈T_a} k_X(·, X_j).

Application to nonparametric state-space model. Consider the state-space model

p(X, Y) = π(X_1) \prod_{t=1}^{T} p(Y_t|X_t) \prod_{t=1}^{T−1} q(X_{t+1}|X_t),

where Y_t is observable and X_t is a hidden state. We do not assume the conditional probabilities p(Y_t|X_t) and q(X_{t+1}|X_t) to be known explicitly, nor do we estimate them with simple parametric models. Rather, we assume a sample (X_1, Y_1), . . . , (X_{T+1}, Y_{T+1}) is given for both the observable and hidden variables in the training phase. This problem has already been considered in [20], but we give a more principled approach based on KBR. The conditional probabilities for the transition q(x_{t+1}|x_t) and the observation process p(y|x) are represented by covariance operators computed with the training sample:

\hat C_{X,X+1} = \frac{1}{T} \sum_{i=1}^T k_X(·, X_i) ⊗ k_X(·, X_{i+1}), \qquad \hat C_{XY} = \frac{1}{T} \sum_{i=1}^T k_X(·, X_i) ⊗ k_Y(·, Y_i),

and \hat C_{YY} and \hat C_{XX} are defined similarly. Note that though the data are not i.i.d., consistency is achieved by the mixing property of the Markov model. For simplicity, we focus on the filtering problem, but smoothing and prediction can be done similarly. In filtering, we wish to estimate the current hidden state x_t, given observations \tilde y_1, . . . , \tilde y_t. The sequential estimate of p(x_t|\tilde y_1, . . . , \tilde y_t) can be derived using KBR (we give only a sketch below; see the Supplementary material for the detailed derivation). Suppose we already have an estimator of the kernel mean of p(x_t|\tilde y_1, . . . , \tilde y_t) in the form \hat m_{x_t|\tilde y_1,...,\tilde y_t} = \sum_{i=1}^T α_i^{(t)} k_X(·, X_i), where α_i^{(t)} = α_i^{(t)}(\tilde y_1, . . . , \tilde y_t) are the coefficients at time t. By applying Theorem 2 twice, the kernel mean of p(y_{t+1}|\tilde y_1, . . .
, \tilde y_t) is estimated by \hat m_{y_{t+1}|\tilde y_1,...,\tilde y_t} = \sum_{i=1}^T \hat µ_i^{(t+1)} k_Y(·, Y_i), where

\hat µ^{(t+1)} = (G_X + T ε_T I_T)^{−1} G_{X,X+1} (G_X + T ε_T I_T)^{−1} G_X\, α^{(t)}. \quad (13)

Here G_{X,X+1} is the "transfer" matrix defined by (G_{X,X+1})_{ij} = k_X(X_i, X_{j+1}). With the notation Λ^{(t+1)} = Diag(\hat µ_1^{(t+1)}, . . . , \hat µ_T^{(t+1)}), kernel Bayes' rule yields

α^{(t+1)} = Λ^{(t+1)} G_Y \bigl((Λ^{(t+1)} G_Y)^2 + δ_T I_T\bigr)^{−1} Λ^{(t+1)} k_Y(\tilde y_{t+1}). \quad (14)

Eqs. (13) and (14) describe the update rule of α^{(t)}(\tilde y_1, . . . , \tilde y_t). By contrast with [20], where the estimates of the previous hidden state and observation are assumed to combine additively, the above derivation is based only on applying KBR. In sequential filtering, a substantial reduction of computational cost can be achieved by low-rank approximations of the matrices in the training phase: given rank r, the computation costs only O(Tr^2) for each step of filtering.

Bayesian computation without likelihood. When the likelihood and/or prior does not have an analytic form but sampling is possible, the ABC approach [25, 12, 17] is popular for Bayesian computation. The ABC rejection method generates a sample from q(X|Y = y) as follows: (1) generate X_t from the prior Π; (2) generate Y_t from p(y|X_t); (3) if D(y, Y_t) < ρ, accept X_t, otherwise reject; (4) go to (1). In Step (3), D is a distance on the observation space Y, and ρ is the tolerance for acceptance. In exactly the same situation, the KBR approach gives the following method: (i) generate X_1, . . . , X_n from the prior Π; (ii) generate a sample Y_t from p(y|X_t) (t = 1, . . . , n); (iii) compute the Gram matrices G_X and G_Y with (X_1, Y_1), . . . , (X_n, Y_n), and R_{X|Y} k_Y(y). The distribution of a sample given by ABC approaches the true posterior as ρ → 0, while the empirical posterior estimate of KBR converges to the true one as n → ∞. The computational efficiency of ABC, however, can be arbitrarily low for a small ρ, since X_t is then rarely accepted in Step (3). Finally, ABC generates a sample, which allows any statistic of the posterior to be approximated.
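The four ABC rejection steps above can be sketched in a few lines; on a conjugate Gaussian toy model the accepted draws can be checked against the closed-form posterior. Function names and the toy model below are illustrative, not from the paper:

```python
import numpy as np

def abc_rejection(y_obs, sample_prior, sample_likelihood, dist, rho,
                  n_draws, rng=None):
    """ABC rejection sampler (steps (1)-(4) above, a sketch).

    Draws x from the prior, simulates y ~ p(y|x), and keeps x whenever the
    simulated observation falls within distance rho of y_obs.
    """
    rng = np.random.default_rng(rng)
    accepted = []
    for _ in range(n_draws):
        x = sample_prior(rng)           # step (1)
        y = sample_likelihood(x, rng)   # step (2)
        if dist(y, y_obs) < rho:        # step (3): accept or reject
            accepted.append(x)
    return np.array(accepted)           # step (4) is the loop itself

# Toy model: x ~ N(0,1), y | x ~ N(x, 0.3^2); the posterior mean of x
# given y_obs is y_obs / (1 + 0.3^2).
y_obs = 0.8
samp = abc_rejection(
    y_obs,
    sample_prior=lambda r: r.normal(0.0, 1.0),
    sample_likelihood=lambda x, r: x + 0.3 * r.normal(),
    dist=lambda a, b: abs(a - b),
    rho=0.05, n_draws=100000, rng=0)
```

Shrinking rho improves fidelity to the true posterior but lowers the acceptance rate, which is exactly the accuracy/cost trade-off examined in Sec. 4.2.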
In the case of KBR, certain statistics of the posterior (such as confidence intervals) can be harder to obtain, since consistency is guaranteed only for expectations of RKHS functions. In Sec. 4.2, we provide experimental comparisons addressing the trade-off between computational time and accuracy for ABC and KBR.

4 Experiments

4.1 Nonparametric inference of posterior

First we compare KBR with standard kernel density estimation (KDE). Let {(X_i, Y_i)}_{i=1}^n be an i.i.d. sample from P on R^d × R^r. With p.d.f.s K(x) on R^d and H(y) on R^r, the conditional p.d.f. p(y|x) is estimated by

\hat p(y|x) = \sum_{j=1}^n K_{h_X}(x − X_j) H_{h_Y}(y − Y_j) \Big/ \sum_{j=1}^n K_{h_X}(x − X_j),

where K_{h_X}(x) = h_X^{−d} K(x/h_X) and H_{h_Y}(y) = h_Y^{−r} H(y/h_Y). Given an i.i.d. sample {U_j}_{j=1}^ℓ from the prior Π, the posterior q(x|y) is represented by the weighted sample (U_i, w_i) with importance weights (IW) w_i = \hat p(y|U_i) / \sum_{j=1}^ℓ \hat p(y|U_j). We compare the estimates of \int x\, q(x|y)\, dx obtained by KBR and KDE+IW, using Gaussian kernels for both methods. Note that with the Gaussian kernel, the function f(x) = x does not belong to H_X, so the consistency of the KBR method is not rigorously guaranteed (cf. Theorem 5). Gaussian kernels, however, are known to approximate any continuous function on a compact subset with arbitrary accuracy [23]. We can thus expect the posterior mean to be estimated effectively.

Figure 2: KBR vs. KDE+IW: average MSE (50 runs) of the posterior-mean estimate E[X|Y = y] as a function of dimension, for KBR (CV), KBR (median distance), KDE+IW (least-squares CV), and KDE+IW (best bandwidth).

In the experiments, the dimensionality was given by r = d, ranging from 2 to 64. The distribution P of (X, Y) was N((0, 1_d)^T, V) with V randomly generated for each run. The prior Π was P_X = N(0, V_{XX}/2), where V_{XX} is the X-component of V. The sample sizes were n = ℓ = 200.
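The KDE+IW baseline above is simple to sketch in one dimension: estimate p(y|x) with a Nadaraya-Watson-style conditional KDE, then reweight prior draws by their estimated likelihood. A minimal sketch (function name and the Gaussian toy model in the test are illustrative, not the paper's multivariate setup):

```python
import numpy as np

def kde_iw_posterior_mean(X, Y, U, y, h):
    """Posterior mean by KDE + importance weighting (a 1-D sketch).

    X, Y : joint sample defining the conditional KDE p_hat(y|x)
    U    : draws from the prior
    y    : observed value; h : common bandwidth h_X = h_Y
    """
    def gauss(t):  # Gaussian p.d.f. kernel
        return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    # p_hat(y|u) for every prior draw u (Nadaraya-Watson form)
    KX = gauss((U[:, None] - X[None, :]) / h)       # (l, n)
    KY = gauss((y - Y) / h)                         # (n,)
    cond = (KX * KY[None, :]).sum(1) / (h * KX.sum(1))
    w = cond / cond.sum()                           # importance weights
    return w @ U                                    # estimate of E[X|Y=y]
```

As the dimension grows, the conditional KDE in the numerator degrades quickly, which is consistent with the KDE+IW curves in Fig. 2 falling behind KBR.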
The bandwidth parameters h_X, h_Y in KDE were set h_X = h_Y and chosen in two ways: least-squares cross-validation [15], and the best mean performance over the set {2i | i = 1, . . . , 10}. For KBR, we used two methods to choose the deviation parameter of the Gaussian kernel: the median of the pairwise distances in the data [10], and the 10-fold CV described in Sec. 3. Fig. 2 shows the MSE of the estimates over 1000 random points y ∼ N(0, V_{YY}). While the accuracy of both methods decreases with larger dimensionality, KBR significantly outperforms KDE+IW.

4.2 Bayesian computation without likelihood

Figure 3: Estimation accuracy and computational time with KBR and ABC ("CPU time vs Error", 6 dim.; CPU time (sec) against average mean square error, with total generated sample sizes marked along the curves).

We compare KBR and ABC in terms of estimation accuracy and computational time. To compute the estimation accuracy rigorously, Gaussian distributions are used for the true prior and likelihood. The samples are taken from the same model as in Sec. 4.1, and \int x\, q(x|y)\, dx is evaluated at 10 different points of y. We performed 10 runs with different covariances. For ABC, we used only the rejection method; while there are more advanced sampling schemes [12, 17], implementation is not straightforward. Various acceptance parameters are used, and the accuracy and computational time are shown in Fig. 3, together with the total sizes of the generated samples. For the KBR method, the sample sizes n of the likelihood and prior are varied. The regularization parameters are given by ε_n = 0.01/n and δ_n = 2ε_n. In KBR, Gaussian kernels are used and the incomplete Cholesky decomposition is employed. The results indicate that KBR achieves more accurate results than ABC in the same computational time.

4.3 Filtering problems

The KBR filter proposed in Sec. 3 is applied.
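One step of this filter follows Eqs. (13) and (14) directly: a prediction through the transfer matrix, then a KBR correction with the new observation. A minimal sketch (function and variable names, kernel widths, and regularization values are illustrative choices, not from the paper):

```python
import numpy as np

def kbr_filter_step(alpha, GX, GX_X1, GY, kYnew, eps, delta):
    """One KBR filter update (Eqs. (13)-(14), a sketch).

    alpha  : current posterior weights alpha^(t) on {X_i}
    GX     : Gram matrix k_X(X_i, X_j)
    GX_X1  : transfer matrix k_X(X_i, X_{j+1})
    GY     : Gram matrix k_Y(Y_i, Y_j)
    kYnew  : vector k_Y(Y_i, y_{t+1}) for the new observation
    """
    T = len(alpha)
    A = GX + T * eps * np.eye(T)
    # prediction step, Eq. (13)
    mu = np.linalg.solve(A, GX_X1 @ np.linalg.solve(A, GX @ alpha))
    # correction by kernel Bayes' rule, Eq. (14)
    L = np.diag(mu)
    LG = L @ GY
    return LG @ np.linalg.solve(LG @ LG + delta * np.eye(T), L @ kYnew)
```

Iterating this step over a test sequence, with the point estimate taken as the weighted sum of the training states (Eq. (12) with f(x) = x), gives a filter that needs only the training sample, not the transition or observation densities.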
Alternative strategies for state-space models with complex dynamics include the extended Kalman filter (EKF) and the unscented Kalman filter (UKF, [11]). There is some work on nonparametric state-space models or HMMs using nonparametric estimates of the conditional p.d.f., such as KDE or partitions [27, 26], and, more recently, kernel methods [20, 21]. In the following, the KBR method is compared with linear and nonlinear Kalman filters. KBR has the regularization parameters ε_T, δ_T, and kernel parameters for k_X and k_Y (e.g., the deviation parameter of the Gaussian kernel). The validation approach is applied to select them, dividing the training sample in two. To reduce the search space, we set δ_T = 2ε_T and use Gaussian kernel deviations βσ_X and βσ_Y, where σ_X and σ_Y are the medians of the pairwise distances among the training samples ([10]), leaving only two parameters, β and ε_T, to be tuned.

[Figure 4: Comparisons of the KBR filter with EKF and UKF on datasets (a) and (b) — average MSEs and SEs over 30 runs as a function of training sample size.]

Table 1: Average MSEs and SEs of camera angle estimates (10 runs).

            | KBR (Gauss)   | KBR (Tr)      | Kalman (9 dim.) | Kalman (Quat.)
σ² = 10⁻⁴   | 0.210 ± 0.015 | 0.146 ± 0.003 | 1.980 ± 0.083   | 0.557 ± 0.023
σ² = 10⁻³   | 0.222 ± 0.009 | 0.210 ± 0.008 | 1.935 ± 0.064   | 0.541 ± 0.022

We first use two synthetic data sets to compare KBR, EKF, and UKF, assuming that EKF and UKF know the exact dynamics. The dynamics has a hidden state Xₜ = (uₜ, vₜ)ᵀ ∈ R², and is given by

  (uₜ₊₁, vₜ₊₁) = (1 + b sin(Mθₜ₊₁)) (cos θₜ₊₁, sin θₜ₊₁) + Zₜ,   θₜ₊₁ = θₜ + η (mod 2π),

where Zₜ ∼ N(0, σₕ² I₂) is independent noise. Note that the dynamics of (uₜ, vₜ) is nonlinear even for b = 0. The observation Yₜ follows Yₜ = Xₜ + Wₜ, where Wₜ ∼ N(0, σₒ² I).
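The synthetic dynamics above are easy to reproduce. Below is a minimal sketch of a generator (our own helper, with the parameter values of the two datasets (a) and (b) described in the text):

```python
import numpy as np

def simulate(T, eta, b, M=0, sigma_h=0.2, sigma_o=0.2, seed=0):
    """Hidden states X_t = (u_t, v_t) on a (possibly wobbling) circle, noisy obs Y_t."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    X = np.zeros((T, 2))
    Y = np.zeros((T, 2))
    for t in range(T):
        theta = (theta + eta) % (2 * np.pi)           # theta_{t+1} = theta_t + eta
        radius = 1.0 + b * np.sin(M * theta)
        X[t] = radius * np.array([np.cos(theta), np.sin(theta)]) \
               + rng.normal(scale=sigma_h, size=2)    # state noise Z_t
        Y[t] = X[t] + rng.normal(scale=sigma_o, size=2)  # observation noise W_t
    return X, Y

# dataset (a): noisy rotation;  dataset (b): noisy oscillatory rotation
Xa, Ya = simulate(1000, eta=0.3, b=0.0)
Xb, Yb = simulate(1000, eta=0.4, b=0.4, M=8)
```

For dataset (a), b = 0, so the hidden states stay on the unit circle up to the state noise.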
The two dynamics are defined as follows: (a) (noisy rotation) η = 0.3, b = 0, σₕ = σₒ = 0.2; (b) (noisy oscillatory rotation) η = 0.4, b = 0.4, M = 8, σₕ = σₒ = 0.2. The results are shown in Fig. 4. In all cases, EKF and UKF show an indistinguishably small difference. The dynamics in (a) is only weakly nonlinear, and KBR shows slightly worse MSE than EKF and UKF. For dataset (b), with strong nonlinearity, KBR outperforms the nonlinear Kalman filters for T ≥ 200, even though they know the true dynamics. Next, we applied the KBR filter to the camera rotation problem used in [20]¹, where the angle of a camera is the hidden variable and the movie frames of a room taken by the camera are observed. We are given 3600 frames of 20 × 20 RGB pixels (Yₜ ∈ [0, 1]¹²⁰⁰), where the first 1800 frames are used for training and the second half for testing. For details on the data, see [20]. We make the data noisy by adding Gaussian noise N(0, σ²) to Yₜ. Our experiments cover two settings. In the first, we do not assume that the hidden state Xₜ lies in SO(3), but treat it as a general 3 × 3 matrix. In this case, we use the Kalman filter, estimating the relations under a linearity assumption, and the KBR filter with Gaussian kernels for Xₜ and Yₜ. In the second setting, we exploit the fact that Xₜ ∈ SO(3): for the Kalman filter, Xₜ is represented by a quaternion, and for the KBR filter the kernel k(A, B) = Tr[ABᵀ] is used for Xₜ. Table 1 shows the Frobenius norms between the estimated matrices and the true ones. The KBR filter significantly outperforms the Kalman filter, since KBR has the advantage of extracting the complex nonlinear dependence of the observations on the hidden state.

5 Conclusion

We have proposed a general, novel framework for implementing Bayesian inference, in which the prior, likelihood, and posterior are expressed as kernel means in reproducing kernel Hilbert spaces.
The model is expressed in terms of a set of training samples, and inference consists of a small number of straightforward matrix operations. Our approach is well suited to cases where simple parametric models or analytic forms of the density are not available, but samples are easily obtained. We have addressed two applications: Bayesian inference without likelihood, and sequential filtering with a nonparametric state-space model. Future studies could include more comparisons with sampling approaches such as advanced Monte Carlo, and applications to various inference problems such as nonparametric Bayesian models and Bayesian reinforcement learning.

Acknowledgements. KF was supported in part by JSPS KAKENHI (B) 22300098.

¹Due to some differences in the noise model, the results here are not directly comparable with those of [20].

References
[1] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68(3):337–404, 1950.
[2] C.R. Baker. Joint measures and cross-covariance operators. Trans. Amer. Math. Soc., 186:273–289, 1973.
[3] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[4] A. Doucet, N. De Freitas, and N.J. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[5] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. JMLR, 2:243–264, 2001.
[6] K. Fukumizu, F.R. Bach, and M.I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. JMLR, 5:73–99, 2004.
[7] K. Fukumizu, F.R. Bach, and M.I. Jordan. Kernel dimension reduction in regression. Annals of Statistics, 37(4):1871–1905, 2009.
[8] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In Advances in NIPS 20, pages 489–496. MIT Press, 2008.
[9] A. Gretton, K.M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In Advances in NIPS 19, pages 513–520.
MIT Press, 2007.
[10] A. Gretton, K. Fukumizu, C.H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In Advances in NIPS 20, pages 585–592. MIT Press, 2008.
[11] S.J. Julier and J.K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proc. AeroSense: The 11th Intern. Symp. Aerospace/Defence Sensing, Simulation and Controls, 1997.
[12] P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. PNAS, 100(26):15324–15328, 2003.
[13] S. Mika, B. Schölkopf, A. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In Advances in NIPS 11, pages 536–542. MIT Press, 1999.
[14] P. Müller and F.A. Quintana. Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110, 2004.
[15] M. Rudemo. Empirical choice of histograms and kernel density estimators. Scandinavian J. Statistics, 9(2):65–78, 1982.
[16] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, 2002.
[17] S.A. Sisson, Y. Fan, and M.M. Tanaka. Sequential Monte Carlo without likelihoods. PNAS, 104(6):1760–1765, 2007.
[18] L. Song, A. Gretton, and C. Guestrin. Nonparametric tree graphical models via kernel embeddings. In AISTATS 2010, pages 765–772, 2010.
[19] L. Song, A. Gretton, D. Bickson, Y. Low, and C. Guestrin. Kernel belief propagation. In AISTATS 2011.
[20] L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In Proc. ICML 2009, pages 961–968, 2009.
[21] L. Song, S.M. Siddiqi, G. Gordon, and A. Smola. Hilbert space embeddings of hidden Markov models. In Proc. ICML 2010, pages 991–998, 2010.
[22] B.K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G.R.G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
[23] I. Steinwart. On the influence of the kernel on the consistency of support vector machines.
JMLR, 2:67–93, 2001.
[24] M. Sugiyama, I. Takeuchi, T. Suzuki, T. Kanamori, H. Hachiya, and D. Okanohara. Conditional density estimation via least-squares density ratio estimation. In AISTATS 2010, pages 781–788, 2010.
[25] S. Tavaré, D.J. Balding, R.C. Griffiths, and P. Donnelly. Inferring coalescence times from DNA sequence data. Genetics, 145:505–518, 1997.
[26] S. Thrun, J. Langford, and D. Fox. Monte Carlo hidden Markov models: Learning non-parametric models of partially observable stochastic processes. In ICML 1999, pages 415–424, 1999.
[27] V. Monbet, P. Ailliot, and P.F. Marteau. L1-convergence of smoothing densities in non-parametric state space models. Statistical Inference for Stochastic Processes, 11:311–325, 2008.
ShareBoost: Efficient Multiclass Learning with Feature Sharing

Shai Shalev-Shwartz* Yonatan Wexler† Amnon Shashua‡

Abstract

Multiclass prediction is the problem of classifying an object into a relevant target class. We consider the problem of learning a multiclass predictor that uses only a few features, and in particular one in which the number of used features increases sublinearly with the number of possible classes. This implies that features should be shared by several classes. We describe and analyze the ShareBoost algorithm for learning a multiclass predictor that uses few shared features. We prove that ShareBoost efficiently finds a predictor that uses few shared features (if such a predictor exists) and that it has a small generalization error. We also describe how to use ShareBoost for learning a non-linear predictor that has a fast evaluation time. In a series of experiments with natural data sets we demonstrate the benefits of ShareBoost and evaluate its success relative to other state-of-the-art approaches.

1 Introduction

Learning to classify an object into a relevant target class surfaces in many domains, such as document categorization, object recognition in computer vision, and web advertisement. In multiclass learning problems we use training examples to learn a classifier which will later be used to accurately classify new objects. Typically, the classifier first calculates several features from the input object and then classifies the object based on those features. In many cases, it is important that the runtime of the learned classifier be small. In particular, this requires the learned classifier to rely on the values of only a few features. We start with predictors that are based on linear combinations of features. Later, in Section 3, we show how our framework enables learning highly non-linear predictors by embedding non-linearity in the construction of the features.
Requiring the classifier to depend on few features is therefore equivalent to sparseness of the linear weights of the features. In recent years, the problem of learning sparse vectors for linear classification or regression has received significant attention. While, in general, finding the most accurate sparse predictor is known to be NP-hard, two main approaches have been proposed for overcoming the hardness result. The first approach uses the ℓ1 norm as a surrogate for sparsity (e.g., the Lasso algorithm [33] and the compressed sensing literature [5, 11]). The second approach relies on forward greedy selection of features (e.g., Boosting [15] in the machine learning literature and orthogonal matching pursuit in the signal processing community [35]). A popular model for multiclass predictors maintains a weight vector for each one of the classes. In such a case, even if the weight vector associated with each class is sparse, the overall number of used features might grow with the number of classes. Since the number of classes can be rather large, and our goal is to learn a model with an overall small number of features, we would like the weight vectors to share the features with non-zero weights as much as possible. Organizing the weight vectors of all classes as rows of a single matrix, this is equivalent to requiring sparsity of the columns of the matrix.

*School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel. †OrCam Ltd., Jerusalem, Israel. ‡OrCam Ltd., Jerusalem, Israel.

In this paper we describe and analyze an efficient algorithm for learning a multiclass predictor whose corresponding matrix of weights has a small number of non-zero columns. We formally prove that if there exists an accurate matrix with a number of non-zero columns that grows sub-linearly with the number of classes, then our algorithm will also learn such a matrix.
We apply our algorithm to natural multiclass learning problems and demonstrate its advantages over previously proposed state-of-the-art methods. Our algorithm is a generalization of the forward greedy selection approach to sparsity in columns. An alternative approach, which has recently been studied in [26, 12], generalizes the ℓ1-norm-based approach and relies on mixed norms. We discuss the advantages of the greedy approach over mixed norms in Section 1.2.

1.1 Formal problem statement

Let V be the set of objects we would like to classify. For example, V can be the set of gray-scale images of a certain size. For each object v ∈ V, we have a pool of d predefined features, each of which is a real number in [−1, 1]. That is, we can represent each v ∈ V as a vector of features x ∈ [−1, 1]ᵈ. We note that the mapping from v to x can be non-linear and that d can be very large. For example, we can define x so that each element xᵢ corresponds to some patch p ∈ {±1}^{q×q} and a threshold θ, where xᵢ equals 1 if there is a patch of v whose inner product with p is higher than θ. We discuss some generic methods for constructing features in Section 3. From this point onward we assume that x is given. The set of possible classes is denoted by Y = {1, ..., k}. Our goal is to learn a multiclass predictor, which is a mapping from the features of an object into Y. We focus on the set of predictors parametrized by matrices W ∈ R^{k×d} that take the following form:

  h_W(x) = argmax_{y ∈ Y} (Wx)_y .   (1)

That is, the matrix W maps each d-dimensional feature vector into a k-dimensional score vector, and the actual prediction is the index of the maximal element of the score vector. If the maximizer is not unique, we break ties arbitrarily. Recall that our goal is to find a matrix W with few non-zero columns. We denote by W_{·,i} the i'th column of W and use the notation ‖W‖_{∞,0} = |{i : ‖W_{·,i}‖_∞ > 0}| to denote the number of columns of W which are not identically the zero vector.
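The predictor of eqn. (1) and the column-sparsity measure ‖W‖_{∞,0} are straightforward to implement; a minimal NumPy sketch (our own helper names):

```python
import numpy as np

def predict(W, x):
    """h_W(x) = argmax_y (Wx)_y; np.argmax breaks ties by the first maximizer."""
    return int(np.argmax(W @ x))

def col_sparsity(W):
    """||W||_{inf,0}: the number of columns that are not identically zero."""
    return int(np.sum(np.max(np.abs(W), axis=0) > 0))

# k = 2 classes, d = 3 features; the last column (feature) is unused
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
```

With this W, only two of the three features influence the prediction, so the column sparsity is 2.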
More generally, given a matrix W and a pair of norms ‖·‖_p, ‖·‖_r, we denote ‖W‖_{p,r} = ‖(‖W_{·,1}‖_p, ..., ‖W_{·,d}‖_p)‖_r; that is, we apply the p-norm to the columns of W and the r-norm to the resulting d-dimensional vector. The 0−1 loss of a multiclass predictor h_W on an example (x, y) is defined as 1[h_W(x) ≠ y]. That is, the 0−1 loss equals 1 if h_W(x) ≠ y and 0 otherwise. Since this loss function is not convex with respect to W, we use a surrogate convex loss function based on the following easy-to-verify inequalities:

  1[h_W(x) ≠ y] ≤ 1[h_W(x) ≠ y] − (Wx)_y + (Wx)_{h_W(x)}
              ≤ max_{y' ∈ Y} ( 1[y' ≠ y] − (Wx)_y + (Wx)_{y'} )   (2)
              ≤ ln Σ_{y' ∈ Y} e^{1[y' ≠ y] − (Wx)_y + (Wx)_{y'}} .   (3)

We use the notation ℓ(W, (x, y)) to denote the right-hand side (eqn. (3)) of the above. The loss given in eqn. (2) is the multiclass hinge loss [7] used in Support Vector Machines, whereas ℓ(W, (x, y)) is the result of performing a "soft-max" operation: max_x f(x) ≤ (1/p) ln Σ_x e^{p f(x)}, where equality holds for p → ∞. The logistic multiclass loss function ℓ(W, (x, y)) has several nice properties — see for example [39]. Besides being a convex upper bound on the 0−1 loss, it is smooth. The reason we need the loss function to be both convex and smooth is as follows. If a function is convex, then its first-order approximation at any point gives us a lower bound on the function at any other point. When the function is also smooth, the first-order approximation gives us both lower and upper bounds on the value of the function at any other point¹. ShareBoost uses the gradient of the loss function at the current solution (i.e., the first-order approximation of the loss) to make a greedy choice of which column to update. To ensure that this greedy choice indeed yields a significant improvement, we must know that the first-order approximation is indeed close to the actual loss function, and for that we need both lower and upper bounds on the quality of the first-order approximation. Given a training set S = (x₁, y₁), ...
, (x_m, y_m), the average training loss of a matrix W is: L(W) = (1/m) Σ_{(x,y) ∈ S} ℓ(W, (x, y)). We aim at approximately solving the problem

  min_{W ∈ R^{k×d}} L(W)  s.t.  ‖W‖_{∞,0} ≤ s .   (4)

That is, find the matrix W with minimal training loss among all matrices with column sparsity of at most s, where s is a user-defined parameter. Since ℓ(W, (x, y)) is an upper bound on 1[h_W(x) ≠ y], by minimizing L(W) we also decrease the average 0−1 error of W over the training set. In Section 4 we show that for sparse models, a small training error is likely to yield a small error on unseen examples as well. Regrettably, the constraint ‖W‖_{∞,0} ≤ s in eqn. (4) is non-convex, and solving the optimization problem in eqn. (4) is NP-hard [24, 9]. To overcome the hardness result, the ShareBoost algorithm follows the forward greedy selection approach. The algorithm comes with formal generalization and sparsity guarantees (described in Section 4) that make ShareBoost an attractive multiclass learning engine, due to its efficiency (both during training and at test time) and accuracy.

1.2 Related Work

The centrality of the multiclass learning problem has spurred the development of various approaches for tackling the task. Perhaps the most straightforward approach is a reduction from multiclass to binary, e.g., the one-vs-rest or all-pairs constructions. The more direct approach we choose, in particular the multiclass predictors of the form given in eqn. (1), has been extensively studied and has shown great success in practice — see for example [13, 37, 7]. An alternative construction, abbreviated as the single-vector model, shares a single weight vector for all the classes, paired with class-specific feature mappings. This construction is common in generalized additive models [17] and multiclass versions of boosting [16, 28], and has been popularized lately due to its role in prediction with structured output, where the number of classes is exponentially large (see e.g. [31]).
While this approach can yield predictors with a rather mild dependency of the required features on k (see for example the analysis in [39, 31, 14]), it relies on a-priori assumptions on the structure of X and Y. In contrast, in this paper we tackle general multiclass prediction problems, like object recognition or document classification, where it is not straightforward or even plausible how one would go about constructing a-priori good class-specific feature mappings, and therefore the single-vector model is not adequate. The class of predictors of the form given in eqn. (1) can be trained using Frobenius norm regularization (as done by multiclass SVM — see e.g. [7]) or using ℓ1 regularization over all the entries of W. However, as pointed out in [26], these regularizers might yield a matrix with many non-zero columns, and hence will lead to a predictor that uses many features. The alternative approach, and the most relevant to our work, is the use of mixed-norm regularizations like ‖W‖_{∞,1} or ‖W‖_{2,1} [21, 36, 2, 3, 26, 12, 19]. For example, [12] solves the following problem:

  min_{W ∈ R^{k×d}} L(W) + λ‖W‖_{∞,1} .   (5)

This can be viewed as a convex approximation of our objective (eqn. (4)). It is advantageous from an optimization point of view, as one can find the global optimum of a convex problem, but it remains unclear how well the convex program approximates the original goal. For example, in Section C we show cases where mixed-norm regularization does not yield sparse solutions while ShareBoost does. Despite the fact that ShareBoost tackles a non-convex program, and is thus limited to local optimum solutions, we prove in Theorem 2 that under mild

¹Smoothness guarantees that |f(x) − f(x₀) − ∇f(x₀)(x − x₀)| ≤ β‖x − x₀‖² for some β and all x, x₀. Therefore one can approximate f(x) by f(x₀) + ∇f(x₀)(x − x₀), and the approximation error is upper bounded in terms of the distance between x and x₀.
conditions ShareBoost is guaranteed to find an accurate sparse solution whenever such a solution exists, and that the generalization error is bounded, as shown in Theorem 1. We note that several recent papers (e.g. [19]) established exact recovery guarantees for mixed norms, which may seem stronger than our guarantee given in Theorem 2. However, the assumptions in [19] are much stronger than the assumptions of Theorem 2. In particular, they make strong noise assumptions and a group-RIP-like assumption (Assumptions 4.1–4.3 in their paper). In contrast, we impose no such restrictions. We would like to stress that in many generic practical cases, the assumptions of [19] will not hold. For example, when using decision stumps, features will be highly correlated, which violates Assumption 4.3 of [19]. Another advantage of ShareBoost is that its only parameter is the desired number of non-zero columns of W. Furthermore, the whole regularization path of ShareBoost — that is, the curve of accuracy as a function of sparsity — can be obtained in a single run of ShareBoost, which is much easier than obtaining the whole regularization path of the convex relaxation in eqn. (5). Last but not least, ShareBoost can work even when the initial number of features, d, is very large, as long as there is an efficient way to choose the next feature. For example, when the features are constructed using decision stumps, d will be extremely large, but ShareBoost can still be implemented efficiently. In contrast, when d is extremely large, mixed-norm regularization techniques yield challenging optimization problems. As mentioned before, ShareBoost follows the forward greedy selection approach for tackling the hardness of solving eqn. (4). The greedy approach has been widely studied in the context of learning sparse predictors for linear regression. However, in multiclass problems, one needs sparsity of groups of variables (the columns of W).
ShareBoost generalizes the fully corrective greedy selection procedure given in [29] to the case of selection of groups of variables, and our analysis follows similar techniques. Obtaining group sparsity by greedy methods has also been studied recently in [20, 23], and indeed, ShareBoost shares similarities with these works. We differ from [20] in that our analysis does not impose strong assumptions (e.g. group-RIP), so ShareBoost applies to a much wider array of applications. In addition, the specific criterion for choosing the next feature is different. In [20], a ratio between the difference in objective and the difference in costs is used. In ShareBoost, the ℓ1 norm of the gradient matrix is used. For the multiclass problem with log loss, the criterion of ShareBoost is much easier to compute, especially in large-scale problems. [23] suggested many other selection rules that are geared toward the squared loss, which is far from being an optimal loss function for multiclass problems. Another related method is the JointBoost algorithm [34]. While the original presentation in [34] seems rather different from the type of predictors we describe in eqn. (1), it is possible to show that JointBoost in fact learns a matrix W with additional constraints. In particular, the features x are assumed to be decision stumps and each column W_{·,i} is constrained to be αᵢ (1[1 ∈ Cᵢ], ..., 1[k ∈ Cᵢ]), where αᵢ ∈ R and Cᵢ ⊂ Y. That is, the stump is shared by all classes in the subset Cᵢ. JointBoost chooses such shared decision stumps in a greedy manner by applying the GentleBoost algorithm on top of this presentation. A major disadvantage of JointBoost is that in its pure form it should exhaustively search for C among all 2ᵏ possible subsets of Y. In practice, [34] relies on heuristics for finding C at each boosting step. In contrast, ShareBoost allows the columns of W to be arbitrary real vectors, thus allowing "soft" sharing between classes.
Therefore, ShareBoost has the same (or even richer) expressive power compared to JointBoost. Moreover, ShareBoost automatically identifies the relatedness between classes (corresponding to choosing the set C) without having to rely on exhaustive search. ShareBoost is also fully corrective, in the sense that it extracts all the information from the selected features before adding new ones. This leads to higher accuracy while using fewer features, as was shown in our experiments on image classification. Lastly, ShareBoost comes with theoretical guarantees. Finally, we mention that feature sharing is merely one way of transferring information across classes [32], and several alternative ways have been proposed in the literature, such as target embedding [18, 4], shared hidden structure [22, 1], shared prototypes [27], or sharing an underlying metric [38].

2 The ShareBoost Algorithm

ShareBoost is a forward greedy selection approach for solving eqn. (4). Usually, in a greedy approach, we update the weight of one feature at a time. Here, we will update one column of W at a time (since the desired sparsity is over columns). We will choose the column that maximizes the ℓ1 norm of the corresponding column of the gradient of the loss at W. Since W is a matrix, ∇L(W) is the matrix of partial derivatives of L. Denote by ∇ᵣL(W) the r'th column of ∇L(W), that is, the vector (∂L(W)/∂W_{1,r}, ..., ∂L(W)/∂W_{k,r}). A standard calculation shows that

  ∂L(W)/∂W_{q,r} = (1/m) Σ_{(x,y) ∈ S} Σ_{c ∈ Y} ρ_c(x, y) x_r (1[q = c] − 1[q = y]),

where

  ρ_c(x, y) = e^{1[c ≠ y] − (Wx)_y + (Wx)_c} / Σ_{y' ∈ Y} e^{1[y' ≠ y] − (Wx)_y + (Wx)_{y'}} .   (6)

Note that Σ_c ρ_c(x, y) = 1 for all (x, y). Therefore, we can rewrite ∂L(W)/∂W_{q,r} = (1/m) Σ_{(x,y)} x_r (ρ_q(x, y) − 1[q = y]). Based on the above we have

  ‖∇ᵣL(W)‖₁ = (1/m) Σ_{q ∈ Y} | Σ_{(x,y)} x_r (ρ_q(x, y) − 1[q = y]) | .   (7)

Finally, after choosing the column for which ‖∇ᵣL(W)‖₁ is maximized, we re-optimize all the columns of W which were selected so far.
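The quantities in eqns. (6)–(7) — the class probabilities ρ_c(x, y), the surrogate loss of eqn. (3), and the ℓ1 norm of every gradient column — can be computed vectorized. A minimal NumPy sketch (our own helper names, with a standard log-sum-exp stabilization):

```python
import numpy as np

def rho(W, X, Y):
    """rho_c(x, y) of eqn. (6) for every example; X is (m, d), Y is (m,) labels."""
    S = X @ W.T                                             # (m, k) scores (Wx)_c
    Z = S - S[np.arange(len(Y)), Y][:, None]                # (Wx)_c - (Wx)_y
    Z = Z + (np.arange(W.shape[0])[None, :] != Y[:, None])  # add 1[c != y]
    Z = Z - Z.max(axis=1, keepdims=True)                    # stabilize exponentials
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)                 # each row sums to 1

def loss(W, X, Y):
    """Average logistic multiclass loss L(W), eqn. (3), via log-sum-exp."""
    S = X @ W.T
    Z = S - S[np.arange(len(Y)), Y][:, None]
    Z = Z + (np.arange(W.shape[0])[None, :] != Y[:, None])
    m = Z.max(axis=1, keepdims=True)
    return float(np.mean(m[:, 0] + np.log(np.exp(Z - m).sum(axis=1))))

def column_scores(W, X, Y):
    """||grad_r L(W)||_1 of eqn. (7), for all features r at once."""
    k = W.shape[0]
    R = rho(W, X, Y) - np.eye(k)[Y]          # rho_q(x, y) - 1[q = y], per example
    G = (X.T @ R) / len(Y)                   # (d, k): the gradient, transposed
    return np.abs(G).sum(axis=1)             # l1 norm over classes, per column r
```

As a check, at W = 0 with k classes the exponents are just the indicators, so ρ_y(x, y) = 1/(1 + (k−1)e) and the loss is ln(1 + (k−1)e).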
The resulting algorithm is given in Algorithm 1.

Algorithm 1 ShareBoost
1: Initialize: W = 0 ; I = ∅
2: for t = 1, 2, ..., T do
3:   For each class c and example (x, y), define ρ_c(x, y) as in eqn. (6)
4:   Choose the feature r that maximizes the right-hand side of eqn. (7)
5:   I ← I ∪ {r}
6:   Set W ← argmin_W L(W) s.t. W_{·,i} = 0 for all i ∉ I
7: end for

The runtime of ShareBoost is as follows. Steps 3–5 require O(mdk). Step 6 is a convex optimization problem in tk variables and can be performed using various methods. In our experiments, we used Nesterov's accelerated gradient method [25], whose runtime is O(mtk/√ε) for a smooth objective, where ε is the desired accuracy. Therefore, the overall runtime is O(Tmdk + T²mk/√ε). It is interesting to compare this runtime to the complexity of minimizing the mixed-norm regularization objective given in eqn. (5). Since that objective is no longer smooth, the runtime of using Nesterov's accelerated method would be O(mdk/ε), which can be much larger than the runtime of ShareBoost when d ≫ T.

2.1 Variants of ShareBoost

We now describe several variants of ShareBoost. The analysis we present in Section 4 can easily be adapted for these variants as well.

Modifying the Greedy Choice Rule. ShareBoost chooses the feature r which maximizes the ℓ1 norm of the r-th column of the gradient matrix. Our analysis shows that this choice leads to a sufficient decrease of the objective function. However, one can easily develop other ways of choosing a feature which may potentially lead to an even larger decrease of the objective. For example, we can choose the feature r that minimizes L(W) over matrices W with support I ∪ {r}. This will lead to the maximal possible decrease of the objective function at the current iteration. Of course, the runtime of choosing r will now be much larger.
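Algorithm 1 can be sketched end-to-end as follows. This is a minimal illustration, not the authors' implementation: the fully corrective step 6 is solved here with plain gradient descent restricted to the selected columns (rather than Nesterov's method), and we skip already-selected features, which the fully corrective update makes redundant anyway:

```python
import numpy as np

def shareboost(X, Y, k, T, inner_steps=200, lr=0.5):
    """Greedy column selection (Algorithm 1); returns W and the chosen features."""
    m, d = X.shape
    W = np.zeros((k, d))
    onehot = np.eye(k)[Y]
    chosen = []

    def probs(W):
        """rho_c(x, y) of eqn. (6), one row per example."""
        S = X @ W.T
        Z = S - S[np.arange(m), Y][:, None] + (1.0 - onehot)  # 1[c != y] term
        Z = Z - Z.max(axis=1, keepdims=True)
        E = np.exp(Z)
        return E / E.sum(axis=1, keepdims=True)

    for _ in range(T):
        # steps 3-4: l1 norm of each gradient column (eqn. (7))
        G = ((probs(W) - onehot).T @ X) / m        # (k, d) gradient of L at W
        scores = np.abs(G).sum(axis=0)
        scores[chosen] = -np.inf                   # never re-select a feature
        chosen.append(int(np.argmax(scores)))
        # step 6: fully corrective re-optimization over the selected columns
        for _ in range(inner_steps):
            G = ((probs(W) - onehot).T @ X) / m
            W[:, chosen] -= lr * G[:, chosen]
    return W, chosen

# toy data: features 0 and 1 determine the label, the rest are small noise
rng = np.random.default_rng(0)
m, d, k = 300, 10, 3
Y = rng.integers(0, k, size=m)
X = rng.normal(scale=0.3, size=(m, d))
X[:, 0] = (Y == 0) - 0.5
X[:, 1] = (Y == 1) - 0.5
W, chosen = shareboost(X, Y, k, T=3)
acc = float(np.mean(np.argmax(X @ W.T, axis=1) == Y))
```

On this toy problem the two informative features should be among the first columns selected, and the resulting sparse predictor fits the training set well.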
Some intermediate options are to choose the r that minimizes min_{α ∈ R} L(W + α∇ᵣL(W)), or to choose the r that minimizes min_{w ∈ Rᵏ} L(W + w eᵣ†), where eᵣ† is the all-zeros row vector except for a 1 in the r'th position.

Selecting a Group of Features at a Time. In some situations, features can be divided into groups where the runtime of calculating a single feature in a group is almost the same as the runtime of calculating all features in the group. In such cases, it makes sense to choose groups of features at each iteration of ShareBoost. This can easily be done by choosing the group of features J that maximizes Σ_{j ∈ J} ‖∇ⱼL(W)‖₁.

Adding Regularization. Our analysis implies that when |S| is significantly larger than Õ(Tk), ShareBoost will not overfit. When this is not the case, we can incorporate regularization into the objective of ShareBoost in order to prevent overfitting. One simple way is to add to the objective function L(W) a Frobenius norm regularization term of the form λ Σ_{i,j} W_{i,j}², where λ is a regularization parameter. It is easy to verify that this is a smooth and convex function, and therefore we can easily adapt ShareBoost to deal with this regularized objective. It is also possible to rely on other norms, such as the ℓ1 norm or the ℓ∞/ℓ1 mixed norm. However, there is one technicality due to the fact that these norms are not smooth. We can overcome this problem by defining smooth approximations to these norms. The main idea is to first note that for a scalar a we have |a| = max{a, −a}, and therefore we can rewrite the aforementioned norms using max and sum operations. Then, we can replace each max expression with its soft-max counterpart and obtain a smooth version of the overall norm function. For example, a smooth version of the ℓ∞/ℓ1 norm is

  ‖W‖_{∞,1} ≈ (1/β) Σᵈⱼ₌₁ log ( Σᵏᵢ₌₁ (e^{βW_{i,j}} + e^{−βW_{i,j}}) ),

where β ≥ 1 controls the tradeoff between quality of approximation and smoothness.
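The quality of this soft-max smoothing is easy to check numerically: each smoothed column term lies between the true column maximum and that maximum plus log(2k)/β. A small sketch:

```python
import numpy as np

def smooth_inf1(W, beta):
    """(1/beta) * sum_j log sum_i (exp(beta*W_ij) + exp(-beta*W_ij))."""
    return float(np.sum(np.log(np.sum(np.exp(beta * W) + np.exp(-beta * W),
                                      axis=0))) / beta)

def inf1(W):
    """Exact mixed norm ||W||_{inf,1}: sum over columns of the max |entry|."""
    return float(np.abs(W).max(axis=0).sum())

rng = np.random.default_rng(2)
W = rng.uniform(-1.0, 1.0, size=(3, 4))   # k = 3 rows, d = 4 columns
approx = smooth_inf1(W, beta=20.0)
exact = inf1(W)
```

With k = 3 and d = 4, the smoothed value overshoots the exact norm by at most d·log(2k)/β = 4·log(6)/20 ≈ 0.36, and the gap shrinks as β grows.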
3 Non-Linear Prediction Rules

We now demonstrate how ShareBoost can be used for learning non-linear predictors. The main idea is similar to the approach taken by Boosting and SVM. That is, we construct a non-linear predictor by first mapping the original features into a higher-dimensional space and then learning a linear predictor in that space, which corresponds to a non-linear predictor over the original feature space. To illustrate this idea we present two concrete mappings. The first is the decision stumps method, widely used by Boosting algorithms. The second approach shows how to use ShareBoost for learning piece-wise linear predictors and is inspired by the super-vectors construction recently described in [40].

3.1 ShareBoost with Decision Stumps

Let v ∈ Rᵖ be the original feature vector representing an object. A decision stump is a binary feature of the form 1[vᵢ ≤ θ], for some feature i ∈ {1, ..., p} and threshold θ ∈ R. To construct a non-linear predictor we can map each object v into a feature vector x that contains all possible decision stumps. Naturally, the dimensionality of x is very large (in fact, it can even be infinite), and calculating Step 4 of ShareBoost may take forever. Luckily, a simple trick yields an efficient solution. First note that for each i, all stump features corresponding to i can take at most m + 1 distinct values on a training set of size m. Therefore, if we sort the values of vᵢ over the m examples in the training set, we can calculate the value of the right-hand side of eqn. (7) for all possible values of θ in total time O(m). Thus, ShareBoost can be implemented efficiently with decision stumps.

[Figure 1: Motivating super vectors — a one-dimensional function approximated by a piece-wise linear function.]

3.2 Learning Piece-wise Linear Predictors with ShareBoost

To motivate our next construction, let us first consider a simple one-dimensional function estimation problem. Given a sample (x₁, y₁), ...
, (x_m, y_m), we would like to find a function f : R → R such that f(xᵢ) ≈ yᵢ for all i. The class of piece-wise linear functions is a good candidate for the approximation function f; see for example the illustration in Fig. 1. In fact, it is easy to verify that all smooth functions can be approximated by piece-wise linear functions (see for example the discussion in [40]). In general, we can express piece-wise linear vector-valued functions as

  f(v) = Σ𝑞ⱼ₌₁ 1[‖v − vⱼ‖ < rⱼ] (⟨uⱼ, v⟩ + bⱼ),

where q is the number of pieces, (uⱼ, bⱼ) represents the linear function corresponding to piece j, and (vⱼ, rⱼ) represents the center and radius of piece j. This expression can also be written as a linear function over a different domain, f(v) = ⟨w, ψ(v)⟩, where

  ψ(v) = [ 1[‖v − v₁‖ < r₁] [v, 1], ..., 1[‖v − v_q‖ < r_q] [v, 1] ].

In the case of learning a multiclass predictor, we learn a predictor v ↦ W ψ(v), where W is a k × dim(ψ(v)) matrix. ShareBoost can be used for learning W. Furthermore, we can apply the variant of ShareBoost described in Section 2.1 to learn a piece-wise linear model with few pieces (that is, each group of features corresponds to one piece of the model). In practice, we first define a large set of candidate centers by applying some clustering method to the training examples, and second we define a set of possible radii by taking quantiles of the training examples. Then, we train ShareBoost so as to choose a multiclass predictor that uses only few pairs (vⱼ, rⱼ). The advantage of using ShareBoost here is that while it learns a non-linear model, it will try to find a model with few linear "pieces", which is advantageous both in terms of test runtime and in terms of generalization performance.

4 Analysis

In this section we provide formal guarantees for the ShareBoost algorithm. The proofs are deferred to the appendix.
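As a concrete illustration of the piece-wise linear construction of Section 3.2, the feature map (which we denote psi; the centers and radii below are illustrative stand-ins for the clustering/quantile choices described in the text) can be sketched as:

```python
import numpy as np

def psi(v, centers, radii):
    """Map v to [ 1[||v - v_j|| < r_j] * [v, 1]  for each piece j ], concatenated."""
    pieces = []
    for vj, rj in zip(centers, radii):
        active = float(np.linalg.norm(v - vj) < rj)   # indicator of piece j
        pieces.append(active * np.append(v, 1.0))     # gated affine block [v, 1]
    return np.concatenate(pieces)

# two illustrative pieces in R^2
centers = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
radii = [2.0, 2.0]
z = psi(np.array([0.5, 0.5]), centers, radii)
```

A linear predictor W on psi(v) is then exactly a piece-wise linear predictor on v: only the blocks whose ball contains v contribute, and each contributes its own affine function.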
We first show that if the algorithm has managed to find a matrix W with a small number of non-zero columns and a small training error, then the generalization error of W is also small. The bound below is in terms of the 0−1 loss; a related bound, given in terms of the convex loss function, is described in [39]. Theorem 1 Suppose that the ShareBoost algorithm runs for T iterations and let W be its output matrix. Then, with probability of at least 1 − δ over the choice of the training set S we have that P_{(x,y)∼D}[h_W(x) ≠ y] ≤ P_{(x,y)∼S}[h_W(x) ≠ y] + O( √( (Tk log(Tk) log(k) + T log(d) + log(1/δ)) / |S| ) ). Next, we analyze the sparsity guarantees of ShareBoost. As mentioned previously, exactly solving eqn. (4) is known to be NP-hard. The following main theorem gives an interesting approximation guarantee. It tells us that if there exists an accurate solution with small ℓ∞,1 norm, then the ShareBoost algorithm will find a good sparse solution. Theorem 2 Let ε > 0 and let W★ be an arbitrary matrix. Assume that we run the ShareBoost algorithm for T = ⌈4‖W★‖²_{∞,1}/ε⌉ iterations and let W be the output matrix. Then, ‖W‖_{∞,0} ≤ T and L(W) ≤ L(W★) + ε. 5 Experiments In this section we demonstrate the merits (and pitfalls) of ShareBoost by comparing it to alternative algorithms in different scenarios. The first experiment exemplifies the feature sharing property of ShareBoost: we perform experiments with an OCR data set and demonstrate a mild growth of the number of features as the number of classes grows from 2 to 36. The second experiment shows that ShareBoost can construct predictors with state-of-the-art accuracy while requiring only few features, which amounts to fast prediction runtime. The third experiment, which due to lack of space is deferred to Appendix A.3, compares ShareBoost to mixed-norm regularization and to the JointBoost algorithm of [34]. We follow the same experimental setup as in [12].
The main finding is that ShareBoost outperforms the mixed-norm regularization method when the output predictor needs to be very sparse, while mixed-norm regularization can be better in the regime of rather dense predictors. We also show that ShareBoost is both faster and more accurate than JointBoost. Feature Sharing The main motivation for deriving the ShareBoost algorithm is the need for a multiclass predictor that uses only few features, and in particular, the number of features should increase slowly with the number of classes. To demonstrate this property of ShareBoost we experimented with the Char74k data set, which consists of images of digits and letters. We trained ShareBoost with the number of classes varying from 2 classes to the 36 classes corresponding to the 10 digits and 26 capital letters. We calculated how many features were required to achieve a certain fixed accuracy as a function of the number of classes. Due to lack of space, the description of the feature space is deferred to the appendix. Figure 2: The number of features required to achieve a fixed accuracy as a function of the number of classes for ShareBoost (dashed) and the 1-vs-rest (solid-circles). The blue lines are for a target error of 20% and the green lines are for 8%. We compared ShareBoost to the 1-vs-rest approach, where in the latter, we trained each binary classifier using the same mechanism as used by ShareBoost. Namely, we minimize the binary logistic loss using a greedy algorithm. Both methods aim at constructing sparse predictors using the same greedy approach. The difference between the methods is that ShareBoost selects features in a shared manner while the 1-vs-rest approach selects features for each binary problem separately. In Fig. 2 we plot the overall number of features required by both methods to achieve a fixed accuracy on the test set as a function of the number of classes.
As can easily be seen, the increase in the number of required features is mild for ShareBoost but significant for the 1-vs-rest approach. Constructing fast and accurate predictors The goal of this experiment is to show that ShareBoost achieves state-of-the-art performance while constructing very fast predictors. We experimented with the MNIST digit dataset, which consists of a training set of 60,000 digits represented by centered size-normalized 28 × 28 images, and a test set of 10,000 digits. The MNIST dataset has been extensively studied and is considered a standard test for multiclass classification of handwritten digits. The SVM algorithm with a Gaussian kernel achieves an error rate of 1.4% on the test set. The error rates achieved by the most advanced algorithms are below 1% on the test set; see http://yann.lecun.com/exdb/mnist/. In particular, the top MNIST performer [6] uses a feed-forward Neural-Net with 7.6 million connections, which roughly translates to 7.6 million multiply-accumulate (MAC) operations at run-time as well. During training, geometrically distorted versions of the original examples were generated in order to expand the training set, following [30] who introduced a warping scheme for that purpose. The top performance error rate stands at 0.35% at a run-time cost of 7.6 million MAC per test example. Figure 3: The test error rate of ShareBoost on the MNIST dataset as a function of the number of rounds using patch based features. The error rate of ShareBoost with T = 266 rounds stands at 0.71% using the original training set and 0.47% with the expanded training set of 360,000 examples generated by adding five deformed instances per original example and with T = 305 rounds. Fig. 3 displays the convergence curve of the error rate as a function of the number of rounds. Note that the training error is higher than the test error.
This follows from the fact that the training set was expanded with 5 fairly strongly deformed versions of each input, using the method in [30]. As can be seen, fewer than 75 features suffice to obtain an error rate of < 1%. In terms of run-time on a test image, the system requires 305 convolutions of 7×7 templates and 540 dot-product operations, which totals roughly 3.3·10^6 MAC operations, compared to around 7.5·10^6 MAC operations of the top MNIST performer. The error rate of 0.47% is better than that reported by [10], who used a 1-vs-all SVM with a 9-degree polynomial kernel and an expanded training set of 780,000 examples. The number of support vectors (accumulated over the ten separate binary classifiers) was 163,410, giving rise to a run-time 21-fold that of ShareBoost. Moreover, due to the fast convergence of ShareBoost, 75 rounds are enough for achieving less than 1% error. Acknowledgements: We would like to thank Itay Erlich and Zohar Bar-Yehuda for their contribution to the implementation of ShareBoost and to Ronen Katz for helpful comments. References [1] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In International Conference on Machine Learning, 2007. [2] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, pages 41–48, 2006. [3] F. R. Bach. Consistency of the group lasso and multiple kernel learning. J. of Machine Learning Research, 9:1179–1225, 2008. [4] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In NIPS, 2011. [5] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Trans. on Information Theory, 51:4203–4215, 2005. [6] D. C. Ciresan, U. Meier, L. Maria G., and J. Schmidhuber. Deep big simple neural nets excel on handwritten digit recognition. CoRR, 2010. [7] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
[8] A. Daniely, S. Sabato, S. Ben-David, and S. Shalev-Shwartz. Multiclass learnability and the ERM principle. In COLT, 2011. [9] G. Davis, S. Mallat, and M. Avellaneda. Greedy adaptive approximation. Journal of Constructive Approximation, 13:57–98, 1997. [10] D. Decoste and S. Bernhard. Training invariant support vector machines. Mach. Learn., 46:161–190, 2002. [11] D. L. Donoho. Compressed sensing. In Technical Report, Stanford University, 2006. [12] J. Duchi and Y. Singer. Boosting with structural sparsity. In Proc. ICML, pages 297–304, 2009. [13] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973. [14] M. Fink, S. Shalev-Shwartz, Y. Singer, and S. Ullman. Online multiclass learning by interclass hypothesis sharing. In International Conference on Machine Learning, 2006. [15] Y. Freund and R. E. Schapire. A short introduction to boosting. J. of Japanese Society for AI, pages 771–780, 1999. [16] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, pages 119–139, 1997. [17] T. J. Hastie and R. J. Tibshirani. Generalized additive models. Chapman & Hall, 1995. [18] D. Hsu, S. M. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, 2010. [19] J. Huang and T. Zhang. The benefit of group sparsity. Annals of Statistics, 38(4), 2010. [20] J. Huang, T. Zhang, and D. N. Metaxas. Learning with structured sparsity. In ICML, 2009. [21] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. J. of Machine Learning Research, pages 27–72, 2004. [22] Y. L. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of IEEE, pages 2278–2324, 1998. [23] A. Majumdar and R. K. Ward. Fast group sparse classification.
Canadian Journal of Electrical and Computer Engineering, 34(4):136–144, 2009. [24] B. Natarajan. Sparse approximate solutions to linear systems. SIAM J. Computing, pages 227–234, 1995. [25] Y. Nesterov and I. U. E. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Netherlands, 2004. [26] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for ℓ1,∞ regularization. In ICML, page 108, 2009. [27] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In CVPR, 2008. [28] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):1–40, 1999. [29] S. Shalev-Shwartz, T. Zhang, and N. Srebro. Trading accuracy for sparsity in optimization problems with sparsity constraints. Siam Journal on Optimization, 20:2807–2832, 2010. [30] P. Y. Simard, Dave S., and John C. Platt. Best practices for convolutional neural networks applied to visual document analysis. Document Analysis and Recognition, 2003. [31] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003. [32] S. Thrun. Learning to learn: Introduction. Kluwer Academic Publishers, 1996. [33] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc B., 58(1):267–288, 1996. [34] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), pages 854–869, 2007. [35] J. A. Tropp and A. C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. Information Theory, IEEE Transactions on, 53(12):4655–4666, 2007. [36] B. A. Turlach, W. N V., and Stephen J. Wright. Simultaneous variable selection. Technometrics, 47, 2000. [37] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998. [38] E. Xing, A. Y. Ng, M. Jordan, and S. Russell.
Distance metric learning, with application to clustering with side-information. In NIPS, 2003. [39] T. Zhang. Class-size independent generalization analysis of some discriminative multi-category classification. In NIPS, 2004. [40] X. Zhou, K. Yu, T. Zhang, and T. Huang. Image classification using super-vector coding of local image descriptors. Computer Vision – ECCV 2010, pages 141–154, 2010.
Heavy-tailed Distances for Gradient Based Image Descriptors Yangqing Jia and Trevor Darrell UC Berkeley EECS and ICSI {jiayq,trevor}@eecs.berkeley.edu Abstract Many applications in computer vision measure the similarity between images or image patches based on statistics such as oriented gradients. These are often modeled implicitly or explicitly with a Gaussian noise assumption, leading to the use of the Euclidean distance when comparing image descriptors. In this paper, we show that the statistics of gradient based image descriptors often follow a heavy-tailed distribution, which undermines any principled motivation for the use of Euclidean distances. We advocate for the use of a distance measure based on the likelihood ratio test with appropriate probabilistic models that fit the empirical data distribution. We instantiate this similarity measure with the Gamma-compound-Laplace distribution, and show significant improvement over existing distance measures in the application of SIFT feature matching, at relatively low computational cost. 1 Introduction A particularly effective image representation has developed in recent years, formed by computing the statistics of oriented gradients quantized into various spatial and orientation selective bins. SIFT [14], HOG [6], and GIST [17] have been shown to have extraordinary descriptiveness on both instance and category recognition tasks, and have been designed with invariances to many common nuisance parameters. Significant motivation for these architectures arises from biology, where models of early visual processing similarly integrate statistics over orientation selective units [21, 18]. Two camps have developed in recent years regarding how such descriptors should be compared. The first advocates comparison of raw descriptors.
Early works [6] considered the distance of patches to a database from labeled images; this idea was reformulated as a probabilistic classifier in the NBNN technique [4], which has surprisingly strong performance across a range of conditions. Efficient approximations based on hashing [22, 12], tree-based data structures [14, 16], or their combination [19] have been commonly applied, but do not change the underlying ideal distance measure. The other approach is perhaps the more dominant contemporary paradigm, and explores a quantized-prototype approach where descriptors are characterized in terms of the closest prototype, e.g., in a vector quantization scheme. Recently, hard quantization and/or Euclidean-based reconstruction techniques have been shown inferior to sparse coding methods, which employ a sparsity prior to form a dictionary of prototypes. A series of recent publications has proposed prototype formation methods with various sparsity-inducing priors, most commonly the L1 prior [15], as well as schemes for sharing structure in an ensemble-sparse fashion across tasks or conditions [10]. It is informative that sparse coding methods also have a foundation as models for computational visual neuroscience [18]. Virtually all these methods use the Euclidean distance when comparing image descriptors against the prototypes or the reconstructions, which is implicitly or explicitly derived from a Gaussian noise assumption on image descriptors. In this paper, we ask whether this is the case, and further, whether there is a distance measure that better fits the distribution of real-world image descriptors. Figure 1: (a) The histogram of the difference between SIFT features of matching image patches from the Photo Tourism dataset. (b) A typical example of matching patches. The obstruction (wooden branch) in the bottom patch leads to a sparse change to the histogram of oriented gradients (the two red bars).
We begin by investigating the statistics of oriented gradient based descriptors, focusing on the well known Photo Tourism database [25] of SIFT descriptors for simplicity. We evaluate the statistics of corresponding patches, and see that the distribution is heavy-tailed and decidedly non-Gaussian, undermining any principled motivation for the use of Euclidean distances. We consider generative factors that may explain why this is so, and derive a heavy-tailed distribution (which we call the Gamma-compound-Laplace distribution) in a Bayesian fashion, which empirically fits gradient based descriptors well. Based on this, we propose a principled approach using the likelihood ratio test to measure the similarity between data points under an arbitrary parameterized distribution, which includes the previously adopted Gaussian and exponential family distributions as special cases. In particular, we prove that for the heavy-tailed distribution we propose, the corresponding similarity measure leads to a distance metric, theoretically justifying its use as a similarity measurement between image patches. The contribution of this paper is two-fold. We believe ours is the first work to systematically examine the distribution of the noise in terms of oriented gradients for corresponding keypoints in natural scenes. In addition, the likelihood ratio distance measure establishes a principled connection between the distribution of data and various distance measures in general, allowing us to choose the distance measure that corresponds to the true underlying distribution in an application. Our method serves as a building block in both nearest-neighbor distance computation (e.g. NBNN [4]) and codebook learning (e.g. vector quantization and sparse coding), where the Euclidean distance can be replaced by our distance measure for better performance.
It is important to note that in both paradigms listed above – nearest-neighbor distance computation and codebook learning – discriminative variants and structured approaches exist that can optimize a distance measure or codebook for a given task. Learning a distance measure that incorporates both the data distribution and task-dependent information is the subject of future work. 2 Statistics of Local Image Descriptors In this section, we focus on examining the statistics of local image descriptors, using the SIFT feature [14] as an example. Classical feature matching and clustering methods on SIFT features use the Euclidean distance to compare two descriptors. From a probabilistic perspective, this implies a Gaussian noise model for SIFT: given a feature prototype µ (which could be the prototype in feature matching, or a cluster center in clustering), the probability that an observation x matches the prototype can be evaluated by the Gaussian probability p(x|µ) ∝ exp( −‖x − µ‖²₂ / (2σ²) ), (1) where σ is the standard deviation of the noise. Such a Gaussian noise model has been explicitly or implicitly assumed in most algorithms, including vector quantization, sparse coding (on the reconstruction error), etc. Despite the popular use of the Euclidean distance, the distribution of the noise between matching SIFT patches does not follow a Gaussian distribution: as shown in Figure 1(a), the distribution is highly kurtotic and heavy-tailed, indicating that the Euclidean distance may not be ideal. The reason why the Gaussian distribution may not be a good model for the noise of local image descriptors can be better understood from the generative procedure of the SIFT features. Figure 2: The probability values of the GCL, Laplace and Gaussian distributions via ML estimation, compared against the empirical distribution of local image descriptor noises. The figure is in log scale and curves are normalized for better comparison. For details about the data, see Section 4.
Figure 1(b) shows a typical case of matching patches: one patch contains a partially obstructing object while the other does not. The resulting histogram differs only in a sparse subset of the oriented gradients. Further, research on the V1 receptive field [18] suggests that natural images are formed from localized, oriented, bandpass patterns, implying that changing the weight of one such building pattern tends to change only one or a few dimensions of the binned oriented gradients, instead of imposing an isometric Gaussian change on the whole feature. 2.1 A Heavy-tailed Distribution for Image Descriptors We first explore distributions that fit this heavy-tailed property. A common approach to cope with heavy tails is to use the L1 distance, which corresponds to the Laplace distribution p(x|µ; λ) = (λ/2) exp(−λ|x − µ|). (2) However, the tail of the noise distribution is often still heavier than that of the Laplace distribution: empirically, we find the kurtosis of the SIFT noise distribution to be larger than 7 for most dimensions, while the kurtosis of the Laplace distribution is only 3. Inspired by hierarchical Bayesian models [11], instead of fixing the value of λ in the Laplace distribution, we introduce a conjugate Gamma prior over λ with hyperparameters {α, β}, and compute the probability of x given the prototype µ by integrating over λ: p(x|µ; α, β) = ∫ (λ/2) e^(−λ|x−µ|) · (1/Γ(α)) λ^(α−1) β^α e^(−βλ) dλ = (α/2) β^α (|x − µ| + β)^(−α−1). (3) This leads to a heavier tail than the Laplace distribution. We call Equation (3) the Gamma-compound-Laplace (GCL) distribution, in which the hyperparameters α and β control the shape of the tail. Figure 2 shows the empirical distribution of the SIFT noise and the maximum likelihood fits of various models. It can be observed that the GCL distribution fits the heavy-tailed empirical distribution better than the other distributions.
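The closed form in Equation (3) can be checked numerically by integrating the Laplace likelihood against the Gamma prior on λ. The following pure-Python sketch (function names are ours) compares the two; the midpoint rule and the truncation of the integral at a large λ are our own simplifications:

```python
from math import exp, gamma

def gcl_pdf(x, mu, alpha, beta):
    """Closed-form GCL density of Eq. (3):
    (alpha/2) * beta^alpha * (|x - mu| + beta)^(-alpha - 1)."""
    return 0.5 * alpha * beta ** alpha * (abs(x - mu) + beta) ** (-alpha - 1)

def gcl_pdf_numeric(x, mu, alpha, beta, lam_max=60.0, n=200000):
    """Midpoint-rule integration of the Laplace likelihood times the
    Gamma(alpha, beta) prior over lambda, truncated at lam_max."""
    h = lam_max / n
    total = 0.0
    for i in range(n):
        lam = (i + 0.5) * h
        laplace = 0.5 * lam * exp(-lam * abs(x - mu))
        gamma_prior = beta ** alpha * lam ** (alpha - 1) * exp(-beta * lam) / gamma(alpha)
        total += laplace * gamma_prior * h
    return total
```

For instance, with α = 2, β = 1 and |x − µ| = 0.5, both evaluate to (α/2)β^α(1.5)^(−3) = 1/3.375 up to integration error.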
We note that similar approaches have been exploited in the compressive sensing context [9], and are shown to perform better than using the Laplace distribution as the sparse prior in applications such as signal recovery. Further, we note that the statistics of a wide range of other natural image descriptors beyond SIFT features are known to be highly non-Gaussian and have heavy tails [24]. Examples of these include derivative-like wavelet filter responses [23, 20], optical flow and stereo vision statistics [20, 8], shape from shading [3], and so on. In this paper we set aside the general question "what is the right distribution for natural images", and ask specifically whether there is a good distance metric for local image descriptors that takes the heavy-tailed distribution into consideration. Although heuristic approaches, such as taking the square root of the feature values before computing the Euclidean distance, are sometimes adopted to alleviate the effect of heavy tails, to the best of our knowledge there is no principled way to define a distance for heavy-tailed data in computer vision. To this end, we start with a principled similarity measure based on the well known statistical hypothesis test, and instantiate it with the heavy-tailed distributions we propose for local image descriptors. 3 Distance For Heavy-tailed Distributions In statistics, the hypothesis test [7] approach has been widely adopted to test whether a certain statistical model fits the observations. We focus on the likelihood ratio test in this paper. In general, we assume that the data are generated by a parameterized probability distribution p(x|θ), where θ is the vector of parameters. A null hypothesis is stated by restricting the parameter θ to a specific subset Θ0, which is nested in a more general parameter space Θ.
To test whether the restricted null hypothesis fits a set of observations X, a natural choice is the ratio of the maximized likelihood of the restricted model to that of the more general model: Λ(X) = L(θ̂0; X)/L(θ̂; X), (4) where L(θ; X) is the likelihood function, θ̂0 is the maximum likelihood estimate of the parameter within the restricted subset Θ0, and θ̂ is the maximum likelihood estimate in the general case. It is easily verified that Λ(X) always lies in the range [0, 1], as the maximum likelihood estimate in the general case always fits at least as well as in the restricted case, and the likelihood is always nonnegative. The likelihood ratio test is then defined as a statistical test that rejects the null hypothesis when the statistic Λ(X) is smaller than a certain threshold α, such as Pearson's chi-square test [7] for categorical data. Instead of producing a binary decision, we propose to use the score directly as a generative similarity measure between two single data points. Specifically, we assume that each data point x is generated from a parameterized distribution p(x|µ) with unknown prototype µ. Thus, the statement "two data points x and y are similar" can reasonably be represented by the null hypothesis that the two data points are generated from the same prototype µ, leading to the probability q0(x, y|µxy) = p(x|µxy) p(y|µxy). (5) This restricted model is nested in the more general model that generates the two data points from two possibly different prototypes: q(x, y|µx, µy) = p(x|µx) p(y|µy), (6) where µx and µy are not necessarily equal. The similarity between the two data points x and y is then defined by the likelihood ratio statistic between the null hypothesis of equality and the alternative hypothesis of inequality of the prototypes: s(x, y) = [p(x|µ̂xy) p(y|µ̂xy)] / [p(x|µ̂x) p(y|µ̂y)], (7) where µ̂x, µ̂y and µ̂xy are the maximum likelihood estimates of the prototype based on x, y, and {x, y} respectively.
We call (7) the likelihood ratio similarity between x and y, which provides information from a generative perspective: two similar data points, such as two patches of the same real-world location, are more likely to be generated from the same underlying distribution, and thus have a large likelihood ratio value. In the following parts of the paper, we define the likelihood ratio distance between x and y as the square root of the negative logarithm of the similarity: d(x, y) = √(−log s(x, y)). (8) It is worth pointing out that, for an arbitrary distribution p(x), d(x, y) is not necessarily a distance metric, as the triangle inequality may not hold. However, for heavy-tailed distributions, we have the following sufficient condition in the one-dimensional case: Theorem 3.1. If the distribution p(x|µ) can be written as p(x|µ) = exp(−f(x−µ)) b(x), where f(t) is a non-constant quasiconvex function w.r.t. t that satisfies f″(t) ≤ 0 for all t ∈ R\{0}, then the distance defined in Equation (8) is a metric. Proof. First we point out the following lemmas: Lemma 3.2. If a function d(x, y) defined on X×X → R is a distance metric, then √d(x, y) is also a distance metric. Lemma 3.3. If the function f(t) is defined as in Theorem 3.1, then we have: (1) the minimizer µ̂xy = arg min_µ f(x−µ) + f(y−µ) is either x or y; (2) the function g(t) = min(f(t), f(−t)) − f(0) is monotonically increasing and concave on R+ ∪ {0}, and g(0) = 0. With Lemma 3.3, it is easily verified that d²(x, y) = g(|x−y|). Then, via the subadditivity of g(·), we can reach a result stronger than Theorem 3.1: d²(x, y) is itself a distance metric. Thus, d(x, y) is also a distance metric by Lemma 3.2. Note that we keep the square root here in conformity with the classical distance metrics discussed in later parts of the paper. Detailed proofs of the theorem and lemmas can be found in the supplementary material.
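For the one-dimensional Laplace model, the likelihood ratio distance of Eq. (8) reduces to the square root of a scaled L1 distance, since the shared prototype µ̂xy can be taken anywhere between x and y. The following sketch (names and the grid search over µ are ours, purely for illustration) checks this numerically:

```python
from math import log, sqrt

def laplace_loglik(mu, data, lam=1.0):
    """Log-likelihood of data under Laplace(mu, lam): sum of
    log((lam/2) * exp(-lam * |x - mu|))."""
    return sum(log(lam / 2) - lam * abs(x - mu) for x in data)

def lr_distance(x, y, lam=1.0):
    """Likelihood ratio distance of Eq. (8) for a Laplace model,
    maximizing the shared prototype over a grid between x and y."""
    lo, hi = min(x, y), max(x, y)
    grid = [lo + (hi - lo) * i / 1000 for i in range(1001)]
    joint = max(laplace_loglik(mu, [x, y], lam) for mu in grid)
    # separate prototypes: the ML estimate under Laplace is the point itself
    sep = laplace_loglik(x, [x], lam) + laplace_loglik(y, [y], lam)
    return sqrt(sep - joint)  # = sqrt(-log s(x, y))
```

Here sep − joint equals λ|x − y|, so d(x, y) = √(λ|x − y|), a (scaled) square-root-L1 distance.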
As an extreme case, when f″(t) = 0 (t ≠ 0), the distance defined above is the square root of the (scaled) L1 distance. 3.1 Distance for the GCL distribution We use the GCL distribution parameterized by the prototype µ with fixed hyperparameters (α, β) as the SIFT noise model, which leads to the following GCL distance between dimensions of SIFT patches¹: d²(x, y) = (α + 1)(log(|x − y| + β) − log β). (9) The distance between two patches is then defined as the sum of the per-dimension distances. Intuitively, while the Euclidean distance grows linearly w.r.t. the difference between the coordinates, the GCL distance grows logarithmically, suppressing the effect of overly large differences. Further, we have the following theoretical justification, a direct result of Theorem 3.1: Proposition 3.4. The distance d(x, y) defined in (9) is a metric. 3.2 Hyperparameter Estimation for GCL In the following, we discuss how to estimate the hyperparameters α and β of the GCL distribution. Assuming that we are given a set of one-dimensional data D = {x1, x2, · · · , xn} that follows the GCL distribution, we estimate the hyperparameters by maximizing the log likelihood l(α, β; D) = Σ_{i=1}^n [ log(α/2) + α log β − (α + 1) log(|xi| + β) ]. (10) The ML estimation does not have a closed-form solution, so we adopt an alternating optimization and iteratively update α and β until convergence. Updating α with fixed β is achieved by computing α ← n ( Σ_{i=1}^n log(|xi| + β) − n log β )^(−1). (11) Updating β is done via the Newton-Raphson method β ← β − l′(β)/l″(β), where l′(β) = nα/β − Σ_{i=1}^n (α + 1)/(|xi| + β), l″(β) = Σ_{i=1}^n (α + 1)/(|xi| + β)² − nα/β². (12) ¹For more than two data points X = {xi}, it is generally difficult to find the maximum likelihood estimate of µ, as the likelihood is nonconvex.
However, with two data points x and y, it is trivial to see that µ = x and µ = y are the two global optima of the likelihood L(µ; {x, y}), both leading to the same distance expression (9). 3.3 Relation to Existing Measures The likelihood ratio distance is related to several existing methods. In particular, we show that under the exponential family distribution it leads to several widely used distance measures. The exponential family distribution has drawn much attention in recent years. Here we focus on the regular exponential family, where the distribution of data x can be written in the form p(x) = exp(−dB(x, µ)) b(x), (13) where µ is the mean in the exponential family sense, and dB is the regular Bregman divergence corresponding to the distribution [2]. Applying the likelihood ratio distance to this distribution, we obtain the distance d(x, y) = √( dB(x, µ̂xy) + dB(y, µ̂xy) ), (14) since µ̂x ≡ x and dB(x, x) ≡ 0 for any x. We note that this is the square root of the Jensen-Bregman divergence and is known to be a distance metric [1]. Several popular distances can be derived in this way. In the two most common cases, the Gaussian distribution leads to the Euclidean distance, and the multinomial distribution leads to the square root of the Jensen-Shannon divergence, whose first-order approximation is the χ² distance. More generally, for (non-regular) Bregman divergences dB(x, µ) defined as dB(x, µ) = F(x) − F(µ) − (x − µ)F′(µ) with an arbitrary smooth function F, the condition under which the square root of the corresponding Jensen-Bregman divergence is a metric has been discussed in [5]. While the exponential family embraces a set of mathematically elegant distributions whose properties are well known, it fails to capture the heavy-tailed property of various natural image statistics, as the tail of the sufficient statistics is exponentially bounded by definition.
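The alternating updates of Eqs. (11)-(12) in Section 3.2, together with the per-dimension distance of Eq. (9), can be sketched as follows. The positivity guard on the Newton step (halving β rather than letting it go non-positive) is our own safeguard, not described in the paper:

```python
from math import log

def fit_gcl(data, beta0=1.0, outer_iters=20, newton_steps=5):
    """Alternating ML estimation of the GCL hyperparameters (alpha, beta);
    data holds one-dimensional residuals x_i (prototype mu = 0)."""
    n = len(data)
    beta = beta0
    for _ in range(outer_iters):
        # Eq. (11): closed-form update of alpha given beta
        alpha = n / (sum(log(abs(x) + beta) for x in data) - n * log(beta))
        # Eq. (12): Newton-Raphson updates of beta given alpha
        for _ in range(newton_steps):
            lp = n * alpha / beta - sum((alpha + 1) / (abs(x) + beta) for x in data)
            lpp = sum((alpha + 1) / (abs(x) + beta) ** 2 for x in data) - n * alpha / beta ** 2
            step = lp / lpp
            beta = beta / 2 if beta - step <= 0 else beta - step  # positivity guard (ours)
    return alpha, beta

def gcl_distance(x, y, alpha, beta):
    """Per-dimension GCL distance: square root of Eq. (9)."""
    return ((alpha + 1) * (log(abs(x - y) + beta) - log(beta))) ** 0.5
```

The full patch distance sums the squared per-dimension terms over the descriptor dimensions before taking the root.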
The likelihood ratio distance with heavy-tailed distributions thus serves as a principled extension of several popular distance metrics based on the exponential family distribution. Further, there are principled approaches that connect distances with kernels [1], upon which kernel methods such as support vector machines may be built, with the possibly heavy-tailed nature of the data taken into consideration. The idea of computing the similarity between data points based on certain scores has also been seen in the one-shot learning context [26], which uses the average prediction score taking one data point as training and the other as testing, and vice versa. Our method shares a similar merit, but with a generative probabilistic interpretation. Integrating our method with discriminative information or latent application-dependent structures is one future direction. 4 Experiments In this section, we apply the GCL distance to the problem of local image patch similarity measurement using the SIFT feature, a common building block of many applications such as stereo vision, structure from motion, photo tourism, and bag-of-words image classification. 4.1 The Photo Tourism Dataset We used the Photo Tourism dataset [25] to evaluate different similarity measures for the SIFT feature. The dataset contains local image patches extracted from three scenes, namely Notredame, Trevi and Halfdome, reflecting different natural scenarios. Each set contains approximately 30,000 ground-truth 3D points, with each point containing a bag of 2D image patches of size 64 × 64 corresponding to the 3D point. To the best of our knowledge, this is the largest local image patch database with ground-truth correspondences. Figure 3 shows a typical subset of patches from the dataset. The SIFT features are computed using the code in [13].
Specifically, two different normalization schemes are tested: the l2 scheme simply normalizes each feature to length 1, and the thres scheme further thresholds the histogram at 0.2 and rescales the resulting feature to length 1. The latter is the classical hand-tuned normalization designed in the original SIFT paper, and can be seen as a heuristic approach to suppress the effect of heavy tails. Following the experimental setting of [25], we also introduce random jitter effects to the raw patches before SIFT feature extraction by warping each image with the following random warping parameters: position shift, rotation and scale with standard deviations of 0.4 pixels, 11 degrees and 0.12 octaves respectively. Such jitter effects represent the noise we may encounter in real feature detection and localization [25], and allow us to test the robustness of different distance measures. For completeness, the data without jitter effects are also tested and the results reported.
Figure 3: An example of the Photo Tourism dataset. From top to bottom, patches are sampled from Notredame, Trevi and Halfdome respectively. Within each row, every two adjacent patches form a matching pair.
Figure 4: The mean precision-recall curves over 20 independent runs for (a) trevi, (b) notredame and (c) halfdome, comparing the L2, L1, symmKL, chi2 and GCL distances. Solid lines use features normalized in the l2 scheme, dashed lines features normalized in the thres scheme. Best viewed in color.
4.2 Testing Protocol The testing protocol is as follows: 10,000 matching pairs and 10,000 non-matching pairs are randomly sampled from the dataset, and we classify each pair as matching or non-matching based on the distance computed from the different testing metrics. The precision-recall (PR) curve is computed, and two values, namely the average precision (AP), computed as the area under the PR curve, and the false positive rate at 95% recall (95%-FPR), are reported to compare the different distance measures. To test statistical significance, we carry out 20 independent runs and report the mean and standard deviation in the paper. We focus on comparing distance measures that presume the data to lie in a vector space. Five different distance measures are compared, namely the L2 distance, the L1 distance, the symmetrized KL divergence, the χ2 distance, and the GCL distance. The hyperparameters of the GCL distance measure are learned by randomly sampling 50,000 matching pairs from the set Notredame, and performing hyperparameter estimation as described in Section 3.2. They are then fixed and used universally for all other experiments without re-estimation. As a final note, the code for the experiments in the paper will be released to the public for repeatability. 4.3 Experimental Results Figure 4 shows the average precision-recall curves for all the distances on the three datasets respectively. The numerical results on the data with jitter effects are summarized in Table 1, with statistically significant values shown in bold. Table 2 shows the 99% FPR on the data without jitter effects (see footnote 2). We refer to the supplementary materials for other results on the no-jitter case due to space constraints. Notice that the observed trends and conclusions from the experiments with jitter effects are also confirmed on those without jitter effects. The GCL distance outperforms the other base distance measures in all the experiments.
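The 95%-FPR statistic is straightforward to compute from the two empirical distance distributions. The following sketch (our own helper with synthetic distances, not the paper's evaluation code) makes the definition explicit:

```python
import math

def fpr_at_recall(match_d, nonmatch_d, recall=0.95):
    """False positive rate at a given recall level: pick the distance threshold
    that accepts at least `recall` of the matching pairs, then count the
    fraction of non-matching pairs falling at or below that threshold."""
    d = sorted(match_d)
    k = max(0, int(math.ceil(recall * len(d))) - 1)  # index of the accepting threshold
    thr = d[k]
    return sum(1 for v in nonmatch_d if v <= thr) / float(len(nonmatch_d))

# Synthetic example: matches concentrate at small distances, non-matches spread out.
matches = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
non_matches = [0.5, 1.5, 2.0, 3.0]
```

Lowering the required recall moves the threshold left and can only reduce the false positive rate, which is why the stricter 99%-FPR is used for the easier no-jitter setting.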
Table 1: The average precision (above) and the false positive rate at 95% recall (below) of different distance measures on the Photo Tourism datasets, with random jitter effects. A larger AP score and a smaller FPR score are desired. The l2 and thres in the leftmost column indicate the two different feature normalization schemes.

AP           L2          L1          SymmKL      χ2          GCL
trevi-l2     96.61±0.16  98.08±0.10  97.40±0.12  97.69±0.11  98.33±0.09
trevi-thres  97.23±0.12  98.05±0.10  97.40±0.11  97.71±0.11  98.21±0.10
notre-l2     95.90±0.14  97.83±0.10  96.96±0.12  97.31±0.11  98.19±0.10
notre-thres  96.76±0.13  97.84±0.10  97.05±0.12  97.39±0.11  98.07±0.10
halfd-l2     94.51±0.16  96.75±0.11  94.87±0.15  95.42±0.14  98.19±0.10
halfd-thres  95.55±0.14  96.90±0.11  95.08±0.16  95.64±0.14  97.21±0.10

95%-FPR      L2          L1          SymmKL      χ2          GCL
trevi-l2     23.61±1.14  12.71±0.83  17.58±0.96  15.85±0.74  10.52±0.73
trevi-thres  19.23±0.84  13.08±0.91  17.57±0.98  15.66±0.77  11.21±0.71
notre-l2     26.43±1.03  14.27±1.09  19.56±1.00  17.70±1.08  11.58±1.00
notre-thres  21.88±1.21  14.49±1.25  19.07±1.11  17.38±0.95  12.09±1.11
halfd-l2     36.34±0.98  24.11±1.13  34.55±0.96  31.62±1.09  19.76±1.03
halfd-thres  31.44±1.20  23.14±0.13  33.71±1.05  30.56±1.13  20.74±1.16

Table 2: The false positive rate at 99% recall of different distance measures on the Photo Tourism datasets without jitter effects.

99%-FPR      L2          L1          SymmKL      χ2          GCL
trevi-l2     11.36±1.65  3.44±0.75   8.02±1.04   8.02±1.08   2.42±0.58
trevi-thres  7.14±1.31   3.24±0.69   7.93±1.11   5.06±0.97   2.23±0.48
notre-l2     19.69±1.93  6.09±0.72   14.81±1.66  9.40±1.04   4.16±0.57
notre-thres  11.9±1.19   5.17±0.58   13.11±1.39  8.24±1.12   3.72±0.56
halfd-l2     44.55±9.42  34.01±2.10  43.51±1.07  40.53±1.12  26.06±2.25
halfd-thres  40.58±1.63  32.30±2.28  42.51±1.22  39.28±1.49  26.36±2.50

Footnote 2: As the accuracy for the no-jitter case is much higher in general, the 99% FPR is reported instead of the 95% FPR used in the jitter case.

Notice that the hyperparameters learned from the notredame set perform well on the other two datasets as well,
indicating that they capture the general statistics of the SIFT feature, rather than dataset-dependent statistics. Also, the thresholding and renormalization of SIFT features does provide a significant improvement for the Euclidean distance, but its effect is less significant for the other distances. In fact, the hard thresholding may introduce artificial noise to the data, counterbalancing the positive effect of reducing the tail, especially when the distance measure is already able to cope with heavy tails. We argue that the key factor leading to the performance improvement is taking the heavy-tailed property of the data into consideration, rather than any other aspect of the measures. For instance, the Laplace distribution has a heavier tail than the distributions corresponding to the other base distance measures, and a better performance of the corresponding L1 distance over those distance measures is observed, showing a positive correlation between tail heaviness and performance. Notice that the tails of the distributions assumed by the baseline distances are still exponentially bounded, and performance is further increased by introducing genuinely heavy-tailed distributions such as the GCL distribution in our experiment. 5 Conclusion While visual representations based on oriented gradients have been shown to be effective in many applications, scant attention has been paid to the heavy-tailed nature of their distributions, undermining the use of distance measures based on exponentially bounded distributions. In this paper, we advocate the use of distance measures that are derived from heavy-tailed distributions, where the derivation can be done in a principled manner using the log-likelihood ratio test. In particular, we examine the distribution of local image descriptors, and propose the Gamma-compound-Laplace (GCL) distribution and the corresponding distance for image descriptor matching.
Experimental results have shown that this yields more accurate feature matching than the existing baseline distance measures.
References
[1] A Agarwal and H Daume III. Generative kernels for exponential families. In AISTATS, 2011.
[2] A Banerjee, S Merugu, I Dhillon, and J Ghosh. Clustering with Bregman divergences. JMLR, 6:1705–1749, 2005.
[3] JT Barron and J Malik. High-frequency shape and albedo from shading using natural image statistics. In CVPR, 2011.
[4] O Boiman, E Shechtman, and M Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[5] P Chen, Y Chen, and M Rao. Metrics defined by Bregman divergences. Communications in Mathematical Sciences, 6(4):915–926, 2008.
[6] N Dalal and B Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[7] AC Davison. Statistical models. Cambridge Univ Press, 2003.
[8] J Huang, AB Lee, and D Mumford. Statistics of range images. In CVPR, 2000.
[9] S Ji, Y Xue, and L Carin. Bayesian compressive sensing. IEEE Trans. Signal Processing, 56(6):2346–2356, 2008.
[10] Y Jia, M Salzmann, and T Darrell. Factorized latent spaces with structured sparsity. In NIPS, 2010.
[11] D Koller and N Friedman. Probabilistic graphical models. MIT Press, 2009.
[12] B Kulis and T Darrell. Learning to hash with binary reconstructive embeddings. In NIPS, 2009.
[13] S Lazebnik, C Schmid, and J Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[14] D Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[15] J Mairal, F Bach, J Ponce, and G Sapiro. Online learning for matrix factorization and sparse coding. JMLR, 11:19–60, 2010.
[16] AW Moore. The anchors hierarchy: using the triangle inequality to survive high dimensional data. In UAI, 2000.
[17] A Oliva and A Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
[18] B Olshausen and D Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[19] M Ozuysal and P Fua. Fast keypoint recognition in ten lines of code. In CVPR, 2007.
[20] J Portilla, V Strela, MJ Wainwright, and EP Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Processing, 12(11):1338–1351, 2003.
[21] M Riesenhuber and T Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019–1025, 1999.
[22] G Shakhnarovich, P Viola, and T Darrell. Fast pose estimation with parameter-sensitive hashing. In ICCV, 2003.
[23] EP Simoncelli. Statistical models for images: compression, restoration and synthesis. In Asilomar Conference on Signals, Systems & Computers, 1997.
[24] Y Weiss and WT Freeman. What makes a good model of natural images? In CVPR, 2007.
[25] S Winder and M Brown. Learning local image descriptors. In CVPR, 2007.
[26] L Wolf, T Hassner, and Y Taigman. The one-shot similarity kernel. In ICCV, 2009.
An Application of Tree-Structured Expectation Propagation for Channel Decoding Pablo M. Olmos∗, Luis Salamanca∗, Juan J. Murillo-Fuentes∗, Fernando Pérez-Cruz† ∗Dept. of Signal Theory and Communications, University of Sevilla, 41092 Sevilla, Spain {olmos,salamanca,murillo}@us.es †Dept. of Signal Theory and Communications, University Carlos III in Madrid, 28911 Leganés (Madrid), Spain fernando@tsc.uc3m.es Abstract We show an application of a tree structure for approximate inference in graphical models using the expectation propagation algorithm. These approximations are typically used over graphs with short-range cycles. We demonstrate that these approximations also help in sparse graphs with long-range loops, such as the ones used in coding theory to approach channel capacity. For asymptotically large sparse graphs, the expectation propagation algorithm together with the tree structure yields a completely disconnected approximation to the graphical model but, for finite-length practical sparse graphs, the tree-structure approximation to the code graph provides accurate estimates for the marginal of each variable. Furthermore, we propose a new method for constructing the tree structure on the fly that might be more amenable for sparse graphs with general factors. 1 Introduction Belief propagation (BP) has become the standard procedure to decode channel codes, since in 1996 MacKay [7] proposed BP to decode codes based on low-density parity-check (LDPC) matrices with linear complexity. A rate r = k/n LDPC code can be represented as a sparse factor graph with n variable nodes (typically depicted on the left side) and n − k factor nodes (on the right side), in which the number of edges is linear in n [15]. The first LDPC codes [6] presented a regular structure, in which all variables and factors had, respectively, ℓ and r connections, i.e. an (ℓ, r) LDPC code.
But the analysis of their limiting decoding performance, when n tends to infinity for a fixed rate, showed that they do not approach the channel capacity [15]. To improve the performance of regular LDPC codes, we can define an (irregular) LDPC ensemble as the set of codes randomly generated according to the degree distribution (DD) from the edge perspective as follows:

λ(x) = \sum_{i=1}^{l_{max}} λ_i x^{i−1} and ρ(x) = \sum_{j=1}^{r_{max}} ρ_j x^{j−1},

where the fraction of edges with left degree i (from variables to factors) is given by λ_i and the fraction of edges with right degree j (from factors to variables) is given by ρ_j. The left (right) degree of an edge is the degree of the variable (factor) node it is connected to. The rate of the code is then given by r = 1 − (\int_0^1 ρ(x)dx)/(\int_0^1 λ(x)dx), and the total number of edges by E = n/(\sum_i λ_i/i). Although optimized irregular LDPC codes can achieve the channel capacity with a decoder based on BP [15], they present several drawbacks. First, the error floor in those codes increases significantly, because capacity-achieving LDPC ensembles with BP decoding have a large fraction of variables with two connections and they present low minimum distances. Second, the maximum number of ones per column l_{max} tends to infinity to approach capacity. These problems limit the BP decoding performance of capacity-approaching codes when we work with the finite-length codes used in real applications. Approximate inference in graphical models can be solved using more accurate methods that significantly improve the BP performance, especially for dense graphs with short-range loops. A non-exhaustive list of methods is: generalized BP [22], expectation propagation (EP) [10], fractional BP [19], linear programming [17] and power EP [8]. A detailed list of contributions for approximate inference can be found in [18] and the references therein.
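Since ∫₀¹ λ(x)dx = Σ_i λ_i/i (and likewise for ρ), the design rate and the edge count follow directly from the DD coefficients. A small sketch of this computation (our own helper names; the first ensemble is the regular (3, 6) one, the second is the irregular DD used later in the paper):

```python
def design_rate(lam, rho):
    """r = 1 - (sum_j rho_j/j) / (sum_i lam_i/i); lam[i] is the fraction of
    edges with left degree i (edge perspective), rho[j] likewise on the right."""
    int_lam = sum(c / float(i) for i, c in lam.items())
    int_rho = sum(c / float(j) for j, c in rho.items())
    return 1.0 - int_rho / int_lam

def num_edges(n, lam):
    """Total number of edges E = n / (sum_i lam_i / i)."""
    return n / sum(c / float(i) for i, c in lam.items())

# Regular (3, 6) ensemble: lambda(x) = x^2, rho(x) = x^5, i.e. all edges have
# left degree 3 and right degree 6.
r_reg = design_rate({3: 1.0}, {6: 1.0})   # 1 - (1/6)/(1/3) = 1/2
E_reg = num_edges(1024, {3: 1.0})         # every variable has 3 edges, so E = 3n
```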
But it is a common belief that BP is sufficiently accurate to decode LDPC codes and that other approximate inference algorithms would not outperform BP decoding significantly, if at all. In this paper, we challenge that belief and show that more accurate approximate inference algorithms for graphical models can also improve the BP decoding performance for LDPC codes, which are sparse graphical models with long-range loops. We particularly focus on tree-structured approximations for inference in graphical models [9] using the expectation propagation (EP) algorithm, because it presents a simple algorithmic implementation for decoding LDPC codes transmitted over the binary erasure channel (BEC)¹, although other higher-order inference algorithms might be suitable for this problem as well, since a connection between some of them was proven in [20]. We show the results for the BEC because it has a simple structure amenable to deeper analysis, and most of its properties carry over to actual communications channels [14]. The EP with a tree-structured approximation can be presented in a similar way as the BP decoder for an LDPC code over the BEC [11], with similar run-time complexity. We show that a decoder based on EP with a tree-structured approximation converges to the BP solution in the asymptotic limit n → ∞; for finite-length graphs, however, the performance is improved significantly [13, 11]. For finite graphs, the presence of cycles in the graph degrades the BP estimate, and we show that the EP solution with a tree-structured approximation is less sensitive to the presence of such loops and provides more accurate estimates for the marginal of each bit. This makes the expectation propagation with a tree-structured approximation (for short, we refer to this algorithm as tree-structured EP or TEP) a more practical decoding algorithm for finite-length LDPC codes.
Besides, the analysis of the application of tree-structured EP to channel decoding over the BEC leads to another way of fixing the approximating tree structure, different from the one proposed in [9] for dense codes with positive correlation potentials. In channel coding, the factors of the graph are parity checks and the correlations are high, but they can change from positive to negative with the flip of a single variable. Therefore, the pair-wise mutual information is zero for any two variables (unless the factor only contains two variables) and we could not define a prefixed tree structure with the algorithm in [9]. In contrast, we propose a tree structure that is learnt on the fly from the graph itself, hence it might be amenable for other potentials and sparser graphs. The rest of the paper is organized as follows. In Section 2, we present the peeling decoder, which is the interpretation of the BP algorithm for LDPC codes over the BEC, and show how it can be extended to incorporate the tree-structured EP decoding procedure. In Section 3, we analyze the TEP decoder performance for LDPC codes in both the asymptotic and the finite-length regimes. We provide an estimate of the TEP decoder error rate for a given LDPC ensemble. We conclude the paper in Section 5. 2 Tree-structured EP and the peeling decoder The BP algorithm was proposed as a message-passing algorithm [5] but, for the BEC, it exhibits a simpler formulation, in which the non-erased variable nodes are removed from the graph in each iteration [4], because we either have absolute certainty about the received bit (0 or 1) or complete ignorance (?). The BP under this interpretation is referred to as the peeling decoder (PD) [3, 15] and is easily described using the factor graph of the code. The first step is to initialize the graph by removing all the variable nodes corresponding to non-erased bits.
When removing a one-valued non-erased variable node, the parities of the factors it was connected to are flipped. After the initialization stage, the algorithm proceeds over the resulting graph by removing a factor and a variable node in each step:
1. It looks for any factor linked to a single variable (a check node of degree one). The peeling decoder copies the parity of this factor into the variable node and removes the factor.
2. It removes the variable node that we have just de-erased. If the variable was assigned a one, it changes the parity of the factors it was connected to.
3. It repeats Steps 1 and 2 until all the variable nodes have been removed (successful decoding), or until there are no degree-one factors left (unsuccessful decoding).
We illustrate an example of the PD for a 1/2-rate code with four variables in Figure 1. The first and last bits have not been erased, and when we remove them from the graph, the second factor is singly connected to the third variable, which can now be de-erased (Figure 1(b)). Finally, the first factor is singly connected to the second variable, decoding the transmitted codeword (Figure 1(c)).
Figure 1: Example of the PD algorithm for LDPC channel decoding in the erasure channel.
The analysis of the PD for fixed-rate codes, proposed in [3, 4], makes it possible to compute its threshold over the BEC. This result can be used to optimize the DD to build irregular LDPC codes that, as n tends to infinity, approach the channel capacity.
Footnote 1: The BEC allows binary transmission, in which the bits are either erased with probability ϵ or arrive without error otherwise. The capacity of this channel is 1 − ϵ and is achieved with equiprobable inputs [2].
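The peeling procedure is only a few lines of code. Below is our own minimal sketch (not the authors' implementation); the parity-check structure used for the example, P1 = {V1, V2, V3} and P2 = {V3, V4}, is our reading of the Figure 1 graph:

```python
def peel(checks, y):
    """Peeling decoder over the BEC. `checks` is a list of sets of variable
    indices (each parity check sums to 0 mod 2); `y` is the received vector,
    with None marking an erasure. Returns the de-erased vector, or None if the
    decoder gets stuck with no degree-one check left."""
    y = list(y)
    live = []
    for c in checks:                 # initialization: fold known bits into parities
        vs, p = set(c), 0
        for v in list(vs):
            if y[v] is not None:
                vs.discard(v)
                p ^= y[v]
        if vs:
            live.append([vs, p])
    while any(b is None for b in y):
        deg1 = next((c for c in live if len(c[0]) == 1), None)
        if deg1 is None:
            return None              # unsuccessful decoding
        v = deg1[0].pop()
        y[v] = deg1[1]               # copy the check's parity into the variable
        for c in live:
            if v in c[0]:            # remove v everywhere, flipping parity if v == 1
                c[0].discard(v)
                c[1] ^= y[v]
        live = [c for c in live if c[0]]
    return y

# Figure 1 example (our reading): received (0, ?, ?, 1) decodes to (0, 1, 1, 1).
decoded = peel([{0, 1, 2}, {2, 3}], [0, None, None, 1])
```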
However, as already discussed, these codes have higher error floors, because they contain many variables with only two edges, and they usually show poor finite-length performance due to the slow convergence to the asymptotic limit [15]. 2.1 The TEP decoder The tree-structured EP overlaps a tree over the variables of the graph to further impose pairwise marginal constraints. In the procedure proposed in [9], the tree was defined by measuring the mutual information between pairs of variables before running the EP algorithm. The mutual information between a pair of variables is zero for parity-check factors with more than two variables, so we need to define the structure in another way. We propose to define the tree structure on the fly. Let us assume that we run the PD of the previous section and it yields an unsuccessful decoding. Any factor of degree two in the remaining graph either tells us that the connected variables are equal (if the parity check is zero) or opposite (if the parity check is one). We should link these two variables by the tree structure, because their joint marginal would provide further information to the remaining erased variables in the graph. The proposed algorithm actually replaces one variable by the other and iterates until a factor of degree one is created and more variables can be de-erased. When this happens, a tree structure has been created in which the pairwise marginal constraint provides information that was not available with single-marginal approximations. The TEP decoder can be explained in a similar fashion to the PD, in which instead of looking only for degree-one factors, we look for factors of degree one and two. We initialize the TEP decoder, as the PD, by removing all known variable nodes and updating the parity checks for the variables that are one. The TEP then removes a variable and a factor per iteration:
1. It looks for a factor of degree one or two.
2.
If a factor of degree one is found, the TEP recovers the associated variable, performing Steps 1 and 2 of the PD previously described.
3. If a factor of degree two is found, the decoder removes it from the graph together with one of the variable nodes connected to it and the two associated edges. Then, it reconnects to the remaining variable node all the factors that were connected to the removed variable node. The parities of the factors reconnected to the remaining variable node are reversed if the removed factor had parity one.
4. Steps 1-3 are repeated until all the variable nodes have been removed (successful decoding), or the graph runs out of factors of degree one or two (unsuccessful decoding).
The process of removing a factor of degree two is sketched in Figure 2. First, the variable V1 inherits the connections from V2 (solid lines), see Figure 2(b). Finally, the factor P1 and the variable V2 can be removed (Figure 2(c)), because they have no further implication in the decoding process. V2 is de-erased once V1 is de-erased. The TEP removes a factor and a variable node per iteration, as the PD does. The removal of a factor and a variable does not increase the complexity of the TEP decoder compared to the BP algorithm: both the TEP and BP algorithms have complexity O(n).
Figure 2: In (a) we show two variable nodes, V1 and V2, that share a factor of degree two, P1. In (b), V1 inherits the connections of V2 (solid lines). In (c), we show the graph once P1 and V2 have been removed. If P1 has parity one, the parities of P2 and P3 are reversed.
By removing factors of degree two, we eventually create factors of degree one whenever we find a scenario equivalent to the one depicted in Figure 3. Consider two variable nodes connected to a factor of degree two that also share another factor of degree three, as illustrated in Figure 3(a).
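Step 3 is a variable substitution: a degree-two check over (a, b) with parity p says b = a ⊕ p, so b can be eliminated everywhere. A minimal sketch of this single step (our own encoding, where each check is a (set, parity) pair; the example reproduces the Figure 3 configuration as we read it):

```python
def remove_degree_two(checks):
    """One TEP step: find a degree-two check ({a, b}, p), eliminate b by
    substituting b = a XOR p into every other check, and drop the check.
    Returns (new_checks, (b, a, p)), or (checks, None) if no degree-two check."""
    for i, (vs, p) in enumerate(checks):
        if len(vs) == 2:
            a, b = sorted(vs)
            rest = []
            for j, (ws, q) in enumerate(checks):
                if j == i:
                    continue
                if b in ws:
                    ws = set(ws) - {b}
                    q ^= p           # substituting b = a XOR p flips the parity by p
                    ws ^= {a}        # toggle a: if a was already present, a XOR a drops out
                rest.append((ws, q))
            return rest, (b, a, p)
    return checks, None

# Figure 3 scenario (our encoding): P3 = ({1, 2}, 0) has degree two, and V1, V2
# also share the degree-three check P4 = ({1, 2, 3}, 1). Removing P3 makes V1
# appear twice in P4 and cancel, collapsing P4 to the degree-one check ({3}, 1).
new_checks, rel = remove_degree_two([({1, 2}, 0), ({1, 2, 3}, 1)])
```

The returned relation (b, a, p) is kept for back-substitution: once V1 is de-erased, V2 follows as V1 ⊕ p, exactly as in the Figure 2 discussion.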
When we remove the factor P3 and the variable node V2, the factor P4 is now degree one, as illustrated in Figure 3(b). At the beginning of the decoding algorithm, it is unlikely that the two variable nodes in a factor of degree two also share a factor of degree three. However, as we remove variables and factors, the probability of this event grows. Note that, when we remove a factor of degree two connected to variables V1 and V2, in terms of the EP algorithm we are including a pairwise factor between both variables. Therefore, the TEP-equivalent tree structure is not fixed a priori and we construct it along the decoding process. Also, the steps of the TEP decoder can be presented as a linear combination of the columns of the parity-check matrix of the code, hence its solution is independent of the processing order.
Figure 3: In (a), the variables V1 and V2 are connected to a degree-two factor, P3, and they also share a factor of degree three, P4. In (b) we show the graph once the TEP has removed P3 and V2.
3 TEP analysis: expected graph evolution We now sketch the proof of why the TEP decoder outperforms BP. The actual proof can be found in [12] (available as supplementary material). Both the PD and the TEP decoder sequentially reduce the LDPC graph by removing check nodes of degree one or two. As a consequence, the decoding process yields a sequence of residual graphs and their associated DDs. The DD sequence of the residual graphs constitutes a sufficient statistic to analyze this random process [1]. In [3, 4], the sequence of residual graphs follows a typical path or expected evolution [15]. The authors make use of Wormald's theorem in [21] to describe this path as the solution of a set of differential equations and to characterize the typical deviation from it.
For the PD, we have an analytical form for the evolution of the number of degree-one factors as the decoding progresses, r_1(τ, ϵ), as a function of the decoding time τ and the erasure rate ϵ. The PD threshold ϵ^BP is the maximum ϵ for which r_1(τ, ϵ) ≥ 0, ∀τ. In [1, 15], the authors show that particular decoding realizations are Gaussian distributed around r_1(τ, ϵ), with a variance of order α^BP/n, where α^BP can be computed from the LDPC DD. They also provide the following approximation to the block error probability of elements of an LDPC ensemble:

E_{LDPC[λ(x),ρ(x),n]} [P_W^{BP}(C, ϵ)] ≈ Q( √n (ϵ^BP − ϵ) / α^BP ),   (1)

where P_W^{BP}(C, ϵ) is the average block error probability for the code C ∈ LDPC[λ(x), ρ(x), n]. For the TEP decoder the analysis follows a similar path, but its derivation is more involved. For arbitrarily large codes, the expected graph evolution during TEP decoding is computed in [12] with a set of non-linear differential equations. They track the expected progression of the fraction of edges with left degree i, l_i(τ) for i = 1, . . . , l_max, and right degree j, r_j(τ) for j = 1, . . . , r_max, as the TEP decoder proceeds, where τ is a normalized time: if u is the TEP iteration index and E is the total number of edges in the original graph, then τ = u/E. By Wormald's theorem [21], any real decoding realization does not differ from the solution of such equations by a factor larger than O(E^{−1/6}). The TEP threshold, ϵ^TEP, is found as the maximum erasure rate ϵ such that

r^{TEP}(τ) := r_1(τ) + r_2(τ) > 0, ∀τ ∈ [0, n/E],   (2)

where r^{TEP}(τ) is computed by solving the system of differential equations in [12], and ϵ^TEP ≥ ϵ^BP. Let us illustrate the accuracy of the model derived to analyze the TEP decoder properties.
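The estimate in (1) above is a one-liner once Q is written in terms of the complementary error function, Q(x) = erfc(x/√2)/2. A sketch of ours follows; the threshold 0.4294 and the value α^BP = 0.56036 are the figures quoted elsewhere in the paper for the regular (3, 6) ensemble, used here purely as illustration:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0, 1) > x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bp_block_error(n, eps, eps_bp=0.4294, alpha_bp=0.56036):
    """Finite-length approximation (1) to the BP block error probability."""
    return q_func(math.sqrt(n) * (eps_bp - eps) / alpha_bp)
```

At ϵ = ϵ^BP the estimate is 0.5 regardless of n; below the threshold it decays with √n, which is the slow convergence to the asymptotic limit mentioned earlier.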
In Figure 4(a), for a regular (3, 6) code with n = 2^17 and ϵ = 0.415, we compare the solution of the system of differential equations for R_1(τ) = r_1(τ)E and R_2(τ) = r_2(τ)E, depicted by thick solid lines, with 30 simulated decoding trajectories, depicted by thin dashed lines. We can see that the empirical curves are tightly distributed around the predicted curves. Indeed, the distribution tends very quickly with n to a Gaussian [1, 15]. All curves are plotted with respect to the evolution of the normalized size of the graph at each time instant, denoted by e(τ), so that the decoding process starts on the right, e(τ = 0) ≈ 0.415, and, if successful, finishes at e(τ_END) = 0. In Figure 4(b) we reproduce, with identical conclusions, the same experiment for the irregular LDPC code defined by the DD:

λ(x) = (1/6)x + (5/6)x^3,   (3)
ρ(x) = x^5.   (4)

For the TEP decoder to perform better than the BP decoder, it needs to significantly increase the number of check nodes of degree one that are created, which happens if two variable nodes share a degree-two check node together with a degree-three check node, as illustrated earlier in Figure 3(a). In [12], we compute the probability that two variable nodes that share a check node of degree two also share another check node (scenario S). If we randomly choose a particular degree-two check node at time τ, the probability of scenario S is:

P_S(τ) = (l_avg(τ) − 1)^2 (r_avg(τ) − 1) / (e(τ)E),   (5)

where l_avg(τ) and r_avg(τ) are, respectively, the average left and right edge degrees, and e(τ) is the fraction of remaining edges in the graph. As the TEP decoder progresses, l_avg(τ) increases, because the remaining variables in the graph inherit the connections of the variables that have been removed, and e(τ) decreases, therefore creating new factors of degree one and improving the BP/PD performance. However, note that in the limit n → ∞, P_S(τ = 0) = 0. Therefore, to improve the PD solution in this regime we require that l_avg(τ′) → ∞ for some τ′.
The solution of the TEP decoder differential equations does not satisfy this property. For instance, in Figure 5(a), we plot the expected evolution of r_1(τ) and r_2(τ) for n → ∞ and the (3, 6) regular LDPC ensemble when we are just above the BP threshold for this code, which is ϵ^BP ≈ 0.4294. Unlike in Figure 4(a), r_1(τ) and r_2(τ) go to zero before e(τ) does: the TEP decoder gets stuck before completing the decoding process. In Figure 5(b), we include the computed evolution of l_avg(τ). As shown, the fraction of degree-two check nodes vanishes before l_avg(τ) becomes infinite. We conclude that, in the asymptotic limit n → ∞, the EP with tree structure is not able to outperform the BP solution, which is optimal since LDPC codes become cycle-free [15].
Figure 4: In (a), for a regular (3, 6) code with n = 2^17 and ϵ = 0.415, we compare the solution of the system of differential equations for R_1(τ) = r_1(τ)E (◁) and R_2(τ) = r_2(τ)E (⋄) (thick solid lines) with 30 simulated decoding trajectories (thin dashed lines). In (b), we reproduce the same experiment for the irregular LDPC in (3) and (4) for ϵ = 0.47.
Figure 5: For the regular (3, 6) ensemble and ϵ^BP ≈ 0.4294, in (a) we plot the expected evolution of r_1(τ) and r_2(τ) for n → ∞.
In (b), we include the computed evolution of l_avg(τ) for this case. 3.1 Analysis in the finite-length regime In the finite-length regime, the TEP decoder emerges as a powerful decoding algorithm. At a complexity similar to BP, i.e. of order O(n), it is able to further improve the BP solution thanks to a more accurate estimate of the marginal for each bit. We illustrate the TEP decoder performance for some regular and irregular finite-length LDPC codes. We first consider a rate-1/2 regular (3, 6) LDPC code. This ensemble has no asymptotic error floor [15], and we plot the word error rate obtained with the TEP and the BP decoders for different code lengths in Figure 6(a). In Figure 6(b), we include the results for the irregular DD in (3) and (4), where we can see that in all cases BP and TEP converge to the same error floor but, as in the previous examples, the TEP decoder provides significant gains in the waterfall region, and these gains are more significant for shorter codes.
Figure 6: TEP (solid line) and BP (dashed line) decoding performance for a regular (3, 6) LDPC code in (a), and the irregular LDPC in (3) and (4) in (b), with code lengths n = 2^9 (◦), n = 2^10 (□), n = 2^11 (×) and n = 2^12 (▷).
The expected graph evolution during TEP decoding in [12], which provides the average presence in the graph of degree-one and degree-two check nodes as the decoder proceeds, can be used to derive a coarse estimate of the TEP decoder probability of error for a given LDPC ensemble, similar to (1) for the BP decoder. Using the regular (3, 6) code as an example, in Figure 5(a) we plot the solution for r_1(τ) in the case n → ∞. Let τ* be the time at which the decoder gets stuck, i.e. r_1(τ*) + r_2(τ*) = 0.
To avoid confusion, in the following we explicitly include the dependence on n and ϵ and write r1(τ, n, ϵ). In Figure 7(a), we plot the evolution of r1(τ, n, ϵBP) with respect to e(τ) for a (3, 6) regular code at ϵ = ϵBP = ϵTEP. The code lengths considered are n = 2^12 (+), n = 2^13 (◦), n = 2^14 (□), n = 2^15 (⋄), n = 2^16 (×) and n = 2^17 (•). For finite-length values, we observe that r1(τ*, n, ϵBP) is not zero and, indeed, a closer look shows that the following approximation is reasonably tight:

r1(τ*, n, ϵTEP) ≈ γTEP n^(−1),   (6)

where we compute γTEP from the ensemble. For the (3, 6) regular case, we obtain γTEP ≈ 0.3198 [12]. The idea behind estimating the TEP decoder performance at ϵ = ϵBP + ∆ϵ is to assume that any particular realization will succeed almost surely as long as the fraction of degree-one check nodes at τ* is positive. For ϵ = ϵBP + ∆ϵ, we can approximate r1(τ*, n, ϵ) as follows:

r1(τ*, n, ϵ) = (∂r1(τ, n, ϵ)/∂ϵ)|_{τ=τ*, ϵ=ϵTEP} ∆ϵ + γTEP n^(−1).   (7)

In [1, 15], it is shown that simulated trajectories for the evolution of degree-one check nodes under BP are asymptotically Gaussian distributed, and this is observed for the TEP decoder as well. Furthermore, the variance is of order δ(τ)/n, where δ(τ) depends on the ensemble and the decoder [1]. To estimate the TEP decoder error rate, we compute the probability that the fraction of degree-one check nodes at τ* is positive. Since it is distributed as N(r1(τ*, n, ϵTEP), δ(τ)/n), we get

E_LDPC[λ(x), ρ(x), n][P_W^TEP(C, ϵ)] ≈ 1 − Q( ((∂r1(τ, n, ϵ)/∂ϵ)|_{τ=τ*, ϵ=ϵTEP} ∆ϵ + γTEP n^(−1)) / √(δ(τ*)/n) )
 = Q( √n (ϵTEP − ϵ)/αTEP + γTEP/√(n δ(τ*)) ),   (8)

where

αTEP = √(δ(τ*)) ((∂r1(τ, n, ϵ)/∂ϵ)|_{τ=τ*, ϵ=ϵTEP})^(−1).   (9)

Finally, note that, since for n → ∞ the TEP and BP decoders converge to the same solution, we can approximate αTEP ≈ αBP.
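The scaling estimate (8) is easy to evaluate numerically. The sketch below plugs in the constants quoted in the text for the (3, 6) regular ensemble (αTEP ≈ αBP = 0.56036, δ(τ*) ≈ 0.0526, γTEP ≈ 0.3198); the function name is ours, and Q is the standard Gaussian tail.

```python
import math

def q_function(x):
    """Standard Gaussian tail: Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tep_word_error_estimate(n, eps, eps_tep=0.4294, alpha=0.56036,
                            delta_star=0.0526, gamma=0.3198):
    """Evaluate the scaling estimate (8) for the regular (3, 6) ensemble:
    WER ~ Q( sqrt(n) (eps_tep - eps)/alpha + gamma / sqrt(n * delta_star) )."""
    arg = (math.sqrt(n) * (eps_tep - eps) / alpha
           + gamma / math.sqrt(n * delta_star))
    return q_function(arg)
```

Below threshold, increasing n drives the Q-function argument up and the predicted word error rate down, reproducing the sharpening waterfall seen in the figures.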
Besides, we have empirically observed that the variances of the trajectories under BP and TEP decoding are quite similar, so, for simplicity, we set δ(τ*) in (8) equal to its BP value δ(τ*)_BP, whose analytic solution can be found in [16, 1]. Hence, we use the TEP decoder expected evolution only to estimate the parameter γTEP in (8). In Figure 7(b), we compare the TEP performance for the regular (3, 6) ensemble (solid lines) with the approximation in (8) (dashed lines), using αTEP ≈ αBP = 0.56036, δ(τ*) ≈ 0.0526 and γTEP ≈ 0.3198. We have plotted the results for code lengths of n = 2^9 (◦), n = 2^10 (□), n = 2^11 (×) and n = 2^12 (▷). For the shortest code length, the model seems to slightly over-estimate the error probability, but this mismatch vanishes for the remaining cases, yielding a tight estimate.

Figure 7: In (a), we plot the solution for r1(τ, n, ϵTEP) with respect to e(τ) for a (3, 6) regular code at ϵ = ϵBP = ϵTEP, for code lengths from n = 2^12 to n = 2^17 and for n → ∞. In (b), we compare the TEP performance for the regular (3, 6) ensemble (solid lines) with the approximation in (8) (dashed lines), using αTEP ≈ αBP = 0.56036, δ(τ*) ≈ 0.0526 and γTEP ≈ 0.3198, for code lengths n = 2^9 (◦), n = 2^10 (□), n = 2^11 (×) and n = 2^12 (▷).

4 Conclusions

In this paper, we consider a tree structure for approximate inference in sparse graphical models using the EP algorithm. We have shown that, for finite-length LDPC sparse graphs, the accuracy of the marginal estimation with the proposed method significantly outperforms the BP estimate for the same graph. As a consequence, the decoding error rates are clearly improved.
This result is remarkable in itself, as BP was considered the gold standard for LDPC decoding, and it was assumed that the long-range cycles and sparse nature of these factor graphs did not lend themselves to the application of more accurate approximate inference algorithms designed for dense graphs with short-range cycles. Additionally, the application to LDPC decoding showed us a different way of learning the tree structure that might be applicable to general factors.

5 Acknowledgments

This work was partially funded by the Spanish government (Ministerio de Educación y Ciencia, TEC2009-14504-C02-01,02, Consolider-Ingenio 2010 CSD2008-00010), Universidad Carlos III (CCG10-UC3M/TIC-5304) and the European Union (FEDER).

References

[1] Abdelaziz Amraoui, Andrea Montanari, Tom Richardson, and Rüdiger Urbanke. Finite-length scaling for iteratively decoded LDPC ensembles. IEEE Transactions on Information Theory, 55(2):473–498, 2009.
[2] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, USA, 1991.
[3] Michael Luby, Michael Mitzenmacher, Amin Shokrollahi, Daniel Spielman, and Volker Stemann. Practical loss-resilient codes. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 150–159, 1997.
[4] Michael Luby, Michael Mitzenmacher, Amin Shokrollahi, Daniel Spielman, and Volker Stemann. Efficient erasure correcting codes. IEEE Transactions on Information Theory, 47(2):569–584, Feb. 2001.
[5] David J. C. MacKay. Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory, 45(2):399–431, 1999.
[6] David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[7] David J. C. MacKay and Radford M. Neal. Near Shannon limit performance of low density parity check codes. Electronics Letters, 32:1645–1646, 1996.
[8] T. Minka. Power EP. Technical report, MSR-TR-2004-149, 2004. http://research.microsoft.com/˜ minka/papers/.
[9] Thomas Minka and Yuan Qi. Tree-structured approximations by expectation propagation. In Proceedings of the Neural Information Processing Systems Conference (NIPS), 2003.
[10] Thomas P. Minka. Expectation Propagation for approximate Bayesian inference. In Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence (UAI 2001), pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
[11] Pablo M. Olmos, Juan José Murillo-Fuentes, and Fernando Pérez-Cruz. Tree-structure expectation propagation for decoding LDPC codes over binary erasure channels. In 2010 IEEE International Symposium on Information Theory (ISIT), Austin, Texas, 2010.
[12] P. M. Olmos, J. J. Murillo-Fuentes, and F. Pérez-Cruz. Tree-structure expectation propagation for LDPC decoding in erasure channels. Submitted to IEEE Transactions on Information Theory, 2011.
[13] P. M. Olmos, J. J. Murillo-Fuentes, and F. Pérez-Cruz. Tree-structured expectation propagation for decoding finite-length LDPC codes. IEEE Communications Letters, 15(2):235–237, Feb. 2011.
[14] P. Oswald and A. Shokrollahi. Capacity-achieving sequences for the erasure channel. IEEE Transactions on Information Theory, 48(12):3017–3028, Dec. 2002.
[15] Tom Richardson and Ruediger Urbanke. Modern Coding Theory. Cambridge University Press, Mar. 2008.
[16] T. Nozaki, K. Kasai, and K. Sakaniwa. Analytical solution of covariance evolution for irregular LDPC codes. e-prints, November 2010.
[17] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on (hyper)trees: Message-passing and linear-programming approaches. IEEE Transactions on Information Theory, 51(11):3697–3717, November 2005.
[18] Martin J. Wainwright and Michael I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, 2008.
[19] W. Wiegerinck and T. Heskes. Fractional belief propagation. In S. Becker, S. Thrun, and K.
Obermayer, editors, Advances in Neural Information Processing Systems 15, Cambridge, MA, December 2002. MIT Press.
[20] M. Welling, T. Minka, and Y. W. Teh. Structured region graphs: Morphing EP into GBP. In UAI, 2005.
[21] Nicholas C. Wormald. Differential equations for random processes and random graphs. Annals of Applied Probability, 5(4):1217–1235, 1995.
[22] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, July 2005.
PAC-Bayesian Analysis of Contextual Bandits

Yevgeny Seldin (1,4), Peter Auer (2), François Laviolette (3), John Shawe-Taylor (4), Ronald Ortner (2)
(1) Max Planck Institute for Intelligent Systems, Tübingen, Germany
(2) Chair for Information Technology, Montanuniversität Leoben, Austria
(3) Département d'informatique, Université Laval, Québec, Canada
(4) Department of Computer Science, University College London, UK
seldin@tuebingen.mpg.de, {auer,ronald.ortner}@unileoben.ac.at, francois.laviolette@ift.ulaval.ca, jst@cs.ucl.ac.uk

Abstract

We derive an instantaneous (per-round) data-dependent regret bound for stochastic multiarmed bandits with side information (also known as contextual bandits). The scaling of our regret bound with the number of states (contexts) N goes as √(N I_{ρt}(S; A)), where I_{ρt}(S; A) is the mutual information between states and actions (the side information) used by the algorithm at round t. If the algorithm uses all the side information, the regret bound scales as √(N ln K), where K is the number of actions (arms). However, if the side information I_{ρt}(S; A) is not fully used, the regret bound is significantly tighter. In the extreme case, when I_{ρt}(S; A) = 0, the dependence on the number of states reduces from linear to logarithmic. Our analysis allows us to provide the algorithm with a large amount of side information, let the algorithm decide which side information is relevant for the task, and penalize the algorithm only for the side information that it uses de facto. We also present an algorithm for multiarmed bandits with side information with O(K) computational complexity per game round.

1 Introduction

Multiarmed bandits with side information are an elegant mathematical model for many real-life interactive systems, such as personalized online advertising, personalized medical treatment, and so on. This model is also known as contextual bandits or associative bandits (Kaelbling, 1994, Strehl et al., 2006, Langford and Zhang, 2007, Beygelzimer et al., 2011).
In multiarmed bandits with side information, the learner repeatedly observes states (side information) {s1, s2, . . .} (for example, symptoms of a patient) and has to perform actions (for example, prescribe drugs) such that the expected regret is minimized. The regret is usually measured by the difference between the reward that could be achieved by the best (unknown) fixed policy (for example, the number of patients that would be cured if we knew the best drug for each set of symptoms) and the reward obtained by the algorithm (the number of patients that were actually cured). Most of the existing analyses of multiarmed bandits with side information have focused on the adversarial (worst-case) model, where the sequence of rewards associated with each state-action pair is chosen by an adversary. However, many problems in real life are not adversarial. We derive a data-dependent analysis for stochastic multiarmed bandits with side information. In the stochastic setting, the rewards for each state-action pair are drawn from a fixed unknown distribution. The sequence of states is also drawn from a fixed unknown distribution. We restrict ourselves to problems with a finite number of states N and a finite number of actions K and leave generalization to continuous state and action spaces to future work. We also do not assume any structure on the state space. Thus, for us a state is just a number between 1 and N. For example, in online advertising the state can be the country from which a web page is accessed. The result presented in this paper exhibits an adaptive dependency on the side information (state identity) that is actually used by the algorithm. This allows us to provide the algorithm with a large amount of side information and let the algorithm decide which of this side information is actually relevant to the task.
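The stochastic setting just described (states drawn i.i.d. from a fixed unknown distribution, rewards drawn from a fixed unknown distribution per state-action pair) can be sketched as a toy environment. The class below is our own illustrative construction, not part of the paper; it uses Bernoulli rewards for concreteness.

```python
import random

class StochasticContextualBandit:
    """Toy stochastic contextual bandit environment: a state is drawn i.i.d.
    from state_probs each round, and pulling action a in state s returns a
    Bernoulli reward with mean reward_means[a][s] (a K x N table)."""

    def __init__(self, reward_means, state_probs, seed=0):
        self.reward_means = reward_means
        self.state_probs = state_probs
        self.rng = random.Random(seed)

    def draw_state(self):
        states = range(len(self.state_probs))
        return self.rng.choices(states, weights=self.state_probs)[0]

    def pull(self, action, state):
        return 1.0 if self.rng.random() < self.reward_means[action][state] else 0.0
```

A learner interacts with such an environment by alternating `draw_state` and `pull`, observing only the reward of the action it actually played.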
For example, in online advertising we can increase the state resolution and provide the algorithm with the town from which the web page was accessed, but if this refined state information is not used by the algorithm, the regret bound does not deteriorate. This can be contrasted with existing analyses of adversarial multiarmed bandits, where the regret bound depends on a predefined complexity of the underlying expert class (Beygelzimer et al., 2011). Thus, the existing analysis of adversarial multiarmed bandits would either become looser if we add more side information or a priori limit the usage of the side information through its internal structure. (We note that through the relation between PAC-Bayesian analysis and the analysis of adversarial online learning described in Banerjee (2006), it might be possible to extend our analysis to the adversarial setting, but we leave this research direction to future work.) The idea of regularization by relevant mutual information goes back to the Information Bottleneck principle in supervised and unsupervised learning (Tishby et al., 1999). Tishby and Polani (2010) further suggested measuring the complexity of a policy in reinforcement learning by the mutual information between states and actions used by the policy. We note, however, that our starting point is the regret bound, and we derive the regularization term from our analysis without introducing it a priori. The analysis also provides time- and data-dependent weighting of the regularization term. Our results are based on PAC-Bayesian analysis (Shawe-Taylor and Williamson, 1997, Shawe-Taylor et al., 1998, McAllester, 1998, Seeger, 2002), which was developed for supervised learning within the PAC (Probably Approximately Correct) learning framework (Valiant, 1984). In PAC-Bayesian analysis the complexity of a model is defined by a user-selected prior over a hypothesis space.
Unlike in VC-dimension-based approaches and their successors, where the complexity is defined for a hypothesis class, in PAC-Bayesian analysis the complexity is defined for individual hypotheses. The analysis provides an explicit trade-off between individual model complexity and empirical performance, and a high-probability guarantee on the expected performance. An important distinction between supervised learning and problems with limited feedback, such as multiarmed bandits and reinforcement learning more generally, is the fact that in supervised learning the training set is given, whereas in reinforcement learning the training set is generated by the learner as it plays the game. In supervised learning every hypothesis in a hypothesis class can be evaluated on all the samples, whereas in reinforcement learning rewards of one action cannot be used to evaluate another action. Recently, Seldin et al. (2011b,a) generalized PAC-Bayesian analysis to martingales and suggested a way to apply it under limited feedback. Here, we apply this generalization to multiarmed bandits with side information. The remainder of the paper is organized as follows. We start with definitions in Section 2 and provide our main results in Section 3, which include an instantaneous regret bound and a new algorithm for stochastic multiarmed bandits with side information. In Section 4 we present an experiment that illustrates our theoretical results. Then, we dive into the proof of our main results in Section 5 and discuss the paper in Section 6.

2 Definitions

In this section we provide all essential definitions for our main results in the following section. We start with the definition of stochastic multiarmed bandits with side information. Let S be a set of |S| = N states and let A be a set of |A| = K actions, such that any action can be performed in any state. Let s ∈ S denote the states and a ∈ A denote the actions. Let R(a, s) be the expected reward for performing action a in state s.
At each round t of the game, the learner is presented a state St drawn i.i.d. according to an unknown distribution p(s). The learner draws an action At according to its choice of a distribution (policy) πt(a|s) and obtains a stochastic reward Rt with expected value R(At, St). Let {S1, S2, . . .} denote the sequence of observed states, {π1, π2, . . .} the sequence of policies played, {A1, A2, . . .} the sequence of actions played, and {R1, R2, . . .} the sequence of observed rewards. Let Tt = {{S1, . . . , St}, {π1, . . . , πt}, {A1, . . . , At}, {R1, . . . , Rt}} denote the history of the game up to time t. Assume that πt(a|s) > 0 for all t, a, and s. For t ≥ 1, a ∈ {1, . . . , K}, and the sequence of observed states {S1, . . . , St}, define a set of random variables R_t^{a,St}:

R_t^{a,St} = Rt/πt(a|St) if At = a, and 0 otherwise.

(The variables R_t^{a,s} are defined only for the observed state s = St.) Note that whenever defined, E[R_t^{a,St} | Tt−1, St] = R(a, St). The definition of R_t^{a,s} is generally known as importance-weighted sampling (Sutton and Barto, 1998). Importance-weighted sampling is required for the application of PAC-Bayesian analysis, as will be shown in the technical part of the paper. Define nt(s) = Σ_{τ=1}^t I{Sτ=s} as the number of times state s appeared up to time t (I is the indicator function). We define the empirical rewards of state-action pairs as:

R̂t(a, s) = (Σ_{τ=1,...,t: Sτ=s} R_τ^{a,s}) / nt(s) if nt(s) > 0, and 0 otherwise.

Note that whenever nt(s) > 0 we have E[R̂t(a, s)] = R(a, s). For every state s we define the "best" action in that state as a*(s) = argmax_a R(a, s) (if there are multiple "best" actions, one of them is chosen arbitrarily). We then define the expected and empirical regret for performing any other action a in state s as:

∆(a, s) = R(a*(s), s) − R(a, s),  ∆̂t(a, s) = R̂t(a*(s), s) − R̂t(a, s).

Let p̂t(s) = nt(s)/t be the empirical distribution over states observed up to time t.
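The unbiasedness of the importance-weighted variables, E[R_t^{a,St} | Tt−1, St] = R(a, St), is easy to verify by simulation. The sketch below (our own naming, one fixed state, Bernoulli rewards) repeatedly plays a fixed policy and averages the importance-weighted rewards for each action.

```python
import random

def importance_weighted_estimates(policy, true_means, rng, rounds=100000):
    """Monte-Carlo check that 1{A_t = a} R_t / pi_t(a|s) is an unbiased
    estimate of R(a, s) for every action, including rarely played ones.
    `policy` is pi_t(.|s) for one fixed state s."""
    K = len(policy)
    totals = [0.0] * K
    for _ in range(rounds):
        a = rng.choices(range(K), weights=policy)[0]
        r = 1.0 if rng.random() < true_means[a] else 0.0
        totals[a] += r / policy[a]        # importance weight 1/pi(a|s)
    return [tot / rounds for tot in totals]
```

Each action's estimate converges to its true mean even though the action is only played a fraction of the rounds; the price is a variance that grows as the playing probability shrinks, which is exactly what the variance bounds in Section 5 control.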
For any policy ρ(a|s) we define the empirical reward, empirical regret, and expected regret of the policy as: R̂t(ρ) = Σ_s p̂t(s) Σ_a ρ(a|s) R̂t(a, s), ∆̂t(ρ) = Σ_s p̂t(s) Σ_a ρ(a|s) ∆̂t(a, s), and ∆(ρ) = Σ_s p(s) Σ_a ρ(a|s) ∆(a, s). We define the marginal distribution over actions that corresponds to a policy ρ(a|s) and the uniform distribution over S as ρ̄(a) = (1/N) Σ_s ρ(a|s), and the mutual information between actions and states corresponding to the policy ρ(a|s) and the uniform distribution over S as

Iρ(S; A) = (1/N) Σ_{s,a} ρ(a|s) ln( ρ(a|s) / ρ̄(a) ).

For the proof of our main result, and also in order to explain the experiments, we have to define a hypothesis space for our problem. This definition is not used in the statement of the main result. Let H be a hypothesis space, such that each member h ∈ H is a deterministic mapping from S to A. Denote by a = h(s) the action assigned by hypothesis h to state s. It is easy to see that the size of the hypothesis space is |H| = K^N. Denote by R(h) = Σ_{s∈S} p(s) R(h(s), s) the expected reward of a hypothesis h. Define:

R̂t(h) = (1/t) Σ_{τ=1}^t R_τ^{h(Sτ),Sτ}.

Note that E[R̂t(h)] = R(h). Let h* = argmax_{h∈H} R(h) be the "best" hypothesis (the one that chooses the "best" action in each state). (If there are multiple hypotheses achieving maximal reward, pick any of them.) Define:

∆(h) = R(h*) − R(h),  ∆̂t(h) = R̂t(h*) − R̂t(h).

Any policy ρ(a|s) defines a distribution over H: we can draw an action a for each state s according to ρ(a|s) and thus obtain a hypothesis h ∈ H. We use ρ(h) to denote the respective probability of drawing h. For a policy ρ we define ∆(ρ) = E_{ρ(h)}[∆(h)] and ∆̂t(ρ) = E_{ρ(h)}[∆̂t(h)]. By marginalization, these definitions are consistent with our preceding definitions of ∆(ρ) and ∆̂t(ρ). Finally, let n_h(a) = Σ_{s=1}^N I{h(s)=a} be the number of states in which action a is played by the hypothesis h.
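The mutual information Iρ(S; A) defined above is straightforward to compute directly for small N and K. The helper below is our own illustrative implementation; it takes the conditionals ρ(a|s) as an N x K table and uses the uniform state marginal, as in the definition.

```python
import math

def policy_mutual_information(rho):
    """I_rho(S;A) = (1/N) sum_{s,a} rho(a|s) ln(rho(a|s)/rho_bar(a)) for a
    uniform distribution over states; rho is an N x K list of conditionals."""
    N, K = len(rho), len(rho[0])
    rho_bar = [sum(rho[s][a] for s in range(N)) / N for a in range(K)]
    mi = 0.0
    for s in range(N):
        for a in range(K):
            if rho[s][a] > 0.0:
                mi += rho[s][a] * math.log(rho[s][a] / rho_bar[a]) / N
    return mi
```

A state-independent policy gives Iρ(S; A) = 0, while a deterministic policy that plays a different action in every state attains the maximum ln K, matching the two extremes discussed in the abstract.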
Let A_h = {n_h(a)/N}_{a∈A} be the normalized cardinality profile (histogram) over the actions played by hypothesis h (with respect to the uniform distribution over S). Let H(A_h) = −Σ_a (n_h(a)/N) ln(n_h(a)/N) be the entropy of this cardinality profile. In other words, H(A_h) is the entropy of an action choice of hypothesis h (with respect to the uniform distribution over S). Note that the optimal policy ρ*(a|s) (the one that selects the "best" action in each state) is deterministic, and we have I_{ρ*}(S; A) = H(A_{h*}).

3 Main Results

Our main result is a data- and complexity-dependent regret bound for a general class of prediction strategies of a smoothed exponential form. Let ρt(a) be an arbitrary distribution over actions, let

ρt^exp(a|s) = ρt(a) e^{γt R̂t(a,s)} / Z(ρt^exp, s),   (1)

where Z(ρt^exp, s) = Σ_a ρt(a) e^{γt R̂t(a,s)} is a normalization factor, and let

ρ̃t^exp(a|s) = (1 − K εt+1) ρt^exp(a|s) + εt+1   (2)

be a smoothed exponential policy. The following theorem provides a regret bound for playing ρ̃t^exp at round t + 1 of the game. For generality, we assume that rounds 1, . . . , t were played according to arbitrary policies π1, . . . , πt.

Theorem 1. Assume that in game rounds 1, . . . , t policies {π1, . . . , πt} were played and assume that min_{a,s} πt(a|s) ≥ εt for an arbitrary εt that is independent of Tt. Let ρt(a) be an arbitrary distribution over A that can depend on Tt and satisfies min_a ρt(a) ≥ ϵt. Let c > 1 be an arbitrary number that is independent of Tt. Then, with probability greater than 1 − δ over Tt, simultaneously for all policies ρ̃t^exp defined by (2) that satisfy

N I_{ρt^exp}(S; A) + K(ln N + ln K) + ln(2mt/δ) ≤ 2(e − 2) t εt / c²   (3)

we have:

∆(ρ̃t^exp) ≤ (1 + c) √( 2(e − 2)(N I_{ρt^exp}(S; A) + K(ln N + ln K) + ln(2mt/δ)) / (t εt) ) + ln(1/ϵt+1)/γt + K εt+1,   (4)

where mt = ln(√((e − 2) t / ln(2/δ))) / ln(c), and for all ρt^exp that do not satisfy (3), with the same probability:

∆(ρ̃t^exp) ≤ 2(N I_{ρt^exp}(S; A) + K(ln N + ln K) + ln(2mt/δ)) / (t εt) + ln(1/ϵt+1)/γt + K εt+1.
Note that the mutual information in Theorem 1 is calculated with respect to ρt^exp and not ρ̃t^exp. Theorem 1 allows us to tune the learning rate γt based on the sample. It also provides an instantaneous regret bound for any algorithm that plays the policies {ρ̃1^exp, ρ̃2^exp, . . .} throughout the game. In order to obtain such a bound, we just have to take a decreasing sequence {ε1, ε2, . . .} and substitute δ in Theorem 1 with δt = δ/(t(t+1)). Then, by the union bound, the result holds with probability greater than 1 − δ for all rounds of the game simultaneously. This leads to Algorithm 1 for stochastic multiarmed bandits with side information. Note that each round of the algorithm takes O(K) time. Theorem 1 is based on the following regret decomposition and the subsequent theorem and two lemmas that bound the three terms in the decomposition:

∆(ρ̃t^exp) = [∆(ρt^exp) − ∆̂t(ρt^exp)] + ∆̂t(ρt^exp) + [R(ρt^exp) − R(ρ̃t^exp)].   (5)

Theorem 2. Under the conditions of Theorem 1 on {π1, . . . , πt} and c, simultaneously for all policies ρ that satisfy (3), with probability greater than 1 − δ:

|∆(ρ) − ∆̂t(ρ)| ≤ (1 + c) √( 2(e − 2)(N Iρ(S; A) + K(ln N + ln K) + ln(2mt/δ)) / (t εt) ),   (6)

and for all ρ that do not satisfy (3), with the same probability:

|∆(ρ) − ∆̂t(ρ)| ≤ 2(N Iρ(S; A) + K(ln N + ln K) + ln(2mt/δ)) / (t εt).

Algorithm 1: Algorithm for stochastic contextual bandits. (See text for definitions of εt and γt.)
Input: N, K
R̂(a, s) ← 0 for all a, s (these are cumulative [unnormalized] rewards)
ρ(a) ← 1/K for all a
n(s) ← 0 for all s
t ← 1
while not terminated do
  Observe state St.
  if εt ≥ 1/K or n(St) = 0 then
    ρ(a|St) ← ρ(a) for all a
  else
    ρ(a|St) ← (1 − K εt) · ρ(a) e^{γt R̂(a,St)/n(St)} / Σ_{a'} ρ(a') e^{γt R̂(a',St)/n(St)} + εt for all a
  ρ(a) ← ((N − 1)/N) ρ(a) + (1/N) ρ(a|St) for all a
  Draw action At according to ρ(a|St) and play it.
  Observe reward Rt.
  n(St) ← n(St) + 1
  R̂(At, St) ← R̂(At, St) + Rt/ρ(At|St)
  t ← t + 1
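A minimal Python sketch of Algorithm 1 is given below. It assumes a uniform state distribution and Bernoulli rewards for simplicity; the schedules εt = (Kt)^(−1/3) and γt = √t are simple placeholder choices (the text discusses tuning γt from the bound), and the max-subtraction inside the exponential-weights step is a standard numerical stabilization, not part of the pseudocode.

```python
import math
import random

def contextual_exp3(R, T, seed=0):
    """Sketch of Algorithm 1.  R[s][a] is the Bernoulli mean reward of action
    a in state s; states are drawn uniformly here.  Returns average reward."""
    rng = random.Random(seed)
    N, K = len(R), len(R[0])
    Rhat = [[0.0] * K for _ in range(N)]   # cumulative importance-weighted rewards
    n = [0] * N                            # state visit counts
    rho = [1.0 / K] * K                    # running-average marginal over actions
    total = 0.0
    for t in range(1, T + 1):
        s = rng.randrange(N)
        eps = (K * t) ** (-1.0 / 3.0)      # exploration floor eps_t
        gamma = math.sqrt(t)               # placeholder learning rate gamma_t
        if eps >= 1.0 / K or n[s] == 0:
            pi = rho[:]
        else:
            q = [Rhat[s][a] / n[s] for a in range(K)]
            m = max(q)                     # subtract max for numerical stability
            w = [rho[a] * math.exp(gamma * (qa - m)) for a, qa in enumerate(q)]
            z = sum(w)
            pi = [(1.0 - K * eps) * wa / z + eps for wa in w]
        rho = [(N - 1.0) / N * rho[a] + pi[a] / N for a in range(K)]
        a = rng.choices(range(K), weights=pi)[0]
        r = 1.0 if rng.random() < R[s][a] else 0.0
        Rhat[s][a] += r / pi[a]            # importance-weighted update
        n[s] += 1
        total += r
    return total / T
```

Each round touches only the K entries of the visited state, which is the O(K) per-round cost noted in the text.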
Note that Theorem 2 holds for all possible ρ's, including those that do not have an exponential form.

Lemma 1. For any distribution ρt^exp of the form (1), where ρt(a) ≥ ϵ for all a, we have:

∆̂t(ρt^exp) ≤ ln(1/ϵ)/γt.

Lemma 2. Let ρ̃ be an ε-smoothed version of a policy ρ, such that ρ̃(a|s) = (1 − Kε) ρ(a|s) + ε. Then R(ρ) − R(ρ̃) ≤ Kε.

The proof of Theorem 2 is provided in Section 5 and the proofs of Lemmas 1 and 2 are provided in the supplementary material.

Comments on Theorem 1. Theorem 1 exhibits what we were looking for: the regret of a policy ρ̃t^exp depends on the trade-off between its complexity, N I_{ρt^exp}(S; A), and the empirical regret, which is bounded by (1/γt) ln(1/ϵt+1). We note that 0 ≤ I_{ρt}(S; A) ≤ ln K; hence, the result is interesting when N ≫ K, since otherwise the K ln K term in the bound neutralizes the advantage we get from having small mutual information values. The assumption that N ≫ K is reasonable for many applications. We believe that the dependence of the first term of the regret bound (4) on εt is an artifact of our crude upper bound on the variance of the sampling process (given in Lemma 3 in the proof of Theorem 2) and that this term should not be in the bound. This is supported by an empirical study of stochastic multiarmed bandits (Seldin et al., 2011a). With the current bound the best choice for εt is εt = (Kt)^(−1/3), which, by integration over the game rounds, yields O(K^(1/3) t^(2/3)) dependence of the cumulative regret on the number of arms and game rounds. However, if we manage to derive a tighter analysis and remove εt from the first term in (4), the best choice of εt will be εt = (Kt)^(−1/2), and the dependence of the cumulative regret on the number of arms and time horizon will improve to O((Kt)^(1/2)). One way to achieve this is to apply EXP3.P-style updates (Auer et al., 2002b); however, Seldin et al. (2011a) empirically show that in stochastic environments the EXP3 algorithm of Auer et al.
(2002b), which is closely related to Algorithm 1, has significantly better performance. Thus, it is desirable to derive a better analysis for the EXP3 algorithm in stochastic environments. We note that although the UCB algorithm for stochastic multiarmed bandits (Auer et al., 2002a) is asymptotically better than the EXP3 algorithm, it is not compatible with PAC-Bayesian analysis, and we are not aware of a way to derive a UCB-type algorithm and analysis for multiarmed bandits with side information whose dependence on the number of states would be better than O(N ln K). Seldin et al. (2011a) also demonstrate that empirically it takes a large number of rounds until the asymptotic advantage of UCB over EXP3 translates into a real advantage in practice. It is not trivial to minimize (4) with respect to γt analytically. Generally, higher values of γt decrease the second term of the bound, but also lead to more concentrated policies (conditional distributions) ρt^exp(a|s) and thus higher mutual information values I_{ρt^exp}(S; A). A simple way to address this trade-off is to set γt such that the contribution of the second term is as close to the contribution of the first term as possible. This can be approximated by taking the value of the mutual information from the previous round (or an approximation of the value of the mutual information from the previous round). More details on parameter setting for the algorithm are provided in the supplementary material. Comments on Algorithm 1. By regret decomposition (5) and Theorem 2, the regret at round t + 1 is minimized by a policy ρt(a|s) that minimizes a certain trade-off between the mutual information Iρ(S; A) and the empirical regret R̂t(ρ). This trade-off is analogous to the rate-distortion trade-off in information theory (Cover and Thomas, 1991).
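This rate-distortion view can be made concrete with a small offline fixed-point computation in the style of Blahut-Arimoto, alternating the conditional and marginal updates for a fixed reward table. The sketch below is our own illustration (uniform p(s), our naming); as discussed next in the text, running such full iterations online would be too expensive, which motivates the cheaper running-average approximation.

```python
import math

def blahut_arimoto(Rhat, gamma, iters=200):
    """Blahut-Arimoto-style fixed point for the conditional rho(a|s) and
    marginal rho(a), given empirical rewards Rhat[s][a] and uniform p(s).
    Returns (conditionals as an N x K list, marginal as a length-K list)."""
    N, K = len(Rhat), len(Rhat[0])
    bar = [1.0 / K] * K
    for _ in range(iters):
        cond = []
        for s in range(N):
            w = [bar[a] * math.exp(gamma * Rhat[s][a]) for a in range(K)]
            z = sum(w)
            cond.append([wa / z for wa in w])
        bar = [sum(cond[s][a] for s in range(N)) / N for a in range(K)]
    return cond, bar
```

When one action dominates in every state, the iterations drive the marginal onto that action and the mutual information toward zero, the low-complexity regime the regret bound rewards.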
Minimization of the rate-distortion trade-off is achieved by iterative updates of the following form, known as the Blahut-Arimoto (BA) algorithm:

ρt^BA(a|s) = ρt^BA(a) e^{γt R̂t(a,s)} / Σ_a ρt^BA(a) e^{γt R̂t(a,s)},  ρt^BA(a) = (1/N) Σ_s ρt^BA(a|s).

Running this type of iterations in our case would be prohibitively expensive, since they require iterating over all states s ∈ S at each round of the game. We approximate these iterations by approximating the marginal distribution over the actions by a running average:

ρ̃t+1^exp(a) = ((N − 1)/N) ρ̃t^exp(a) + (1/N) ρ̃t^exp(a|St).   (7)

Since ρt^exp(a|s) is bounded from zero by a decreasing sequence εt+1, the same automatically holds for ρ̃t+1^exp(a) (meaning that in Theorem 1 we have ϵt = εt). Note that Theorem 1 holds for any choice of ρt(a), including (7). We point out an interesting fact: ρt^exp(a) propagates information between different states, but Theorem 1 also holds for the uniform distribution ρ(a) = 1/K, which corresponds to applying the EXP3 algorithm in each state independently. If these independent multiarmed bandits independently converge to similar strategies, we still get a tighter regret bound. This happens because the corresponding subspace of the hypothesis space is significantly smaller than the total hypothesis space, which enables us to put a higher prior on it (Seldin and Tishby, 2010). Nevertheless, propagation of information between states via the distribution ρt^exp(a) helps to achieve even faster convergence of the regret, as we can see from the experiments in the next section. Comparison with state-of-the-art. We are not aware of other algorithms for stochastic multiarmed bandits with side information. The best known to us algorithm for adversarial multiarmed bandits with side information is EXP4.P by Beygelzimer et al. (2011). EXP4.P has O(√(Kt ln |H|)) regret and O(K|H|) complexity per game round.
In our case |H| = K^N, which means that EXP4.P would have O(√(KtN ln K)) regret and O(K^(N+1)) computational complexity. For hard problems, where all side information has to be used, our regret bound is inferior to the regret bound of Beygelzimer et al. (2011) due to the O(t^(2/3)) dependence on the number of game rounds. However, we believe that this can be improved by a more careful analysis of the existing algorithm. For simple problems, the dependence of our regret bound on the number of states is significantly better, up to the point that when the side information is irrelevant for the task we get O(√(K ln N)) dependence on the number of states versus O(√(N ln K)) in EXP4.P. For N ≫ K this leads to tighter regret bounds for small t even despite the "incorrect" dependence on t of our bound, and if we improve the analysis it will lead to tighter regret bounds for all t. As we already said, our algorithm is able to filter relevant information from large amounts of side information automatically, whereas in EXP4.P the usage of side information has to be restricted externally through the construction of the hypothesis class.

Figure 1: Behavior of: (a) cumulative regret ∆(t), (b) bound on instantaneous regret ∆(ρ̃t^exp), and (c) the approximation of the mutual information I_{ρt^exp}(S; A). "Baseline" in the first graph corresponds to playing N independent multiarmed bandits, one in each state. Each line in the graphs corresponds to an average over 10 repetitions of the experiment.

The second important advantage of our algorithm is the exponential improvement in computational complexity.
This is achieved by switching from the space of experts to the state-action space in all our calculations.

4 Experiments

We present an experiment on synthetic data that illustrates our results. We take N = 100, K = 20, a uniform distribution over states (p(s) = 0.01), and consider four settings, with H(A_{h*}) = ln(1) = 0, H(A_{h*}) = ln(3) ≈ 1, H(A_{h*}) = ln(7) ≈ 2, and H(A_{h*}) = ln(20) ≈ 3, respectively. In the first case, the same action is the best in all states (and hence H(A_{h*}) = 0 for the optimal hypothesis h*). In the second case, for the first 33 states the best action is number 1, for the next 33 states the best action is number 2, and for the remaining third of the states the best action is number 3 (thus, depending on the state, one of the three actions is the "best" and H(A_{h*}) = ln(3)). In the third case, there are seven groups of 14 states and each group has its own best action. In the last case, there are 20 groups of 5 states and each of the K = 20 actions is the best in exactly one of the 20 groups. For all states, the reward of the best action in a state has a Bernoulli distribution with bias 0.6, and the rewards of all other actions in that state have a Bernoulli distribution with bias 0.5. We run the experiment for T = 4,000,000 rounds and calculate the cumulative regret ∆(t) = Σ_{τ=1}^t ∆(ρ̃τ^exp) and the instantaneous regret bound given in (4). For computational efficiency, the mutual information I_{ρt^exp}(S; A) is approximated by a running average (see the supplementary material for details). As we can see from the graphs (see Figure 1), the algorithm exhibits sublinear cumulative regret (note the axes' scales). Furthermore, for simple problems (with small H(A_{h*})) the regret grows more slowly than for complex problems. "Baseline" in Figure 1(a) shows the performance of an algorithm with the same parameter values that runs N multiarmed bandits, one in each state, independently of the other states.
We see that for all problems except the hardest one our algorithm performs better than the baseline, and for the hardest problem it performs almost as well as the baseline. The regret bound in Figure 1.b provides meaningful values for the simplest problem after 1 million rounds (which is on average 500 samples per state-action pair) and after 4 million rounds for all the problems (the graph starts at t = 10,000). Our estimates of the mutual information I_{ρ_t^exp}(S; A) reflect H(A_{h*}) for the corresponding problems (for H(A_{h*}) = 0 it converges to zero, for H(A_{h*}) ≈ 1 it is approximately one, etc.).

5 Proof of Theorem 2

The proof of Theorem 2 is based on the PAC-Bayes-Bernstein inequality for martingales (Seldin et al., 2011b). Let KL(ρ‖µ) denote the KL divergence between two distributions (Cover and Thomas, 1991). Let {Z_1(h), ..., Z_n(h) : h ∈ H} be martingale difference sequences indexed by h with respect to the filtration σ(U_1), ..., σ(U_n), where U_i = {Z_1(h), ..., Z_i(h) : h ∈ H} is the subset of martingale difference variables up to index i and σ(U_i) is the σ-algebra generated by U_i. This means that E[Z_i(h) | σ(U_{i−1})] = 0, where Z_i(h) may depend on Z_j(h′) for all j < i and h′ ∈ H. There might also be interdependence between {Z_i(h) : h ∈ H}. Let M̂_i(h) = Σ_{j=1}^i Z_j(h) be
the corresponding martingales. Let V_i(h) = Σ_{j=1}^i E[Z_j(h)² | σ(U_{j−1})] be the cumulative variances of the martingales M̂_i(h). For a distribution ρ over H, define M̂_i(ρ) = E_{ρ(h)}[M̂_i(h)] and V_t(ρ) = E_{ρ(h)}[V_t(h)] as weighted averages of the martingales and their cumulative variances according to the distribution ρ.

Theorem 3 (PAC-Bayes-Bernstein Inequality). Assume that |Z_i(h)| ≤ b for all h with probability 1. Fix a prior distribution µ over H. Pick an arbitrary number c > 1. Then with probability greater than 1 − δ over U_n, simultaneously for all distributions ρ over H that satisfy

  √( (KL(ρ‖µ) + ln(2m/δ)) / ((e − 2) V_n(ρ)) ) ≤ 1/(cb)

we have

  |M̂_n(ρ)| ≤ (1 + c) √( (e − 2) V_n(ρ) (KL(ρ‖µ) + ln(2m/δ)) ),

where m = ln( √( (e − 2) n / ln(2/δ) ) ) / ln(c), and for all other ρ

  |M̂_n(ρ)| ≤ 2b (KL(ρ‖µ) + ln(2m/δ)).

Note that M_t(h) = t(∆(h) − ∆̂_t(h)) are martingales and their cumulative variances are

  V_t(h) = Σ_{τ=1}^t E[ ( [R_τ^{h*(S_τ), S_τ} − R_τ^{h(S_τ), S_τ}] − [R(h*) − R(h)] )² | T_{τ−1} ].

In order to apply Theorem 3 we have to derive an upper bound on V_t(ρ_t^exp),¹ a prior µ(h) over H, and calculate (or upper-bound) the KL divergence KL(ρ_t^exp ‖ µ). This is done in the following three lemmas.

Lemma 3. If {ε_1, ε_2, ...} is a decreasing sequence such that ε_t ≤ min_{a,s} π_t(a|s), then for all h: V_t(h) ≤ 2t/ε_t.

The proof of the lemma is provided in the supplementary material. Lemma 3 provides an immediate, but crude, uniform upper bound on V_t(h), which yields V_t(ρ_t^exp) ≤ 2t/ε_t. Since our algorithm concentrates on hypotheses h with small ∆(h), which, in turn, concentrate on the best action in each state, the variance V_t(h) for the corresponding h is expected to be of the order of 2Kt and not 2t/ε_t. However, we have not yet been able to prove that the probability ρ_t^exp(h) of the remaining hypotheses (those with large ∆(h)) gets sufficiently small (of order Kε_t), so that the weighted cumulative variance would be of order 2Kt. Nevertheless, this seems to hold in practice starting from relatively small values of t (Seldin et al., 2011a). Improving the upper bound on V_t(ρ_t^exp) will improve the regret bound, but for the moment we present the regret bound based on the crude upper bound V_t(ρ_t^exp) ≤ 2t/ε_t. The remaining two lemmas, which define a prior µ over H and bound KL(ρ‖µ), are due to Seldin and Tishby (2010).

Lemma 4. It is possible to define a distribution µ over H that satisfies:

  µ(h) ≥ e^{−N H(A_h) − K ln N − K ln K}.   (8)

Lemma 5.
For the distribution µ that satisfies (8) and any distribution ρ(a|s):

  KL(ρ‖µ) ≤ N I_ρ(S; A) + K ln N + K ln K.

Substitution of the upper bounds on V_t(ρ_t^exp) and KL(ρ_t^exp ‖ µ) into Theorem 3 yields Theorem 2.

6 Discussion

We presented a PAC-Bayesian analysis of stochastic multiarmed bandits with side information. Our analysis provides a data-dependent algorithm and a data-dependent regret analysis for this problem. The selection of task-relevant side information is delegated from the user to the algorithm. We also provide a general framework for deriving data-dependent algorithms and analyses for many other stochastic problems with limited feedback. The analysis of the variance of our algorithm still remains to be improved and will be addressed in future work.

¹Seldin et al. (2011b) show that V_n(ρ) can be replaced by an upper bound everywhere in Theorem 3.

Acknowledgments

We would like to thank all the people with whom we discussed this work and, in particular, Nicolò Cesa-Bianchi, Gábor Bartók, Elad Hazan, Csaba Szepesvári, Miroslav Dudík, Robert Schapire, John Langford, and the anonymous reviewers, whose comments helped us to improve the final version of this manuscript. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886, and by the European Community's Seventh Framework Programme (FP7/2007-2013), under grant agreement No. 231495. This publication only reflects the authors' views.

References

Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47, 2002a.

Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1), 2002b.

Arindam Banerjee. On Bayesian bounds. In Proceedings of the International Conference on Machine Learning (ICML), 2006.

Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert Schapire.
Contextual bandit algorithms with supervised learning guarantees. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.

Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.

Leslie Pack Kaelbling. Associative reinforcement learning: Functions in k-DNF. Machine Learning, 15, 1994.

John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems (NIPS), 2007.

David McAllester. Some PAC-Bayesian theorems. In Proceedings of the International Conference on Computational Learning Theory (COLT), 1998.

Matthias Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 2002.

Yevgeny Seldin and Naftali Tishby. PAC-Bayesian analysis of co-clustering and beyond. Journal of Machine Learning Research, 11, 2010.

Yevgeny Seldin, Nicolò Cesa-Bianchi, Peter Auer, François Laviolette, and John Shawe-Taylor. PAC-Bayes-Bernstein inequality for martingales and its application to multiarmed bandits. 2011a. In review. Preprint available at http://arxiv.org/abs/1110.6755.

Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. 2011b. In review. Preprint available at http://arxiv.org/abs/1110.6886.

John Shawe-Taylor and Robert C. Williamson. A PAC analysis of a Bayesian estimator. In Proceedings of the International Conference on Computational Learning Theory (COLT), 1997.

John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5), 1998.

Alexander L. Strehl, Chris Mesterharm, Michael L. Littman, and Haym Hirsh. Experience-efficient learning in associative bandit problems.
In Proceedings of the International Conference on Machine Learning (ICML), 2006.

Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

Naftali Tishby and Daniel Polani. Information theory of decisions and actions. In Vassilis Cutsuridis, Amir Hussain, John G. Taylor, and Daniel Polani, editors, Perception-Reason-Action Cycle: Models, Algorithms and Systems. Springer, 2010.

Naftali Tishby, Fernando Pereira, and William Bialek. The information bottleneck method. In Allerton Conference on Communication, Control and Computation, 1999.

Leslie G. Valiant. A theory of the learnable. Communications of the Association for Computing Machinery, 27(11), 1984.
2011
Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition

Jia Deng¹,², Sanjeev Satheesh¹, Alexander C. Berg³, Li Fei-Fei¹
Computer Science Department, Stanford University¹
Computer Science Department, Princeton University²
Computer Science Department, Stony Brook University³

Abstract

We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs. accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by Bengio et al.

1 Introduction

Classification problems with many classes arise in many important domains and pose significant computational challenges. One prominent example is recognizing tens of thousands of visual object categories, one of the grand challenges of computer vision. The large number of classes renders the standard one-versus-all multiclass approach too costly, as the complexity grows linearly with the number of classes for both training and testing, making it prohibitive for practical applications that require low latency or high throughput, e.g. those in robotics or in image retrieval. Classification with many classes has received increasing attention recently and most approaches appear to have converged to tree based models [2, 3, 9, 1]. In particular, Bengio et al. [1] propose a label tree model, which has been shown to achieve state of the art performance in testing.
In a label tree, each node is associated with a subset of class labels and a linear classifier that determines which branch to follow. In performing the classification task, a test example travels from the root of the tree to a leaf node associated with a single class label. Therefore for a well balanced tree, the time required for evaluation is reduced from O(DK) to O(D log K), where K is the number of classes and D is the feature dimensionality. The technique can be combined with an embedding technique, so that the evaluation cost can be further reduced to O(D̃ log K + DD̃), where D̃ ≪ D is the dimensionality of an embedded label space. Despite the success of label trees in addressing testing efficiency, the learning technique, critical to ensuring good testing accuracy and efficiency, has several limitations. Learning the tree structure (determining how to split the classes into subsets) involves first training one-vs-all classifiers for all K classes to obtain a confusion matrix, and then using spectral clustering to split the classes into disjoint subsets. First, learning one-vs-all classifiers is costly for a large number of classes. Second, the partitioning of classes does not allow overlap, which can make classification unnecessarily difficult. Third, the tree structure may be unbalanced, which can result in sub-optimal test efficiency. In this paper, we address these issues by observing that (1) determining the partition of classes and learning a classifier for each child can be performed jointly, and (2) allowing overlapping of class labels among children leads to an efficient optimization that also enables precise control of the accuracy vs. efficiency trade-off, which can in turn guarantee balanced trees. This leads to a novel label tree learning technique that is more efficient and effective. Specifically, we eliminate the one-vs-all training step while improving both efficiency and accuracy in testing.
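The O(DK) versus O(D log K) evaluation-cost gap can be made concrete with a back-of-the-envelope sketch. The sizes below are hypothetical, and the counting convention (one D-dimensional dot product per classifier evaluated) is ours:

```python
import math

def one_vs_all_cost(D, K):
    """Test-time cost of one-vs-all: one D-dimensional dot product per class."""
    return D * K

def label_tree_cost(D, K, Q):
    """Test-time cost of a balanced label tree with branching factor Q:
    Q dot products per level, and about ceil(log_Q K) levels to reach a leaf."""
    return D * Q * math.ceil(math.log(K, Q))

# Hypothetical sizes in the ballpark of large-scale image classification:
D, K, Q = 10_000, 1_000, 10
print(one_vs_all_cost(D, K))    # 10,000,000 dot-product operations' worth
print(label_tree_cost(D, K, Q)) # 300,000 — roughly a 33x reduction
```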
2 Related Work

Our approach is directly motivated by the label tree embedding technique proposed by Bengio et al. in [1], which is among the few approaches that address sublinear testing cost for multi-class classification problems with a large number of classes and has been shown to outperform alternative approaches including Filter Tree [2] and Conditional Probability Tree (CPT) [3]. Our contribution is a new technique to achieve more efficient and effective learning for label trees. For a comprehensive discussion on multi-class classification techniques, we refer the reader to [1]. Classifying a large number of object classes has received increasing attention in computer vision as datasets with many classes such as ImageNet [7] become available. One line of work is concerned with developing effective feature representations [13, 16, 15, 10] and achieving state of the art performances. Another direction of work explores methods for exploiting the structure between object classes. In particular, it has been observed that object classes can be organized in a tree-like structure both semantically and visually [9, 11, 6], making tree based approaches especially attractive. Our work follows this direction, focusing on effective learning methods for building tree models. Our framework of explicitly controlling accuracy or efficiency is connected to Weiss et al.'s work [14] on building a cascade of graphical models with increasing complexity for structured prediction. Our work differs in that we reduce the label space instead of the model space.

3 Label Tree and Label Tree Learning by Bengio et al.

Here we briefly review the label tree learning technique proposed by Bengio et al. and then discuss the limitations we attempt to address. A label tree is a tree T = (V, E) with nodes V and edges E. Each node r ∈ V is associated with a set of class labels κ(r) ⊆ {1, ..., K}. Let σ(r) ⊂ V be its set of children.
For each child c, there is a linear classifier w_c ∈ R^D, and we require that its label set is a subset of its parent's, that is, κ(c) ⊆ κ(r), ∀c ∈ σ(r). To make a prediction given an input x ∈ R^D, we use Algorithm 1. We travel from the root until we reach a leaf node, at each node following the child that has the largest classifier score. There is a slight difference from the algorithm in [1] in that the leaf node is not required to have only one class label. If there is more than one label, an arbitrary label from the set is predicted.

Algorithm 1 Predict the class of x given the root node r
  s ← r
  while σ(s) ≠ ∅ do
    s ← arg max_{c ∈ σ(s)} w_c^T x
  end while
  return an arbitrary k ∈ κ(s), or NULL if κ(s) = ∅

Learning the tree structure is a fundamentally hard problem because brute force search for the optimal combination of tree structure and classifier weights is intractable. Bengio et al. [1] instead propose to solve two subproblems: learning the tree structure and learning the classifier weights. To learn the tree structure, K one-versus-all classifiers are trained first to obtain a confusion matrix C ∈ R^{K×K} on a validation set. The class labels are then clustered into disjoint sets by spectral clustering with the confusion between classes as the affinity measure. This procedure is applied recursively to build a complete tree. Given the tree structure, all classifier weights are then learned jointly to optimize the misclassification loss of the tree. We first analyze the cost of learning by showing that training, with m examples, K classes and D-dimensional features, costs O(mDK). Assume optimistically that the optimization algorithm converges after only one pass of the data and that we use first order methods that cost O(D) at each iteration, with feature dimensionality D. Therefore learning one-versus-all classifiers costs O(mDK). Spectral clustering only depends on K and does not depend on D or m, and therefore its cost is negligible.
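The tree descent of Algorithm 1 can be sketched in a few lines; the Node class and its field names below are ours, not from [1]:

```python
import numpy as np

class Node:
    def __init__(self, labels, children=None, weights=None):
        self.labels = labels             # kappa(r): label set of this node
        self.children = children or []   # sigma(r): child nodes
        self.weights = weights           # |children| x D matrix, row c = w_c

def predict(root, x):
    """Descend from the root, at each node following the child with the
    largest classifier score w_c^T x; return a label of the reached leaf
    (an arbitrary one if the leaf holds several, None if it holds none)."""
    s = root
    while s.children:
        scores = s.weights @ x
        s = s.children[int(np.argmax(scores))]
    return next(iter(s.labels), None)
```

For example, a depth-1 tree with two leaf children and identity weight rows routes x = (2, 1) to the first child and x = (0, 3) to the second.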
In learning the classifier weights on the tree, each training example is affected by only the classifiers on its path, i.e. O(Q log K) classifiers, where Q ≪ K is the number of children of each node. Hence the training cost is O(mDQ log K). This analysis indicates that learning the K one-versus-all classifiers dominates the cost. This is undesirable in large scale learning because, with bounded time, accommodating a large number of classes entails using less expressive and lower dimensional features. Moreover, spectral clustering only produces disjoint subsets. It can be difficult to learn a classifier for disjoint subsets when examples of certain classes cannot be reliably classified to one subset. If such mistakes are made at a higher level of the tree, then it is impossible to recover later. Allowing overlap potentially yields more flexibility and avoids such errors. In addition, spectral clustering does not guarantee balanced clusters and thus cannot ensure a desired speedup. We seek a novel learning technique that overcomes these limitations.

4 New Label Tree Learning

To address these limitations, we start by considering simple and less expensive alternatives for generating the splits. For example, we can sub-sample the examples for one-vs-all training, or generate the splits randomly, or use a human-constructed semantic hierarchy (e.g. WordNet [8]). However, as shown in [1], improperly partitioning the classes can greatly reduce testing accuracy and efficiency. To preserve accuracy, it is important to split the classes such that they can be easily separated. To gain efficiency, it is important to have balanced splits. We therefore propose a new technique that jointly learns the splits and classifier weights. By tightly coupling the two, this approach eliminates the need for one-vs-all training and brings the total learning cost down to O(mDQ log K).
By allowing overlapping splits and explicitly modeling the accuracy and efficiency trade-off, this approach also improves testing accuracy and efficiency. Our approach processes one node of the tree at a time, starting with the root node. It partitions the classes into a fixed number of child nodes and learns the classifier weights for each of the children. It then repeats recursively for each child. In learning a tree model, accuracy and efficiency are inherently conflicting goals and some trade-off must be made. Therefore we pose the optimization problem as maximizing efficiency given a constraint on accuracy, i.e. requiring that the error rate cannot exceed a certain threshold. Alternatively one can also optimize accuracy given efficiency constraints. We will first describe the accuracy constrained optimization and then briefly discuss the efficiency constrained variant. In practice, one can choose between the two formulations depending on convenience. For the rest of this section, we first express all the desiderata in one single optimization problem (Sec. 4.1), including defining the optimization variables (classifier weights and partitions), objectives (efficiency) and constraints (accuracy). Then in Secs. 4.2 and 4.3 we show how to solve the main optimization by alternating between learning the classifier weights and determining the partitions. We then summarize the complete algorithm (Sec. 4.4) and conclude with an alternative formulation using efficiency constraints (Sec. 4.5).

4.1 Main optimization

Formally, let the current node r represent class labels κ(r) = {1, ..., K} and let Q be the specified number of children we wish to follow. The goal is to determine: (1) a partition matrix P ∈ {0, 1}^{Q×K} that represents the assignment of classes to the children, i.e.
P_{qk} = 1 if class label k appears in child q and P_{qk} = 0 otherwise; and (2) the classifier weights w ∈ R^{D×Q}, where column w_q holds the classifier weights for child q ∈ σ(r). We measure accuracy by examining whether an example is classified to the correct child, i.e. a child that includes its true class label. Let x ∈ R^D be a training example and y ∈ {1, ..., K} be its true label. Let q̂ = arg max_{q∈σ(r)} w_q^T x be the child that x follows. Given w, P, x, y, the classification loss at the current node r is then

  L(w, x, y, P) = 1 − P(q̂, y).   (1)

Note that the final prediction of the example is made at a leaf node further down the tree, if the child to follow is not already a leaf node. Therefore L is a lower bound of the actual loss. It is thus important to achieve a small L because it could be a bottleneck of the final accuracy. We measure efficiency by how fast the set of possible class labels shrinks. Efficiency is maximized when each child has a minimal number of class labels so that an unambiguous prediction can be made; otherwise we incur further cost for traveling down the tree. Given a test example, we define ambiguity as our efficiency measure, i.e. the size of the label set of the child that the example follows, relative to its parent's size. Specifically, given w and P, the ambiguity for an example x is

  A(w, x, P) = (1/K) Σ_{k=1}^K P(q̂, k).   (2)

Note that A ∈ [0, 1]. A perfectly balanced K-ary tree would result in an ambiguity of 1/K for all examples at each node. One important note is that the classification loss (accuracy) and ambiguity (efficiency) measures as defined in Eqn. 1 and Eqn. 2 are local to the current node being considered in greedily building the tree. They serve as proxies to the global accuracy and efficiency of the entire tree. For the rest of this paper, we will omit the "local" and "global" qualifications when it is clear from context. Let ϵ > 0 be the maximum classification loss we are willing to tolerate.
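Once q̂ is known, the local loss (1) and ambiguity (2) are one line each. A minimal sketch, with names and shapes of our choosing (labels 0-indexed for convenience):

```python
import numpy as np

def node_loss_and_ambiguity(w, P, x, y):
    """w: D x Q classifier weights, P: Q x K 0/1 partition matrix,
    x: D-dimensional example, y: true label in {0, ..., K-1}.
    Returns (Eqn. 1, Eqn. 2) evaluated at the current node."""
    q_hat = int(np.argmax(w.T @ x))   # child the example follows
    loss = 1.0 - P[q_hat, y]          # Eqn. (1): 1 unless child q_hat contains y
    ambiguity = P[q_hat].mean()       # Eqn. (2): fraction of the K labels kept
    return loss, ambiguity
```

For instance, with Q = 2 children splitting K = 4 classes evenly, an example routed to the child containing its label yields loss 0 and ambiguity 0.5.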
Given a training set (x_i, y_i), i = 1, ..., m, we seek to minimize the average ambiguity of all examples while keeping the classification loss below ϵ, which leads to the following optimization problem:

OP1. Optimizing efficiency with accuracy constraints.

  minimize_{w,P}  (1/m) Σ_{i=1}^m A(w, x_i, P)
  subject to      (1/m) Σ_{i=1}^m L(w, x_i, y_i, P) ≤ ϵ,
                  P ∈ {0, 1}^{Q×K}.

There are no further constraints on P other than that its entries are the integers 0 and 1. We do not require that the children cover all the classes in the parent. It is legal that a class in the parent is assigned to none of the children, in which case we give up on the training examples from that class. In doing so, we pay a price on accuracy, i.e. those examples will have a misclassification loss of 1. Therefore a partition P of all zeros is unlikely to be a good solution. We also allow overlap of label sets between children. If we cannot classify the examples from a class perfectly into one of the children, we allow them to go to more than one child. We pay a price on efficiency, since we make less progress in eliminating possible class labels. This is different from the disjoint label sets in [1]. Overlapping label sets give more flexibility and in fact lead to simpler optimization, as will become clear in Sec. 4.3. Directly solving OP1 is intractable. However, with proper relaxation, we can alternate between optimizing over w and over P, where each is a convex program.

4.2 Learning classifier weights w given partitions P

Observe that fixing P and optimizing over w is similar to learning a multi-class classifier, except for the overlapping classes. We relax the loss L by a convex surrogate L̃ similar to the hinge loss:

  L̃(w, x_i, y_i, P) = max{0, 1 + max_{q ∈ A_i, r ∈ B_i} (w_r^T x_i − w_q^T x_i)},

where A_i = {q | P_{q,y_i} = 1} and B_i = {r | P_{r,y_i} = 0}. Here A_i is the set of children that contain class y_i and B_i is the rest of the children.
The responses of the classifiers in A_i are encouraged to be bigger than those in B_i; otherwise the loss L̃ increases. It is easily verifiable that L̃ upper-bounds L. We then obtain the following convex optimization problem:

OP2. Optimizing over w given P.

  minimize_w  λ Σ_{q=1}^Q ‖w_q‖₂² + (1/m) Σ_{i=1}^m L̃(w, x_i, y_i, P)

Note that here the objective is no longer the ambiguity A. This is because the influence of w on A is typically very small. When the partition P is fixed, w can lower A by classifying examples into the child with the smallest label set. However, the way w classifies examples is mostly constrained by the accuracy cap ϵ, especially for small ϵ. Empirically we also found that in optimizing L̃ over w, A remains almost constant. Therefore for simplicity we assume that A is constant w.r.t. w, and the optimization becomes minimizing the classification loss to move w into the feasible region. We also added a regularization term Σ_{q=1}^Q ‖w_q‖₂².

4.3 Determining partitions P given classifier weights w

If we fix w and optimize over P, rearranging terms gives the following integer program:

OP3. Optimizing over P.

  minimize_P  A(P) = Σ_{q,k} P_{qk} (1/(mK)) Σ_{i=1}^m 1(q̂_i = q)
  subject to  1 − Σ_{q,k} P_{qk} (1/m) Σ_{i=1}^m 1(q̂_i = q ∧ y_i = k) ≤ ϵ,
              P_{qk} ∈ {0, 1}, ∀q, k.

Integer programming in general is NP-hard. However, this integer program can be solved by relaxing it to a linear program and then taking the ceiling of the solution. We show that this solution is in fact near optimal, because the number of non-integer entries can be made very small: the LP has few constraints other than that the variables lie in [0, 1], and most of the [0, 1] constraints will be active. Specifically, we use Lemma 4.1 (proof in the supplementary materials) to bound the rounded LP solution in Theorem 4.2.

Lemma 4.1.
For the LP problem

  minimize_x  c^T x
  subject to  A x ≤ b,  0 ≤ x ≤ 1,

where A ∈ R^{m×n} and m < n: if the LP is feasible, then there exists an optimal solution with at most m non-integer entries, and such a solution can be found in polynomial time.

Theorem 4.2. Let A* be the optimal value of OP3. A solution P′ can be computed in polynomial time such that A(P′) ≤ A* + 1/K.

Proof. We relax OP3 to an LP by replacing the constraint P_{qk} ∈ {0, 1}, ∀q, k with P_{qk} ∈ [0, 1], ∀q, k. Applying Lemma 4.1, we obtain an optimal solution P′′ of the LP with at most one non-integer entry. We take the ceiling of the fraction and obtain an integer solution P′ to OP3. The value of the LP, a lower bound of A*, increases by at most 1/K, since (1/(mK)) Σ_{i=1}^m 1(q̂_i = q) ≤ 1/K, ∀q.

Note that the ambiguity is a quantity in [0, 1] and K is the number of classes. Therefore for large numbers of classes the rounded solution is almost optimal.

4.4 Summary of algorithm

Now all ingredients are in place for an iterative algorithm to build the tree, except that we need to initialize the partition P or the weights w. We find that a random initialization of P works well in practice. Specifically, for each child, we randomly pick one class, without replacement, from the label set of the parent. That is, for each row of P, randomly pick a column and set that column to 1. This is analogous to picking the cluster seeds in the K-means algorithm. We summarize the algorithm for building one level of tree nodes in Algorithm 2. The procedure is applied recursively from the root. Note that each training example only affects classifiers on one path of the tree, hence the training cost is O(mD log K) for a balanced tree.

Algorithm 2 Grow a single node r
  Input: Q, ϵ and training examples classified into node r by its ancestors.
  Initialize P: for each child, randomly pick one class label from the parent, without replacement.
  for t = 1 → T do
    Fix P, solve OP2 and update w.
    Fix w, solve OP3 and update P.
  end for

4.5 Efficiency constrained formulations

As mentioned earlier, we can also optimize accuracy given explicit efficiency constraints. Let δ be the maximum ambiguity we can tolerate. Let OP1′, OP2′, OP3′ be the counterparts of OP1, OP2 and OP3. We obtain OP1′ by replacing ϵ with δ and swapping L(w, x_i, y_i, P) and A(w, x_i, P) in OP1. OP2′ is the same as OP2 because we also treat A as constant and minimize the classification loss L unconstrained. OP3′ can also be formulated in a straightforward manner, and solved nearly optimally by rounding the LP solution (Theorem 4.3).

Theorem 4.3. Let L* be the optimal value of OP3′. A solution P′ can be computed in polynomial time such that L(P′) ≤ L* + max_k ψ_k, where ψ_k = (1/m) Σ_{i=1}^m 1(y_i = k) is the fraction of training examples from class k.

Proof. We relax OP3′ to an LP. Applying Lemma 4.1, we obtain an optimal solution P′′ with at most one non-integer entry. We take the floor of P′′ and obtain a feasible solution P′ to OP3′. The value of the LP, a lower bound of L*, increases by at most max_k ψ_k, since (1/m) Σ_i 1(q̂_i = q ∧ y_i = k) ≤ (1/m) Σ_{i=1}^m 1(y_i = k) ≤ max_k ψ_k, ∀k, q.

For a uniform distribution of examples among classes, max_k ψ_k = 1/K and the rounded solution is near optimal for large K. If the distribution is highly skewed, for example with a heavy tail, then the rounding can give a poor approximation. One simple workaround is to split the big classes into artificial subclasses, or to treat the classes in the tail as one big class, to "equalize" the distribution; then the same learning techniques can be applied. In this paper we focus on the near uniform case and leave further discussion of the skewed case as future work.

5 Experiments

We use two datasets for evaluation: ILSVRC2010 [12] and ImageNet10K [6]. In ILSVRC2010, there are 1.2M images from 1k classes for training, 50k images for validation and 150k images for test.
For each image in ILSVRC2010 we compute the LLC [13] feature with SIFT on a 10k codebook and use a two level spatial pyramid (1x1 and 2x2 grids) to obtain a 50k dimensional feature vector. In ImageNet10K, there are 9M images from 10184 classes. We use 50% for training, 25% for validation, and the remaining 25% for testing. For ImageNet10K, we compute LLC similarly, except that we use no spatial pyramid, obtaining a 10k dimensional feature vector. We use parallel stochastic gradient descent (SGD) [17] for training. SGD is especially suited for large scale learning [4], where the learning is bounded by time and the features can no longer fit into memory (the LLC features take 80G in sparse format). Parallelization makes it possible to use multiple CPUs to improve wall-clock time. We compare our algorithm with the original label tree learning method by Bengio et al. [1]. For both algorithms, we fix two parameters: the number of children Q for each node, and the maximum depth H of the tree. The depth of each node is defined as the maximum distance to the root (the root has depth 0).

Table 1: Global accuracy (Acc), training cost (Ctr), and test speedup (Ste) on ILSVRC2010 1K classes (T32,2, T10,3, T6,4) and on ImageNet10K (T101,2). Training and test costs are measured as the average number of vector operations performed per example. Test speedup is the one-vs-all test cost divided by the label tree test cost. Ours outperforms the Bengio et al. [1] approach by achieving comparable or better accuracy and efficiency with less training cost, compared with the training cost for Bengio et al. [1] with the one-vs-all training cost excluded.

            T32,2               T10,3               T6,4                T101,2
          Acc%  Ctr   Ste     Acc%  Ctr  Ste      Acc%  Ctr   Ste     Acc%  Ctr   Ste
  Ours    11.9  259   10.3    8.92  104  18.2     5.62  50.2  31.3    3.4   685   32.4
  [1]     8.33  321   10.3    5.99  193  15.2     5.88  250   9.32    2.7   1191  32.4

Table 2: Local classification loss (Eqn. 1) and ambiguity (Eqn. 2) measured at different depth levels for all trees on the ILSVRC2010 test set (1k classes). T6,4 of Bengio et al. is less balanced (large ambiguity). Our trees are more balanced, as efficiency is explicitly enforced by capping the ambiguity throughout all levels.

  Tree                      T32,2        T10,3             T6,4
  Depth                     0     1      0     1     2     0     1     2     3
  Classification loss (%)
    Ours                    49.9  76.1   34.6  52.6  71.2  30.0  48.8  55.9  64.4
    Bengio [1]              76.6  64.8   62.8  53.7  65.3  56.2  34.8  37.3  65.8
  Ambiguity (%)
    Ours                    6.49  1.55   18.9  18.4  2.96  24.7  24.1  23.5  7.15
    Bengio [1]              6.49  1.87   19.0  25.9  2.95  24.7  59.6  56.5  2.02

We require every internal node to split into Q children, with two exceptions: nodes at depth H − 1 (parents of leaves) and nodes with fewer than Q classes. In both cases, we split the node fully, i.e. grow one child node per class. We use TQ,H to denote a tree built with parameters Q and H. We set Q and H such that, for a well balanced tree, the number of leaf nodes Q^H approximates the number of classes K. We evaluate the global classification accuracy and the computational cost of both training and test. The main costs of learning consist of two operations: evaluating the gradient and updating the weights, i.e. vector dot products and vector additions (possibly with scaling). We treat both operations as costing the same.¹ To measure the cost, we count the number of vector operations performed per training example. For instance, running SGD one-versus-all (either independent or single machine SVMs [5]) for K classes costs 2K per example for going through the data once, as in each iteration all K classifiers are evaluated against the feature vector (dot product) and updated (addition). For both algorithms, we build three trees T32,2, T10,3, T6,4 for the ILSVRC2010 1k classes and build one tree T101,2 for the ImageNet10K classes. For the Bengio et al. method, we first train one-versus-all classifiers with one pass of parallel SGD.
This results in a cost of 2000 per example for ILSVRC2010 and 20368 for ImageNet10K. After forming the tree skeleton by spectral clustering on the confusion matrix from the validation set, we learn the weights by solving a joint optimization (see [1]) with two passes of parallel SGD. For our method, we run three iterations of Algorithm 2. In each iteration, we do one pass of parallel SGD to solve OP3', so that the computation is comparable to that of Bengio et al. (excluding the one-versus-all training). We then solve OP3' on the validation set to update the partition. To set the efficiency constraint, we measure the average (local) ambiguity of the root node of the tree generated by the Bengio et al. approach on the validation set. We use it as our ambiguity cap throughout our learning, in an attempt to produce a similarly structured tree.

We report the test results in Table 1. The results show that for all types of trees, our method achieves comparable or significantly better accuracy while achieving better speed-up with much less training cost, even after excluding the one-versus-all training in Bengio et al.'s approach. It is worth noting that for the Bengio et al. approach, T_{6,4} fails to further speed up testing compared to the shallower trees. The reason is that at depth 1 (one level down from the root), the splits become highly imbalanced and do not shrink the class sets fast enough before the height limit is reached. This is revealed in Table 2, where we measure the average local ambiguity (Eqn. 2) and classification loss (Eqn. 1) at each depth on the test set to shed more light on the structure of the trees. Observe that our trees have almost constant average ambiguity at each level, as enforced in learning. This shows an advantage of our algorithm, since we are able to explicitly enforce a balanced tree, while in Bengio et al. [1] no such control is possible, although spectral clustering encourages balanced splits.

¹This is inconsequential, as a vector addition always pairs with a dot product for all training in this paper.

Figure 1: Comparison of the partition matrices (32 × 1000) of the root node of T_{32,2} for our approach (top) and the Bengio et al. approach (bottom). Each entry represents the membership of a class label (column) in a child (row). The columns are ordered by a depth-first search of WordNet. Columns belonging to certain WordNet subtrees are marked by red boxes.

Figure 2: Paths of the tree T_{6,4} taken by two test examples. The class labels shown are randomly subsampled to fit into the space.

In Fig. 1, we visualize the partition matrices of the root of T_{32,2} for both algorithms. The columns are ordered by a depth-first search of the WordNet tree, so that neighboring columns are likely to be semantically similar classes. We observe that for both methods there is visible alignment with the WordNet ordering. We further illustrate the semantic alignment by showing the paths of our T_{6,4} traveled by two test examples (Fig. 2). Also observe that our partition is notably "noisier", even though both partitions have the same average ambiguity. This is a result of overlapping partitions, which in fact improves accuracy (as shown in Table 2) because it avoids the mistakes made by forcing all examples of a class to commit to one child. Also note that Bengio et al. showed in [1] that optimizing the classifiers on the tree jointly is significantly better than training the classifiers for each node independently, as it encodes the dependency of the classifiers along a tree path. This does not contradict our results. Although we have no explicit joint learning of classifiers over the entire tree, we train the classifiers of each node using examples already filtered by the classifiers of its ancestors, thus implicitly enforcing the dependency.

6 Conclusion

We have presented a novel approach to efficiently learn a label tree for large scale classification with many classes, allowing a fine-grained efficiency-accuracy tradeoff.
Experimental results demonstrate more efficient trees at better accuracy with less training cost compared to previous work.

Acknowledgment

L. F-F is partially supported by an NSF CAREER grant (IIS-0845230), the DARPA CSSG grant, and a Google research award.

References

[1] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems (NIPS), 2010.
[2] A. Beygelzimer, J. Langford, and P. Ravikumar. Multiclass classification with filter trees. Preprint, June 2007.
[3] A. Beygelzimer, J. Langford, Y. Lifshits, G. B. Sorkin, and A. L. Strehl. Conditional probability tree estimation analysis and algorithms. Computing Research Repository, 2009.
[4] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 20:161–168, 2008.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292, 2002.
[6] J. Deng, A. C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In ECCV, 2010.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[8] C. Fellbaum. WordNet: An Electronic Lexical Database. MIT Press, 1998.
[9] G. Griffin and P. Perona. Learning and using taxonomies for fast visual categorization. In CVPR, 2008.
[10] Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, and T. Huang. Large-scale image classification: Fast feature extraction and SVM training. In CVPR, 2011.
[11] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1958–1970, 2008.
[12] http://www.image-net.org/challenges/LSVRC/2010/.
[13] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.
[14] D. Weiss, B. Sapp, and B. Taskar. Sidestepping intractable inference with structured ensemble cascades. In NIPS, 2010.
[15] K. Yu and T. Zhang. Improved local coordinate coding using local tangents. In ICML, 2010.
[16] X. Zhou, K. Yu, T. Zhang, and T. Huang. Image classification using super-vector coding of local image descriptors. In Computer Vision–ECCV 2010, pages 141–154, 2010.
[17] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems 23, pages 2595–2603, 2010.
Divide-and-Conquer Matrix Factorization

Lester Mackey^a, Ameet Talwalkar^a, Michael I. Jordan^{a,b}
^a Department of Electrical Engineering and Computer Science, UC Berkeley
^b Department of Statistics, UC Berkeley

Abstract

This work introduces Divide-Factor-Combine (DFC), a parallel divide-and-conquer framework for noisy matrix factorization. DFC divides a large-scale matrix factorization task into smaller subproblems, solves each subproblem in parallel using an arbitrary base matrix factorization algorithm, and combines the subproblem solutions using techniques from randomized matrix approximation. Our experiments with collaborative filtering, video background modeling, and simulated data demonstrate the near-linear to super-linear speed-ups attainable with this approach. Moreover, our analysis shows that DFC enjoys high-probability recovery guarantees comparable to those of its base algorithm.

1 Introduction

The goal in matrix factorization is to recover a low-rank matrix from irrelevant noise and corruption. We focus on two instances of the problem: noisy matrix completion, i.e., recovering a low-rank matrix from a small subset of noisy entries, and noisy robust matrix factorization [2, 3, 4], i.e., recovering a low-rank matrix from corruption by noise and outliers of arbitrary magnitude. Examples of the matrix completion problem include collaborative filtering for recommender systems, link prediction for social networks, and click prediction for web search, while applications of robust matrix factorization arise in video surveillance [2], graphical model selection [4], document modeling [17], and image alignment [21]. These two classes of matrix factorization problems have attracted significant interest in the research community.
In particular, convex formulations of noisy matrix factorization have been shown to admit strong theoretical recovery guarantees [1, 2, 3, 20], and a variety of algorithms (e.g., [15, 16, 23]) have been developed for solving both matrix completion and robust matrix factorization via convex relaxation. Unfortunately, these methods are inherently sequential, and all rely on the repeated and costly computation of truncated SVDs, factors that limit the scalability of the algorithms.

To improve scalability and leverage the growing availability of parallel computing architectures, we propose a divide-and-conquer framework for large-scale matrix factorization. Our framework, entitled Divide-Factor-Combine (DFC), randomly divides the original matrix factorization task into cheaper subproblems, solves those subproblems in parallel using any base matrix factorization algorithm, and combines the solutions to the subproblems using efficient techniques from randomized matrix approximation. The inherent parallelism of DFC allows for near-linear to superlinear speedups in practice, while our theory provides high-probability recovery guarantees for DFC comparable to those enjoyed by its base algorithm.

The remainder of the paper is organized as follows. In Section 2, we define the setting of noisy matrix factorization and introduce the components of the DFC framework. To illustrate the significant speed-up and robustness of DFC and to highlight the effectiveness of DFC ensembling, we present experimental results on collaborative filtering, video background modeling, and simulated data in Section 3. Our theoretical analysis follows in Section 4. There, we establish high-probability noisy recovery guarantees for DFC that rest upon a novel analysis of randomized matrix approximation and a new recovery result for noisy matrix completion.

Notation: For M ∈ R^{m×n}, we define M_{(i)} as the ith row vector and M_{ij} as the ijth entry.
If rank(M) = r, we write the compact singular value decomposition (SVD) of M as U_M Σ_M V_M^⊤, where Σ_M is diagonal and contains the r non-zero singular values of M, and U_M ∈ R^{m×r} and V_M ∈ R^{n×r} are the corresponding left and right singular vectors of M. We define M^+ = V_M Σ_M^{-1} U_M^⊤ as the Moore-Penrose pseudoinverse of M and P_M = MM^+ as the orthogonal projection onto the column space of M. We let ∥·∥_2, ∥·∥_F, and ∥·∥_* respectively denote the spectral, Frobenius, and nuclear norms of a matrix, and let ∥·∥ represent the ℓ2 norm of a vector.

2 The Divide-Factor-Combine Framework

In this section, we present our divide-and-conquer framework for scalable noisy matrix factorization. We begin by defining the problem setting of interest.

2.1 Noisy Matrix Factorization (MF)

In the setting of noisy matrix factorization, we observe a subset of the entries of a matrix M = L_0 + S_0 + Z_0 ∈ R^{m×n}, where L_0 has rank r ≪ m, n, S_0 represents a sparse matrix of outliers of arbitrary magnitude, and Z_0 is a dense noise matrix. We let Ω represent the locations of the observed entries and P_Ω be the orthogonal projection onto the space of m × n matrices with support Ω, so that (P_Ω(M))_{ij} = M_{ij} if (i, j) ∈ Ω and (P_Ω(M))_{ij} = 0 otherwise. Our goal is to recover the low-rank matrix L_0 from P_Ω(M) with error proportional to the noise level Δ ≜ ∥Z_0∥_F. We will focus on two specific instances of this general problem:

• Noisy Matrix Completion (MC): s ≜ |Ω| entries of M are revealed uniformly without replacement, along with their locations. There are no outliers, so that S_0 is identically zero.

• Noisy Robust Matrix Factorization (RMF): S_0 is identically zero save for s outlier entries of arbitrary magnitude with unknown locations distributed uniformly without replacement. All entries of M are observed, so that P_Ω(M) = M.

2.2 Divide-Factor-Combine

Algorithms 1 and 2 summarize two canonical examples of the general Divide-Factor-Combine framework, which we refer to as DFC-PROJ and DFC-NYS.
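As a quick numerical check of the SVD and pseudoinverse notation above, the identities M^+ = V_M Σ_M^{-1} U_M^⊤ and P_M = MM^+ can be verified in a few lines of NumPy (a minimal sketch, not from the paper):

```python
# Verify M^+ = V Σ^{-1} U^T (compact SVD) and that P_M = M M^+ is the
# orthogonal projector onto col(M), for a small exact-rank-3 matrix.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 5))  # rank 3

U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 3
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]            # compact SVD
M_pinv = Vt.T @ np.diag(1.0 / s) @ U.T           # V Σ^{-1} U^T
P_M = M @ M_pinv                                 # orthogonal projector

# rcond trims the near-zero trailing singular values of the rank-3 matrix
assert np.allclose(M_pinv, np.linalg.pinv(M, rcond=1e-10))
assert np.allclose(P_M @ M, M)                   # P_M fixes col(M)
```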
Each algorithm has three simple steps:

(D step) Divide input matrix into submatrices: DFC-PROJ randomly partitions P_Ω(M) into t l-column submatrices, {P_Ω(C_1), . . . , P_Ω(C_t)},¹ while DFC-NYS selects an l-column submatrix, P_Ω(C), and a d-row submatrix, P_Ω(R), uniformly at random.

(F step) Factor each submatrix in parallel using any base MF algorithm: DFC-PROJ performs t parallel submatrix factorizations, while DFC-NYS performs two such parallel factorizations. Standard base MF algorithms output the low-rank approximations {Ĉ_1, . . . , Ĉ_t} for DFC-PROJ and Ĉ and R̂ for DFC-NYS. All matrices are retained in factored form.

(C step) Combine submatrix estimates: DFC-PROJ generates a final low-rank estimate L̂_proj by projecting [Ĉ_1, . . . , Ĉ_t] onto the column space of Ĉ_1, while DFC-NYS forms the low-rank estimate L̂_nys from Ĉ and R̂ via the generalized Nyström method. These matrix approximation techniques are described in more detail in Section 2.3.

¹For ease of discussion, we assume that mod(n, t) = 0, and hence l = n/t. Note that for arbitrary n and t, P_Ω(M) can always be partitioned into t submatrices, each with either ⌊n/t⌋ or ⌈n/t⌉ columns.

Algorithm 1 DFC-PROJ
  Input: P_Ω(M), t
  {P_Ω(C_i)}_{1≤i≤t} = SAMPCOL(P_Ω(M), t)
  do in parallel
    Ĉ_1 = BASE-MF-ALG(P_Ω(C_1))
    ...
    Ĉ_t = BASE-MF-ALG(P_Ω(C_t))
  end do
  L̂_proj = COLPROJECTION(Ĉ_1, . . . , Ĉ_t)

Algorithm 2 DFC-NYS^a
  Input: P_Ω(M), l, d
  P_Ω(C), P_Ω(R) = SAMPCOLROW(P_Ω(M), l, d)
  do in parallel
    Ĉ = BASE-MF-ALG(P_Ω(C))
    R̂ = BASE-MF-ALG(P_Ω(R))
  end do
  L̂_nys = GENNYSTRÖM(Ĉ, R̂)

^a When Q is a submatrix of M, we abuse notation and define P_Ω(Q) as the corresponding submatrix of P_Ω(M).

2.3 Randomized Matrix Approximations

Our divide-and-conquer algorithms rely on two methods that generate randomized low-rank approximations to an arbitrary matrix M from submatrices of M.

Column Projection. This approximation, introduced by Frieze et al. [7], is derived from column sampling of M.
We begin by sampling l < n columns uniformly without replacement and let C be the m × l matrix of sampled columns. Then, column projection uses C to generate a "matrix projection" approximation [13] of M as follows:

L_proj = CC^+ M = U_C U_C^⊤ M.

In practice, we do not reconstruct L_proj but rather maintain low-rank factors, e.g., U_C and U_C^⊤ M.

Generalized Nyström Method. The standard Nyström method is often used to speed up large-scale learning applications involving symmetric positive semidefinite (SPSD) matrices [24] and has been generalized to arbitrary real-valued matrices [8]. In particular, after sampling columns to obtain C, imagine that we independently sample d < m rows uniformly without replacement. Let R be the d × n matrix of sampled rows and W be the d × l matrix formed from the intersection of the sampled rows and columns. Then, the generalized Nyström method uses C, W, and R to compute a "spectral reconstruction" approximation [13] of M as follows:

L_nys = CW^+ R = C V_W Σ_W^+ U_W^⊤ R.

As with L_proj, we store low-rank factors of L_nys, such as C V_W Σ_W^+ and U_W^⊤ R.

2.4 Running Time of DFC

Many state-of-the-art MF algorithms have Ω(mnk_M) per-iteration time complexity due to the rank-k_M truncated SVD performed on each iteration. DFC significantly reduces the per-iteration complexity to O(mlk_{C_i}) time for C_i (or C) and O(ndk_R) time for R. The cost of combining the submatrix estimates is even smaller, since the outputs of standard MF algorithms are returned in factored form. Indeed, the column projection step of DFC-PROJ requires only O(mk² + lk²) time for k ≜ max_i k_{C_i}: O(mk² + lk²) time for the pseudoinversion of Ĉ_1 and O(mk² + lk²) time for matrix multiplication with each Ĉ_i in parallel. Similarly, the generalized Nyström step of DFC-NYS requires only O(lk̄² + dk̄² + min(m, n)k̄²) time, where k̄ ≜ max(k_C, k_R).
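The two randomized approximations of Section 2.3 can be sketched in a few lines of NumPy (a toy, fully observed, exact-rank example of our own; in DFC these steps are applied to the factored submatrix estimates, not to M itself):

```python
# Column projection (L_proj = C C^+ M) and the generalized Nystrom method
# (L_nys = C W^+ R) on a small exact rank-r matrix. With l, d > r and random
# data, both reconstruct M up to roundoff.
import numpy as np

rng = np.random.default_rng(1)
m, n, r, l, d = 60, 80, 5, 20, 15
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank r

cols = rng.choice(n, size=l, replace=False)
rows = rng.choice(m, size=d, replace=False)
C, R = M[:, cols], M[rows, :]
W = M[np.ix_(rows, cols)]            # intersection of sampled rows and columns

# Column projection: project M onto col(C).
L_proj = C @ np.linalg.pinv(C) @ M

# Generalized Nystrom reconstruction; rcond trims W's near-zero singular values.
L_nys = C @ np.linalg.pinv(W, rcond=1e-8) @ R

print(np.linalg.norm(M - L_proj) / np.linalg.norm(M))  # ~ machine precision
print(np.linalg.norm(M - L_nys) / np.linalg.norm(M))   # ~ machine precision
```

In practice one keeps the factors (e.g., U_C and U_C^⊤ M) rather than forming the dense products, as the text notes.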
Hence, DFC divides the expensive task of matrix factorization into smaller subproblems that can be executed in parallel and efficiently combines the low-rank, factored results.

2.5 Ensemble Methods

Ensemble methods have been shown to improve the performance of matrix approximation algorithms, while straightforwardly leveraging the parallelism of modern many-core and distributed architectures [14]. As such, we propose ensemble variants of the DFC algorithms that demonstrably reduce recovery error while introducing a negligible cost to the parallel running time. For DFC-PROJ-ENS, rather than projecting only onto the column space of Ĉ_1, we project [Ĉ_1, . . . , Ĉ_t] onto the column space of each Ĉ_i in parallel and then average the t resulting low-rank approximations. For DFC-NYS-ENS, we choose a random d-row submatrix P_Ω(R) as in DFC-NYS and independently partition the columns of P_Ω(M) into {P_Ω(C_1), . . . , P_Ω(C_t)} as in DFC-PROJ. After running the base MF algorithm on each submatrix, we apply the generalized Nyström method to each (Ĉ_i, R̂) pair in parallel and average the t resulting low-rank approximations. Section 3 highlights the empirical effectiveness of ensembling.

3 Experimental Evaluation

We now explore the accuracy and speed-up of DFC on a variety of simulated and real-world datasets. We use state-of-the-art matrix factorization algorithms in our experiments: the Accelerated Proximal Gradient (APG) algorithm of [23] as our base noisy MC algorithm and the APG algorithm of [15] as our base noisy RMF algorithm. In all experiments, we use the default parameter settings suggested by [23] and [15], measure recovery error via root mean square error (RMSE), and report parallel running times for DFC. We moreover compare against two baseline methods: APG used on the full matrix M, and PARTITION, which performs matrix factorization on t submatrices just like DFC-PROJ but omits the final column projection step.
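Under our reading of the DFC-PROJ-ENS description in Section 2.5, the combine step can be sketched as follows (toy exact-rank data, with the identity map standing in for the base MF algorithm):

```python
# DFC-PROJ-ENS combine step (our sketch): project the concatenated submatrix
# estimates onto the column space of each C_i in turn, then average the t
# resulting low-rank approximations.
import numpy as np

rng = np.random.default_rng(2)
m, n, r, t = 40, 60, 4, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank r

C_hats = np.array_split(M, t, axis=1)   # "estimates" C_i (identity base alg.)
L_full = np.hstack(C_hats)              # [C_1, ..., C_t]

estimates = []
for C in C_hats:
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    U = U[:, :r]                        # orthonormal basis for col(C_i)
    estimates.append(U @ (U.T @ L_full))  # projection onto col(C_i)
L_ens = sum(estimates) / t              # average of the t projections

print(np.linalg.norm(M - L_ens) / np.linalg.norm(M))  # tiny on exact-rank data
```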
3.1 Simulations

For our simulations, we focused on square matrices (m = n) and generated random low-rank and sparse decompositions, similar to the schemes used in related work, e.g., [2, 12, 25]. We created L_0 ∈ R^{m×m} as a random product, AB^⊤, where A and B are m × r matrices with independent N(0, √(1/r)) entries, such that each entry of L_0 has unit variance. Z_0 contained independent N(0, 0.1) entries. In the MC setting, s entries of L_0 + Z_0 were revealed uniformly at random. In the RMF setting, the support of S_0 was generated uniformly at random, and the s corrupted entries took values in [0, 1] with uniform probability. For each algorithm, we report the error between L_0 and the recovered low-rank matrix, and all reported results are averages over five trials.

Figure 1: Recovery error of DFC relative to base algorithms. Left: MC RMSE versus the percentage of revealed entries; right: RMF RMSE versus the percentage of outliers (curves: Part-10%, Proj-10%, Nys-10%, Proj-Ens-10%, Nys-Ens-10%, Proj-Ens-25%, and the base algorithm).

We first explored the recovery error of DFC as a function of s, using (m = 10K, r = 10) with varying observation sparsity for MC and (m = 1K, r = 10) with a varying percentage of outliers for RMF. The results are summarized in Figure 1.² In both MC and RMF, the gaps in recovery between APG and DFC are small when sampling only 10% of rows and columns. Moreover, DFC-PROJ-ENS in particular consistently outperforms PARTITION and DFC-NYS-ENS and matches the performance of APG for most settings of s. We next explored the speed-up of DFC as a function of matrix size. For MC, we revealed 4% of the matrix entries and set r = 0.001 · m, while for RMF we fixed the percentage of outliers to 10% and set r = 0.01 · m.
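The simulated low-rank construction above can be sketched directly; we read N(0, √(1/r)) as a per-entry variance of 1/√r for A and B, which gives each entry of L_0 = AB^⊤ unit variance, since Var((AB^⊤)_{ij}) = Σ_k Var(A_{ik}) Var(B_{jk}) = r · (1/√r)² = 1.

```python
# Generate the synthetic low-rank factor L0 = A B^T described above and
# check empirically that its entries have (approximately) unit variance.
import numpy as np

rng = np.random.default_rng(3)
m, r = 2000, 10
std = (1.0 / r) ** 0.25          # per-entry std, so variance = 1/sqrt(r)
A = std * rng.standard_normal((m, r))
B = std * rng.standard_normal((m, r))
L0 = A @ B.T                     # each entry has variance r * (1/sqrt(r))^2 = 1

print(L0.var())                  # close to 1 for large m
```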
We sampled 10% of rows and columns and observed that recovery errors were comparable to the errors presented in Figure 1 for similar settings of s; in particular, at all values of n for both MC and RMF, the errors of APG and DFC-PROJ-ENS were nearly identical. Our timing results, presented in Figure 2, illustrate a near-linear speed-up for MC and a super-linear speed-up for RMF across varying matrix sizes. Note that the timing curves of the DFC algorithms and PARTITION all overlap, a fact that highlights the minimal computational cost of the final matrix approximation step.

²In the left-hand plot of Figure 1, the lines for Proj-10% and Proj-Ens-10% overlap.

Figure 2: Speed-up of DFC relative to base algorithms. Left: MC time (s) versus m; right: RMF time (s) versus m (curves: Part-10%, Proj-10%, Nys-10%, Proj-Ens-10%, Nys-Ens-10%, and the base algorithm).

3.2 Collaborative Filtering

Collaborative filtering for recommender systems is one prevalent real-world application of noisy matrix completion. A collaborative filtering dataset can be interpreted as the incomplete observation of a ratings matrix with columns corresponding to users and rows corresponding to items. The goal is to infer the unobserved entries of this ratings matrix. We evaluate DFC on two of the largest publicly available collaborative filtering datasets: MovieLens 10M³ (m = 4K, n = 6K, s > 10M) and the Netflix Prize dataset⁴ (m = 18K, n = 480K, s > 100M). To generate test sets drawn from the training distribution, for each dataset we aggregated all available rating data into a single training set and withheld test entries uniformly at random, while ensuring that at least one training observation remained in each row and column. The algorithms were then run on the remaining training portions and evaluated on the test portions of each split.
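A minimal sketch of this held-out split protocol (our implementation, not the authors' code): withhold observed entries at random, but refuse any removal that would empty a row or column of the training mask.

```python
# Withhold a fraction of observed entries for testing while guaranteeing that
# every row and column keeps at least one training observation.
import numpy as np

def holdout_split(obs_mask, frac, rng):
    """obs_mask: boolean m x n array marking observed entries."""
    train = obs_mask.copy()
    test = np.zeros_like(obs_mask)
    idx = np.argwhere(obs_mask)
    rng.shuffle(idx)                      # visit observed entries in random order
    target = int(frac * len(idx))
    for i, j in idx:
        if target == 0:
            break
        # skip (i, j) if removing it would empty its row or column
        if train[i].sum() > 1 and train[:, j].sum() > 1:
            train[i, j] = False
            test[i, j] = True
            target -= 1
    return train, test

rng = np.random.default_rng(5)
mask = rng.random((30, 40)) < 0.3         # toy 30% observed ratings matrix
train, test = holdout_split(mask, 0.2, rng)
print(test.sum(), "of", mask.sum(), "observed entries withheld")
```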
The results, averaged over three train-test splits, are summarized in Table 1. Notably, DFC-PROJ, DFC-PROJ-ENS, and DFC-NYS-ENS all outperform PARTITION, and DFC-PROJ-ENS performs comparably to APG while providing a nearly linear parallel time speed-up. The poorer performance of DFC-NYS can be explained in part by the asymmetry of these problems. Since these matrices have many more columns than rows, MF on column submatrices is inherently easier than MF on row submatrices, and for DFC-NYS, we observe that Ĉ is an accurate estimate while R̂ is not.

Table 1: Performance of DFC relative to APG on collaborative filtering tasks.

                     MovieLens 10M        Netflix
Method               RMSE     Time        RMSE     Time
APG                  0.8005   294.3s      0.8433   2653.1s
PARTITION-25%        0.8146   77.4s       0.8451   689.1s
PARTITION-10%        0.8461   36.0s       0.8492   289.2s
DFC-NYS-25%          0.8449   77.2s       0.8832   890.9s
DFC-NYS-10%          0.8769   53.4s       0.9224   487.6s
DFC-NYS-ENS-25%      0.8085   84.5s       0.8486   964.3s
DFC-NYS-ENS-10%      0.8327   63.9s       0.8613   546.2s
DFC-PROJ-25%         0.8061   77.4s       0.8436   689.5s
DFC-PROJ-10%         0.8272   36.1s       0.8484   289.7s
DFC-PROJ-ENS-25%     0.7944   77.4s       0.8411   689.5s
DFC-PROJ-ENS-10%     0.8119   36.1s       0.8433   289.7s

³http://www.grouplens.org/
⁴http://www.netflixprize.com/

3.3 Background Modeling

Background modeling has important practical ramifications for detecting activity in surveillance video. This problem can be framed as an application of noisy RMF, where each video frame is a column of some matrix (M), the background model is low-rank (L_0), and moving objects and background variations, e.g., changes in illumination, are outliers (S_0). We evaluate DFC on two videos: 'Hall' (200 frames of size 176 × 144) contains significant foreground variation and was studied by [2], while 'Lobby' (1546 frames of size 168 × 120) includes many changes in illumination (a smaller video with 250 frames was studied by [2]).
We focused on DFC-PROJ-ENS, due to its superior performance in previous experiments, and measured the RMSE between the background model recovered by DFC and that of APG. On both videos, DFC-PROJ-ENS recovered nearly the same background model as the full APG algorithm in a small fraction of the time. On 'Hall', the DFC-PROJ-ENS-5% and DFC-PROJ-ENS-0.5% models exhibited RMSEs of 0.564 and 1.55, quite small given pixels with 256 intensity values. The associated runtime was reduced from 342.5s for APG to real-time (5.2s for a 13s video) for DFC-PROJ-ENS-0.5%. Snapshots of the results are presented in Figure 3. On 'Lobby', the RMSE of DFC-PROJ-ENS-4% was 0.64, and the speed-up over APG was more than 20x, i.e., the runtime was reduced from 16557s to 792s.

Figure 3: Sample 'Hall' recovery by APG (342.5s), DFC-PROJ-ENS-5% (24.2s), and DFC-PROJ-ENS-0.5% (5.2s), shown alongside the original frame.

4 Theoretical Analysis

Having investigated the empirical advantages of DFC, we now show that DFC admits high-probability recovery guarantees comparable to those of its base algorithm.

4.1 Matrix Coherence

Since not all matrices can be recovered from missing entries or gross outliers, recent theoretical advances have studied sufficient conditions for accurate noisy MC [3, 12, 20] and RMF [1, 25]. Most prevalent among these are matrix coherence conditions, which limit the extent to which the singular vectors of a matrix are correlated with the standard basis. Letting e_i be the ith column of the standard basis, we define two standard notions of coherence [22]:

Definition 1 (µ0-Coherence). Let V ∈ R^{n×r} contain orthonormal columns with r ≤ n. Then the µ0-coherence of V is

µ0(V) ≜ (n/r) max_{1≤i≤n} ∥P_V e_i∥² = (n/r) max_{1≤i≤n} ∥V_{(i)}∥².

Definition 2 (µ1-Coherence). Let L ∈ R^{m×n} have rank r. Then the µ1-coherence of L is

µ1(L) ≜ √(mn/r) · max_{ij} |e_i^⊤ U_L V_L^⊤ e_j|.

For any µ > 0, we will call a matrix L (µ, r)-coherent if rank(L) = r, max(µ0(U_L), µ0(V_L)) ≤ µ, and µ1(L) ≤ √µ.
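Definitions 1 and 2 translate directly into NumPy (helper names are ours, not the paper's):

```python
# mu0(V) = (n/r) * max_i ||V_(i)||^2 for orthonormal-column V (Definition 1);
# mu1(L) = sqrt(mn/r) * max_ij |e_i^T U_L V_L^T e_j| (Definition 2).
import numpy as np

def mu0(V):
    n, r = V.shape
    return (n / r) * np.max(np.sum(V**2, axis=1))   # max squared row norm

def mu1(L, r):
    m, n = L.shape
    U, _, Vt = np.linalg.svd(L, full_matrices=False)
    U, Vt = U[:, :r], Vt[:r, :]                      # compact SVD factors
    return np.sqrt(m * n / r) * np.max(np.abs(U @ Vt))

rng = np.random.default_rng(4)
m, n, r = 50, 40, 3
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r
U, _, Vt = np.linalg.svd(L, full_matrices=False)

# mu0 is always >= 1: the squared row norms of an orthonormal-column V sum to r.
print(mu0(U[:, :r]), mu0(Vt[:r, :].T), mu1(L, r))
```

Lower values indicate singular vectors that are spread out, the regime in which the recovery guarantees below apply.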
Our analysis will focus on base MC and RMF algorithms that express their recovery guarantees in terms of the (µ, r)-coherence of the target low-rank matrix L_0. For such algorithms, lower values of µ correspond to better recovery properties.

4.2 DFC Master Theorem

We now show that the same coherence conditions that allow for accurate MC and RMF also imply high-probability recovery for DFC. To make this precise, we let M = L_0 + S_0 + Z_0 ∈ R^{m×n}, where L_0 is (µ, r)-coherent and ∥P_Ω(Z_0)∥_F ≤ Δ. We further fix any ϵ, δ ∈ (0, 1] and define A(X) as the event that a matrix X is (rµ²/(1 − ϵ/2), r)-coherent. Then, our Thm. 3 provides a generic recovery bound for DFC when used in combination with an arbitrary base algorithm. The proof requires a novel, coherence-based analysis of column projection and random column sampling. These results of independent interest are presented in Appendix A.

Theorem 3. Choose t = n/l and l ≥ crµ log(n) log(2/δ)/ϵ², where c is a fixed positive constant, and fix any c_e ≥ 0. Under the notation of Algorithm 1, if a base MF algorithm yields

P(∥C_{0,i} − Ĉ_i∥_F > c_e √(ml) Δ | A(C_{0,i})) ≤ δ_C

for each i, where C_{0,i} is the corresponding partition of L_0, then, with probability at least (1 − δ)(1 − tδ_C), DFC-PROJ guarantees

∥L_0 − L̂_proj∥_F ≤ (2 + ϵ) c_e √(mn) Δ.

Under the notation of Algorithm 2, if a base MF algorithm yields

P(∥C_0 − Ĉ∥_F > c_e √(ml) Δ | A(C)) ≤ δ_C and P(∥R_0 − R̂∥_F > c_e √(dn) Δ | A(R)) ≤ δ_R

for d ≥ clµ0(Ĉ) log(m) log(1/δ)/ϵ², then, with probability at least (1 − δ)²(1 − δ_C − δ_R), DFC-NYS guarantees

∥L_0 − L̂_nys∥_F ≤ (2 + 3ϵ) c_e √(ml + dn) Δ.

To understand the conclusions of Thm. 3, consider a typical base algorithm which, when applied to P_Ω(M), recovers an estimate L̂ satisfying ∥L_0 − L̂∥_F ≤ c_e √(mn) Δ with high probability. Thm.
3 asserts that, with appropriately reduced probability, DFC-PROJ exhibits the same recovery error scaled by an adjustable factor of 2 + ϵ, while DFC-NYS exhibits a somewhat smaller error scaled by 2 + 3ϵ.⁵ The key take-away, then, is that DFC introduces a controlled increase in error and a controlled decrement in the probability of success, allowing the user to interpolate between maximum speed and maximum accuracy. Thus, DFC can quickly provide near-optimal recovery in the noisy setting and exact recovery in the noiseless setting (Δ = 0), even when entries are missing or grossly corrupted. The next two sections demonstrate how Thm. 3 can be applied to derive specific DFC recovery guarantees for noisy MC and noisy RMF. In these sections, we let n̄ ≜ max(m, n).

4.3 Consequences for Noisy MC

Our first corollary of Thm. 3 shows that DFC retains the high-probability recovery guarantees of a standard MC solver while operating on matrices of much smaller dimension. Suppose that a base MC algorithm solves the following convex optimization problem, studied in [3]:

minimize_L ∥L∥_* subject to ∥P_Ω(M − L)∥_F ≤ Δ.

Then, Cor. 4 follows from a novel guarantee for noisy convex MC, proved in the appendix.

Corollary 4. Suppose that L_0 is (µ, r)-coherent and that s entries of M are observed, with locations Ω distributed uniformly. Define the oversampling parameter

β_s ≜ s(1 − ϵ/2) / (32µ²r²(m + n) log²(m + n)),

and fix any target rate parameter 1 < β ≤ β_s. Then, if ∥P_Ω(M) − P_Ω(L_0)∥_F ≤ Δ a.s., it suffices to choose t = n/l and

l ≥ max{ nβ/β_s + √(n(β − 1)/β_s), crµ log(n) log(2/δ)/ϵ² },
d ≥ max{ mβ/β_s + √(m(β − 1)/β_s), clµ0(Ĉ) log(m) log(1/δ)/ϵ² }

to achieve

DFC-PROJ: ∥L_0 − L̂_proj∥_F ≤ (2 + ϵ) c′_e √(mn) Δ
DFC-NYS: ∥L_0 − L̂_nys∥_F ≤ (2 + 3ϵ) c′_e √(ml + dn) Δ

with probability at least

DFC-PROJ: (1 − δ)(1 − 5t log(n̄) n̄^{2−2β}) ≥ (1 − δ)(1 − n̄^{3−2β})
DFC-NYS: (1 − δ)²(1 − 10 log(n̄) n̄^{2−2β}),

respectively, with c as in Thm. 3 and c′_e a positive constant.
⁵Note that the DFC-NYS guarantee requires the number of rows sampled to grow in proportion to µ0(Ĉ), a quantity always bounded by µ in our simulations.

Notably, Cor. 4 allows for the fraction of columns and rows sampled to decrease as the oversampling parameter β_s increases with m and n. In the best case, β_s = Θ(mn/[(m + n) log²(m + n)]), and Cor. 4 requires only O((n/m) log²(m + n)) sampled columns and O((m/n) log²(m + n)) sampled rows. In the worst case, β_s = Θ(1), and Cor. 4 requires the number of sampled columns and rows to grow linearly with the matrix dimensions. As a more realistic intermediate scenario, consider the setting in which β_s = Θ(√(m + n)) and thus a vanishing fraction of entries are revealed. In this setting, only O(√(m + n)) columns and rows are required by Cor. 4.

4.4 Consequences for Noisy RMF

Our next corollary shows that DFC retains the high-probability recovery guarantees of a standard RMF solver while operating on matrices of much smaller dimension. Suppose that a base RMF algorithm solves the following convex optimization problem, studied in [25]:

minimize_{L,S} ∥L∥_* + λ∥S∥_1 subject to ∥M − L − S∥_F ≤ Δ,

with λ = 1/√n̄. Then, Cor. 5 follows from Thm. 3 and the noisy RMF guarantee of [25, Thm. 2].

Corollary 5. Suppose that L_0 is (µ, r)-coherent and that the uniformly distributed support set of S_0 has cardinality s. For a fixed positive constant ρ_s, define the undersampling parameter

β_s ≜ (1 − s/(mn))/ρ_s,

and fix any target rate parameter β > 2 with rescaling β′ ≜ β log(n̄)/log(m) satisfying 4β_s − 3/ρ_s ≤ β′ ≤ β_s.
Then, if ∥M − L_0 − S_0∥_F ≤ Δ a.s., it suffices to choose t = n/l and

l ≥ max{ r²µ² log²(n̄) / ((1 − ϵ/2)ρ_r), 4 log(n̄) β(1 − ρ_s β_s) / (m(ρ_s β_s − ρ_s β′)²), crµ log(n) log(2/δ)/ϵ² },
d ≥ max{ r²µ² log²(n̄) / ((1 − ϵ/2)ρ_r), 4 log(n̄) β(1 − ρ_s β_s) / (n(ρ_s β_s − ρ_s β′)²), clµ0(Ĉ) log(m) log(1/δ)/ϵ² }

to have

DFC-PROJ: ∥L_0 − L̂_proj∥_F ≤ (2 + ϵ) c″_e √(mn) Δ
DFC-NYS: ∥L_0 − L̂_nys∥_F ≤ (2 + 3ϵ) c″_e √(ml + dn) Δ

with probability at least

DFC-PROJ: (1 − δ)(1 − t c_p n̄^{−β}) ≥ (1 − δ)(1 − c_p n̄^{1−β})
DFC-NYS: (1 − δ)²(1 − 2c_p n̄^{−β}),

respectively, with c as in Thm. 3 and ρ_r, c″_e, and c_p positive constants. Note that Cor. 5 places only very mild restrictions on the number of columns and rows to be sampled. Indeed, l and d need only grow poly-logarithmically in the matrix dimensions to achieve high-probability noisy recovery.

5 Conclusions

To improve the scalability of existing matrix factorization algorithms while leveraging the ubiquity of parallel computing architectures, we introduced, evaluated, and analyzed DFC, a divide-and-conquer framework for noisy matrix factorization with missing entries or outliers. We note that the contemporaneous work of [19] addresses the computational burden of noiseless RMF by reformulating a standard convex optimization problem to internally incorporate random projections. The differences between DFC and the approach of [19] highlight some of the main advantages of this work: (i) DFC can be used in combination with any underlying MF algorithm, (ii) DFC is trivially parallelized, and (iii) DFC provably maintains the recovery guarantees of its base algorithm, even in the presence of noise.

References

[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. In International Conference on Machine Learning, 2011.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3):1–37, 2011.
[3] E. J. Candès and Y. Plan. Matrix completion with noise.
Proceedings of the IEEE, 98(6):925–936, 2010.
[4] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Sparse and low-rank matrix decompositions. In Allerton Conference on Communication, Control, and Computing, 2009.
[5] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi. Robust matrix completion and corrupted columns. In International Conference on Machine Learning, 2011.
[6] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30:844–881, 2008.
[7] A. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. In Foundations of Computer Science, 1998.
[8] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of pseudoskeleton approximations. Linear Algebra and its Applications, 261(1-3):1–21, 1997.
[9] D. Gross and V. Nesme. Note on sampling without replacing from a finite collection of matrices. CoRR, abs/1001.2738, 2010.
[10] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[11] D. Hsu, S. M. Kakade, and T. Zhang. Dimension-free tail inequalities for sums of random matrices. arXiv:1104.1672v3 [math.PR], 2011.
[12] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 99:2057–2078, 2010.
[13] S. Kumar, M. Mohri, and A. Talwalkar. On sampling-based approximate spectral decomposition. In International Conference on Machine Learning, 2009.
[14] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström method. In NIPS, 2009.
[15] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. UIUC Technical Report UILU-ENG-09-2214, 2009.
[16] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization.
Mathematical Programming, 128(1-2):321–353, 2011.
[17] K. Min, Z. Zhang, J. Wright, and Y. Ma. Decomposing background topics from keywords by principal component pursuit. In Conference on Information and Knowledge Management, 2010.
[18] M. Mohri and A. Talwalkar. Can matrix coherence be efficiently and accurately estimated? In Conference on Artificial Intelligence and Statistics, 2011.
[19] Y. Mu, J. Dong, X. Yuan, and S. Yan. Accelerated low-rank visual recovery by random projection. In Conference on Computer Vision and Pattern Recognition, 2011.
[20] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. arXiv:1009.2118v2 [cs.IT], 2010.
[21] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images. In Conference on Computer Vision and Pattern Recognition, 2010.
[22] B. Recht. A simpler approach to matrix completion. arXiv:0910.0651v2 [cs.IT], 2009.
[23] K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pacific Journal of Optimization, 6(3):615–640, 2010.
[24] C. K. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, 2000.
[25] Z. Zhou, X. Li, J. Wright, E. J. Candès, and Y. Ma. Stable principal component pursuit. arXiv:1001.2363v1 [cs.IT], 2010.
Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

A. Emin Orhan, Robert A. Jacobs
Department of Brain & Cognitive Sciences, University of Rochester, Rochester, NY 14627
{eorhan,robbie}@bcs.rochester.edu

Abstract

Extensive evidence suggests that items are not encoded independently in visual short-term memory (VSTM). However, previous research has not quantitatively considered how the encoding of an item influences the encoding of other items. Here, we model the dependencies among VSTM representations using a multivariate Gaussian distribution with a stimulus-dependent mean and covariance matrix. We report the results of an experiment designed to determine the specific form of the stimulus-dependence of the mean and the covariance matrix. We find that the magnitude of the covariance between the representations of two items is a monotonically decreasing function of the difference between the items' feature values, similar to a Gaussian process with a distance-dependent, stationary kernel function. We further show that this type of covariance function can be explained as a natural consequence of encoding multiple stimuli in a population of neurons with correlated responses.

1 Introduction

In each trial of a standard visual short-term memory (VSTM) experiment (e.g. [1,2]), subjects are first presented with a display containing multiple items with simple features (e.g. colored squares) for a brief duration and then, after a delay interval, their memory for the feature value of one of the items is probed using either a recognition or a recall task. Let s = [s1, s2, ..., sN]^T denote the feature values of the N items in the display on a given trial. In this paper, our goal is to provide a quantitative description of the content of a subject's visual memory for the display after the delay interval. That is, we want to characterize a subject's belief state about s.
We suggest that a subject's belief state can be expressed as a random variable ŝ = [ŝ1, ŝ2, ..., ŝN]^T that depends on the actual stimuli s: ŝ = ŝ(s). Consequently, we seek a suitable joint probability model p(ŝ) that can adequately capture the content of a subject's memory of the display. We note that most research on VSTM is concerned with characterizing how subjects encode a single item in VSTM (for instance, the precision with which a single item can be encoded [1,2]) and, thus, does not consider the joint encoding of multiple items. In particular, we are not aware of any previous work attempting to experimentally probe and characterize exactly how the encoding of an item influences the encoding of other items, i.e. the joint probability distribution p(ŝ1, ŝ2, ..., ŝN). A simple (perhaps simplistic) suggestion is to assume that the encoding of an item does not influence the encoding of other items, i.e. the feature values of different items are represented independently in VSTM. If so, the joint probability distribution factorizes as p(ŝ1, ŝ2, ..., ŝN) = p(ŝ1)p(ŝ2)···p(ŝN). However, there is now extensive evidence against this simple model [3,4,5,6].

2 A Gaussian process model

We consider an alternative model for p(ŝ1, ŝ2, ..., ŝN) that allows for dependencies among representations of different items in VSTM. We model p(ŝ1, ŝ2, ..., ŝN) as an N-dimensional multivariate Gaussian distribution with mean m(s) and full covariance matrix Σ(s), both of which depend on the actual stimuli s appearing in a display. This model assumes that only pairwise (or second-order) correlations exist between the representations of different items. Although more complex models incorporating higher-order dependencies between the representations of items in VSTM can be considered, it would be difficult to experimentally determine the parameters of these models.
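To make the model class concrete, the sketch below builds a stimulus-dependent covariance matrix Σ(s) from a stationary, squared-exponential kernel and draws noisy "memory" samples ŝ ~ N(m(s), Σ(s)). This is only an illustration, not the paper's fitted model: the kernel form, every parameter value, and the choice m(s) = s (unbiased memories) are assumptions made here for the example (numpy assumed).

```python
import numpy as np

rng = np.random.default_rng(0)

def vstm_cov(s, sigma2=4.0, length=5.0, noise=1.0):
    """Stationary, distance-dependent covariance Sigma(s):
    Sigma_ij = sigma2 * exp(-(s_i - s_j)**2 / (2 * length**2)), plus noise on
    the diagonal. Kernel form and parameters are illustrative assumptions,
    not quantities fitted to data."""
    s = np.asarray(s, dtype=float)
    d = s[:, None] - s[None, :]
    return sigma2 * np.exp(-d**2 / (2.0 * length**2)) + noise * np.eye(len(s))

def corr01(Sigma):
    # correlation between the memory representations of items 1 and 2
    return Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])

close = np.array([1.0, 2.0])   # similar horizontal positions (degrees)
far = np.array([-8.0, 8.0])    # dissimilar horizontal positions

corr_close = corr01(vstm_cov(close))
corr_far = corr01(vstm_cov(far))

# Simulated belief states hat_s ~ N(m(s), Sigma(s)), assuming m(s) = s.
hat_s = rng.multivariate_normal(mean=close, cov=vstm_cov(close), size=1000)

print(round(corr_close, 3), round(corr_far, 3))  # nearby items far more correlated
```

With these made-up parameters, items 1° apart yield a memory correlation near 0.78 while items 16° apart are nearly uncorrelated, mirroring the qualitative distance-dependence the model is meant to capture.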
Below we show how the parameters of the multivariate Gaussian model, m(s) and Σ(s), can be experimentally determined from standard VSTM tasks with minor modifications. Importantly, we emphasize the dependence of m(s) and Σ(s) on the actual stimuli s. This is to allow for the possibility that subjects might encode stimuli with different similarity relations differently. For instance (and to foreshadow our experimental results), if the items in a display have similar feature values, one might reasonably expect there to be large dependencies among the representations of these items. Conversely, the correlations among the representations of items might be smaller if the items in a display are dissimilar. These two cases would imply different covariance matrices Σ, hence the dependence of Σ (and m) on s. Determining the properties of the covariance matrix Σ(s) is, in a sense, similar to finding an appropriate kernel for a given dataset in the Gaussian process framework [7]. In Gaussian processes, one expresses the covariance matrix in the form Σij = k(si, sj) using a parametrized kernel function k. Then one can ask various questions about the kernel function: What kind of kernel function explains the given dataset best, a stationary kernel function that only depends on |si − sj| or a more general, non-stationary kernel? What parameter values of the chosen kernel (e.g. the scale length parameter for a squared exponential type kernel) explain the dataset best? We ask similar questions about our stimulus-dependent covariance matrix Σ(s): Does the covariance between VSTM representations of two stimuli depend only on the absolute difference between their feature values, |si − sj|, or is the relationship non-stationary and more complex? If the covariance function is stationary, what is its scale length (how quickly does the covariance dissipate with distance)? In Section 3, we address these questions experimentally.

Why does providing an appropriate context improve memory?
Modeling subjects’ VSTM representations of multiple items as a joint probability distribution allows us to explain an intriguing finding by Jiang, Olson and Chun [3] in an elegant way. We first describe the finding, and then show how to explain this result within our framework. Jiang et al. [3] showed that relations between items in a display, as well as items’ individual characteristics, are encoded in VSTM. In their Experiment 1, they briefly presented displays consisting of colored squares to subjects. There were two test or probe conditions. In the single probe condition, only one of the squares (called the target probe) reappeared, either with the same color as in the original display, or with a different color. In the minimal color change condition, the target probe (again with the same color or with a different color) reappeared together with distracter probes which always had the same colors as in the original display. In both conditions, subjects decided whether a color change occurred in the target probe. Jiang et al. [3] found that subjects’ performances were significantly better in the minimal color change condition than in the single probe condition. This result suggests that the color for the target square was not encoded independently of the colors of the distracter squares because if the target color was encoded independently then subjects would have shown identical performances regardless of whether distractor squares were present (minimal color change condition) or absent (single probe condition). In Experiment 2 of [3], a similar result was obtained for location memory: location memory for a target was better in the minimal change condition than in the single probe condition or in a maximal change condition where all distracters were presented but at different locations than their original locations. These results are easy to understand in terms of our joint probability model for item memories, p(ˆs). 
Intuitively, the single probe condition taps the marginal probability of the memory for the target item, p(ŝt), where t represents the index of the target item. In contrast, the minimal color change condition taps the conditional probability of the memory for the target given the memories for the distracters, p(ŝt|ŝ−t = s−t), where −t represents the indices of the distracter items, because the actual distracters s−t are shown during test.

Figure 1: The sequence of events on a single trial of the experiment with N = 2. [Panel timings: 100 ms; 1000 ms; 1000 ms (delay); until response.]

If the target probe has high probability under these distributions, then the subject will be more likely to respond 'no-change', whereas if it has low probability, then the subject will be more likely to respond 'change'. If the items are represented independently in VSTM, the marginal and conditional distributions are the same; i.e. p(ŝt) = p(ŝt|ŝ−t). Hence, the independent-representation assumption predicts that there should be no difference in subjects' performances in the single probe and minimal color change conditions. The significant differences in subjects' performances between these conditions observed in [3] provides evidence against the independence assumption. It is also easy to understand why subjects performed better in the minimal color change condition than in the single probe condition. The conditional distribution p(ŝt|ŝ−t) is, in general, a lower-variance distribution than the marginal distribution p(ŝt). Although this is not exclusively true for the Gaussian distribution, it can analytically be proven in the Gaussian case.
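This variance-reduction claim is also easy to check numerically. The sketch below draws an arbitrary (but valid) joint Gaussian over three item memories and verifies that the Gaussian conditional variance of the target never exceeds its marginal variance; the 3-item setup and random covariance are illustrative choices, not experimental quantities (numpy assumed).

```python
import numpy as np

rng = np.random.default_rng(1)

# A random positive-definite covariance over the memories of N = 3 items;
# item 0 plays the role of the target t, items 1-2 are distracters.
G = rng.normal(size=(3, 3))
Sigma = G @ G.T + 0.1 * np.eye(3)   # positive definite by construction

A = Sigma[0, 0]        # marginal variance of the target memory
C = Sigma[0:1, 1:]     # covariance between target and distracter memories
B = Sigma[1:, 1:]      # covariance among distracter memories

# Var(s_t | s_-t) = A - C B^{-1} C^T (the Schur complement)
cond_var = A - (C @ np.linalg.solve(B, C.T))[0, 0]
print(cond_var <= A)   # prints True: conditioning never increases the variance
```

Since B is positive definite, C B^{-1} C^T is non-negative, so the conditional variance is at most the marginal variance regardless of the random draw.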
If p(ŝ) is modeled as an N-dimensional multivariate Gaussian distribution:

ŝ = [ŝt, ŝ−t]^T ∼ N([a, b]^T, [A, C; C^T, B])   (1)

(where the covariance matrix is written using Matlab notation), then the conditional distribution p(ŝt|ŝ−t) has mean a + CB^{-1}(ŝ−t − b) and variance A − CB^{-1}C^T, whereas the marginal distribution p(ŝt) has mean a and variance A, which is always greater than A − CB^{-1}C^T. [As an aside, note that when the distracter probes are different from the mean of the memories for distracters, i.e. ŝ−t ≠ b, the conditional distribution p(ŝt|ŝ−t) is biased away from a, explaining the poorer performance in the maximal change condition than in the single probe condition.]

3 Experiments

We conducted two VSTM recall experiments to determine the properties of m(s) and Σ(s). The experiments used position along a horizontal line as the relevant feature to be remembered. Procedure: Each trial began with the display of a fixation cross at a random location within an approximately 12° × 16° region of the screen for 1 second. Subjects were then presented with a number of colored squares (N = 2 or N = 3 squares in separate experiments) on linearly spaced dark and thin horizontal lines for 100 ms. After a delay interval of 1 second, a probe screen was presented. Initially, the probe screen contained only the horizontal lines. Subjects were asked to use the computer mouse to indicate their estimate of the horizontal location of each of the colored squares presented on that trial. We note that this is a novelty of our experimental task, since in most other VSTM tasks, only one of the items is probed and the subject is asked to report the content of their memory associated with the probed item. Requiring subjects to indicate the feature values of all presented items allows us to study the dependencies between the memories for different items. Subjects were allowed to adjust their estimates as many times as they wished.
When they were satisfied with their estimates, they proceeded to the next trial by pressing the space bar. Figure 1 shows the sequence of events on a single trial of the experiment with N = 2. To study the dependence of m(s) and Σ(s) on the horizontal locations of the squares s = [s1, s2, ..., sN]^T, we used different values of s on different trials. We call each different s a particular 'display configuration'. To cover a range of possible display configurations, we selected uniformly-spaced points along the horizontal dimension, considered all possible combinations of these points (e.g. item 1 is at horizontal location s1 and item 2 is at location s2), and then added a small amount of jitter to each combination. In the experiment with two items, 6 points were selected along the horizontal dimension, and thus there were 36 (6×6) different display configurations. In the experiment with three items, 3 points were selected along the horizontal dimension, meaning that 27 (3×3×3) configurations were used.

Figure 2: (a) Results for subject RD. The actual display configurations s are represented by magenta dots, the estimated means based on the subject's responses are represented by black dots and the estimated covariances are represented by contours (with red contours representing Σ(s) for which the two dimensions were significantly correlated at the p < 0.05 level). (b) Results for all 4 subjects. The graph plots the mean correlation coefficients (and standard errors of the means) as a function of |s1 − s2|. Each color corresponds to a different subject.

Furthermore, since m(s) and Σ(s) cannot be reliably estimated from a single trial, we presented the same configuration s a number of times and collected the subject's response each time.
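The per-configuration estimation this enables amounts to computing a sample mean, sample covariance, and item-item correlation over the repeated responses. A minimal sketch follows (numpy assumed; the generating mean and covariance are invented stand-ins for one subject's responses to a single two-item configuration, not values from the experiment):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for 24 responses to one two-item display configuration s.
gen_mean = np.array([3.0, -2.0])
gen_cov = np.array([[4.0, 2.5],
                    [2.5, 4.0]])
responses = rng.multivariate_normal(gen_mean, gen_cov, size=24)  # trials x N

m_hat = responses.mean(axis=0)                      # estimate of m(s)
Sigma_hat = np.cov(responses, rowvar=False)         # estimate of Sigma(s)
r_hat = np.corrcoef(responses, rowvar=False)[0, 1]  # item 1 / item 2 correlation

print(m_hat.shape, Sigma_hat.shape)  # (2,) (2, 2)
```

Repeating this for every configuration s yields the stimulus-dependent estimates of m(s) and Σ(s) analyzed in the Results.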
We then estimated m(s) and Σ(s) for a particular configuration s by fitting an N-dimensional Gaussian distribution to the subject’s responses for the corresponding s. We thus assume that when a particular configuration s is presented in different trials, the subject forms and makes use of (i.e. samples from) roughly the same VSTM representation p(ˆs) = N(m(s), Σ(s)) in reporting the contents of their memory. In the experiment with N = 2, each of the 36 configurations was presented 24 times (yielding a total of 864 trials) and in the experiment with N = 3, each of the 27 configurations was presented 26 times (yielding a total of 702 trials), randomly interleaved. Subjects participating in the same experiment (either two or three items) saw the same set of display configurations. Participants: 8 naive subjects participated in the experiments (4 in each experiment). All subjects had normal or corrected-to-normal vision, and they were compensated at a rate of $10 per hour. For both set sizes, subjects completed the experiment in two sessions. Results: We first present the results for the experiment with N = 2. Figure 2a shows the results for a representative subject (subject RD). In this graph, the actual display configurations s are represented by magenta dots, the estimated means m(s) based on the subject’s responses are represented by black dots and the estimated covariances Σ(s) are represented by contours (red contours represent Σ(s) for which the two dimensions were significantly (p < 0.05) correlated). For this particular subject, p(ˆs1, ˆs2) exhibited a significant correlation for 12 of 36 configurations. In all these cases, correlations were positive, meaning that when the subject made an error in a given direction for one of the items, s/he was likely to make an error in the same direction for the other item. This tendency was strongest when items were at similar horizontal positions [e.g. 
distributions are more likely to exhibit significant correlations for display configurations close to the main diagonal (s1 = s2)]. Figure 2b shows results for all 4 subjects. This graph plots the correlation coefficients for subjects' position estimates as a function of the absolute differences in items' positions (|s1 − s2|). In this graph, configurations were divided into 6 equal-length bins according to their |s1 − s2| values, and the correlations shown are the mean correlation coefficients (and standard errors of the means) for each bin. Clearly, the correlations decrease with increasing |s1 − s2|. Correlations differed significantly across different bins (one-way ANOVA: p < .05 for all but one subject, as well as for combined data from all subjects). One might consider this graph as representing a stationary kernel function that specifies how the covariance between the memory representations of two items changes as a function of the distance |s1 − s2| between their feature values. However, as can be observed from Figure 2a, the experimental kernel function that characterizes the dependencies between the VSTM representations of different items is not perfectly stationary.

Figure 3: Subjects' mean correlation coefficients (and standard errors of the means) as a function of the distance d(i, j) between items i and j. d(i, j) is measured either (a) in one dimension (considering only horizontal locations) or (b) in two dimensions (considering both horizontal and vertical locations). Each color corresponds to a different subject.

Additional analyses (not detailed here) indicate that subjects had a bias toward the center of the display. In other words, when an item appeared on the left side of a display, subjects were likely to estimate its location as being to the right of its actual location.
Conversely, items appearing on the right side of a display were estimated as lying to the left of their actual locations. (This tendency can be observed in Figure 2a by noting that the black dots in this figure are often closer to the main diagonal than the magenta dots.) This bias is consistent with similar 'regression-to-the-mean' type biases previously reported in visual short-term memory for spatial frequency [5,8] and size [6]. Results for the experiment with three items were qualitatively similar. Figure 3 shows that, similar to the results observed in the experiment with two items, the magnitude of the correlations between subjects' position estimates decreases with Euclidean distance between items. In this figure, all si-sj pairs (recall that si is the horizontal location of item i) for all display configurations were divided into 3 equal-length bins based on the Euclidean distance d(i, j) between items i and j, where we measured distance either in one dimension (considering only the horizontal locations of the items, Figure 3a) or in two dimensions (considering both horizontal and vertical locations, Figure 3b). Correlations differed significantly across different bins as indicated by one-way ANOVA (for both distance measures: p < .01 for all subjects, as well as for combined data from all subjects). Overall, subjects exhibited a smaller number of significant s1-s3 correlations than s1-s2 or s2-s3 correlations. This is probably due to the fact that the s1-s3 pair had a larger vertical distance than the other pairs.

4 Explaining the covariances with correlated neural population responses

What could be the source of the specific form of covariances observed in our experiments? In this section we argue that dependencies of the form we observed in our experiments would naturally arise as a consequence of encoding multiple items in a population of neurons with correlated responses.
To show this, we first consider encoding multiple stimuli with an idealized, correlated neural population and analytically derive an expression for the Fisher information matrix (FIM) in this model. This analytical expression for the FIM, in turn, predicts covariances of the type we observed in our experiments. We then simulate a more detailed and realistic network of spiking neurons and consider encoding and decoding the features of multiple items in this network. We show that this more realistic network also predicts covariances of the type we observed in our experiments. We emphasize that these predictions will be derived entirely from general properties of encoding and decoding information in correlated neural populations and as such do not depend on any specific assumptions about the properties of VSTM or how these properties might be implemented in neural populations.

Encoding multiple stimuli in a neural population with correlated responses

We first consider the problem of encoding N stimuli (s = [s1, ..., sN]) in a correlated population of K neurons with Gaussian noise:

p(r|s) = (1/√((2π)^K det Q(s))) exp[−(1/2)(r − f(s))^T Q^{-1}(s)(r − f(s))]   (2)
There is extensive experimental evidence for this type of correlation structure in the brain [16]-[19]. For instance, Zohary et al. [16] showed that correlations between motion direction selective MT neurons decrease with the difference in their preferred directions. This ‘limited-range’ assumption about the covariances between the firing rates of neurons will be crucial in explaining our experimental results in terms of the FIM of a correlated neural population encoding multiple stimuli. We are interested in deriving the FIM, J(s), for our correlated neural population encoding the stimuli s. The significance of the FIM is that the inverse of the FIM provides a lower bound on the covariance matrix of any unbiased estimator of s and also expresses the asymptotic covariance matrix of the maximum-likelihood estimate of s in the limit of large K1. The ij-th cell of the FIM is defined as: Jij(s) = −E[ ∂2 ∂si∂sj log p(r|s)] (4) Our derivation of J(s) closely follows that of Wilke and Eurich in [11]. To derive an analytical expression for J(s), we make a number of assumptions: (i) all neurons encode the same feature dimension (e.g. horizontal location in our experiment); (ii) indices of the neurons can be assigned such that neurons with adjacent indices have the closest tuning function centers; (iii) the centers of the tuning functions of neurons are linearly spaced with density η. The last two assumptions imply that the covariance between neurons with indices k and l can be expressed as Qkl = ρ|k−l|af α k f α l (we omitted the s-dependence of Q and f for brevity) with ρ = exp(−1/(Lη)) where L is a length parameter determining the spatial extent of the correlations. With these assumptions, it can be shown that (see Supplementary Material): Jij(s) = 1 + ρ2 a(1 −ρ2) K X k=1 h(i) k h(j) k − 2ρ a(1 −ρ2) K−1 X k=1 h(i) k h(j) k+1 + 2α2 1 −ρ2 K X k=1 g(i) k g(j) k −2α2ρ2 1 −ρ2 K−1 X k=1 g(i) k g(j) k+1 (5) where h(i) k = 1 f α k ∂fk ∂si and g(i) k = 1 fk ∂fk ∂si . 
Although not necessary for our results (see Supplementary Material), for convenience, we further assume that the neurons can be divided into N groups where in each group the tuning functions are a function of the feature value of only one of the stimuli, i.e. fk(s) = fk(sn) for neurons in group n, so that the effects of other stimuli on the mean firing rates of neurons in group n are negligible. A population of neurons satisfying this assumption, as well as the assumptions (i)-(iii) above, for N = 2 is schematically illustrated in Figure 4a. We consider Gaussian tuning functions of the form: fk(s) = g exp(−(s −ck)2/σ2), with ck linearly spaced between −12◦and 12◦and g and σ2 are assumed to be the same for all neurons. We take the inverse of J(s), which provides a lower bound on the covariance matrix of any unbiased estimator of s, and calculate correlation coefficients based on J−1(s) for each s. For N = 2, for instance, we do this by calculating J−1 12 (s)/ p J−1 11 (s)J−1 22 (s). In Figure 4b, we plot this measure for all s1, s2 pairs between −10◦and 10◦. We see that the inverse of the FIM predicts correlations between the estimates of s1 and s2 and these correlations decrease with |s1 −s2|, just as we observed in our experiments (see Figure 4c). The best fits to experimental data were obtained with fairly broad tuning functions (see Figure 4 caption). For such broad tuning functions, the inverse of the FIM also predicts negative correlations when |s1 −s2| is very large, which does not seem to be as strong in our data. Intuitively, this result can be understood as follows. Consider the hypothetical neural population shown in Figure 4a encoding the pair s1, s2. In this population, it is assumed that fk(s) = fk(s1) 1J−1(s) provides a lower bound on the covariance matrix of any unbiased estimator of s in the matrix sense (where A ≥B means A −B is positive semi-definite). 6 ... 
for neurons in the upper row, and fk(s) = fk(s2) for neurons in the lower row.

Figure 4: (a) A population of neurons satisfying all assumptions made in deriving the FIM. For neurons in the upper row fk(s) = fk(s1), and for neurons in the lower row fk(s) = fk(s2). The magnitude of correlations between two neurons is indicated by the thickness of the line connecting them. (b) Correlation coefficients estimated from the inverse of the FIM for all stimuli pairs s1, s2. (c) Mean correlation coefficients as a function of |s1 − s2| (red: model's prediction; black: collapsed data from all 4 subjects in the experiment with N = 2). Parameters: α = 0.5, g = 50, a = 1 (these were set to biologically plausible values); other parameters: K = 500, σ = 9.0, L = 0.0325 (the last two were chosen to provide a good fit to the experimental results).

Suppose that in the upper row, the k-th neuron has the best-matching tuning function for a given s1. Therefore, on average, the k-th neuron has the highest firing rate in response to s1. However, since the responses of the neurons are stochastic, on some trials, neurons to the left (right) of the k-th neuron will have the highest firing rate in response to s1. When this happens, neurons in the lower row with similar preferences will be more likely to get activated, due to the limited-range correlations between the neurons. This, in turn, will introduce correlations in an estimator of s based on r that are strongest when the absolute difference between s1 and s2 is small.

Encoding and decoding multiple stimuli in a network of spiking neurons

There might be two concerns about the analytical argument given in the previous subsection. The first is that we needed to make many assumptions in order to derive an analytic expression for J(s).
It is not clear if we would get similar results when one or more of these assumptions are violated. Secondly, the interpretation of the off-diagonal terms (covariances) in J^{-1}(s) is somewhat different from the interpretation of the diagonal terms (variances). Although the diagonal terms provide lower bounds on the variances of any unbiased estimator of s, the off-diagonal terms do not necessarily provide lower bounds on the covariances of the estimates, that is, there might be estimators with lower covariances. To address these concerns, we simulated a more detailed and realistic network of spiking neurons. The network consisted of two layers. In the input layer, there were 169 Poisson neurons arranged in a 13 × 13 grid with linearly spaced receptive field centers between −12° and 12° along both horizontal and vertical directions. On a given trial, the firing rate of the k-th input neuron was determined by the following equation:

rk = gin [exp(−‖x1 − c(k)‖ / σin) + exp(−‖x2 − c(k)‖ / σin)]   (6)

for the case of N = 2. Here ‖·‖ is the Euclidean norm, xi is the vertical and horizontal locations of the i-th stimulus, c(k) is the receptive field center of the input neuron, gin is a gain parameter and σin is a scale parameter (both assumed to be the same for all input neurons). The output layer consisted of simple leaky integrate-and-fire neurons. There were 169 of these neurons arranged in a 13 × 13 grid with the receptive field center of each neuron matching the receptive field center of the corresponding neuron in the input layer. We induced limited-range correlations between the output neurons through receptive field overlap, although other ways of introducing limited-range correlations can be considered such as through local lateral connections.
Each output neuron had a Gaussian connection weight profile centered at the corresponding input neuron and with a standard deviation of σ_out. The output neurons had a threshold of −55 mV and a reset potential of −70 mV. Each spike of an input neuron k instantaneously increased the voltage of an output neuron l by 10 w_kl mV, where w_kl is the connection weight between the two neurons, and the voltage decayed with a time constant of 10 ms. We implemented the network in Python using the Brian neural network simulator [20]. We simulated this network with the same display configurations presented to our subjects in the experiment with N = 2. Each of the 36 configurations was presented 96 times to the network, yielding a total of 3456 trials. For each trial, the network was simulated for 100 ms and its estimates of s1 and s2 were read out using a suboptimal decoding strategy.

Figure 5: (a) Results for the network model. The actual display configurations s are represented by magenta dots, the estimated means based on the model’s responses are represented by black dots and the estimated covariances are represented by contours (with red contours representing Σ(s) for which the two dimensions were significantly correlated at the p < 0.05 level). (b) The mean correlation coefficients (and standard errors of the means) as a function of |s1 − s2| (red: model prediction; black: collapsed data from all 4 subjects in the experiment with N = 2). Model parameters: g_in = 120, σ_in = 2, σ_out = 2. Parameters were chosen to provide a good fit to the experimental results.
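The integrate-and-fire dynamics described above (threshold −55 mV, reset −70 mV, 10 ms membrane time constant, instantaneous 10·w_kl mV jumps per input spike) can be sketched for a single output neuron without the Brian simulator; the summed Poisson input rate and the weight value used here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single leaky integrate-and-fire output neuron (threshold, reset and time
# constant from the text; input statistics and weight are assumptions).
tau, v_reset, v_thresh = 10.0, -70.0, -55.0   # ms, mV, mV
dt, T = 0.1, 100.0                            # time step and trial length (ms)
w = 0.5                                       # illustrative connection weight
rate_hz = 500.0                               # summed Poisson input rate (assumed)

v = v_reset
spike_times = []
for step in range(int(T / dt)):
    v += dt * (v_reset - v) / tau             # leak toward resting potential
    if rng.random() < rate_hz * dt / 1000.0:  # Poisson input spike in this bin
        v += 10.0 * w                         # instantaneous 10*w_kl mV jump
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset                           # reset after an output spike
```

With this suprathreshold drive the neuron emits several spikes per 100 ms trial; in the full model 169 such units, driven through overlapping Gaussian weight profiles, produce the correlated output activity.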
Specifically, to get an estimate of s1, we considered only the row of neurons in the output layer whose preferred vertical locations were closest to the vertical location of the first stimulus and then we fit a Gaussian function (with amplitude, peak location and width parameters) to the activity profile of this row of neurons and considered the estimated peak location as the model’s estimate of s1. We did the same for obtaining an estimate of s2. Figure 5 shows the results for the network model. Similar to our experimental results, the spiking network model predicts correlations between the estimates of s1 and s2, and these correlations decrease with |s1 − s2| (correlations differed significantly across different bins as indicated by a one-way ANOVA: F(5, 30) = 22.9713, p < 10^{-8}; see Figure 5b). Interestingly, the model was also able to replicate the biases toward the center of the screen observed in the experimental data. This is due to the fact that output neurons near the center of the display tended to have higher activity levels, since they have more connections with the input neurons compared to the output neurons near the edges of the display.

5 Discussion

Properties of correlations among the responses of neural populations have been studied extensively from both theoretical and experimental perspectives. However, the implications of these correlations for jointly encoding multiple items in memory are not known. Our results here suggest that one consequence of limited-range neural correlations might be correlations in the estimates of the feature values of different items that decrease with the absolute difference between their feature values. An interesting question is whether our results generalize to other feature dimensions, such as orientation, spatial frequency etc. Preliminary data from our lab suggest that covariances of the type reported here for spatial location might also be observed in VSTM for orientation.

Acknowledgments: We thank R.
Moreno-Bote for helpful discussions. This work was supported by a research grant from the National Science Foundation (DRL-0817250).

References

[1] Bays, P.M. & Husain, M. (2008) Dynamic shifts of limited working memory resources in human vision. Science 321:851-854.
[2] Zhang, W. & Luck, S.J. (2008) Discrete fixed-resolution representations in visual working memory. Nature 453:233-235.
[3] Jiang, Y., Olson, I.R. & Chun, M.M. (2000) Organization of visual short-term memory. Journal of Experimental Psychology: Learning, Memory and Cognition 26(3):683-702.
[4] Kahana, M.J. & Sekuler, R. (2002) Recognizing spatial patterns: a noisy exemplar approach. Vision Research 42:2177-2192.
[5] Huang, J. & Sekuler, R. (2010) Distortions in recall from visual memory: Two classes of attractors at work. Journal of Vision 10:1-27.
[6] Brady, T.F. & Alvarez, G.A. (in press) Hierarchical encoding in visual working memory: ensemble statistics bias memory for individual items. Psychological Science.
[7] Rasmussen, C.E. & Williams, C.K.I. (2006) Gaussian Processes for Machine Learning. MIT Press.
[8] Ma, W.J. & Wilken, P. (2004) A detection theory account of change detection. Journal of Vision 4:1120-1135.
[9] Abbott, L.F. & Dayan, P. (1999) The effect of correlated variability on the accuracy of a population code. Neural Computation 11:91-101.
[10] Shamir, M. & Sompolinsky, H. (2004) Nonlinear population codes. Neural Computation 16:1105-1136.
[11] Wilke, S.D. & Eurich, C.W. (2001) Representational accuracy of stochastic neural populations. Neural Computation 14:155-189.
[12] Berens, P., Ecker, A.S., Gerwinn, S., Tolias, A.S. & Bethge, M. (2011) Reassessing optimal neural population codes with neurometric functions. PNAS 108(11):4423-4428.
[13] Snippe, H.P. & Koenderink, J.J. (1992) Information in channel-coded systems: correlated receivers. Biological Cybernetics 67:183-190.
[14] Sompolinsky, H., Yoon, H., Kang, K. & Shamir, M.
(2001) Population coding in neural systems with correlated noise. Physical Review E 64:051904.
[15] Josić, K., Shea-Brown, E., Doiron, B. & de la Rocha, J. (2009) Stimulus-dependent correlations and population codes. Neural Computation 21:2774-2804.
[16] Zohary, E., Shadlen, M.N. & Newsome, W.T. (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370:140-143.
[17] Bair, W., Zohary, E. & Newsome, W.T. (2001) Correlated firing in macaque area MT: Time scales and relationship to behavior. The Journal of Neuroscience 21(5):1676-1697.
[18] Maynard, E.M., Hatsopoulos, N.G., Ojakangas, C.L., Acuna, B.D., Sanes, J.N., Norman, R.A. & Donoghue, J.P. (1999) Neuronal interactions improve cortical population coding of movement direction. The Journal of Neuroscience 19(18):8083-8093.
[19] Smith, M.A. & Kohn, A. (2008) Spatial and temporal scales of neuronal correlation in primary visual cortex. The Journal of Neuroscience 28(48):12591-12603.
[20] Goodman, D. & Brette, R. (2008) Brian: a simulator for spiking neural networks in Python. Frontiers in Neuroinformatics 2:5. doi: 10.3389/neuro.11.005.2008.
Non-conjugate Variational Message Passing for Multinomial and Binary Regression

David A. Knowles, Department of Engineering, University of Cambridge
Thomas P. Minka, Microsoft Research, Cambridge, UK

Abstract

Variational Message Passing (VMP) is an algorithmic implementation of the Variational Bayes (VB) method which applies only in the special case of conjugate exponential family models. We propose an extension to VMP, which we refer to as Non-conjugate Variational Message Passing (NCVMP), which aims to alleviate this restriction while maintaining modularity, allowing choice in how expectations are calculated, and integrating into an existing message-passing framework: Infer.NET. We demonstrate NCVMP on logistic binary and multinomial regression. In the multinomial case we introduce a novel variational bound for the softmax factor which is tighter than other commonly used bounds whilst maintaining computational tractability.

1 Introduction

Variational Message Passing [20] is a message passing implementation of the mean-field approximation [1, 2], also known as variational Bayes (VB). Although Expectation Propagation [12] can have more desirable properties as a result of the particular Kullback-Leibler divergence that is minimised, VMP is more stable than EP under certain circumstances, such as multi-modality in the posterior distribution. Unfortunately, VMP is effectively limited to conjugate-exponential models, since otherwise the messages become exponentially more complex at each iteration. In conjugate exponential models this is avoided due to the closure of exponential family distributions under multiplication. There are many non-conjugate problems which arise in Bayesian statistics, for example logistic regression or learning the hyperparameters of a Dirichlet. Previous work extending Variational Bayes to non-conjugate models has focused on two aspects. The first is how to fit the variational parameters when the VB free form updates are not viable.
Various authors have used standard numerical optimization techniques [15, 17, 3], or adapted such methods to be more suitable for this problem [7, 8]. A disadvantage of this approach is that the convenient and efficient message-passing formulation is lost. The second line of work applying VB to non-conjugate models involves deriving lower bounds to approximate the expectations [9, 18, 5, 10, 11] required to calculate the KL divergence. We contribute to this line of work by proposing and evaluating a new bound for the useful softmax factor, which is tighter than other commonly used bounds whilst maintaining computational tractability. We also demonstrate, in agreement with [19] and [14], that for univariate expectations such as those required for logistic regression, carefully designed quadrature methods can be effective. Existing methods typically represent a compromise on modularity or performance. To maintain modularity one is effectively constrained to use exponential family bounds (e.g. quadratic in the Gaussian case [9, 5]), which we will show often gives sub-optimal performance. Methods which use more general bounds, e.g. [3], must then resort to numerical optimisation, and sacrifice modularity. This is a particular disadvantage for an inference framework such as Infer.NET [13], where we want to allow modular construction of inference algorithms from arbitrary deterministic and stochastic factors. We propose a novel message passing algorithm, which we call Non-conjugate Variational Message Passing (NCVMP), which generalises VMP and gives a recipe for calculating messages out of any factor. NCVMP gives much greater freedom in how expectations are taken (using bounds or quadrature) so that performance can be maintained along with modularity. The outline of the paper is as follows. In Sections 2 and 3 we briefly review VB and VMP. Section 4 is the main contribution of the paper: Non-conjugate VMP.
Section 5 describes the binary logistic and multinomial softmax regression models, and implementation options with and without NCVMP. Results on synthetic and standard UCI datasets are given in Section 6 and some conclusions are drawn in Section 7.

2 Mean-field approximation

Our aim is to approximate some model p(x), represented as a factor graph p(x) = ∏_a f_a(x_a) where factor f_a is a function of all x ∈ x_a. The mean-field approximation assumes a fully-factorised variational posterior q(x) = ∏_i q_i(x_i) where q_i(x_i) is an approximation to the marginal distribution of x_i (note however x_i might be vector valued, e.g. with multivariate normal q_i). The variational approximation q(x) is chosen to minimise the Kullback-Leibler divergence KL(q||p), given by

KL(q||p) = ∫ q(x) log [q(x)/p(x)] dx = −H[q(x)] − ∫ q(x) log p(x) dx,   (1)

where H[q(x)] = −∫ q(x) log q(x) dx is the entropy. It can be shown [1] that if the functions q_i(x_i) are unconstrained then minimising this functional can be achieved by coordinate descent, setting q_i(x_i) ∝ exp⟨log p(x)⟩_{¬q_i(x_i)}, iteratively for each i, where ⟨...⟩_{¬q_i(x_i)} implies marginalisation of all variables except x_i.

3 Variational Message Passing on factor graphs

VMP is an efficient algorithmic implementation of the mean-field approximation which leverages the fact that the mean-field updates only require local operations on the factor graph. The variational distribution q(x) factorises into approximate factors f̃_a(x_a). As a result of the fully factorised approximation, the approximate factors themselves factorise into messages, i.e. f̃_a(x_a) = ∏_{x_i∈x_a} m_{a→i}(x_i), where the message from factor a to variable i is m_{a→i}(x_i) = exp⟨log f_a(x_a)⟩_{¬q_i(x_i)}. The message from variable i to factor a is the current variational posterior of x_i, denoted q_i(x_i), i.e. m_{i→a}(x_i) = q_i(x_i) = ∏_{a∈N(i)} m_{a→i}(x_i), where N(i) are the factors connected to variable i.
For conjugate-exponential models the messages to a particular variable x_i will all be in the same exponential family. Thus calculating q_i(x_i) simply involves summing sufficient statistics. If, however, our model is not conjugate-exponential, there will be a variable x_i which receives incoming messages which are in different exponential families, or which are not even exponential family distributions at all. Thus q_i(x_i) will be some more complex distribution. Computing the required expectations becomes more involved, and worse still the complexity of the messages (e.g. the number of possible modes) grows exponentially per iteration.

4 Non-conjugate Variational Message Passing

In this section we give some criteria under which the algorithm was conceived. We set up required notation and describe the algorithm, and prove some important properties. Finally we give some intuition about what the algorithm is doing. The approach we take aims to fulfill certain criteria:

1. provides a recipe for any factor
2. reduces to standard VMP in the case of conjugate exponential factors
3. allows modular implementation and combining of deterministic and stochastic factors

NCVMP ensures the gradients of the approximate KL divergence implied by the message match the gradients of the true KL. This means that we will have a fixed point at the correct point in parameter space: the algorithm will be at a fixed point if the gradient of the KL is zero. We use the following notation: variable x_i has current variational posterior q_i(x_i; θ_i), where θ_i is the vector of natural parameters of the exponential family distribution q_i. Each factor f_a which is a neighbour of x_i sends a message m_{a→i}(x_i; φ_{a→i}) to x_i, where m_{a→i} is in the same exponential family as q_i, i.e. m_{a→i}(x_i; φ) = exp(φᵀ u(x_i) − κ(φ)) and q_i(x_i; θ) = exp(θᵀ u(x_i) − κ(θ)), where u(·) are sufficient statistics and κ(·) is the log partition function. We define C(θ) as the Hessian of κ(·) evaluated at θ, i.e. C_ij(θ) = ∂²κ(θ)/∂θ_i ∂θ_j.
It is straightforward to show that C(θ) = cov(u(x)|θ), so if the exponential family q_i is identifiable, C will be symmetric positive definite, and therefore invertible. The factor f_a contributes a term S_a(θ_i) = ∫ q_i(x_i; θ_i) ⟨log f_a(x)⟩_{¬q_i(x_i)} dx_i to the KL divergence, where we have only made the dependence on θ_i explicit: this term is also a function of the variational parameters of the other variables neighbouring f_a. With this notation in place we are now able to describe the NCVMP algorithm.

Algorithm 1 Non-conjugate Variational Message Passing
1: Initialise all variables to uniform: θ_i := 0 ∀i
2: while not converged do
3:   for all variables i do
4:     for all neighbouring factors a ∈ N(i) do
5:       φ_{a→i} := C(θ_i)^{-1} ∂S_a(θ_i)/∂θ_i
6:     end for
7:     θ_i := Σ_{a∈N(i)} φ_{a→i}
8:   end for
9: end while

To motivate Algorithm 1 we give a rough proof that we will have a fixed point at the correct point in parameter space: the algorithm will be at a fixed point if the gradient of the KL divergence is zero.

Theorem 1. Algorithm 1 has a fixed point at {θ_i : ∀i} if and only if {θ_i : ∀i} is a stationary point of the KL divergence KL(q||p).

Proof. Firstly define the function

˜S_a(θ; φ) := ∫ q_i(x_i; θ) log m_{a→i}(x_i; φ) dx_i,   (2)

which is an approximation to the function S_a(θ). Since q_i and m_{a→i} belong to the same exponential family we can simplify as follows:

˜S_a(θ; φ) = ∫ q_i(x_i; θ) (φᵀ u(x_i) − κ(φ)) dx_i = φᵀ ⟨u(x_i)⟩_θ − κ(φ) = φᵀ ∂κ(θ)/∂θ − κ(φ),   (3)

where ⟨·⟩_θ implies expectation wrt q_i(x_i; θ) and we have used the standard property of exponential families that ⟨u(x_i)⟩_θ = ∂κ(θ)/∂θ. Taking derivatives wrt θ we have ∂˜S_a(θ; φ)/∂θ = C(θ)φ. Now, the update in Algorithm 1, Line 5 for φ_{a→i} ensures that

C(θ)φ = ∂S_a(θ)/∂θ  ⇔  ∂˜S_a(θ; φ)/∂θ = ∂S_a(θ)/∂θ.   (4)

Thus this update ensures that the gradients wrt θ_i of S and ˜S match. The update in Algorithm 1, Line 7 for θ_i is minimising an approximate local KL divergence for x_i:

θ_i := arg min_θ [ −H[q_i(x_i; θ)] − Σ_{a∈N(i)} ˜S_a(θ; φ_{a→i}) ] = Σ_{a∈N(i)} φ_{a→i},   (5)

where H[·]
is the entropy. If and only if we are at a fixed point of the algorithm, we will have

∂/∂θ_i [ −H[q_i(x_i; θ_i)] − Σ_{a∈N(i)} ˜S_a(θ_i; φ_{a→i}) ] = −∂H[q_i(x_i; θ_i)]/∂θ_i − Σ_{a∈N(i)} ∂˜S_a(θ_i; φ_{a→i})/∂θ_i = 0

for all variables i. By (4), if and only if we are at a fixed point (so that θ_i has not changed since updating φ), we have

−∂H[q_i(x_i; θ_i)]/∂θ_i − Σ_{a∈N(i)} ∂S_a(θ_i)/∂θ_i = ∂KL(q||p)/∂θ_i = 0   (6)

for all variables i.

Theorem 1 showed that if NCVMP converges to a fixed point then it is at a stationary point of the KL divergence KL(q||p). In practice this point will be a minimum, because any maximum would represent an unstable equilibrium. However, unlike VMP, we have no guarantee to decrease KL(q||p) at every step, and indeed we do sometimes encounter convergence problems which require damping to fix: see Section 7. Theorem 1 also gives some intuition about what NCVMP is doing. ˜S_a is a conjugate approximation to the true S_a function, chosen to have the correct gradients at the current θ_i. The update at variable x_i for θ_i combines all these approximations from factors involving x_i to get an approximation to the local KL, and then moves θ_i to the minimum of this approximation. Another important property of Non-conjugate VMP is that it reduces to standard VMP for conjugate factors.

Theorem 2. If ⟨log f_a(x)⟩_{¬q_i(x_i)}, as a function of x_i, can be written μᵀ u(x_i) − c where c is a constant, then the NCVMP message m_{a→i}(x_i; φ_{a→i}) will be the standard VMP message m_{a→i}(x_i; μ).

Proof. To see this note that ⟨log f_a(x)⟩_{¬q_i(x_i)} = μᵀ u(x_i) − c ⇒ S_a(θ) = μᵀ ⟨u(x_i)⟩_θ − c, where μ is the expected natural statistic under the messages from the variables connected to f_a other than x_i. We have S_a(θ) = μᵀ ∂κ(θ)/∂θ − c ⇒ ∂S_a(θ)/∂θ = C(θ)μ, so from Algorithm 1, Line 5 we have φ_{a→i} := C(θ)^{-1} ∂S_a(θ)/∂θ = C(θ)^{-1} C(θ)μ = μ, the standard VMP message.

The update for θ_i in Algorithm 1, Line 7 is the same as for VMP, and Theorem 2 shows that for conjugate factors the messages sent to the variables are the same as for VMP.
Thus NCVMP is a generalisation of VMP. NCVMP can alternatively be derived by assuming the incoming messages to x_i are fixed apart from m_{a→i}(x_i; φ) and calculating a fixed point update for m_{a→i}(x_i; φ). Gradient matching for NCVMP can be seen as analogous to moment matching in EP. Due to space limitations we defer the details to the supplementary material.

4.1 Gaussian variational distribution

Here we describe the NCVMP updates for a Gaussian variational distribution q(x) = N(x; m, v) and approximate factor ˜f(x; m_f, v_f). Although these can be derived from the generic formula using natural parameters, it is mathematically more convenient to use the mean and variance (NCVMP is parameterisation invariant so it is valid to do this):

1/v_f = −2 dS(m, v)/dv,   m_f/v_f = m/v_f + dS(m, v)/dm.   (7)

5 Logistic regression models

We illustrate NCVMP on Bayesian binary and multinomial logistic regression. The regression part of the model is standard: g_kn = Σ_{d=1}^D W_kd X_dn + m_k, where g is the auxiliary variable, W is a matrix of weights with standard normal prior, X is the design matrix and m is a per-class mean, which is also given a standard normal prior. For binary regression we just have k = 1, and the observation model is p(y = 1|g_1n) = σ(g_1n), where σ(x) = 1/(1 + e^{−x}) is the logistic function. In the multinomial case p(y = k|g_:n) = σ_k(g_:n), where σ_k(x) = e^{x_k} / Σ_l e^{x_l} is the “softmax” function. The VMP messages for the regression part of the model are standard so we omit the details due to space limitations.

5.1 Binary logistic regression

For logistic regression we require the following factor: f(s, x) = σ(x)^s (1 − σ(x))^{1−s}, where we assume s is observed. The log factor is sx − log(1 + e^x). There are two problems: we cannot analytically compute expectations wrt x, and we need to optimise the variational parameters. [9] propose the “quadratic” bound on the integrand:

σ(x) ≥ ˜σ(x, t) = σ(t) exp( (x − t)/2 − (λ(t)/2)(x² − t²) ),   (8)

where λ(t) = tanh(t/2)/(2t) = (σ(t) − 1/2)/t.
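As a quick numerical sanity check (not from the paper) of Equation 8, one can verify on a grid that the quadratic bound lies below σ(x) everywhere and touches it at x = ±t:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lam(t):
    # lambda(t) = tanh(t/2) / (2t) = (sigmoid(t) - 1/2) / t
    return np.tanh(t / 2.0) / (2.0 * t)

def quadratic_bound(x, t):
    # Quadratic (Jaakkola-Jordan style) lower bound on sigmoid(x),
    # tight at x = +t and x = -t.
    return sigmoid(t) * np.exp((x - t) / 2.0 - 0.5 * lam(t) * (x**2 - t**2))

x = np.linspace(-10.0, 10.0, 2001)
t = 1.5
gap = sigmoid(x) - quadratic_bound(x, t)   # should be >= 0 everywhere
```

Because the bound is the exponential of a quadratic in x, multiplying it by a Gaussian prior stays Gaussian, which is exactly why it yields conjugate updates in VMP.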
It is straightforward to analytically optimise t to make the bound as tight as possible. The bound is conjugate to a Gaussian, but its performance can be poor. An alternative proposed in [18] is to bound the integral:

⟨log f(x)⟩_q ≥ sm − (1/2)a²v − log(1 + e^{m+(1−2a)v/2}),   (9)

where m, v are the mean and variance of q(x) and a is a variational parameter which can be optimised using the fixed point iteration a := σ(m + (1 − 2a)v/2). We refer to this as the “tilted” bound. This bound is not conjugate to a Gaussian, but we can calculate the NCVMP message, which has parameters

1/v_f = a(1 − a),   m_f/v_f = m/v_f + s − a,

where we have assumed a has been optimised. A final possibility is to use quadrature to calculate the gradients of S(m, v) directly. The NCVMP message then has parameters

1/v_f = (⟨xσ(x)⟩_q − m⟨σ(x)⟩_q)/v,   m_f/v_f = m/v_f + s − ⟨σ(x)⟩_q.

The univariate expectations ⟨σ(x)⟩_q and ⟨xσ(x)⟩_q can be efficiently computed using Gauss-Hermite or Clenshaw-Curtis quadrature.

5.2 Multinomial softmax regression

Consider the softmax factor f(x, p) = ∏_{k=1}^K δ(p_k − σ_k(x)), where the x_k are real valued and p is a probability vector with current Dirichlet variational posterior q(p) = Dir(p; d). We can integrate out p to give the log factor log f(x) = Σ_{k=1}^K (d_k − 1)x_k − (d. − K) log Σ_l e^{x_l}, where we define d. := Σ_{k=1}^K d_k. Let the incoming message from x be q(x) = ∏_{k=1}^K N(x_k; m_k, v_k). How should we deal with the log Σ_l e^{x_l} term? The approach used by [3] is a linear Taylor expansion of the log, which is accurate for small variances v:

⟨log Σ_i e^{x_i}⟩ ≤ log Σ_i ⟨e^{x_i}⟩ = log Σ_i e^{m_i+v_i/2},   (10)

which we refer to as the “log” bound. The messages are still not conjugate, so some numerical method must still be used to learn m and v: while [3] used LBFGS we will use NCVMP. Another bound was proposed by [5]:

log Σ_{k=1}^K e^{x_k} ≤ a + Σ_{k=1}^K log(1 + e^{x_k−a}),   (11)

where a is a new variational parameter. Combining with (8) we get the “quadratic bound” on the integrand, with K + 1 variational parameters.
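A minimal sketch (an illustration, not the paper's code) of the tilted bound of Equation 9 for s = 1, with its fixed-point update for a, checked against a brute-force grid quadrature of ⟨sx − log(1 + e^x)⟩ under q(x) = N(m, v):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tilted_bound(m, v, s=1.0, iters=50):
    """Tilted lower bound (Eq. 9) on <s*x - log(1+e^x)> under q(x) = N(m, v)."""
    a = 0.5
    for _ in range(iters):
        a = sigmoid(m + (1.0 - 2.0 * a) * v / 2.0)   # fixed-point update for a
    return s * m - 0.5 * a**2 * v - np.log1p(np.exp(m + (1.0 - 2.0 * a) * v / 2.0))

def true_expectation(m, v, s=1.0, n=20001):
    """Brute-force quadrature of <s*x - log(1+e^x)> on a wide grid."""
    x = np.linspace(m - 12.0 * np.sqrt(v), m + 12.0 * np.sqrt(v), n)
    w = np.exp(-0.5 * (x - m) ** 2 / v)
    w /= w.sum()                                      # normalised Gaussian weights
    return np.sum(w * (s * x - np.logaddexp(0.0, x)))

m, v = 0.7, 2.0   # illustrative values
lb = tilted_bound(m, v)
truth = true_expectation(m, v)
```

The fixed-point map is a contraction for moderate v, and the resulting bound sits just below the quadrature value, consistent with the paper's observation that the tilted bound is tight for v around 1.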
This has conjugate updates, so modularity can be achieved without NCVMP, but as we will see, results are often poor. [5] derives coordinate ascent fixed point updates to optimise a, but reducing to a univariate optimisation in a and using Newton’s method is much faster (see supplementary material). Inspired by the univariate “tilted” bound in Equation 9, we propose the multivariate tilted bound:

⟨log Σ_i e^{x_i}⟩ ≤ (1/2) Σ_j a_j² v_j + log Σ_i e^{m_i+(1−2a_i)v_i/2}.   (12)

Setting a_k = 0 for all k we recover Equation 10 (hence this is the “tilted” version). Maximisation with respect to a can be achieved by the fixed point update (see supplementary material): a := σ(m + (1 − 2a) · v / 2). This is an O(K) operation since the denominator of the softmax function is shared. For the softmax factor quadrature is not viable because of the high dimensionality of the integrals. From Equation 7 the NCVMP messages using the tilted bound have natural parameters

1/v_kf = (d. − K) a_k (1 − a_k),   m_kf/v_kf = m_k/v_kf + d_k − 1 − (d. − K) a_k,

where we have assumed a has been optimised. As an alternative we suggest choosing whether to send the message resulting from the quadratic bound or tilted bound depending on which is currently the tightest, referred to as the “adaptive” method. Finally we consider a simple Taylor series expansion of the integrand around the mean of x, denoted “Taylor”, and the multivariate quadratic bound of [4], denoted “Bohning” (see the supplementary material for details).

6 Results

Here we aim to present the typical compromise between performance and modularity that NCVMP addresses. We will see that for both binary logistic and multinomial softmax models, achieving conjugate updates by being constrained to quadratic bounds is sub-optimal, in terms of estimates of variational parameters, marginal likelihood estimation, and predictive performance.
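The multivariate tilted bound of Equation 12 can be sketched as follows (an illustration, not the paper's implementation); by construction it is never looser than the log bound of Equation 10, and it upper-bounds a Monte Carlo estimate of E[log Σ_i e^{x_i}]:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def log_bound(m, v):
    # Equation 10: log-sum-exp of the lognormal means (a = 0 case of Eq. 12).
    return np.log(np.sum(np.exp(m + v / 2.0)))

def tilted_bound(m, v, iters=100):
    # Equation 12 with the fixed-point update a := softmax(m + (1 - 2a) v / 2).
    a = softmax(m)
    for _ in range(iters):
        a = softmax(m + (1.0 - 2.0 * a) * v / 2.0)
    return 0.5 * np.sum(a**2 * v) + np.log(np.sum(np.exp(m + (1.0 - 2.0 * a) * v / 2.0)))

rng = np.random.default_rng(0)
K = 10                              # K = 10, u = 1, v = 1 as in Section 6.3
m = rng.normal(0.0, 1.0, K)
v = np.ones(K)

# Monte Carlo estimate of E[log sum_i exp(x_i)] under q(x) = prod_k N(m_k, v_k).
x = m + np.sqrt(v) * rng.normal(size=(200000, K))
mc = np.mean(np.log(np.sum(np.exp(x), axis=1)))

tb = tilted_bound(m, v)
lb = log_bound(m, v)
```

The objective in Equation 12 is convex in a (a quadratic plus a log-sum-exp of an affine function of a), so the fixed point is its unique optimum, and the tilted value can only improve on the a = 0 (log bound) value.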
NCVMP gives the freedom to choose a wider class of bounds, or even use efficient quadrature methods in the univariate case, while maintaining simplicity and modularity.

6.1 The logistic factor

We first test the logistic factor methods of Section 5.1 at the task of estimating the toy model σ(x)π(x) with varying Gaussian prior π(x) (see Figure 1(a)). We calculate the true mean and variance using quadrature. The quadratic bound has the largest errors for the posterior mean, and the posterior variance is severely underestimated. In contrast, NCVMP using quadrature, while being slightly more computationally costly, approximates the posterior much more accurately: the error here is due only to the VB approximation. Using the tilted bound with NCVMP gives more robust estimates of the variance than the quadratic bound as the prior mean changes. However, both the quadratic and tilted bounds underestimate the variance as the prior variance increases.

Figure 1: Logistic regression experiments. (a) Posterior mean and variance estimates of σ(x)π(x) with varying prior π(x). Left: varying the prior mean with fixed prior variance v = 10. Right: varying the prior variance with fixed prior mean m = 0. (b) Log likelihood of the true regression coefficients under the approximate posterior for 10 synthetic logistic regression datasets.

6.2 Binary logistic regression

We generated ten synthetic logistic regression datasets with N = 30 data points and P = 8 covariates.
We evaluated the results in terms of the log likelihood of the true regression coefficients under the approximate posterior, a measure which penalises poorly estimated posterior variances. Figure 1(b) compares the performance of non-conjugate VMP using quadrature and VMP using the quadratic bound. For four of the ten datasets the quadratic bound finds very poor solutions. Non-conjugate VMP finds a better solution in seven out of the ten datasets, and there is marginal difference in the other three. Non-conjugate VMP (with no damping) also converges faster in general, although some oscillation is seen for one of the datasets.

6.3 Softmax bounds

To have some idea how the various bounds for the softmax integral E_q[log Σ_{k=1}^K e^{x_k}] compare empirically, we calculated relative absolute error on 100 random distributions q(x) = ∏_k N(x_k; m_k, v). We sample m_k ∼ N(0, u). When not being varied, K = 10, u = 1, v = 1. Ground truth was calculated using 10^5 Monte Carlo samples. We vary the number of classes K, the distribution variance v and the spread of the means u. Results are shown in Figure 2. As expected the tilted bound (12) dominates the log bound (10), since it is a generalisation. As K is increased the relative error made using the quadratic bound increases, whereas both the log and the tilted bound get tighter. In agreement with [5] we find the strength of the quadratic bound (11) is in the high variance case, and Bohning’s bound [4] is very loose under all conditions. Both the log and tilted bound are extremely accurate for variances v < 1. In fact, the log and tilted bounds are asymptotically optimal as v → 0. “Taylor” gives accurate results but is not a bound, so convergence is not guaranteed and the global bound on the marginal likelihood is lost. The spread of the means u does not have much of an effect on the tightness of these bounds.
These results show that even when quadrature is not an option, much tighter bounds can be found if the constraint of requiring quadratic bounds imposed by VMP is relaxed. For the remainder of the paper we consider only the quadratic, log and tilted bounds.

Figure 2: Log10 of the relative absolute error approximating E[log Σ exp], averaged over 100 runs (panels: varying the number of classes K, the input variance v, and the mean variance u, for the quadratic, log, tilted, Bohning and Taylor methods).

6.4 Multinomial softmax regression

Synthetic data. For synthetic data sampled from the generative model we know the ground truth coefficients and can control characteristics of the data. We first investigate the performance with sample size N, with fixed number of features P = 6, classes K = 4, and no noise (apart from the inherent noise of the softmax function). As expected our ability to recover the ground truth regression coefficients improves with increasing N (see Figure 3(a), left). However, we see that the methods using the tilted bound perform best, closely followed by the log bound. Although the quadratic bound has comparable performance for small N < 200, it performs poorly with larger N due to its weakness at small variances. The choice of bound impacts the speed of convergence (see Figure 3(a), right). Although the log bound performed almost as well as the tilted bound at recovering coefficients, it takes many more iterations to converge. The extra flexibility of the tilted bound allows faster convergence, analogous to parameter expansion [16]. For small N the tilted bound, log bound and adaptive method converge rapidly, but as N increases the quadratic bound starts to converge much more slowly, as do the tilted and adaptive methods to a lesser extent.
“Adaptive” converges fastest because the quadratic bound gives good initial updates at high variance, and the tilted bound takes over once the variance decreases. We vary the level of noise in the synthetic data, fixing N = 200, in Figure 3(b). For all but very large noise values the tilted bound performs best.

Figure 3: Left: root mean squared error of inferred regression coefficients. Right: iterations to convergence. Results are shown as quartiles on 16 random synthetic datasets. All the bounds except “quadratic” were fit using NCVMP. (a) Varying sample size; (b) varying noise level.

UCI datasets. We test the multinomial regression model on three standard UCI datasets: Iris (N = 150, D = 4, K = 3), Glass (N = 214, D = 8, K = 6) and Thyroid (N = 7200, D = 21, K = 3),
see Table 1. Here we have also included “Probit”, corresponding to a Bayesian multinomial probit regression model, estimated using VMP, and similar in setup to [6], except that we use EP to approximate the predictive distribution, rather than sampling. On all three datasets the marginal likelihood calculated using the tilted or adaptive bounds is optimal out of the logistic models (“Probit” has a different underlying model, so differences in marginal likelihood are confounded by the Bayes factor). In terms of predictive performance the quadratic bound seems to be slightly worse across the datasets, with the performance of the other methods varying between datasets. We did not compare to the log bound since it is dominated by the tilted bound and is considerably slower to converge.

Table 1: Average results and standard deviations on three UCI datasets, based on 16 random 50:50 training-test splits. Adaptive and tilted use NCVMP; quadratic and probit use VMP.

Iris                  | Quadratic       | Adaptive        | Tilted          | Probit
Marginal likelihood   | −65 ± 3.5       | −31.2 ± 2       | −31.2 ± 2       | −37.3 ± 0.79
Predictive likelihood | −0.216 ± 0.07   | −0.201 ± 0.039  | −0.201 ± 0.039  | −0.215 ± 0.034
Predictive error      | 0.0892 ± 0.039  | 0.0642 ± 0.037  | 0.065 ± 0.038   | 0.0592 ± 0.03

Glass                 | Quadratic       | Adaptive        | Tilted          | Probit
Marginal likelihood   | −319 ± 5.6      | −193 ± 3.9      | −193 ± 5.4      | −201 ± 2.6
Predictive likelihood | −0.58 ± 0.12    | −0.542 ± 0.11   | −0.531 ± 0.1    | −0.503 ± 0.095
Predictive error      | 0.197 ± 0.032   | 0.200 ± 0.032   | 0.200 ± 0.032   | 0.195 ± 0.035

Thyroid               | Quadratic       | Adaptive        | Tilted          | Probit
Marginal likelihood   | −1814 ± 43      | −909 ± 30       | −916 ± 31       | −840 ± 18
Predictive likelihood | −0.114 ± 0.019  | −0.0793 ± 0.014 | −0.0753 ± 0.008 | −0.0916 ± 0.010
Predictive error      | 0.0241 ± 0.0026 | 0.0225 ± 0.0024 | 0.0226 ± 0.0023 | 0.0276 ± 0.0028

7 Discussion

NCVMP is not guaranteed to converge.
Indeed, for some models we have found convergence to be a problem, which can be alleviated by damping: if the NCVMP message is m_{f→i}(x_i), then send the message m_{f→i}(x_i)^{1−α} · m^{old}_{f→i}(x_i)^{α}, where m^{old}_{f→i}(x_i) was the previous message sent to i and 0 ≤ α < 1 is a damping factor. The fixed points of the algorithm remain unchanged. We have introduced Non-conjugate Variational Message Passing, which extends variational Bayes to non-conjugate models while maintaining the convenient message passing framework of VMP and allowing freedom to choose the most accurate available method to approximate required expectations. Deterministic and stochastic factors can be combined in a modular fashion, and conjugate parts of the model can be handled with standard VMP. We have shown NCVMP to be of practical use for fitting Bayesian binary and multinomial logistic models. We derived a new bound for the softmax integral which is tighter than other commonly used bounds, but has variational parameters that are still simple to optimise. Tightness of the bound is valuable both in terms of better approximating the posterior and giving a closer approximation to the marginal likelihood, which may be of interest for model selection.

References

[1] H. Attias. A variational Bayesian framework for graphical models. Advances in Neural Information Processing Systems, 12:209–215, 2000.
[2] M. Beal and Z. Ghahramani. Variational Bayesian learning of directed graphical models with hidden variables. Bayesian Analysis, 1(4):793–832, 2006.
[3] D. Blei and J. Lafferty. A correlated topic model of science. Annals of Applied Statistics, 2007.
[4] D. Böhning. Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics, 44:197–200, 1992. doi:10.1007/BF00048682.
[5] G. Bouchard. Efficient bounds for the softmax and applications to approximate inference in hybrid models. In NIPS Workshop on Approximate Inference in Hybrid Models, 2007.
[6] M. Girolami and S. Rogers.
Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18(8):1790–1817, 2006.
[7] A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. Journal of Machine Learning Research, 11:3235–3268, 2010.
[8] A. Honkela, M. Tornio, T. Raiko, and J. Karhunen. Natural conjugate gradient in variational inference. In M. Ishikawa, K. Doya, H. Miyamoto, and T. Yamakawa, editors, ICONIP (2), volume 4985 of Lecture Notes in Computer Science, pages 305–314. Springer, 2007.
[9] T. S. Jaakkola and M. I. Jordan. A variational approach to Bayesian logistic regression models and their extensions. In International Conference on Artificial Intelligence and Statistics, 1996.
[10] M. E. Khan, B. M. Marlin, G. Bouchard, and K. P. Murphy. Variational bounds for mixed-data factor analysis. In Advances in Neural Information Processing Systems (NIPS) 23, 2010.
[11] B. M. Marlin, M. E. Khan, and K. P. Murphy. Piecewise bounds for estimating Bernoulli-logistic latent Gaussian models. In Proceedings of the 28th Annual International Conference on Machine Learning, 2011.
[12] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, volume 17, 2001.
[13] T. P. Minka, J. M. Winn, J. P. Guiver, and D. A. Knowles. Infer.NET 2.4, 2010. Microsoft Research Cambridge. http://research.microsoft.com/infernet.
[14] H. Nickisch and C. E. Rasmussen. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078, Oct. 2008.
[15] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
[16] Y. A. Qi and T. Jaakkola. Parameter expanded variational Bayesian methods. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems (NIPS) 19, pages 1097–1104. MIT Press, 2006.
[17] T. Raiko, H.
Valpola, M. Harva, and J. Karhunen. Building blocks for variational Bayesian learning of latent variable models. Journal of Machine Learning Research, 8:155–201, 2007.
[18] L. K. Saul and M. I. Jordan. A mean field learning algorithm for unsupervised neural networks. In Learning in Graphical Models, 1999.
[19] M. P. Wand, J. T. Ormerod, S. A. Padoan, and R. Fruhwirth. Variational Bayes for elaborate distributions. In Workshop on Recent Advances in Bayesian Computation, 2010.
[20] J. Winn and C. M. Bishop. Variational message passing. Journal of Machine Learning Research, 6(1):661, 2006.
2011
Im2Text: Describing Images Using 1 Million Captioned Photographs

Vicente Ordonez, Girish Kulkarni, Tamara L. Berg
Stony Brook University, Stony Brook, NY 11794
{vordonezroma or tlberg}@cs.stonybrook.edu

Abstract

We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state-of-the-art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.

1 Introduction

Producing a relevant and accurate caption for an arbitrary image is an extremely challenging problem, perhaps nearly as difficult as the underlying general image understanding task. However, there are already many images with relevant associated descriptive text available in the noisy vastness of the web. The key is to find the right images and make use of them in the right way! In this paper, we present a method to effectively skim the top of the image understanding problem to caption photographs by collecting and utilizing the large body of images on the internet with associated visually descriptive text. We follow in the footsteps of past work on internet vision that has demonstrated that big data can often make big problems – e.g. image localization [13], retrieving photos with specific content [27], or image parsing [26] – much more bite-sized and amenable to very simple non-parametric matching methods.
In our case, with a large captioned photo collection we can create an image description surprisingly well even with basic global image representations for retrieval and caption transfer. In addition, we show that it is possible to make use of large numbers of state-of-the-art, but fairly noisy, estimates of image content to produce more pleasing and relevant results. People communicate through language, whether written or spoken. They often use this language to describe the visual world around them. Studying collections of existing natural image descriptions and how to compose descriptions for novel queries will help advance progress toward more complex human recognition goals, such as how to tell the story behind an image. These goals include determining what content people judge to be most important in images and what factors they use to construct natural language to describe imagery. For example, when given a picture like that on the top row, middle column of figure 1, the user describes the girl, the dog, and their location, but selectively chooses not to describe the surrounding foliage and hut. This link between visual importance and descriptions leads naturally to the problem of text summarization in natural language processing (NLP). In text summarization, the goal is to select or generate a summary for a document. Some of the most common and effective methods proposed for summarization rely on extractive summarization [25, 22, 28, 19, 23], where the most important or relevant sentence (or sentences) is selected from a document to serve as the document's summary.

Figure 1: SBU Captioned Photo Dataset: Photographs with user-associated captions from our web-scale captioned photo collection. We collect a large number of photos from Flickr and filter them to produce a data collection containing over 1 million well-captioned pictures. Example captions: “Man sits in a rusted car buried in the sand on Waitarere beach”; “Interior design of modern white and brown living room furniture against white wall with a lamp hanging”; “Emma in her hat looking super cute”; “Little girl and her dog in northern Thailand. They both seemed interested in what we were doing”.

Often a variety of features related to document content [23], surface [25], events [19] or feature combinations [28] are used in the selection process to produce sentences that reflect the most significant concepts in the document. In our photo captioning problem, we would like to generate a caption for a query picture that summarizes the salient image content. We do this by considering a large relevant document set constructed from related image captions and then use extractive methods to select the best caption(s) for the image. In this way we implicitly make use of human judgments of content importance during description generation, by directly transferring human-made annotations from one image to another. This paper presents two extractive approaches for image description generation. The first uses global image representations to select relevant captions (Sec 3). The second incorporates features derived from noisy estimates of image content (Sec 5). Of course, the first requirement for any extractive method is a document from which to extract. Therefore, to enable our approach we build a web-scale collection of images with associated descriptions (i.e. captions) to serve as our document for relevant caption extraction. A key factor to making such a collection effective is to filter it so that descriptions are likely to refer to visual content. Some small collections of captioned images have been created by hand in the past. The UIUC Pascal Sentence data set1 contains 1k images, each of which is associated with 5 human-generated descriptions. The ImageClef2 image retrieval challenge contains 10k images with associated human descriptions.
However, neither of these collections is large enough to facilitate reasonable image-based matching necessary for our goals, as demonstrated by our experiments on captioning with varying collection size (Sec 3). In addition, this is the first – to our knowledge – attempt to mine the internet for general captioned images on a web scale! In summary, our contributions are:
• A large novel data set containing images from the web with associated captions written by people, filtered so that the descriptions are likely to refer to visual content.
• A description generation method that utilizes global image representations to retrieve and transfer captions from our data set to a query image.
• A description generation method that utilizes both global representations and direct estimates of image content (objects, actions, stuff, attributes, and scenes) to produce relevant image descriptions.

1.1 Related Work

Studying the association between words and pictures has been explored in a variety of tasks, including: labeling faces in news photographs with associated captions [2], finding a correspondence between keywords and image regions [1, 6], or moving beyond objects to mid-level recognition elements such as attributes [16, 8, 17, 12]. Image description generation in particular has been studied in a few recent papers [9, 11, 15, 30]. Kulkarni et al [15] generate descriptions from scratch based on detected object, attribute, and prepositional relationships. This results in descriptions for images that are usually closely related to image content, but that are also often quite verbose and non-humanlike. Yao et al [30] look at the problem of generating text using various hierarchical knowledge ontologies and with a human in the loop for image parsing (except in specialized circumstances). Feng and Lapata [11] generate captions for images using extractive and abstractive generation methods, but assume relevant documents are provided as input, whereas our generation method requires only an image as input. A recent approach from Farhadi et al [9] is the most relevant to ours. In this work the authors produce image descriptions via a retrieval method, by translating both images and text descriptions to a shared meaning space represented by a single <object, action, scene> tuple. A description for a query image is produced by retrieving whole image descriptions via this meaning space from a set of image descriptions (the UIUC Pascal Sentence data set). This results in descriptions that are very human – since they were written by humans – but which may not be relevant to the specific image content. This limited relevancy often occurs because of problems of sparsity, both in the data collection – 1000 images is too few to guarantee similar image matches – and in the representation – only a few categories for 3 types of image content are considered.

1 http://vision.cs.uiuc.edu/pascal-sentences/
2 http://www.imageclef.org/2011

Figure 2: System flow: 1) Input query image, 2) Candidate matched images retrieved from our web-scale captioned collection using global image representations, 3) High-level information is extracted about image content including objects, attributes, actions, people, stuff, scenes, and tf-idf weighting, 4) Images are re-ranked by combining all content estimates, 5) Top 4 resulting captions. Example matched captions: “Across the street from Yannicks apartment. At night the headlight on the handlebars above the door lights up.”; “The building in which I live. My window is on the right on the 4th floor”; “This is the car I was in after they had removed the roof and successfully removed me to the ambulance. I really like doors.”; “I took this photo out of the car window while driving by a church in Pennsylvania.”
In contrast, we attack the caption generation problem for much more general images (images found via thousands of Flickr queries compared to 1000 images from Pascal) and a larger set of object categories (89 vs 20). In addition to extending the object category list considered, we also include a wider variety of image content aspects, including: non-part-based stuff categories, attributes of objects, person-specific action models, and a larger number of common scene classes. We also generate our descriptions via an extractive method with access to a much larger and more general set of captioned photographs from the web (1 million vs 1 thousand).

2 Overview & Data Collection

Our captioning system proceeds as follows (see fig 2 for illustration): 1) a query image is input to the captioning system, 2) candidate match images are retrieved from our web-scale collection of captioned photographs using global image descriptors, 3) high-level information related to image content, e.g. objects, scenes, etc., is extracted, 4) images in the match set are re-ranked based on image content, 5) the best caption(s) is returned for the query. Captions can also be generated after step 2 from descriptions associated with the top globally matched images. In the rest of the paper, we describe collecting a web-scale data set of captioned images from the internet (Sec 2.1), caption generation using a global representation (Sec 3), content estimation for various content types (Sec 4), and finally present an extension to our generation method that incorporates content estimates (Sec 5).

2.1 Building a Web-Scale Captioned Collection

One key contribution of our paper is a novel web-scale database of photographs with associated descriptive text.
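The five-step flow above can be sketched as a single retrieval-plus-re-ranking driver. This is a schematic only: the descriptor matrix, the `content_score` callback standing in for the content cues of Sec 4, and all names are our own placeholders, not the actual implementation.

```python
import numpy as np

def caption_query(query_desc, collection_descs, collection_captions,
                  content_score, k=100, top=4):
    """Schematic captioning pipeline: global match, then content-based re-rank."""
    # Step 2: retrieve the k globally most similar captioned images
    # (Euclidean distance on global descriptors; smaller = more similar).
    dists = np.linalg.norm(collection_descs - query_desc, axis=1)
    matched = np.argsort(dists)[:k]
    # Steps 3-4: re-rank the matched set by estimated content similarity.
    scores = np.array([content_score(i) for i in matched])
    reranked = matched[np.argsort(-scores)]
    # Step 5: return the top caption(s) for the query.
    return [collection_captions[i] for i in reranked[:top]]
```

The caption-only shortcut after step 2 corresponds to returning `collection_captions[matched[0]]` without the re-ranking pass.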
To enable effective captioning of novel images, this database must be good in two ways: 1) it must be large, so that image-based matches to a query are reasonably similar, and 2) the captions associated with the database photographs must be visually relevant, so that transferring captions between pictures is useful. To achieve the first requirement we query Flickr using a huge number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very large, but noisy, initial set of photographs with associated text. To achieve our second requirement,

Figure 3: Size Matters: Example matches to a query image for varying data set sizes (1k, 10k, 100k, and 1 million images). [Images omitted.]

we filter this set of photos so that the descriptions attached to a picture are relevant and visually descriptive. To encourage visual descriptiveness in our collection, we select only those images with descriptions of satisfactory length, based on observed lengths in visual descriptions. We also enforce that retained descriptions contain at least 2 words belonging to our term lists and at least one prepositional word, e.g. “on”, “under”, which often indicate visible spatial relationships. This results in a final collection of over 1 million images with associated text descriptions – the SBU Captioned Photo Dataset. These text descriptions generally function in a similar manner to image captions, and usually directly refer to some aspects of the visual image content (see fig 1 for examples). Hereafter, we will refer to this web-based collection of captioned images as C. Query Set: We randomly sample 500 images from our collection for evaluation of our generation methods (examples are shown in fig 1). As is usually the case with web photos, the photos in this set display a wide range of difficulty for visual recognition algorithms and captioning, from images that depict scenes (e.g.
beaches), to images with relatively simple depictions (e.g. a horse in a field), to images with much more complex depictions (e.g. a boy handing out food to a group of people).

3 Global Description Generation

Internet vision papers have demonstrated that if your data set is large enough, some very challenging problems can be attacked with very simple matching methods [13, 27, 26]. In this spirit, we harness the power of web photo collections in a non-parametric approach. Given a query image, Iq, our goal is to generate a relevant description. We achieve this by computing the global similarity of the query image to our large web collection of captioned images, C. We find the closest matching image (or images) and simply transfer over the description from the matching image to the query image. We also collect the 100 most similar images to a query – our matched set of images Im ∈ M – for use in our content-based description generation method (Sec 5). For image comparison we utilize two image descriptors. The first descriptor is the well-known gist feature, a global image descriptor related to perceptual dimensions – naturalness, roughness, ruggedness, etc – of scenes. The second descriptor is also a global image descriptor, computed by resizing the image into a “tiny image”, essentially a thumbnail of size 32x32. This helps us match not only scene structure, but also the overall color of images. To find visually relevant images we compute the similarity of the query image to images in C using a sum of gist similarity and tiny image color similarity (equally weighted). Results – Size Matters! Our global caption generation method is illustrated in the first 2 panes and the first 2 resulting captions of Fig 2. This simple method often performs surprisingly well. As reflected in past work [13, 27], image retrieval from small collections often produces spurious matches.
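A sketch of that equally weighted combination follows. The tiny-image computation is shown explicitly; the gist descriptor is assumed to come from an external implementation, and mapping each distance to a similarity via exp(−·) is our own choice, since the exact similarity function is not specified here.

```python
import numpy as np

def tiny_image(img, size=32):
    """Reduce an HxWx3 array to a 32x32 'tiny image' by nearest-neighbor
    subsampling (a crude stand-in for proper resizing)."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs].astype(float)

def combined_similarity(gist_q, gist_m, tiny_q, tiny_m):
    """Equally weighted sum of gist similarity and tiny-image color similarity."""
    s_gist = np.exp(-np.linalg.norm(gist_q - gist_m))
    s_tiny = np.exp(-np.linalg.norm(tiny_q - tiny_m) / tiny_q.size)
    return 0.5 * s_gist + 0.5 * s_tiny
```

Ranking the collection by `combined_similarity` and keeping the 100 best gives the matched set M used below.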
This can be seen in Fig 3, where increasing data set size has a significant effect on the quality of retrieved global matches. Quantitative results also reflect this (see Table 1).

4 Image Content Estimation

Given an initial matched set of images Im ∈ M based on global descriptor similarity, we would like to re-rank the selected captions by incorporating estimates of image content. For a query image, Iq, and images in its matched set we extract and compare 5 kinds of image content:
• Objects (e.g. cats or hats), with shape, attributes, and actions – sec 4.1
• Stuff (e.g. grass or water) – sec 4.2
• People (e.g. man), with actions – sec 4.3
• Scenes (e.g. pasture or kitchen) – sec 4.4
• TFIDF weights (text or detector based) – sec 4.5
Each type of content is used to compute the similarity between matched images (and captions) and the query image. We then rank the matched images (and captions) according to each content measure and combine their results into an overall relevancy ranking (Sec 5).

4.1 Objects

Detection & Actions: Object detection methods have improved significantly in the last few years, demonstrating reasonable performance for a small number of object categories [7], or as a mid-level representation for scene recognition [20]. Running detectors on general web images, however, still produces quite noisy results, usually in the form of a large number of false positive detections. As the number of object detectors increases this becomes even more of an obstacle to content prediction. However, we propose that if we have some prior knowledge about the content of an image, then we can utilize even these imperfect detectors. In our web collection, C, there are strong indicators of content in the form of caption words – if an object is described in the text associated with an image then it is likely to be depicted. Therefore, for the images, Im ∈ M, in our matched set we run only those detectors for objects (or stuff) that are mentioned in the associated caption.
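This caption-gated detection step can be sketched as follows; the detector set and the word matching are illustrative stand-ins for the actual 89-category list and text processing.

```python
# Run only those detectors whose category is mentioned in the caption of a
# matched image -- illustrative sketch, not the actual category list.
DETECTOR_CATEGORIES = {"dog", "cat", "car", "person", "horse"}

def detectors_to_run(caption):
    """Return the subset of detectors triggered by words in the caption."""
    words = set(caption.lower().replace(".", "").split())
    return DETECTOR_CATEGORIES & words
```

Gating the detector pool on caption words keeps the matched-image detections relatively clean, so the query image only needs detection verification against them.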
In addition, we also include synonyms and hyponyms for better content coverage; e.g. “dalmatian” triggers the “dog” detector. This produces pleasingly accurate detection results. For a query image we can essentially perform detection verification against the relatively clean matched image detections. Specifically, we use mixtures of multi-scale deformable part detectors [10] to detect a wide variety of objects – 89 object categories selected to cover a reasonable range of common objects. These categories include the 20 Pascal categories, 49 of the most common object categories with reasonably effective detectors from Object Bank [20], and 20 additional common object categories. For the 8 animate object categories in our list (e.g. cat, cow, duck) we find that detection performance can be improved significantly by training action-specific detectors, for example “dog sitting” vs “dog running”. This also aids similarity computation between a query and a matched image because objects can be matched at an action level. Our object action detectors are trained using the standard object detector with pose-specific training data. Representation: We represent and compare object detections using 2 kinds of features: shape and appearance. To represent object shape we use a histogram of HoG [4] visual words, computed at intervals of 8 pixels and quantized into 1000 visual words. These are accumulated into a spatial pyramid histogram [18]. We also use an attribute representation to characterize object appearance. We use the attribute list from our previous work [15], which covers 21 visual aspects describing color (e.g. blue), texture (e.g. striped), material (e.g. wooden), general appearance (e.g. rusty), and shape (e.g. rectangular). Training images for the attribute classifiers come from Flickr, Google, the attribute dataset provided by Farhadi et al [8], and ImageNet [5]. An RBF kernel SVM is used to learn a classifier for each attribute term.
Then appearance characteristics are represented as a vector of attribute responses to allow for generalization. If we have detected an object category, c, in a query image window, Oq, and a matched image window, Om, then we compute the probability of an object match as: P(Oq, Om) = exp(−Do(Oq, Om)), where Do(Oq, Om) is the Euclidean distance between the object (shape or attribute) vector in the query detection window and the matched detection window.

4.2 Stuff

In addition to objects, people often describe the stuff present in images, e.g. “grass”. Because these categories are more amorphous and do not display defined parts, we use a region-based classification method for detection. We train linear SVMs on the low-level region features of [8] and histograms of Geometric Context output probability maps [14] to recognize sky, road, building, tree, water, and grass stuff categories. While the low-level features are useful for discriminating stuff by their appearance, the scene layout maps introduce a soft preference for certain spatial locations dependent on stuff type. Training images and bounding boxes are taken from ImageNet and evaluated at test time on a coarsely sampled grid of overlapping square regions over whole images. Pixels in any region with a classification probability above a fixed threshold are treated as detections, and the max probability for a region is used as the potential value.

Figure 4: Results: Some good captions selected by our system for query images. Example outputs: “Amazing colours in the sky at sunset with the orange of the cloud and the blue of the sky behind.”; “Strange cloud formation literally flowing through the sky like a river in relation to the other clouds out there.”; “Fresh fruit and vegetables at the market in Port Louis Mauritius.”; “Clock tower against the sky.”; “Tree with red leaves in the field in autumn.”; “One monkey on the tree in the Ourika Valley Morocco”; “A female mallard duck in the lake at Luukki Espoo”; “The river running through town I cross over this to get to the train”; “Street dog in Lijiang”; “The sun was coming through the trees while I was sitting in my chair by the river”.

If we have detected a stuff category, s, in a query image region, Sq, and a matched image region, Sm, then we compute the probability of a stuff match as: P(Sq, Sm) = P(Sq = s) ∗ P(Sm = s), where P(Sq = s) is the SVM probability of the stuff region detection in the query image and P(Sm = s) is the SVM probability of the stuff region detection in the matched image.

4.3 People & Actions

People often take pictures of people, making “person” the most commonly depicted object category in captioned images. We utilize effective recent work on pedestrian detection to detect and represent people in our images. In particular, we make use of detectors from Bourdev et al [3], which learn poselets – parts that are tightly clustered in configuration and appearance space – from a large number of 2d annotated regions on person images in a max-margin framework. To represent activities, we use follow-on work from Maji et al [21], which classifies actions using the poselet activation vector. This has been shown to produce accurate activity classifiers for the 9 actions in the PASCAL VOC 2010 static image action classification challenge [7]. We use the outputs of these 9 classifiers as our action representation vector, to allow for generalization to other similar activities. If we have detected a person, Pq, in a query image, and a person, Pm, in a matched image, we compute the probability that the people share the same action (pose) as: P(Pq, Pm) = exp(−Dp(Pq, Pm)), where Dp(Pq, Pm) is the Euclidean distance between the person action vector in the query detection and the person action vector in the matched detection.

4.4 Scenes

The last commonly described kind of image content relates to the general scene where an image was captured.
This often occurs when examining captioned photographs of vacation snapshots or general outdoor settings, e.g. “my dog at the beach”. To recognize scene types we train discriminative multikernel classifiers using the large-scale SUN scene recognition database and code [29]. We select 23 common scene categories for our representation, including indoor (e.g. kitchen), outdoor (e.g. beach), manmade (e.g. highway), and natural (e.g. pasture) settings. Again here we represent the scene descriptor as a vector of scene responses for generalization. If a scene location, Lm, is mentioned in a matched image caption, then we compare the scene representation between our matched image and our query image, Lq, as: P(Lq, Lm) = exp(−Dl(Lq, Lm)), where Dl(Lq, Lm) is the Euclidean distance between the scene vector computed on the query image and the scene vector computed on the matched image.

Figure 5: Funny Results: Some particularly funny or poetic results. Example outputs: “I tried to cross the street to get in my car but you can see that I failed LOL.”; “The tower is the highest building in Hong Kong.”; “the water the boat was in”; “girl in a box that is a train”; “water under the bridge”; “small dog in the grass”; “walking the dog in the primeval forest”; “check out the face on the kid in the black hat he looks so enthused”; “shadows in the blue sky”.

4.5 TFIDF Measures

For a query image, Iq, we wish to select the best caption from the matched set, Im ∈ M. For all of the content measures described so far, we have computed the similarity of the query image content to the content of each matched image independently. We would also like to use information from the entire matched set of images and associated captions to predict importance. To reflect this, we calculate TFIDF on our matched sets. This is computed as usual as a product of term frequency (tf) and inverse document frequency (idf).
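That tf·idf product can be sketched as follows (whitespace tokenization only; the actual text preprocessing is not specified in the text):

```python
import math
from collections import Counter

def tfidf(term, doc, docs):
    """tf-idf of `term` in `doc` relative to the collection `docs`
    (each document is a list of tokens): tf * log(|D| / df)."""
    tf = Counter(doc)[term] / max(len(doc), 1)
    df = sum(1 for d in docs if term in d)
    return tf * math.log(len(docs) / df) if df else 0.0
```

Here the “documents” are the captions of the matched set; the detector-based variant substitutes per-category detection counts for word counts.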
We calculate this weighting both in the standard sense, for matched caption document words, and for detection category frequencies (to compensate for more prolific object detectors):

tfidf_{i,j} = ( n_{i,j} / Σ_k n_{k,j} ) · log( |D| / |{j : t_i ∈ d_j}| )

We define our matched set of captions (images for detector-based tfidf) to be our document, j, and compute the tfidf score, where n_{i,j} represents the frequency of term i in the matched set of captions (number of detections for detector-based tfidf). The inverse document frequency is computed as the log of the number of documents |D| divided by the number of documents containing the term i (documents with detections of type i for detector-based tfidf).

5 Content Based Description Generation

For a query image, Iq, with global descriptor based matched images, Im ∈ M, we want to re-rank the matched images according to the similarity of their content to the query. We perform this re-ranking individually for each of our content measures: object shape, object attributes, people actions, stuff classification, and scene type (Sec 4). We then combine these individual rankings into a final combined ranking in two ways. The first method trains a linear regression model of feature ranks against BLEU scores. The second method divides our training set into two classes: positive images consisting of the top 50% of the training set by BLEU score, and negative images from the bottom 50%. A linear SVM is trained on this data with feature ranks as input. For both methods we perform 5-fold cross-validation with a split of 400 training images and 100 test images to get average performance and standard deviation. For a novel query image, we return the captions from the top ranked image(s) as our result. For an example matched caption like “The little boy sat in the grass with a ball”, several types of content will be used to score the goodness of the caption. This will be computed based on words in the caption for which we have trained content models.
For example, for the word “ball” both the object shape and attributes will be used to compute the best similarity between a ball detection in the query image and a ball detection in the matched image. For the word “boy” an action descriptor will be used to compare the activity in which the boy is occupied between the query and the matched image. For the word “grass” stuff classifications will be used to compare detections between the query and the matched image. For each word in the caption, tfidf overlap (sum of tfidf scores for the caption) is also used, as well as detector-based tfidf for those words referring to objects. In the event that multiple objects (or stuff, people or scenes) are mentioned in a matched image caption, the object (or stuff, people, or scene) based similarity measures will be a sum over the set of described terms. For the case where a matched image caption contains a word, but there is no corresponding detection in the query image, the similarity is not incorporated. Results & Evaluation: Our content-based captioning method often produces reasonable results (examples are shown in Fig 4). Usually results describe the main subject of the photograph (e.g. “Street dog in Lijiang”, “One monkey on the tree in the Ourika Valley Morocco”). Sometimes they describe the depiction extremely well (e.g. “Strange cloud formation literally flowing through the sky like a river...”, “Clock tower against the sky”). Sometimes we even produce good descriptions of attributes (e.g. “Tree with red leaves in the field in autumn”). Other captions can be quite poetic (Fig 5) – a picture of a derelict boat captioned “The water the boat was in”, a picture of monstrous tree roots captioned “Walking the dog in the primeval forest”. Other times the results are quite funny. A picture of a flimsy wooden structure says, “The tower is the highest building in Hong Kong”. Once in a while they are spookily apropos.
A picture of a boy in a black bandana is described as “Check out the face on the kid in the black hat. He looks so enthused.” – and he doesn’t. We also perform two quantitative evaluations. Several methods have been proposed to evaluate captioning [15, 9], including direct user ratings of relevance and BLEU score [24]. User rating tends to suffer from user variance as ratings are inherently subjective. The BLEU score on the other hand provides a simple objective measure based on n-gram precision. As noted in past work [15], BLEU is perhaps not an ideal measure due to large variance in human descriptions (human-human BLEU scores hover around 0.5 [15]). Nevertheless, we report it for comparison.

Method                                          BLEU
Global Matching (1k)                            0.0774 ± 0.0059
Global Matching (10k)                           0.0909 ± 0.0070
Global Matching (100k)                          0.0917 ± 0.0101
Global Matching (1 million)                     0.1177 ± 0.0099
Global + Content Matching (linear regression)   0.1215 ± 0.0071
Global + Content Matching (linear SVM)          0.1259 ± 0.0060

Table 1: Automatic Evaluation: BLEU score measured at 1

As can be seen in Table 1, data set size has a significant effect on BLEU score; more data provides more similar and relevant matched images (and captions). Local content matching also improves BLEU score somewhat over purely global matching. In addition, we propose a new evaluation task where a user is presented with two photographs and one caption. The user must assign the caption to the most relevant image (care is taken to remove biases due to placement). For evaluation we use a query image and caption generated by our method. The other image in the evaluation task is selected at random from the web-collection. This provides an objective and useful measure to predict caption relevance. As a sanity check of our evaluation measure we also evaluate how well a user can discriminate between the original ground truth image that a caption was written about and a random image.
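The first rank-combination method of Section 5 (a linear regression of per-feature ranks against BLEU scores, then reranking by predicted BLEU) can be sketched with ordinary least squares. The function names and the use of numpy's `lstsq` are our assumptions, not the paper's implementation:

```python
import numpy as np

def fit_rank_combiner(feature_ranks, bleu_scores):
    """Fit linear weights mapping per-feature ranks to BLEU scores.

    feature_ranks: (n_images, n_features) array of ranks, one column per
    content measure (shape, attributes, actions, stuff, scene, ...).
    bleu_scores: (n_images,) array of BLEU scores for the training images.
    """
    # Append a constant column as the bias term.
    X = np.column_stack([feature_ranks, np.ones(len(feature_ranks))])
    w, *_ = np.linalg.lstsq(X, bleu_scores, rcond=None)
    return w

def rerank(feature_ranks, w):
    """Return matched-image indices ordered by predicted BLEU (best first)."""
    X = np.column_stack([feature_ranks, np.ones(len(feature_ranks))])
    return np.argsort(-(X @ w))
```

The SVM variant replaces the regression target with a binary top-50%/bottom-50% label but reranks the same way, by the learned score.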
We perform this evaluation on 100 images from our web-collection using Amazon’s mechanical turk service, and find that users are able to select the ground truth image 96% of the time. This demonstrates that the task is reasonable and that descriptions from our collection tend to be fairly visually specific and relevant. Considering the top retrieved caption produced by our final method – global plus local content matching with a linear SVM classifier – we find that users are able to select the correct image 66.7% of the time. Because the top caption is not always visually relevant to the query image even when the method is capturing some information, we also perform an evaluation considering the top 4 captions produced by our method. In this case, the best caption out of the top 4 is correctly selected 92.7% of the time. This demonstrates the strength of our content based method to produce relevant captions for images. 6 Conclusion We have described an effective caption generation method for general web images. This method relies on collecting and filtering a large data set of images from the internet to produce a novel webscale captioned photo collection. We present two variations on our approach, one that uses only global image descriptors to compose captions, and one that incorporates estimates of image content for caption generation. 8 References [1] K. Barnard, P. Duygulu, N. de Freitas, D. Forsyth, D. Blei, and M. Jordan. Matching words and pictures. Journal of Machine Learning Research, 3:1107–1135, 2003. [2] T. Berg, A. Berg, J. Edwards, M. Maire, R. White, E. Learned-Miller, Y. Teh, and D. Forsyth. Names and faces. In CVPR, 2004. [3] L. Bourdev, S. Maji, T. Brox, and J. Malik. Detecting people using mutually consistent poselet activations. In ECCV, 2010. [4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 
ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009. [6] P. Duygulu, K. Barnard, N. de Freitas, and D. Forsyth. Object recognition as machine translation. In ECCV, 2002. [7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results. http://www.pascalnetwork.org/challenges/VOC/voc2010/workshop/index.html. [8] A. Farhadi, I. Endres, D. Hoiem, and D. A. Forsyth. Describing objects by their attributes. In CVPR, 2009. [9] A. Farhadi, M. Hejrati, A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. A. Forsyth. Every picture tells a story: generating sentences for images. In ECCV, 2010. [10] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models, release 4. http://people.cs.uchicago.edu/ pff/latent-release4/. [11] Y. Feng and M. Lapata. How many words is a picture worth? automatic caption generation for news images. In Proc. of the Assoc. for Computational Linguistics, ACL ’10, pages 1239–1249, 2010. [12] V. Ferrari and A. Zisserman. Learning visual attributes. In NIPS, 2007. [13] J. Hays and A. A. Efros. im2gps: estimating geographic information from a single image. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008. [14] D. Hoiem, A. A. Efros, and M. Hebert. Recovering surface layout from an image. Int. J. Comput. Vision, 75:151–172, October 2007. [15] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Babytalk: Understanding and generating simple image descriptions. In CVPR, 2011. [16] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In ICCV, 2009. [17] C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009. [18] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching. 
In CVPR, June 2006. [19] W. Li, W. Xu, M. Wu, C. Yuan, and Q. Lu. Extractive summarization using inter- and intra- event relevance. In Int Conf on Computational Linguistics, 2006. [20] E. P. X. Li-Jia Li, Hao Su and L. Fei-Fei. Object bank: A high-level image representation for scene classification and semantic feature sparsification. In Neural Information Processing Systems (NIPS), Vancouver, Canada, December 2010. [21] S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011. [22] R. Mihalcea. Language independent extractive summarization. In National Conference on Artificial Intelligence, pages 1688–1689, 2005. [23] A. Nenkova, L. Vanderwende, and K. McKeown. A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization. In SIGIR, 2006. [24] K. Papineni, S. Roukos, T. Ward, and W. jing Zhu. Bleu: a method for automatic evaluation of machine translation. pages 311–318, 2002. [25] D. R. Radev and T. Allison. Mead - a platform for multidocument multilingual text summarization. In Int Conf on Language Resources and Evaluation, 2004. [26] J. Tighe and S. Lazebnik. Superparsing: Scalable nonparametric image parsing with superpixels. In ECCV, 2010. [27] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: a large dataset for non-parametric object and scene recognition. PAMI, 30, 2008. [28] K.-F. Wong, M. Wu, and W. Li. Extractive summarization using supervised and semi-supervised learning. In International Conference on Computational Linguistics, pages 985–992, 2008. [29] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. [30] B. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu. I2t: Image parsing to text description. Proc. IEEE, 98(8), 2010. 9
2011
Modelling Genetic Variations with Fragmentation-Coagulation Processes Yee Whye Teh, Charles Blundell and Lloyd T. Elliott Gatsby Computational Neuroscience Unit, UCL 17 Queen Square, London WC1N 3AR, United Kingdom {ywteh,c.blundell,elliott}@gatsby.ucl.ac.uk Abstract We propose a novel class of Bayesian nonparametric models for sequential data called fragmentation-coagulation processes (FCPs). FCPs model a set of sequences using a partition-valued Markov process which evolves by splitting and merging clusters. An FCP is exchangeable, projective, stationary and reversible, and its equilibrium distributions are given by the Chinese restaurant process. As opposed to hidden Markov models, FCPs allow for flexible modelling of the number of clusters, and they avoid label switching non-identifiability problems. We develop an efficient Gibbs sampler for FCPs which uses uniformization and the forward-backward algorithm. Our development of FCPs is motivated by applications in population genetics, and we demonstrate the utility of FCPs on problems of genotype imputation with phased and unphased SNP data. 1 Introduction We are interested in probablistic models for sequences arising from the study of genetic variations in a population of organisms (particularly humans). The most commonly studied class of genetic variations in humans are single nucleotide polymorphisms (SNPs), with large quantities of data now available (e.g. from the HapMap [1] and 1000 Genomes projects [2]). SNPs play an important role in our understanding of genetic processes, human historical migratory patterns, and in genome-wide association studies for discovering the genetic basis of diseases, which in turn are useful in clinical settings for diagnoses and treatment recommendations. A SNP is a specific location in the genome where a mutation has occurred to a single nucleotide at some time during the evolutionary history of a species. 
Because the rate of such mutations is low in human populations, the chance of two mutations occurring at the same location is small, and so most SNPs have only two variants (wild type and mutant) in the population. The SNP variants on a chromosome of an individual form a sequence, called a haplotype, with each entry being binary valued, coding for the two possible variants at that SNP. Due to the effects of gene conversion and recombination, the haplotypes of a set of individuals often have a “mosaic” structure where contiguous subsequences recur across multiple individuals [3]. Hidden Markov Models (HMMs) [4] are often used as the basis of existing models of genetic variations that exploit this mosaic structure (e.g. [3, 5]). However, HMMs, as dynamic generalisations of finite mixture models, cannot flexibly model the number of states needed for a particular dataset, and suffer from the same label switching non-identifiability problems as finite mixture models [6] (see Section 3.2). While nonparametric generalisations of HMMs [7, 8, 9] allow for flexible modelling of the number of states, they still suffer from label switching problems. In this paper we propose alternative Bayesian nonparametric models for genetic variations called fragmentation-coagulation processes (FCPs). An FCP defines a Markov process on the space of partitions of haplotypes, such that the random partition at each time is marginally a Chinese restaurant
We will see that FCPs are natural models for the mosaic structure of SNP data since they can flexibly accommodate varying numbers of subsequences and they do not have the label switching problems inherent in HMMs. Further, computations in FCPs scale well. There is a rich literature on modelling genetic variations. The standard coalescent with recombination (also known as the ancestral recombination graph) model describes the genealogical history of a set of haplotypes using coalescent, recombination and mutation events [10]. Though an accurate model of the genetic process, inference is unfortunately highly intractable. PHASE [11, 12] and IMPUTE [13] are a class of HMM based models, where each HMM state corresponds to a haplotype in a reference panel (training set). This alleviates the label switching problem, but incurs higher computational costs than the normal HMMs or our FCP since there are now as many HMM states as reference haplotypes. BEAGLE [14] introduces computational improvements by collapsing the multiple occurrences of the same mosaic subsequence across the reference haplotypes into a single node of a graph, with the graph constructed in a very efficient but somewhat ad hoc manner. Section 2 introduces preliminary notation and describes random partitions and the CRP. In Section 3 we introduce FCPs, discuss their more salient properties, and describe how they are used to model SNP data. Section 4 describes an auxiliary variables Gibbs sampler for our model. Section 5 presents results on simulated and real data, and Section 6 concludes. 2 Random Partitions Let S denote a set of n SNP sequences. Label the sequences by the integers 1, . . . , n so that S can be taken to be [n] = {1, . . . , n}. A partition γ of S is a set of disjoint non-empty subsets of S (called clusters) whose union is S. Denote the set of partitions of S by ΠS. 
If a ⊂S, define the projection γ|a of γ onto a to be the partition of a obtained by removing the elements of S\a as well as any resulting empty subsets from γ. The canonical distribution over ΠS is the Chinese restaurant process (CRP) [15, 16]. It can be described using an iterative generative process: n customers enter a Chinese restaurant one at a time. The first customer sits at some table and each subsequent customer sit at a table with m current customers with probability proportional to m, or at a new table with probability proportional to α, where α is a parameter of the CRP. The seating arrangement of customers around tables forms a partition γ of S, with occupied tables corresponding to the clusters in γ. We write γ ∼CRP(α, S) if γ ∈ΠS is a CRP distributed random partition over S. Multiplying the conditional probabilities together gives the probability mass function of the CRP: fα,S(γ) = α|γ|Γ(α) Γ(n + α) Y a∈γ Γ(|a|) (1) where Γ is the gamma function. The CRP is exchangeable (invariant to permutations of S), and projective (the probability of the projection γ|a is simply fα,a(γ|a)), so can be extended in a natural manner to partitions of N and is related via de Finetti’s theorem to the Dirichlet process [17]. 3 Fragmentation-Coagulation Processes A fragmentation-coagulation process (FCP) is a continuous-time Markov process π ≡(π(t), t ∈ [0, T]) over a time interval [0, T] where each π(t) is a random partition in ΠS. Since the space of partitions for a finite S is finite, the FCP is a Markov jump process (MJP) [18] : it evolves according to a discrete series of random events (or jumps) at which it changes state and at all other times the state remains unchanged. In particular, the jump events in an FCP are either fragmentations or coagulations. 
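The projection γ|a and the CRP of Section 2 are both easy to state in code. The following is a minimal sketch with our own naming (using `math.lgamma` for numerical stability in the pmf of equation (1)):

```python
import math
import random

def project(partition, a):
    """Projection gamma|a: drop elements outside a, then drop empty clusters."""
    projected = [frozenset(c) & frozenset(a) for c in partition]
    return {c for c in projected if c}

def sample_crp(alpha, n, rng=None):
    """Sample a partition of [n] = {1, ..., n} from CRP(alpha, [n])."""
    rng = rng or random.Random(0)
    clusters = []
    for i in range(1, n + 1):
        # Join an occupied table with prob proportional to its size,
        # a new table with prob proportional to alpha.
        weights = [len(c) for c in clusters] + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(clusters):
            clusters.append({i})
        else:
            clusters[k].add(i)
    return clusters

def crp_log_pmf(alpha, n, clusters):
    """log f_{alpha,S}(gamma) from equation (1)."""
    lp = (len(clusters) * math.log(alpha)
          + math.lgamma(alpha) - math.lgamma(n + alpha))
    return lp + sum(math.lgamma(len(c)) for c in clusters)
```

For small n one can enumerate all partitions and check that equation (1) sums to one, which is a useful sanity check on the normalising constant.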
A fragmentation at time t involves a cluster c ∈ π(t−) splitting into exactly two nonempty clusters a, b ∈ π(t) (all other clusters stay unchanged; the notation t− denotes an infinitesimal time before t), and a coagulation at t involves two clusters a, b ∈ π(t−) merging to form a single cluster c = a ∪ b ∈ π(t) (see Figure 1). Note that fragmentations and coagulations are converses of each other; as we will see later, this will lead to some important properties of the FCP. Figure 1: FCP cartoon. Each line is a sequence and bundled lines form clusters. C: coagulation event. F: fragmentation event. Fractions are, for the orange sequence, from left to right: probability of joining cluster c at time 0, probability of following cluster a at a fragmentation event, rate of starting a new table (creating a fragmentation), and rate of joining with an existing table (creating a coagulation). Following the various popular culinary processes in Bayesian nonparametrics, we will start by describing the law of π in terms of the conditional distribution of the cluster membership of each sequence i given those of 1, . . . , i − 1. Since we have a Markov process with a time index, the metaphor is of a Chinese restaurant operating from time 0 to time T, where customers (sequences) may move from one table (cluster) to another and tables may split and merge at different points in time, so that the seating arrangements (partition structures) at different times might not be the same. To be more precise, define π|[i−1] = (π|[i−1](t), t ∈ [0, T]) to be the projection of π onto the first i − 1 sequences. π|[i−1] is piecewise constant, with π|[i−1](t) ∈ Π[i−1] describing the partitioning of the sequences 1, . . . , i − 1 (the seating arrangement of customers 1, . . . , i − 1) at time t. Let ai(t) = c\{i}, where c is the unique cluster in π|[i](t) containing i.
Note that either ai(t) ∈π|[i−1](t), meaning customer i sits at an existing table in π|[i−1](t), or ai(t) = ∅, which will mean that customer i sits at a new table. Thus the function ai describes customer i’s choice of table to sit at through times [0, T]. We define the conditional distribution of ai given π|[i−1] as a Markov jump process evolving from time 0 to T with two parameters µ > 0 and R > 0 (see Figure 1): i = 1: The first customer sits at a table for the duration of the process, i.e. a1(t) = ∅∀t ∈[0, T]. t = 0: Each subsequent customer i starts at time t = 0 by sitting at a table according to CRP probabilities with parameter µ. So, ai(0) = c ∈π|[i−1](0) with probability proportional to |c|, and ai(0) = ∅with probability proportional to µ. F1: At time t > 0, if customer i is sitting at table ai(t−) = c ∈π|[i−1](t−), and the table c fragments into two tables a, b ∈π|[i−1](t), customer i will move to table a with probability |a|/|c|, and to table b with probability |b|/|c|. C1: If the table c merges with another table at time t, the customer simply follows the other customers to the resulting merged table. F2: At all other times t, if customer i is sitting at some existing table ai(t−) = c ∈π|[i−1](t), then the customer will move to a new empty table (ai(t) = ∅) with rate R/|c|. C2: Finally, if i is sitting by himself (ai(t−) = ∅), then he will join an existing table ai(t) = c ∈π|[i−1](t) with rate R/µ. The total rate of joining any existing table is |π|[i−1](t)|R/µ. Note that when customer i moves to a new table in step F2, a fragmentation event is created, and all subsequent customers who end up in the same table will have to decide at step F1 whether to move to the original table or to the table newly created by i. The probabilities in steps F1 and F2 are exactly the same as those for a Dirichlet diffusion tree [19] with constant divergence function R. 
Similarly, step C2 creates a coagulation event in which subsequent customers seated at the two merging tables will move to the merged table in step C1, and the probabilities are exactly the same as those for Kingman’s coalescent [20, 21]. Thus our FCP is a combination of the Dirichlet diffusion tree and Kingman’s coalescent. Theorem 3 below shows that this combination results in FCPs being stationary Markov processes with CRP equilibrium distributions. Further, FCPs are reversible, so in a sense the Dirichlet diffusion tree and Kingman’s coalescent are duals of each other. Given π|[i−1], π|[i] is uniquely determined by ai and vice versa, so that the seating of all n customers through times [0, T], a1, . . . , an, uniquely determines the sequential partition structure π. We now investigate various properties of π that follow from the iterative construction above. The first is an alternative characterisation of π as an MJP whose transitions are fragmentations or coagulations, an unsurprising observation since both the Dirichlet diffusion tree and Kingman’s coalescent, as partition-valued processes, are Markov. Theorem 1. π is an MJP with initial distribution π(0) ∼ CRP(µ, S) and stationary transition rates

q(γ, ρ) = R Γ(|a|)Γ(|b|) / Γ(|c|),    q(ρ, γ) = R/µ,    (2)

where γ, ρ ∈ ΠS are such that ρ is obtained from γ by fragmenting a cluster c ∈ γ into two clusters a, b ∈ ρ (at rate q(γ, ρ)), and conversely γ is obtained from ρ by coagulating a, b into c (at rate q(ρ, γ)). The total rate of transition out of γ is

q(γ, ·) = R Σ_{c∈γ} H_{|c|−1} + (R/µ) |γ|(|γ| − 1)/2,    (3)

where H_{|c|−1} is the (|c| − 1)st harmonic number. Proof. The initial distribution follows from the CRP probabilities of step t = 0. For every i, ai is Markov and ai(t) depends only on ai(t−) and π|[i−1](t); thus (ai(s), s ∈ [0, t]) depends only on (aj(s), s ∈ [0, t], j < i), and the Markovian structure of π follows by induction. Since ΠS is finite, π is an MJP.
Further, the probabilities and rates in steps F1, F2, C1 and C2 do not depend explicitly on t so π has stationary transit rates. By construction, q(γ, ρ) is only non-zero if γ and ρ are related by a complimentary pair of fragmentation and coagulation events, as in the theorem. To derive the transition rates (2), recall that a transition rate r from state s to state s′ means that if the MJP is in state s at time t then it will transit to state s′ by an infinitesimal time later t + δ with probability δr. For the fragmentation rate q(γ, ρ), the probability of transiting from γ to ρ in an infinitesimal time δ is δ times the rate at which a customer starts his own table in step F2, times the probabilities of subsequent customers choosing either table in step F1 to form the two tables a and b. Dividing this product by δ forms the rate q(γ, ρ). Without loss of generality suppose that the table started by the customer eventually becomes a and that there were j other customers at the existing table which eventually becomes b. Thus, the rate of the customer starting his own table is R/j and the product of probabilities of subsequent customer choices in step F1 is then 1·2···(|a|−1)×j···(|b|−1) (j+1)···(|c|−1) . Multiplying these together gives q(γ, ρ) in (2). Similarly, the coagulation rate q(ρ, γ) is a product of the rate R µ at which a customer moves from his own table to an existing table in step C2 and the probability of all subsequent customers in either table moving to the merged table (which is just 1). Finally, the total transition rate q(γ, ·) is a sum over all possible fragmentations and coagulations of γ. There are |γ|(|γ|−1) 2 possible pairs of clusters to coagulate, giving the second term. The first term is obtained by summing over all c ∈γ, and over all unordered pairs a, b resulting from fragmenting c, and using the identity P {a,b} Γ(|a|)Γ(|b|) Γ(|c|) = H|c|−1. Theorem 2. π is projective and exchangeable. 
Thus it can be extended naturally to a Markov process over partitions of N. Proof. Both properties follow from the fact that both the initial distribution CRP(µ, S) and the transition rates (2) are projective and exchangeable. Here we will give more direct arguments for the theorem. Projectivity is a direct consequence of the iterative construction, showing that the law of π|[i] does not depend on the clustering trajectories aj of subsequent customers j > i. We can show exchangeability of π by deriving the joint probability density of a sample path of π (the density exists since both ΠS and T are finite so π has a finite number of events on [0, T]), and seeing that it is invariant to permutations of S. For an MJP the probability of a sample path is the probability of the initial state (fµ,S(π(0))) times, for each subsequent jump, the probability of staying in the current state γ until the jump (the holding time is exponential distributed with rate q(γ, ·)) and the transition from γ to the next state ρ (this is the ratio q(γ, ρ)/q(γ, ·)), and finally the probability of not transiting from the last jump time to T. Multiplying these probabilities together gives, after simplification: p(π) = R|C|+|F |µ|A|−2|C|−2|F | Γ(µ) Γ(µ + n) exp − Z T 0 q(π(t), ·)dt ! Q a∈A<> Γ(|a|) Q a∈A>< Γ(|a|) (4) with |C| the number of coagulations, |F| number of fragmentations, and A, A<>, A>< are sets of paths in π. A path is a cluster created either at time 0 or a coagulation or fragmentation, and exists for a definite amount of time until it is terminated at time T or another event (these are the horizontal 4 bundles of lines in Figure 1). A is the set of all paths in π, A<> the set of paths created either at time 0 or by a fragmentation and terminated either at time T or by a coagulation, and A>< the set of paths created by a coagulation and terminated by a fragmentation or at time T. Theorem 3. π is ergodic and has equilibrium distribution CRP(µ, S). 
Further, it is reversible with (π(T −t), t ∈[0, T]) having the same law as π. Proof. Ergodicity follows from the fact that for any T > 0 and any two partitions γ, ρ ∈ΠS, there is positive probability that if it starts at π(0) = γ, it will end with π(T) = ρ. For example, it may undergo a sequence of fragmentations until each sequence belong to its own cluster, then a sequence of coagulations forming the clusters in ρ. Reversibility and the equilibrium distribution can be demonstrated by detailed balance. Suppose γ, ρ ∈ΠS and a, b, c are related as in Theorem 1, fµ,S(γ)q(γ, ρ) = µ|γ|Γ(µ) Γ(n+µ) Q k∈γ Γ(|k|) × R Γ(|a|)Γ(|b|) Γ(|c|) (5) = µ|γ|+1Γ(µ) Γ(n+µ) Γ(|a|)Γ(|b|) Q k∈γ,k̸=c Γ(|k|) × R µ = fµ,S(ρ)q(ρ, γ) Finally, the terms in (4) are invariant to time reversals, i.e. p((π(T −t), t ∈[0, T])) = p(π). Theorem 3 shows that the µ parameter controls the marginal distributions of π(t), while (2) indicates that the R parameter controls the rate at which π evolves. 3.1 A Model of SNP Sequences We model the n SNP sequences (haplotypes) with an FCP π over partitions of S = [n]. Let the m assayed SNP locations on a chunk of the chromosome be at positions t1 < t2 · · · < tm. The ith haplotype consists of observations xi1, . . . , xim ∈{0, 1} each corresponding to a binary SNP variant. For j = 1, . . . , m, and for each cluster c ∈π(tj) at position tj, we have a parameter θcj ∼Bernoulli(βj) which denotes the variant at location tj of the corresponding subsequence. For each i ∈c we model xij as equal to θcj with probability 1 −ϵ, where ϵ is a noise probability. We place a prior βj ∼Beta(α˜βj, α(1 −˜βj)) with mean ˜βj given by the empirical mean of variant 1 at SNP j among the observed haplotypes. We place uninformative uniform priors on log R, log µ and log α over a bounded but large range such that the boundaries were never encountered. The properties of FCPs in Theorems 1-3 are natural in the modelling setting here. 
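Two calculations from Section 3 are easy to verify numerically: the harmonic-number identity Σ_{a,b} Γ(|a|)Γ(|b|)/Γ(|c|) = H_{|c|−1} used for the total rate (3), and the detailed-balance relation f_{µ,S}(γ) q(γ, ρ) = f_{µ,S}(ρ) q(ρ, γ) of equation (5) behind Theorem 3. A sketch, with our own function names:

```python
import math
from itertools import combinations

def frag_identity(m):
    """Sum of Gamma(|a|)Gamma(|b|)/Gamma(m) over unordered splits of an
    m-element cluster into nonempty a, b; should equal H_{m-1}."""
    elems = range(m)
    total = 0.0
    seen = set()
    for r in range(1, m):
        for a in combinations(elems, r):
            b = tuple(sorted(set(elems) - set(a)))
            if (b, a) in seen:
                continue  # count each unordered pair {a, b} once
            seen.add((a, b))
            total += math.gamma(len(a)) * math.gamma(len(b)) / math.gamma(m)
    return total

def crp_pmf(mu, n, clusters):
    """f_{mu,S}(gamma) from equation (1), for a partition of n elements."""
    p = mu ** len(clusters) * math.gamma(mu) / math.gamma(n + mu)
    for c in clusters:
        p *= math.gamma(len(c))
    return p
```

Checking detailed balance on a single fragmentation, say γ = {{1,2,3}} against ρ = {{1,2},{3}} with rates q(γ, ρ) = R Γ(2)Γ(1)/Γ(3) and q(ρ, γ) = R/µ, both sides agree for any µ, R > 0, which is exactly the computation in the proof of Theorem 3.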
Projectivity and exchangeability relate to the assumption that sequence labels should not have an effect on the model, while stationarity and reversibility arise from the simplifying assumption that we do not expect the genetic processes operating in different parts of the genome to be different. These are also properties of the standard coalescent with recombination model of genetic variations [10]. Incidentally the coalescent with recombination model is not Markov, though there have been Markov approximations [22, 23], and all practical HMM based methods are Markov. 3.2 HMMs and the Label Switching Problem HMMs can also be interpreted as sequential partitioning processes in which each state at time step t corresponds to a cluster in the partition at t. Since each sequence can be in different states at different times this automatically induces a partition-structured Markov process, where each partition consists of at most K clusters (K being the number of states in the HMM), and where each cluster is labelled with an HMM state. This labelling of the clusters in HMMs is a significant, but subtle, difference between HMMs and FCPs. Note that the clusters in FCPs are unlabelled, and defined purely in terms of the sequences they contain. This labelling of the clusters in HMMs are a significant source of non-identifiability in HMMs, since the likelihoods of data items (and often even the priors over transition probabilities) are invariant to the labels themselves so that each permutation over labels creates a mode in the posterior. This is the so called “label switching problem” for finite mixture models [6]. Since the FCP clusters are unlabelled they do not suffer from label switching problems. On the other hand, by having labelled clusters HMMs can share statistical strength among clusters across time steps (e.g. 
by enforcing the same emission probabilities from each cluster across time), while FCPs do not have a natural way of sharing statistical strength across time. This means that FCPs are not suitable for sequential data where there is no natural correspondence between times across different sequences, e.g. time series data like speech and video. 5 3.3 Discrete Time Markov Chain Construction FCPs can be derived as continuous time limits of discrete time Markov chains constructed from fragmentation and coagulation operators [24]. This construction is more intuitive but lacks the rigour of the development described here. Let CRP(α, d, S) be a generalisation of the CRP on S with an additional discount parameter d (see [25] for details). For any δ > 0, construct a Markov chain over π(0), π(δ), π(2δ), . . . as follows: π(0) ∼CRP(µ, 0, S); then for every m ≥1, define ρ(mδ) to be the partition obtained by fragmenting each cluster c ∈π((m−1)δ) by a partition drawn independently from CRP(0, Rδ, c), and π(mδ) is constructed by coagulating into one the clusters of ρ(mδ) belonging to the same cluster in a draw from CRP(µ/Rδ, 0, ρ(mδ)). Results from [26] (see also [27]) show that marginally each ρ(mδ) ∼CRP(µ, Rδ, S) and π(mδ) ∼CRP(µ, 0, S). The various properties of FCPs, i.e. Markov, projectivity, exchangeability, stationarity, and reversibility, hold for this discrete time Markov chain, and the continuous time π can be derived by taking δ →0. 4 Gibbs Sampling using Uniformization We use a Gibbs sampler for inference in the FCP given SNP haplotype data. Each iteration of the sampler involves treating the ith haplotype sequence as the last sequence to be added into the FCP partition structure (making use of exchangeability), so that the iterative procedure described in Section 3 gives the conditional prior of ai given π|S\{i}. Coupling with the likelihood terms of xi1, . . . , xim gives us the desired conditional distribution of ai. 
Since this conditional distribution of ai is Markov, we can make use of the forward filtering-backward sampling procedure to sample it. However, ai is a continuous-time MJP so a direct application of the typical forward-backward algorithm is not possible. One possibility is to marginalise out the sample path of ai except at a finite number of locations (corresponding to the jumps in π|S\{i} and the SNP locations). This approach is computationally expensive as it requires many matrix exponentiations, and does not resolve the issue of obtaining a full sample path of ai, which may involve jumps at random locations we have marginalised out. Instead, we make use of a recently developed MCMC inference method for MJPs [28]. This sampler introduces as auxiliary variables a set of “potential jump points” distributed according to a Poisson process with piecewise constant rates, such that conditioned on them the posterior of ai becomes a Markov chain that can only transition at either its previous jump locations or the potential jump points, and we can then apply standard forward-backward to sample ai. For each t the state space of ai(t) is Cit ≡π|S\{i} ∪{∅}. For s, s′ ∈Cit let Qt(s, s′) be the transition rate from state s to s′ given in Section 3, with Qt(s, s) = −P s′̸=s Qt(s, s′). Let Ωt > maxs∈Cit −Qt(s, s) be an upper bound on the transition rates of ai at time t, a′ i be the previous sample path of ai, J′ be the jumps in a′ i, and E consists of the m SNP locations and the event times in π|S\{i}. Let Mt(s) be the forward message at time t and state s ∈Cit. The resulting forward-backward sampling algorithm is given below. In addition we update the logarithms of R, µ and α by slice sampling. 1. Sample potential jumps Jaux ∼Poisson(Λ) with rate Λ(t) = Ωt + Qt(a′ i(t), a′ i(t))). 2. Compute forward messages by iterating over t ∈{0} ∪Jaux ∪J′ ∪E from left to right: 2a. At t = 0, set Mt(s) ∝|s| for s ∈π|S\{i} and Mt(∅) ∝µ. 2b. 
At a fragmentation in π|S\{i}, say of c into a, b, set Mt(a) = (|a|/|c|) Mt−(c), Mt(b) = (|b|/|c|) Mt−(c), and Mt(k) = Mt−(k) for k ≠ a, b, c. Here t− denotes the time of the previous iteration.
2c. At a coagulation in π|S\{i}, say of a, b into c, set Mt(c) = Mt−(a) + Mt−(b).
2d. At an observation, say t = tj, set Mt(s) = p(xij|θsj)Mt−(s). We integrate out θ∅j and βj.
2e. At a potential jump in Jaux ∪ J′, set Mt(s) = Σ_{s′∈Cit} Mt−(s′)(1(s′ = s) + Qt(s′, s)/Ω).
3. Get a new sample path ai by backward sampling. This is straightforward and involves reversing the message computations above. Note that ai can only jump at the times in Jaux ∪ J′, and can change state at times in E only if it was involved in the fragmentation or coagulation event.
5 Experiments
Label switching problem Figure 2 demonstrates the label switching problem (Section 3.2) during block Gibbs sampling of a 2-state Bayesian HMM (BHMM) compared to inference in an FCP.
Figure 2: Label switching problem. Left: Each line is the median, over 10 runs, of the normalized log-likelihoods of a Bayesian HMM (blue) and an FCP (red) at each iteration of MCMC. Lighter polygons are the 25% and 75% percentiles. Right: Number of MCMC iterations before each model first encounters the optimum states.
The observed data comprises 16 sequences of length 16. Eight of the sequences consist of just zeros and the others consist of just ones. Each of the binary BHMM states, zij ∈ {0, 1}, i indexing sequence and j indexing position within sequence i, transits to the same state with probability τ, with a prior τ ∼ Beta(10.0, 0.1) encouraging self transitions. The observations of the BHMM have distribution xij ∼ Bernoulli(ρzij) where ρ1 = 1 − ρ0 and ρ0 ∼ Beta(1.0, 1.0). The optimal clustering under both models assigns all zero observations to one state and all ones to another state.
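The uniformization trick behind step 2e of the sampler in Section 4 (messages pushed through I + Q/Ω at potential jump points, reweighted by likelihoods at observations) can be sketched for a fixed state space and constant rate matrix; in the actual sampler the state space and rates change at fragmentation and coagulation events, so the code below is a simplified illustration with names of our own choosing.

```python
import numpy as np

def uniformized_forward(Q, Omega, jump_times, obs_times, liks, init):
    """Forward messages for a Markov jump process via uniformization.
    At each potential jump point: M <- M @ (I + Q/Omega)   (cf. step 2e);
    at each observation time: multiply M elementwise by the likelihood vector.
    Q is a rate matrix with rows summing to zero and Omega > max_s -Q[s, s]."""
    K = Q.shape[0]
    P = np.eye(K) + Q / Omega  # transition kernel of the uniformized chain
    events = sorted([(t, None) for t in jump_times] +
                    [(t, l) for t, l in zip(obs_times, liks)],
                    key=lambda e: e[0])
    M = np.array(init, dtype=float)
    for _, lik in events:
        M = M @ P if lik is None else M * lik
        M = M / M.sum()  # normalise for numerical stability
    return M
```

Backward sampling then reverses these updates, drawing a state at each event in turn, exactly as step 3 describes.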
As shown in Figure 2, due to the lack of identifiability of its states, the BHMM requires more MCMC iterations through the data before inference converges upon an optimal state, whilst an FCP is able to find the correct state much more quickly. This is reflected in both the normalized log-likelihood of the models in Figure 2 (left) and in the number of iterations before reaching the optimal state, Figure 2 (right).
Figure 3: Accuracy vs proportion of missing data for imputation from phased data. Lines are drawn at the means and error bars at the standard error of the means.
Imputation from phased data To reduce costs, typically not all known SNPs are assayed for each participant in a large association study. The problem of inferring the variants of unassayed SNPs in a study using a larger dataset (e.g. HapMap or 1000 Genomes) is called genotype imputation [13]. Figure 3 compares the genotype imputation accuracy of FCP with that of fastPHASE [5] and BEAGLE [14], two state-of-the-art methods. We used 3000 MCMC iterations for inference with the FCP, with the first 1000 iterations discarded as burn-in. We used 320 genes from 47 individuals in the Seattle SNPs dataset [29]. Each gene consists of 94 sequences, of length between 13 and 416 SNPs. We held out 10%–50% of the SNPs uniformly among all haplotypes for testing. Our model had higher accuracy than both fastPHASE and BEAGLE.
Imputation from unphased data In humans, most chromosomes come in pairs. Current assaying methods are unable to determine from which of these two chromosomes each variant originates without employing expensive protocols, thus the data for each individual in large datasets actually consist of sequences of unordered pairs of variants (called genotypes). This includes the Seattle SNPs dataset (the haplotypes provided by [29] in the previous experiment were phased using PHASE [11, 12]).
In this experiment, we performed imputation using the original unphased genotypes, using an extension of the FCP able to handle this sort of data. Figure 4 shows the genotype imputation accuracies and run-times of the FCP model (with 60, 600 or 3000 MCMC iterations of which 30, 200 or 600 were discarded for burn-in) and state-of-the-art software (fastPHASE [5], IMPUTE2 [30], BEAGLE [14]).
Figure 4: Time and accuracy performance of genotype imputation on 231 Seattle SNPs genes. Left: Accuracies evaluated by removing 10%–50% of SNPs from 10%–50% of individuals, repeated five times on each gene with the same hold out proportions. Centers of crosses correspond to median accuracy and times whilst whiskers correspond to the extent of the inter-quartile range. Middle: Lines are accuracy averaged over five repetitions of each gene with 30% of shared SNPs removed from 10%–50% of individuals. Each repetition uses a different subset of SNPs and individuals. Lighter polygons are standard errors. Right: As Middle, except with 10%–50% of shared SNPs removed from 30% of individuals.
We held out 10%–50% of the shared SNPs in 10%–50% of the 47 individuals of the Seattle SNPs dataset. This paradigm mimics a popular experimental setting in which the genotypes of sparsely assayed individuals are imputed using a densely assayed reference panel [30]. We discarded 89 of the genes as they were unable to be properly pre-processed for use with IMPUTE2. As can be seen in Figure 4, FCP achieves similar state-of-the-art accuracy to IMPUTE2 and fastPHASE. Given enough iterations, the FCP outperforms all other methods in terms of accuracy. With 600 iterations, FCP has almost the same accuracy and run-time as fastPHASE.
With just 60 iterations, FCP performs comparably to IMPUTE2 but is an order of magnitude faster. Note that IMPUTE2 scales quadratically in the number of genotypes, so we expect FCPs to be more scalable. Finally, BEAGLE is the fastest algorithm but has the worst accuracy.
6 Discussion
We have proposed a novel class of Bayesian nonparametric models called fragmentation-coagulation processes (FCPs), and applied them to modelling population genetic variation, showing encouraging empirical results on genotype imputation. FCPs are the simplest non-trivial examples of exchangeable fragmentation-coalescence processes (EFCPs) [31]. In general EFCPs the fragmentation and coagulation events may involve more than two clusters. They also have an erosion operation, in which a single element of S forms a singleton cluster. EFCPs were studied by probabilists for their theoretical properties; our work represents the first application of EFCPs as probabilistic models of real data, and the first inference algorithm derived for EFCPs. There are many interesting avenues for future research. Firstly, we are currently exploring a number of other applications in population genetics, including phasing and genome-wide association studies. Secondly, it would be interesting to explore the discrete time Markov chain version of FCPs, which, although less elegant, will have simpler and more scalable inference. Thirdly, the haplotype graph in BEAGLE is constructed via a series of cluster splits and merges, and bears a striking resemblance to the partition structures inferred by FCPs. It would be interesting to explore the use of BEAGLE as a fast initialisation of FCPs, and to use FCPs as a Bayesian interpretation of BEAGLE. Finally, beyond population genetics, FCPs can also be applied to other time series and sequential data, e.g. the time evolution of community structure in network data, or topical change in document corpora.
Acknowledgements We thank the Gatsby Charitable Foundation for generous funding, and Vinayak Rao, Andriy Mnih, Chris Holmes and Gil McVean for fruitful discussions. 8 References [1] The International HapMap Consortium. The international HapMap project. Nature, 426:789–796, 2003. [2] The 1000 Genomes Project Consortium. A map of human genome variation from population-scale sequencing. Nature, 467:1061–1073, 2010. [3] M. J. Daly, J. D. Rioux, S. F. Schaffner, T. J. Hudson, and R. S. Lander. High-resolution haplotype structure in the human genome. Nature Genetics, 29:229–232, 2001. [4] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77:257–285, 1989. [5] P. Scheet and M. Stephens. A fast and flexible statistical model for large-scale population genotype data: Applications to inferring missing genotypes and haplotypic phase. The American Journal of Human Genetics, 78(4):629 – 644, 2006. [6] A. Jasra, C. C. Holmes, and D. A. Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Statistical Science, 20(1):50–67, 2005. [7] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, volume 14, 2002. [8] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006. [9] E. P. Xing and K. Sohn. Hidden Markov Dirichlet process: Modeling genetic recombination in open ancestral space. Bayesian Analysis, 2(2), 2007. [10] R. R. Hudson. Properties of a neutral allele model with intragenic recombination. Theoretical Population Biology, 23(2):183 – 201, 1983. [11] M. Stephens and P. Donnelly. A comparison of Bayesian methods for haplotype reconstruction from population genotype data. American Journal of Human Genetics, 73:1162–1169. [12] N. Li and M. Stephens. 
Modeling Linkage Disequilibrium and Identifying Recombination Hotspots Using Single-Nucleotide Polymorphism Data. Genetics, 165(4):2213–2233, 2003. [13] J. Marchini, B. Howie, S. Myers, G. McVean, and P. Donnelly. A new multipoint method for genome-wide association studies by imputation of genotypes. Nature Genetics, 39(7):906–913, 2007. [14] B. L. Browning and S. R. Browning. A unified approach to genotype imputation and haplotype-phase inference for large data sets of trios and unrelated individuals. American Journal of Human Genetics, 84:210–223, 2009. [15] D. Aldous. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour XIII–1983, pages 1–198. Springer, Berlin, 1985. [16] J. Pitman. Combinatorial Stochastic Processes. Lecture Notes in Mathematics. Springer-Verlag, 2006. [17] D. Blackwell and J. B. MacQueen. Ferguson distributions via Pólya urn schemes. Annals of Statistics, 1:353–355, 1973. [18] E. Çinlar. Introduction to Stochastic Processes. Prentice Hall, 1975. [19] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003. [20] J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27–43, 1982. Essays in Statistical Science. [21] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235–248, 1982. [22] G. A. T. McVean and N. J. Cardin. Approximating the coalescent with recombination. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360(1459):1387–1393, 2005. [23] P. Marjoram and J. Wall. Fast “coalescent” simulation. BMC Genetics, 7(1):16, 2006. [24] J. Bertoin. Random Fragmentation and Coagulation Processes. Cambridge University Press, 2006. [25] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997. [26] J. Pitman. Coalescents with multiple collisions. Annals of Probability, 27:1870–1902, 1999. [27] J. Gasthaus and Y. W.
Teh. Improvements to the sequence memoizer. In Advances in Neural Information Processing Systems, 2010. [28] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011. [29] NHLBI Program for Genomic Applications. SeattleSNPs. June 2011. http://pga.gs.washington.edu. [30] B. N. Howie, P. Donnelly, and J. Marchini. A flexible and accurate genotype imputation method for the next generation of genome-wide association studies. PLoS Genetics, (6), 2009. [31] J. Berestycki. Exchangeable fragmentation-coalescence processes and their equilibrium measures. http://arxiv.org/abs/math/0403154, 2004. 9
2011
Uniqueness of Belief Propagation on Signed Graphs Yusuke Watanabe∗ The Institute of Statistical Mathematics 10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan watay@ism.ac.jp
Abstract
While loopy Belief Propagation (LBP) has been utilized in a wide variety of applications with empirical success, it comes with few theoretical guarantees. Especially, if the interactions of random variables in a graphical model are strong, the behavior of the algorithm can be difficult to analyze due to underlying phase transitions. In this paper, we develop a novel approach to the uniqueness problem of the LBP fixed point; our new “necessary and sufficient” condition is stated in terms of graphs and signs, where the sign denotes the type (attractive/repulsive) of the interaction (i.e., compatibility function) on the edge. In all previous works, uniqueness is guaranteed only in situations where the strengths of the interactions are “sufficiently” small in certain senses. In contrast, our condition covers arbitrarily strong interactions on the specified class of signed graphs. The result of this paper is based on a recent theoretical advance in the analysis of the LBP algorithm: its connection with the graph zeta function.
1 Introduction
The belief propagation algorithm [1] was originally proposed as an efficient method for exact inference in graphical models associated with trees; the algorithm has been extended to general graphs with cycles and is called the Loopy Belief Propagation (LBP) algorithm. It has shown empirical success in a wide class of problems including computer vision, compressed sensing and error correcting codes [2, 3, 4]. In such applications, the existence of cycles and strong interactions between variables make the behavior of the LBP algorithm difficult to analyze. In this paper we propose a novel approach to the uniqueness problem of the LBP fixed point.
Although a considerable amount of research has been done in this decade [5, 6], understanding of the LBP algorithm is not yet complete. An important step toward a better understanding of the algorithm has been the variational interpretation via the Bethe free energy function; the fixed points of LBP correspond to the stationary points of the Bethe free energy function [7]. This view provides a number of algorithms that (provably) find a stationary point of the Bethe free energy function [8, 9, 10, 11]. For the uniqueness problem of the LBP fixed point, a number of conditions have been proposed [12, 13, 14, 15]. (Note that the convergence property implies uniqueness by definition.) In all previous works, uniqueness is guaranteed only in situations where the strengths of the interactions are “sufficiently” small in certain senses. In this paper we propose a completely new approach to the uniqueness condition of the LBP algorithm; it should be emphasized that the strengths of the interactions on the specified class of signed graphs can be arbitrarily large under this condition. (The signs denote the attractive/repulsive types of the compatibility functions on the edges.) Generally speaking, the behavior of the algorithm is complex if the interactions are strong. In such regions, phase transition phenomena can occur in the underlying computation tree [15], making theoretical analyses difficult. To overcome such difficulties, we utilize the connection between the Bethe free energy and the graph zeta function established in [16]; the determinant of the Hessian of the Bethe free energy equals the reciprocal of the graph zeta function up to a positive factor. Combined with the index formula [16], the uniqueness problem is reduced to a positivity property of the graph zeta function. This paper is organized as follows. In section 2 we introduce the background of LBP.
∗Current affiliation: SONY, Intelligent Systems Research Laboratory. YusukeB.Watanabe@jp.sony.com
In section 3 we explain the condition for uniqueness, which is the main result of this paper. In section 4 the proof of the main result is given by a graph theoretic approach. In section 5 we remark on foregoing research in light of the new technique.
2 Loopy Belief Propagation, Bethe free energy and graph zeta function
In this section, we provide basic facts on LBP: the connection with the Bethe free energy and the graph zeta function. Throughout this paper, G = (V, E) is a connected undirected graph with V the vertices and E the undirected edges. We consider the binary pairwise model, which is given by the following factorization form with respect to G:
p(x) = (1/Z) ∏_{ij∈E} ψij(xi, xj) ∏_{i∈V} ψi(xi), (1)
where x = (xi)i∈V is a list of binary (i.e., xi ∈ {±1}) variables, Z is the normalization constant and ψij, ψi are positive functions called compatibility functions. Without loss of generality we assume that ψij(xi, xj) = exp(Jij xi xj) and ψi(xi) = exp(hi xi). We refer to Jij as the interaction and to its absolute value as its “strength”. In various applications, we would like to compute the marginal distributions
pi(xi) := Σ_{x\{xi}} p(x) and pij(xi, xj) := Σ_{x\{xi,xj}} p(x), (2)
though exact computations are often intractable due to the combinatorial complexity. If the graph is a tree, however, they are efficiently computed by the belief propagation algorithm [1]. Even if the graph has cycles, the direct application of the algorithm (Loopy Belief Propagation; LBP) often gives a good approximation [6]. LBP is a message passing algorithm. For each directed edge, a message vector µi→j(xj) is assigned and initialized arbitrarily. The update rule of the messages is given by
µnew_{i→j}(xj) ∝ Σ_{xi} ψji(xj, xi)ψi(xi) ∏_{k∈Ni\j} µk→i(xi), (3)
where Ni is the neighborhood of i ∈ V. The order of edges in the update is arbitrary; the set of fixed points does not depend on the order.
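The update (3) and the beliefs (4) below can be implemented directly for the binary pairwise model. The following sketch is our own code, written for clarity rather than speed; on a tree it reproduces the exact marginals.

```python
import numpy as np

def lbp_binary(J, h, iters=200, tol=1e-10):
    """Loopy BP for p(x) ∝ exp(Σ J_ij x_i x_j + Σ h_i x_i), x_i ∈ {-1, +1}.
    J: symmetric (D, D) coupling matrix (0 means no edge); h: (D,) fields.
    Messages mu[(i, j)] are length-2 arrays indexed by x_j ∈ (-1, +1)."""
    D = len(h)
    edges = [(i, j) for i in range(D) for j in range(D) if i != j and J[i, j] != 0]
    mu = {e: np.ones(2) / 2 for e in edges}
    xs = np.array([-1.0, 1.0])
    for _ in range(iters):
        delta = 0.0
        for (i, j) in edges:
            # product of incoming messages at i, excluding the one from j
            prod = np.ones(2)
            for (k, i2) in edges:
                if i2 == i and k != j:
                    prod *= mu[(k, i)]
            new = np.array([np.sum(np.exp(J[i, j] * xs * xj + h[i] * xs) * prod)
                            for xj in xs])
            new /= new.sum()
            delta = max(delta, np.abs(new - mu[(i, j)]).max())
            mu[(i, j)] = new
        if delta < tol:
            break
    # beliefs b_i(x_i) ∝ exp(h_i x_i) Π_{k∈N_i} mu_{k→i}(x_i)
    beliefs = []
    for i in range(D):
        b = np.exp(h[i] * xs)
        for (k, i2) in edges:
            if i2 == i:
                b *= mu[(k, i)]
        beliefs.append(b / b.sum())
    return np.array(beliefs)
```

The sequential (Gauss-Seidel) sweep order here is one arbitrary choice; as noted above, the set of fixed points does not depend on it.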
If the messages converge to some fixed point {µ∞_{i→j}(xj)}, the approximations of pi(xi) and pij(xi, xj) are calculated as
bi(xi) ∝ ψi(xi) ∏_{k∈Ni} µ∞_{k→i}(xi), (4)
bij(xi, xj) ∝ ψij(xi, xj)ψi(xi)ψj(xj) ∏_{k∈Ni\j} µ∞_{k→i}(xi) ∏_{k∈Nj\i} µ∞_{k→j}(xj), (5)
with normalization Σ_{xi} bi(xi) = 1 and Σ_{xi,xj} bij(xi, xj) = 1. From (3) and (5), the constraints bij(xi, xj) > 0 and Σ_{xj} bij(xi, xj) = bi(xi) are automatically satisfied.
2.1 The Bethe free energy
The LBP algorithm is interpreted as a variational problem for the Bethe free energy function [7]. In this formulation, the domain of the function is given by
L(G) = { {qi, qij} ; qij(xi, xj) > 0, Σ_{xi,xj} qij(xi, xj) = 1, Σ_{xj} qij(xi, xj) = qi(xi) }, (6)
and the elements of this set are called pseudomarginals, i.e., sets of locally consistent probability distributions. The closure of this set is called the local marginal polytope [6]. The objective function, called the Bethe free energy, is defined on L(G) by:
F(q) := −Σ_{ij∈E} Σ_{xi,xj} qij(xi, xj) log ψij(xi, xj) − Σ_{i∈V} Σ_{xi} qi(xi) log ψi(xi) + Σ_{ij∈E} Σ_{xi,xj} qij(xi, xj) log qij(xi, xj) + Σ_{i∈V} (1 − di) Σ_{xi} qi(xi) log qi(xi), (7)
where di = |Ni|. The outcome of this variational problem is the same as that of LBP. More precisely, there is a one-to-one correspondence between the set of stationary points of the Bethe free energy and the set of fixed points of LBP. The correspondence is given by (4, 5).
2.2 Zeta function and Ihara's formula
In this section, we explain the connection of LBP to the graph zeta function. We use the following terms for graphs [17, 16]. Let ⃗E be the set of directed edges obtained by duplicating the undirected edges. For each directed edge e ∈ ⃗E, o(e) ∈ V is the origin of e and t(e) ∈ V is the terminus of e. For e ∈ ⃗E, the inverse edge is denoted by ē, and the corresponding undirected edge by [e] = [ē] ∈ E. A closed geodesic in G is a sequence (e1, . . . , ek) of directed edges such that t(ei) = o(ei+1), ei ≠ ēi+1 for i ∈ Z/kZ.
For a closed geodesic c, we may form the m-fold multiple, cm, by repeating it m times. A closed geodesic c is prime if there are no closed geodesic d and natural number m (≥ 2) such that c = dm. For example, the closed geodesic c = (e1, e2, e3, e1, e2, e3) is not prime and c = (e1, e2, e3, e4, e1, e2, e3) is prime. Two closed geodesics are said to be equivalent if one is obtained by a cyclic permutation of the other. For example, the closed geodesics (e1, e2, e3), (e2, e3, e1) and (e3, e1, e2) are equivalent. An equivalence class of prime closed geodesics is called a prime cycle. Let P be the set of prime cycles of G. For given (complex or real) weights u = (ue)e∈⃗E, Ihara's graph zeta function [18, 19] is given by
ζG(u) := ∏_{p∈P} (1 − g(p))−1, where g(p) := ue1 · · · uek for p = (e1, . . . , ek),
= det(I − UM)−1,
where the second equality is the determinant representation [19] with matrices indexed by the directed edges. The definitions of M and U are
Me,e′ := 1 if e ≠ ē′ and o(e) = t(e′), 0 otherwise, (8)
and Ue,e′ := ue δe,e′, respectively. The following theorem gives the connection between the Bethe free energy and the zeta function. More precisely, the theorem asserts that the determinant of the Hessian of the Bethe free energy function is the reciprocal of the zeta function up to a positive factor.
Theorem 1 ([16, 20]). The following equality holds at any point of L(G):
ζG(u)−1 = det(∇2F) ∏_{ij∈E} ∏_{xi,xj=±1} qij(xi, xj) ∏_{i∈V} ∏_{xi=±1} qi(xi)^{1−di} 2^{2|V|+4|E|}, (9)
where the derivatives are taken with respect to an affine coordinate system of L(G): mi = Eqi[xi], χij = Eqij[xi xj], and
ui→j = (χij − mi mj) / {(1 − mi²)(1 − mj²)}^{1/2} = Covqij[xi, xj] / {Varqi[xi] Varqj[xj]}^{1/2} =: βij. (10)
Note that, from (7), the Hessian ∇2F does not depend on Jij and hi. Since the weight (10) in Theorem 1 is symmetric with respect to the inversion of edges, the zeta function can be reduced to undirected edge weights.
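The determinant representation ζG(u)−1 = det(I − UM) is easy to check numerically on small graphs. The sketch below is our own code: it builds M from (8) given a list of undirected edges. For the triangle C3 with a uniform weight u, the only prime cycles are the two orientations of the triangle, so ζ−1 should equal (1 − u³)².

```python
import numpy as np

def zeta_inverse(edges, u):
    """Compute ζ_G(u)^{-1} = det(I - UM). `edges` is a list of undirected
    edges (i, j); edge m is split into directed edges 2m (i->j) and 2m+1
    (j->i), both carrying the undirected weight u[m]."""
    dedges = []
    for (i, j) in edges:
        dedges += [(i, j), (j, i)]
    n = len(dedges)
    M = np.zeros((n, n))
    for a, (oa, ta) in enumerate(dedges):
        for b, (ob, tb) in enumerate(dedges):
            # M[a, b] = 1 iff o(a) = t(b) and a is not the inverse of b
            if oa == tb and not (a // 2 == b // 2 and a != b):
                M[a, b] = 1.0
    U = np.diag([u[a // 2] for a in range(n)])
    return np.linalg.det(np.eye(n) - U @ M)
```

The same construction with the weights βij of (10) evaluates the right-hand side of (9) for a given pseudomarginal.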
To avoid confusion, we introduce a notation: the zeta function with undirected edge weights β = (βij)ij∈E is denoted by ZG(β). Note also that, since βij is the correlation coefficient of qij, we have |βij| < 1; equality cannot occur, by the positivity assumption on the probabilities.
Figure 1: w1-reduction. Figure 2: Example of the complete w-reduction.
3 Signed graphs with unique solution
In this section, we state the main result of this paper, Theorem 3. The result shows a new type of approach towards uniqueness conditions. The proof of the theorem is given in the next section.
3.1 Existing conditions on uniqueness
There have been many works on the uniqueness and/or convergence of the LBP algorithm for discrete graphical models [12, 13, 14, 15] and Gaussian graphical models [21]. As we are discussing binary pairwise graphical models, we review some of the conditions for this model. The following condition is given by Mooij and Kappen:
Theorem 2 ([13]). Let ρ(X) denote the spectral radius (i.e., the maximum of the absolute values of the eigenvalues) of a matrix X. If ρ(J M) < 1, then LBP converges to the unique fixed point, where J is a diagonal matrix defined by Je,e′ = tanh(|Je|)δe,e′.
This theorem gives the uniqueness property by bounding the strengths of the interactions, i.e., {|Jij|}ij∈E. Therefore, the condition does not depend on the signs of the interactions. The situation is the same in other existing conditions [12, 13, 14, 15]. For example, Heskes's condition [12] is
Σ_{j∈Ni} |Jij| < 1. (11)
These conditions are unsatisfactory in the sense that they do not use the information of the signs, {sgn Jij}ij∈E. In fact, the behavior of the LBP algorithm can be dramatically different if the signs of the compatibility functions are changed. Note that each edge compatibility function ψij tends to force the variables xi, xj to be equal if Jij > 0 and unequal if Jij < 0; the first case is called an attractive interaction and the latter repulsive.
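Theorem 2 is simple to check numerically. The sketch below is our own code: it builds the directed edge matrix M of (8) and evaluates the spectral radius ρ(JM), with J the diagonal matrix of tanh(|Je|). For a (q+1)-regular graph the spectral radius of M is q, so e.g. on K4 with a uniform coupling the condition reads 2 tanh |J| < 1, i.e., |J| < arctanh(1/2) ≈ 0.549, regardless of the signs.

```python
import numpy as np

def mooij_kappen_radius(edges, J):
    """Spectral radius rho(J M) of Theorem 2; if it is < 1, LBP converges to
    a unique fixed point. `edges` are undirected pairs (i, j); J[m] is the
    coupling on edge m, entering only through tanh(|J[m]|) on the diagonal."""
    dedges = []
    for (i, j) in edges:
        dedges += [(i, j), (j, i)]
    n = len(dedges)
    M = np.zeros((n, n))
    for a, (oa, ta) in enumerate(dedges):
        for b, (ob, tb) in enumerate(dedges):
            # M[a, b] = 1 iff o(a) = t(b) and a is not the inverse of b
            if oa == tb and not (a // 2 == b // 2 and a != b):
                M[a, b] = 1.0
    D = np.diag([np.tanh(abs(J[a // 2])) for a in range(n)])
    return max(abs(np.linalg.eigvals(D @ M)))
```

Because only |Je| enters, this criterion is blind to the attractive/repulsive structure, which is exactly the gap the sign-based condition of Theorem 3 addresses.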
In contrast to the above uniqueness conditions, we pursue another approach: we use the information of the signs, {sgn Jij}ij∈E, rather than the strengths. In this paper, we characterize the signed graphs that guarantee the uniqueness of the solution; this result is stated in Theorem 3.
3.2 Statement of main theorem of this section
We introduce basic terms to state the main theorem. A signed graph, (G, s), is a graph equipped with a sign map, s, from the edges to {±1}. A compatibility function defines the sign function, s, by s(ij) = sgn Jij. The sign function of all plus (resp. minus) signs is denoted by s+ (resp. s−). The deletion and subgraph operations on a signed graph are defined naturally by restricting the sign function.
Definition 1. A w-reduction of a signed graph (G, s) is a signed graph that is obtained by one of the following operations:
(w1) Erasure of a vertex of degree two. (Let j be a vertex of degree two and ij, jk (i ≠ k) be the connecting edges. Delete them and make a new edge ik with the sign s(ij)s(jk). See Figure 1.)
(w2) Deletion of a loop with minus sign. (An edge ij is called a loop if i = j.)
(w3) Contraction of a bridge. (An edge is a bridge if its deletion increases the number of connected components. The sign on the bridge can be either +1 or −1.)
Figure 3: B3. Figure 4: P3. Figure 5: D4. Figure 6: Example 4 in Subsection 3.3.
Note that each of the operations decreases the number of edges by one. A signed graph is w-reduced if no w-reduction is applicable. Any signed graph is reduced to a unique w-reduced signed graph called the complete w-reduction. An example of a complete w-reduction is given in Figure 2. From the viewpoint of computational complexity, finding the complete w-reduction is easy. (See the supplementary material for further discussion.) Here are some important (signed) graphs; see Figures 3, 4 and 5. A bouquet graph, Bn, is a graph with a single node and n loops. Pn is a graph with two vertices and n parallel edges.
Kn is the complete graph on n vertices. Cn is the cycle of length n. Dn is a signed graph obtained by duplicating each edge of Cn with plus and minus signs.
Definition 2. Two signed graphs (G, s) and (G, s′) are said to be gauge equivalent if there exists a map g : V −→ {±1} such that s′(ij) = s(ij)g(i)g(j). The map g is called a gauge transformation.
Theorem 3. For a signed graph (G, s) the following conditions are equivalent.
1. The LBP algorithm on G has a unique fixed point for any compatibility functions with sign s.
2. The complete w-reduction of (G, s) is one of the following: (i) B0; (ii) (B1, +); (iii) (P3, +, −, −) and (P3, +, +, −); (iv) (K4, s−) and its gauge equivalent signed graphs; (v) Dn and its w-reduced subgraphs (n ≥ 2).
The proof of this theorem is given in the next section.
3.3 Examples and experiments
In this subsection we present concrete examples of signed graphs which do or do not satisfy the condition of Theorem 3.
(Ex.1) Trees and graphs with a single cycle: In these cases it is well known that LBP has a unique fixed point irrespective of the compatibility functions [1, 22]. This fact is easily derived from Theorem 3 since their complete w-reductions are B0 or (B1, +).
(Ex.2) Complete graph Kn: (Kn, s) is w-reduced, since no w-reduction applies. For n = 4, the condition on the sign is given in 2.(iv). If n ≥ 5 it does not satisfy the condition for any sign.
(Ex.3) 2 × 2 grid graph: This graph does not satisfy the condition for any sign because its complete w-reduction is different from the signed graphs in item 2 of Theorem 3.
(Ex.4) Consider the signed graph in Figure 6. Notice that the products of signs along the five cycles are all minus. Applying (w2) and (w3), we see that the complete w-reduction is B0. Therefore the signed graph satisfies the condition.
We experimentally check the convergence behavior of the LBP algorithm on D4, which satisfies the condition of Theorem 3.
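The w-reductions of Definition 1 are mechanical enough to implement. The sketch below is our own representation (a signed multigraph as a list of (i, j, sign) triples, loops with i == j); for the bridge case it gauges a minus bridge to plus by flipping one endpoint before contracting, matching the remark that the sign on a bridge is immaterial.

```python
from collections import Counter, defaultdict

def connected(i, j, edges):
    """True if vertices i and j are joined by a path in `edges`."""
    adj = defaultdict(set)
    for a, b, _ in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {i}, [i]
    while stack:
        v = stack.pop()
        for w in adj[v] - seen:
            seen.add(w)
            stack.append(w)
    return j in seen

def w_reduce(edges):
    """Apply w-reductions (w1)-(w3) of Definition 1 until none applies."""
    edges = list(edges)
    while True:
        # (w2): delete a loop with minus sign
        loop = next((e for e in edges if e[0] == e[1] and e[2] == -1), None)
        if loop is not None:
            edges.remove(loop)
            continue
        # (w1): erase a vertex v of degree two with edges iv, vk, i != k
        deg = Counter()
        for a, b, _ in edges:
            deg[a] += 1
            deg[b] += 2 if a == b else 1
        done = False
        for v in list(deg):
            inc = [e for e in edges if v in (e[0], e[1])]
            if deg[v] == 2 and len(inc) == 2:
                (a, b, s1), (c, d, s2) = inc
                i = b if a == v else a
                k = d if c == v else c
                if i != k and i != v and k != v:
                    edges = [e for e in edges if e not in inc] + [(i, k, s1 * s2)]
                    done = True
                    break
        if done:
            continue
        # (w3): contract a bridge, gauging its sign to +1 first
        for idx, (i, j, s) in enumerate(edges):
            rest = edges[:idx] + edges[idx + 1:]
            if i != j and not connected(i, j, rest):
                if s == -1:  # gauge transformation flipping vertex j
                    rest = [(a, b, -t if (a == j) != (b == j) else t)
                            for (a, b, t) in rest]
                edges = [(i if a == j else a, i if b == j else b, t)
                         for (a, b, t) in rest]
                done = True
                break
        if not done:
            return edges
```

For instance, a path reduces to the empty graph B0, a minus loop is deleted, and a plus loop (B1, +) is already w-reduced.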
Since the LBP fixed point is unique, it is the absolute minimum of the Bethe free energy function. We set the compatibility functions Jij = ±J, hi = h and initialized the messages randomly. We judged the algorithm to have converged if the average message update was less than 10−3 after 50 iterations. The result is shown in Figure 7. LBP is not convergent in the white region on the right and convergent in the remaining gray region. Convergence is theoretically guaranteed for tanh(|J|) < 1/3 (|J| ⪅ 0.347) by Theorem 2. In the non-convergent region LBP appears to be unstable around the fixed point.
4 Proofs: conditions in terms of graph zeta function
The aim of this section is to prove Theorem 3. For the proof, Lemma 2, which is purely a result on the graph zeta function, is utilized.
Figure 7: Convergence region of LBP. Figure 8: X1 and X2.
4.1 Graph theoretic results
We denote by G − ϵ the deletion of an undirected edge ϵ from a graph G and by G/ϵ the contraction. A minor of a graph is obtained by repeated applications of deletion, contraction and removal of isolated vertices. The deletion and contraction operations have a natural meaning in the context of the graph zeta function, as follows:
Lemma 1. 1. Let ij be an edge; then ζG−ij(u)−1 = ζG(ũ)−1, where ũe is equal to ue if [e] ≠ ij and 0 otherwise. 2. Let ij be a non-loop edge; then ζG/ij(u)−1 = ζG(ũ)−1, where ũe is equal to ue if [e] ≠ ij and 1 otherwise.
Proof. From the prime cycle representation of the zeta function, both assertions are trivial.
Next, to prove Theorem 3, we formally define the notions of deletions, contractions and minors for signed graphs [23]. For a signed graph, the signed-deletion of an edge is just the deletion of the edge along with the sign on it. The signed-contraction of a non-loop edge ij ∈ E is defined up to gauge equivalence as follows. For any non-loop edge ij, there is a gauge equivalent signed graph that has the sign + on ij. The signed-contraction is obtained by contracting the edge.
The resulting signed graph is determined up to gauge equivalence. A signed minor of a signed graph is obtained by repeated applications of the signed-deletion, signed-contraction, and removal of isolated vertices.
Lemma 2. For a signed graph, (G, s), the following conditions are equivalent.
1. (G, s) is U-type. That is, if βij ∈ Is(ij) for all ij ∈ E then ZG(β)−1 > 0, where β = (βij)ij∈E, I+ = [0, 1) and I− = (−1, 0].
2. (G, s) is weakly U-type. That is, if βij ∈ Is(ij) for all ij ∈ E then ZG(β)−1 ≥ 0.
3. (B2, s+) is not contained as a signed minor.
4. The complete w-reduction of (G, s) is one of the following: (i) B0; (ii) (B1, s+); (iii) (P3, +, −, −) and (P3, +, +, −); (iv) (K4, s−) and its gauge equivalent signed graphs; (v) Dn and its w-reduced subgraphs (n ≥ 2).
The uniqueness condition in Theorem 3 is equivalent to all the conditions in this lemma. Here, we remark on properties of this condition (the proof is straightforward from the definition and Lemma 2): (1) (G, s) is U-type iff its gauge equivalents are U-type. (2) If (G, s) is U-type then its signed minors are U-type. We prove the equivalence in a cyclic manner. Here we give a sketch of the proof (details are given in the supplementary material).
Proof of 1 ⇒ 2. Trivial.
Proof of 2 ⇒ 3. If (G, s) is weakly U-type, then its signed minors are weakly U-type; this is obvious from Lemma 1. However, direct computation of the zeta function of (B2, s+) shows that this signed graph is not weakly U-type. In fact, in the ordering (e1, e2, ē1, ē2) of the directed edges, the weighted directed edge matrix of B2 is
BM = [ βϵ1 βϵ1 0 βϵ1 ; βϵ2 βϵ2 βϵ2 0 ; 0 βϵ1 βϵ1 βϵ1 ; βϵ2 0 βϵ2 βϵ2 ]
and det(I − BM) = (1 − βϵ1)(1 − βϵ2)(1 − βϵ1 − βϵ2 − 3βϵ1βϵ2). This value can be negative in the region 0 ≤ βϵ1, βϵ2 < 1.
Proof of 3 ⇒ 4. Note that if (G, s) does not contain (B2, s+) as a signed minor then any w-reduction of (G, s) also does not contain (B2, s+) as a signed minor; we can check this property for each type of w-reduction, (w1), (w2) and (w3).
Therefore, it is sufficient to show that if a w-reduced signed graph (G, s) does not contain (B2, +, +) as a signed minor then it is one of the five types. Notice that G has no vertex of degree less than three. First, if the nullity of G is less than three, it is not hard to see that the signed graph is of type (i), (ii) or (iii). Secondly, we consider the case where the graph G has nullity three. Note that all w-reduced signed graphs of nullity two have the signed minor (B1, +). Therefore, we can assume that G does not have a (plus) loop. Since (G, s) is w-reduced, G must be one of the following graphs: K4, P4, X1 and X2, where X1 and X2 are defined in Figure 8. It is easy to check that the possible ways of assigning signs to these graphs are among the types (iii)–(v). Finally, we consider the case where the nullity, n, is more than three. In this case, we can show that (G, s) must be Dn or a subgraph thereof. (Details are found in the supplementary material.)
Proof of 4 ⇒ 1. First we claim the following statement: if
ζG(u)−1 ≥ 0 for all u = (ue) ∈ ∏_{e∈⃗E} {0, s([e])}, (12)
then (G, s) is U-type. This claim can be proved using the property that ζG(u)−1 = det(I − UM) is linear in each variable ue. (That is, if we fix u except for one variable, say ue1, then ζG−1 = C1 + C2 ue1.) Take the product of the closed intervals from 0 to s(e) (e ∈ ⃗E) and form a hypercube. If there is a non-positive point in the hypercube then there must be a non-positive point in a face; we can repeat this argument until we arrive at a vertex. We check the condition (12) for all four classes. Notice that if (G, s) satisfies (12) then its gauge equivalents, deletions and signed-contractions have the same property. So far, we have proven the assertion for w-reduced graphs; we now extend the proof to arbitrary signed graphs.
For any signed graph, the complete w-reductions are obtained by first using the reductions (w1, w2) and then reducing the bridges (w3), because (w3) always makes the degrees bigger and does not create a loop. Therefore, the following two claims complete the proof. Claim 1. Let (G′, s′) be a (w3)-reduction of a signed graph (G, s), i.e., obtained by contraction of a bridge ϵ. If (G′, s′) has property (12) then (G, s) also has the property. Proof of Claim 1. Let b and b̄ be the corresponding directed edges of ϵ. Since any prime cycle passes b and b̄ the same number of times,

ζ_G^{-1}(u) = ζ_{G−ϵ}^{-1}(ũ) + u_b u_b̄ f(ũ),   (13)

where ũ is the restriction of u to G − ϵ and f is some function. Assume that s(ϵ) = 1. (The case s(ϵ) = −1 is completely analogous.) Since (G′, s′) has property (12), (G, s) has the property for (u_b, u_b̄) = (1, 1). For the cases (u_b, u_b̄) = (0, 0), (1, 0), (0, 1), we can deduce it from the property of G − ϵ. ■ Claim 2. Let (G′, s′) be a (w1)- or (w2)-reduction of a signed graph (G, s). If (G′, s′) is U-type then (G, s) is U-type. Proof of Claim 2. The case of (w1) is trivial. We prove the case of (w2). From the multivariate Ihara formula, the positivity of Z_{G′}^{-1}(β) on the set ∏_{ij∈E} Is(ij) implies the positive definiteness of I + D̂′ − Â′ on the set. Adding a minus loop corresponds to adding 2β²(1 − β²)^{-1} − 2β(1 − β²)^{-1} = −2β(1 + β)^{-1} to the diagonal, where −1 < β ≤ 0; this quantity is nonnegative on that interval. Therefore the new matrix is also positive definite and (G, s) is U-type. ■ 4.2 Proof of Theorem 3 Proof of 2 ⇒ 1. The basic strategy is to use the following theorem. Theorem 4 (Index sum theorem [16]). As usual, consider the Bethe free energy function F defined on L(G). Assume that det ∇²F(q) ≠ 0 for all LBP fixed points q. Then the sum of the indices at the LBP fixed points is equal to one:

∑_{q: ∇F(q)=0} sgn(det ∇²F(q)) = 1,

where sgn(x) := 1 if x > 0, −1 if x < 0. (We call each summand, which is +1 or −1, the index of F at q.) At each LBP fixed point, the beta values of the solution can be computed using (10).
Since the signs of βij and Jij are equal [16], β = (βij) ∈ ∏_{ij∈E} Is(ij) is satisfied. Therefore, from the assumption and Lemma 2, the index of the solution is positive. We conclude the uniqueness of the solution from the above index sum theorem. Proof of 1 ⇒ 2. We show the contraposition. From Lemma 2, (G, s) is not weakly U-type; there is β = (βij) ∈ ∏_{ij∈E} Is(ij) such that ζ_G^{-1}(β) < 0. Take pseudomarginals q = {qij}_{ij∈E} ∪ {qi}_{i∈V} whose correlation coefficients of qij equal βij. (For example, set χij = βij, mi = 0.) We can choose Jij and hi such that

∏_{ij∈E} qij(xi, xj) ∏_{i∈V} qi^{1−di}(xi) ∝ exp( ∑_{ij∈E} Jij xi xj + ∑_{i∈V} hi xi ).   (14)

This construction implies that q corresponds to an LBP fixed point with compatibility functions {Jij, hi}. This solution has index −1 by definition. If this were the unique solution, it would contradict the index sum formula. Therefore, there must be other solutions. 5 Concluding remarks In this paper we have developed a new approach to the uniqueness problem of the LBP algorithm. As a result, we have obtained a new class of LBPs that are guaranteed to have a unique solution. The uniqueness problem is reduced to properties of graph zeta functions, Lemma 2, using the index sum formula. In contrast to the existing conditions, our uniqueness guarantee includes graphical models with strong interactions. Though our result is shown for binary pairwise models, the idea can be extended to factor graph models with many states. In fact, Theorem 1 has been extended to the general setting of the LBP algorithm on factor graphs [20]. One direction for future research is to combine the information of the signs and strengths of the interactions to show uniqueness. The uniqueness problem is then reduced to the positivity of the graph zeta function on a restricted set, rather than on the hypercube of size one.
If we can check the positivity of graph zeta functions theoretically or algorithmically, the result can be used for a better guarantee of the uniqueness.

References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA, 1988.
[2] P.F. Felzenszwalb and D.P. Huttenlocher. Efficient belief propagation for early vision. International Journal of Computer Vision, 70(1):41–54, 2006.
[3] D. Baron, S. Sarvotham, and R.G. Baraniuk. Bayesian compressive sensing via belief propagation. IEEE Transactions on Signal Processing, 58(1):269–280, 2010.
[4] R.J. McEliece, D.J.C. MacKay, and J.F. Cheng. Turbo decoding as an instance of Pearl's "belief propagation" algorithm. IEEE J. Sel. Areas Commun., 16(2):140–152, 1998.
[5] S. Ikeda, T. Tanaka, and S. Amari. Stochastic reasoning, free energy, and information geometry. Neural Computation, 16(9):1779–1810, 2004.
[6] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[7] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Generalized belief propagation. Advances in Neural Information Processing Systems, 13:689–695, 2001.
[8] A.L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14(7):1691–1722, 2002.
[9] A.L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915–936, 2003.
[10] Y.W. Teh and M. Welling. The unified propagation and scaling algorithm. Advances in Neural Information Processing Systems, 2:953–960, 2002.
[11] T. Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. Journal of Artificial Intelligence Research, 26(1):153–190, 2006.
[12] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379–2413, 2004.
[13] J.M. Mooij and H.J.
Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Transactions on Information Theory, 53(12):4422–4437, 2007.
[14] A.T. Ihler, J.W. Fisher, and A.S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6(1):905–936, 2006.
[15] S. Tatikonda and M.I. Jordan. Loopy belief propagation and Gibbs measures. Uncertainty in AI, 18:493–500, 2002.
[16] Y. Watanabe and K. Fukumizu. Graph zeta function in the Bethe free energy and loopy belief propagation. Advances in Neural Information Processing Systems, 22:2017–2025, 2009.
[17] M. Kotani and T. Sunada. Zeta functions of finite graphs. J. Math. Sci. Univ. Tokyo, 7(1):7–25, 2000.
[18] K. Hashimoto. Zeta functions of finite graphs and representations of p-adic groups. Automorphic Forms and Geometry of Arithmetic Varieties, 15:211–280, 1989.
[19] H.M. Stark and A.A. Terras. Zeta functions of finite graphs and coverings. Advances in Mathematics, 121(1):124–165, 1996.
[20] Y. Watanabe and K. Fukumizu. Loopy belief propagation, Bethe free energy and graph zeta function. arXiv:1103.0605.
[21] D.M. Malioutov, J.K. Johnson, and A.S. Willsky. Walk-sums and belief propagation in Gaussian graphical models. Journal of Machine Learning Research, 7:2064, 2006.
[22] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1–41, 2000.
[23] T. Zaslavsky. Characterizations of signed graphs. Journal of Graph Theory, 5(4):401–406, 1981.
Improving Topic Coherence with Regularized Topic Models David Newman University of California, Irvine newman@uci.edu Edwin V. Bonilla Wray Buntine NICTA & Australian National University {edwin.bonilla, wray.buntine}@nicta.com.au Abstract Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, results set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflect broad patterns in the external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data. 1 Introduction Topic modeling holds much promise for improving the ways users search, discover, and organize online content by automatically extracting semantic themes from collections of text documents. Learned topics can be useful in user interfaces for ad-hoc document retrieval [18]; driving faceted browsing [14]; clustering search results [19]; or improving display of search results by increasing result diversity [10]. When the text being modeled is plentiful, clear and well written (e.g. large collections of abstracts from scientific literature), learned topics are usually coherent, easily understood, and fit for use in user interfaces. However, topics are not always consistently coherent, and even with relatively well written text, one can learn topics that are a mix of concepts or hard to understand [1, 6]. 
This problem is exacerbated for content that is sparse or noisy, such as blog posts, tweets, or web search result snippets. Take for instance the task of learning categories in clustering search engine results. A few searches with Carrot2, Yippee, or WebClust quickly demonstrate that consistently learning meaningful topic facets is a difficult task [5]. Our goal in this paper is to improve the coherence, interpretability and ultimate usability of learned topics. To achieve this we propose QUAD-REG and CONV-REG, two new methods for regularizing topic models, which produce more coherent and interpretable topics. Our work is predicated on recent evidence that a pointwise mutual information-based score (PMI-Score) is highly correlated with human-judged topic coherence [15, 16]. We develop two Bayesian regularization formulations that are designed to improve PMI-Score. We experiment with five search result datasets from 7M Blog posts, four search result datasets from 1M News articles, and four datasets of Google search results. Using these thirteen datasets, our experiments demonstrate that both regularizers consistently improve topic coherence and interpretability, as measured separately by PMI-Score and human judgements. To the best of our knowledge, our models are the first to address the problem of learning topics when dealing with limited and/or noisy text content. This work opens up new application areas for topic modeling. 2 Topic Coherence and PMI-Score Topics learned from a statistical topic model are formally a multinomial distribution over words, and are often displayed by printing the 10 most probable words in the topic. These top-10 words usually provide sufficient information to determine the subject area and interpretation of a topic, and distinguish one topic from another. However, topics learned on sparse or noisy text data are often less coherent, difficult to interpret, and not particularly useful.
Some of these noisy topics can be vaguely interpretable, but contain (in the top-10 words) one or two unrelated words – while other topics can be practically incoherent. In this paper we wish to improve topic models learned on document collections where the text data is sparse and/or noisy. We postulate that using additional (possibly external) data will regularize the learning of the topic models. Therefore, our goal is to improve topic coherence. Topic coherence – meaning semantic coherence – is a human judged quality that depends on the semantics of the words, and cannot be measured by model-based statistical measures that treat the words as exchangeable tokens. Fortunately, recent work has demonstrated that it is possible to automatically measure topic coherence with near-human accuracy [16, 15] using a score based on pointwise mutual information (PMI). In that work they showed (using 6000 human evaluations) that the PMI-Score broadly agrees with human-judged topic coherence. The PMI-Score is motivated by measuring word association between all pairs of words in the top-10 topic words. PMI-Score is defined as follows:

PMI-Score(w) = (1/45) ∑_{i<j} PMI(wi, wj),  i, j ∈ {1 . . . 10}   (1)

where

PMI(wi, wj) = log [ P(wi, wj) / (P(wi) P(wj)) ],   (2)

and 45 is the number of PMI scores over the set of distinct word pairs in the top-10 words. A key aspect of this score is that it uses external data – that is data not used during topic modeling. This data could come from a variety of sources, for example the corpus of 3M English Wikipedia articles. For this paper, we will use both PMI-Score and human judgements to measure topic coherence. Note that we can measure the PMI-Score of an individual topic, or for a topic model of T topics (in that case PMI-Score will refer to the average of T PMI-Scores). This PMI-Score – and the idea of using external data to measure it – forms the foundation of our idea for regularization.
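Equations (1) and (2) are straightforward to compute once pair and marginal probabilities have been estimated from the external corpus. A minimal sketch (function and argument names are ours, not from the paper):

```python
import math
from itertools import combinations

def pmi_score(top_words, pair_prob, word_prob):
    """Mean PMI over the 45 distinct pairs among a topic's top-10 words (Eqs. 1-2).

    pair_prob and word_prob must be estimated from external data
    (e.g. Wikipedia), not from the corpus being modeled.
    """
    assert len(top_words) == 10
    pairs = list(combinations(top_words, 2))  # the 45 distinct pairs
    total = 0.0
    for wi, wj in pairs:
        p_ij = pair_prob[frozenset((wi, wj))]
        total += math.log(p_ij / (word_prob[wi] * word_prob[wj]))
    return total / len(pairs)
```

If every pair co-occurs exactly as often as independence predicts, the score is zero; positive scores indicate associated, and hence plausibly coherent, top words.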
3 Regularized Topic Models In this section we describe our approach to regularization in topic models by proposing two different methods: (a) a quadratic regularizer (QUAD-REG) and (b) a convolved Dirichlet regularizer (CONV-REG). We start by introducing the standard notation in topic modeling and the baseline latent Dirichlet allocation method (LDA, [4, 9]). 3.1 Topic Modeling and LDA Topic models are a Bayesian version of probabilistic latent semantic analysis [11]. In standard LDA topic modeling each of D documents in the corpus is modeled as a discrete distribution over T latent topics, and each topic is a discrete distribution over the vocabulary of W words. For document d, the distribution over topics, θt|d, is drawn from a Dirichlet distribution Dir[α]. Likewise, each distribution over words, φw|t, is drawn from a Dirichlet distribution, Dir[β]. For the ith token in a document, a topic assignment, zid, is drawn from θt|d and the word, xid, is drawn from the corresponding topic, φw|zid. Hence, the generative process in LDA is given by:

θt|d ∼ Dirichlet[α]   φw|t ∼ Dirichlet[β]   (3)
zid ∼ Mult[θt|d]   xid ∼ Mult[φw|zid]   (4)

We can compute the posterior distribution of the topic assignments via Gibbs sampling by writing down the joint probability, integrating out θ and φ, and following a few simple mathematical manipulations to obtain the standard Gibbs sampling update:

p(zid = t | xid = w, z¬i) ∝ [(N¬i_wt + β) / (N¬i_t + Wβ)] (N¬i_td + α)   (5)

where z¬i denotes the set of topic assignment variables except the ith variable; Nwt is the number of times word w has been assigned to topic t; Ntd is the number of times topic t has been assigned to document d, and Nt = ∑_{w=1}^W Nwt. Given samples from the posterior distribution we can compute point estimates of the document-topic proportions θt|d and the word-topic probabilities φw|t. We will denote henceforth φt as the vector of word probabilities for a given topic t and analogously for other variables.
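A single collapsed Gibbs update of Equation (5) only needs the three count arrays. The sketch below uses our own variable names (it is not the authors' code): it removes a token's current assignment, samples a new topic, and restores the counts.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_token(i, d, w, z, Nwt, Ntd, Nt, alpha, beta):
    """One collapsed Gibbs step (Eq. 5) for token i with word w in document d.

    Nwt: W x T word-topic counts, Ntd: T x D topic-document counts,
    Nt: length-T topic totals, z: flat array of topic assignments.
    """
    W = Nwt.shape[0]
    t_old = z[i]
    # remove the token's current assignment, giving the "not-i" counts
    Nwt[w, t_old] -= 1; Ntd[t_old, d] -= 1; Nt[t_old] -= 1
    # p(z_id = t | ...) proportional to (Nwt + beta)/(Nt + W*beta) * (Ntd + alpha)
    p = (Nwt[w] + beta) / (Nt + W * beta) * (Ntd[:, d] + alpha)
    t_new = rng.choice(len(Nt), p=p / p.sum())
    # add the token back under the sampled topic
    Nwt[w, t_new] += 1; Ntd[t_new, d] += 1; Nt[t_new] += 1
    z[i] = t_new
```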
3.2 Regularization via Structured Priors To learn better topic models for small or noisy collections we introduce structured priors on φt based upon external data, which has a regularization effect on the standard LDA model. More specifically, our priors on φt will depend on the structural relations of the words in the vocabulary as given by external data, which will be characterized by the W × W "covariance" matrix C. Intuitively, C is a matrix that captures the short-range dependencies between words in the external data. More importantly, we are only interested in relatively frequent terms from the vocabulary, so C will be a sparse matrix and hence computations are still feasible for our methods to be used in practice. 3.3 Quadratic Regularizer (QUAD-REG) Here we use a standard quadratic form with a trade-off factor. Therefore, given a matrix of word dependencies C, we can use the prior:

p(φt | C) ∝ (φtᵀ C φt)^ν   (6)

for some power ν. Note we do not know the normalization factor but for our purposes of MAP estimation we do not need it. The log posterior (omitting irrelevant constants) is given by:

LMAP = ∑_{i=1}^W Nit log φi|t + ν log(φtᵀ C φt)   (7)

Optimizing Equation (7) with respect to φw|t subject to the constraint ∑_{i=1}^W φi|t = 1, we obtain the following fixed point update:

φw|t ← [1 / (Nt + 2ν)] ( Nwt + 2ν φw|t ∑_{i=1}^W Ciw φi|t / (φtᵀ C φt) )   (8)

We note that unlike other topic models in which a covariance or correlation structure is used (as in the correlated topic model, [3]) in the context of correlated priors for θt|d, our method does not require the inversion of C, which would be impractical for even modest vocabulary sizes. By using the update in Equation (8) we obtain the values for φw|t. This means we no longer have neat conjugate priors for φw|t and thus the sampling in Equation (5) does not hold. Instead, at the end of each major Gibbs cycle, φw|t is re-estimated and the corresponding Gibbs update becomes:

p(zid = t | xid = w, z¬i, φw|t) ∝ φw|t (N¬i_td + α).
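The fixed point update (8) costs one (sparse) matrix-vector product per topic, and it is self-normalizing: since ∑_w φw|t (Cφt)_w = φtᵀCφt, the updated vector sums to one without an explicit projection. A sketch of one step for a single topic, with our own names and a dense C for brevity:

```python
import numpy as np

def quad_reg_update(phi, counts, C, nu):
    """One step of the QUAD-REG fixed point (Eq. 8) for one topic.

    phi: current word probabilities for the topic (length W, sums to 1)
    counts: Nwt for this topic (length W); C: W x W word-dependency matrix
    """
    Nt = counts.sum()
    Cphi = C @ phi
    quad = phi @ Cphi  # the quadratic form phi^T C phi
    return (counts + 2 * nu * phi * Cphi / quad) / (Nt + 2 * nu)
```

Iterating this a few times between Gibbs sweeps (the paper uses 10 iterations every 20 sweeps) gives the φw|t used in the semi-collapsed sampler.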
(9)

3.4 Convolved Dirichlet Regularizer (CONV-REG) Another approach to leveraging information on word dependencies from external data is to consider that each φt is a mixture of word probabilities ψt, where the coefficients are constrained by the word-pair dependency matrix C:

φt ∝ C ψt where ψt ∼ Dirichlet(γ1).   (10)

Each topic has a different ψt drawn from a Dirichlet, thus the model is a convolved Dirichlet. This means that we convolve the supplied topic to include a spread of related words. Then we have that:

p(w | z = t, C, ψt) = ∏_{i=1}^W ( ∑_{j=1}^W Cij ψj|t )^{Nit}   (11)

Table 1: Search result datasets came from a collection of 7M Blogs, a collection of 1M News articles, and the web. The first two collections were indexed with Lucene. The queries below were issued to create five Blog datasets, four News datasets, and four Web datasets. Search result set sizes ranged from 1000 to 18,590. For Blogs and News, half of each dataset was set aside for Test, and Train was sampled from the remaining half. For Web, Train was the top-40 search results.

Name          Query                                    # Results  DTest  DTrain
Blogs
beijing       beijing olympic ceremony                 5024       2512   39
climate       climate change                           14,932     7466   58
obama         obama debate                             18,590     9295   72
palin         palin interview                          10,478     5239   40
vista         vista problem                            4214       2107   32
News
baseball      major league baseball game team player   3774       1887   29
drama         television show series drama             3024       1512   23
health        health medicine insurance                1655       828    25
legal         law legal crime court                    2976       1488   23
Web
depression    depression                               1000       1000   40
migraine      migraine                                 1000       1000   40
america       america                                  1000       1000   40
south africa  south africa                             1000       1000   40

We obtain the MAP solution to ψt by optimizing:

LMAP = ∑_{i=1}^W Nit log ∑_{j=1}^W Cij ψj|t + ∑_{j=1}^W (γ − 1) log ψj|t  s.t. ∑_{j=1}^W ψj|t = 1.   (12)

Solving for ψw|t we obtain:

ψw|t ∝ ∑_{i=1}^W [ Nit Ciw / ∑_{j=1}^W Cij ψj|t ] ψw|t + γ.
(13) We follow the same semi-collapsed inference procedure used for QUAD-REG, with the updates in Equations (13) and (10) producing the values for φw|t to be used in the semi-collapsed sampler (9). 4 Search Result Datasets Text datasets came from a collection of 7M Blogs (from ICWSM 2009), a collection of 1M News articles (LDC Gigaword), and the Web (using Google’s search). Table 1 shows a summary of the datasets used. These datasets provide a diverse range of content for topic modeling. Blogs are often written in a chatty and informal style, which tends to produce topics that are difficult to interpret. News articles are edited to a higher standard, so learned topics are often fairly interpretable when one models, say, thousands of articles. However, our experiments use 23-29 articles, limiting the data for topic learning. Snippets from web search result present perhaps the most sparse data. For each dataset we created the standard bag-of-words representation and performed fairly standard tokenization. We created a vocabulary of terms that occurred at least five times (or two times, for the Web datasets), after excluding stopwords. We learned the topic models on the Train data set, setting T = 15 for Blogs datasets, T = 10 for News datasets, and T = 8 for the Web datasets. Construction of C: The word co-occurrence data for regularization was obtained from the entire LDC dataset of 1M articles (for News), a subset of the 7M blog posts (for Blogs), and using all 3M English Wikipedia articles (for Web). Word co-occurrence was computed using a sliding window of ten words to emphasize short-range dependency. Note that we only kept positive PMI values. For each dataset we created a W × W matrix of co-occurrence counts using the 2000-most frequent terms in the vocabulary for that dataset, thereby maintaining reasonably good sparsity for these data. 
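The construction of C described above can be sketched as follows. The paper leaves some details open (for instance, whether C stores raw co-occurrence counts or the positive PMI values themselves), so this version, which stores positive PMI computed from 10-word sliding windows, is one plausible reading; all names are ours:

```python
import numpy as np
from collections import Counter

def build_C(docs, vocab, window=10):
    """Word-dependency matrix from external text: co-occurrence within a
    sliding window, keeping only positive PMI values (one plausible reading)."""
    idx = {w: i for i, w in enumerate(vocab)}
    pair, single, n_windows = Counter(), Counter(), 0
    for doc in docs:
        for s in range(max(1, len(doc) - window + 1)):
            win = {idx[w] for w in doc[s:s + window] if w in idx}
            n_windows += 1
            for i in win:
                single[i] += 1
            for i in win:
                for j in win:
                    if i < j:
                        pair[(i, j)] += 1
    C = np.zeros((len(vocab), len(vocab)))
    for (i, j), nij in pair.items():
        pmi = np.log(nij * n_windows / (single[i] * single[j]))
        if pmi > 0:  # keep only positive PMI values
            C[i, j] = C[j, i] = pmi
    return C
```

Restricting `vocab` to the 2000 most frequent terms, as the paper does, keeps C sparse enough for the fixed-point updates to stay cheap.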
Selecting most-frequent terms makes sense because our objective is to improve PMI-Score, which is defined over the top-10 topic words, which tend to involve relatively high-frequency terms. Using high-frequency terms also avoids potential numerical problems of large PMI values arising from co-occurrence of rare terms.

[Figure 1: PMI-Score and test perplexity of regularized methods vs. LDA on Blogs, T = 15. Both regularization methods improve PMI-Score and perplexity for all datasets, with the exception of 'vista' where QUAD-REG has slightly higher perplexity.]

5 Experiments In this section we evaluate our regularized topic models by reporting the average PMI-Score over 10 different runs, each computed using Equations (1) and (2) (and then in Section 5.4, we use human judgements). Additionally, we report the average test data perplexity over 10 samples from the posterior across ten independent chains, where each perplexity is calculated using:

Perp(xtest) = exp( −(1/Ntest) log p(xtest) ),  log p(xtest) = ∑_{dw} Ntest_dw log ∑_t φw|t θt|d   (14)

θt|d = (α + Ntd) / (Tα + Nd),  φw|t = (β + Nwt) / (Wβ + Nt).   (15)

The document mixture θt|d is learned from test data, and the log probability of the test words is computed using this mixture. Each φw|t is computed by Equation (15) for the baseline LDA model, and it is used directly for the QUAD-REG and CONV-REG methods. For the Gibbs sampling algorithms we set α = 0.05 N/(DT) and β = 0.01 (initially). This setting of α allocates 5% of the probability mass for smoothing. We run the sampling for 300 iterations; applied the fixed point iterations (on the regularized models) 10 times every 20 Gibbs iterations and ran 10 different random initializations (computing average over these runs).
We used T = 10 for the News datasets, T = 15 for the Blogs datasets and T = 8 for the Web datasets. Note that test perplexity is computed on DTest (Table 1) that is at least an order of magnitude larger than the training data. After some preliminary experiments, we fixed QUAD-REG's regularization parameter to ν = 0.5 N/T. 5.1 Results Figures 1 and 2 show the average PMI-Scores and average test perplexities for the Blogs and News datasets. For Blogs (Figure 1) we see that our regularized models consistently improve PMI-Score and test perplexity on all datasets with the exception of the 'vista' dataset where QUAD-REG has slightly higher perplexity. For News (Figure 2) we see that both regularization methods improve PMI-Score and perplexity for all datasets. Hence, we can conclude that our regularized models not only provide a good characterization of the collections but also improve the coherence of the learned topics as measured by the PMI-Score. It is reasonable to expect both PMI-Score and perplexity to improve as semantically related words should be expected in topic models, so with little data, our regularizers push both measures in a positive direction. 5.2 Coherence of Learned Topics Table 2 shows selected topics learned by LDA and our QUAD-REG model. To obtain correspondence of topics (for this experiment), we initialized the QUAD-REG model with the converged LDA model. Overall, our regularized model tends to learn topics that are more focused on a particular subject, contain fewer spurious words, and therefore are easier to interpret. The following list explains how the regularized version of the topic is more useful:

[Figure 2: PMI-Score and test perplexity of regularized methods vs. LDA on News, T = 10. Both regularization methods improve PMI-Score and perplexity for all datasets.]
Table 2: Selected topics improved by regularization. Each pair first shows an LDA topic and the corresponding topic produced by QUAD-REG (initialized from the converged LDA model). QUAD-REG's PMI-Scores were always better than LDA's on these examples. The regularized versions tend to be more focused on a particular subject and easier to interpret.

Name     Model  Topic
beijing  LDA    girl phony world yang fireworks interest maybe miaoke peiyi young
         REG    girl yang peiyi miaoke lin voice real lip music sync
obama    LDA    palin biden sarah running mccain media hilton stein paris john
         REG    palin sarah mate running biden vice governor selection alaska choice
drama    LDA    wire david place police robert baltimore corner friends com simon
         REG    drama episode characters series cop cast character actors detective emmy
legal    LDA    saddam american iraqi iraq judge against charges minister thursday told
         REG    iraqi saddam iraq military crimes tribunal against troops accused officials

beijing: QUAD-REG has better focus on the names and issues involved in the controversy over the Chinese replacing the young girl doing the actual singing at the Olympic opening ceremony with the girl who lip-synched.
obama: QUAD-REG focuses on Sarah Palin's selection as a GOP Vice Presidential candidate, while LDA has a less clear theme including the story of Paris Hilton giving Palin fashion advice.
drama: QUAD-REG learns a topic related to television police dramas, while LDA narrowly focuses on David Simon's The Wire along with other scattered terms: robert and friends.
legal: The LDA topic is somewhat related to Saddam Hussein's appearance in court, but includes uninteresting terms such as: thursday, and told. The QUAD-REG topic is an overall better category relating to the tribunal and charges against Saddam Hussein.

5.3 Modeling of Google Search Results Are our regularized topic models useful for building facets in a clustering-web-search-results type of application?
Figure 3 (top) shows the average PMI-Score (mean +/− two standard errors over 10 runs) for the four searches described in Table 1 (Web dataset) and the average perplexity using top-1000 results as test data (bottom). In all cases QUAD-REG and CONV-REG learn better topics, as measured by PMI-Score, compared to those learned by LDA. Additionally, whereas QUAD-REG exhibits slightly higher values of perplexity compared to LDA, CONV-REG consistently improved perplexity on all four search datasets. This level of improvement in PMI-Score through regularization was not seen in News or Blogs likely because of the greater sparsity in these data.

[Figure 3: PMI-Score and test perplexity of regularized methods vs. LDA on Google search results. Both methods improve PMI-Score and CONV-REG also improves test perplexity, which is computed using top-1000 results as test data (therefore top-1000 test perplexity is not reported).]

5.4 Human Evaluation of Regularized Topic Models So far we have evaluated our regularized topic models by assessing (a) how faithful their representation is to the collection of interest, as measured by test perplexity, and (b) how coherent they are, as given by the PMI-Score. Ultimately, we have hypothesized that humans will find our regularized topic models more semantically coherent than baseline LDA and therefore more useful for tasks such as document clustering, search and browsing. To test this hypothesis we performed further experiments where we asked humans to directly compare our regularized topics with LDA topics and choose which is more coherent. As our experimental results in this section show, our regularized topic models significantly outperform LDA based on actual human judgements.
To evaluate our models with human judgments we used Amazon Mechanical Turk (AMT, https://www.mturk.com) where we asked workers to compare topic pairs (one topic given by one of our regularized models and the other topic given by LDA) and to answer explicitly which topic was more coherent according to how clearly they represented a single theme/idea/concept. To keep the cognitive load low (while still having a fair and sound evaluation of the topics) we described each topic by its top-10 words. We provided an additional option "...Can't decide..." indicating that the user could not find a qualitative difference between the topics presented. We also included control comparisons to filter out bad workers. These control comparisons were done by replacing a randomly-selected topic word with an intruder word. To have aligned (matched) pairs of topics, the sampling procedure of our regularized topic models was initialized with LDA's topic assignment obtained after convergence of Gibbs sampling. These experiments produced a total of 3650 topic-comparison human evaluations and the results can be seen in Figure 4. 6 Related Work Several authors have investigated the use of domain knowledge from external sources in topic modeling. For example, [7, 8] propose a method for combining topic models with ontological knowledge to tag web pages. They constrain the topics in an LDA-based model to be amongst those in the given ontology. [20] also use statistical topic models with a predefined set of topics to address the task of query classification. Our goal is different to theirs in that we are not interested in constraining the learned topics to those in the external data but rather in improving the topics in small or noisy collections by means of regularization. Along a similar vein, [2] incorporate domain knowledge into topic models by encouraging some word pairs to have similar probability within a topic.
Their method, like ours, is based on replacing the standard Dirichlet prior over word-topic probabilities. However, unlike our approach that is entirely data-driven, it appears that their method relies on interactive feedback from the user or on the careful selection of words within an ontological concept. The effect of structured priors in LDA has been investigated by [17] who showed that learning hierarchical Dirichlet priors over the document-topic distribution can provide better performance than using a symmetric prior. Our work is motivated by the fact that priors matter but is focused on a rather different use case of topic models, i.e. when we are dealing with small or noisy collections and want to improve the coherence of the topics by re-defining the prior on the word-topic distributions. Priors that introduce correlations in topic models have been investigated by [3]. Unlike our work that considers priors on the word-topic distributions (φw|t), they introduce a correlated prior on the topic proportions (θt|d). In our approach, considering similar priors for φw|t to those studied by [3] would be infeasible as they would require the inverse of a W × W covariance matrix.

[Figure 4: The proportion of times workers in Amazon Mechanical Turk selected each topic model as showing better coherence. In nearly all cases our regularized models outperform LDA. CONV-REG outperforms LDA in 11 of 13 datasets. QUAD-REG never performs worse than LDA (at the dataset level). On average (from 3650 topic comparisons) workers selected QUAD-REG as more coherent 57% of the time while they selected LDA as more coherent only 37% of the time. Similarly, they chose CONV-REG's topics as more coherent 56% of the time, and LDA as more coherent only 39% of the time. These results are statistically significant at 5% level of significance when performing a paired t-test on the total values across all datasets. Note that the two bars corresponding to each dataset do not add up to 100% as the remaining mass corresponds to "...Can't decide..." responses.]

Network structures associated with a collection of documents are used in [12] in order to "smooth" the topic distributions of the PLSA model [11]. Our methods are different in that they do not require the collection under study to have an associated network structure as we aim at addressing the different problem of regularizing topic models on small or noisy collections. Additionally, their work is focused on regularizing the document-topic distributions instead of the word-topic distributions. Finally, the work in [13], contemporary to ours, also addresses the problem of improving the quality of topic models. However, our approach focuses on exploiting the knowledge provided by external data given the noisy and/or small nature of the collection of interest. 7 Discussion & Conclusions In this paper we have proposed two methods for regularization of LDA topic models based upon the direct inclusion of word dependencies in our word-topic prior distributions. We have shown that our regularized models can improve the coherence of learned topics significantly compared to the baseline LDA method, as measured by the PMI-Score and assessed by human workers in Amazon Mechanical Turk. While our focus in this paper has been on small, and small and noisy datasets, we would expect our regularization methods also to be effective on large and noisy datasets. Note that mixing and rate of convergence may be more of an issue with larger datasets, since our regularizers use a semi-collapsed Gibbs sampler.
We will address these large noisy collections in future work. Acknowledgments NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program. DN was also supported by an NSF EAGER Award, an IMLS Research Grant, and a Google Research Award. References
[1] L. AlSumait, D. Barbará, J. Gentle, and C. Domeniconi. Topic significance ranking of LDA generative models. In ECML/PKDD, 2009.
[2] D. Andrzejewski, X. Zhu, and M. Craven. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In ICML, 2009.
[3] D.M. Blei and J.D. Lafferty. Correlated topic models. In NIPS, 2005.
[4] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[5] C. Carpineto, S. Osinski, G. Romano, and D. Weiss. A survey of web clustering engines. ACM Comput. Surv., 41(3), 2009.
[6] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
[7] C. Chemudugunta, A. Holloway, P. Smyth, and M. Steyvers. Modeling documents by combining semantic concepts with unsupervised statistical learning. In ISWC, 2008.
[8] C. Chemudugunta, P. Smyth, and M. Steyvers. Combining concept hierarchies and statistical topic models. In CIKM, 2008.
[9] T. Griffiths and M. Steyvers. Probabilistic topic models. In Latent Semantic Analysis: A Road to Meaning, 2006.
[10] S. Guo and S. Sanner. Probabilistic latent maximal marginal relevance. In SIGIR, 2010.
[11] T. Hofmann. Probabilistic latent semantic indexing. In SIGIR, 1999.
[12] Q. Mei, D. Cai, D. Zhang, and C. Zhai. Topic modeling with network regularization. In WWW, 2008.
[13] D. Mimno, H. Wallach, E. Talley, M. Leenders, and A. McCallum. Optimizing semantic coherence in topic models.
In EMNLP, 2011.
[14] D.M. Mimno and A. McCallum. Organizing the OCA: learning faceted subjects from a library of digital books. In JCDL, 2007.
[15] D. Newman, J.H. Lau, K. Grieser, and T. Baldwin. Automatic evaluation of topic coherence. In NAACL HLT, 2010.
[16] D. Newman, Y. Noh, E. Talley, S. Karimi, and T. Baldwin. Evaluating topic models for digital libraries. In JCDL, 2010.
[17] H. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.
[18] X. Wei and W.B. Croft. LDA-based document models for ad-hoc retrieval. In SIGIR, 2006.
[19] H.-J. Zeng, Q.-C. He, Z. Chen, W.-Y. Ma, and J. Ma. Learning to cluster web search results. In SIGIR, 2004.
[20] H. Zhai, J. Guo, Q. Wu, X. Cheng, H. Sheng, and J. Zhang. Query classification based on regularized correlated topic model. In Proceedings of the International Joint Conference on Web Intelligence and Intelligent Agent Technology, 2009.
Beating SGD: Learning SVMs in Sublinear Time Elad Hazan Tomer Koren Technion, Israel Institute of Technology Haifa, Israel 32000 {ehazan@ie,tomerk@cs}.technion.ac.il Nathan Srebro Toyota Technological Institute Chicago, Illinois 60637 nati@ttic.edu Abstract We present an optimization approach for linear SVMs based on a stochastic primal-dual approach, where the primal step is akin to an importance-weighted SGD, and the dual step is a stochastic update on the importance weights. This yields an optimization method with a sublinear dependence on the training set size, and the first method for learning linear SVMs with runtime less than the size of the training set required for learning! 1 Introduction Stochastic approximation (online) approaches, such as stochastic gradient descent and stochastic dual averaging, have become the optimization method of choice for many learning problems, including linear SVMs. This is not surprising, since such methods yield optimal generalization guarantees with only a single pass over the data. They therefore in a sense have optimal, unbeatable runtime: from a learning (generalization) point of view, in a “data laden” setting [2, 13], the runtime to get to a desired generalization goal is the same as the size of the data set required to do so. Their runtime is therefore equal (up to a small constant factor) to the runtime required to just read the data. In this paper we show, for the first time, how to beat this unbeatable runtime, and present a method that, in a certain relevant regime of high dimensionality, relatively low noise, and accuracy proportional to the noise level, learns in runtime less than the minimal training set size required for generalization. The key here is that unlike online methods that consider an entire training vector at each iteration, our method accesses single features (coordinates) of training vectors.
Our computational model is thus that of random access to a desired coordinate of a desired training vector (as is standard for sublinear time algorithms), and our main computational costs are these feature accesses. Our method can also be understood in the framework of “budgeted learning” [5], where the cost is explicitly the cost of observing features (but unlike, e.g., [8], we do not have differential costs for different features), and gives the first non-trivial guarantee in this setting (i.e. the first theoretical guarantee on the number of feature accesses that is less than simply observing entire feature vectors). We emphasize that our method is not online in nature, and we do require repeated access to training examples, but the resulting runtime (as well as the overall number of features accessed) is less (in some regimes) than for any online algorithm that considers entire training vectors. Also, unlike recent work by Cesa-Bianchi et al. [3], we are not constrained to only a few features from every vector, and can ask for however many we need (with the aim of minimizing the overall runtime, and thus the overall number of feature accesses), and so we obtain an overall number of feature accesses which is better than with SGD, unlike Cesa-Bianchi et al., who aim at not being too much worse than full-information SGD. As discussed in Section 3, our method is a primal-dual method, where both the primal and dual steps are stochastic. The primal steps can be viewed as importance-weighted stochastic gradient descent, and the dual step as a stochastic update on the importance weighting, informed by the current primal solution. This approach builds on the work of [4], which presented a sublinear time algorithm for approximating the margin of a linearly separable data set. Here, we extend that work to the more relevant noisy (non-separable) setting, and show how it can be applied to a learning problem, yielding generalization runtime better than SGD.
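The cost model above (counting individual coordinate lookups rather than full-vector reads) is easy to make concrete. The following sketch, with names of our own choosing rather than anything from the paper, wraps a data matrix so that every feature access is counted; comparing such counters is how the sublinear behavior claimed here would be measured in practice.

```python
import numpy as np

class CountingMatrix:
    """Wraps a data matrix X (n x d) and counts individual feature accesses.

    This mirrors the paper's computational model: an algorithm may ask for
    coordinate j of training vector i, and each such lookup costs one unit.
    All names here are illustrative, not from the paper.
    """
    def __init__(self, X):
        self.X = np.asarray(X)
        self.accesses = 0

    def get(self, i, j):
        self.accesses += 1                  # one feature access
        return self.X[i, j]

    def get_row(self, i):
        self.accesses += self.X.shape[1]    # reading a full vector costs d
        return self.X[i]

# A full-vector method (e.g. SGD) reading one row pays d accesses,
# while a coordinate-sampling method pays only for the entries it touches.
M = CountingMatrix(np.arange(12.0).reshape(3, 4))
_ = M.get_row(0)   # costs 4
_ = M.get(1, 2)    # costs 1
print(M.accesses)  # 5
```

This is the quantity plotted on the x-axis of the experiments in Section 6 (test error vs. number of feature accesses).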
The extension to the non-separable setting is not straightforward and requires re-writing the SVM objective, and applying additional relaxation techniques borrowed from [10]. 2 The SVM Optimization Problem We consider training a linear binary SVM based on a training set of n labeled points {(x_i, y_i)}_{i=1,...,n}, x_i ∈ R^d, y_i ∈ {±1}, with the data normalized such that ∥x_i∥ ≤ 1. A predictor is specified by w ∈ R^d and a bias b ∈ R. In training, we wish to minimize the empirical error, measured in terms of the average hinge loss R̂hinge(w, b) = (1/n) Σ_{i=1}^n [1 − y_i(⟨w, x_i⟩ + b)]_+, and the norm of w. Since we do not typically know a-priori how to balance the norm with the error, this is best described as an unconstrained bi-criteria optimization problem: min_{w∈R^d, b∈R} ( ∥w∥, R̂hinge(w, b) ) (1) A common approach to finding Pareto optimal points of (1) is to scalarize the objective as: min_{w∈R^d, b∈R} R̂hinge(w, b) + (λ/2)∥w∥² (2) where the multiplier λ ≥ 0 controls the trade-off between the two objectives. However, in order to apply our framework, we need to consider a different parametrization of the Pareto optimal set (the “regularization path”): instead of minimizing a trade-off between the norm and the error, we maximize the margin (equivalent to minimizing the norm) subject to a constraint on the error. This allows us to write the objective (the margin) as a minimum over all training points—a form we will later exploit. Specifically, we introduce slack variables and consider the optimization problem: max_{w∈R^d, b∈R, 0≤ξ} min_{i∈[n]} y_i(⟨w, x_i⟩ + b) + ξ_i s.t. ∥w∥ ≤ 1 and Σ_{i=1}^n ξ_i ≤ nν (3) where the parameter ν controls the trade-off between desiring a large margin (low norm) and small error (low slack), and parameterizes solutions along the regularization path. This is formalized by the following Lemma, which also gives guarantees for ε-sub-optimal solutions of (3): Lemma 2.1. For any w ≠ 0, b ∈ R, consider problem (3) with ν = R̂hinge(w, b)/∥w∥.
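The two objectives above are cheap to evaluate directly. As a minimal sketch (function names are ours, not the paper's), the average hinge loss and the inner min-over-points objective of problem (3) can be computed as:

```python
import numpy as np

def avg_hinge_loss(w, b, X, y):
    """Average hinge loss: (1/n) * sum_i [1 - y_i(<w, x_i> + b)]_+ ."""
    margins = y * (X @ w + b)
    return np.mean(np.maximum(0.0, 1.0 - margins))

def margin_objective(w, b, xi, X, y):
    """Inner objective of problem (3): min_i  y_i(<w, x_i> + b) + xi_i ."""
    return np.min(y * (X @ w + b) + xi)

# Tiny example: two points, each exactly at margin 1 under this predictor.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 0.0]); b = 0.0
print(avg_hinge_loss(w, b, X, y))                  # 0.0
print(margin_objective(w, b, np.zeros(2), X, y))   # 1.0
```

Note how adding slack ξ_i only relaxes the hardest (smallest-margin) points, which is what makes the min-over-points form of (3) amenable to the importance-weighting machinery developed below.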
Let (w_ε, b_ε, ξ_ε) be an ε-suboptimal solution to this problem with value γ_ε, and consider the rescaled solution w̃ = w_ε/γ_ε, b̃ = b_ε/γ_ε. Then: ∥w̃∥ ≤ (1/(1 − ∥w∥ε)) ∥w∥, and R̂hinge(w̃) ≤ (1/(1 − ∥w∥ε)) R̂hinge(w). That is, solving (3) exactly (to within ε = 0) yields Pareto optimal solutions of (1), and all such solutions (i.e. the entire regularization path) can be obtained by varying ν. When (3) is only solved approximately, we obtain a Pareto sub-optimal point, as quantified by Lemma 2.1. Before proceeding, we also note that any solution of (1) that classifies at least some positive and negative points within the desired margin must have ∥w∥ ≥ 1, and so in Lemma 2.1 we will only need to consider 0 ≤ ν ≤ 1. In terms of (3), this means that we could restrict 0 ≤ ξ_i ≤ 2 without affecting the optimal solution. 3 Overview: Primal-Dual Algorithms and Our Approach The CHW framework. The method of [4] applies to saddle-point problems of the form max_{z∈K} min_{i∈[n]} c_i(z) (4) where the c_i(z) are concave functions of z over some set K ⊆ R^d. The method is a stochastic primal-dual method, where the dual solution can be viewed as an importance weighting over the n terms c_i(z). To better understand this view, consider the equivalent problem: max_{z∈K} min_{p∈Δ_n} Σ_{i=1}^n p_i c_i(z) (5) where Δ_n = {p ∈ R^n | p_i ≥ 0, ∥p∥₁ = 1} is the probability simplex. The method maintains and (stochastically) improves both a primal solution (in our case, a predictor w ∈ R^d) and a dual solution, which is a distribution p over [n]. Roughly speaking, the distribution p is used to focus in on the terms actually affecting the minimum. Each iteration of the method proceeds as follows: 1. Stochastic primal update: (a) A term i ∈ [n] is chosen according to the distribution p, in time O(n). (b) The primal variable z is updated according to the gradient of c_i(z), via an online low-regret update. This update is in fact a Stochastic Gradient Descent (SGD) step on the objective of (5), as explained in section 4.
Since we use only a single term c_i(z), this can usually be done in time O(d). 2. Stochastic dual update: (a) We obtain a stochastic estimate of c_i(z), for each i ∈ [n]. We would like to use an estimator that has a bounded variance, and can be computed in O(1) time per term, i.e. in overall O(n) time. When the c_i's are linear functions, this can be achieved using a form of ℓ2-sampling for estimating an inner-product in R^d. (b) The distribution p is updated toward those terms with low estimated values of c_i(z). This is accomplished using a variant of the Multiplicative Updates (MW) framework for online optimization over the simplex (see for example [1]), adapted to our case in which the updates are based on random variables with bounded variance. This can be done in time O(n). Evidently, the overall runtime per iteration is O(n + d). In addition, the regret bounds on the updates of z and p can be used to bound the number of iterations required to reach an ε-suboptimal solution. Hence, the CHW approach is particularly effective when this regret bound has a favorable dependence on d and n. As we note below, this is not the case in our application, and we shall need some additional machinery to proceed. The PST framework. The Plotkin-Shmoys-Tardos framework [10] is a deterministic primal-dual method, originally proposed for approximately solving certain types of linear programs known as “fractional packing and covering” problems. The same idea, however, applies also to saddle-point problems of the form (5). In each iteration of this method, the primal variable z is updated by solving the “simple” optimization problem max_{z∈K} Σ_{i=1}^n p_i c_i(z) (where p is now fixed), while the dual variable p is again updated using a MW step (note that we do not use an estimate of c_i(z) here, but rather the exact value). These iterations yield convergence to the optimum of (5), and the regret bound of the MW updates is used to derive a convergence rate guarantee.
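The ℓ2-sampling idea of step 2(a) can be made concrete: to estimate ⟨x, w⟩ without reading all of x, sample one coordinate j with probability w(j)²/∥w∥² and return x(j)·∥w∥²/w(j). A minimal sketch (our naming, not the paper's), together with a check that the estimator is unbiased:

```python
import numpy as np

def l2_sample_inner_product(x, w, rng):
    """One-sample l2 estimator of <x, w>.

    Draw coordinate j with probability w(j)^2 / ||w||^2 and return
    x(j) * ||w||^2 / w(j).  Unbiased, since
      E[estimate] = sum_j (w_j^2/||w||^2) * x_j * ||w||^2 / w_j
                  = sum_j x_j * w_j = <x, w>.
    Only ONE coordinate of x is read, which is the point of the scheme.
    """
    sq = w ** 2
    p = sq / sq.sum()
    j = rng.choice(len(w), p=p)
    return x[j] * sq.sum() / w[j]

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, 1.0, -1.5])
# Averaging many independent one-coordinate estimates recovers <x, w> = -6.0.
est = np.mean([l2_sample_inner_product(x, w, rng) for _ in range(20000)])
print(est)   # close to -6.0
```

The estimator has bounded variance when ∥x∥, ∥w∥ ≤ 1, which is exactly the property the dual MW analysis (Lemma 4.2 below) relies on; the clipping step in the algorithm controls the rare large values.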
Since each iteration of the framework relies on the entire set of functions c_i, it is reasonable to apply it only to relatively small-sized problems. Indeed, in our application we shall use this method for the update of the slack variables ξ and the bias term b, for which the implied cost is only O(n) time. Our hybrid approach. The saddle-point formulation (3) of SVM from section 2 suggests that the SVM optimization problem can be efficiently approximated using primal-dual methods, and specifically using the CHW framework. Indeed, taking z = (w, b, ξ) and K = B_d × [−1, 1] × Ξ_ν, where B_d ⊆ R^d is the Euclidean unit ball and Ξ_ν = {ξ ∈ R^n | ∀i, 0 ≤ ξ_i ≤ 2, and ∥ξ∥₁ ≤ νn}, we cast the problem into the form (4). However, as already pointed out, a naïve application of the CHW framework yields in this case a rather slow convergence rate. Informally speaking, this is because our set K is “too large” and thus the involved regret grows too quickly. In this work, we propose a novel hybrid approach for tackling problems such as (3), which combines the ideas of the CHW and PST frameworks. Specifically, we suggest using an SGD-like low-regret update for the variable w, while updating the variables ξ and b via a PST-like step; the dual update of our method is similar to that of CHW. Consequently, our algorithm enjoys the benefits of both methods, each in its respective domain, and avoids the problem originating from the “size” of K. We defer the detailed description of the method to the following section. 4 Algorithm and Analysis In this section we present and analyze our algorithm, which we call SIMBA (which stands for “Sublinear IMportance-sampling Bi-stochastic Algorithm”). The algorithm is a sublinear-time approximation algorithm for problem (3), which, as shown in section 2, is a reformulation of the standard soft-margin SVM problem.
For simplicity of presentation, we omit the bias term for now (i.e., fix b = 0 in (3)) and later explain how adding such a bias to our framework is almost immediate and does not affect the analysis. This allows us to ignore the labels y_i, by setting x_i ← −x_i for any i with y_i = −1. Let us begin the presentation with some additional notation. To avoid confusion, we use the notation v(i) to refer to the i'th coordinate of a vector v. We also use the shorthand v² to denote the vector for which v²(i) = (v(i))² for all i. The n-vector whose entries are all 1 is denoted 1_n. Finally, we stack the training instances x_i as the rows of a matrix X ∈ R^{n×d}, although we treat each x_i as a column vector.

Algorithm 1 SVM-SIMBA
1: Input: ε > 0, 0 ≤ ν ≤ 1, and X ∈ R^{n×d} with x_i ∈ B_d for i ∈ [n].
2: Let T ← 100²·ε⁻²·log n, η ← √(log(n)/T) and u_1 ← 0, q_1 ← 1_n
3: for t = 1 to T do
4:   Choose i_t ← i with probability p_t(i)
5:   Let u_t ← u_{t−1} + x_{i_t}/√(2T), ξ_t ← arg max_{ξ∈Ξ_ν} (p_t^⊤ξ)
6:   w_t ← u_t / max{1, ∥u_t∥}
7:   Choose j_t ← j with probability w_t(j)²/∥w_t∥²
8:   for i = 1 to n do
9:     ṽ_t(i) ← x_i(j_t)·∥w_t∥²/w_t(j_t) + ξ_t(i)
10:    v_t(i) ← clip(ṽ_t(i), 1/η)
11:    q_{t+1}(i) ← q_t(i)·(1 − η·v_t(i) + η²·v_t(i)²)
12:  end for
13:  p_t ← q_t/∥q_t∥₁
14: end for
15: return w̄ = (1/T)·Σ_t w_t, ξ̄ = (1/T)·Σ_t ξ_t

The pseudo-code of the SIMBA algorithm is given in Algorithm 1. In the primal part (lines 4 through 6), the vector u_t is updated by adding an instance x_i, randomly chosen according to the distribution p_t. This is a version of SGD applied to the function p_t^⊤(Xw + ξ_t), whose gradient with respect to w is p_t^⊤X; by the sampling procedure of i_t, the vector x_{i_t} is an unbiased estimator of this gradient. The vector u_t is then projected onto the unit ball, to obtain w_t. On the other hand, the primal variable ξ_t is updated by a complete optimization of p_t^⊤ξ with respect to ξ ∈ Ξ_ν. This is an instance of the PST framework, described in section 3.
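The slack update in line 5, arg max_{ξ∈Ξ_ν} p_t^⊤ξ, has a simple closed form: greedily assign ξ(i) = 2 to the indices with the largest p_t(i) until the budget νn is spent. A sketch under our own naming (the sort here is for clarity; as noted below, standard selection algorithms bring this to O(n)):

```python
import numpy as np

def slack_update(p, nu):
    """Maximize p^T xi over Xi_nu = {xi : 0 <= xi_i <= 2, sum_i xi_i <= nu*n}.

    Greedy: give xi_i = 2 to the largest entries of p until the total mass
    nu*n is spent; the last index touched may receive a fractional amount.
    """
    n = len(p)
    budget = nu * n
    xi = np.zeros(n)
    for i in np.argsort(-p):      # indices by decreasing p(i)
        take = min(2.0, budget)
        xi[i] = take
        budget -= take
        if budget <= 0.0:
            break
    return xi

p = np.array([0.1, 0.5, 0.2, 0.2])
xi = slack_update(p, nu=0.75)     # budget = 0.75 * 4 = 3
print(xi)                          # 2.0 on the largest entry, 1.0 on the next
```

Because the greedy solution pours all the slack onto the currently “hardest” examples (those with the most dual weight), this step is what lets the algorithm stop chasing examples whose violations are already accounted for.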
Note that, by the structure of Ξ_ν, this update can be accomplished using a simple greedy algorithm that sets ξ_t(i) = 2 for the indices corresponding to the largest entries p_t(i) of p_t, until a total mass of νn is reached, and puts ξ_t(i) = 0 elsewhere; this can be implemented in O(n) time using standard selection algorithms. In the dual part (lines 7 through 13), the algorithm first updates the vector q_t using the j_t'th column of X and the value of w_t(j_t), where j_t is randomly selected according to the distribution w_t²/∥w_t∥². This is a variant of the MW framework (see Definition 4.1 below) applied to the function p^⊤(Xw_t + ξ_t); the vector ṽ serves as an estimator of Xw_t + ξ_t, the gradient with respect to p. We note, however, that the algorithm uses a clipped version v of the estimator ṽ; see line 10, where we use the notation clip(z, C) = max(min(z, C), −C) for z, C ∈ R. This, in fact, makes v a biased estimator of the gradient. As we show in the analysis, while the clipping operation is crucial to the stability of the algorithm, the resulting slight bias is not harmful. Before stating the main theorem, we describe in detail the MW algorithm we use for the dual update. Definition 4.1 (Variance MW algorithm). Consider a sequence of vectors v_1, ..., v_T ∈ R^n and a parameter η > 0. The Variance Multiplicative Weights (Variance MW) algorithm is as follows. Let w_1 ← 1_n, and for t ≥ 1, p_t ← w_t/∥w_t∥₁, and w_{t+1}(i) ← w_t(i)·(1 − η·v_t(i) + η²·v_t(i)²). (6) The following lemma establishes a regret bound for the Variance MW algorithm. Lemma 4.2 (Variance MW Lemma). The Variance MW algorithm satisfies Σ_{t∈[T]} p_t^⊤v_t ≤ min_{i∈[n]} Σ_{t∈[T]} max{v_t(i), −1/η} + (log n)/η + η·Σ_{t∈[T]} p_t^⊤v_t². We now state the main theorem. Due to space limitations, we only give here a sketch of the proof. Theorem 4.3 (Main). The SIMBA algorithm above returns an ε-approximate solution to formulation (3) with probability at least 1/2. It can be implemented to run in time Õ(ε⁻²(n + d)). Proof (sketch).
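The Variance MW update of Definition 4.1 is a one-liner per round. The sketch below (our own naming) runs it on a fixed sequence of loss vectors and checks that the distribution shifts its mass toward the coordinate with consistently low values, which is exactly how SIMBA's dual variable p_t comes to focus on the hardest examples:

```python
import numpy as np

def variance_mw(vs, eta):
    """Variance Multiplicative Weights in the style of Definition 4.1.

    vs: T x n array of observed vectors v_1, ..., v_T.
    Returns the sequence of distributions p_1, ..., p_T.
    """
    T, n = vs.shape
    w = np.ones(n)
    ps = []
    for t in range(T):
        p = w / w.sum()
        ps.append(p)
        v = vs[t]
        # Multiplicative update: low v(i) -> weight grows relative to others.
        w = w * (1.0 - eta * v + (eta * v) ** 2)
    return np.array(ps)

# Coordinate 0 consistently has the lowest value, so its probability grows.
vs = np.tile(np.array([0.0, 1.0, 1.0]), (50, 1))
ps = variance_mw(vs, eta=0.1)
print(ps[-1])   # mass concentrates on coordinate 0
```

The extra η²v² term (versus plain MW's 1 − ηv) is what makes the regret bound of Lemma 4.2 depend on the second moments p_t^⊤v_t², so bounded-variance estimates, rather than bounded values, suffice.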
The main idea of the proof is to establish lower and upper bounds on the average objective value (1/T)·Σ_{t∈[T]} p_t^⊤(Xw_t + ξ_t). Then, combining these bounds we are able to relate the value of the output solution (w̄, ξ̄) to the value of the optimum of (3). In the following, we let (w*, ξ*) be the optimal solution of (3) and denote the value of this optimum by γ*. For the lower bound, we consider the primal part of the algorithm. Noting that Σ_{t∈[T]} p_t^⊤ξ_t ≥ Σ_{t∈[T]} p_t^⊤ξ* (which follows from the PST step) and employing a standard regret guarantee for bounding the regret of the SGD update, we obtain the lower bound (with probability ≥ 1 − O(1/n)): (1/T)·Σ_{t∈[T]} p_t^⊤(Xw_t + ξ_t) ≥ γ* − O(√(log n/T)). For the upper bound, we examine the dual part of the algorithm. Applying Lemma 4.2 for bounding the regret of the MW update, we get the following upper bound (with probability > 3/4 − O(1/n)): (1/T)·Σ_{t∈[T]} p_t^⊤(Xw_t + ξ_t) ≤ (1/T)·min_{i∈[n]} Σ_{t∈[T]} [x_i^⊤w_t + ξ_t(i)] + O(√(log n/T)). Relating the two bounds we conclude that min_{i∈[n]} [x_i^⊤w̄ + ξ̄(i)] ≥ γ* − O(√(log(n)/T)) with probability ≥ 1/2, and using our choice for T the claim follows. Finally, we note the runtime. The algorithm makes T = O(ε⁻² log n) iterations. In each iteration, the update of the vectors w_t and p_t takes O(d) and O(n) time respectively, while ξ_t can be computed in O(n) time as explained above. The overall runtime is therefore Õ(ε⁻²(n + d)). Incorporating a bias term. We return to the optimization problem (3) presented in section 2, and show how the bias term b can be integrated into our algorithm. Unlike with SGD-based approaches, including the bias term in our framework is straightforward. The only modification required to our algorithm as presented in Algorithm 1 occurs in lines 5 and 9, where the vector ξ_t is referenced.
For additionally maintaining a bias b_t, we change the optimization over ξ in line 5 to a joint optimization over both ξ and b: (ξ_t, b_t) ← argmax_{ξ∈Ξ_ν, b∈[−1,1]} p_t^⊤(ξ + b·y), and use the computed b_t in the dual update, in line 9: ṽ_t(i) ← x_i(j_t)·∥w_t∥²/w_t(j_t) + ξ_t(i) + y_i·b_t, while returning the average bias b̄ = (1/T)·Σ_{t∈[T]} b_t in the output of the algorithm. Notice that we still assume that the labels y_i were subsumed into the instances x_i, as in section 4. The update of ξ_t is thus unchanged and can be carried out as described in section 4. The update of b_t, on the other hand, admits a simple, closed-form formula: b_t = sign(p_t^⊤y). Evidently, the running time of each iteration remains O(n + d), as before. The adaptation of the analysis to this case, which involves only a change of constants, is technical and straightforward. The sparse case. We conclude the section with a short discussion of the common situation in which the instances are sparse, that is, each instance contains very few non-zero entries. In this case, we can implement Algorithm 1 so that each iteration takes Õ(α(n + d)) time, where α is the overall data sparsity ratio. Implementing the vector updates is straightforward, using a data representation similar to [12]. In order to implement the sampling operations in time O(log n) and O(log d), we maintain a tree over the points and coordinates, with internal nodes caching the combined (unnormalized) probability mass of their descendants. 5 Runtime Analysis for Learning In Section 4 we saw how to obtain an ε-approximate solution to the optimization problem (3) in time Õ(ε⁻²(n + d)). Combining this with Lemma 2.1, we see that for any Pareto optimal point w* of (1) with ∥w*∥ = B and R̂hinge(w*) = R̂*, the runtime required for our method to find a predictor with ∥w∥ ≤ 2B and R̂hinge(w) ≤ R̂* + δ̂ is Õ( B²(n + d)·((R̂* + δ̂)/δ̂)² ). (7) This guarantee is rather different from the guarantees for other SVM optimization approaches. E.g.
using a stochastic gradient descent (SGD) approach, we could find a predictor with ∥w∥ ≤ B and R̂hinge(w) ≤ R̂* + δ̂ in time O(B²d/δ̂²). Compared with SGD, we only ensure a constant factor approximation to the norm, and our runtime does depend on the training set size n, but the dependence on δ̂ is more favorable. This makes it difficult to compare the guarantees and suggests a different form of comparison is needed. Following [13], instead of comparing the runtime to achieve a certain optimization accuracy on the empirical optimization problem, we analyze the runtime to achieve a desired generalization performance. Recall that our true learning objective is to find a predictor with low generalization error R_err(w) = Pr_{(x,y)}(y⟨w, x⟩ ≤ 0), where x, y are distributed according to some unknown source distribution, and the training set is drawn i.i.d. from this distribution. We assume that there exists some (unknown) predictor w* that has norm ∥w*∥ ≤ B and low expected hinge loss R* = R_hinge(w*) = E[[1 − y⟨w*, x⟩]_+], and analyze the runtime to find a predictor w with generalization error R_err(w) ≤ R* + δ. In order to understand the runtime from this perspective, we must consider the required sample size to obtain generalization to within δ, as well as the required suboptimality for ∥w∥ and R̂hinge(w). The standard SVM analysis calls for a sample size of n = O(B²/δ²). But since, as we will see, our analysis is sensitive to the value of R*, we will consider a more refined generalization guarantee which gives a better rate when R* is small relative to δ. Following Theorem 5 of [14] (and recalling that the hinge loss is an upper bound on margin violations), we have that with high probability over a sample of size n, for all predictors w: R_err(w) ≤ R̂hinge(w) + O( ∥w∥²/n + √(∥w∥²·R̂hinge(w)/n) ). (8) This implies that a training set of size n = Õ( (B²/δ)·((R* + δ)/δ) ) (9) is enough for generalization to within δ.
We will be mostly concerned here with the regime where R* is small and we seek generalization to within δ = Ω(R*), a typical regime in learning. This is always the case in the realizable setting, where R* = 0, but it includes also the non-realizable setting, as long as the desired estimation error δ is not much smaller than the unavoidable error R*. In such a regime, the second factor in (9) is of order one. In fact, an online approach¹ can find a predictor with R_err(w) ≤ R* + δ with a single pass over n = Õ((B²/δ)·((δ + R*)/δ)) training points. Since each step takes O(d) time (essentially the time required to read the training point), the overall runtime is: O( (B²/δ)·d·((R* + δ)/δ) ). (10) [¹The Perceptron rule, which amounts to SGD on R_hinge(w), ignoring correctly classified points [7, 3].] Returning to our approach, approximating the norm to within a factor of two is fine, as it only affects the required sample size, and hence the runtime, by a constant factor. In particular, in order to ensure R_err(w) ≤ R* + δ it is enough to have ∥w∥ ≤ 2B, optimize the empirical hinge loss to within δ̂ = δ/2, and use a sample size as specified in (9) (where we actually consider a radius of 2B and require generalization to within δ/4, but this is subsumed in the constant factors). Plugging this into the runtime analysis (7) yields: Corollary 5.1. For any B ≥ 1 and δ > 0, with high probability over a training set of size n = Õ((B²/δ)·((δ + R*)/δ)), Algorithm 1 outputs a predictor w with R_err(w) ≤ R* + δ in time Õ( ( B²d + (B⁴/δ)·((δ + R*)/δ) )·((δ + R*)/δ)² ), where R* = inf_{∥w*∥≤B} R_hinge(w*). Let us compare the above runtime to the online runtime (10), focusing on the regime where R* is small and δ = Ω(R*), so that (R* + δ)/δ = O(1), and ignoring the logarithmic factors hidden in the Õ(·) notation in Corollary 5.1. To do so, we will first rewrite the runtime in Corollary 5.1 as: Õ( (B²/δ)·d·((R* + δ)/δ)·(R* + δ) + (B²/δ)·B²·((R* + δ)/δ)³ ). (11) In order to compare the runtimes, we must consider the relative magnitudes of the dimensionality d and the norm B. Recall that using a norm-regularized approach, such as SVM, makes sense only when d ≫ B². Otherwise, the low dimensionality would guarantee us good generalization, and we wouldn't gain anything from regularizing the norm. And so, at least when (R* + δ)/δ = O(1), the first term in (11) is the dominant term and we should compare it with (10). More generally, we will see an improvement as long as d ≫ B²·((R* + δ)/δ)². Now, the first term in (11) is more directly comparable to the online runtime (10), and is always smaller by a factor of (R* + δ) ≤ 1. This factor, then, is the improvement over the online approach, or more generally, over any approach which considers entire sample vectors (as opposed to individual features). We see, then, that our proposed approach can yield a significant reduction in runtime when the resulting error rate is small. Taking into account the hidden logarithmic factors, we get an improvement as long as (R* + δ) = O(1/log(B²/δ)). Returning to the form of the runtime in Corollary 5.1, we can also understand the runtime as follows: initially, a runtime of O(B²d) is required in order for the estimates of w and p to start being reasonable. However, this runtime does not depend on the desired error (as long as δ = Ω(R*), including when R* = 0), and after this initial runtime investment, once w and p are “reasonable”, we can continue decreasing the error toward R* with runtime that depends only on the norm, but is independent of the dimensionality. 6 Experiments In this section we present preliminary experimental results that demonstrate situations in which our approach has an advantage over SGD-based methods. To this end, we choose to compare the performance of our algorithm to that of the state-of-the-art Pegasos algorithm [12], a popular SGD variant for solving SVM.
The experiments were performed with two standard, large-scale data sets: • The news20 data set of [9], which has 1,355,191 features and 19,996 examples. We split the data set into a training set of 8,000 examples and a test set of 11,996 examples. • The real vs. simulated data set of McCallum, with 20,958 features and 72,309 examples. We split the data set into a training set of 20,000 examples and a test set of 52,309 examples. We implemented the SIMBA algorithm exactly as in Section 4, with a single modification: we used a time-adaptive learning rate η_t = √(log(n)/t) and a similarly adaptive SGD step-size (in line 5), instead of leaving them constant.

Figure 1: The test error, averaged over 10 repetitions, vs. the number of feature accesses, on the real vs. simulated (left; SIMBA with ν = 5·10⁻⁵, Pegasos with λ = 5·10⁻⁵) and news20 (right; SIMBA with ν = 1·10⁻³, Pegasos with λ = 1.25·10⁻⁴) data sets. The error bars depict one standard deviation of the measurements.

While this version of the algorithm is more convenient to work with, we found that in practice its performance is almost equivalent to that of the original algorithm. In both experiments, we tuned the tradeoff parameter of each algorithm (i.e., ν and λ) so as to obtain the lowest possible error over the test set. Note that our algorithm assumes random access to features (as opposed to instances), thus it is not meaningful to compare the test error as a function of the number of iterations of each algorithm. Instead, and according to our computational model, we compare the test error as a function of the number of feature accesses of each algorithm. The results, averaged over 10 repetitions, are presented in Figure 1 along with the parameters we used.
As can be seen from the graphs, on both data sets our algorithm obtains the same test error that Pegasos achieves at its optimum, using about 100 times fewer feature accesses. 7 Summary Building on ideas first introduced by [4], we present a stochastic-primal-stochastic-dual approach that solves a non-separable linear SVM optimization problem in sublinear time, and yields a learning method that, in a certain regime, beats SGD and runs in less time than the size of the training set required for learning. We also showed some encouraging preliminary experiments, and we expect further work can yield significant gains, either by improving our method, or by borrowing from the ideas and innovations introduced, including: • Using importance weighting, and stochastically updating the importance weights in a dual stochastic step. • Explicitly introducing the slack variables (which are not typically represented in primal SGD approaches). This allows us to differentiate between accounted-for margin mistakes and constraint violations to which we have not yet assigned enough “slack” and on which we want to focus our attention. This differs from heuristic importance weighting approaches for stochastic learning, which tend to focus on all samples with a non-zero loss gradient. • Employing the PST methodology when the standard low-regret tools fail to apply. We believe that our ideas and framework can also be applied to more complex situations where much computational effort is currently being spent, including highly multiclass and structured SVMs, latent SVMs [6], and situations where features are very expensive to calculate, but can be calculated on-demand. The ideas can also be extended to kernels, either through linearization [11], using an implicit linearization as in [4], or through a representation approach. Beyond SVMs, the framework can apply more broadly, whenever we have a low-regret method for the primal problem and a sampling procedure for the dual updates. E.g.
we expect the approach to be successful for ℓ1-regularized problems, and are working in this direction. Acknowledgments This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors’ views. References [1] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta algorithm and applications. Manuscript, 2005. [2] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 20:161–168, 2008. [3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057, 2004. [4] K. L. Clarkson, E. Hazan, and D. P. Woodruff. Sublinear optimization for machine learning. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 449–457. IEEE, 2010. [5] K. Deng, C. Bourke, S. Scott, J. Sunderman, and Y. Zheng. Bandit-based algorithms for budgeted learning. In Seventh IEEE International Conference on Data Mining (ICDM 2007), pages 463–468. IEEE, 2007. [6] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008. [7] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265–299, 2003. [8] A. Kapoor and R. Greiner. Learning and classifying under hard budgets. Machine Learning: ECML 2005, pages 170–181, 2005. [9] S. S. Keerthi and D. DeCoste. A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6(1):341, 2006. [10] S. A. Plotkin, D. B. Shmoys, and É. Tardos. Fast approximation algorithms for fractional packing and covering problems. In Proceedings of the 32nd Annual Symposium on Foundations of Computer Science, pages 495–504.
IEEE Computer Society, 1991. [11] A. Rahimi and B. Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 20:1177–1184, 2008. [12] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, pages 807–814. ACM, 2007. [13] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928–935, 2008. [14] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems 23, pages 2199–2207. 2010.
Inferring spike-timing-dependent plasticity from spike train data Ian H. Stevenson and Konrad P. Kording Department of Physical Medicine and Rehabilitation Northwestern University {i-stevenson, kk}@northwestern.edu Abstract Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling between pairs of neurons from extracellularly recorded spike trains. First, using a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows model-based estimation of STDP modification functions from pairs of spike trains. Then, using recursive point-process adaptive filtering methods we estimate more general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification function can be recovered. Using multi-electrode data from motor cortex we then illustrate the use of this technique on in vivo data. 1 Introduction One of the fundamental questions in computational neuroscience is how synapses are modified by neural activity [1, 2]. A number of experimental results, using intracellular recordings in vitro, have shown that synaptic plasticity depends on the precise pairing of pre- and post-synaptic spiking [3]. While such spike-timing-dependent plasticity (STDP) is thought to serve as a powerful regulatory mechanism [4], measuring STDP in vivo using intracellular recordings is experimentally difficult [5]. Here we instead attempt to estimate STDP in vivo by using simultaneously recorded extracellular spike trains and develop methods to estimate the time-varying strength of synapses. 
In the past few years model-based methods have been developed that allow the estimation of coupling between pairs of neurons from spike train data [6, 7, 8, 9, 10, 11]. These methods have been successfully applied to data from a variety of brain areas including retina [10], hippocampus [8], as well as cortex [12]. While anatomical connections between pairs of extracellularly recorded neurons are generally not guaranteed, these phenomenological methods regularly improve encoding accuracy and provide a statistical description of the functional coupling between neurons. Here we present two techniques that extend these statistical methods to time-varying coupling between neurons and allow the estimation of spike-timing-dependent plasticity from spike trains. First we introduce a generative model for time-varying coupling between neurons where the changes in coupling strength depend on the relative timing of pre- and post-synaptic spikes: a bilinear-nonlinear-Poisson model. We then present two approaches for inferring STDP modification functions from spike data. We test these methods on both simulated data and data recorded from the motor cortex of a sleeping macaque monkey. Figure 1: Generative model. A) A generative model of spikes where the coupling between neurons undergoes spike-timing dependent modification. Post-synaptic spiking is modeled as a doubly stochastic Poisson process with a conditional intensity that depends on the neuron’s own history and coupling to a pre-synaptic neuron. We consider the case where the strength of the coupling changes over time, depending on the relative timing of pre- and post-synaptic spikes through a modification function.
B) As the synaptic strength changes over time, the influence of the pre-synaptic neuron on the post-synaptic neuron changes. Insets illustrate two points in time where synaptic strength is low (left) and high (right), respectively. Red lines illustrate the time-varying influence of the pre-synaptic neuron, while the black lines denote the static influence. 2 Methods Many studies have examined nonstationarity in neural systems, including for decoding [13], unitary event detection [14], and assessing statistical dependencies between neurons [15]. Here we focus specifically on non-stationarity in coupling between neurons due to spike-timing dependent modification of synapses. Our aim is to provide a framework for inferring spike-timing dependent modification functions from spike train data alone. We first present a generative model for spike trains where neurons are undergoing STDP. We then present two methods for estimating spike-timing dependent modification functions from spike train data: a direct method based on a time-varying generalized linear model (GLM) and an indirect method based on point-process adaptive filtering. 2.1 A generative model for coupling with spike-timing dependent modification While STDP has traditionally been modeled using integrate-and-fire neurons [4, 16], here we model neurons undergoing STDP using a simple rate model of coupling between neurons, a linear-nonlinear-Poisson (LNP) model. In our LNP model, the conditional intensity (instantaneous firing rate) of a neuron is given by a linear combination of covariates passed through a nonlinearity. Here, we assume that this nonlinearity is exponential, and the LNP model reduces to a generalized linear model (GLM) with a canonical log link function. The covariates driving variations in the neuron’s firing rate can depend on the past spiking history of the neuron, the past spiking history of other neurons (coupling), as well as any external covariates such as visual stimuli [10] or hand movement [12].
To model coupling from a pre-synaptic neuron to a post-synaptic neuron, here we assume that the post-synaptic neuron’s firing is generated by
$$\lambda(t \mid H_t, \alpha, \beta) = \exp\Big(\alpha_0 + \sum_i f_i\big(n_{\mathrm{post}}(t-\tau:t)\big)\,\alpha_i + \sum_j f_j\big(n_{\mathrm{pre}}(t-\tau:t)\big)\,\beta_j\Big), \qquad n_{\mathrm{post}}(t) \sim \mathrm{Poisson}\big(\lambda(t \mid H_t, \alpha, \beta)\,\Delta t\big) \quad (1)$$
where λ(t | Ht, α, β) is the conditional intensity of the post-synaptic neuron at time t, given a short history of past spikes from the two neurons Ht and the model parameters. α0 defines a baseline firing rate, which is modulated by both the neuron’s own spike history from t−τ to t, npost(t−τ : t), and the history of the pre-synaptic neuron npre(t−τ : t) (together abbreviated as Ht). Here we have assumed that the post-spike history and coupling effects are mapped into a smooth basis by a set of functions fi and then weighted by a set of post-spike coefficients α and a set of coupling coefficients β. Finally, we assume that spikes npost(t) are generated by a Poisson random variable with rate λ(t | Ht, α, β)∆t. This model has been used extensively over the past few years to model coupling between neurons [10, 12]. Details and extensions of this basic form have been previously published [6]. It is important to note, however, that the parameters α and β can be easily estimated by maximizing the log-likelihood. Since the likelihood is log-concave [9], there is a single, global solution which can be found quickly by a number of methods, such as iteratively reweighted least squares (IRLS, used here). Here we consider the case where the coupling strength can vary over time, and particularly as a function of precise timing between pre- and post-synaptic spikes. To incorporate these spike-timing dependent changes in coupling into the generative model we introduce a time-varying coupling strength or “synaptic weight” w(t):
$$\lambda(t \mid X, \alpha, \beta) = \exp\big(\alpha_0 + X_s(t)\alpha + w(t)X_c(t)\beta\big), \qquad n_{\mathrm{post}}(t) \sim \mathrm{Poisson}\big(\lambda(t \mid X, \alpha, \beta)\,\Delta t\big) \quad (2)$$
where w(t) changes based on the relative timing of pre- and post-synaptic spikes.
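As a concrete sketch of eq. (1) (function names and the toy basis are illustrative, not from the paper), the conditional intensity can be evaluated as:

```python
import numpy as np

def conditional_intensity(alpha0, alpha, beta, post_hist, pre_hist, basis):
    """Sketch of the GLM conditional intensity in eq. (1).

    post_hist, pre_hist: binned spike counts over the history window (length tau).
    basis: (K, tau) array of smooth basis functions f_i.
    alpha, beta: (K,) post-spike and coupling coefficients.
    """
    post_feat = basis @ post_hist   # f_i(n_post(t - tau : t))
    pre_feat = basis @ pre_hist     # f_j(n_pre(t - tau : t))
    return np.exp(alpha0 + alpha @ post_feat + beta @ pre_feat)

def sample_spike(lam, dt, rng):
    """Draw n_post(t) ~ Poisson(lambda * dt) for one time bin."""
    return rng.poisson(lam * dt)
```

With all coefficients zero the intensity reduces to the baseline rate exp(α0), which gives a quick sanity check on the implementation.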
Here, for simplicity, we have re-written the stable post-spike history and coupling terms in matrix form. The vector Xs(t) summarizes the post-spike history covariates at time t while Xc(t) summarizes the covariates related to the history of the pre-synaptic neuron. In this model, the synaptic weight w(t) simply acts to scale the stable coupling defined by β, and we update w(t) such that every pre-post spike pair alters the synaptic weight independently following the second spike. Under this model, the firing rate of the post-synaptic neuron is influenced by its own past spiking, as well as the activity of a pre-synaptic neuron. A synaptic weight determining the strength of coupling between the two neurons changes over time depending on the relative spike-timing (Fig 1A). In the simulations that follow we consider three types of modification functions: 1) a traditional double-exponential function that accurately models STDP found in cortical and hippocampal slices, 2) a mexican-hat type function that qualitatively matches STDP found in GABA-ergic neurons in hippocampal cultures, and 3) a smoothed double-exponential function that has recently been demonstrated to stabilize weight distributions [17]. The double-exponential modification function is consistent with original STDP observations [2, 3] and has been used extensively in simulated populations of integrate-and-fire neurons [4, 16]. In this case each pair of pre- and post-synaptic spikes modifies the synapse by
$$\Delta w(t_{\mathrm{pre}} - t_{\mathrm{post}}) = \begin{cases} A_+ \exp\big(\frac{t_{\mathrm{pre}} - t_{\mathrm{post}}}{\tau_+}\big) & \text{if } t_{\mathrm{pre}} < t_{\mathrm{post}} \\ A_- \exp\big(-\frac{t_{\mathrm{pre}} - t_{\mathrm{post}}}{\tau_-}\big) & \text{if } t_{\mathrm{pre}} \ge t_{\mathrm{post}} \end{cases} \quad (3)$$
where tpre and tpost denote the relative spike times, and the parameters A+, A−, τ+, and τ− determine the magnitude and drop-off of each side of the double-exponential. This creates a sharp boundary where the synapse is strengthened whenever pre-synaptic spikes appear to cause post-synaptic spikes and weakened when post-synaptic spikes precede pre-synaptic spikes.
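The double-exponential rule of eq. (3) can be sketched directly (the default parameter values are illustrative only; the paper does not fix them here):

```python
import numpy as np

def delta_w_double_exp(dt, A_plus=0.01, A_minus=-0.01, tau_plus=0.02, tau_minus=0.02):
    """Double-exponential modification function of eq. (3).

    dt = t_pre - t_post in seconds; A_minus is negative so that
    post-before-pre pairings depress the synapse.
    """
    if dt < 0:  # pre-synaptic spike precedes post-synaptic spike: potentiate
        return A_plus * np.exp(dt / tau_plus)
    # post precedes (or coincides with) pre: depress
    return A_minus * np.exp(-dt / tau_minus)
```

The discontinuity at dt = 0 is exactly the sharp causal boundary discussed above.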
Similarly, mexican-hat type functions qualitatively match observations of STDP in GABA-ergic neurons in hippocampal cultures [18], where
$$\Delta w(t_{\mathrm{pre}} - t_{\mathrm{post}}) = A_+ \exp\Big(-\frac{(t_{\mathrm{pre}} - t_{\mathrm{post}})^2}{2\tau_+^2}\Big) + A_- \exp\Big(-\frac{(t_{\mathrm{pre}} - t_{\mathrm{post}})^2}{2\tau_-^2}\Big) \quad (4)$$
For τ− > τ+ this corresponds to a more general Hebbian rule, where synapses are strengthened whenever pre- and post-synaptic spikes occur in close proximity. When spikes do not occur in close proximity the synapse is weakened. In this case, the parameters A+, A−, τ+, and τ− determine the magnitude and standard deviation of the positive and negative components of the modification function. Finally, we consider a smoothed double-exponential modification function that has recently been shown to stabilize weight distributions. The sharp causal boundary in the classical double-exponential tends to drive synaptic weights either towards a maximum or to zero. By adding noise to tpre − tpost, this causal boundary can be smoothed and weight distributions become stable [17]. Here we add Gaussian noise to (3) such that (tpre − tpost)′ = (tpre − tpost) + ϵ, ϵ ∼ N(0, σ²). It is important to note that, unlike more common integrate-and-fire models of STDP, these modification functions do not describe a change in the magnitude of post-synaptic potentials (PSPs). Rather, ∆w defines a change in the statistical influence of the pre-synaptic neuron on the post-synaptic neuron. When w(t)Xc(t)β is large, the post-synaptic neuron is more likely to fire following a pre-synaptic spike. However, in this bilinear form, w(t) is only uniquely defined up to a multiplicative constant. This generative model includes two distinct components: a GLM that defines the stationary firing properties of the post-synaptic neuron and a modification function that defines how the coupling between the pre- and post-synaptic neuron changes over time as a function of relative spike timing.
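The mexican-hat rule of eq. (4) is a sum of two Gaussians of opposite sign; a sketch with illustrative parameter values (τ− > τ+, as the text requires):

```python
import numpy as np

def delta_w_mexican_hat(dt, A_plus=0.02, A_minus=-0.01, tau_plus=0.02, tau_minus=0.06):
    """Mexican-hat modification function of eq. (4).

    dt = t_pre - t_post in seconds. With tau_minus > tau_plus the narrow
    positive Gaussian dominates near coincidence (strengthening) and the
    wide negative Gaussian dominates at larger lags (weakening).
    """
    return (A_plus * np.exp(-dt**2 / (2.0 * tau_plus**2))
            + A_minus * np.exp(-dt**2 / (2.0 * tau_minus**2)))
```

Unlike eq. (3), this rule is symmetric in dt: only proximity, not spike order, matters.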
In simulating isolated pairs of neurons, each of the modification functions described above induces large variations in the synaptic weight. For the sake of stable simulation we add an additional long-timescale forgetting factor that pushes the synaptic weights back to 1. Namely,
$$w(t + \Delta t) = \begin{cases} w(t) - \frac{\Delta t}{\tau_f}\big(w(t) - 1\big) + \Delta w(t_{\mathrm{pre}} - t_{\mathrm{post}}) & \text{if } n_{\mathrm{pre}} \text{ or } n_{\mathrm{post}} = 1 \\ w(t) - \frac{\Delta t}{\tau_f}\big(w(t) - 1\big) & \text{otherwise} \end{cases} \quad (5)$$
where, here, we use τf = 60 s. The next sections describe two methods for estimating time-varying synaptic strength as well as STDP modification functions from spike train data. 2.2 Point-process adaptive filtering of coupling strength Several recent studies have examined the possibility that the tuning properties of neurons may drift over time. In this context, techniques for estimating arbitrary changes in the parameters of LNP models have been especially useful. Point-process adaptive filtering is one such method which allows accurate estimation of arbitrary time-varying parameters within LNP models and GLMs [19, 20]. The goal of this filtering approach is to update the model parameters at each time step, following spike observations, based on the instantaneous likelihood. Here we use this approach to track variations in coupling strength between two neurons over time. Details and a complete derivation of this model have been previously presented [20]. Briefly, the basic recursive point-process adaptive filter follows a standard state-space modeling approach and assumes that the model parameters in a GLM, such as (1), vary according to a random walk
$$\beta_{t+1} = F_t \beta_t + \eta_t \quad (6)$$
where Ft denotes the transition matrix from one timestep to the next and ηt ∼ N(0, Qt) denotes Gaussian noise with covariance Qt. Given this state-space assumption, we can update the model parameters β given incoming spike observations.
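One discrete-time step of the forgetting-factor update in eq. (5) can be sketched as (names are illustrative; delta_w is the spike-pair modification accrued in the bin, 0 when no spike occurred):

```python
def update_weight(w, dt_bin, tau_f, delta_w=0.0):
    """One Euler step of eq. (5): the synaptic weight decays toward 1 with
    forgetting timescale tau_f, plus any spike-timing-dependent change
    delta_w when a pre- or post-synaptic spike occurred in this bin."""
    return w - (dt_bin / tau_f) * (w - 1.0) + delta_w
```

Iterating the spike-free branch relaxes any perturbed weight exponentially back to 1, which is what keeps the simulated weights bounded.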
The prediction density at each timestep is given by
$$\beta_{t|t-1} = F_t \beta_{t-1|t-1}, \qquad W_{t|t-1} = F_t W_{t-1|t-1} F_t^T + Q_t \quad (7)$$
where βt−1|t−1 and Wt−1|t−1 denote the estimated mean and covariance from the previous timestep. Given a new spike count observation nt, we then integrate this prior information with the likelihood to obtain the posterior. Here, for simplicity, we use a quadratic expansion of the log-posterior (a Laplace approximation). When log λ is linear in the parameters, the conditional intensity and posterior are given by
$$\lambda_t = \exp\big(X_t \beta_{t|t-1} + c_t\big), \qquad W_{t|t}^{-1} = W_{t|t-1}^{-1} + X_t^T [\lambda_t \Delta t] X_t, \qquad \beta_{t|t} = \beta_{t|t-1} + W_{t|t}\big[X_t^T (n_t - \lambda_t \Delta t)\big] \quad (8)$$
where Xt denotes the covariates corresponding to the state-space variable, and ct describes variation in log λ that is assumed to be stable over time. Here, the state-space variable is coupling strength, and stable components of the model, such as post-spike history effects, are summarized with ct. The initial values of β and W can be estimated using a short training period before filtering. The only free parameters are those describing the state-space: F and Q. In the analysis that follows we will reduce the problem to a single dimension, where the shape of coupling is fixed during training, and we apply the point-process adaptive filter to a single coefficient for the covariate X′(t) = Xc(t)β. Together, (7) and (8) allow us to track changes in the model parameters over time. Given an estimate of the time-varying synaptic weight ŵ(t), we can then estimate the modification function ∆ŵ(tpre − tpost) by correlating the estimated changes in ŵ(t) with the relative spike timings that we observe. 2.3 Inferring STDP with a nonparametric, generalized bilinear model Point-process adaptive filtering allows us to track noisy changes in coupling strength over time. However, it does not explicitly model the fact that these changes may be spike-timing dependent.
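A single predict/update cycle of the point-process adaptive filter, following eqs. (7)-(8), can be sketched as (names are illustrative; x is the covariate vector for the current bin, n the observed spike count, and c the stable, non-filtered part of the log-intensity):

```python
import numpy as np

def ppaf_step(beta, W, F, Q, x, n, dt, c=0.0):
    """One predict/update cycle of the point-process adaptive filter.

    beta, W: posterior mean and covariance from the previous bin.
    F, Q:    state transition matrix and process-noise covariance.
    """
    # Prediction density, eq. (7)
    beta_pred = F @ beta
    W_pred = F @ W @ F.T + Q
    # Laplace-approximate posterior update, eq. (8)
    lam = np.exp(x @ beta_pred + c)              # conditional intensity
    W_post = np.linalg.inv(np.linalg.inv(W_pred) + lam * dt * np.outer(x, x))
    beta_post = beta_pred + W_post @ (x * (n - lam * dt))
    return beta_post, W_post
```

The innovation term n − λ∆t pushes the state up when more spikes arrive than predicted and down otherwise, which is how the filter tracks a drifting coupling strength.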
In this section we introduce a method to directly infer modification functions from spike train data. Specifically, we model the modification function non-parametrically by generating covariates W that depend on the relative spike timing. This non-parametric approximation to the modification gives a generalized bilinear model (GBLM):
$$\lambda(t \mid X, W, \alpha, \beta, \beta_w) = \exp\big(\alpha_0 + X_s(t)\alpha + \beta_w^T W^T(t) X_c(t)\beta\big), \qquad n_{\mathrm{post}}(t) \sim \mathrm{Poisson}\big(\lambda(t \mid X, W, \alpha, \beta, \beta_w)\,\Delta t\big) \quad (9)$$
where βw describes the modification function and W(t)βw approximates w(t). Each of the K STDP covariates, Wk, describes the cumulative effect of spike pairs tpre − tpost within a specific range $[T_k^-, T_k^+]$:
$$W_k(t + \Delta t) = W_k(t) - \frac{\Delta t}{\tau_f}\big(W_k(t) - 1\big) + \mathbf{1}\big(t_{\mathrm{pre}} - t_{\mathrm{post}} \in [T_k^-, T_k^+]\big) \quad (10)$$
such that, together, W(t)βw captures the time-varying coupling due to pre-post spike pairs within a given window (i.e. -100 to 100 ms). To model any decay in STDP over time, we again allow these covariates to decay exponentially with τf. In this form, maximum likelihood estimation along each axis is a log-concave optimization problem [21]. The parameters describing the modification function βw and the parameters describing the stable parts of the model, α and β, can be estimated by holding one set of parameters fixed while updating the other and alternating between the two optimizations. In practice, convergence is relatively fast, with the deviance changing by < 0.1% within 3 iterations (Fig 3A), and, empirically, using random restarts, we find that the solutions tend to be stable. In addition to estimates of the post-spike history and coupling filters, the GBLM thus provides a non-parametric approximation to the modification function and explicitly accounts for spike-timing dependent modification of the coupling strength. 3 Results To examine the accuracy and convergence properties of the two inference methods presented above, we sampled spike trains from the generative model with various parameters.
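The covariate update of eq. (10) above can be sketched as (names are illustrative; dt_pair is the lag t_pre − t_post of a pair completed in this bin, or None when no pair occurred):

```python
import numpy as np

def update_stdp_covariates(Wc, bin_edges, dt_pair, dt_bin, tau_f):
    """One step of eq. (10): each covariate W_k decays toward 1 with
    timescale tau_f and is incremented when the latest pre-post lag
    falls inside its bin [T_k^-, T_k^+).

    Wc: (K,) current covariate values; bin_edges: (K+1,) lag-bin edges in s.
    """
    Wc = Wc - (dt_bin / tau_f) * (Wc - 1.0)   # decay term (new array)
    if dt_pair is not None:
        k = np.searchsorted(bin_edges, dt_pair, side='right') - 1
        if 0 <= k < len(Wc):
            Wc[k] += 1.0                       # indicator term of eq. (10)
    return Wc
```

Stacking these K covariates over time and weighting them by βw gives the non-parametric approximation W(t)βw to w(t) used in eq. (9).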
We simulated a pre-synaptic neuron as a homogeneous Poisson process with a firing rate of 5 Hz, and the post-synaptic neuron as a conditionally Poisson process with a baseline firing rate of 5 Hz. Through the GBLM, the post-synaptic neuron’s firing rate is affected by its own post-spike history as well as the activity of the pre-synaptic neuron (modeled using 5 raised cosine basis functions [10]). However, as STDP occurs the strength of coupling between the neurons changes according to one of three modification functions: a double-exponential, a mexican-hat, or a smoothed double-exponential (Fig 2). We find that both point-process adaptive filtering and the generalized bilinear model are able to accurately reconstruct the time-varying synaptic weight for each type of modification function (Fig 2, left). However, adaptive filtering generally provides a much less accurate estimate of the underlying modification function than the GBLM (Fig 2, center). Figure 2: Recovering simulated STDP. Spikes were simulated from two neurons whose coupling varied over time, depending on the relative timing of pre- and post-synaptic spikes. Using two distinct methods (point-process adaptive filtering and the GBLM) we estimated the time-varying coupling strength and modification function from simulated spike train data. Results are shown for three different modification functions: A) double-exponential, B) mexican-hat, and C) smoothed double-exponential. Black lines denote true values, red lines denote estimates from adaptive filtering, and blue lines denote estimates from the GBLM. The post-spike history and coupling terms are shown at left for the GBLM as multiplicative gains exp(β). Error bars denote standard errors for the post-spike and coupling filters and 95% confidence intervals for the modification function estimates. Since the adaptive filter only updates the synaptic weight following the observations nt, this is not entirely surprising. Changes in coupling strength are only detected by the filter after they have occurred and become evident in the spiking of the post-synaptic neuron. In contrast to the GBLM, there is a substantial delay between changes in the true synaptic weight and those estimated by the adaptive filter. In this case, we find that the accuracy of the adaptive filter follows changes in the synaptic weight approximately exponentially with τ ∼ 25 ms (Fig 3B). An important question for the practical application of these methods is how much data is necessary to detect and accurately estimate modification functions for various effect sizes. Since the size of spike-timing dependent changes may be small in vivo, it is essential that we know under which conditions modification functions can be recovered. Here we simulated the standard double-exponential STDP model with several different effect-sizes, modifying A+ and A− and examining the estimation error in both ŵ(t) and ∆ŵ(tpre − tpost) (Fig 3). The three different effect-sizes simulated here used coupling kernels similar to Fig 2A and began with w(t) = 1. After spike simulation the standard deviation in w(t) was 0.060±0.002 for the small effect size, 0.13±0.01 for the medium effect size, and 0.27±0.01 for the large effect size. For all effect sizes, we found that with small amounts of data (< 1000 s), the GBLM tends to over-fit the data. In these situations Adaptive Filtering reconstructs both the synaptic weight (Fig 3E) and modification function (Fig 3F) more accurately than the GBLM (Fig 3C,D).
However, once enough data is available maximum likelihood estimation of the GBLM out-performs both the stable coupling model and adaptive filtering. The extent of over-fitting can be assessed by the cross-validated log likelihood ratio relative to the homogeneous Poisson process (Fig 3G, shown in log2 for 2-fold cross-validation). Here, the stable coupling model has an average cross-validated log likelihood ratio relative to a homogeneous Poisson process of 0.185 ± 0.004 bits/spike across all effect sizes. Even in this controlled simulation the contribution of time-varying coupling is relatively small. Both the GBLM and Adaptive Filtering only increase the log likelihood relative to a homogeneous Poisson process by 3-4% for the parameters used here at the largest recording length. Figure 3: Estimation errors for simulated STDP. A) Convergence of the joint optimization problem for three different effect sizes. Filled circles denote updates of the stable coupling terms. Open circles denote updates of the modification function terms. Note that after 3 iterations the deviance is changing by < 0.1% and the model has (essentially) converged. B) Cross-correlation between changes in the true synaptic weight and estimated weight for the GBLM and Adaptive Filter. Note that Adaptive Filtering fails to predict weight changes as they occur. Error bars denote SEM across N=10 simulations at the largest effect size.
C,D) Correlation between the simulated and estimated synaptic weight (C) and modification function (D) for the GBLM as a function of the recording length. E,F) Correlation between the simulated and estimated synaptic weight and modification function for Adaptive Filtering. Error bars denote SEM across N=40 simulations for each effect size. G) Cross-validated (2-fold) log likelihood relative to a homogeneous Poisson process for the GBLM and Adaptive Filtering models. The GBLM (blue) over-fits for small amounts of data, but eventually out-performs both the stable coupling model (gray) and Adaptive Filtering (red). Error bands denote SEM across N=120 simulations, all effect sizes. Figure 4: Results for data from monkey motor cortex. A) Log likelihood relative to a homogeneous Poisson process for each of four models: a stable GLM with only post-spike history (PSH), a stable GLM with PSH and coupling, the GBLM, and the Adaptive Filter. Bars and error bars denote median and inter-quartile range. * denotes significance under a paired t-test, p<0.05. B) The average modification function estimated under the GBLM for N=75 pairs of neurons. C) The modification function estimated from adaptive filtering for the same data. In both cases there does not appear to be a strong, stereotypically shaped modification function. D) The degree to which adding non-stationary coupling improves model accuracy does not appear to be related to coupling strength as measured by how much the PSH+Coupling model improves model accuracy over the PSH model.
Finally, to test these methods on actual neural recordings, we examined multi-electrode recordings from the motor cortex of a sleeping macaque monkey. The experimental details of this task have been previously published [22]. Approximately 180 minutes of data from 83 neurons were collected (after spike sorting) during REM and NREM sleep. In the simulations above we assumed that the forgetting factor τf was known. For the GBLM τf determines the timescale of the spike-timing dependent covariates Xw, while for adaptive filtering τf defines the transition matrix F. In the analysis that follows we make the simplifying assumption that the forgetting factor is fixed at τf = 60 s. Additionally, during adaptive filtering we fit the variance of the process noise Q by maximizing the cross-validated log-likelihood. Analyzing the 75 most strongly correlated pairs of neurons during the 180 minute recording (2-fold cross-validation), we find that the GBLM and Adaptive Filtering both increase model accuracy (Fig 4A). However, the resulting modification functions do not show any of the structure previously seen in intracellular experiments. In both individual pairs and the average across pairs (Fig 4B,C) the modification functions are noisy and generally not significantly different from zero. Additionally, we find that the increase in model accuracy provided by adding non-stationary coupling to the traditional, stable coupling GLM does not appear to be correlated with the strength of coupling itself. These results suggest that STDP may be difficult to detect in vivo, requiring even longer recordings or, possibly, different electrode configurations. Particularly, with the electrode array used here (Utah array, 400 µm electrode spacing), neurons are unlikely to be mono-synaptically connected.
4 Discussion Here we have presented two methods for estimating spike-timing dependent modification functions from multiple spike train data: an indirect method based on point-process adaptive filtering and a direct method using a generalized bilinear model. We have shown that each of these methods is able to accurately reconstruct both ongoing fluctuations in synaptic weight and modification functions in simulation. However, there are several reasons that detecting similar STDP in vivo may be difficult. In vivo, pairs of neurons do not act in isolation. Rather, each neuron receives input from thousands of other neurons, inputs which may confound estimation of the coupling between a given pair. It would be relatively straightforward to include multiple pre-synaptic neurons in the model using either stable coupling [6, 10] or time-varying, spike-timing dependent coupling. Additionally, unobserved common input or external covariates, such as hand position, could also be included in the model. These extra covariates should further improve spike prediction accuracy, and could, potentially, result in better estimation of STDP modification functions. Despite these caveats the statistical description of time-varying coupling presented here shows promise. Although neurons in vivo are not guaranteed to be anatomically connected and estimated coupling must always be interpreted cautiously [11], including synaptic modification terms does improve model accuracy on in vivo data. Several experimental studies have even suggested that understanding plasticity may not require well-isolated pairs of neurons. The effects of STDP may be visible through poly-synaptic potentiation [23, 24, 25]. In analyzing real data our ability to detect STDP may vary widely across experimental preparations.
For instance, recordings from hippocampal slice or dissociated neuronal cultures may reveal substantially more plasticity than in vivo cortical recordings and are less likely to be confounded by unobserved common input. There are a number of extensions to the basic Adaptive Filtering and GBLM frameworks that may yield more accurate estimation and more biophysically realistic models of STDP. The over-fitting observed in the GBLM could be reduced by regularizing the modification function, and Adaptive Smoothing (using both forward and backward updates) will likely out-perform Adaptive Filtering as used here. By changing the functional form of the covariates included in the GBLM we may be able to distinguish between standard models of STDP where spike pairs are treated independently and other models such as those with self-normalization [16] or where spike triplets are considered [26]. Ultimately, the framework presented here extends recent GLM-based approaches to modeling coupling between neurons to allow for time-varying coupling between neurons and, particularly, changes in coupling related to spike-timing dependent plasticity. Although it may be difficult to resolve the small effects of STDP in vivo, both improvements in recording techniques and statistical methods promise to make the observation of these ongoing changes possible. References [1] LF Abbott and SB Nelson. Synaptic plasticity: taming the beast. Nature Neuroscience, 3:1178–1183, 2000. [2] G Bi and M Poo. Synaptic modification by correlated activity: Hebb’s postulate revisited. Annual Review of Neuroscience, 24(1):139–166, 2001. [3] H Markram, J Lubke, M Frotscher, and B Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213–215, 1997. [4] S Song, KD Miller, and LF Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9):919–926, 2000.
[5] V Jacob, DJ Brasier, I Erchova, D Feldman, and DE Shulz. Spike timing-dependent synaptic depression in the in vivo barrel cortex of the rat. The Journal of Neuroscience, 27(6):1271, 2007.
[6] Z Chen, D Putrino, S Ghosh, R Barbieri, and E Brown. Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, (99):1–1, 2010.
[7] S Gerwinn, JH Macke, M Seeger, and M Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Advances in Neural Information Processing Systems, 20, 2007.
[8] M Okatan, MA Wilson, and EN Brown. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17(9):1927–1961, 2005.
[9] L Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[10] JW Pillow, J Shlens, L Paninski, A Sher, AM Litke, EJ Chichilnisky, and EP Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[11] IH Stevenson, JM Rebesco, LE Miller, and KP Kording. Inferring functional connections between neurons. Current Opinion in Neurobiology, 18(6):582–588, 2008.
[12] W Truccolo, UT Eden, MR Fellows, JP Donoghue, and EN Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074–1089, 2005.
[13] W Wu and NG Hatsopoulos. Real-time decoding of nonstationary neural activity in motor cortex. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 16(3):213–222, 2008.
[14] S Grun, M Diesmann, and A Aertsen. Unitary events in multiple single-neuron spiking activity: II. Nonstationary data. Neural Computation, 14(1):81–119, 2002.
[15] V Ventura, C Cai, and RE Kass. Statistical assessment of time-varying dependency between two neurons. Journal of Neurophysiology, 94(4):2940–2947, 2005.
[16] MCW Van Rossum, GQ Bi, and GG Turrigiano. Stable Hebbian learning from spike timing-dependent plasticity. Journal of Neuroscience, 20(23):8812, 2000.
[17] B Babadi and LF Abbott. Intrinsic stability of temporally shifted spike-timing dependent plasticity. PLoS Comput Biol, 6(11):e1000961, 2010.
[18] MA Woodin, K Ganguly, and M Poo. Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl- transporter activity. Neuron, 39(5):807–820, 2003.
[19] EN Brown, DP Nguyen, LM Frank, MA Wilson, V Solo, and A Sydney. An analysis of neural receptive field dynamics by point process adaptive filtering. Proc Natl Acad Sci USA, 98:12261–12266, 2001.
[20] UT Eden, LM Frank, R Barbieri, V Solo, and EN Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation, 16(5):971–998, 2004.
[21] MB Ahrens, L Paninski, and M Sahani. Inferring input nonlinearities in neural encoding models. Network: Computation in Neural Systems, 19(1):35–67, 2008.
[22] N Hatsopoulos, J Joshi, and JG O’Leary. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. Journal of Neurophysiology, 92(2):1165–1174, 2004.
[23] G Bi and M Poo. Distributed synaptic modification in neural networks induced by patterned stimulation. Nature, 401(6755):792–795, 1999.
[24] A Jackson, J Mavoori, and EE Fetz. Long-term motor cortex plasticity induced by an electronic neural implant. Nature, 444(7115):56–60, 2006.
[25] JM Rebesco, IH Stevenson, K Kording, SA Solla, and LE Miller. Rewiring neural interactions by microstimulation. Frontiers in Systems Neuroscience, 4:12, 2010.
[26] RC Froemke and Y Dan. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416(6879):433–438, 2002.
Bayesian Spike-Triggered Covariance Analysis

Il Memming Park, Center for Perceptual Systems, University of Texas at Austin, Austin, TX 78712, USA. memming@austin.utexas.edu
Jonathan W. Pillow, Center for Perceptual Systems, University of Texas at Austin, Austin, TX 78712, USA. pillow@mail.utexas.edu

Abstract

Neurons typically respond to a restricted number of stimulus features within the high-dimensional space of natural stimuli. Here we describe an explicit model-based interpretation of traditional estimators for a neuron’s multi-dimensional feature space, which allows for several important generalizations and extensions. First, we show that traditional estimators based on the spike-triggered average (STA) and spike-triggered covariance (STC) can be formalized in terms of the “expected log-likelihood” of a Linear-Nonlinear-Poisson (LNP) model with Gaussian stimuli. This model-based formulation allows us to define maximum-likelihood and Bayesian estimators that are statistically consistent and efficient in a wider variety of settings, such as with naturalistic (non-Gaussian) stimuli. It also allows us to employ Bayesian methods for regularization, smoothing, sparsification, and model comparison, and provides Bayesian confidence intervals on model parameters. We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference. We validate these methods using neural data recorded extracellularly from macaque primary visual cortex.

1 Introduction

A central problem in systems neuroscience is to understand the probabilistic relationship between sensory stimuli and neural responses. Most neurons in the early sensory pathway are only sensitive to a low-dimensional space of stimulus features, and ignore the other axes in the high-dimensional space of stimuli.
Dimensionality reduction therefore plays an important role in neural characterization. The most popular dimensionality-reduction method for neural data uses the first two moments of the spike-triggered stimulus distribution: the spike-triggered average (STA) and the eigenvectors of the spike-triggered covariance (STC) [1–5]. These features are interpreted as filters or “receptive fields” that form the first stage in a linear-nonlinear-Poisson (LNP) cascade model [6,7]. In this model, stimuli are projected onto a bank of linear filters, whose outputs are combined via a nonlinear function, which drives spiking as an inhomogeneous Poisson process (see Fig. 1). Prior work has established the conditions for statistical consistency and efficiency of the STA and STC as feature space estimators [1, 2, 8, 9]. However, these moment-based estimators have not yet been interpreted in terms of an explicit probabilistic encoding model. We formalize that relationship here, building on a recent information-theoretic treatment of spike-triggered average and covariance analysis (iSTAC) [9]. Our general approach is inspired by probabilistic and Bayesian formulations of principal components analysis (PCA) and extreme components analysis (XCA), moment-based methods for linear dimensionality reduction that are closely related to STC analysis, but which were only more recently formulated in terms of an explicit probabilistic model [10–14].

Figure 1: Schematic of the linear-nonlinear-Poisson (LNP) neural encoding model [6]: linear filters, nonlinearity, Poisson spiking.

Here we show, first of all, that STA and STC arise naturally from the expected log-likelihood of an LNP model with an “exponentiated-quadratic” nonlinearity, where the expectation is taken with respect to a Gaussian stimulus distribution. This insight allows us to formulate exact maximum-likelihood estimators that apply to arbitrary stimulus distributions.
We then introduce Bayesian methods for regularizing and smoothing receptive field estimates, and an approximate empirical Bayes method for selecting the feature space dimensionality, which obviates nested hypothesis tests, bootstrapping, or cross-validation based methods [5]. Finally, we generalize these estimators to accommodate LNP models with arbitrary elliptically symmetric nonlinearities. The resulting model class provides a richer and more flexible model of neural responses but can still recover a high-dimensional feature space (unlike more general information-theoretic estimators [8, 15], which do not scale easily to more than 2 filters). We apply these methods to a variety of simulated datasets and to responses from neurons in macaque primary visual cortex stimulated with binary white noise stimuli [16].

2 Model-based STA and STC

In a typical neural characterization experiment, the experimenter presents a train of rapidly varying sensory stimuli and records a spike train response. Let x denote a D-dimensional vector containing the spatio-temporal stimulus affecting a neuron’s scalar spike response y in a single time bin. A principal goal of neural characterization is to identify β, a low-dimensional projection matrix such that β^T x captures the neuron’s dependence on the stimulus x. The columns of β can be regarded as linear receptive fields that provide a basis for the neural feature space. The methods we consider here all assume that neural responses can be described by an LNP cascade model (Fig. 1). Under this model, the conditional probability of a response y|x is Poisson with rate f(β^T x), where f is a function mapping feature space to instantaneous spike rate.¹

2.1 STA and STC analysis

The STA and the STC matrix are the (empirical) first and second moments, respectively, of the spike-triggered stimulus ensemble {x_i | y_i}_{i=1}^N.
They are defined as:

    STA:  μ = (1/n_sp) Σ_{i=1}^N y_i x_i,    STC:  Λ = (1/n_sp) Σ_{i=1}^N y_i (x_i − μ)(x_i − μ)^T,    (1)

where n_sp = Σ_i y_i is the number of spikes and N is the total number of time bins. Traditional STA/STC analysis provides an estimate for the feature space basis β consisting of: (1) μ, if it is significantly different from zero; and (2) the eigenvectors of Λ whose eigenvalues are significantly smaller or larger than those of the prior stimulus covariance Φ = E[x x^T]. This estimate is provably consistent only in the case of stimuli drawn from a spherically symmetric (for STA) or independent Gaussian distribution (for STC) [17].²

¹Here f has units of spikes/bin, for some fixed bin size ∆. In the limit ∆ → 0, the model output is an inhomogeneous Poisson process, but we use discrete time bins here for concreteness.
²For elliptically symmetric or colored Gaussian stimuli, a consistent estimate requires whitening the stimuli by Φ^{−1/2} and then multiplying the estimated features (STA and STC eigenvectors) again by Φ^{−1/2} (see [5]).

2.2 Equivalent model-based formulation

Motivated by [9], we consider an LNP model where the spike rate is defined by an exponentiated general quadratic function:

    f(x) = exp( (1/2) x^T C x + b^T x + a ),    (2)

where C is a symmetric matrix, b is a vector, and a is a scalar. Then the log-likelihood per spike, the conditional log-probability of the data divided by the number of spikes, is

    L = (1/n_sp) Σ_i log P(y_i | C, b, a, x_i) = (1/n_sp) Σ_i ( y_i log f(x_i) − f(x_i) )    (3)
      = (1/2) Tr[CΛ] + (1/2) μ^T C μ + b^T μ + a − (N/n_sp) e^a [ (1/N) Σ_i exp( (1/2) x_i^T C x_i + b^T x_i ) ].    (4)

If the stimuli are drawn from x ∼ N(0, Φ), a zero-mean Gaussian with covariance Φ, then the expression in square brackets (eq. 4) will converge to its expectation, given by:

    E[ e^{(1/2) x^T C x + b^T x} ] = |I − ΦC|^{−1/2} exp( (1/2) b^T (Φ^{−1} − C)^{−1} b ),    (5)

so long as (Φ^{−1} − C) is invertible and positive definite.³ Substituting this expectation (eq. 5) into the log-likelihood (eq.
4) yields a quantity we call the expected log-likelihood L̃, which can be expressed in terms of the STA, STC, Φ, and the model parameters:

    L̃ = (1/2) Tr[CΛ] + (1/2) μ^T C μ + b^T μ + a − (N/n_sp) |I − ΦC|^{−1/2} exp( (1/2) b^T (Φ^{−1} − C)^{−1} b + a ).    (6)

Maximizing this expression yields expected-ML estimates (see online supplement for derivation):

    C̃_ml = Φ^{−1} − Λ^{−1},    b̃_ml = Λ^{−1} μ,    ã_ml = log( (n_sp/N) |ΦΛ^{−1}|^{1/2} ) − (1/2) μ^T Φ^{−1} Λ^{−1} μ.    (7)

Thus, for an LNP model with exponentiated-quadratic nonlinearity stimulated with Gaussian noise, the (expected) maximum likelihood estimates can be obtained in closed form from the STA, STC, stimulus covariance, and mean spike rate n_sp/N. Several features of this solution are worth remarking on. First, if the quadratic component C = 0, then b̃_ml = Φ^{−1} μ, the whitened STA (as in [17]). Second, if the stimuli are white, meaning Φ = I, then C̃_ml = I − Λ^{−1}, which has the same eigenvectors as the STC matrix. Third, if we plug the expected-ML estimates back into the log-likelihood, we get

    L̃ = (1/2) ( Tr[ΛΦ^{−1}] + μ^T Φ^{−1} μ − log |ΛΦ^{−1}| ) + const,    (8)

which (for Φ = I) is the information-theoretic spike-triggered average and covariance (iSTAC) cost function [9]. The iSTAC estimator finds the subspace that maximizes the “single-spike information” [18] under a Gaussian model of the raw and spike-triggered stimulus distributions (which coincides with eq. 8), but its precise relationship to maximum likelihood has not been shown previously.

2.3 Generalizing to non-Gaussian stimuli

The conditions for which the STA and STC provide asymptotically efficient estimators for a neural feature space are clear from the derivations above: if the stimuli are Gaussian (a condition which is rarely if ever met in practice), the STA is optimal when the nonlinearity is f(x) = exp(b^T x + a) (as shown in [8]); the STC is optimal when f(x) = exp(x^T C x + a) (as shown in [9]). However, the maximum of the exact model log-likelihood (eq.
4) yields a consistent and asymptotically efficient estimator even when stimuli are not Gaussian. Numerically optimizing this loss function is computationally more expensive than computing the STA and STC eigendecomposition, but the log-likelihood is jointly concave in the model parameters (C, b, a), meaning ML estimates can be obtained rapidly by convex optimization [19].

³If it is not, then this expectation does not exist, and simulations of the corresponding model will produce impossibly high spike counts, with STA and STC dominated by the response to a single stimulus.

For cases where x is high-dimensional, it is easier to directly estimate a low-rank representation of C, rather than optimize the entire D × D matrix. We therefore define a rank-d representation for C:

    C = Σ_{i=1}^d w_i s_i w_i^T = W S W^T,    (9)

where W is a matrix whose columns w_i are features, s_i ∈ {−1, 1} are constants that control the shape of the nonlinearity along each axis in feature space (−1 for suppressive, +1 for excitatory), and S is a diagonal matrix containing the s_i along the diagonal. (We will assume the s_i are fixed using the sign of the eigenvalues of the expected-ML estimate C̃_ml, and not varied thereafter.) The feature space of the resulting model is spanned by b and the columns of W. We refer to ML estimators for (b, W) as maximum-likelihood STA and STC (or exact ML, as opposed to expected-ML estimates from the moment-based formulas (eq. 7); see Figs. 2–3 for comparisons). These estimates will closely match the standard STA and STC-based feature space when stimuli are Gaussian, but (as maximum-likelihood estimates) are also consistent and asymptotically efficient for arbitrary stimuli.
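To make the moment computations (eq. 1) and the closed-form expected-ML estimates (eq. 7) concrete, here is a minimal NumPy sketch; the function and variable names are ours, not from the paper:

```python
import numpy as np

def sta_stc(X, y):
    """Empirical STA (mu) and STC (Lam) from eq. (1).

    X : (N, D) stimulus matrix, one stimulus vector per time bin.
    y : (N,)  spike counts per bin.
    """
    nsp = y.sum()                          # total number of spikes
    mu = (y @ X) / nsp                     # spike-triggered average
    Xc = X - mu                            # center stimuli on the STA
    Lam = (y[:, None] * Xc).T @ Xc / nsp   # spike-triggered covariance
    return mu, Lam

def expected_ml(mu, Lam, Phi, nsp, N):
    """Closed-form expected-ML estimates (C, b, a) from eq. (7)."""
    Lam_inv = np.linalg.inv(Lam)
    Phi_inv = np.linalg.inv(Phi)
    C = Phi_inv - Lam_inv                  # C~_ml
    b = Lam_inv @ mu                       # b~_ml
    # a~_ml = log((nsp/N) |Phi Lam^-1|^(1/2)) - (1/2) mu' Phi^-1 Lam^-1 mu
    a = (np.log(nsp / N)
         + 0.5 * np.linalg.slogdet(Phi @ Lam_inv)[1]
         - 0.5 * mu @ Phi_inv @ Lam_inv @ mu)
    return C, b, a
```

As sanity checks matching the text: with unit weights y, `sta_stc` reduces to the ordinary sample mean and covariance, and for white stimuli (Φ = I) the returned C is I − Λ^{−1}.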
An additional difference between maximum-likelihood and standard STA/STC analysis is that the parameters (b, W) have meaningful units of length: the vector norm of b determines the amplitude of the “linear” contribution to the neural response (via b^T x), while the norm of the columns in W determines the amplitude of “symmetric” excitatory or suppressive contributions to the response (via x^T W S W^T x). Shrinking these vectors (e.g., with a prior) has the effect of reducing their influence in the model, and they drop out of the model entirely if we shrink them to zero (a fact that we will exploit in the next section). By contrast, the standard STA and STC eigenvectors are usually taken as unit vectors, providing a basis for the neural feature space in which the nonlinearity (“N” stage) must still be estimated. We are free to normalize the ML estimates (b̂, Ŵ) and estimate an arbitrary nonlinearity in a similar manner, but it is noteworthy that the parameters (a, b, W) specify a complete encoding model in and of themselves.

3 Bayesian STC

Now that we have defined an explicit model and likelihood function underlying STA and STC analysis, we can straightforwardly apply Bayesian methods for estimation, prediction, error bars, model comparison, etc., by introducing a prior over the model parameters. Bayesian methods can be especially useful in cases where we have prior information (e.g., about smoothness or sparseness of neural features [20–25]), and in general have attractive theoretical properties for high-dimensional inference problems [26–28]. Here we consider two types of priors: (1) a smoothing prior, which holds the filters to be smooth in space/time; and (2) a sparsifying prior, which we employ to directly estimate the feature space dimensionality (i.e., the number of significant filters). We apply these priors to b and the columns of W, in conjunction with either the exact (for accuracy) or expected (for speed) log-likelihood functions defined above.
We refer to the resulting estimators as Bayesian STC (or “BSTC”). We perform BSTC estimation by maximizing the sum of log-likelihood and log-prior to obtain maximum a posteriori (MAP) estimates of the filters and constant a. It is worth noting that since the derivatives of the expected likelihood (eq. 6) are also written in terms of the STA/STC, optimization using the expected log-likelihood can be carried out more efficiently: it reduces the cost of each iteration by a factor of N compared to optimizing the exact likelihood (eq. 3).

3.1 Smoothing prior

Neural receptive fields are generally smooth, so a prior that encourages this tendency will tend to improve performance. Receptive field estimates under such a prior will be smooth unless the likelihood provides sufficient evidence for jaggedness. To encourage smoothness, we placed a zero-mean Gaussian prior on the second-order differences of each filter [29]:

    L w ∼ N(0, φ^{−1} I),    (10)

where L is the discrete Laplacian operator and φ is a hyperparameter controlling the smoothness of feature vectors. This is equivalent to imposing a penalty (given by (1/2) φ w_i^T L L^T w_i) on the squared second derivatives of b and W in the optimization function. A larger φ implies a narrower Gaussian prior on these differences, hence a stronger preference for smooth filters.

Figure 2: Estimated filters and error rates for various estimators. An LNP model with 4 orthogonal 32-element filters (see text) was simulated with two types of stimuli (A–B: white Gaussian; C: sparse binary). Mean firing rate 0.16 spk/s. (A) Filters estimated from 10,000 samples. STA/STC filters are normalized to match the norm of the true filters. (B) Convergence to the true filter under each method, Gaussian stimuli. (C) Convergence for sparse binary stimuli.
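The second-difference roughness penalty can be sketched as follows. This is our illustration, using a rectangular second-difference operator so that the penalty is (1/2) φ ||L w||², a common variant of the quadratic form above:

```python
import numpy as np

def second_diff_operator(D):
    """Discrete 1-D second-difference (Laplacian) operator, shape (D-2, D)."""
    L = np.zeros((D - 2, D))
    for i in range(D - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]   # w[i] - 2 w[i+1] + w[i+2]
    return L

def roughness_penalty(w, phi):
    """Negative log-prior (up to a constant): 0.5 * phi * ||L w||^2."""
    L = second_diff_operator(len(w))
    return 0.5 * phi * np.sum((L @ w) ** 2)
```

A linear ramp has zero penalty (all second differences vanish), while a jagged filter is penalized in proportion to φ, matching the stated preference for smooth filters.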
For simplicity, we assumed all filters came from the same prior, resulting in a single hyperparameter φ for all filters, and used cross-validation to choose an appropriate φ for each dataset. To illustrate the effects of this prior, we simulated an example dataset from an LNP neuron with exponentiated-quadratic nonlinearity and four 32-element, 1-dimensional (temporal) filters. The filter shapes were given by orthogonalized, randomly placed Gaussians (Fig. 2). We fixed the dimensionality of our feature space estimates to be the same as that of the true model, since our focus was the quality of each corresponding filter estimate. For Gaussian stimuli, we found that the classical STA/STC, expected-ML, and exact-ML estimates were indistinguishable (Fig. 2). However, for “sparse” binary stimuli (3 of the 32 pixels set randomly to ±1), for which STA/STC and expected-ML estimates are no longer consistent, we found significantly better performance from the exact-ML estimates (Fig. 2C). Most importantly, for both Gaussian and sparse stimuli alike, the smoothing prior provided a large improvement in the quality of feature space estimates, achieving similar error with 2 orders of magnitude fewer stimuli.

3.2 Automatic selection of feature space dimensionality

While smoothing regularizes receptive field estimates by penalizing filter roughness, a perhaps more critical aspect of the STA/STC model is its vast number of possible parameters due to uncertainty in the number of filters. Our approach to this problem was inspired by Bayesian PCA [10], a method for automatically choosing the number of meaningful principal components using a “feature-selection prior” designed to encourage sparsity.
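A simulation in the spirit of the one described above can be sketched as follows. This is our simplified version: we use a purely suppressive quadratic term (C = −W W^T) so that the rate stays integrable (cf. footnote 3), rather than the paper's mixed excitatory/suppressive filter set:

```python
import numpy as np

def simulate_lnp(W_true, b, a, N, rng):
    """Simulate an LNP neuron with exponentiated-quadratic nonlinearity
    and suppressive quadratic term C = -W W^T (a stability simplification).

    Returns (X, y): Gaussian stimuli and Poisson spike counts per bin."""
    D = W_true.shape[0]
    X = rng.standard_normal((N, D))
    # z(x) = -0.5 ||W^T x||^2 + b^T x + a  (i.e., 0.5 x'Cx + b'x + a)
    z = -0.5 * np.sum((X @ W_true) ** 2, axis=1) + X @ b + a
    y = rng.poisson(np.exp(z))
    return X, y
```

Such synthetic data can then be fed to the STA/STC moment computations to check recovery of the suppressive subspace.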
The basic idea behind this approach is that a zero-mean Gaussian prior on each filter w_i (separately controlled by a hyperparameter α_i) can be used to “shrink to zero” any components that do not contribute meaningfully to the evidence, just as in automatic relevance determination (ARD), also known as sparse Bayesian learning [27, 30]. Unlike PCA, we seek to preserve components of the STC matrix with both large and small eigenvalues, which correspond to excitatory and suppressive filters, respectively. One solution to this problem, Bayesian Extreme Components Analysis [14], preserves large and small eigenvalues of the covariance matrix, but does not incorporate additional priors on filter shape, and has not yet been formulated for our (Poisson) likelihood function. Instead, we address the problem by using the sign of the diagonal elements in S to determine whether a feature w produces a positive or negative eigenvalue in C (eq. 9). (Recall that the eigenvalues of C = Φ^{−1} − Λ^{−1} are positive and negative, while those of the STC matrix Λ are strictly positive.) Reparametrizing the STC in terms of C therefore allows us to apply a variant of the Bayesian PCA algorithm directly to b and the columns of W. The details of our approach are as follows. We put the ARD prior on each column of W:

    w_i ∼ N(0, α_i^{−1} I),    (11)

Figure 3: Goodness-of-fit of estimated models and the estimated dimension as a function of the number of samples. The same simulation parameters as in Fig. 2 were used. Left: Information per spike (normalized difference in log-likelihoods) captured by different estimates. Models were estimated from 10^3, 10^4, and 5 × 10^4 stimuli, respectively.
Right: Estimated number of dimensions as a function of the number of training samples. When both smoothing and ARD priors are used, the variability rapidly diminishes to near zero. where ↵i is a hyperparameter controlling the prior variance of wi. We impose the same prior on b, with an additional hyperparamter ↵0, resulting in (D + 1) hyperparameters for the complete model. We initialize b to its ML estimate and the wi to the eigenvectors of ˜Cml, scaled by the square root of their eigenvalues. Then, we optimize the parameters and hyperparameters in a similar fashion to the Bayesian PCA algorithm [10]: we alternate between maximizing the posterior for the parameters (a, b, W) given hyperparameters ↵, and evidence optimization (arg max↵Pr[(x, y)|↵]) to update ↵. Since a closed form for the evidence is not known, we use the approximate fixed point update rule developed in [10]: ↵new i D |||wi||2 . This update is valid when each element of the receptive field wi is well defined (non-zero), otherwise it overestimates the corresponding ↵i. The algorithm begins with all ↵i set to zero (infinite prior variance), giving ML estimates for the parameters. Subsequent updates will cause some ↵i to grow without bound, shrinking the prior variance of the corresponding feature vector wi until it drops out of the model entirely as ↵i ! 1. The remaining wj, for which ↵j remain finite, define the feature space estimate. Note that these updates are fast (especially with expected log-likelihood), providing a much less computationally intensive estimate of feature space dimensionality than bootstrap-based methods [5]. Figure 3 (left) shows that ARD prior greatly increases the model goodness-of-fit (likelihood on test data), and is synergistic with the smoothing prior defined above. The improvement (relative to ML estimates) is greatest when the number of samples is small, and it enhances both expected and exact likelihood estimates. 
We compared this method for estimating feature space dimensionality with a more classical (non-Bayesian) approach based on cross-validation. We first fit a full-rank model with exact likelihood, and built a sparse model by adding filters from this set greedily until the likelihood of test data began to decrease. The resulting estimate of dimension is underestimated when there is not enough data, and even with large amount of data, it has high variance (Fig. 3, right). In comparison, our ARD-based estimate converged quickly to the correct dimension and exhibited smaller variability. When both smoothing and ARD priors were used, the variability decreased markedly and always achieved the correct dimension even for moderate amounts of data. One additional advantage of Bayesian approach is that it can use all the available data; under crossvalidation, some proportion of data is needed to form the test set (in this example we provided extra data for this method only). 4 Extension: the elliptical-LNP model Finally, the model and inference procedures we have described above can be extended to a much more general class of response functions with zero additional computational cost. We can replace the exponential function which operates on the quadratic form in the model nonlinearity (eq. 2) 6 0 2 4 6 8 10 12 14 rate (spk/bin) data exp(x) log(1+exp(x)) spline fit ï 0  Figure 4: 1-D nonlinear functions g mapping z, the output of the quadratic stage, to spike rate for a V1 complex cell [16]. The exact-ML filter estimate for W and b were obtained using the smoothing BSTC with an exponential nonlinearity. (Final filter estimates for this cell shown in Fig. 5). 
The quadratic projection z was computed using the filter estimates, and is plotted against the observed spike counts (gray circles), a histogram-based estimate of the nonlinearity (green diamonds), the exponential nonlinearity (black trace), a well-known alternative nonlinearity log(1 + e^z) (red), and a cubic spline estimated using 7 knots (green trace). We fixed the fitted cubic spline nonlinearity and then refit the filters, resulting in an estimate of the elliptical-LNP model.

with an arbitrary function g(·), resulting in a model class that includes any elliptically symmetric mapping of the stimulus to spike rate. We call this the elliptical-LNP model. The elliptical-LNP model can be formalized by writing the nonlinearity f(x) (depicted in Fig. 1) as the composition of two nonlinear functions: a quadratic function that maps the high-dimensional stimulus to the real line, z(x) = (1/2) x^T C x + b^T x + a, and a 1-D nonlinearity g(z). The full nonlinearity is thus f(x) = g(z(x)). Although the LNP model with exponential nonlinearity has been widely adopted in neuroscience for its simplicity, the actual nonlinearity of neural systems is often sub-exponential. Moreover, the effect of the nonlinearity is even more pronounced in the exponentiated-quadratic function, and hence it may be helpful to use a sub-exponential function g. Figure 4 shows the nonlinearity of an example neuron from V1 (see next section) compared to g(z) = e^z (the assumption implicit in STA/STC), a more linear function g(z) = log(1 + e^z), and a cubic spline fit by maximum likelihood. The likelihood given by eq. 3 can be optimized efficiently as long as g and g′ can be computed efficiently. The log-likelihood is concave in (a, b, C) so long as g obeys the standard regularity conditions (convex and log-concave), but we did not impose those conditions here. For fast optimization, we first used the exponentiated-quadratic nonlinearity as an initialization (expected then exact ML), then we refined the model with a spline nonlinearity.
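Putting the two stages together, the elliptical-LNP rate is g applied to the quadratic projection. A minimal sketch (our function names), defaulting to the sub-exponential softplus g(z) = log(1 + e^z) from Fig. 4; a fitted spline would take g's place:

```python
import numpy as np

def elliptical_lnp_rate(X, C, b, a, g=None):
    """Rate f(x) = g(z(x)) with z(x) = 0.5 x'Cx + b'x + a.

    X : (N, D) stimuli; C : (D, D) symmetric; b : (D,); a : scalar.
    g : 1-D nonlinearity; defaults to the softplus log(1 + e^z)."""
    if g is None:
        g = lambda z: np.logaddexp(0.0, z)          # stable log(1 + e^z)
    z = 0.5 * np.einsum('ni,ij,nj->n', X, C, X) + X @ b + a
    return g(z)
```

Passing `g=np.exp` recovers the exponentiated-quadratic model of eq. (2) as a special case, which is how the text's fast initialization can share this code path.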
5 Application to neural data

We applied BSTC to data from a V1 complex cell (data published in [16]). The stimulus consisted of oriented binary white noise (“flickering bars”) aligned with the cell’s preferred orientation. We selected a cell (544l029.p21) that was reported to have a large set of filters, to illustrate the power of our technique. The size of the receptive field was chosen to be 16 bars × 10 time bins, yielding a 160-dimensional stimulus space. Three features of these data make BSTC appropriate: (1) the stimulus is non-Gaussian; (2) the nonlinearity is not exponential (Fig. 4); (3) the filters are smooth in space and time (Fig. 5). We estimated the nonlinearity using a cubic spline, and applied smoothing BSTC to 10^4 samples presented at 100 Hz (Fig. 5, top). The ARD-prior BSTC estimate trained on 2 × 10^5 stimuli preserved 14 filters (Fig. 5, bottom). The quality of the filters is qualitatively close to that obtained by STA/STC. However, the resulting model has better overall goodness-of-fit, as well as a significant improvement over the exact-ML model for each reduced-dimension model (Fig. 6). To achieve the same level of fit as BSTC with 2 filters, the exact-ML-based sparse model required 6 additional filters (dotted line). We also compared BSTC to a generalized linear model (GLM) with the same number of linear and quadratic filters fit by STA/STC (a method described previously by [7]). This approach places a prior over the weights on squared filter outputs, but not on the filters themselves. On a test set,

Figure 5: Estimating visual receptive fields from a complex cell, comparing excitatory and suppressive filters (and b) from STA/STC against BSTC and BSTC+ARD. Each image corresponds to a normalized filter of 16 spatial pixels (horizontal) by 10 time bins (vertical). (top) The smoothing prior recovers better filters: Bayesian STC (BSTC) with smoothing prior and fixed spline nonlinearity applied to a fixed number of filters. (bottom) Sparsification determines the number of filters.
BSTC with ARD, smoothing, and spline nonlinearity recovers 14 receptive fields out of 160.

Figure 6: Goodness of model fits (nats/spk, train and test) as a function of the number of dimensions, from the exact-ML solution with exponential nonlinearity compared to BSTC with a fixed spline nonlinearity and smoothing prior (2 × 10^5 samples). Filters are added in the order that increases the likelihood on the training set the most. The corresponding filters are visualized in Fig. 5.

BSTC outperformed the GLM on all cells in the dataset, achieving 34% more bits/spike (normalized log-likelihood) over a population of 50 cells.

6 Conclusion

We have provided an explicit, probabilistic, model-based framework that formalizes the classical moment-based estimators (STA, STC) and a more recent information-theoretic estimator (iSTAC) for neural feature spaces. The maximum of the “expected log-likelihood” under this model, where the expectation is taken with respect to a Gaussian stimulus distribution, corresponds precisely to the moment-based estimators for uncorrelated stimuli. A model-based formulation allows us to compute exact maximum-likelihood estimates when stimuli are non-Gaussian, and we have incorporated priors in conjunction with both expected and exact likelihoods to achieve Bayesian methods for smoothing and feature selection (estimation of the number of filters). The elliptical-LNP model extends BSTC analysis to a richer class of nonlinear response models. Although the assumption of elliptical symmetry makes it less general than information-theoretic estimators such as maximally informative dimensions (MID) [8, 15], it has significant advantages in computational efficiency, number of local optima, and suitability for high-dimensional feature spaces.
The elliptical-LNP model may also be easily extended to incorporate spike-history effects by adding linear projections of the neuron’s spike history as inputs, as in the generalized linear model (GLM) [9,17,25,31]. We feel the synthesis of multi-dimensional nonlinear stimulus sensitivity (as described here) and non-Poisson, history-dependent spiking presents a promising tool for unlocking the statistical structure of the neural code.

References

[1] J. Bussgang. Crosscorrelation functions of amplitude-distorted Gaussian signals. RLE Technical Reports, 216, 1952.
[2] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Comput. Neural Syst., 12:199–213, 2001.
[3] R. de Ruyter and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system. Proc. R. Soc. Lond. B, 234:379–414, 1988.
[4] O. Schwartz, E. J. Chichilnisky, and E. P. Simoncelli. Characterizing neural gain control using spike-triggered covariance. Adv. in Neural Information Processing Systems, pages 269–276, 2002.
[5] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. J. Vision, 6(4):484–507, 2006.
[6] E. P. Simoncelli, J. Pillow, L. Paninski, and O. Schwartz. Characterization of neural responses with stochastic stimuli. The Cognitive Neurosciences, III, chapter 23, pages 327–338. MIT Press, 2004.
[7] S. Gerwinn, J. Macke, M. Seeger, and M. Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Adv. in Neural Information Processing Systems 20, pages 529–536. MIT Press, 2008.
[8] L. Paninski. Convergence properties of some spike-triggered analysis techniques. Network: Comput. Neural Syst., 14:437–464, 2003.
[9] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. J. Vision, 6(4):414–428, 2006.
[10] C. M. Bishop. Bayesian PCA. Adv.
in Neural Information Processing Systems, pages 382–388, 1999.
[11] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, pages 611–622, 1999.
[12] T. P. Minka. Automatic choice of dimensionality for PCA. NIPS, pages 598–604, 2001.
[13] M. Welling, F. Agakov, and C. K. I. Williams. Extreme components analysis. Adv. in Neural Information Processing Systems 16. MIT Press, 2004.
[14] Y. Chen and M. Welling. Bayesian extreme components analysis. IJCAI, 2009.
[15] T. Sharpee, N. C. Rust, and W. Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Comput, 16(2):223–250, Feb 2004.
[16] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945–956, Jun 2005.
[17] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Comput. Neural Syst., 15(4):243–262, November 2004.
[18] N. Brenner, S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. Synergy in a neural code. Neural Comput, 12(7):1531–1552, Jul 2000.
[19] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[20] F. Theunissen, S. David, N. Singh, A. Hsu, W. Vinje, and J. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Comput. Neural Syst., 12:289–316, 2001.
[21] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, 15, 2003.
[22] S. V. David, N. Mesgarani, and S. A. Shamma. Estimating sparse spectro-temporal receptive fields with natural stimuli. Network: Comput. Neural Syst., 18(3):191–212, 2007.
[23] I. H. Stevenson, J. M. Rebesco, N. G. Hatsopoulos, Z. Haga, L. E. Miller, and K. P. Körding.
Bayesian inference of functional connectivity and network structure from spikes. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 17(3):203–213, 2009.
[24] S. Gerwinn, J. H. Macke, and M. Bethge. Bayesian inference for generalized linear models for spiking neurons. Frontiers in Computational Neuroscience, 2010.
[25] A. Calabrese, J. W. Schumacher, D. M. Schneider, L. Paninski, and S. M. N. Woolley. A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds. PLoS One, 6(1):e16104, 2011.
[26] W. James and C. Stein. Estimation with quadratic loss. 4th Berkeley Symposium on Mathematical Statistics and Probability, 1:361–379, 1960.
[27] M. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211–244, 2001.
[28] D. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. PNAS, 100:2197–2202, 2003.
[29] K. R. Rad and L. Paninski. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods. Network: Comput. Neural Syst., 21(3-4):142–168, 2010.
[30] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. Adv. in Neural Information Processing Systems 20, pages 1625–1632. MIT Press, 2008.
[31] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J. Neurophysiol, 93(2):1074–1089, 2005.
Adaptive Hedge Tim van Erven Department of Mathematics VU University De Boelelaan 1081a 1081 HV Amsterdam, the Netherlands tim@timvanerven.nl Peter Grünwald Centrum Wiskunde & Informatica (CWI) Science Park 123, P.O. Box 94079 1090 GB Amsterdam, the Netherlands pdg@cwi.nl Wouter M. Koolen CWI and Department of Computer Science Royal Holloway, University of London Egham Hill, Egham, Surrey TW20 0EX, United Kingdom wouter@cs.rhul.ac.uk Steven de Rooij Centrum Wiskunde & Informatica (CWI) Science Park 123, P.O. Box 94079 1090 GB Amsterdam, the Netherlands s.de.rooij@cwi.nl

Abstract

Most methods for decision-theoretic online learning are based on the Hedge algorithm, which takes a parameter called the learning rate. In most previous analyses the learning rate was carefully tuned to obtain optimal worst-case performance, leading to suboptimal performance on easy instances, for example when there exists an action that is significantly better than all others. We propose a new way of setting the learning rate, which adapts to the difficulty of the learning problem: in the worst case our procedure still guarantees optimal performance, but on easy instances it achieves much smaller regret. In particular, our adaptive method achieves constant regret in a probabilistic setting, when there exists an action that on average obtains strictly smaller loss than all other actions. We also provide a simulation study comparing our approach to existing methods.

1 Introduction

Decision-theoretic online learning (DTOL) is a framework to capture learning problems that proceed in rounds. It was introduced by Freund and Schapire [1] and is closely related to the paradigm of prediction with expert advice [2, 3, 4]. In DTOL an agent is given access to a fixed set of K actions, and at the start of each round must make a decision by assigning a probability to every action.
Then all actions incur a loss from the range [0, 1], and the agent’s loss is the expected loss of the actions under the probability distribution it produced. Losses add up over rounds and the goal for the agent is to minimize its regret after T rounds, which is the difference in accumulated loss between the agent and the action that has accumulated the least amount of loss. The most commonly studied strategy for the agent is called the Hedge algorithm [1, 5]. Its performance crucially depends on a parameter η called the learning rate. Different ways of tuning the learning rate have been proposed, which all aim to minimize the regret for the worst possible sequence of losses the actions might incur. If T is known to the agent, then the learning rate may be tuned to achieve worst-case regret bounded by √(T ln(K)/2), which is known to be optimal as T and K become large [4]. Nevertheless, by slightly relaxing the problem, one can obtain better guarantees. Suppose for example that the cumulative loss L*_T of the best action is known to the agent beforehand. Then, if the learning rate is set appropriately, the regret is bounded by √(2 L*_T ln(K)) + ln(K) [4], which has the same asymptotics as the previous bound in the worst case (because L*_T ≤ T) but may be much better when L*_T turns out to be small. Similarly, Hazan and Kale [6] obtain a bound of 8√(VAR^max_T ln(K)) + 10 ln(K) for a modification of Hedge if the cumulative empirical variance VAR^max_T of the best expert is known. In applications it may be unrealistic to assume that T or (especially) L*_T or VAR^max_T is known beforehand, but at the cost of slightly worse constants such problems may be circumvented using either the doubling trick (setting a budget on the unknown quantity and restarting the algorithm with a double budget when the budget is depleted) [4, 7, 6], or a variable learning rate that is adjusted each round [4, 8].
Bounding the regret in terms of L*_T or VAR^max_T is based on the idea that worst-case performance is not the only property of interest: such bounds give essentially the same guarantee in the worst case, but a much better guarantee in a plausible favourable case (when L*_T or VAR^max_T is small). In this paper, we pursue the same goal for a different favourable case. To illustrate our approach, consider the following simplistic example with two actions: let 0 < a < b < 1 be such that b − a > 2ϵ. Then in odd rounds the first action gets loss a + ϵ and the second action gets loss b − ϵ; in even rounds the actions get losses a − ϵ and b + ϵ, respectively. Informally, this seems like a very easy instance of DTOL, because the cumulative losses of the actions diverge and it is easy to see from the losses which action is the best one. In fact, the Follow-the-Leader strategy, which puts all probability mass on the action with smallest cumulative loss, gives a regret of at most 1 in this case — the worst-case bound O(√(L*_T ln(K))) is very loose by comparison, and so is O(√(VAR^max_T ln(K))), which is of the same order √(T ln(K)). On the other hand, for Follow-the-Leader one cannot guarantee sublinear regret for worst-case instances. (For example, if one out of two actions yields losses 1/2, 0, 1, 0, 1, ... and the other action yields losses 0, 1, 0, 1, 0, ..., its regret will be at least T/2 − 1.) To get the best of both worlds, we introduce an adaptive version of Hedge, called AdaHedge, that automatically adapts to the difficulty of the problem by varying the learning rate appropriately. As a result we obtain constant regret for the simplistic example above and other ‘easy’ instances of DTOL, while at the same time guaranteeing O(√(L*_T ln(K))) regret in the worst case. It remains to characterise what we consider easy problems, which we will do in terms of the probabilities produced by Hedge.
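The Follow-the-Leader worst case mentioned above is easy to check numerically. The following sketch (our own illustration, not part of the paper) plays Follow-the-Leader on the alternating loss sequence and confirms that its regret grows linearly:

```python
import numpy as np

def follow_the_leader(losses):
    """Play the action with the smallest cumulative loss so far (ties: lowest index).

    losses: (T, K) array of per-round losses in [0, 1].
    Returns the total loss incurred by Follow-the-Leader."""
    cumulative = np.zeros(losses.shape[1])
    total = 0.0
    for loss in losses:
        total += loss[int(np.argmin(cumulative))]  # follow the current leader
        cumulative += loss
    return total

# The worst case from the text: one action yields 1/2, 0, 1, 0, 1, ...
# and the other yields 0, 1, 0, 1, 0, ...; the leader alternates and
# Follow-the-Leader switches to the wrong action every round.
T = 1000
losses = np.zeros((T, 2))
losses[0, 0] = 0.5
losses[2::2, 0] = 1.0   # action 1 loses in rounds 3, 5, 7, ... (1-based)
losses[1::2, 1] = 1.0   # action 2 loses in rounds 2, 4, 6, ...
regret = follow_the_leader(losses) - losses.sum(axis=0).min()
print(regret)           # 500.0, i.e. T/2: linear regret
```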
As explained below, these may be interpreted as a generalisation of Bayesian posterior probabilities. We measure the difficulty of the problem in terms of the speed at which the posterior probability of the best action converges to one. In the previous example, this happens at an exponential rate, whereas for worst-case instances the posterior probability of the best action does not converge to one at all.

Outline  In the next section we describe a new way of tuning the learning rate, and show that it yields essentially optimal performance guarantees in the worst case. To construct the AdaHedge algorithm, we then add the doubling trick to this idea in Section 3, and analyse its worst-case regret. In Section 4 we show that AdaHedge in fact incurs much smaller regret on easy problems. We compare AdaHedge to other instances of Hedge by means of a simulation study in Section 5. The proof of our main technical lemma is postponed to Section 6, and open questions are discussed in the concluding Section 7. Finally, longer proofs are only available as Additional Material in the full version at arXiv.org.

2 Tuning the Learning Rate

Setting  Let the available actions be indexed by k ∈ {1, ..., K}. At the start of each round t = 1, 2, ... the agent A is to assign a probability w^k_t to each action k by producing a vector w_t = (w^1_t, ..., w^K_t) with nonnegative components that sum up to 1. Then every action k incurs a loss ℓ^k_t ∈ [0, 1], which we collect in the loss vector ℓ_t = (ℓ^1_t, ..., ℓ^K_t), and the loss of the agent is w_t · ℓ_t = Σ_{k=1}^K w^k_t ℓ^k_t. After T rounds action k has accumulated loss L^k_T = Σ_{t=1}^T ℓ^k_t, and the agent’s regret is

R_A(T) = Σ_{t=1}^T w_t · ℓ_t − L*_T,

where L*_T = min_{1≤k≤K} L^k_T is the cumulative loss of the best action.

Hedge  The Hedge algorithm chooses the weights w^k_{t+1} proportional to e^{−η L^k_t}, where η > 0 is the learning rate.
As is well-known, these weights may essentially be interpreted as Bayesian posterior probabilities on actions, relative to a uniform prior and pseudo-likelihoods P^k_t = e^{−η L^k_t} = Π_{s=1}^t e^{−η ℓ^k_s} [9, 10, 4]:

w^k_{t+1} = e^{−η L^k_t} / Σ_{k'} e^{−η L^{k'}_t} = (1/K) · P^k_t / B_t, where B_t = Σ_k (1/K) · P^k_t = Σ_k (1/K) · e^{−η L^k_t}   (1)

is a generalisation of the Bayesian marginal likelihood. And like the ordinary marginal likelihood, B_t factorizes into sequential per-round contributions:

B_t = Π_{s=1}^t w_s · e^{−η ℓ_s}.   (2)

We will sometimes write w_t(η) and B_t(η) instead of w_t and B_t in order to emphasize the dependence of these quantities on η.

The Learning Rate and the Mixability Gap  A key quantity in our and previous [4] analyses is the gap between the per-round loss of the Hedge algorithm and the per-round contribution to the negative logarithm of the “marginal likelihood” B_T, which we call the mixability gap:

δ_t(η) = w_t(η) · ℓ_t − (−(1/η) ln(w_t(η) · e^{−η ℓ_t})).

In the setting of prediction with expert advice, the subtracted term coincides with the loss incurred by the Aggregating Pseudo-Algorithm (APA) which, by allowing the losses of the actions to be mixed with optimal efficiency, provides an idealised lower bound for the actual loss of any prediction strategy [9]. The mixability gap measures how closely we approach this ideal. As the same interpretation still holds in the more general DTOL setting of this paper, we can measure the difficulty of the problem, and tune η, in terms of the cumulative mixability gap:

∆_T(η) = Σ_{t=1}^T δ_t(η) = Σ_{t=1}^T w_t(η) · ℓ_t + (1/η) ln B_T(η).

We proceed to list some basic properties of the mixability gap. First, it is nonnegative and bounded above by a constant that depends on η:

Lemma 1. For any t and η > 0 we have 0 ≤ δ_t(η) ≤ η/8.

Proof. The lower bound follows by applying Jensen’s inequality to the concave function ln, the upper bound from Hoeffding’s bound on the cumulant generating function [4, Lemma A.1].
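As a quick sanity check (our own sketch, not part of the paper), the weights of eq. (1) and the mixability gap can be computed directly, and the bounds of Lemma 1 verified numerically on random losses:

```python
import numpy as np

def hedge_weights(cum_losses, eta):
    """Hedge posterior weights w_{t+1}^k proportional to exp(-eta * L_t^k), eq. (1)."""
    v = np.exp(-eta * (cum_losses - cum_losses.min()))  # shift for numerical stability
    return v / v.sum()

def mixability_gap(w, loss, eta):
    """delta_t(eta) = w·loss + (1/eta) ln(w · exp(-eta * loss))."""
    return float(w @ loss + np.log(w @ np.exp(-eta * loss)) / eta)

rng = np.random.default_rng(0)
eta, K = 0.5, 4
cum = np.zeros(K)
for t in range(1000):
    w = hedge_weights(cum, eta)
    loss = rng.random(K)                        # losses in [0, 1]
    d = mixability_gap(w, loss, eta)
    assert -1e-12 <= d <= eta / 8 + 1e-12       # Lemma 1: 0 <= delta_t <= eta/8
    cum += loss
print("Lemma 1 bounds held on all 1000 rounds")
```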
Further, the cumulative mixability gap ∆_T(η) can be related to L*_T via the following upper bound, proved in the Additional Material:

Lemma 2. For any T and η ∈ (0, 1] we have ∆_T(η) ≤ η L*_T + ln(K)/(e − 1).

This relationship will make it possible to provide worst-case guarantees similar to what is possible when η is tuned in terms of L*_T. However, for easy instances of DTOL this inequality is very loose, in which case we can prove substantially better regret bounds. We could now proceed by optimizing the learning rate η given the rather awkward assumption that ∆_T(η) is bounded by a known constant b for all η, which would be the natural counterpart to an analysis that optimizes η when a bound on L*_T is known. However, as ∆_T(η) varies with η and is unknown a priori anyway, it makes more sense to turn the analysis on its head and start by fixing η. We can then simply run the Hedge algorithm until the smallest T such that ∆_T(η) exceeds an appropriate budget b(η), which we set to

b(η) = (1/η + 1/(e − 1)) ln(K).   (3)

When at some point the budget is depleted, i.e. ∆_T(η) ≥ b(η), Lemma 2 implies that

η ≥ √((e − 1) ln(K) / L*_T),   (4)

so that, up to a constant factor, the learning rate used by AdaHedge is at least as large as the learning rates proportional to √(ln(K)/L*_T) that are used in the literature. On the other hand, it is not too large, because we can still provide a bound of order O(√(L*_T ln(K))) on the worst-case regret:

Theorem 3. Suppose the agent runs Hedge with learning rate η ∈ (0, 1], and after T rounds has just used up the budget (3), i.e. b(η) ≤ ∆_T(η) < b(η) + η/8. Then its regret is bounded by

R_{Hedge(η)}(T) < √((4/(e−1)) L*_T ln(K)) + (1/(e−1)) ln(K) + 1/8.

Proof. The cumulative loss of Hedge is bounded by

Σ_{t=1}^T w_t · ℓ_t = ∆_T(η) − (1/η) ln B_T < b(η) + η/8 − (1/η) ln B_T ≤ (1/(e−1)) ln(K) + 1/8 + (2/η) ln(K) + L*_T,   (5)

where we have used the bound B_T ≥ (1/K) e^{−η L*_T}. Plugging in (4) completes the proof.
3 The AdaHedge Algorithm

We now introduce the AdaHedge algorithm by adding the doubling trick to the analysis of the previous section. The doubling trick divides the rounds in segments i = 1, 2, ..., and on each segment restarts Hedge with a different learning rate η_i. For AdaHedge we set η_1 = 1 initially, and scale down the learning rate by a factor of φ > 1 for every new segment, such that η_i = φ^{1−i}. We monitor ∆_t(η_i), measured only on the losses in the i-th segment, and when it exceeds its budget b_i = b(η_i) a new segment is started. The factor φ is a parameter of the algorithm. Theorem 5 below suggests setting its value to the golden ratio φ = (1 + √5)/2 ≈ 1.62 or simply to φ = 2.

Algorithm 1 AdaHedge(φ)   ▷ Requires φ > 1
  η ← φ
  for t = 1, 2, ... do
    if t = 1 or ∆ ≥ b then   ▷ Start a new segment
      η ← η/φ;  b ← (1/(e−1) + 1/η) ln(K)
      ∆ ← 0;  w = (w^1, ..., w^K) ← (1/K, ..., 1/K)
    end if
    ▷ Make a decision
    Output probabilities w for round t
    Actions receive losses ℓ_t
    ▷ Prepare for the next round
    ∆ ← ∆ + w · ℓ_t + (1/η) ln(w · e^{−η ℓ_t})
    w ← (w^1 · e^{−η ℓ^1_t}, ..., w^K · e^{−η ℓ^K_t}) / (w · e^{−η ℓ_t})
  end for

The regret of AdaHedge is determined by the number of segments it creates: the fewer segments there are, the smaller the regret.

Lemma 4. Suppose that after T rounds, the AdaHedge algorithm has started m new segments. Then its regret is bounded by

R_AdaHedge(T) < 2 ln(K) (φ^m − 1)/(φ − 1) + m ((1/(e−1)) ln(K) + 1/8).

Proof. The regret per segment is bounded as in (5). Summing over all m segments, and plugging in Σ_{i=1}^m 1/η_i = Σ_{i=0}^{m−1} φ^i = (φ^m − 1)/(φ − 1) gives the required inequality.

Using (4), one can obtain an upper bound on the number of segments that leads to the following guarantee for AdaHedge:

Theorem 5. Suppose the agent runs AdaHedge for T rounds. Then its regret is bounded by

R_AdaHedge(T) ≤ (φ √(φ² − 1) / (φ − 1)) √((4/(e−1)) L*_T ln(K)) + O(ln(L*_T + 2) ln(K)).

For details see the proof in the Additional Material.
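A straightforward transcription of Algorithm 1 (our own sketch, not the authors' code) makes the segment logic concrete, and lets us check the regret against the bound of Lemma 4 on a simulated easy instance:

```python
import numpy as np

def adahedge(losses, phi=2.0):
    """AdaHedge (Algorithm 1): returns (agent's total loss, segments started)."""
    T, K = losses.shape
    eta = phi                     # divided by phi when the first segment starts
    delta, budget = 0.0, 0.0
    w = np.full(K, 1.0 / K)
    total, m = 0.0, 0
    for t in range(T):
        if t == 0 or delta >= budget:           # start a new segment
            eta /= phi
            budget = (1.0 / (np.e - 1.0) + 1.0 / eta) * np.log(K)
            delta = 0.0
            w = np.full(K, 1.0 / K)
            m += 1
        l = losses[t]
        total += w @ l                           # agent's loss this round
        mix = w @ np.exp(-eta * l)
        delta += w @ l + np.log(mix) / eta       # accumulate the mixability gap
        w = w * np.exp(-eta * l) / mix           # Hedge posterior update
    return total, m

# Easy i.i.d. instance: action 0 is clearly best, so few segments are needed.
rng = np.random.default_rng(1)
K, phi = 3, 2.0
losses = (rng.random((5000, K)) < np.array([0.3, 0.5, 0.5])).astype(float)
total, m = adahedge(losses, phi)
regret = total - losses.sum(axis=0).min()
bound = 2 * np.log(K) * (phi**m - 1) / (phi - 1) + m * (np.log(K) / (np.e - 1) + 1 / 8)
print(regret <= bound)    # True: consistent with Lemma 4
```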
The value for φ that minimizes the leading factor is the golden ratio φ = (1 + √5)/2, for which φ√(φ² − 1)/(φ − 1) ≈ 3.33, but simply taking φ = 2 leads to a very similar factor of φ√(φ² − 1)/(φ − 1) ≈ 3.46.

4 Easy Instances

While the previous sections reassure us that AdaHedge performs well for the worst possible sequence of losses, we are also interested in its behaviour when the losses are not maximally antagonistic. We will characterise such sequences in terms of convergence of the Hedge posterior probability of the best action:

w*_t(η) = max_{1≤k≤K} w^k_t(η).

(Recall that w^k_t is proportional to e^{−η L^k_{t−1}}, so w*_t corresponds to the posterior probability of the action with smallest cumulative loss.) Technically, this is expressed by the following refinement of Lemma 1, which is proved in Section 6.

Lemma 6. For any t and η ∈ (0, 1] we have δ_t(η) ≤ (e − 2) η (1 − w*_t(η)).

This lemma, which may be of independent interest, is a variation on Hoeffding’s bound on the cumulant generating function. While Lemma 1 leads to a bound on ∆_T(η) that grows linearly in T, Lemma 6 shows that ∆_T(η) may grow much slower. In fact, if the posterior probabilities w*_t converge to 1 sufficiently quickly, then ∆_T(η) is bounded, as shown by the following lemma. Recall that L*_T = min_{1≤k≤K} L^k_T.

Lemma 7. Let α and β be positive constants, and let τ ∈ Z+. Suppose that for t = τ, τ+1, ..., T there exists a single action k* that achieves minimal cumulative loss L^{k*}_t = L*_t, and for k ≠ k* the cumulative losses diverge as L^k_t − L*_t ≥ α t^β. Then for all η > 0

Σ_{t=τ}^T (1 − w*_{t+1}(η)) ≤ C_K η^{−1/β},

where C_K = (K − 1) α^{−1/β} Γ(1 + 1/β) is a constant that does not depend on η, τ or T.

The lemma is proved in the Additional Material. Together with Lemmas 1 and 6, it gives an upper bound on ∆_T(η), which may be used to bound the number of segments started by AdaHedge. This leads to the following result, whose proof is also delegated to the Additional Material.
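Lemma 6 is easy to spot-check numerically (again our own sketch, not part of the paper). Since the proof in Section 6 reduces the general case to binary losses and uses only that w is a probability vector, we can draw random weight vectors and binary loss vectors and compare the mixability gap against the bound:

```python
import numpy as np

rng = np.random.default_rng(3)
for trial in range(5000):
    K = int(rng.integers(2, 6))
    eta = float(rng.uniform(1e-3, 1.0))              # eta in (0, 1]
    w = rng.dirichlet(np.ones(K))                    # any posterior weight vector
    loss = rng.integers(0, 2, size=K).astype(float)  # binary losses (the worst case)
    # mixability gap: delta_t = w·loss + (1/eta) ln(w · exp(-eta * loss))
    delta = w @ loss + np.log(w @ np.exp(-eta * loss)) / eta
    w_star = w.max()                                 # posterior of the best action
    assert -1e-12 <= delta <= (np.e - 2) * eta * (1 - w_star) + 1e-10  # Lemma 6
print("Lemma 6 verified on 5000 random instances")
```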
Let s(m) denote the round in which AdaHedge starts its m-th segment, and let L^k_r(m) = L^k_{s(m)+r−1} − L^k_{s(m)−1} denote the cumulative loss of action k in that segment.

Lemma 8. Let α > 0 and β > 1/2 be constants, and let C_K be as in Lemma 7. Suppose there exists a segment m* ∈ Z+ started by AdaHedge, such that τ := ⌊8 ln(K) φ^{(m*−1)(2−1/β)} − 8(e − 2) C_K + 1⌋ ≥ 1 and for some action k* the cumulative losses in segment m* diverge as

L^k_r(m*) − L^{k*}_r(m*) ≥ α r^β for all r ≥ τ and k ≠ k*.   (6)

Then AdaHedge starts at most m* segments, and hence by Lemma 4 its regret is bounded by a constant: R_AdaHedge(T) = O(1).

In the simplistic example from the introduction, we may take α = b − a − 2ϵ and β = 1, such that (6) is satisfied for any τ ≥ 1. Taking m* large enough to ensure that τ ≥ 1, we find that AdaHedge never starts more than m* = 1 + ⌈log_φ((e−2)/(α ln 2) + 1/(8 ln 2))⌉ segments. Let us also give an example of a probabilistic setting in which Lemma 8 applies:

Theorem 9. Let α > 0 and δ ∈ (0, 1] be constants, and let k* be a fixed action. Suppose the loss vectors ℓ_t are independent random variables such that the expected differences in loss satisfy

min_{k≠k*} E[ℓ^k_t − ℓ^{k*}_t] ≥ 2α for all t ∈ Z+.   (7)

Then, with probability at least 1 − δ, AdaHedge starts at most

m* = 1 + ⌈log_φ( (K−1)(e−2)/(α ln(K)) + ln(2K/(α²δ))/(4α² ln(K)) + 1/(8 ln(K)) )⌉   (8)

segments and consequently its regret is bounded by a constant: R_AdaHedge(T) = O(K + log(1/δ)).

This shows that the probabilistic setting of the theorem is much easier than the worst case, for which only a bound on the regret of order O(√(T ln(K))) is possible, and that AdaHedge automatically adapts to this easier setting. The proof of Theorem 9 is in the Additional Material. It verifies that the conditions of Lemma 8 hold with sufficient probability for β = 1, and α and m* as in the theorem.

5 Experiments

We compare AdaHedge to other hedging algorithms in two experiments involving simulated losses.

5.1 Hedging Algorithms

Follow-the-Leader.
This algorithm is included because it is simple and very effective if the losses are not antagonistic, although as mentioned in the introduction its regret is linear in the worst case.

Hedge with fixed learning rate.  We also include Hedge with a fixed learning rate

η = √(2 ln(K)/L*_T),   (9)

which achieves the regret bound √(2 ln(K) L*_T) + ln(K).¹ Since η is a function of L*_T, the agent needs to use post-hoc knowledge to use this strategy.

Hedge with doubling trick.  The common way to apply the doubling trick to L*_T is to set a budget on L*_T and multiply it by some constant φ′ at the start of each new segment, after which η is optimized for the new budget [4, 7]. Instead, we proceed the other way around and with each new segment first divide η by φ = 2 and then calculate the new budget such that (9) holds when ∆_t(η) reaches the budget. This way we keep the same invariant (η is never larger than the right-hand side of (9), with equality when the budget is depleted), and the frequency of doubling remains logarithmic in L*_T with a constant determined by φ, so both approaches are equally valid. However, controlling the sequence of values of η allows for easier comparison to AdaHedge.

AdaHedge (Algorithm 1).  Like in the previous algorithm, we set φ = 2. Because of how we set up the doubling, both algorithms now use the same sequence of learning rates 1, 1/2, 1/4, ...; the only difference is when they decide to start a new segment.

Hedge with variable learning rate.  Rather than using the doubling trick, this algorithm, described in [8], changes the learning rate each round as a function of L*_t. This way there is no need to relearn the weights of the actions in each block, which leads to a better worst-case bound and potentially better performance in practice. Its behaviour on easy problems, as we are currently interested in, has not been studied.

5.2 Generating the Losses

In both experiments we choose losses in {0, 1}. The experiments are set up as follows.
¹ Cesa-Bianchi and Lugosi use η = ln(1 + √(2 ln(K)/L*_T)) [4], but the same bound can be obtained for the simplified expression we use.

Figure 1: Simulation results. Regret as a function of the number of rounds for Hedge (doubling), Hedge (fixed learning rate), Hedge (variable learning rate), AdaHedge, and Follow-the-Leader. (a) I.I.D. losses. (b) Correlated losses.

I.I.D. losses.  In the first experiment, all T = 10 000 losses for all K = 4 actions are independent, with distribution depending only on the action: the probabilities of incurring loss 1 are 0.35, 0.4, 0.45 and 0.5, respectively. The results are then averaged over 50 repetitions of the experiment.

Correlated losses.  In the second experiment, the T = 10 000 loss vectors are still independent, but no longer identically distributed. In addition there are dependencies within the loss vectors ℓ_t, between the losses for the K = 2 available actions: each round is hard with probability 0.3, and easy otherwise. If round t is hard, then action 1 yields loss 1 with probability 1 − 0.01/t and action 2 yields loss 1 with probability 1 − 0.02/t. If the round is easy, then the probabilities are flipped and the actions yield loss 0 with the same probabilities. The results are averaged over 200 repetitions.

5.3 Discussion and Results

Figure 1 shows the results of the experiments above. We plot the regret (averaged over repetitions of the experiment) as a function of the number of rounds, for each of the considered algorithms.

I.I.D. Losses.  In the first considered regime, the accumulated losses for each action diverge linearly with high probability, so that the regret of Follow-the-Leader is bounded.
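The two loss-generation schemes of Section 5.2 can be sketched as follows (a sketch of ours with hypothetical helper names, not the authors' code):

```python
import numpy as np

def iid_losses(T=10_000, probs=(0.35, 0.40, 0.45, 0.50), rng=None):
    """I.I.D. experiment: each action k incurs loss 1 with probability probs[k]."""
    if rng is None:
        rng = np.random.default_rng()
    return (rng.random((T, len(probs))) < np.asarray(probs)).astype(float)

def correlated_losses(T=10_000, p_hard=0.3, rng=None):
    """Correlated experiment: two actions coupled through a shared hard/easy flag."""
    if rng is None:
        rng = np.random.default_rng()
    losses = np.empty((T, 2))
    for t in range(1, T + 1):
        hard = rng.random() < p_hard
        p1, p2 = 1 - 0.01 / t, 1 - 0.02 / t     # loss-1 probabilities on hard rounds
        if hard:
            losses[t - 1] = [rng.random() < p1, rng.random() < p2]
        else:                                    # easy round: probabilities flipped
            losses[t - 1] = [rng.random() >= p1, rng.random() >= p2]
    return losses

L = iid_losses(rng=np.random.default_rng(0))
print(L.shape, L.mean(axis=0))   # column means approach (0.35, 0.40, 0.45, 0.50)
```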
Based on Theorem 9 we expect AdaHedge to incur bounded regret as well; this is confirmed in Figure 1(a). Hedge with a fixed learning rate shows much larger regret. This happens because the learning rate, while it optimizes the worst-case bound, is much too small for this easy regime. In fact, if we included more rounds, the learning rate would be set to an even smaller value, clearly showing the need to determine the learning rate adaptively. The doubling trick provides one way to adapt the learning rate; indeed, we observe that the regret of Hedge with the doubling trick is initially smaller than the regret of Hedge with fixed learning rate. However, unlike AdaHedge, the algorithm never detects that its current value of η is working well; instead it keeps exhausting its budget, which leads to a sequence of clearly visible bumps in its regret. Finally, it appears that the Hedge algorithm with variable learning rate also achieves bounded regret. This is surprising, as the existing theory for this algorithm only considers its worst-case behaviour, and the algorithm was not designed to do specifically well in easy regimes.

Correlated Losses.  In the second simulation we investigate the case where the mean cumulative losses of two actions are extremely close — within O(log t) of one another. If the losses of the actions were independent, such a small difference would be dwarfed by random fluctuations in the cumulative losses, which would be of order O(√t). Thus the two actions can only be distinguished because we have made their losses dependent. Depending on the application, this may actually be a more natural scenario than complete independence as in the first simulation; for example, we can think of the losses as mistakes of two binary classifiers, say, two naive Bayes classifiers with different smoothing parameters. In such a scenario, losses will be dependent, and the difference in cumulative loss will be much smaller than O(√t).
In the previous experiment, the posterior weights of the actions converged relatively quickly for a large range of learning rates, so that the exact value of the learning rate was most important at the start (e.g., from 3000 rounds onward Hedge with fixed learning rate does not incur much additional regret any more). In this second setting, using a high learning rate remains important throughout. This explains why in this case Hedge with variable learning rate can no longer keep up with Follow-the-Leader. The results for AdaHedge are also interesting: although Theorem 9 does not apply in this case, we may still hope that ∆_t(η) grows slowly enough that the algorithm does not start too many segments. This turns out to be the case: over the 200 repetitions of the experiment, AdaHedge started only 2.265 segments on average, which explains its excellent performance in this simulation.

6 Proof of Lemma 6

Our main technical tool is Lemma 6. Its proof requires the following intermediate result:

Lemma 10. For any η > 0 and any time t, the function f(ℓ_t) = ln(w_t · e^{−η ℓ_t}) is convex.

This may be proved by observing that f is the convex conjugate of the Kullback-Leibler divergence. An alternative proof based on log-convexity is provided in the Additional Material.

Proof of Lemma 6. We need to bound δ_t = w_t(η) · ℓ_t + (1/η) ln(w_t(η) · e^{−η ℓ_t}), which is a convex function of ℓ_t by Lemma 10. As a consequence, its maximum is achieved when ℓ_t lies on the boundary of its domain, such that the losses ℓ^k_t are either 0 or 1 for all k, and in the remainder of the proof we will assume (without loss of generality) that this is the case. Now let α_t = w_t · ℓ_t be the posterior probability of the actions with loss 1. Then

δ_t = α_t + (1/η) ln((1 − α_t) + α_t e^{−η}) = α_t + (1/η) ln(1 + α_t(e^{−η} − 1)).

Using ln x ≤ x − 1 and e^{−η} ≤ 1 − η + η²/2, we get δ_t ≤ α_t η/2, which is tight for α_t near 0.
For α_t near 1, rewrite δ_t = α_t − 1 + (1/η) ln(e^η (1 − α_t) + α_t) and use ln x ≤ x − 1 and e^η ≤ 1 + η + (e − 2)η² for η ≤ 1 to obtain δ_t ≤ (e − 2)(1 − α_t)η. Combining the bounds, we find δ_t ≤ (e − 2)η min{α_t, 1 − α_t}. Now, let k* be an action such that w*_t = w^{k*}_t. Then ℓ^{k*}_t = 0 implies α_t ≤ 1 − w*_t. On the other hand, if ℓ^{k*}_t = 1, then α_t ≥ w*_t so 1 − α_t ≤ 1 − w*_t. Hence, in both cases min{α_t, 1 − α_t} ≤ 1 − w*_t, which completes the proof.

7 Conclusion and Future Work

We have presented a new algorithm, AdaHedge, that adapts to the difficulty of the DTOL learning problem. This difficulty was characterised in terms of convergence of the posterior probability of the best action. For hard instances of DTOL, for which the posterior does not converge, it was shown that the regret of AdaHedge is of the optimal order O(√(L*_T ln(K))); for easy instances, for which the posterior converges sufficiently fast, the regret was bounded by a constant. This behaviour was confirmed in a simulation study, where the algorithm outperformed existing versions of Hedge. A surprising observation in the experiments was the good performance of Hedge with a variable learning rate on some easy instances. It would be interesting to obtain matching theoretical guarantees, like those presented here for AdaHedge. A starting point might be to consider how fast the posterior probability of the best action converges to one, and plug that into Lemma 6.

Acknowledgments

The authors would like to thank Wojciech Kotłowski for useful discussions. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886, and by NWO Rubicon grant 680-50-1010. This publication only reflects the authors’ views.

References

[1] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
[2] N. Littlestone and M. K. Warmuth.
The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994. [3] V. Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 56(2):153–173, 1998. [4] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006. [5] Y. Freund and R. E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999. [6] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 57–67, 2008. [7] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997. [8] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48–75, 2002. [9] V. Vovk. Competitive on-line statistics. International Statistical Review, 69(2):213–248, 2001. [10] D. Haussler, J. Kivinen, and M. K. Warmuth. Sequential prediction of individual sequences under general loss functions. IEEE Transactions on Information Theory, 44(5):1906–1925, 1998. [11] A. N. Shiryaev. Probability. Springer-Verlag, 1996.
2011
Matrix Completion for Multi-label Image Classification Ricardo S. Cabral†,‡ Fernando De la Torre‡ ‡Carnegie Mellon University, Pittsburgh, PA João P. Costeira†, Alexandre Bernardino† †ISR - Instituto Superior Técnico, Lisboa, Portugal rscabral@cmu.edu, ftorre@cs.cmu.edu, {jpc,alex}@isr.ist.utl.pt Abstract Recently, image categorization has been an active research topic due to the urgent need to retrieve and browse digital images via semantic keywords. This paper formulates image categorization as a multi-label classification problem using recent advances in matrix completion. Under this setting, classification of testing data is posed as a problem of completing unknown label entries in a data matrix that concatenates training and testing features with training labels. We propose two convex algorithms for matrix completion based on a Rank Minimization criterion specifically tailored to visual data, and prove their convergence properties. A major advantage of our approach w.r.t. standard discriminative classification methods for image categorization is its robustness to outliers, background noise and partial occlusions, both in the feature and label space. Experimental validation on several datasets shows that our method outperforms state-of-the-art algorithms while effectively capturing semantic concepts of classes. 1 Introduction With the ever-growing amount of digital image data in multimedia databases, there is a great need for algorithms that can provide effective semantic indexing. Categorizing digital images using keywords, however, is the quintessential example of a challenging classification problem. Several aspects contribute to the difficulty of the image categorization problem, including the large variability in appearance, illumination and pose of different objects. Moreover, in the multi-label setting the interaction between objects also needs to be modeled.
Over the last decade, progress on the image classification problem has been achieved by using more powerful classifiers and by building or learning better image representations. On one hand, standard discriminative approaches such as Support Vector Machines or Boosting have been extended to the multi-label case [28, 14] and incorporated into frameworks such as Multiple Instance Learning [31, 33, 32, 20, 27] and Multi-task Learning [26]. However, a major limitation of discriminative approaches is their lack of robustness to outliers and missing data. Recall that most discriminative approaches project the data directly onto linear or non-linear spaces, and thus lack a noise model for it. To address this issue, we propose formulating the image classification problem in a matrix completion framework, which has been fueled by recent advances in Rank Minimization [7, 18]. Using this paradigm, we can easily deal with incomplete descriptions and errors in features and labels. On the other hand, in parallel with the use of more powerful classifiers, better image representations such as SIFT [17] or GIST [21] have boosted recognition and categorization performance. A common approach to represent an object has been to group local descriptors using the bag of words model [24]. Our algorithms make use of the fact that in this model the histogram of an entire image contains information about all of its subparts. By modeling the error in the histogram, our matrix completion algorithm is able to capture semantically discriminative portions of the image, thus obviating the need for training with precise localization, as required by previous methods [31, 33, 32, 20, 27]. Our main contributions are twofold: (1) We propose two new Rank Minimization algorithms, MC-Pos and MC-Simplex, motivated by the image categorization problem.
We study the advantages of matrix completion over classic discriminative approaches and show that performing classification under this paradigm not only improves state-of-the-art results on several datasets, but does so without resorting to bounding boxes or other precise localization methods in its labeling or modeling. (2) We prove that MC-Pos and MC-Simplex enjoy the same convergence properties as Fixed Point Continuation methods for Rank Minimization without constraints. We also show that this result extends to the framework presented by [11], whose convergence was only verified empirically. 2 Previous Work This section reviews related work in the area of image categorization and the problem of Matrix Completion using a Rank Minimization criterion, optimized with Nuclear Norm methods. Image Categorization Since the seminal work of Barnard et al. [3], many researchers have addressed the problem of associating words with images. Image semantic understanding is now typically formulated as a multi-label problem. In this setting, each image may be simultaneously categorized into more than one of a set of predefined categories. An important difference between multi-class and multi-label classification is that classes in multi-class classification are assumed to be mutually exclusive, whereas in multi-label classification they are normally interdependent. Therefore, many multi-class techniques such as SVM, LDA and Boosting have been modified to exploit label correlations and improve multi-label classification performance [28, 14]. Additionally, Multiple Instance Learning (MIL) approaches can be used to explicitly model the relations between labels and specific regions of the image, as initially proposed by Maron et al. [19].
This framework allows the localization and classification tasks to benefit from each other, thus reducing noise in the corresponding feature space and making the learned semantic models more accurate [31, 33, 32, 20, 27, 26]. Although promising, the MIL framework is combinatorial, so several approaches have been proposed to avoid local minima and deal with the prohibitive number of possible subregions in an image. Zha et al. [32] make use of hidden CRFs, while Vijayanarasimhan et al. [27] resort to multi-set kernels to emphasize instances differently. Yang et al. [31] exploit asymmetric loss functions to balance false positives and negatives. These methods, however, require an explicit enumeration of instances in the image. This is usually obtained by pre-segmenting images into a small fixed number of parts, or applied in settings where detectors perform well, such as the problem of associating faces to captioned names [4]. On the other hand, to avoid explicitly enumerating the instances, Nguyen et al. [20] couple constraint generation algorithms with a branch and bound method for fast localization. Multi-task learning has also been proposed as a way to regularize the MIL problem, so as to avoid local minima due to the many available degrees of freedom. In this setting, the MIL problem is jointly learned with an easier, fully supervised task such as geometric context [26]. Matrix Completion using Rank Minimization Rank Minimization has recently received much attention due to its success in matrix completion problems such as the Netflix challenge, where one wishes to predict a user's movie preferences based on a subset of his and other people's choices, or minimum order control [10], where the goal is to find the least complex controller achieving some performance measure. A major breakthrough by [7] states that the minimization of the rank function can, under broad conditions, be achieved using the minimizer obtained with the Nuclear Norm (sum of singular values).
Since the natural reformulation of the Nuclear Norm gives rise to a Semidefinite Program, existing interior point methods can only handle problems with a number of variables on the order of hundreds. Thus, several methods have been devised to perform this optimization efficiently [15, 6, 18, 25, 13, 1, 7, 2]. In the last few years, incremental matrix completion methods have also been proposed [1, 2, 5]. In the context of Computer Vision, minimization of the Nuclear Norm has been applied to several problems: Structure from Motion [1, 8, 5], Robust PCA [29], Subspace Alignment [22], Subspace Segmentation [16] and Tag Refinement [34]. 3 Multi-label classification using Matrix Completion In a supervised setting, a classifier learns a mapping W : X → Y between the space of features X and the space of labels Y, from Ntr tuples of known features and labels. Linear classifiers define (xj, yj) ∈ R^F × R^K, where F is the feature dimension and K the number of classes, and minimize the loss l between the output space and the projection of the input space, as

  minimize_{W,b}  Σ_{j=1}^{Ntr} l(yj, [W b] [xj; 1]),   (1)

with parameters W ∈ R^{K×F}, b ∈ R^K. Given (1), Goldberg et al. [11] note that the problem of classifying Ntst test entries can be cast as a Matrix Completion. For this purpose, they concatenate all labels and features into matrices Ytst ∈ R^{K×Ntst}, Ytr ∈ R^{K×Ntr}, Xtst ∈ R^{F×Ntst}, Xtr ∈ R^{F×Ntr}. If the linear model holds, then the matrix

  Z0 = [ Ytr  Ytst ; Xtr  Xtst ; 1⊤ ],   (2)

should be rank deficient. The classification process consists in filling the unknown entries in Ytst such that the Nuclear Norm of Z0, the convex envelope of the rank [7], is minimized. Since in practice we may have errors and partial knowledge in the training labels and in the feature space, let us define ΩX and ΩY as the sets of known feature and label entries, and zero out unknown entries in Z0.
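To make the construction of (2) concrete, the following sketch (our own toy example in NumPy; the names and sizes are illustrative, not from the paper) assembles Z0 and the known-entry masks ΩX, ΩY:

```python
import numpy as np

K, F = 3, 5          # number of classes and feature dimension (toy sizes)
Ntr, Ntst = 4, 2     # training / test examples

rng = np.random.default_rng(0)
Ytr = rng.integers(0, 2, size=(K, Ntr)).astype(float)   # known training labels
Xtr = rng.random((F, Ntr))                               # training features
Xtst = rng.random((F, Ntst))                             # test features
Ytst = np.zeros((K, Ntst))                               # unknown labels: zeroed out

# Z0 stacks labels, features, and a row of ones, as in Eq. (2)
Z0 = np.vstack([np.hstack([Ytr, Ytst]),
                np.hstack([Xtr, Xtst]),
                np.ones((1, Ntr + Ntst))])

# Omega_Y: known label entries (training columns only);
# Omega_X: known feature entries (all columns in this toy example)
omega_Y = np.zeros((K, Ntr + Ntst), dtype=bool)
omega_Y[:, :Ntr] = True
omega_X = np.ones((F, Ntr + Ntst), dtype=bool)

assert Z0.shape == (K + F + 1, Ntr + Ntst)
```

Classification then amounts to recovering the zeroed block of Z0 corresponding to Ytst while keeping the completed matrix low rank.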
Additionally, let the data matrix Z be defined as the sum of Z0 and an error term E:

  Z = [ ZY ; ZX ; Z1 ] = [ Ytr  Ytst ; Xtr  Xtst ; 1⊤ ] + [ EYtr  0 ; EXtr  EXtst ; 0⊤ ] = Z0 + E,   (3)

where ZY, ZX, Z1 respectively stand for the label, feature and last rows of Z. Then, classification can be posed as an optimization problem that finds the best label assignment Ytst and error matrix E such that the rank of Z is minimized. The resulting optimization problem, MC-1 [11], is

  minimize_{Ytst, EXtr, EYtr, EXtst}  µ∥Z∥* + (1/|ΩX|) Σ_{ij∈ΩX} cx(zij, z0ij) + (λ/|ΩY|) Σ_{ij∈ΩY} cy(zij, z0ij)
  subject to  Z = Z0 + E,  Z1 = 1⊤.   (4)

Note that the constraint that Z1 remains equal to one is necessary for dealing with the bias b in (1). To avoid trivial solutions, large distortions of Z from known entries in Z0 are penalized according to the losses cx(·) and cy(·): in [11], the former is the Least Squares error, while the latter is a log loss that emphasizes errors on entries switching classes rather than their absolute numerical difference. The parameters λ, µ are positive trade-off weights between better feature adaptation and label error correction. We note this problem is equivalent to

  minimize_Z  µ∥Z∥* + (1/|ΩX|) Σ_{ij∈ΩX} cx(zij, z0ij) + (λ/|ΩY|) Σ_{ij∈ΩY} cy(zij, z0ij)
  subject to  Z1 = 1⊤,   (5)

which can be solved using a Fixed Point Continuation method [18], described in Sec. 4.1. (Notation: bold capital letters denote matrices (e.g., D), bold lower-case letters column vectors (e.g., d), and non-bold letters scalars. dj is the jth column of the matrix D, and dij the scalar in row i, column j of D. ⟨d1, d2⟩ denotes the inner product between two vectors; ∥d∥²₂ = ⟨d, d⟩ = Σi d²i the squared Euclidean Norm of d; tr(A) = Σi aii the trace of A; ∥A∥* the Nuclear Norm (sum of singular values) of A; and ∥A∥²F = tr(A⊤A) = tr(AA⊤) the squared Frobenius Norm of A.
1k ∈ R^{k×1} is a vector of ones, 0_{k×n} ∈ R^{k×n} is a matrix of zeros, and Ik ∈ R^{k×k} denotes the identity matrix; dimensions are omitted when trivially inferred.) 4 Matrix completion for multi-label classification of visual data In this section, we present the main contributions of this paper: the application of Matrix Completion to the multi-label image classification problem and its convergence proof. In the bag of (visual) words (BoW) model [24], visual data is encoded by the distribution of features among entries from a codebook. The codebook is typically created by clustering local feature representations such as SIFT [17] or GIST [21]. In this setting, the formulation MC-1 (5) is inadequate because it introduces negative values into the histograms in ZX. To address this issue, we replace the penalties so they reflect the nature of the data: we replace the Least-Squares penalty in cx(·) by Pearson's χ² distance, which takes into account the asymmetry of histogram data:

  χ²(zj, z0j) = Σ_{i=1}^{F} χ²_i(zij, z0ij) = Σ_{i=1}^{F} (zij − z0ij)² / (zij + z0ij).   (6)

Since the modification to cx(·) alone does not ensure that the data retains its histogram nature, we add to (5) the constraint that all feature vectors in ZX are positive, resulting in the MC-Pos formulation

  minimize_Z  µ∥Z∥* + (1/|ΩX|) Σ_{ij∈ΩX} χ²_i(zij, z0ij) + (λ/|ΩY|) Σ_{ij∈ΩY} cy(zij, z0ij)
  subject to  ZX ≥ 0,  Z1 = 1⊤,   (7)

or, alternatively, that they belong to the Probability Simplex P (positive elements that sum to 1), resulting in the MC-Simplex formulation

  minimize_Z  µ∥Z∥* + (1/|ΩX|) Σ_{ij∈ΩX} χ²_i(zij, z0ij) + (λ/|ΩY|) Σ_{ij∈ΩY} cy(zij, z0ij)
  subject to  ZX ∈ P,  Z1 = 1⊤,   (8)

depending on whether we wish to normalize the data or not. Additionally, we note that the Log label error in cy(·), albeit asymmetric, incurs unnecessary penalization of entries belonging to the same class as the original entry (see Fig. 1).
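As a small illustration (our own sketch in NumPy; the eps guard is an addition to avoid division by zero on empty bins, not part of (6)), the per-column χ² distance can be computed as:

```python
import numpy as np

def chi2_distance(z, z0, eps=1e-12):
    """Pearson's chi^2 distance of Eq. (6) between two histograms.

    eps (our addition) guards against division by zero when both bins are empty.
    """
    z, z0 = np.asarray(z, float), np.asarray(z0, float)
    return float(np.sum((z - z0) ** 2 / (z + z0 + eps)))

h1 = np.array([0.5, 0.3, 0.2])
h2 = np.array([0.2, 0.3, 0.5])
assert chi2_distance(h1, h1) == 0.0
assert chi2_distance(h1, h2) > 0.0
```

Unlike the Least-Squares penalty, bins with small mass in both histograms contribute large per-bin terms for the same absolute difference, reflecting the relative nature of histogram counts.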
Therefore, we generalize this loss to progressively resemble a smooth version of the Hinge loss, controlled by the parameter γ:

  cy(zij, z0ij) = (1/γ) log(1 + exp(−γ z0ij zij)).   (9)

Figure 1: Comparison of the Generalized Log loss (γ = 3, γ = 30) with the Log loss (γ = 1).

4.1 Fixed Point Continuation (FPC) for MC-1 Albeit convex, the Nuclear Norm operator makes (5), (7), (8) non-smooth. Since the natural reformulation of a Nuclear Norm minimization is a Semidefinite Program, existing off-the-shelf interior point methods are not applicable due to the large dimension of Z. Thus, several methods have been devised to efficiently optimize this problem class [15, 6, 18, 25, 13, 1, 7, 2]. The FPC method [18], in particular, consists of a series of gradient updates h(·) = I(·) − τg(·) with step size τ and gradient g(·) given by the error penalizations cx(·) and cy(·). These steps are alternated with a shrinkage operator Sν(·) = max(0, · − ν), applied to the singular values of the resulting matrix, so the rank is minimized. Provided h(·) is a contraction, this method provably converges to the optimal solution of the unconstrained problem. However, the formulation MC-1 (5) is constrained, so in [11] a projection step is added to the algorithm (see Alg. 1), whose convergence was only empirically verified. In this paper, we prove the convergence of FPC for the constrained problem class by using the fact that projections onto convex sets are also non-expansive; thus, the composition of gradient, shrinkage and projection steps is also a contraction. Since the problem is convex, a unique fixed point exists at the optimal solution of the problem. First, let us state some preliminary results. Algorithm 1 FPC algorithm for solving MC-1 (5) Input: Initial Matrix Z0 Initialize Z as the rank-1 approximation of Z0 for µ = µ1 > µ2 > · · · > µk do while Rel.
Error > ϵ do Gradient Descent: A = h(Z) = Z − τg(Z); Shrink: A = UΣV⊤, Z = U Sτµ(Σ) V⊤; Project onto feasible set: Z1 = 1⊤; end while; end for; Output: Complete Matrix Z. Lemma 1 Let pC(·) be a projection operator onto any given convex set C. Then pC(·) is non-expansive. Moreover, ∥pC(Z) − pC(Z*)∥ = ∥Z − Z*∥ iff pC(Z) − pC(Z*) = Z − Z*. Proof For the first part, we apply the Cauchy-Schwarz inequality to the fact that (see [12, pg. 48])

  ∥pC(Z) − pC(Z*)∥²F ≤ ⟨pC(Z) − pC(Z*), Z − Z*⟩.   (10)

For the second part, let us write

  ∥pC(Z) − pC(Z*) − (Z − Z*)∥²F = ∥pC(Z) − pC(Z*)∥²F + ∥Z − Z*∥²F − 2⟨pC(Z) − pC(Z*), Z − Z*⟩,   (11)

where the inner product can be bounded by applying (10), yielding

  ∥pC(Z) − pC(Z*) − (Z − Z*)∥²F ≤ ∥pC(Z) − pC(Z*)∥²F + ∥Z − Z*∥²F − 2∥pC(Z) − pC(Z*)∥²F.   (12)

Introducing our hypothesis ∥pC(Z) − pC(Z*)∥ = ∥Z − Z*∥ into (12) yields

  ∥pC(Z) − pC(Z*) − (Z − Z*)∥²F ≤ 0,   (13)

from which we conclude that equality holds. Theorem 2 Let Z* be an optimal solution to (5). Then Z is also an optimal solution if

  ∥pC(Sν(h(Z))) − pC(Sν(h(Z*)))∥ = ∥Z − Z*∥.   (14)

Proof Using the non-expansiveness of the operators pC(·), Sν(·) and h(·) (Lemma 1 and [18, Lemmas 1 and 2]), we can write

  ∥Z − Z*∥ = ∥pC(Sν(h(Z))) − pC(Sν(h(Z*)))∥ ≤ ∥Sν(h(Z)) − Sν(h(Z*))∥ ≤ ∥h(Z) − h(Z*)∥ ≤ ∥Z − Z*∥,   (15)

so we conclude the inequalities are equalities. Using the second parts of the Lemmas, we get

  pC(Sν(h(Z))) − pC(Sν(h(Z*))) = Sν(h(Z)) − Sν(h(Z*)) = h(Z) − h(Z*) = Z − Z*.   (16)

Since Z* is optimal, by the projected subgradient method we have pC(Sν(h(Z*))) = Z*, (17) which, in turn, implies that pC(Sν(h(Z))) = Z, (18) from which we conclude that Z is an optimal solution to (5). We are now ready to prove the convergence of MC-1 to a fixed point Z* = pC(Sν(h(Z*))), which allows us to state its result as an optimal solution of (5). Theorem 3 The sequence {Zk} generated by Alg. 1 converges to Z*, an optimal solution of (5).
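The inner loop of Alg. 1 alternates a gradient step, singular value shrinkage, and a projection. Below is a minimal numerical sketch (our own toy illustration, with a squared-error gradient on known entries standing in for the paper's cx and cy; not the authors' implementation):

```python
import numpy as np

def shrink(A, nu):
    # Singular value shrinkage S_nu: soft-threshold the singular values of A
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - nu, 0.0)) @ Vt

def fpc_step(Z, Z0, mask, tau, mu):
    # Gradient step on known entries (squared-error loss used for simplicity)
    G = np.where(mask, Z - Z0, 0.0)
    A = Z - tau * G
    Z = shrink(A, tau * mu)
    Z[-1, :] = 1.0          # project the last row back onto Z_1 = 1^T
    return Z

rng = np.random.default_rng(1)
Z0 = rng.random((6, 8)); Z0[-1, :] = 1.0
mask = rng.random((6, 8)) > 0.3   # known entries
Z = Z0.copy()
for _ in range(50):
    Z = fpc_step(Z, Z0, mask, tau=0.5, mu=0.1)

assert np.allclose(Z[-1, :], 1.0)
# shrinkage strictly decreases the nuclear norm whenever it is positive
s0 = np.linalg.svd(Z0, compute_uv=False)
s1 = np.linalg.svd(shrink(Z0, 0.5), compute_uv=False)
assert s1.sum() < s0.sum()
```

The composition of the three steps is a contraction whenever the gradient map is, which is exactly the property exploited in Theorem 3.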
Proof Once we note that the non-expansiveness of pC(·), Sν(·) and h(·) ensures the composite operator pC(Sν(h(·))) is also non-expansive, we can use the same rationale as in [18, Theorem 4]. 4.2 Fixed Point Continuation for MC-Pos and MC-Simplex The condition that h(·) is a contraction [18, Lemma 2], used to prove the convergence of Alg. 1, remains valid for the new loss functions proposed in (6) and (9), since the new gradient

  g(zij) = (λ/|ΩY|) · (−z0ij / (1 + exp(γ z0ij zij)))           if zij ∈ ΩY,
  g(zij) = (1/|ΩX|) · (z²ij + 2 zij z0ij − 3 z0²ij) / (zij + z0ij)²   if zij ∈ ΩX,
  g(zij) = 0                                                      otherwise,   (19)

is contractive, provided we choose a step size τ ∈ [0, min(4|ΩY|/(λγ), τX|ΩX|)]. These values are easily obtained by noting that the gradient of the Log loss function is Lipschitz continuous with L = 0.25, and by choosing τX such that the χ² error, on the Non-Negative Orthant, is Lipschitz continuous with L = 1. Key to the feasibility of (7) and (8) within this algorithmic framework, however, is an efficient way to project Z onto the newly defined constraint sets. While for MC-Pos (7) projecting a vector onto the Non-Negative Orthant is done in closed form by truncating negative components to zero, efficiently performing the projection onto the Probability Simplex in MC-Simplex (8) is not straightforward. We note, however, that this is a projection onto a convex subset of an ℓ1 ball [9]. Therefore, we can exploit the dual of the projection problem and use a sorting procedure to implement this projection in closed form, as described in Alg. 2. The final algorithms are summarized in Alg. 3 and Alg. 4. Algorithm 2 Projection of a vector onto the probability Simplex: Input: vector v ∈ R^F to be projected; Sort v into µ: µ1 ≥ µ2 ≥ ... ≥ µF; Find ρ = max{ j : µj − (1/j)(Σ_{i=1}^{j} µi − 1) > 0 }; Compute θ = (1/ρ)(Σ_{i=1}^{ρ} µi − 1); Output: w s.t. wi = max{vi − θ, 0}. Algorithm 3 FPC Solver for MC-Pos (7) Input: Initial Matrix Z0 Initialize Z as the rank-1 approximation of Z0 for µ = µ1 > µ2 > · · · > µk do while Rel.
Error > ϵ do Gradient Descent: A = Z − τg(Z); Shrink: A = UΣV⊤, Z = U Sτµ(Σ) V⊤; Project ZX: ZX = max(ZX, 0); Project Z1: Z1 = 1⊤; end while; end for; Output: Complete Matrix Z. Algorithm 4 FPC Solver for MC-Simplex (8): Input: Initial Matrix Z0; Initialize Z as the rank-1 approximation of Z0; for µ = µ1 > µ2 > · · · > µk do, while Rel. Error > ϵ do: Gradient Descent: A = Z − τg(Z); Shrink: A = UΣV⊤, Z = U Sτµ(Σ) V⊤; Project ZX onto P (Alg. 2); Project Z1: Z1 = 1⊤; end while; end for; Output: Complete Matrix Z. 5 Experiments This section presents the performance evaluation of the proposed algorithms MC-Pos (7) and MC-Simplex (8) on image categorization tasks. We compare our results with MC-1 (5) and standard discriminative and MIL approaches [30, 20, 27, 26, 33, 32] on three datasets: CMU-Face, MSRC and 15 Scene. For our algorithms and MC-1, the values considered for parameter tuning were γ ∈ {1, 3, 30}, λ ∈ [10⁻⁴, 10²]. The continuation steps require a decreasing sequence of µ, which we chose as µk = 0.25 µk−1, stopping when µ = 10⁻¹². We use µ0 = 0.25 σ1, where σ1 is the largest singular value of Z0. Convergence was defined as a relative change in the objective function smaller than 10⁻². CMU-Face dataset This dataset consists of 624 images of 20 subjects' faces with several expressions and poses, under two conditions: wearing sunglasses or not. We test single-class classification and localization. As in [20], our training set is built using images of the first 8 subjects (126 images with glasses and 128 without), leaving the remainder for testing (370, equally split among the classes). We describe each image by extracting 10000 SIFT features [17] at random scales and positions and quantizing them onto a 1000-word visual codebook, obtained by performing hierarchical k-means clustering on 100000 features randomly selected from the training set.
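The sorting procedure of Alg. 2 admits a direct implementation. The sketch below (our own illustration, following the ℓ1-ball projection of Duchi et al. [9]) computes the Euclidean projection onto the probability simplex:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex, as in Alg. 2."""
    mu = np.sort(v)[::-1]                 # sort in decreasing order
    css = np.cumsum(mu) - 1.0             # cumulative sums minus 1
    j = np.arange(1, len(v) + 1)
    rho = j[mu - css / j > 0][-1]         # largest index with positive residual
    theta = css[rho - 1] / rho
    return np.maximum(v - theta, 0.0)

w = project_simplex(np.array([0.4, 1.2, -0.3]))
assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
# a point already on the simplex is left unchanged
p = np.array([0.2, 0.5, 0.3])
assert np.allclose(project_simplex(p), p)
```

Because only a sort and a cumulative sum are needed, the projection costs O(F log F) per column, which keeps the per-iteration cost of Alg. 4 dominated by the SVD in the shrinkage step.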
For this dataset, note that subjects were captured in a very similar environment, so the most discriminative part is the eye region. Thus, Nguyen et al. [20] argue that better results are obtained when classifier training is restricted to that region. Since the face position varies, they propose using a Multiple Instance Learning framework (MIL-SegSVM), which localizes the most discriminative region in each image while learning a classifier to split both classes. We compare the results of our classifier to the ones obtained by MIL-SegSVM as well as a Support Vector Machine. For the SVM, we either trained with the entire image information (SVM-Img) or with only the features extracted from the relevant, manually labeled, region of the eyes. For MC-1, MC-Pos and MC-Simplex, we proceed as follows. We fill Z with the label vector and the BoW histograms of each entire image, and leave the test set labels Ytst as unknown entries. For the MC-Simplex case, we preprocess Z by ℓ1-normalizing each histogram in ZX. This is done to avoid the Simplex projection picking a single bin and zeroing out the others, due to scale disparities in the bin counts. The obtained results are presented in Table 1, in terms of area under the ROC curve (AUROC). These indicate that both the fully supervised and the MIL approaches are more robust to the variability introduced by background noise, when compared to what is obtained when training without localization information (SVM-Img). However, this comes at the cost either of cumbersome labeling efforts or of iteratively approximating the solution of MIL, an integer quadratic problem. By using Matrix Completion, in turn, we are able to surpass these classification scores by solving a single convex minimization, since our error term E removes noise introduced by non-discriminative parts of the image. To validate this hypothesis, we run a sliding window search in the images using the same size criteria of [20].
We search for the box whose normalized histogram most closely resembles the corrected version in ZX according to the χ² distance, and obtain the results shown in Fig. 2 (similar results were obtained using MC-Simplex). These show how the corrected histograms capture the semantic concept being trained. Comparing Matrix Completion approaches, we note that while the previous method MC-1 achieves competitive performance against previous baselines, it is outperformed by MC-Pos, showing the improvement introduced by the domain knowledge constraints. Moreover, MC-1 does not allow further localization of the class representative since it introduces erroneous negative numbers in the histograms (Fig. 3).

Figure 2: Histograms corrected by MC-Pos (7) preserve semantic meaning.

Table 1: AUROC result comparison for the CMU Face dataset.
  Method           AUROC
  SVM-Img [20]     0.90
  SVM-FS [20]      0.94
  MIL-SegSVM [20]  0.96
  MC-1 [11]        0.96
  MC-Pos           0.97
  MC-Simplex       0.96

Figure 3: Erroneous histogram correction performed by MC-1 (5). Top: Global view. Bottom: Rescaling shows negative entries.

MSRC dataset Next, we run our method in a multi-label object recognition setting. The MSRC dataset consists of 591 real-world images distributed among 21 classes, with an average of 3 classes present per image. We mimic the setup of [27] and use as features histograms of Textons [23] concatenated with histograms of colors in the L+U+V space. Our algorithm is given the task of classifying the presence of each object class in the images. We proceed as in the CMU-Face dataset. On this dataset, we compare our formulations to MC-1 and several state-of-the-art approaches for categorization using Multiple-Label Multiple Instance Learning: Multiple Set Kernel MIL SVM (MSK-MIL) by Vijayanarasimhan et al.
[27], the Multi-label Multiple Instance Learning (ML-MIL) approach by Zha et al. [32] and the Multi-task Random Texton Forest (MTL-RF) method of Vezhnevets et al. [26]. For localization, [32, 27] enumerate possible instances by pre-segmenting images into a fixed number of parts, whereas [26] provides pixel-level classification. The obtained average AUROC scores using 5-fold cross validation are shown in Table 2. The results show that our methods significantly outperform MC-1. Moreover, MC-Simplex (8) outperforms the results given by MIL techniques. Again, the fact that feature errors are corrected allows us to achieve good results while training with the entire image, as opposed to relying on full-blown pixel classification or segmentation techniques, which is still considered an open problem in Computer Vision. Moreover, we point out that MSK-MIL is a kernel approach, as opposed to ours which, despite non-linear error penalizations, assumes a linear classification model in the feature space. 15 Scene dataset Finally, we test the performance of our algorithm for scene classification. Scenes differ from objects in the sense that they do not necessarily have a constrained physical location in the image. The 15 Scene dataset is a multi-label dataset with 4485 images. Following the feature study in [30], we use GIST [21], the non-histogram feature achieving the best results on this dataset. Notice that while not a BoW model, this feature represents the output energy of a bank of 24 filters, so its entries are also positive. We run our algorithm on 10 folds, each comprising 1500 training and 2985 test examples. The results in Table 3 again show that our method achieves results comparable to the state-of-the-art. One should note that the state-of-the-art results are obtained using a kernel space, whereas our method is essentially a linear technique aided by non-linear error corrections.
When we compare our results to using a linear kernel, MC-Simplex achieves better performance. Relative to the results obtained for the CMU-Face and MSRC datasets, we note that the roles of MC-Pos and MC-Simplex are inverted, emphasizing the need for models both with and without normalization.

Table 2: 5-fold CV AUROC comparison for the MSRC dataset (Std. Dev. negligible at this precision).
  Method        Avg. AUROC
  MSK-MIL [27]  0.90
  ML-MIL [32]   0.90
  MTL-RF [26]   0.89
  MC-1 [11]     0.87
  MC-Pos        0.92
  MC-Simplex    0.90

Table 3: 10-fold CV AUROC comparison for the 15 Scene dataset (Std. Dev. negligible at this precision).
  Method                    Avg. AUROC
  1-vs-all Linear SVM [30]  0.94
  1-vs-all χ² SVM [30]      0.97
  MC-1 [11]                 0.90
  MC-Pos                    0.91
  MC-Simplex                0.94

6 Conclusions We presented two new convex methods for performing semi-supervised multi-label classification of histogram data, with proven convergence properties. Casting classification in a Matrix Completion framework allows for easy handling of partial data and labels, and robustness to outliers. Moreover, since histograms of full images contain the information of the parts contained therein, the error term embedded in our formulation is able to capture intra-class variability arising from different backgrounds. Experiments show that our methods perform comparably to state-of-the-art MIL methods on several image datasets, surpassing them in several cases, without the need for precise localization of objects in the training set. Acknowledgements: Support for this research was provided by the Portuguese Foundation for Science and Technology through the Carnegie Mellon Portugal program under the project FCT/CMU/P11. Partially funded by FCT project Printart PTDC/EEA-CRO/098822/2008. Fernando De la Torre is partially supported by Grant CPS-0931999 and NSF IIS-1116583.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. References [1] P. Aguiar, J. Xavier, and M. Stosic. Spectrally optimal factorization of incomplete matrices. In CVPR, 2008. [2] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Proceedings of the 48th Annual Allerton Conference, 2010. [3] K. Barnard and D. Forsyth. Learning the semantics of words and pictures. In ICCV, 2001. [4] T. L. Berg, A. C. Berg, J. Edwards, and D. A. Forsyth. Who's in the Picture? In NIPS, 2004. [5] R. S. Cabral, J. P. Costeira, F. De la Torre, and A. Bernardino. Fast incremental method for matrix completion: an application to trajectory correction. In ICIP, 2011. [6] J.-F. Cai, E. J. Candes, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM J. on Optimization, 20(4):1956–1982, 2008. [7] E. Candes and B. Recht. Exact low-rank matrix completion via convex optimization. In Allerton, 2008. [8] Y. Dai, H. Li, and M. He. Element-wise factorization for n-view projective reconstruction. In ECCV, 2010. [9] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, 2008. [10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings American Control Conference, 2001. [11] A. B. Goldberg, X. Zhu, B. Recht, J.-M. Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In NIPS, 2010. [12] J.-B. Hiriart-Urruty and C. Lemaréchal. Fundamentals of Convex Analysis. Grundlehren der mathematischen Wissenschaften. Springer-Verlag, New York–Heidelberg–Berlin, 2001. [13] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inf.
Theor., 56:2980–2998, June 2010. [14] V. Lavrenko, R. Manmatha, and J. Jeon. A model for learning the semantics of pictures. In NIPS, 2003. [15] Z. Lin and M. Chen. The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices. Preprint. [16] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In ICML, 2010. [17] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004. [18] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, to appear. [19] O. Maron and A. Ratan. Multiple-instance learning for natural scene classification. In ICML, 1998. [20] M. H. Nguyen, L. Torresani, F. De la Torre, and C. Rother. Weakly supervised discriminative localization and classification: a joint learning process. In ICCV, 2009. [21] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42:145–175, 2001. [22] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images. In CVPR, 2010. [23] J. Shotton, J. M. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV, 2006. [24] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In CVPR, 2003. [25] K.-C. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Preprint, 2009. [26] A. Vezhnevets and J. Buhmann. Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning. In CVPR, 2010. [27] S. Vijayanarasimhan and K. Grauman. What's it going to cost you?: Predicting effort vs. informativeness for multi-label image annotations. In CVPR, 2009. [28] H. Wang, C. Ding, and H. Huang.
Multi-label linear discriminant analysis. In ECCV, 2010. [29] J. Wright, A. Ganesh, S. Rao, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization. In NIPS, 2009. [30] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. [31] C. Yang, M. Dong, and J. Hua. Region-based image annotation using asymmetrical support vector machine-based multiple-instance learning. In CVPR, 2006. [32] Z.-j. Zha, X.-s. Hua, T. Mei, J. Wang, and G.-j. Q. Zengfu. Joint multi-label multi-instance learning for image classification. In CVPR, 2008. [33] Z.-h. Zhou and M. Zhang. Multi-instance multi-label learning with application to scene classification. In NIPS, 2006. [34] G. Zhu, S. Yan, and Y. Ma. Image tag refinement towards low-rank, content-tag prior and error sparsity. In ICMM, 2010. 9
Bayesian Bias Mitigation for Crowdsourcing Fabian L. Wauthier University of California, Berkeley flw@cs.berkeley.edu Michael I. Jordan University of California, Berkeley jordan@cs.berkeley.edu Abstract Biased labelers are a systemic problem in crowdsourcing, and a comprehensive toolbox for handling their responses is still being developed. A typical crowdsourcing application can be divided into three steps: data collection, data curation, and learning. At present these steps are often treated separately. We present Bayesian Bias Mitigation for Crowdsourcing (BBMC), a Bayesian model to unify all three. Most data curation methods account for the effects of labeler bias by modeling all labels as coming from a single latent truth. Our model captures the sources of bias by describing labelers as influenced by shared random effects. This approach can account for more complex bias patterns that arise in ambiguous or hard labeling tasks and allows us to merge data curation and learning into a single computation. Active learning integrates data collection with learning, but is commonly considered infeasible with Gibbs sampling inference. We propose a general approximation strategy for Markov chains to efficiently quantify the effect of a perturbation on the stationary distribution and specialize this approach to active learning. Experiments show BBMC to outperform many common heuristics. 1 Introduction Crowdsourcing is becoming an increasingly important methodology for collecting labeled data, as demonstrated among others by Amazon Mechanical Turk, reCAPTCHA, Netflix, and the ESP game. Motivated by the promise of a wealth of data that was previously impractical to gather, researchers have focused in particular on Amazon Mechanical Turk as a platform for collecting label data [11, 12]. 
Unfortunately, the data collected from crowdsourcing services is often very dirty: Unhelpful labelers may provide incorrect or biased responses that can have major, uncontrolled effects on learning algorithms. Bias may be caused by personal preference, systematic misunderstanding of the labeling task, lack of interest or varying levels of competence. Further, as soon as malicious labelers try to exploit incentive schemes in the data collection cycle, yet more forms of bias enter. The typical crowdsourcing pipeline can be divided into three main steps: 1) Data collection. The researcher farms the labeling tasks to a crowdsourcing service for annotation and possibly adds a small set of gold standard labels. 2) Data curation. Since labels from the crowd are contaminated by errors and bias, some filtering is applied to curate the data, possibly using the gold standard provided by the researcher. 3) Learning. The final model is learned from the curated data. At present these steps are often treated as separate. The data collection process is often viewed as a black box which can only be minimally controlled. Although the potential for active learning to make crowdsourcing much more cost effective and goal driven has been appreciated, research on the topic is still in its infancy [4, 9, 17]. Similarly, data curation is in practice often still performed as a preprocessing step, before feeding the data to a learning algorithm [6, 8, 10, 11, 12, 14]. We believe that the lack of systematic solutions to these problems can make crowdsourcing brittle in situations where labelers are arbitrarily biased or even malicious, such as when tasks are particularly ambiguous/hard or when opinions or ratings are solicited. Our goal in the current paper is to show how crowdsourcing can be leveraged more effectively by treating the overall pipeline within a Bayesian framework. We present Bayesian Bias Mitigation for Crowdsourcing (BBMC) as a way to achieve this.
BBMC makes two main contributions. The first is a flexible latent feature model that describes each labeler’s idiosyncrasies through multiple shared factors and allows us to combine data curation and learning (steps 2 and 3 above) into one inferential computation. Most of the literature accounts for the effects of labeler bias by assuming a single, true latent labeling from which labelers report noisy observations of some kind [2, 3, 4, 6, 8, 9, 10, 11, 15, 16, 17, 18]. This assumption is inappropriate when labels are solicited on subjective or ambiguous tasks (ratings, opinions, and preferences) or when learning must proceed in the face of arbitrarily biased labelers. We believe that an unavoidable and necessary extension of crowdsourcing allows multiple distinct (yet related) “true” labelings to co-exist, but that at any one time we may be interested in learning about only one of these “truths.” Our BBMC framework achieves this by modeling the sources of labeler bias through shared random effects. Next, we want to perform active learning in this model to actively query labelers, thus integrating step 1 with steps 2 and 3. Since our model requires Gibbs sampling for inference, a straightforward application of active learning is infeasible: Each active learning step relies on many inferential computations and would trigger a multitude of subordinate Gibbs samplers to be run within one large Gibbs sampler. Our second contribution is a new methodology for solving this problem. The basic idea is to approximate the stationary distribution of a perturbed Markov chain using that of an unperturbed chain. We specialize this idea to active learning in our model and show that the computations are efficient and that the resulting active learning strategy substantially outperforms other active learning schemes. The paper is organized as follows: We discuss related work in Section 2. 
In Section 3 we propose the latent feature model for labelers and in Section 4 we discuss the inference procedure that combines data curation and learning. Then we present a general method to approximate the stationary distribution of perturbed Markov chains and apply it to derive an efficient active learning criterion in Section 5. In Section 6 we present comparative results and we draw conclusions in Section 7. 2 Related Work Relevant work on active learning in multi-teacher settings has been reported in [4, 9, 17]. Sheng et al. [9] use the multiset of current labels with a random forest label model to score which task to next solicit a repeat label for. The quality of the labeler providing the new label does not enter the selection process. In contrast, Donmez et al. [4] actively choose the labeler to query next using a formulation based on interval estimation, utilizing repeated labelings of tasks. The task to label next is chosen separately from the labeler. In contrast, our BBMC framework can perform meaningful inferences even without repeated labelings of tasks and treats the choices of which labeler to query on which task as a joint choice in a Bayesian framework. Yan et al. [17] account for the effects of labeler bias through a coin flip observation model that filters a latent label assignment, which in turn is modeled through a logistic regression. As in [4], the labeler is chosen separately from the task by solving two optimization problems. In other work on data collection strategies, Wais et al. [14] require each labeler to first pass a screening test before they are allowed to label any more data. In a similar manner, reputation systems of various forms are used to weed out historically unreliable labelers before collecting data. Consensus voting among multiple labels is a commonly used data curation method [12, 14]. It works well when low levels of bias or noise are expected but becomes unreliable when labelers vary greatly in quality [9]. 
Earlier work on learning from variable-quality teachers was revisited by Smyth et al. [10], who looked at estimating the unknown true label for a task from a set of labelers of varying quality without external gold standard signal. They used an EM strategy to iteratively estimate the true label and the quality of the labelers. The work was extended to a Bayesian formulation by Raykar et al. [8], who assign latent variables to labelers capturing their mislabeling probabilities. Ipeirotis et al. [6] pointed out that a biased labeler who systematically mislabels tasks is still more useful than a labeler who reports labels at random. A method is proposed that separates low-quality labelers from high-quality, but biased, labelers. Dekel and Shamir [3] propose a two-step process. First, they filter labelers by how far they disagree from an estimated true label and then retrain the model on the cleaned data. They give a generalization analysis for anticipated performance. In a similar vein, Dekel and Shamir [2] show that, under some assumptions, restricting each labeler's influence on a learned model can control the effect of low-quality or malicious labelers. Together with [8, 16, 18], [2] and [3] are among the recent lines of research to combine data curation and learning. Work has also focused on using gold standard labels to determine labeler quality. Going beyond simply counting tasks on which labelers disagree with the gold standard, Snow et al. [11] estimate labeler quality in a Bayesian setting by comparing to the gold standard. Lastly, collaborative filtering has looked extensively at completing sparse matrices of ratings [13]. Given some gold standard labels, collaborative filtering methods could in principle also be used to curate data represented by a sparse label matrix. However, collaborative filtering generally does not combine this inference with the learning of a labeler-specific model for prediction (step 3).
Also, with the exception of [19], active learning has not been studied in the collaborative filtering setting.

3 Modeling Labeler Bias

In this section we specify a Bayesian latent feature model that accounts for labeler bias and allows us to combine data curation and learning into a single inferential calculation. For ease of exposition we will focus on binary classification, but our method can be generalized. Suppose we solicited labels for n tasks from m labelers. In practical settings it is unlikely that a task is labeled by more than 3–10 labelers [14]. Let task descriptions x_i ∈ R^d, i = 1, ..., n, be collected in the matrix X. The label responses are recorded in the matrix Y so that y_{i,l} ∈ {−1, 0, +1} denotes the label given to task i by labeler l. The special label 0 denotes that a task was not labeled. A researcher is interested in learning a model that can be used to predict labels for new tasks. When consensus is lacking among labelers, our desideratum is to predict the labels that the researcher (or some other expert) would have assigned, as opposed to labels from an arbitrary labeler in the crowd. In this situation it makes sense to stratify the labelers in some way. To facilitate this, the researcher r provides gold standard labels in column r of Y for a small subset of the tasks. Loosely speaking, the gold standard allows our model to curate the data by softly combining labels from those labelers whose responses will be useful in predicting r's remaining labels. It is important to note that our model is entirely symmetric in the role of the researcher and labelers. If instead we were interested in predicting labels for labeler l, we would treat column l as containing the gold standard labels. The researcher r is just another labeler, the only distinction being that we wish to learn a model that predicts r's labels.
To simplify our presentation, we will accordingly refer to labelers in the crowd and the researcher occasionally just as "labelers," indexed by l, and only use the distinguishing index r when necessary. We account for each labeler l's idiosyncrasies by assigning a parameter β_l ∈ R^d to l and modeling labels y_{i,l}, i = 1, ..., n, through a probit model p(y_{i,l} | x_i, β_l) = Φ(y_{i,l} x_i^⊤ β_l), where Φ(·) is the standard normal CDF. This section describes a joint Bayesian prior on parameters β_l that allows for parameter sharing; two labelers that share parameters have similar responses. In the context of this model, the two-step process of data curation and learning a model that predicts r's labels is reduced to posterior inference on β_r given X and Y. Inference softly integrates labels from relevant labelers, while at the same time allowing us to predict r's remaining labels.

3.1 Latent feature model

Labelers are not independent, so it makes sense to impose structure on the set of β_l's. Specifically, each vector β_l is modeled as the sum of a set of latent factors that are shared across the population. Let z_l be a latent binary vector for labeler l whose component z_{l,b} indicates whether the latent factor γ_b ∈ R^d contributes to β_l. In principle, our model allows for an infinite number of distinct factors (i.e., z_l is infinitely long), as long as only a finite number of those factors is active (i.e., Σ_{b=1}^∞ z_{l,b} < ∞). Let γ = (γ_b)_{b=1}^∞ be the concatenation of the factors γ_b. Given a labeler's vector z_l and factors γ we define the parameter β_l = Σ_{b=1}^∞ z_{l,b} γ_b. For multiple labelers we let the infinitely long matrix Z = (z_1, ..., z_m)^⊤ collect the vectors z_l and define the index set of all observed labels L = {(i, l) : y_{i,l} ≠ 0}, so that the likelihood is

p(Y | X, γ, Z) = ∏_{(i,l)∈L} p(y_{i,l} | x_i, γ, z_l) = ∏_{(i,l)∈L} Φ(y_{i,l} x_i^⊤ β_l).   (1)

To complete the model we need to specify priors for γ and Z.
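The likelihood in Eq. (1) factorizes over the observed index set L, so it is cheap to evaluate. A minimal sketch with NumPy/SciPy, stacking the per-labeler parameters β_l as columns of a matrix; all names and dimensions here are illustrative, not taken from the paper's implementation:

```python
import numpy as np
from scipy.stats import norm

def probit_loglik(Y, X, B):
    """Log of Eq. (1): sum over observed (i, l) of log Phi(y_{i,l} x_i^T beta_l).

    Y: (n, m) labels in {-1, 0, +1}, with 0 marking an unlabeled task.
    X: (n, d) task descriptions.
    B: (d, m) per-labeler parameters beta_l stored as columns.
    """
    scores = X @ B            # (n, m) matrix of values x_i^T beta_l
    obs = Y != 0              # the observed index set L
    return norm.logcdf(Y[obs] * scores[obs]).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
B = rng.standard_normal((3, 2))
Y = np.sign(X @ B)            # labels perfectly consistent with B
Y[0, 0] = 0                   # mark one label as missing
ll = probit_loglik(Y, X, B)   # finite and negative, since Phi(z) < 1 for finite z
```

Unlabeled entries simply drop out of the sum, which is what makes sparse label matrices unproblematic for this model.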
We define the prior distribution of each γ_b to be a zero-mean Gaussian γ_b ∼ N(0, σ²I), and let Z be governed by an Indian Buffet Process (IBP) Z ∼ IBP(α), parameterized by α [5]. The IBP is a stochastic process on infinite binary matrices consisting of vectors z_l. A central property of the IBP is that with probability one, a sampled matrix Z contains only a finite number of nonzero entries, thus satisfying our requirement that Σ_{b=1}^∞ z_{l,b} < ∞. In the context of our model this means that when working with finite data, with probability one only a finite set of features is active across all labelers. To simplify notation in subsequent sections, we use this observation and collapse an infinite matrix Z and vector γ to finite dimensional equivalents. From now on, we think of Z as the finite matrix having all zero-columns removed. Similarly, we think of γ as having all blocks γ_b corresponding to zero-columns in the original matrix Z removed. With probability one, the number of columns K(Z) of Z is finite, so we may write β_l = Σ_{b=1}^{K(Z)} z_{l,b} γ_b ≜ Z_l^⊤ γ, with Z_l = z_l ⊗ I the Kronecker product of z_l and I.

4 Inference: Data Curation and Learning

We noted before that our model combines data curation and learning in a single inferential computation. In this section we lay out the details of a Gibbs sampler for achieving this. Given a task j which was not labeled by r (and possibly by no other labeler), we need the predictive probability

p(y_{j,r} = +1 | X, Y) = ∫ p(y_{j,r} = +1 | x_j, β_r) p(β_r | X, Y) dβ_r.   (2)

To approximate this probability we need to gather samples from the posterior p(β_r | Y, X). Equivalently, since β_r = Z_r^⊤ γ, we need samples from the posterior p(γ, z_r | Y, X). Because latent factors can be shared across multiple labelers, the posterior will softly absorb label information from labelers whose latent factors tend to be similar to those of the researcher r.
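The collapsed construction β_l = Z_l^⊤ γ above amounts to a single binary-by-real matrix product across all labelers at once. A hypothetical sketch in which a fixed m × K binary matrix stands in for an IBP draw (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, K, d, sigma = 4, 3, 2, 1.0   # labelers, truncation level K, task dimension, factor scale

# Latent factors gamma_b ~ N(0, sigma^2 I), stacked as the rows of Gamma
Gamma = sigma * rng.standard_normal((K, d))

# Binary feature assignments; a real run would sample these from the (truncated) IBP
Z = rng.integers(0, 2, size=(m, K))

# beta_l = sum_b z_{l,b} gamma_b for every labeler in one matrix product
B = Z @ Gamma                   # row l of B is beta_l
```

Labelers whose rows of Z coincide get identical parameters, which is exactly how the model lets label information flow between similar labelers.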
Thus, Bayesian inference on p(β_r | Y, X) automatically combines data curation and learning by weighting label information through an inferred sharing structure. Importantly, the posterior is informative even when no labeler in the crowd labeled any of the tasks the researcher labeled.

4.1 Gibbs sampling

For Gibbs sampling in the probit model one commonly augments the likelihood in Eq. (1) with intermediate random variables T = {t_{i,l} : y_{i,l} ≠ 0}. The generative model for the label y_{i,l} given x_i, γ and z_l first samples t_{i,l} from a Gaussian N(β_l^⊤ x_i, 1). Conditioned on t_{i,l}, the label is then defined as y_{i,l} = 2·1[t_{i,l} > 0] − 1. Figure 1(a) summarizes the augmented graphical model by letting β denote the collection of β_l variables. We are interested in sampling from p(γ, z_r | Y, X). The Gibbs sampler for this lives in the joint space of T, γ, Z and samples iteratively from the three conditional distributions p(T | X, γ, Z), p(γ | X, Z, T) and p(Z | γ, X, Y). The different steps are:

Sampling T given X, γ, Z: We independently sample elements of T given X, γ, Z from a truncated normal as

(t_{i,l} | X, γ, Z) ∼ N^{y_{i,l}}(t_{i,l} | γ^⊤ Z_l x_i, 1),   (3)

where we use N^{−1}(t | µ, 1) and N^{+1}(t | µ, 1) to indicate the density of the negative- and positive-orthant-truncated normal with mean µ and variance 1, respectively, evaluated at t.

Sampling γ given X, Z, T: Straightforward calculations show that conditional sampling of γ given X, Z, T follows a multivariate Gaussian

(γ | X, Z, T) ∼ N(γ | µ, Σ),   (4)

where

Σ^{−1} = I/σ² + Σ_{(i,l)∈L} Z_l x_i x_i^⊤ Z_l^⊤,    µ = Σ Σ_{(i,l)∈L} Z_l x_i t_{i,l}.   (5)

Figure 1: (a) A graphical model of the augmented latent feature model. Each node corresponds to a collection of random variables in the model. (b) A schematic of our approximation scheme. The top chain indicates an unperturbed Markov chain, the lower a perturbed Markov chain.
Rather than sampling from the lower chain directly (dashed arrows), we transform samples from the top chain to approximate samples from the lower (wavy arrows).

Sampling Z given γ, X, Y: Finally, for inference on Z given γ, X, Y we may use techniques outlined in [5]. We are interested in performing active learning in our model, so it is imperative to keep the conditional sampling calculations as compact as possible. One simple way to achieve this is to work with a finite-dimensional approximation to the IBP: We constrain Z to be an m × K matrix, assigning each labeler at most K active latent features. This is not a substantial limitation; in practice the truncated IBP often performs comparably, and for K → ∞ converges in distribution to the full IBP [5]. Let m_{−l,b} = Σ_{l′≠l} z_{l′,b} be the number of labelers, excluding l, with feature b active. Define β_l(z_{l,b}) = z_{l,b} γ_b + Σ_{b′≠b} z_{l,b′} γ_{b′} as the parameter β_l either specifically including or excluding γ_b. Now if we let z_{−l,b} be the column b of Z, excluding element z_{l,b}, then updated elements of Z can be sampled one by one as

p(z_{l,b} = 1 | z_{−l,b}) = (m_{−l,b} + α/K) / (m + α/K)   (6)

p(z_{l,b} | z_{−l,b}, γ, X, Y) ∝ p(z_{l,b} | z_{−l,b}) ∏_{i : y_{i,l} ≠ 0} Φ(y_{i,l} x_i^⊤ β_l(z_{l,b})).   (7)

After reaching approximate stationarity, we collect samples (γ^s, Z^s), s = 1, ..., S, from the Gibbs sampler as they are generated. We then compute samples from p(β_r | Y, X) by writing β_r^s = (Z_r^s)^⊤ γ^s.

5 Active Learning

The previous section outlined how, given a small set of gold standard labels from r, the remaining labels can be predicted via posterior inference on p(β_r | Y, X). In this section we take an active learning approach [1, 7] to incrementally add labels to Y so as to quickly learn about β_r while reducing data acquisition costs. Active learning allows us to guide the data collection process through model inferences, thus integrating the data collection, data curation and learning steps of the crowdsourcing pipeline.
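The conditional updates in Eqs. (3)-(7) above can be sketched end-to-end with NumPy/SciPy. This toy sweep assumes every task is labeled, uses Z_l x_i = z_l ⊗ x_i as the design vector for the stacked factor vector γ, and takes the finite Beta-Bernoulli form of the truncated IBP with the number of labelers m in the denominator of Eq. (6); all dimensions and names are illustrative, not from the paper's implementation:

```python
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(2)
n, m, K, d, sigma, alpha = 6, 3, 2, 2, 1.0, 1.0
X = rng.standard_normal((n, d))
Y = np.where(rng.standard_normal((n, m)) > 0, 1.0, -1.0)  # every task labeled, for simplicity
Gamma = rng.standard_normal((K, d))                        # factors gamma_b as rows
Z = rng.integers(0, 2, size=(m, K))

def feat(i, l):
    # Z_l x_i = (z_l (kron) I) x_i = kron(z_l, x_i): design vector for stacked gamma
    return np.kron(Z[l], X[i])

gamma = Gamma.ravel()                          # stacked factor vector

# Eq. (3): orthant-truncated normal draw for each t_{i,l}
T = np.zeros((n, m))
for i in range(n):
    for l in range(m):
        mu = feat(i, l) @ gamma
        lo, hi = ((-mu, np.inf) if Y[i, l] > 0 else (-np.inf, -mu))
        T[i, l] = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)

# Eqs. (4)-(5): Gaussian conditional for gamma
Prec = np.eye(K * d) / sigma**2                # Sigma^{-1} = I/sigma^2 + sum Z_l x_i x_i^T Z_l^T
rhs = np.zeros(K * d)                          # accumulates sum Z_l x_i t_{i,l}
for i in range(n):
    for l in range(m):
        Prec += np.outer(feat(i, l), feat(i, l))
        rhs += feat(i, l) * T[i, l]
Sigma = np.linalg.inv(Prec)
Sigma = (Sigma + Sigma.T) / 2                  # symmetrize against round-off
gamma = rng.multivariate_normal(Sigma @ rhs, Sigma)
Gamma = gamma.reshape(K, d)

# Eqs. (6)-(7): resample each z_{l,b} given everything else
for l in range(m):
    for b in range(K):
        m_minus = Z[:, b].sum() - Z[l, b]                  # m_{-l,b}
        prior1 = (m_minus + alpha / K) / (m + alpha / K)   # Eq. (6)
        logp = np.empty(2)
        for val in (0, 1):
            z_l = Z[l].copy()
            z_l[b] = val
            beta = z_l @ Gamma
            prior = prior1 if val == 1 else 1.0 - prior1
            logp[val] = np.log(prior) + norm.logcdf(Y[:, l] * (X @ beta)).sum()
        d01 = np.clip(logp[0] - logp[1], -50, 50)          # guard against overflow
        Z[l, b] = rng.random() < 1.0 / (1.0 + np.exp(d01))
```

The truncation bounds in the Eq. (3) step guarantee that the sign of each sampled t_{i,l} matches the observed label, exactly as the augmentation requires.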
We envision a unified system that automatically asks for more labels from those labelers on those tasks that are most useful in inferring β_r. This is in contrast to [9], where labelers cannot be targeted with tasks. It is also unlike [4], since we can let labelers be arbitrarily unhelpful, and differs from [17], which assumes a single latent truth. A well-known active learning criterion popularized by Lindley [7] is to label that task next which maximizes the prior-posterior reduction in entropy of an inferential quantity of interest. The original formulation has been generalized beyond entropy to arbitrary utility functionals U(·) of the updated posterior probability [1]. The functional U(·) is a model parameter that can depend on the type of inferences we are interested in. In our particular setup, we wish to infer the parameter β_r to predict labels for the researcher r. Suppose we chose to solicit a label for task i′ from labeler l′, which produced label y_{i′,l′}. The utility of this observation is U(p(β_r | y_{i′,l′})). The average utility of receiving a label on task i′ from labeler l′ is I((i′, l′), p(β_r)) = E(U(p(β_r | y_{i′,l′}))), where the expectation is taken with respect to the predictive label probabilities p(y_{i′,l′} | x_{i′}) = ∫ p(y_{i′,l′} | x_{i′}, β_{l′}) p(β_{l′}) dβ_{l′}. Active learning chooses that pair (i′, l′) which maximizes I((i′, l′), p(β_r)). If we want to choose the next task for the researcher to label, we constrain l′ = r. To query the crowd we let l′ ≠ r. Similarly, we can constrain i′ to any particular value or subset of interest. For the following discussion we let U(p(β_r | y_{i′,l′})) = ||E_{p(β_r)}(β_r) − E_{p(β_r | y_{i′,l′})}(β_r)||_2 be the ℓ2 norm of the difference in means of β_r. Picking the task that shifts the posterior mean the most is similar in spirit to the common criterion of maximizing the Kullback-Leibler divergence between the prior and posterior.
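Given sample sets from the current and updated posteriors, the mean-shift utility and its expectation under the predictive label distribution reduce to a few lines. A sketch with synthetic "posterior" samples standing in for real Gibbs output (the shift vector and label probabilities are made up for illustration):

```python
import numpy as np

def mean_shift_utility(samples_prior, samples_post):
    # U = || E[beta_r] - E[beta_r | y] ||_2, estimated from two MCMC sample sets
    return np.linalg.norm(samples_prior.mean(axis=0) - samples_post.mean(axis=0))

def expected_utility(samples_prior, samples_post_by_label, p_label):
    # I((i', l'), p(beta_r)) = E_y[ U(p(beta_r | y)) ] under predictive label probs
    return sum(p_label[y] * mean_shift_utility(samples_prior, samples_post_by_label[y])
               for y in (-1, +1))

rng = np.random.default_rng(4)
prior = rng.standard_normal((500, 3))              # stand-in samples of p(beta_r)
post = {+1: prior + np.array([0.3, 0.0, 0.0]),     # stand-in updated posteriors
        -1: prior - np.array([0.3, 0.0, 0.0])}
score = expected_utility(prior, post, {+1: 0.5, -1: 0.5})   # both shifts have norm 0.3
```

Active learning would compute such a score for every candidate pair (i′, l′) and query the maximizer; the expense of producing the updated sample sets is exactly what Section 5.1 addresses.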
5.1 Active learning for MCMC inference

A straightforward application of active learning is impractical using Gibbs sampling, because to score a single task-labeler pair (i′, l′) we would have to run two Gibbs samplers (one for each of the two possible labels) in order to approximate the updated posterior distributions. Suppose we started with k task-labeler pairs that active learning could choose from. Depending on the number of selections we wish to perform, we would have to run k ≲ g ≲ k² Gibbs samplers within the topmost Gibbs sampler of Section 4. Clearly, such a scoring approach is not practical. To solve this problem, we propose a general purpose strategy to approximate the stationary distribution of a perturbed Markov chain using that of an unperturbed Markov chain. The approximation allows efficient active learning in our model that outperforms naïve scoring both in speed and quality. The main idea can be summarized as follows. Suppose we have two Markov chains, p(β_r^t | β_r^{t−1}) and p̂(β̂_r^t | β̂_r^{t−1}), the latter of which is a slight perturbation of the former. Denote the stationary distributions by p_∞(β_r) and p̂_∞(β̂_r), respectively. If we are given the stationary distribution p_∞(β_r) of the unperturbed chain, then we propose to approximate the perturbed stationary distribution by

p̂_∞(β̂_r) ≈ ∫ p̂(β̂_r | β_r) p_∞(β_r) dβ_r.   (8)

If p̂(β̂^t | β̂^{t−1}) = p(β̂^t | β̂^{t−1}), the approximation is exact. Our hope is that if the perturbation is small enough, the above approximation is good. To use this practically with MCMC, we first run the unperturbed MCMC chain to approximate stationarity, and then use samples of p_∞(β_r) to compute approximate samples from p̂_∞(β̂_r). Figure 1(b) shows this scheme visually. To map this idea to our active learning setup, we conceptually let the unperturbed chain p(β_r^t | β_r^{t−1}) be the chain on β_r induced by the Gibbs sampler in Section 4.
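The one-step approximation in Eq. (8) is easy to illustrate on a toy chain. The sketch below uses a simple Gaussian AR(1) sampler, not the paper's Gibbs chain: samples from the unperturbed stationary distribution are pushed through a single step of the perturbed kernel. On this toy the one-step mean lands near 0.2 while the exact perturbed stationary mean is 0.2/(1 − 0.5) = 0.4, which makes concrete why the approximation is intended only for small perturbations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy kernel: x' = a*x + b + N(0, 1), with stationary law N(b / (1 - a), 1 / (1 - a^2))
a = 0.5
def step(x, b):
    return a * x + b + rng.standard_normal(x.shape)

# Run the unperturbed chain (b = 0) long enough to reach stationarity
x = np.zeros(20000)
for _ in range(50):
    x = step(x, b=0.0)

# Eq. (8): push each unperturbed sample through ONE step of the perturbed
# kernel (b = 0.2) to approximate the perturbed stationary distribution
x_hat = step(x, b=0.2)
approx_mean = x_hat.mean()   # near 0.2, versus the exact perturbed mean 0.4
```

The gap shrinks as the perturbation b → 0, matching the regime the paper targets: adding a single label to a large data set changes the Gibbs kernel only slightly.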
The perturbed chain p̂(β̂_r^t | β̂_r^{t−1}) represents the chain where we have added a new observation y_{i′,l′} to the measured data. If we have S samples β_r^s from p_∞(β_r), then we approximate the perturbed distribution as

p̂_∞(β̂_r) ≈ (1/S) Σ_{s=1}^S p̂(β̂_r | β_r^s),   (9)

and the active learning score as U(p(β_r | y_{i′,l′})) ≈ U(p̂_∞(β̂_r)). To further specialize this strategy to our model we first rewrite the Gibbs sampler outlined in Section 4. We suppress mentions of X and Y in the subsequent presentation. Instead of first sampling (T | γ^{t−1}, Z) from Eq. (3), and then sampling (γ^t | T, Z) from Eq. (4), we combine them into one larger sampling step (γ^t | γ^{t−1}, Z). Starting from a fixed γ^{t−1} and Z we sample γ^t as

(γ^t | γ^{t−1}, Z) =d η_Σ + µ = Σ ( η_{σ^{−2}I} + Σ_{(i,l)∈L} Z_l x_i ( η_1 + (t_{i,l} | γ^{t−1}, Z) ) ),   (10)

where η_Σ is a zero-mean Gaussian with covariance Σ, η_{σ^{−2}I} a zero-mean Gaussian with covariance σ^{−2}I, and η_1 a standard normal random variable. If it were feasible, we could also absorb the intermediate sampling of Z into the notation and write down a single induced Markov chain (β_r^t | β_r^{t−1}), as referred to in Eqs. (8) and (9). As this is not possible, we will account for Z separately. We see that the effect of adding a new observation y_{i′,l′} is to perturb the Markov chain in Eq. (10) by adding an element to L. Supposing we added this new observation at time t − 1, let Σ_{(i′,l′)} be defined as Σ but with (i′, l′) added to L. Straightforward calculations using the Sherman-Morrison-Woodbury identity on Σ_{(i′,l′)} give that, conditioned on γ^{t−1}, Z, we can write the first step of the perturbed Gibbs sampler as a function of the unperturbed Gibbs sampler. If we let A_{i′,l′} = Σ Z_{l′} x_{i′} x_{i′}^⊤ Z_{l′}^⊤ / (1 + x_{i′}^⊤ Z_{l′}^⊤ Σ Z_{l′} x_{i′}) for compactness, this yields

(γ_{(i′,l′)}^t | γ^{t−1}, Z) =d (I − A_{i′,l′}) (γ^t | γ^{t−1}, Z) + Σ_{(i′,l′)} Z_{l′} x_{i′} ( η_1 + (t_{i′,l′} | γ^{t−1}, Z) ).   (11)

Figure 2: Examples of easy and ambiguous labeling tasks. We asked labelers to determine if the triangle is to the left or above the square.

To approximate the utility U(·) we now appeal to Eq.
(9) and estimate the difference in means using recent samples γ^s, Z^s, s = 1, ..., S, from the unperturbed sampler. In terms of Eqs. (10) and (11),

U(p(β_r | y_{i′,l′})) = ||E_{p(β_r)}(β_r) − E_{p(β_r | y_{i′,l′})}(β_r)||_2   (12)

≈ || E[ (1/(S−1)) Σ_{s=2}^S (Z_r^{s−1})^⊤ ( (γ | γ^{s−1}, Z^{s−1}) − (γ_{(i′,l′)} | γ^{s−1}, Z^{s−1}) ) ] ||_2.   (13)

By simple cancellations and expectations of truncated normal variables we can reduce the above expression to a sample average of elementary calculations. Note that the sample γ^s is a realization of (γ | γ^{s−1}, Z^{s−1}). We have used this to approximate E(γ | γ^{s−1}, Z^{s−1}) ≈ γ^s. Thus, the sum only runs over S − 1 terms. In principle the exact expectation could also be computed. The final utility calculation is straightforward but too long to expand. Finally, we use samples from the Gibbs sampler to approximate p(y_{i′,l′} | x_{i′}) and estimate I((i′, l′), p(β_r)) for querying labeler l′ on task i′.

6 Experimental Results

We evaluated our active learning method on an ambiguous localization task which asked labelers on Amazon Mechanical Turk to determine if a triangle was to the left or above a rectangle. Examples are shown in Figure 2. Tasks such as these are important for learning computer vision models of perception. Rotation, translation and scale, as well as aspect ratios, were pseudo-randomly sampled in a way that produced ambiguous tasks. We expected labelers to use centroids, extreme points and object sizes in different ways to solve the tasks, thus leading to structurally biased responses. Additionally, our model will also have to deal with other forms of noise and bias. The gold standard was to compare only the centroids of the two objects. For training we generated 1000 labeling tasks and solicited 3 labels for each task. Tasks were solved by 75 labelers with moderate disagreement. To emphasize our results, we retained only the subset of 523 tasks with disagreement.
We provided about 60 gold standard labels to BBMC and then performed inference and active learning on β_r so as to learn a predictive model emulating gold standard labels. We evaluated methods based on the log likelihood and error rate on a held-out test set of 1101 datapoints.¹ All results shown in Table 1 were averaged across 10 random restarts. We considered two scenarios. The first compares our model to other methods when no active learning is performed. This will demonstrate the advantages of the latent feature model presented in Sections 3 and 4. The second scenario compares performance of our active learning scheme to various other methods. This will highlight the viability of our overall scheme presented in Section 5 that ties data collection together with data curation and learning. First we show performance without active learning. Here only about 60 gold standard labels and all the labeler data is available for training. The results are shown in the top three rows of Table 1. Our method, "BBMC," outperforms the other two methods by a large margin.

¹The test set was similarly constructed by selecting from 2000 tasks those on which three labelers disagreed.

Table 1: The top three rows give results without and the bottom six rows results with active learning.

            Final Loglik       Final Error
GOLD        −3716 ± 1695       0.0547 ± 0.0102
CONS        −421.1 ± 2.6       0.0935 ± 0.0031
BBMC        −219.1 ± 3.1       0.0309 ± 0.0033
GOLD-ACT    −1957 ± 696        0.0290 ± 0.0037
CONS-ACT    −396.1 ± 3.6       0.0906 ± 0.0024
RAND-ACT    −186.0 ± 2.2       0.0292 ± 0.0029
DIS-ACT     −198.3 ± 5.8       0.0392 ± 0.0052
MCMC-ACT    −196.1 ± 6.7       0.0492 ± 0.0050
BBMC-ACT    −160.8 ± 3.9       0.0188 ± 0.0018

The BBMC scores were computed by running the Gibbs sampler of Section 4 with 2000 iterations of burn-in and then computing
The alternatives include “GOLD,” which is a logistic regression trained only on gold standard labels, and “CONS,” which evaluates logistic regression trained on the overall majority consensus. Training on the gold standard only often overfits, and training on the consensus systematically misleads. Next, we evaluate our active learning method. As before, we seed the model with about 60 gold standard labels. We repeatedly select a new task for which to receive a gold standard label from the researcher. That is, for this experiment we constrained active learning to use l′ = r. Of course, in our framework we could have just as easily queried labelers in the crowd. Following 2000 steps burnin we performed active learning every 200 iterations for a total of 100 selections. The reported scores were computed by estimating a predictive model from the last 200 iterations. The results are shown in the lower six rows of Table 1. Our model with active learning, “BBMC-ACT,” outperforms all alternatives. The first alternative we compared against, “MCMC-ACT,” does active learning with the MCMC-based scoring method outlined in Section 5. In line with our utility U(·) this method scores a task by running two Gibbs samplers within the overall Gibbs sampler and then approximates the expected mean difference of βr. Due to time constraints, we could only afford to run each subordinate chain for 10 steps. Even then, this method requires on the order of 10 × 83500 Gibbs sampling iterations for 100 active learning steps. It takes about 11 hours to run the entire chain, while BBMC only requires 2.5 hours. The MCMC method performs very poorly. This demonstrates our point: Since the MCMC method computes a similar quantity as our approximation, it should perform similarly given enough iterations in each subchain. However, 10 iterations is not nearly enough time for the scoring chains to mix and also quite a small number to compute empirical averages, leading to decreased performance. 
A more realistic alternative to our model is "DIS-ACT," which picks one of the tasks with most labeler disagreement to label next. Lastly, the baseline alternatives include "GOLD-ACT" and "CONS-ACT," which pick a random task to label and then learn logistic regressions on the gold standard or consensus labels, respectively. Those results can be directly compared against "RAND-ACT," which uses our model and inference procedure but similarly selects tasks at random. In line with our earlier evaluation, we still outperform these two methods when effectively no active learning is done.

7 Conclusions

We have presented Bayesian Bias Mitigation for Crowdsourcing (BBMC) as a framework to unify the three main steps in the crowdsourcing pipeline: data collection, data curation and learning. Our model captures labeler bias through a flexible latent feature model and conceives of the entire pipeline in terms of probabilistic inference. An important contribution is a general purpose approximation strategy for Markov chains that allows us to efficiently perform active learning, despite relying on Gibbs sampling for inference. Our experiments show that BBMC is fast and greatly outperforms a number of commonly used alternatives.

Acknowledgements

We would like to thank Purnamrita Sarkar for helpful discussions and Dave Golland for assistance in developing the Amazon Mechanical Turk HITs.

References

[1] K. Chaloner and I. Verdinelli. Bayesian Experimental Design: A Review. Statistical Science, 10(3):273–304, 1995.
[2] O. Dekel and O. Shamir. Good Learners for Evil Teachers. In L. Bottou and M. Littman, editors, Proceedings of the 26th International Conference on Machine Learning (ICML). Omnipress, 2009.
[3] O. Dekel and O. Shamir. Vox Populi: Collecting High-Quality Labels from a Crowd. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), Montreal, Quebec, Canada, 2009.
[4] P. Donmez, J. G. Carbonell, and J. Schneider.
Efficiently Learning the Accuracy of Labeling Sources for Selective Sampling. In Proceedings of the 15th ACM SIGKDD, KDD, Paris, France, 2009. [5] T. L. Griffiths and Z. Ghahramani. Infinite Latent Feature Models and the Indian Buffet Process. Technical report, Gatsby Computational Neuroscience Unit, 2005. [6] P. G. Ipeirotis, F. Provost, and J. Wang. Quality Management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP, pages 64–67, Washington DC, 2010. [7] D. V. Lindley. On a Measure of the Information Provided by an Experiment. The Annals of Mathematical Statistics, 27(4):986–1005, 1956. [8] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from Crowds. Journal of Machine Learning Research, 11:1297–1322, April 2010. [9] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get Another Label? Improving Data Quality and Data Mining using Multiple, Noisy Labelers. In Proceeding of the 14th ACM SIGKDD, KDD, Las Vegas, Nevada, 2008. [10] P. Smyth, U. M. Fayyad, M. C. Burl, P. Perona, and P. Baldi. Inferring Ground Truth from Subjective Labelling of Venus Images. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7 (NIPS). MIT Press, 1994. [11] R. Snow, B. O’Connor, D. Jurafsky, and A. Y. Ng. Cheap and Fast—But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks. In Proceedings of EMNLP. Association for Computational Linguistics, 2008. [12] A. Sorokin and D. Forsyth. Utility Data Annotation with Amazon Mechanical Turk. In CVPR Workshop on Internet Vision, Anchorage, Alaska, 2008. [13] X. Su and T. M. Khoshgoftaar. A Survey of Collaborative Filtering Techniques. Advances in Artificial Intelligence, 2009:4:2–4:2, January 2009. [14] P. Wais, S. Lingamnei, D. Cook, J. Fennell, B. Goldenberg, D. Lubarov, D. Marin, and H. Simons. Towards Building a High-Quality Workforce with Mechanical Turk. 
In NIPS Workshop on Computational Social Science and the Wisdom of Crowds, Whistler, BC, Canada, 2010. [15] P. Welinder, S. Branson, S. Belongie, and P. Perona. The Multidimensional Wisdom of Crowds. In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information Processing Systems 23 (NIPS). MIT Press, 2010. [16] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22 (NIPS). MIT Press, 2009. [17] Y. Yan, R. Rosales, G. Fung, and J. G. Dy. Active Learning from Crowds. In L. Getoor and T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, Washington, 2011. [18] Y. Yan, R. Rosales, G. Fung, M. Schmidt, G. Hermosillo, L. Bogoni, L. Moy, and J. G. Dy. Modeling Annotator Expertise: Learning When Everybody Knows a Bit of Something. In Proceedings of AISTATS, volume 9, Chia Laguna, Sardinia, Italy, 2010. [19] K. Yu, A. Schwaighofer, V. Tresp, X. Xu, and H. Kriegel. Probabilistic Memory-based Collaborative Filtering. IEEE Transactions On Knowledge and Data Engineering, 16(1):56–69, January 2004. 9
2011
12
4,169
Continuous-Time Regression Models for Longitudinal Networks

Duy Q. Vu, Department of Statistics, Pennsylvania State University, University Park, PA 16802, dqv100@stat.psu.edu
Arthur U. Asuncion∗, Department of Computer Science, University of California, Irvine, Irvine, CA 92697, asuncion@ics.uci.edu
David R. Hunter, Department of Statistics, Pennsylvania State University, University Park, PA 16802, dhunter@stat.psu.edu
Padhraic Smyth, Department of Computer Science, University of California, Irvine, Irvine, CA 92697, smyth@ics.uci.edu

Abstract

The development of statistical models for continuous-time longitudinal network data is of increasing interest in machine learning and social science. Leveraging ideas from survival and event history analysis, we introduce a continuous-time regression modeling framework for network event data that can incorporate both time-dependent network statistics and time-varying regression coefficients. We also develop an efficient inference scheme that allows our approach to scale to large networks. On synthetic and real-world data, empirical results demonstrate that the proposed inference approach can accurately estimate the coefficients of the regression model, which is useful for interpreting the evolution of the network; furthermore, the learned model has systematically better predictive performance compared to standard baseline methods.

1 Introduction

The analysis of the structure and evolution of network data is an increasingly important task in a variety of disciplines, including biology and engineering. The emergence and growth of large-scale online social networks also provides motivation for the development of longitudinal models for networks over time. While in many cases the data for an evolving network are recorded on a continuous time scale, a common approach is to analyze “snapshot” data (also known as collapsed panel data), where multiple cross-sectional snapshots of the network are recorded at discrete time points.
Various statistical frameworks have been previously proposed for discrete snapshot data, including dynamic versions of exponential random graph models [1, 2, 3] as well as dynamic block models and matrix factorization methods [4, 5]. In contrast, there is relatively little work to date on continuous-time models for large-scale longitudinal networks. In this paper, we propose a general regression-based modeling framework for continuous-time network event data. Our methods are inspired by survival and event history analysis [6, 7]; specifically, we employ multivariate counting processes to model the edge dynamics of the network. Building on recent work in this context [8, 9], we use both multiplicative and additive intensity functions that allow for the incorporation of arbitrary time-dependent network statistics; furthermore, we consider time-varying regression coefficients for the additive approach. The additive form in particular enables us to develop an efficient online inference scheme for estimating the time-varying coefficients of the model, allowing the approach to scale to large networks. On synthetic and real-world data, we show that the proposed scheme accurately estimates these coefficients and that the learned model is useful for both interpreting the evolution of the network and predicting future network events. The specific contributions of this paper are: (1) We formulate a continuous-time regression model for longitudinal network data with time-dependent statistics (and time-varying coefficients for the additive form); (2) we develop an accurate and efficient inference scheme for estimating the regression coefficients; and (3) we perform an experimental analysis on real-world longitudinal networks and demonstrate that the proposed framework is useful in terms of prediction and interpretability.

∗Current affiliation: Google Inc.
The next section introduces the general regression framework, and the associated inference scheme is described in detail in Section 3. Section 4 describes the experimental results on synthetic and real-world networks. Finally, we discuss related work and conclude with future research directions.

2 Regression models for continuous-time network data

Below we introduce multiplicative and additive regression models for the edge formation process in a longitudinal network. We also describe non-recurrent event models and give examples of time-dependent statistics in this context.

2.1 General framework

Assume in our network that nodes arrive according to some stochastic process and directed edges among these nodes are created over time. Given the ordered pair (i, j) of nodes in the network at time t, let N_ij(t) be a counting process denoting the number of edges from i to j up to time t. In this paper, each N_ij(t) will equal zero or one, though this can be generalized. Combining the individual counting processes of all potential edges gives a multivariate counting process N(t) = (N_ij(t) : i, j ∈ {1, . . . , n}, i ≠ j); we make no assumption about the independence of individual edge counting processes. (See [7] for an overview of counting processes.) We do not consider an edge dissolution process in this paper, although in theory it is possible to do so by placing a second counting process on each edge for dissolution events. (See [10, 3] for different examples of formation–dissolution process models.) As proposed in [9], we model the multivariate counting process via the Doob–Meyer decomposition [7],

    N(t) = ∫_0^t λ(s) ds + M(t),    (1)

where essentially λ(t) and M(t) may be viewed as the (deterministic) signal and (martingale) noise, respectively.
To model the so-called intensity process λ(t), we denote the entire past of the network, up to but not including time t, by H_t− and consider for each potential directed edge (i, j) two possible intensity forms, the multiplicative Cox and the additive Aalen functions [7], respectively:

    λ_ij(t | H_t−) = Y_ij(t) α_0(t) exp(β^⊤ s(i, j, t));    (2)
    λ_ij(t | H_t−) = Y_ij(t) (β_0(t) + β(t)^⊤ s(i, j, t)),    (3)

where the “at risk” indicator function Y_ij(t) equals one if and only if (i, j) could form an edge at time t, a concept whose interpretation is determined by the context (e.g., see Section 2.2). In equations (2) and (3), s(i, j, t) is a vector of p statistics for the directed edge (i, j) constructed based on H_t−; examples of these statistics are given in Section 2.2. In each of the two models, the intensity process depends on a linear combination of the coefficients β, which can be time-varying in the additive Aalen formulation. When all elements of s_k(i, j, t) equal zero, we obtain the baseline hazards α_0(t) and β_0(t). The two intensity forms above, the Cox and Aalen, each have their respective strengths (e.g., see [7, chapter 4]). In particular, the coefficients of the Aalen model are quite easy to estimate via linear regression, unlike those of the Cox model. We leverage this computational advantage to develop an efficient inference algorithm for the Aalen model later in this paper. On the other hand, the Cox model forces the hazard function to be non-negative, while the Aalen model does not; however, in our experiments on both simulated and real-world data we did not encounter any issues with negative hazard functions when using the Aalen model.

2.2 Non-recurrent event models for network formation processes

If t_i^arr and t_j^arr are the arrival times of nodes i and j, then the risk indicator of equations (2) and (3) is

    Y_ij(t) = I( max(t_i^arr, t_j^arr) < t ≤ t_{e_ij} ).

The time t_{e_ij} of directed edge (i, j) is taken to be +∞ if the edge is never formed during the observation time.
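Concretely, for a single ordered pair (i, j) at time t, the two intensity forms (2) and (3) reduce to one-liners. The sketch below is a minimal illustration under our own naming (the function names and argument layout are not from the paper); it takes the at-risk indicator, the baseline value, the coefficient vector, and the statistic vector s(i, j, t):

```python
import numpy as np

def cox_intensity(y, alpha0, beta, s):
    """Multiplicative Cox intensity, eq. (2): Y_ij(t) * alpha_0(t) * exp(beta' s)."""
    return y * alpha0 * np.exp(beta @ s)

def aalen_intensity(y, beta0, beta, s):
    """Additive Aalen intensity, eq. (3): Y_ij(t) * (beta_0(t) + beta(t)' s)."""
    return y * (beta0 + beta @ s)
```

Note that when y = 0 (the pair is not at risk) both forms return zero, and when β = 0 each reduces to its baseline hazard, matching the discussion above.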
The reason for the upper bound t_{e_ij} is that the counting process is non-recurrent; i.e., once an edge forms, it can never form again. The network statistics s(i, j, t) of equations (2) and (3), corresponding to the ordered pair (i, j), can be time-invariant (such as gender match) or time-dependent (such as the number of two-paths from i to j just before time t). Since it has been found empirically that most new edges in social networks are created between nodes separated by two hops [11], we limit our statistics to the following:

1. Out-degree of sender i: s_1(i, j, t) = Σ_{h∈V, h≠i} N_ih(t−)
2. In-degree of sender i: s_2(i, j, t) = Σ_{h∈V, h≠i} N_hi(t−)
3. Out-degree of receiver j: s_3(i, j, t) = Σ_{h∈V, h≠j} N_jh(t−)
4. In-degree of receiver j: s_4(i, j, t) = Σ_{h∈V, h≠j} N_hj(t−)
5. Reciprocity: s_5(i, j, t) = N_ji(t−)
6. Transitivity: s_6(i, j, t) = Σ_{h∈V, h≠i,j} N_ih(t−) N_hj(t−)
7. Shared contactees: s_7(i, j, t) = Σ_{h∈V, h≠i,j} N_ih(t−) N_jh(t−)
8. Triangle closure: s_8(i, j, t) = Σ_{h∈V, h≠i,j} N_hi(t−) N_jh(t−)
9. Shared contacters: s_9(i, j, t) = Σ_{h∈V, h≠i,j} N_hi(t−) N_hj(t−)

Here N_ji(t−) denotes the value of the counting process (j, i) right before time t. While this paper focuses on the non-recurrent setting for simplicity, one can also develop recurrent models using this framework by capturing an alternative set of statistics specialized for the recurrent case [8, 12, 9]. Such models are useful for data where interaction edges occur multiple times (e.g., email data).

3 Inference techniques

In this section, we describe algorithms for estimating the coefficients of the multiplicative Cox and additive Aalen models. We also discuss an efficient online inference technique for the Aalen model.

3.1 Estimation for the Cox model

Recent work has posited Cox models similar to (2) with the goal of estimating general network effects [8, 12] or citation network effects [9].
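The nine statistics of Section 2.2 can be read directly off the counting processes. The following sketch assumes, for illustration only, that the processes are stored as a dense 0/1 matrix N with N[a, b] = N_ab(t−); the function name and layout are ours, and a scalable implementation would use the caching scheme described below rather than recomputing from scratch:

```python
import numpy as np

def edge_statistics(N, i, j):
    """The nine statistics s(i, j, t) of Section 2.2, computed from the
    dense 0/1 counting matrix N, where N[a, b] = N_ab(t-)."""
    n = N.shape[0]
    h = np.ones(n, dtype=bool)
    h[[i, j]] = False                 # mask for the two-path sums (h != i, j)
    s = np.empty(9)
    s[0] = N[i].sum() - N[i, i]       # out-degree of sender i
    s[1] = N[:, i].sum() - N[i, i]    # in-degree of sender i
    s[2] = N[j].sum() - N[j, j]       # out-degree of receiver j
    s[3] = N[:, j].sum() - N[j, j]    # in-degree of receiver j
    s[4] = N[j, i]                    # reciprocity
    s[5] = N[i, h] @ N[h, j]          # transitivity: i -> h -> j
    s[6] = N[i, h] @ N[j, h]          # shared contactees: i -> h <- j
    s[7] = N[h, i] @ N[j, h]          # triangle closure: h -> i, j -> h
    s[8] = N[h, i] @ N[h, j]          # shared contacters: h -> i, h -> j
    return s
```

For example, with three nodes and edges 0→2, 2→1, 1→0, the pair (0, 1) has every degree statistic equal to one, reciprocity one (because of 1→0), and a single transitive two-path 0→2→1.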
Typically, α_0(t) is considered a nuisance parameter, and estimation for β proceeds by maximization of the so-called partial likelihood of Cox [13]:

    L(β) = Π_{e=1}^m [ exp(β^⊤ s(i_e, j_e, t_e)) / Σ_{i=1}^n Σ_{j≠i} Y_ij(t_e) exp(β^⊤ s(i, j, t_e)) ],    (4)

where m is the number of edge formation events, and t_e, i_e, and j_e are the time, sender, and receiver of the e-th event. In this paper, maximization is performed via the Newton–Raphson algorithm. The covariance matrix of β̂ is estimated as the inverse of the negative Hessian matrix of the last iteration. We use the caching method of [9] to compute the likelihood, the score vector, and the Hessian matrix more efficiently. We will illustrate this method through the computation of the likelihood, where the most expensive computation is for the denominator

    κ(t_e) = Σ_{i=1}^n Σ_{j≠i} Y_ij(t_e) exp(β^⊤ s(i, j, t_e)).    (5)

For models such as the one in Section 2.2, a naïve update for κ(t_e) needs O(pn^2) operations, where n is the current number of nodes. A naïve calculation of log L(β) needs O(mpn^2) operations (where m is the number of edge events), which is costly since m and n may be large. Calculations of the score vector and Hessian matrix are similar, though they involve higher exponents of p.

Alternatively, as in [9], we may simply write κ(t_e) = κ(t_{e−1}) + Δκ(t_e), where Δκ(t_e) entails all of the possible changes that occur during the time interval [t_{e−1}, t_e). Since we assume in this paper that edges do not dissolve, it is necessary to keep track only of the group of edges whose covariates change during this interval, which we call U_{e−1}, and those that first become at risk during this interval, which we call C_{e−1}. These groups of edges may be cached in memory during an initialization step; then, subsequent calculations of Δκ(t_e) are simple functions of the values of s(i, j, t_{e−1}) and s(i, j, t_e) for (i, j) in these two groups (for C_{e−1}, only the time-t_e statistic is relevant).
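As a reference point for the caching scheme, the naïve O(mpn^2) evaluation of the partial log-likelihood, i.e., the logarithm of eq. (4) with denominator (5), can be sketched as follows (the data layout and names are our own assumptions; dense arrays are used purely for illustration):

```python
import numpy as np

def cox_partial_loglik(beta, events, S, Y):
    """Naive evaluation of log L(beta) from eq. (4).
    events: list of (i_e, j_e), one pair per event time t_e, in order;
    S[e]:   (n, n, p) array of statistics s(i, j, t_e);
    Y[e]:   (n, n) at-risk indicator at t_e (diagonal zero)."""
    ll = 0.0
    for e, (i, j) in enumerate(events):
        eta = S[e] @ beta                    # (n, n) linear predictors
        kappa = (Y[e] * np.exp(eta)).sum()   # denominator kappa(t_e), eq. (5)
        ll += eta[i, j] - np.log(kappa)
    return ll
```

With β = 0 every at-risk pair gets equal weight, so each event contributes −log(#at-risk pairs); this is a convenient sanity check when testing an incremental κ implementation against the naïve one.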
The number of edges cached at each time step tends to be small, generally O(n), because our network statistics s are limited to those based on node degrees and two-paths. This leads to substantial computational savings; since we must still initialize κ(t_1), the total computational complexity of each Newton–Raphson iteration is O(p^2 n^2 + m(p^2 n + p^3)).

3.2 Estimation for the Aalen model

Inference in model (3) proceeds not for the β_k parameters directly but rather for their time-integrals

    B_k(t) = ∫_0^t β_k(s) ds.    (6)

The reason for this is that B(t) = [B_1(t), . . . , B_p(t)] may be estimated straightforwardly using a procedure akin to simple least squares [7]: First, let us impose some ordering on the n(n−1) possible ordered pairs (i, j) of nodes. Take W(t) to be the n(n−1) × p matrix whose (i, j)-th row equals Y_ij(t) s(i, j, t)^⊤. Then

    B̂(t) = ∫_0^t J(s) W^−(s) dN(s) = Σ_{t_e ≤ t} J(t_e) W^−(t_e) ΔN(t_e)    (7)

is the estimator of B(t), where the multivariate counting process N(t_e) uses the same ordering of its n(n−1) entries as the W(t) matrix, W^−(t) = (W(t)^⊤ W(t))^{−1} W(t)^⊤, and J(t) is the indicator that W(t) has full column rank, where we take J(t) W^−(t) = 0 whenever W(t) does not have full column rank. As with typical least squares, a covariance matrix for these B̂(t) may also be estimated [7]; we give a formula for this matrix in equation (11). If estimates of β_k(t) are desired for the sake of interpretability, a kernel smoothing method may be used:

    β̂_k(t) = (1/b) Σ_{t_e} K((t − t_e)/b) ΔB̂_k(t_e),    (8)

where b is the bandwidth parameter, ΔB̂_k(t_e) = B̂_k(t_e) − B̂_k(t_{e−1}), and K is a bounded kernel function with compact support [−1, 1], such as the Epanechnikov kernel.

3.3 Online inference for the Aalen model

Similar to the caching method for the Cox model in Section 3.1, it is possible to streamline the computations for estimating the integrated Aalen model coefficients B(t).
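The least-squares increments behind estimator (7) and the smoother (8) can be sketched as follows (a small illustration under our own naming). Since ΔN(t_e) is a unit vector picking out the row r of the edge formed at t_e, the increment W^−(t_e) ΔN(t_e) reduces to solving (W^⊤W) x = W_r, where W_r is that row of the at-risk covariate matrix:

```python
import numpy as np

def aalen_increments(W_list, event_rows):
    """Increments dB(t_e) of the estimator in eq. (7): for each event,
    solve (W'W) x = W_r, where r indexes the edge formed at t_e.
    W_list[e] is the (n_pairs, p) at-risk covariate matrix at t_e."""
    return [np.linalg.solve(W.T @ W, W[r]) for W, r in zip(W_list, event_rows)]

def epanechnikov_smooth(t, times, dB, b):
    """Kernel-smoothed beta-hat(t) from eq. (8) with the Epanechnikov
    kernel K(u) = 0.75 (1 - u^2) on [-1, 1] and bandwidth b."""
    u = (t - np.asarray(times)) / b
    K = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return (K[:, None] * np.asarray(dB)).sum(axis=0) / b
```

Summing the increments up to time t gives B̂(t); the smoother then converts the increments into pointwise coefficient estimates β̂(t), as used for Figures 1–3.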
First, we rewrite (7) as

    B̂(t) = Σ_{t_e ≤ t} J(t_e) (W(t_e)^⊤ W(t_e))^{−1} W(t_e)^⊤ ΔN(t_e) = Σ_{t_e ≤ t} A^{−1}(t_e) W(t_e)^⊤ ΔN(t_e),    (9)

where A(t_e) = W(t_e)^⊤ W(t_e) and J(t_e) is omitted because for large network data sets and for reasonable choices of starting observation times, the covariate matrix is always of full rank. The computation of W(t_e)^⊤ ΔN(t_e) is simple because ΔN(t_e) consists of all zeros except for a single entry equal to one. The most expensive computation is to update the (p + 1) × (p + 1) matrix A(t_e) at every event time t_e; inverting A(t_e) is not expensive since p is relatively small. Using U_{e−1} and C_{e−1} as in Section 3.1, the component (k, l) of the matrix A(t_e) corresponding to covariates k and l can be written as A_kl(t_e) = A_kl(t_{e−1}) + ΔA_kl(t_{e−1}), where

    ΔA_kl(t_{e−1}) = − Σ_{(i,j)∈U_{e−1}} W_ijk(t_{e−1}) W_ijl(t_{e−1}) + Σ_{(i,j)∈U_{e−1}∪C_{e−1}} W_ijk(t_e) W_ijl(t_e).    (10)

For models such as the one presented in Section 2.2, if n is the current number of nodes, the cost of naïvely calculating A_kl(t_e) by iterating through all “at-risk” edges is nearly n^2. As in Section 3.1, the cost will be O(n) if we instead use caching together with equation (10). In other cases, there may be restrictions on the set of edges at risk at a particular time. Here the computational burden for the naïve calculation can be substantially smaller than O(n^2); yet it is generally the case that using (10) will still provide a substantial reduction in computing effort. Our online inference algorithm during the time interval [t_{e−1}, t_e) may be summarized as follows:

1. Update A(t_{e−1}) using equation (10).
2. Compute B̂(t_{e−1}) = B̂(t_{e−2}) + A^{−1}(t_{e−1}) W(t_{e−1})^⊤ ΔN(t_{e−1}).
3. Compute and cache the network statistics changed by event e−1, then initialize U_{e−1} with a list of those at-risk edges whose network statistics are changed by this event.
4. Compute and cache all values of network statistics changed during the time interval [t_{e−1}, t_e). Define C_{e−1} as the set of edges that switch to at-risk during this interval.
5.
Before considering the event e:
(a) Compute look-ahead summations at time t_{e−1} indexed by U_{e−1}.
(b) Update the covariate matrix W(t_{e−1}) based on the cache.
(c) Compute forward summations at time t_e indexed by U_{e−1} and C_{e−1}.

For the first event, A(t_1) must be initialized by naïve summation over all current at-risk edges, which requires O(p^2 n^2) calculations. Assuming that the number n of nodes stays roughly the same over each of the m edge events, the overall computational complexity of this online inference algorithm is thus O(p^2 n^2 + m(p^2 n + p^3)). If a covariance matrix estimate for B̂(t) is desired, it can also be derived online using the ideas above, since we may write it as

    Σ̂(t) = Σ_{t_e ≤ t} W^−(t_e) diag{ΔN(t_e)} W^−(t_e)^⊤ = Σ_{t_e ≤ t} A^{−1}(t_e) (W_{ij_e}(t_e) ⊗ W_{ij_e}(t_e)) A^{−1}(t_e),    (11)

where W_{ij_e}(t_e) denotes the vector W(t_e)^⊤ ΔN(t_e) and ⊗ is the outer product.

4 Experimental analysis

In this section, we empirically analyze the ability of our inference methods to estimate the regression coefficients as well as the predictive power of the learned models. Before discussing the experimental results, we briefly describe the synthetic and real-world data sets that we use for evaluation. We simulate two data sets, SIM-1 and SIM-2, from ground-truth regression coefficients. In particular, we simulate a network formation process starting from time unit 0 until time 1200, where nodes arrive in the network at a constant rate λ_0 = 10 (i.e., on average, 10 nodes join the network at each time unit); the resulting simulated networks have 11,997 nodes. The edge formation process is simulated via Ogata's modified thinning algorithm [14] with an additive conditional intensity function.
From time 0 to 1000, the baseline coefficient is set to β_0 = 10^−6; the coefficients for sender out-degree and receiver in-degree are set to β_1 = β_4 = 10^−7; the coefficients for reciprocity, transitivity, and shared contacters are set to β_5 = β_6 = β_9 = 10^−5; and the coefficients for sender in-degree, receiver out-degree, shared contactees, and triangle closure are set to 0. For SIM-1, these coefficients are kept constant and 118,672 edges are created. For SIM-2, between times 1000 and 1150, we increase the coefficients for transitivity and shared contacters to β_6 = β_9 = 4 × 10^−5, and after 1150, the coefficients return to their original values; in this case, 127,590 edges are created.

We also evaluate our approach on two real-world data sets, IRVINE and METAFILTER. IRVINE is a longitudinal data set derived from an online social network of students at UC Irvine [15]. This dataset has 1,899 users and 20,296 directed contact edges between users, with timestamps for each node arrival and edge creation event. This longitudinal network spans April to October of 2004. The METAFILTER data set is from a community weblog where users can share links and discuss Web content (the METAFILTER data are available at http://mssv.net/wiki/index.php/Infodump). This dataset has 51,362 users and 76,791 directed contact edges between users. The continuous-time observation spans 8/31/2007 to 2/5/2011. Note that both data sets are non-recurrent in that the creation of an edge between two nodes occurs at most once.

Figure 1: (a,b) Estimated time-varying coefficients on SIM-1 (panels: (a) constant transitivity, (b) constant shared contacters); (c,d) estimated time-varying coefficients on SIM-2 (panels: (c) piecewise transitivity, (d) piecewise shared contacters). Ground-truth coefficients are also shown in red dashed lines.
Figure 2: Estimated time-varying coefficients on IRVINE data (panels: (a) sender out-degree, (b) reciprocity, (c) transitivity, (d) shared contacters). These plots suggest that there are two distinct phases of network evolution, consistent with an independent analysis of these data [15].

Figure 3: Estimated time-varying coefficients on METAFILTER (panels: (a) sender out-degree, (b) reciprocity, (c) transitivity, (d) shared contacters). Here, the network effects continuously change during the observation time.

4.1 Recovering the time-varying regression coefficients

This section focuses on the ability of our additive Aalen modeling approach to estimate the time-varying coefficients, given an observed longitudinal network. The first set of experiments attempts to recover the ground-truth coefficients on SIM-1 and SIM-2. We run the inference algorithm described in Section 3.3 and use an Epanechnikov smoothing kernel (with a bandwidth of 10 time units) to obtain smoothed coefficients. On SIM-1, Figures 1(a,b) show the estimated coefficients associated with the transitivity and shared contacters statistics, as well as the ground-truth coefficients. Likewise, Figures 1(c,d) show the same estimated and ground-truth coefficients for SIM-2. These results demonstrate that our inference algorithm can accurately recover the ground-truth coefficients in cases where the coefficients are fixed (SIM-1) and modulated (SIM-2). We also tried other settings for the ground-truth coefficients (e.g., multiple sinusoidal-like bumps) and found that our approach can accurately recover the coefficients in those cases as well.
On the IRVINE and METAFILTER data, we also learn time-varying coefficients which are useful for interpreting network evolution. Figure 2 shows several of the estimated coefficients for the IRVINE data, using an Epanechnikov kernel (with a bandwidth of 30 days). These coefficients suggest the existence of two distinct phases in the evolution of the network. In the first phase of network formation, the network grows at an accelerated rate. Positive coefficients for sender out-degree, reciprocity, and transitivity in these plots imply that users with a high number of friends tend to make more friends, tend to reciprocate their relations, and tend to make friends with their friends' friends, respectively. However, these coefficients decrease towards zero (the blue line) and enter a second phase where the network is structurally stable. Both of these phases have also been observed in an independent study of the data [15]. Figure 3 shows the estimated coefficients for METAFILTER, using an Epanechnikov kernel (with a bandwidth of 30). Interestingly, the coefficients suggest that there is a marked change in the edge formation process around 7/10/10. Unlike the IRVINE coefficients, the estimated METAFILTER coefficients continue to vary over time.

Table 1: Lengths of building, training, and test periods. The numbers of events are in parentheses.

              Building                      Training                     Test
IRVINE        4/15/04 – 5/11/04 (7073)      5/12/04 – 5/31/04 (7646)     6/1/04 – 10/19/04 (5507)
METAFILTER    6/15/04 – 12/21/09 (60376)    12/22/09 – 7/9/10 (8763)     7/10/10 – 2/5/11 (7620)

4.2 Predicting future links

We perform rolling prediction experiments over the real-world data sets to evaluate the predictive power of the learned regression models. Following the evaluation methodology of [9], we split each longitudinal data set into three periods: a statistics-building period, a training period, and a test period (Table 1).
The statistics-building period is used solely to build up the network statistics, while the training period is used to learn the coefficients and the test period is used to make predictions. Throughout the training and test periods, the time-dependent statistics are continuously updated. Furthermore, for the additive Aalen model, we use the online inference technique from Section 3.3. When we predict an event in the test period, all the previous events from the test period are used as training data as well. Meanwhile, for the multiplicative Cox model, we adaptively learn the model in batch-online fashion; during the test period, every 10 days, we retrain the model (using the Newton–Raphson technique described in Section 3.1) with additional training examples coming from the test set. Our Newton–Raphson implementation uses a step-halving procedure, halving the length of each step if necessary until log L(β) increases. The iterations continue until every element of ∇ log L(β) is smaller than 10^−3 in absolute value, or until the relative increase in log L(β) is less than 10^−100, or until 100 Newton–Raphson iterations are reached, whichever occurs first. The baseline that we consider is logistic regression (LR) with the same time-dependent statistics used in the Aalen and Cox models. Note that logistic regression is a competitive baseline that has been used in previous link prediction studies (e.g., [11]). We learn the LR model in the same adaptive batch-online fashion as the Cox model. We also use case-control sampling to address the imbalance between positive and negative cases (since at each “positive” edge event there are on the order of n^2 “negative” training cases). At each event, we sample K negative training examples for that same time point. We use two settings for K in the experiments: K = 10 and K = 50. To make predictions using the additive Aalen model, one would need to extrapolate the time-varying coefficients to future time points.
For simplicity, we use a uniform smoothing kernel (weighting all observations equally), with a window size of 1 or 10 days. A more advanced extrapolation technique could yield even better predictive performance for the Aalen model. Each model can provide us with the probability of an edge formation event between two nodes at a given point in time, and so we can calculate an accumulative recall metric across all test events:

    Recall = ( Σ_{(i→j, t)∈TestSet} I[ j ∈ Top(i, t, K) ] ) / |TestSet|,    (12)

where Top(i, t, K) is the top-K list of i's potential “friends” ranked based on intensity λ_ij(t). We evaluate the predictive performance of the Aalen model (with smoothing windows of 1 and 10), the Cox model, and the LR baseline (with case-control ratios 1:10 and 1:50). Figure 4(a) shows the recall results on IRVINE. In this case, both the Aalen and Cox models outperform the LR baseline; furthermore, it is interesting to note that the Aalen model with time-varying coefficients does not outperform the Cox model. One explanation for this result is that the IRVINE coefficients are fairly stable (apart from the initial phase, as shown in Figure 2), and thus time-varying coefficients do not provide additional predictive power in this case. Also note that LR with ratio 1:10 outperforms 1:50. We also tried an LR ratio of 1:3 (not shown) but found that it performed nearly identically to LR 1:10; thus, both the Aalen and Cox models outperform the baseline substantially on these data. Figure 4(b) shows the recall results on METAFILTER. As in the previous case, both the Aalen and Cox models significantly outperform the LR baseline. However, the Aalen model with time-varying coefficients also substantially outperforms the Cox model with time-fixed coefficients. In this case, estimating time-varying coefficients improves predictive performance, which makes sense because we have seen in Figure 3 that METAFILTER's coefficients tend to vary more over time.
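The accumulative recall in (12) is straightforward to compute once each model supplies per-sender intensity scores. A minimal sketch (names are ours; a real evaluation would restrict the ranking to at-risk pairs, e.g., excluding sender i itself and already-formed edges):

```python
import numpy as np

def recall_at_k(test_events, intensity, K):
    """Accumulative recall, eq. (12): the fraction of test events (i -> j, t)
    whose receiver j appears in sender i's top-K list ranked by lambda_ij(t).
    intensity(i, t) returns a length-n vector of intensities for sender i."""
    hits = 0
    for i, j, t in test_events:
        lam = intensity(i, t)
        top = np.argsort(-lam)[:K]   # indices of the K largest intensities
        hits += int(j in top)
    return hits / len(test_events)
```

Plugging in the Aalen, Cox, or LR scores for intensity(i, t) reproduces the kind of comparison shown in Figure 4, with the cut-point K on the horizontal axis.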
We also calculated precision results (not shown) on these data sets, which confirm these conclusions.

Figure 4: Predictive performance of the additive Aalen model, multiplicative Cox model, and logistic regression baseline on the IRVINE and METAFILTER data sets, using recall as the metric (panels: (a) IRVINE, (b) METAFILTER; curves: Adaptive LR (1:10), Adaptive LR (1:50), Adaptive Cox, Aalen (Uniform-1), Aalen (Uniform-10), plotted against the cut-point K).

5 Related Work and Conclusions

Evolving networks have been descriptively analyzed in exploratory fashion in a variety of domains, including email data [16], citation graphs [17], and online social networks [18]. On the modeling side, temporal versions of exponential random graph models [1, 2, 3] and latent space models [19, 4, 5, 20] have been developed. Such methods operate on cross-sectional snapshot data, while our framework models continuous-time network event data. It is worth noting that continuous-time Markov process models for longitudinal networks have been proposed previously [21]; however, these approaches have only been applied to very small networks, while our regression-based approach can scale to large networks. Recently, there has also been work on inferring unobserved time-varying networks from evolving nodal attributes which are observed [22, 23, 24]. In this paper, the main focus is the statistical modeling of observed continuous-time networks. More recently, survival and event history models based on the Cox model have been applied to network data [8, 12, 9]. A significant difference between our previous work [9] and this paper is that scalability is achieved in our earlier work by restricting the approach to “egocentric” modeling, in which counting processes are placed only on nodes.
In contrast, here we formulate scalable inference techniques for the general “relational” setting where counting processes are placed on edges. Prior work also assumed static regression coefficients, while here we develop a framework for time-varying coefficients for the additive Aalen model. Regression models with varying coefficients have been previously proposed in other contexts [25], including a time-varying version of the Cox model [26], although to the best of our knowledge such models have not been developed or fitted on longitudinal networks. A variety of link prediction techniques have also been investigated by the machine learning community over the past decade (e.g., [27, 28, 29]). Many of these methods use standard classifiers (such as logistic regression) and take advantage of key features (such as similarity measures among nodes) to make accurate predictions. While our focus is not on feature engineering, we note that arbitrary network and nodal features such as those developed for link prediction can be incorporated into our continuous-time regression framework. Other link prediction techniques based on matrix factorization [30] and random walks [11] have also been studied. While these link prediction techniques mainly focus on making accurate predictions, our proposed approach here not only gives accurate predictions but also provides a statistical model (with time-varying coefficient estimates) which can be useful in evaluating scientific hypotheses. In summary, we have developed multiplicative and additive regression models for large-scale continuous-time longitudinal networks. On simulated and real-world data, we have shown that the proposed inference approach can accurately estimate regression coefficients and that the learned model can be used for interpreting network evolution and predicting future network events. 
An interesting direction for future work would be to incorporate time-dependent nodal attributes (such as textual content) into this framework and to investigate regularization methods for these models. Acknowledgments This work is supported by ONR under the MURI program, Award Number N00014-08-1-1015. References [1] S. Hanneke and E. P. Xing. Discrete temporal models of social networks. In Proc. 2006 Conf. on Statistical Network Analysis, pages 115–125. Springer-Verlag, 2006. [2] D. Wyatt, T. Choudhury, and J. Bilmes. Discovering long range properties of social networks with multivalued time-inhomogeneous models. In Proc. 24th AAAI Conf. on AI, 2010. [3] P. N. Krivitsky and M. S. Handcock. A separable model for dynamic networks. Under review, November 2010. http://arxiv.org/abs/1011.1937. [4] W. Fu, L. Song, and E. P. Xing. Dynamic mixed membership blockmodel for evolving networks. In Proc. 26th Intl. Conf. on Machine Learning, pages 329–336. ACM, 2009. [5] J. Foulds, C. DuBois, A. Asuncion, C. Butts, and P. Smyth. A dynamic relational infinite feature model for longitudinal social networks. In AI and Statistics, volume 15 of JMLR W&C Proceedings, pages 287–295, 2011. [6] P. K. Andersen, O. Borgan, R. D. Gill, and N. Keiding. Statistical Models Based on Counting Processes. Springer, 1993. [7] O. O. Aalen, O. Borgan, and H. K. Gjessing. Survival and Event History Analysis: A Process Point of View. Springer, 2008. [8] C. T. Butts. A relational event framework for social action. Soc. Meth., 38(1):155–200, 2008. [9] D. Q. Vu, A. U. Asuncion, D. R. Hunter, and P. Smyth. Dynamic egocentric models for citation networks. In Proc. 28th Intl. Conf. on Machine Learning, pages 857–864, 2011. [10] P. Holland and S. Leinhardt. A dynamic model for social networks. J. Math. Soc., 5:5–20, 1977. [11] L. Backstrom and J. Leskovec. Supervised random walks: Predicting and recommending links in social networks.
In Proceedings of the 4th ACM International Conference on Web Search and Data Mining, pages 635–644. ACM, 2011. [12] P. O. Perry and P. J. Wolfe. Point process modeling for directed interaction networks. Under review, October 2011. http://arxiv.org/abs/1011.1703. [13] D. R. Cox. Regression models and life-tables. J. Roy. Stat. Soc., Series B, 34:187–220, 1972. [14] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes, Volume 1. Probability and its Applications (New York). Springer, New York, 2nd edition, 2008. [15] P. Panzarasa, T. Opsahl, and K. M. Carley. Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community. J. Amer. Soc. for Inf. Sci. and Tech., 60(5):911–932, 2009. [16] G. Kossinets and D. J. Watts. Empirical analysis of an evolving social network. Science, 311(5757):88– 90, 2006. [17] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proc. 11th ACM SIGKDD Intl. Conf. on Knowledge Discovery in Data Mining, pages 177–187. ACM, 2005. [18] B. Viswanath, A. Mislove, M. Cha, and K. P. Gummadi. On the evolution of user interaction in Facebook. In Proc. 2nd ACM SIGCOMM Wkshp. on Social Networks, pages 37–42. ACM, 2009. [19] P. Sarkar and A. Moore. Dynamic social network analysis using latent space models. SIGKDD Explorations, 7(2):31–40, 2005. [20] Q. Ho, L. Song, and E. Xing. Evolving cluster mixed-membership blockmodel for time-varying networks. In AI and Statistics, volume 15 of JMLR W&C Proceedings, pages 342–350, 2011. [21] T. A. B. Snijders. Models for longitudinal network data. Mod. Meth. in Soc. Ntwk. Anal., pages 215–247, 2005. [22] S. Zhou, J. Lafferty, and L. Wasserman. Time varying undirected graphs. Machine Learning, 80:295–319, 2010. [23] A. Ahmed and E. P. Xing. Recovering time-varying networks of dependencies in social and biological studies. Proc. Natl. Acad. Scien., 106(29):11878–11883, 2009. 
[24] M. Kolar, L. Song, A. Ahmed, and E. P. Xing. Estimating time-varying networks. Ann. Appl. Stat., 4(1):94–123, 2010. [25] Z. Cai, J. Fan, and R. Li. Efficient estimation and inferences for varying-coefficient models. J. Amer. Stat. Assn., 95(451):888–902, 2000. [26] T. Martinussen and T. H. Scheike. Dynamic Regression Models for Survival Data. Springer, 2006. [27] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. J. Amer. Soc. for Inf. Sci. and Tech., 58(7):1019–1031, 2007. [28] M. Al Hasan, V. Chaoji, S. Salem, and M. Zaki. Link prediction using supervised learning. In SDM '06: Workshop on Link Analysis, Counter-terrorism and Security, 2006. [29] J. Leskovec, D. Huttenlocher, and J. Kleinberg. Predicting positive and negative links in online social networks. In Proc. 19th Intl. World Wide Web Conference, pages 641–650. ACM, 2010. [30] D. M. Dunlavy, T. G. Kolda, and E. Acar. Temporal link prediction using matrix and tensor factorizations. ACM Transactions on Knowledge Discovery from Data, 5(2):10, February 2011.
Stochastic convex optimization with bandit feedback Alekh Agarwal Department of EECS UC Berkeley alekh@cs.berkeley.edu Dean P. Foster Department of Statistics University of Pennsylvania dean.foster@gmail.com Daniel Hsu Microsoft Research New England dahsu@microsoft.com Sham M. Kakade Department of Statistics, University of Pennsylvania and Microsoft Research New England skakade@microsoft.com Alexander Rakhlin Department of Statistics University of Pennsylvania rakhlin@wharton.upenn.edu Abstract This paper addresses the problem of minimizing a convex, Lipschitz function f over a convex, compact set X under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value f(x) at any query point x ∈ X. We demonstrate a generalization of the ellipsoid algorithm that incurs Õ(poly(d)√T) regret. Since any algorithm has regret at least Ω(√T) on this problem, our algorithm is optimal in terms of the scaling with T. 1 Introduction This paper considers the problem of stochastic convex optimization under bandit feedback, which is a generalization of the classical multi-armed bandit problem formulated by Robbins in 1952. Our problem is specified by a mean cost function f, which is assumed to be convex and Lipschitz, and a convex, compact domain X. The algorithm repeatedly queries f at points x ∈ X and observes noisy realizations of f(x). Performance of an algorithm is measured by regret, that is, the difference between the values of f at the query points and the minimum value of f over X. This specializes to the classical K-armed setting when X is the probability simplex and f is linear. Several recent works consider the continuum-armed bandit problem, making different assumptions on the structure of f over X. For instance, f is assumed to be linear in [9], a Lipschitz condition on f is assumed in [3, 12, 13], and Srinivas et al. [16] exploit the structure of Gaussian processes.
For these "non-parametric" bandit problems, the rates of regret (after T queries) are of the form T^α, with exponent α approaching 1 for large dimension d. The question addressed in the present paper is: How can we leverage convexity of the mean cost function as a structural assumption? Our main contribution is an algorithm which achieves, with high probability, an Õ(poly(d)√T) regret after T requests. This result holds for all convex Lipschitz mean cost functions. We observe that the rate with respect to T does not deteriorate with d, unlike the non-parametric problems mentioned earlier. Let us also remark that Ω(√(dT)) lower bounds have been shown for linear mean cost functions, making our algorithm optimal up to factors polynomial in d and logarithmic in T. Prior Work Asymptotic rates of √T have been previously achieved by Cope [8] for unimodal functions under stringent conditions (smoothness and strong convexity of the mean cost function, in addition to the maxima being achieved inside the set). Auer et al. [4] show a regret of Õ(√T) for a one-dimensional non-convex problem with a finite number of maximizers. Yu and Mannor [17] recently studied unimodal bandits in one dimension, but they do not consider higher-dimensional cases. Bubeck et al. [7] show √T regret for a subset of Lipschitz functions with certain metric properties. Convex, Lipschitz cost functions have also been looked at in the adversarial model [10, 12], but the best-known regret bounds for these algorithms are O(T^{3/4}). We also note that previous results of Agarwal et al. [1] and Nesterov [15] do not apply to our setting, as noted in the full-length version of this paper [2]. The problem addressed in this paper is closely related to noisy zeroth-order convex optimization, whereby the algorithm queries a point of the domain X and receives a noisy evaluation of the function.
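This feedback model is easy to make concrete. The following is a minimal Python sketch (our own illustration, not from the paper; the helper name `make_noisy_oracle` is hypothetical) of a noisy zeroth-order oracle that also records the true function values of queried points so that the cumulative regret can be computed afterwards:

```python
import random

def make_noisy_oracle(f, sigma, seed=0):
    """Wrap a deterministic objective f as a bandit oracle: each query at a
    point x returns f(x) plus independent Gaussian (hence sigma-subgaussian)
    noise.  True (noiseless) losses are recorded for regret accounting."""
    rng = random.Random(seed)
    history = []  # true values f(x_t) of all queried points, in order

    def oracle(x):
        history.append(f(x))
        return f(x) + rng.gauss(0.0, sigma)

    def regret(f_star):
        # cumulative regret: sum_t f(x_t) - f(x*)
        return sum(v - f_star for v in history)

    return oracle, regret

f = lambda x: (x - 0.3) ** 2          # convex, minimized at x* = 0.3 with f(x*) = 0
oracle, regret = make_noisy_oracle(f, sigma=0.1)
for x in [0.25, 0.5, 0.75]:
    oracle(x)                          # algorithm only ever sees noisy values
```

Note that the algorithm observes only the noisy returns of `oracle`, while `regret` is computed against the hidden true values, which is exactly why low regret is a stronger requirement than a final optimization guarantee.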
While the literature on stochastic optimization is vast, we emphasize that an optimization guarantee does not necessarily imply a bound on regret. In particular, we directly build on an optimization method that has been developed by Nemirovski and Yudin [14, Chapter 9]. Given ϵ > 0, the method is guaranteed to produce an ϵ-minimizer in Õ(poly(d)·ϵ^{−2}) iterations, yet this does not immediately imply small regret. The latter is the quantity of interest in this paper, since it is the standard performance measure in decision theory. More importantly, in many applications every query to the function involves a consumption of resources or a monetary cost. A low regret guarantees that the net cost over the entire process is bounded, unlike an optimization error bound. The remainder of this paper is organized as follows. In the next section, we give the formal problem setup and highlight differences between the regret and optimization error settings. We then present a simple algorithm and its analysis in one dimension that illustrates some of the key insights behind the general d-dimensional algorithm in Section 3. Section 4 describes our generalization of the ellipsoid algorithm for d dimensions along with its regret guarantee. Proofs of our results can be found in the full-length version [2]. 2 Setup In this section we give the basic setup and the performance criterion, and explain the differences between the metrics of regret and optimization error. 2.1 Problem definition and notation Let X be a compact and convex subset of R^d, and let f : X → R be a 1-Lipschitz convex function on X, so f(x) − f(x′) ≤ ‖x − x′‖ for all x, x′ ∈ X. We assume X is specified in a way so that the algorithm can efficiently construct the smallest Euclidean ball containing X. Furthermore, we assume the algorithm has noisy black-box access to f.
Specifically, the algorithm is allowed to query the value of f at any x ∈ X, and it observes y = f(x) + ε, where ε is an independent σ-subgaussian random variable with mean zero: E[exp(λε)] ≤ exp(λ²σ²/2) for all λ ∈ R. The goal of the algorithm is to minimize its regret: after making T queries x_1, . . . , x_T ∈ X, the regret of the algorithm compared to any x* ∈ arg min_{x∈X} f(x) is

R_T = Σ_{t=1}^{T} (f(x_t) − f(x*)).  (1)

We will construct an average and confidence interval (henceforth CI) for the function values at points queried by the algorithm. Letting LB_{γ_i}(x) and UB_{γ_i}(x) denote the lower and upper bounds of a CI of width γ_i for the function estimate of a point x, we will say that CI's at two points are γ-separated if LB_{γ_i}(x) ≥ UB_{γ_i}(y) + γ or LB_{γ_i}(y) ≥ UB_{γ_i}(x) + γ. 2.2 Regret vs. optimization error Since f is convex, the average x̄_T = (1/T) Σ_{t=1}^{T} x_t satisfies f(x̄_T) − f(x*) ≤ R_T/T, so that low regret (1) also gives a small optimization error. The converse, however, is not necessarily true. An optimization method might query far from the minimum of the function (that is, explore) on most rounds and output the solution only at the last step. Guaranteeing small regret typically involves a more careful balancing of exploration and exploitation. To better understand the difference, suppose X = [0, 1], and let f(x) be one of x·T^{−1/3}, −x·T^{−1/3}, and x(x − 1). Let us sample function values at x = 1/4 and x = 3/4. To distinguish the first two cases, we need Ω(T^{2/3}) points. If f is indeed linear, we only incur O(T^{1/3}) regret on these rounds. However, if instead f(x) = x(x − 1), we incur an undesirable Ω(T^{2/3}) regret. For purposes of optimization, it suffices to eventually distinguish the three cases. For the purposes of regret minimization, however, an algorithm has to detect that the function curves between the two sampled points. To address this issue, we additionally sample at x = 1/2.
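The effect of this extra midpoint sample can be seen numerically. The sketch below (our own illustration, noiseless for clarity) evaluates the three candidate functions above at x = 1/4, 1/2, and 3/4 and measures how far the midpoint value falls below the chord through the two outer points; only the curved case shows a gap:

```python
# Noiseless illustration of the center-point device: at the two outer points
# x = 1/4 and x = 3/4 the two lines and the parabola can look alike, but the
# midpoint x = 1/2 exposes the curvature of x(x - 1).
T = 10 ** 6
candidates = {
    "increasing line": lambda x: x * T ** (-1 / 3),
    "decreasing line": lambda x: -x * T ** (-1 / 3),
    "parabola": lambda x: x * (x - 1),
}

def midpoint_gap(f):
    """How far f(1/2) sits below the chord through f(1/4) and f(3/4)."""
    chord_mid = 0.5 * (f(0.25) + f(0.75))
    return chord_mid - f(0.5)

gaps = {name: midpoint_gap(f) for name, f in candidates.items()}
```

For the two linear candidates the gap is exactly zero, while for x(x − 1) the midpoint sits 1/16 below the chord, which is the curvature signal the algorithm acts on.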
The center point acts as a sentinel: if it is recognized that f(1/2) is noticeably below the other two values, the region [0, 1/4] or [3/4, 1] can be discarded. Similarly, one of these regions can be discarded if it is recognized that the value of f either at x = 1/4 or at x = 3/4 is greater than the others. Finally, if f at all three points appears to be similar at a given scale, we have a certificate (due to convexity) that the algorithm is not paying regret per query larger than this scale. This center-point device, which allows the algorithm to quickly detect that it might be paying high regret and to act on this information, is the main novel tool of our paper. Unlike discretization-based methods, the proposed algorithm uses convexity in a crucial way. We first demonstrate the device on one-dimensional problems in the next section, where the solution is clean and intuitive. We then develop a version of the algorithm for higher dimensions, basing our construction on the beautiful zeroth-order optimization method of Nemirovski and Yudin [14]. Their method does not guarantee vanishing regret by itself, and a careful fusion of this algorithm with our center-point device is required. 3 One-dimensional case We start with the special case of one dimension to illustrate some of the key ideas, including the center-point device. We assume w.l.o.g. that the domain X = [0, 1] and f(x) ∈ [0, 1] (the latter can be achieved by pinning f(x*) = 0, since f is 1-Lipschitz). 3.1 Algorithm description

Algorithm 1 One-dimensional stochastic convex bandit algorithm
input: noisy black-box access to f : [0, 1] → R; total number of queries allowed T.
1: Let l_1 := 0 and r_1 := 1.
2: for epoch τ = 1, 2, . . . do
3:   Let w_τ := r_τ − l_τ.
4:   Let x_l := l_τ + w_τ/4, x_c := l_τ + w_τ/2, and x_r := l_τ + 3w_τ/4.
5:   for round i = 1, 2, . . . do
6:     Let γ_i := 2^{−i}.
7:     For each x ∈ {x_l, x_c, x_r}, query f(x) (2σ log T)/γ_i² times.
8:     if max{LB_{γ_i}(x_l), LB_{γ_i}(x_r)} ≥ min{UB_{γ_i}(x_l), UB_{γ_i}(x_r)} + γ_i then
9:       {Case 1: CI's at x_l and x_r are γ_i-separated}
10:      if LB_{γ_i}(x_l) ≥ LB_{γ_i}(x_r) then let l_{τ+1} := x_l and r_{τ+1} := r_τ.
11:      if LB_{γ_i}(x_l) < LB_{γ_i}(x_r) then let l_{τ+1} := l_τ and r_{τ+1} := x_r.
12:      Continue to epoch τ + 1.
13:    else if max{LB_{γ_i}(x_l), LB_{γ_i}(x_r)} ≥ UB_{γ_i}(x_c) + γ_i then
14:      {Case 2: CI's at x_c and x_l or x_r are γ_i-separated}
15:      if LB_{γ_i}(x_l) ≥ LB_{γ_i}(x_r) then let l_{τ+1} := x_l and r_{τ+1} := r_τ.
16:      if LB_{γ_i}(x_l) < LB_{γ_i}(x_r) then let l_{τ+1} := l_τ and r_{τ+1} := x_r.
17:      Continue to epoch τ + 1.
18:    end if
19:  end for
20: end for

Algorithm 1 proceeds in a series of epochs demarcated by a working feasible region (the interval X_τ = [l_τ, r_τ] in epoch τ). In each epoch, the algorithm aims to discard a portion of X_τ determined to contain only suboptimal points. To do this, the algorithm repeatedly makes noisy queries to f at three different points in X_τ. Each epoch is further subdivided into rounds, where we query the function (2σ log T)/γ_i² times in round i at each of the points. By Hoeffding's inequality, this implies that we know the function value to within γ_i with high probability. The value γ_i is halved at every round. At the end of an epoch τ, X_τ is reduced to a subset X_{τ+1} = [l_{τ+1}, r_{τ+1}] ⊂ [l_τ, r_τ] of the current region for the next epoch τ + 1, and this reduction is such that the new region is smaller in size by a constant fraction. This geometric rate of reduction guarantees that only a small number of epochs can occur before X_τ contains only near-optimal points. For the algorithm to identify a sizable portion of X_τ to discard, the queries in each epoch should be suitably chosen, and the convexity of f must be exploited. To this end, the algorithm makes its queries at three equally-spaced points x_l < x_c < x_r in X_τ (see Section 4.1 of the full-length version for graphical illustrations of these cases).
Case 1: If the CI's around f(x_l) and f(x_r) are sufficiently separated, the algorithm discards a fourth of [l_τ, r_τ] (to the left of x_l or to the right of x_r) which does not contain x*. Case 2: If the above separation fails, the algorithm checks if the CI around f(x_c) is sufficiently below at least one of the other CI's (for f(x_l) or f(x_r)). If that happens, the algorithm again discards a quartile of [l_τ, r_τ] that does not contain x*. Case 3: Finally, if none of the earlier cases is true, then the algorithm is assured (by convexity) that the function is sufficiently flat on X_τ and hence it has not incurred much regret so far. The algorithm continues the epoch, with an increased number of queries to obtain smaller confidence intervals at each of the three points. 3.2 Analysis The analysis of Algorithm 1 relies on the function values being contained in the confidence intervals we construct at each round of each epoch. To avoid having probabilities throughout our analysis, we define an event E where, at each epoch τ and each round i, f(x) ∈ [LB_{γ_i}(x), UB_{γ_i}(x)] for x ∈ {x_l, x_c, x_r}. We will carry out the remainder of the analysis conditioned on E and bound the probability of E^c at the end. The following theorem bounds the regret incurred by Algorithm 1. We note that the regret is measured in terms of the points x_t queried by the algorithm at time t. Within any given round, the order of queries is immaterial to the regret. Theorem 1 (Regret bound for Algorithm 1). Suppose Algorithm 1 is run on a convex, 1-Lipschitz function f bounded in [0, 1]. Suppose the noise in observations is i.i.d. and σ-subgaussian. Then with probability at least 1 − 1/T we have

Σ_{t=1}^{T} (f(x_t) − f(x*)) ≤ 108 √(σT log T) · log_{4/3}(T/(8σ log T)).

Remarks: As stated, Algorithm 1 and Theorem 1 assume knowledge of T, but we can make the algorithm adaptive to T by a standard doubling argument. We remark that O(√T) is the smallest possible regret for any algorithm even with noisy gradient information.
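To get a feel for the shape of this guarantee, the bound can be evaluated numerically. The helper below is our own illustration (not part of the paper); it simply plugs into the expressions of Theorem 1 and of the epoch-count bound from the analysis:

```python
import math

def log43(x):
    # log base 4/3, matching the log_{4/3}(.) factors in the analysis
    return math.log(x) / math.log(4 / 3)

def epoch_bound(T, sigma=1.0):
    # number of epochs is at most (1/2) * log_{4/3}(T / (8 sigma log T))
    return 0.5 * log43(T / (8 * sigma * math.log(T)))

def regret_bound(T, sigma=1.0):
    # Theorem 1: 108 * sqrt(sigma T log T) * log_{4/3}(T / (8 sigma log T))
    return 108 * math.sqrt(sigma * T * math.log(T)) * log43(T / (8 * sigma * math.log(T)))

# The constants make the bound loose for small T, but the growth is
# sqrt(T) up to logarithmic factors: quadrupling T roughly doubles it.
ratio = regret_bound(4e12) / regret_bound(1e12)
```

The ratio comes out slightly above 2 rather than exactly 2 because of the logarithmic factors multiplying the √T term.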
Hence, this result shows that for purposes of regret, noisy zeroth-order information is no worse than noisy first-order information, apart from logarithmic factors. Theorem 1 is proved via a series of lemmas below. The key idea is to show that the regret on any epoch is small and the total number of epochs is bounded. To bound the per-epoch regret, we will show that the total number of queries made on any epoch depends on how flat the function is on X_τ. We either take a long time, but the function is very flat, or we stop early when the function has sufficient slope, never accruing too much regret. We start by showing that the reduction in X_τ after each epoch always preserves near-optimal points. Lemma 1 (Survival of approx. minima). If epoch τ ends in round i, then [l_{τ+1}, r_{τ+1}] contains every x ∈ [l_τ, r_τ] such that f(x) ≤ f(x*) + γ_i. In particular, x* ∈ [l_τ, r_τ] for all τ. The next two lemmas bound the regret incurred in any single epoch. To show this, we first establish that an algorithm incurs low regret in a round as long as it does not end an epoch. Then, as a consequence of the doubling trick, we show that the regret incurred in an epoch is of the same order as that incurred in the last round of the epoch. Lemma 2 (Certificate of low regret). If epoch τ continues from round i to round i + 1, then the regret incurred in round i is at most 72 γ_i^{−1} σ log T. Lemma 3 (Regret in an epoch). If epoch τ ends in round i, then the regret incurred in the entire epoch is at most 216 γ_i^{−1} σ log T. To obtain a bound on the overall regret, we bound the number of epochs that can occur before X_τ contains only near-optimal points. The final regret bound is simply the product of the number of epochs and the regret incurred in any single epoch. Lemma 4 (Bound on the number of epochs). The total number of epochs τ performed by Algorithm 1 is bounded as τ ≤ (1/2) log_{4/3}(T/(8σ log T)).
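Putting the pieces together, the epoch/round structure of Algorithm 1 can be sketched in a few lines of Python. This is a simplified simulation for intuition only: the floor on γ and the exact query counts are our own simplifications, not the paper's schedule.

```python
import math
import random

def one_dim_bandit(f, T, sigma=0.05, seed=0):
    """Simplified simulation of Algorithm 1.  Each epoch keeps a feasible
    interval [l, r]; each round shrinks the CI width gamma until either an
    outer CI dominates (Case 1) or the center CI is dominated (Case 2), at
    which point a quartile not containing the minimizer is discarded."""
    rng = random.Random(seed)
    l, r = 0.0, 1.0
    queries = 0

    def ci(x, gamma):
        # Average enough sigma-subgaussian samples that the empirical mean
        # is within gamma of f(x) w.h.p. (Hoeffding); return (LB, UB).
        nonlocal queries
        n = max(1, math.ceil(2 * sigma * math.log(T) / gamma ** 2))
        queries += n
        m = sum(f(x) + rng.gauss(0, sigma) for _ in range(n)) / n
        return m - gamma, m + gamma

    while queries < T:
        w = r - l
        xl, xc, xr = l + w / 4, l + w / 2, l + 3 * w / 4
        gamma, shrunk = 0.5, False
        while queries < T and gamma > 1e-3:
            lbl, ubl = ci(xl, gamma)
            lbc, ubc = ci(xc, gamma)
            lbr, ubr = ci(xr, gamma)
            if (max(lbl, lbr) >= min(ubl, ubr) + gamma      # Case 1
                    or max(lbl, lbr) >= ubc + gamma):       # Case 2
                if lbl >= lbr:
                    l = xl                                  # discard left quartile
                else:
                    r = xr                                  # discard right quartile
                shrunk = True
                break
            gamma /= 2                                      # Case 3: flat; refine
        if not shrunk:
            break                                           # budget or floor reached
    return (l + r) / 2

x_hat = one_dim_bandit(lambda x: (x - 0.7) ** 2, T=20000)
```

On a convex objective with minimizer at 0.7, the surviving interval's midpoint lands near the minimizer well before the query budget runs out, illustrating Lemma 1's survival property.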
4 Algorithm for optimization in higher dimensions We now present the general algorithm that works in d dimensions. The natural approach would be to try to generalize Algorithm 1 to work in multiple dimensions. However, the obvious extension requires querying the function along every direction in a covering of the unit sphere so that we know the behavior of the function along every direction. Such an approach yields regret and running time that scale exponentially with the dimension d. Nemirovski and Yudin [14] address this problem in the setup of zeroth-order optimization by a clever construction to capture all the directions in polynomially many queries. We define a pyramid to be a d-dimensional polyhedron defined by d + 1 points; d points form a d-dimensional regular polygon that is the base of the pyramid, and the apex lies above the hyperplane containing the base (see Figure 1 for a graphic illustration in 3 dimensions). The idea of Nemirovski and Yudin is to build a sequence of pyramids, each capturing the variation of the function in certain directions, in such a way that with O(d log d) pyramids we can explore all the directions. However, as mentioned earlier, their approach fails to give a low regret. We combine their geometric construction with ideas from the one-dimensional case to obtain Algorithm 2, which incurs a bounded regret.

Figure 1: Pyramid in 3 dimensions (apex angle ϕ, height h).
Figure 2: The regular simplex constructed at round i of epoch τ, with radius r_τ, center x_0, and vertices x_1, . . . , x_{d+1}.

Just like the one-dimensional case, Algorithm 2 proceeds in epochs. We start with the optimization domain X, and at the beginning we set X_0 = X. At the beginning of epoch τ, we have a current feasible set X_τ which contains at least one approximate optimum of the convex function. The epoch ends with discarding some portion of the set X_τ in such a way that we still retain at least one approximate optimum in the remaining set X_{τ+1}.
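Each epoch needs a regular simplex inscribed in a ball, as in Figure 2. One standard construction (our illustration; the paper does not prescribe a particular one) embeds it in the hyperplane {x : Σ_j x_j = 0} of R^{d+1}, an isometric copy of R^d: start from the standard basis vectors, subtract their centroid, rescale to the desired radius, and translate to the center.

```python
import math

def regular_simplex(d, center, radius):
    """d+1 vertices of a regular simplex, all at distance `radius` from
    `center`, built from the standard basis e_1..e_{d+1} of R^(d+1):
    subtracting the centroid puts the points in the hyperplane {sum(x)=0},
    where |e_i - centroid| = sqrt(d/(d+1)) for every i."""
    n = d + 1
    scale = radius / math.sqrt(d / n)
    verts = []
    for i in range(n):
        v = [scale * ((1.0 if j == i else 0.0) - 1.0 / n) + center[j]
             for j in range(n)]
        verts.append(v)
    return verts

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

V = regular_simplex(3, center=[0.0] * 4, radius=2.0)
```

By symmetry every vertex is equidistant from the center and every pair of vertices is equidistant from each other, which is all the algorithm needs of the simplex in step 4 of each epoch.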
At the start of epoch τ, we apply an affine transformation to X_τ so that the smallest-volume ellipsoid containing it is a Euclidean ball of radius R_τ (denoted B(R_τ)). We define r_τ = R_τ/(c_1 d) for a constant c_1 ≥ 1, so that B(r_τ) ⊆ X_τ (see e.g. Lecture 1, p. 2 of [5]). We will use the notation B_τ to refer to the enclosing ball. Within each epoch, the algorithm proceeds in several rounds, each round maintaining a value γ_i which is successively halved.

Algorithm 2 Stochastic convex bandit algorithm
input: feasible region X ⊂ R^d; noisy black-box access to f : X → R; constants c_1 and c_2; functions Δ_τ(γ) and Δ̄_τ(γ); number of queries T allowed.
1: Let X_1 := X.
2: for epoch τ = 1, 2, . . . do
3:   Round X_τ so B(r_τ) ⊆ X_τ ⊆ B(R_τ), with R_τ minimized and r_τ := R_τ/(c_1 d). Let B_τ := B(R_τ).
4:   Construct a regular simplex with vertices x_1, . . . , x_{d+1} on the surface of B(r_τ).
5:   for round i = 1, 2, . . . do
6:     Let γ_i := 2^{−i}.
7:     Query f at x_j for each j = 1, . . . , d + 1, (2σ log T)/γ_i² times.
8:     Let y_1 := arg max_j LB_{γ_i}(x_j).
9:     for k = 1, 2, . . . do
10:      Construct pyramid Π_k with apex y_k; let z_1, . . . , z_d be the vertices of the base of Π_k and z_0 the center of Π_k.
11:      Let γ̂ := 2^{−1}.
12:      loop
13:        Query f at each of {y_k, z_0, z_1, . . . , z_d} (2σ log T)/γ̂² times.
14:        Let center := z_0, apex := y_k, top be the vertex v of Π_k maximizing LB_γ̂(v), bottom be the vertex v of Π_k minimizing LB_γ̂(v).
15:        if LB_γ̂(top) ≥ UB_γ̂(bottom) + Δ̄_τ(γ̂) and LB_γ̂(top) ≥ UB_γ̂(apex) + γ̂ then
16:          {Case (1a)}
17:          Let y_{k+1} := top, and immediately continue to pyramid k + 1.
18:        else if LB_γ̂(top) ≥ UB_γ̂(bottom) + Δ̄_τ(γ̂) and LB_γ̂(top) < UB_γ̂(apex) + γ̂ then
19:          {Case (1b)}
20:          Set (X_{τ+1}, B′_{τ+1}) := Cone-cutting(Π_k, X_τ, B_τ), and proceed to epoch τ + 1.
21:        else if LB_γ̂(top) < UB_γ̂(bottom) + Δ̄_τ(γ̂) and UB_γ̂(center) ≥ LB_γ̂(bottom) − Δ_τ(γ̂) then
22:          {Case (2a)}
23:          Let γ̂ := γ̂/2.
24:          if γ̂ < γ_i then start next round i + 1.
25:        else if LB_γ̂(top) < UB_γ̂(bottom) + Δ̄_τ(γ̂) and UB_γ̂(center) < LB_γ̂(bottom) − Δ_τ(γ̂) then
26:          {Case (2b)}
27:          Set (X_{τ+1}, B′_{τ+1}) := Hat-raising(Π_k, X_τ, B_τ), and proceed to epoch τ + 1.
28:        end if
29:      end loop
30:    end for
31:  end for
32: end for

Algorithm 3 Cone-cutting
input: pyramid Π with apex y; (rounded) feasible region X_τ for epoch τ; enclosing ball B_τ.
1: Let z_1, . . . , z_d be the vertices of the base of Π, and ϕ̄ the angle at its apex.
2: Define the cone K_τ = {x | ∃λ > 0, α_1, . . . , α_d > 0 with Σ_{i=1}^{d} α_i = 1 : x = y − λ Σ_{i=1}^{d} α_i (z_i − y)}.
3: Set B′_{τ+1} to be the minimum-volume ellipsoid containing B_τ \ K_τ.
4: Set X_{τ+1} := X_τ ∩ B′_{τ+1}.
output: new feasible region X_{τ+1} and enclosing ellipsoid B′_{τ+1}.

Algorithm 4 Hat-raising
input: pyramid Π with apex y; (rounded) feasible region X_τ for epoch τ; enclosing ball B_τ.
1: Let center be the center of Π.
2: Set y′ := y + (y − center).
3: Set Π′ to be the pyramid with apex y′ and the same base as Π.
4: Set (X_{τ+1}, B′_{τ+1}) := Cone-cutting(Π′, X_τ, B_τ).
output: new feasible region X_{τ+1} and enclosing ellipsoid B′_{τ+1}.

Figure 3: Sequence of pyramids constructed by Algorithm 2.

Let x_0 be the center of the ball B(R_τ) containing X_τ. At the start of a round i, we construct a regular simplex centered at x_0 and contained in B(r_τ). The algorithm queries the function f at all the vertices of the simplex, denoted by x_1, . . . , x_{d+1}, until the CI's at each vertex shrink to γ_i. The algorithm picks the point y_1 that maximizes LB_{γ_i}(x_j). By construction, f(y_1) ≥ f(x_j) − γ_i for all j = 1, . . . , d + 1. This step is depicted in Figure 2. The algorithm now successively constructs a sequence of pyramids, with the goal of identifying a region of the feasible set X_τ such that at least one approximate optimum of f lies outside the selected region. This region will be discarded at the end of the epoch. The construction of the pyramids follows the construction from Section 9.2.2 of Nemirovski and Yudin [14].
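The cone K_τ discarded by Algorithm 3 is simply the pyramid reflected through its apex, which a direct implementation of the defining formula makes concrete. The helper below is our own illustration (hypothetical name, not from the paper):

```python
def cone_point(apex, base, lam, alphas):
    """A point of the cone K_tau from the definition in Algorithm 3:
    x = y - lam * sum_i alpha_i * (z_i - y), with lam > 0 and alphas on the
    probability simplex.  For lam = 1 this is the reflection through the
    apex of the corresponding convex combination of the base points, which
    is why K_tau is the pyramid mirrored about its apex."""
    assert lam > 0 and abs(sum(alphas) - 1.0) < 1e-12
    d = len(apex)
    return [apex[j] - lam * sum(a * (z[j] - apex[j]) for a, z in zip(alphas, base))
            for j in range(d)]

apex = [0.0, 1.0]                       # 2-d toy pyramid: apex above ...
base = [[-1.0, 0.0], [1.0, 0.0]]        # ... a base segment on the x-axis
p = cone_point(apex, base, lam=1.0, alphas=[0.5, 0.5])   # -> [0.0, 2.0]
```

Reflecting the base midpoint (0, 0) through the apex (0, 1) indeed gives (0, 2), and larger λ pushes the point further out along the same ray, sweeping out the cone.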
The pyramids we construct will have an angle 2ϕ at the apex, where cos ϕ = c_2/d. The base of the pyramid consists of vertices z_1, . . . , z_d such that z_i − x_0 and y_1 − z_i are orthogonal. We note that the construction of such a pyramid is always possible: we take a sphere with y_1 − x_0 as the diameter, and arrange z_1, . . . , z_d on the boundary of the sphere such that the angle between y_1 − x_0 and y_1 − z_i is ϕ. The construction of the pyramid is depicted in Figure 3. Given this pyramid, we set γ̂ = 1, and sample the function at y_1 and z_1, . . . , z_d as well as the center of the pyramid until the CI's all shrink to γ̂. Let top and bottom denote the vertices of the pyramid (including y_1) with the largest and the smallest function value estimates, respectively. For consistency, we will also use apex to denote the apex y_1. We then check for one of the following conditions (see Section 5 of the full-length version [2] for graphical illustrations of these cases): (1) If LB_γ̂(top) ≥ UB_γ̂(bottom) + Δ̄_τ(γ̂), we proceed based on the separation between the top and apex CI's. (a) If LB_γ̂(top) ≥ UB_γ̂(apex) + γ̂, then we know that with high probability

f(top) ≥ f(apex) + γ̂ ≥ f(apex) + γ_i.  (2)

In this case, we set top to be the apex of the next pyramid, reset γ̂ = 1, and continue the sampling procedure on the next pyramid. (b) If LB_γ̂(top) ≤ UB_γ̂(apex) + γ̂, then we know that LB_γ̂(apex) ≥ UB_γ̂(bottom) + Δ̄_τ(γ̂) − 2γ̂. In this case, we declare the epoch over and pass the current apex to the cone-cutting step. (2) If LB_γ̂(top) ≤ UB_γ̂(bottom) + Δ̄_τ(γ̂), then one of the following happens: (a) If UB_γ̂(center) ≥ LB_γ̂(bottom) − Δ_τ(γ̂), then all of the vertices and the center of the pyramid have their function values within a 2Δ_τ(γ̂) + 3γ̂ interval. In this case, we set γ̂ = γ̂/2. If this sets γ̂ < γ_i, we start the next round with γ_{i+1} = γ_i/2. Otherwise, we continue sampling the current pyramid with the new value of γ̂.
(b) If UB_γ̂(center) ≤ LB_γ̂(bottom) − Δ_τ(γ̂), then we terminate the epoch and pass the center and the current apex to the hat-raising step. Hat-raising: This step happens when the algorithm enters case (2b). In this case, we will show that if we move the apex of the pyramid a little, from y_i to y′_i, then the CI at y′_i is above the top CI, while the angle of the new pyramid at y′_i is not much smaller than ϕ. Letting center_i denote the center of the pyramid, we set y′_i = y_i + (y_i − center_i) and denote the angle at the apex y′_i by 2ϕ̄. Figure 4 shows the transformation involved in this step.

Figure 4: Transformation of the pyramid Π in the hat-raising step.
Figure 5: Cone-cutting step at epoch τ. The solid circle is the enclosing ball B_τ. The shaded region is the intersection of K_τ with B_τ. The dotted ellipsoid is the new enclosing ellipsoid B′_{τ+1}.

Cone-cutting: This step concludes an epoch. The algorithm gets here either through case (1b) or through the hat-raising step. In either case, we have a pyramid with an apex y, base z_1, . . . , z_d, and an angle 2ϕ̄ at the apex, where cos(ϕ̄) ≤ 2c_2/d. We now define a cone

K_τ = {x | ∃λ > 0, α_1, . . . , α_d > 0 with Σ_{i=1}^{d} α_i = 1 : x = y − λ Σ_{i=1}^{d} α_i (z_i − y)}  (3)

which is centered at y and is a reflection of the pyramid around the apex. By construction, the cone K_τ has an angle 2ϕ̄ at its apex. We set B′_{τ+1} to be the ellipsoid of minimum volume containing B_τ \ K_τ and define X_{τ+1} = X_τ ∩ B′_{τ+1}. This is illustrated in Figure 5. Finally, we put things back into an isotropic position; B_{τ+1} is the ball containing X_{τ+1} in the isotropic coordinates, obtained by applying an affine transformation to B′_{τ+1}. Let us end with a brief discussion regarding the computational aspects of this algorithm. Clearly, the most computationally intensive steps of this algorithm are cone-cutting and the isotropic transformation at the end. However, these are exactly analogous to the classical ellipsoid method.
In particular, the equation for B′_{τ+1} is known in closed form [11]. Furthermore, the affine transformations needed to reshape the set can be computed via rank-one matrix updates, and hence computation of inverses can be done efficiently as well (see e.g. [11] for the relevant implementation details of the ellipsoid method). The following theorem states our regret guarantee on the performance of Algorithm 2. Theorem 2. Suppose Algorithm 2 is run with c_1 ≥ 64, c_2 ≤ 1/32, and parameters

Δ_τ(γ) = (6c_1 d⁴/c_2² + 3) γ and Δ̄_τ(γ) = (6c_1 d⁴/c_2² + 5) γ.

Then with probability at least 1 − 1/T, the regret incurred by the algorithm is bounded by

768 d³ σ √T log² T · ((2d² log d)/c_2² + 1) · ((4d⁷c_1)/c_2³ + d(d + 1)/c_2) · ((12c_1 d⁴)/c_2² + 11) = Õ(d^{16} √T).

Remarks: Theorem 2 is again optimal in the dependence on T. The large dependence on d is also seen in Nemirovski and Yudin [14], who obtain a d⁷ scaling in the noiseless case and leave it as an unspecified polynomial in the noisy case. Using random walk ideas [6] to improve the dependence on d is an interesting question for future research. Acknowledgments Part of this work was done while AA and DH were at the University of Pennsylvania. AA was partially supported by MSR and Google PhD fellowships and NSF grant CCF-1115788 while this work was done. DH was partially supported under grants AFOSR FA9550-09-10425, NSF IIS-1016061, and NSF IIS-713540. AR gratefully acknowledges the support of NSF under grant CAREER DMS-0954737. References [1] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, 2010. [2] A. Agarwal, D. Foster, D. Hsu, S. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. URL http://arxiv.org/abs/1107.1744, 2011. [3] R. Agrawal. The continuum-armed bandit problem. SIAM Journal on Control and Optimization, 33:1926, 1995. [4] P. Auer, R. Ortner, and C. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem.
Learning Theory, pages 454–468, 2007. [5] K. Ball. An elementary introduction to modern convex geometry. In Flavors of Geometry, number 31 in Publications of the Mathematical Sciences Research Institute, pages 1–55. 1997. [6] D. Bertsimas and S. Vempala. Solving convex programs by random walks. Journal of the ACM, 51(4):540–556, 2004. [7] S. Bubeck, R. Munos, G. Stolz, and C. Szepesv´ari. X-armed bandits. Journal of Machine Learning Research, 12:1655–1695, 2011. [8] E.W. Cope. Regret and convergence bounds for a class of continuum-armed bandit problems. Automatic Control, IEEE Transactions on, 54(6):1243–1253, 2009. [9] V. Dani, T.P. Hayes, and S.M. Kakade. Stochastic linear optimization under bandit feedback. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), 2008. [10] A. D. Flaxman, A. T. Kalai, and B. H. Mcmahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, pages 385–394, 2005. [11] Donald Goldfarb and Michael J. Todd. Modifications and implementation of the ellipsoid algorithm for linear programming. Mathematical Programming, 23:1–19, 1982. [12] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. Advances in Neural Information Processing Systems, 18, 2005. [13] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th annual ACM symposium on Theory of computing, pages 681– 690. ACM, 2008. [14] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983. [15] Y. Nesterov. Random gradient-free minimization of convex functions. Technical Report 2011/1, CORE DP, 2011. [16] N. Srinivas, A. Krause, S.M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. Arxiv preprint arXiv:0912.3995, 2009. [17] J. Y. Yu and S. Mannor. Unimodal bandits. 
In ICML, 2011.
Online Learning: Stochastic, Constrained, and Smoothed Adversaries

Alexander Rakhlin, Department of Statistics, University of Pennsylvania, rakhlin@wharton.upenn.edu
Karthik Sridharan, Toyota Technological Institute at Chicago, karthik@ttic.edu
Ambuj Tewari, Computer Science Department, University of Texas at Austin, ambuj@cs.utexas.edu

Abstract

Learning theory has largely focused on two main learning scenarios: the classical statistical setting where instances are drawn i.i.d. from a fixed distribution, and the adversarial scenario wherein, at every time step, an adversarially chosen instance is revealed to the player. It can be argued that in the real world neither of these assumptions is reasonable. We define the minimax value of a game where the adversary is restricted in his moves, capturing stochastic and non-stochastic assumptions on data. Building on the sequential symmetrization approach, we define a notion of distribution-dependent Rademacher complexity for the spectrum of problems ranging from i.i.d. to worst-case. The bounds let us immediately deduce variation-type bounds. We study a smoothed online learning scenario and show that an exponentially small amount of noise can make function classes with infinite Littlestone dimension learnable.

1 Introduction

In the papers [1, 10, 11], an array of tools has been developed to study the minimax value of diverse sequential problems under the worst-case assumption on Nature. In [10], many analogues of the classical notions from statistical learning theory have been developed, and these have been extended in [11] to performance measures well beyond the additive regret. The process of sequential symmetrization emerged as a key technique for dealing with complicated nested minimax expressions. In the worst-case model, the developed tools give a unified treatment to such sequential problems as regret minimization, calibration of forecasters, Blackwell's approachability, Φ-regret, and more.
Learning theory has been so far focused predominantly on the i.i.d. and the worst-case learning scenarios. Much less is known about learnability in-between these two extremes. In the present paper, we make progress towards filling this gap by proposing a framework in which it is possible to variously restrict the behavior of Nature. By restricting Nature to play i.i.d. sequences, the results boil down to the classical notions of statistical learning in the supervised learning scenario. By not placing any restrictions on Nature, we recover the worst-case results of [10]. Between these two endpoints of the spectrum, particular assumptions on the adversary yield interesting bounds on the minimax value of the associated problem. Once again, the sequential symmetrization technique arises as the main tool for dealing with the minimax value, but the proofs require more care than in the i.i.d. or completely adversarial settings. 1 Adapting the game-theoretic language, we will think of the learner and the adversary as the two players of a zero-sum repeated game. Adversary’s moves will be associated with “data”, while the moves of the learner – with a function or a parameter. This point of view is not new: game-theoretic minimax analysis has been at the heart of statistical decision theory for more than half a century (see [3]). In fact, there is a well-developed theory of minimax estimation when restrictions are put on either the choice of the adversary or the allowed estimators by the player. We are not aware of a similar theory for sequential problems with non-i.i.d. data. The main contribution of this paper is the development of tools for the analysis of online scenarios where the adversary’s moves are restricted in various ways. In additional to general theory, we consider several interesting scenarios which can be captured by our framework. All proofs are deferred to the appendix. 
2 Value of the Game Let F be a closed subset of a complete separable metric space, denoting the set of moves of the learner. Suppose the adversary chooses from the set X. Consider the Online Learning Model, defined as a T-round interaction between the learner and the adversary: On round t = 1, . . . , T, the learner chooses ft ∈F, the adversary simultaneously picks xt ∈X, and the learner suffers loss ft(xt). The goal of the learner is to minimize regret, defined as PT t=1 ft(xt) −inff∈F PT t=1 f(xt). It is a standard fact that simultaneity of the choices can be formalized by the first player choosing a mixed strategy; the second player then picks an action based on this mixed strategy, but not on its realization. We therefore consider randomized learners who predict a distribution qt ∈Q on every round, where Q is the set of probability distributions on F, assumed to be weakly compact. The set of probability distributions on X (mixed strategies of the adversary) is denoted by P. We would like to capture the fact that sequences (x1, . . . , xT ) cannot be arbitrary. This is achieved by defining restrictions on the adversary, that is, subsets of “allowed” distributions for each round. These restrictions limit the scope of available mixed strategies for the adversary. Definition 1. A restriction P1:T on the adversary is a sequence P1, . . . , PT of mappings Pt : X t−1 7→2P such that Pt(x1:t−1) is a convex subset of P for any x1:t−1 ∈X t−1. Note that the restrictions depend on the past moves of the adversary, but not on those of the player. We will write Pt instead of Pt(x1:t−1) when x1:t−1 is clearly defined. Using the notion of restrictions, we can give names to several types of adversaries that we will study in this paper. (1) A worst-case adversary is defined by vacuous restrictions Pt(x1:t−1) = P. That is, any mixed strategy is available to the adversary, including any deterministic point distribution. 
(2) A constrained adversary is defined by Pt(x1:xt−1) being the set of all distributions supported on the set {x ∈X : Ct(x1, . . . , xt−1, x) = 1} for some deterministic binary-valued constraint Ct. The deterministic constraint can, for instance, ensure that the length of the path determined by the moves x1, . . . , xt stays below the allowed budget. (3) A smoothed adversary picks the worst-case sequence which gets corrupted by i.i.d. noise. Equivalently, we can view this as restrictions on the adversary who chooses the “center” (or a parameter) of the noise distribution. Using techniques developed in this paper, we can also study the following adversaries (omitted due to lack of space): (4) A hybrid adversary in the supervised learning game picks the worst-case label yt, but is forced to draw the xt-variable from a fixed distribution [6]. (5) An i.i.d. adversary is defined by a time-invariant restriction Pt(x1:t−1) = {p} for every t and some p ∈P. For the given restrictions P1:T , we define the value of the game as VT (P1:T ) △= inf q1∈Q sup p1∈P1 E f1,x1 inf q2∈Q sup p2∈P2 E f2,x2 · · · inf qT ∈Q sup pT ∈PT E fT ,xT " T X t=1 ft(xt) −inf f∈F T X t=1 f(xt) # (1) where ft has distribution qt and xt has distribution pt. As in [10], the adversary is adaptive, that is, chooses pt based on the history of moves f1:t−1 and x1:t−1. At this point, the only difference from 2 the setup of [10] is in the restrictions Pt on the adversary. Because these restrictions might not allow point distributions, suprema over pt’s in (1) cannot be equivalently written as the suprema over xt’s. A word about the notation. In [10], the value of the game is written as VT (F), signifying that the main object of study is F. In [11], it is written as VT (ℓ, ΦT ) since the focus is on the complexity of the set of transformations ΦT and the payoff mapping ℓ. In the present paper, the main focus is indeed on the restrictions on the adversary, justifying our choice VT (P1:T ) for the notation. 
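To make the regret in (1) concrete, here is a small simulation (our illustration, not the paper's code) of the T-round protocol with a finite class F: the learner plays the exponential-weights mixed strategy qt, suffers expected loss ⟨qt, ℓt⟩, and regret is measured against the best fixed f ∈ F. For losses in [0, 1], this learner's regret is at most √((T/2) ln |F|) against any adversary, restricted or not:

```python
import math, random

def hedge_regret(loss_rows, eta):
    """Run exponential weights over a finite F and return the realized
    regret: sum_t <q_t, l_t> - min_f sum_t l_t[f], where q_t is the
    learner's mixed strategy on round t."""
    n = len(loss_rows[0])
    w = [1.0] * n
    cum = [0.0] * n                        # cumulative loss of each fixed f
    total = 0.0                            # learner's cumulative expected loss
    for l in loss_rows:
        Z = sum(w)
        q = [wi / Z for wi in w]
        total += sum(qi * li for qi, li in zip(q, l))
        for i in range(n):
            cum[i] += l[i]
            w[i] *= math.exp(-eta * l[i])
    return total - min(cum)

T, n = 1000, 8
rng = random.Random(0)
losses = [[rng.random() for _ in range(n)] for _ in range(T)]   # adversary's sequence
eta = math.sqrt(8 * math.log(n) / T)
reg = hedge_regret(losses, eta)
assert reg <= math.sqrt(T * math.log(n) / 2)                    # classical Hedge bound
```

The point of the restrictions P1:T is that for adversaries weaker than worst-case, the value can be far below this generic √T rate, as the variance bounds of Section 4 show.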
The first step is to apply the minimax theorem. To this end, we verify the necessary conditions. Our assumption that F is a closed subset of a complete separable metric space implies that Q is tight and Prokhorov’s theorem states that compactness of Q under weak topology is equivalent to tightness [14]. Compactness under weak topology allows us to proceed as in [10]. Additionally, we require that the restriction sets are compact and convex. Theorem 1. Let F and X be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Let P1:T be the restrictions, and assume that for any x1:t−1, Pt(x1:t−1) satisfies the necessary conditions for the minimax theorem to hold. Then VT (P1:T ) = sup p1∈P1 Ex1∼p1 . . . sup pT ∈PT ExT ∼pT " T X t=1 inf ft∈F Ext∼pt [ft(xt)] −inf f∈F T X t=1 f(xt) # . (2) The nested sequence of suprema and expected values in Theorem 1 can be re-written succinctly as VT (P1:T ) = sup p∈P Ex1∼p1Ex2∼p2(·|x1) . . . ExT ∼pT (·|x1:T −1) " T X t=1 inf ft∈F Ext∼pt [ft(xt)] −inf f∈F T X t=1 f(xt) # = sup p∈P E " T X t=1 inf ft∈F Ext∼pt [ft(xt)] −inf f∈F T X t=1 f(xt) # (3) where the supremum is over all joint distributions p over sequences, such that p satisfies the restrictions as described below. Given a joint distribution p on sequences (x1, . . . , xT ) ∈X T , we denote the associated conditional distributions by pt(·|x1:t−1). We can think of the choice p as a sequence of oblivious strategies {pt : X t−1 7→P}T t=1, mapping the prefix x1:t−1 to a conditional distribution pt(·|x1:t−1) ∈Pt(x1:t−1). We will indeed call p a “joint distribution” or an “oblivious strategy” interchangeably. We say that a joint distribution p satisfies restrictions if for any t and any x1:t−1 ∈X t−1, pt(·|x1:t−1) ∈Pt(x1:t−1). The set of all joint distributions satisfying the restrictions is denoted by P. 
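For intuition about the representation (2)–(3), the value can be computed exactly in tiny cases. A toy sketch (ours): T = 1 with the i.i.d. restriction P1 = {p}, where the value reduces to min_f E_p[f(x)] − E_p[min_f f(x)]; the loss table and p below are hypothetical numbers:

```python
# Value of the game for T = 1 under the i.i.d. restriction P1 = {p}:
# V1 = min_f E_p[f(x)] - E_p[min_f f(x)].  Hypothetical loss table:
# rows are f1, f2; columns are the adversary's moves x1, x2.
loss = {"f1": {"x1": 0.0, "x2": 1.0},
        "f2": {"x1": 1.0, "x2": 0.0}}
p = {"x1": 0.5, "x2": 0.5}

best_fixed = min(sum(p[x] * row[x] for x in p) for row in loss.values())
pointwise = sum(p[x] * min(row[x] for row in loss.values()) for x in p)
V1 = best_fixed - pointwise
# Each fixed f errs with probability 1/2, while the pointwise-best f is
# always correct, so the value is 1/2.
assert V1 == 0.5
```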
We note that Theorem 1 cannot be deduced immediately from the analogous result in [10], as it is not clear how the restrictions on the adversary per each round come into play after applying the minimax theorem. Nevertheless, it is comforting that the restrictions directly translate into the set P of oblivious strategies satisfying the restrictions. Before continuing with our goal of upper-bounding the value of the game, we state the following interesting facts. Proposition 2. There is an oblivious minimax optimal strategy for the adversary, and there is a corresponding minimax optimal strategy for the player that does not depend on its own moves. The latter statement of the proposition is folklore for worst-case learning, yet we have not seen a proof of it in the literature. The proposition holds for all online learning settings with legal restrictions P1:T , encompassing also the no-restrictions setting of worst-case online learning [10]. The result crucially relies on the fact that the objective is external regret. 3 Symmetrization and Random Averages Theorem 1 is a useful representation of the value of the game. As the next step, we upper bound it with an expression which is easier to study. Such an expression is obtained by introducing Rademacher random variables. This process can be termed sequential symmetrization and has been exploited in [1, 10, 11]. The restrictions Pt, however, make sequential symmetrization considerably more involved than in the papers cited above. The main difficulty arises from the fact that the set Pt(x1:t−1) depends on the sequence x1:t−1, and symmetrization (that is, replacement of xs with x′ s) has to be done with care as it affects this dependence. Roughly speaking, in the process of symmetrization, a tangent sequence x′ 1, x′ 2, . . . is introduced such that xt and x′ t are independent and 3 identically distributed given “the past”. 
However, “the past” is itself an interleaving choice of the original sequence and the tangent sequence. Define the “selector function” χ : X ×X ×{±1} 7→X by χ(x, x′, ǫ) = x′ if ǫ = 1 and χ(x, x′, ǫ) = x if ǫ = −1. When xt and x′ t are understood from the context, we will use the shorthand χt(ǫ) := χ(xt, x′ t, ǫ). In other words, χt selects between xt and x′ t depending on the sign of ǫ. Throughout the paper, we deal with binary trees, which arise from symmetrization [10]. Given some set Z, an Z-valued tree of depth T is a sequence z = (z1, . . . , zT ) of T mappings zi : {±1}i−1 7→Z. The T-tuple ǫ = (ǫ1, . . . , ǫT ) ∈{±1}T defines a path. For brevity, we write zt(ǫ) instead of zt(ǫ1:t−1). Given a joint distribution p, consider the “(X × X)T −1 7→P(X × X)”- valued probability tree ρ = (ρ1, . . . , ρT ) defined by ρt(ǫ1:t−1) $ (x1, x′ 1), . . . , (xT −1, x′ T −1)  = (pt(·|χ1(ǫ1), . . . , χt−1(ǫt−1)), pt(·|χ1(ǫ1), . . . , χt−1(ǫt−1))). In other words, the values of the mappings ρt(ǫ) are products of conditional distributions, where conditioning is done with respect to a sequence made from xs and x′ s depending on the sign of ǫs. We note that the difficulty in intermixing the x and x′ sequences does not arise in i.i.d. or worstcase symmetrization. However, in-between these extremes the notational complexity seems to be unavoidable if we are to employ symmetrization and obtain a version of Rademacher complexity. As an example, consider the “left-most” path ǫ = −1 in a binary tree of depth T, where 1 = (1, . . . , 1) is a T-dimensional vector of ones. Then all the selectors χ(xt, x′ t, ǫt) choose the sequence x1, . . . , xT . The probability tree ρ on the “left-most” path is, therefore, defined by the conditional distributions pt(·|x1:t−1); on the path ǫ = 1, the conditional distributions are pt(·|x′ 1:t−1). Slightly abusing the notation, we will write ρt(ǫ) $ (x1, x′ 1), . . . 
, (xt−1, x′ t−1)  for the probability tree since ρt clearly depends only on the prefix up to time t −1. Throughout the paper, it will be understood that the tree ρ is obtained from p as described above. Since all the conditional distributions of p satisfy the restrictions, so do the corresponding distributions of the probability tree ρ. By saying that ρ satisfies restrictions we then mean that p ∈P. Sampling of a pair of X-valued trees from ρ, written as (x, x′) ∼ρ, is defined as the following recursive process: for any ǫ ∈{±1}T , (x1(ǫ), x′ 1(ǫ)) ∼ρ1(ǫ) and (xt(ǫ), x′ t(ǫ)) ∼ρt(ǫ)((x1(ǫ), x′ 1(ǫ)), . . . , (xt−1(ǫ), x′ t−1(ǫ))) for 2 ≤t ≤T (4) To gain a better understanding of the sampling process, consider the first few levels of the tree. The roots x1, x′ 1 of the trees x, x′ are sampled from p1, the conditional distribution for t = 1 given by p. Next, say, ǫ1 = +1. Then the “right” children of x1 and x′ 1 are sampled via x2(+1), x′ 2(+1) ∼p2(·|x′ 1) since χ1(+1) selects x′ 1. On the other hand, the “left” children x2(−1), x′ 2(−1) are both distributed according to p2(·|x1). Now, suppose ǫ1 = +1 and ǫ2 = −1. Then, x3(+1, −1), x′ 3(+1, −1) are both sampled from p3(·|x′ 1, x2(+1)). The proof of Theorem 3 reveals why such intricate conditional structure arises, and Proposition 5 below shows that this structure greatly simplifies for i.i.d. and worst-case situations. Nevertheless, the process described above allows us to define a unified notion of Rademacher complexity for the spectrum of assumptions between the two extremes. Definition 2. The distribution-dependent sequential Rademacher complexity of a function class F ⊆RX is defined as RT (F, p) △= E(x,x′)∼ρEǫ " sup f∈F T X t=1 ǫtf(xt(ǫ)) # where ǫ = (ǫ1, . . . , ǫT ) is a sequence of i.i.d. Rademacher random variables and ρ is the probability tree associated with p. We now prove an upper bound on the value VT (P1:T ) of the game in terms of this distributiondependent sequential Rademacher complexity. 
The result cannot be deduced directly from [10], and it greatly increases the scope of problems whose learnability can now be studied in a unified manner. Theorem 3. The minimax value is bounded as VT (P1:T ) ≤2 sup p∈P RT (F, p). (5) 4 More generally, for any measurable function Mt such that Mt(p, f, x, x′, ǫ) = Mt(p, f, x′, x, −ǫ), VT (P1:T ) ≤2 sup p∈P E(x,x′)∼ρEǫ " sup f∈F T X t=1 ǫt(f(xt(ǫ)) −Mt(p, f, x, x′, ǫ)) # The following corollary provides a natural “centered” version of the distribution-dependent Rademacher complexity. That is, the complexity can be measured by relative shifts in the adversarial moves. Corollary 4. For the game with restrictions P1:T , VT (P1:T ) ≤2 sup p∈P E(x,x′)∼ρEǫ " sup f∈F T X t=1 ǫt  f(xt(ǫ)) −Et−1f(xt(ǫ)) # where Et−1 denotes the conditional expectation of xt(ǫ). Example 1. Suppose F is a unit ball in a Banach space and f(x) = ⟨f, x⟩. Then VT (P1:T ) ≤2 sup p∈P E(x,x′)∼ρEǫ T X t=1 ǫt  xt(ǫ) −Et−1xt(ǫ)  Suppose the adversary plays a simple random walk (e.g., pt(x|x1, . . . , xt−1) = pt(x|xt−1) is uniform on a unit sphere). For simplicity, suppose this is the only strategy allowed by the set P. Then xt(ǫ) −Et−1xt(ǫ) are independent increments when conditioned on the history. Further, the increments do not depend on ǫt. Thus, VT (P1:T ) ≤2E PT t=1 Yt where {Yt} is the corresponding random walk. We now show that the distribution-dependent sequential Rademacher complexity for i.i.d. data is precisely the classical Rademacher complexity, and further show that the distribution-dependent sequential Rademacher complexity is always upper bounded by the worst-case sequential Rademacher complexity defined in [10]. Proposition 5. First, consider the i.i.d. restrictions Pt = {p} for all t, where p is some fixed distribution on X, and let ρ be the process associated with the joint distribution p = pT . Then RT (F, p) = RT (F, p), where RT (F, p) △= Ex1,...,xT ∼pEǫ " sup f∈F T X t=1 ǫtf(xt) # (6) is the classical Rademacher complexity. 
Second, for any joint distribution p, RT(F, p) ≤ RT(F), where

RT(F) ≜ sup_x Eε[ sup_{f∈F} Σ_{t=1}^T εt f(xt(ε)) ]   (7)

is the sequential Rademacher complexity defined in [10]. In the case of hybrid learning, the adversary chooses a sequence of pairs (xt, yt) where the instances xt are i.i.d. but the labels yt are fully adversarial. The distribution-dependent Rademacher complexity in such a hybrid case can be upper bounded by a very natural quantity: a random average where the expectation is taken over the xt's and a supremum is taken over Y-valued trees. The distribution-dependent Rademacher complexity thus becomes a hybrid between the classical Rademacher complexity and the worst-case sequential Rademacher complexity. For more details, see Lemma 17 in the Appendix as another example of an analysis of the distribution-dependent sequential Rademacher complexity. The distribution-dependent sequential Rademacher complexity enjoys many of the nice properties satisfied by both the classical and worst-case Rademacher complexities. As shown in [10], these properties are handy tools for proving upper bounds on the value in various examples. We have: (a) if F ⊆ G, then R(F, p) ≤ R(G, p); (b) R(F, p) = R(conv(F), p); (c) R(cF, p) = |c| R(F, p) for all c ∈ R; (d) for any h, R(F + h, p) = R(F, p), where F + h = {f + h : f ∈ F}. In addition to these properties, upper bounds on R(F, p) can be derived via the sequential covering numbers defined in [10]. This notion of a cover captures the sequential complexity of a function class on a given X-valued tree x. One can then show an analogue of the Dudley integral bound, where the complexity is averaged with respect to the underlying process (x, x′) ∼ ρ.

4 Application: Constrained Adversaries

In this section, we consider adversaries who are deterministically constrained in the sequences of actions they can play. It is often useful to consider scenarios where the adversary is worst case, yet has some budget or constraint to satisfy while picking its actions.
Examples of such scenarios include, for instance, games where the adversary is constrained to make moves that are close in some fashion to the previous move, linear games with bounded variance, and so on. Below we formulate such games quite generally through arbitrary constraints that the adversary has to satisfy on each round. We easily derive several results to illustrate the versatility of the developed framework. For a T round game consider an adversary who is only allowed to play sequences x1, . . . , xT such that at round t the constraint Ct(x1, . . . , xt) = 1 is satisfied, where Ct : X t 7→{0, 1} represents the constraint on the sequence played so far. The constrained adversary can be viewed as a stochastic adversary with restrictions on the conditional distribution at time t given by the set of all Borel distributions on the set Xt(x1:t−1) △= {x ∈X : Ct(x1, . . . , xt−1, x) = 1}. Since this set includes all point distributions on each x ∈Xt, the sequential complexity simplifies in a way similar to worst-case adversaries. We write VT (C1:T ) for the value of the game with the given constraints. Now, assume that for any x1:t−1, the set of all distributions on Xt(x1:t−1) is weakly compact in a way similar to compactness of P. That is, Pt(x1:t−1) satisfy the necessary conditions for the minimax theorem to hold. We have the following corollaries of Theorems 1 and 3. Corollary 6. Let F and X be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Let {Ct : X t−1 7→{0, 1}}T t=1 be the constraints. Then VT (C1:T ) = sup p∈P E " T X t=1 inf ft∈F Ext∼pt [ft(xt)] −inf f∈F T X t=1 f(xt) # (8) where p ranges over all distributions over sequences (x1, . . . , xT ) such that ∀t, Ct(x1:t−1) = 1. Corollary 7. Let the set T be a set of pairs (x, x′) of X-valued trees with the property that for any ǫ ∈ {±1}T and any t ∈ [T], C(χ1(ǫ1), . . . , χt−1(ǫt−1), xt(ǫ)) = C(χ1(ǫ1), . . . , χt−1(ǫt−1), x′ t(ǫ)) = 1 . 
The minimax value is bounded as VT(C1:T) ≤ 2 sup_{(x,x′)∈T} RT(F, p). More generally, for any measurable function Mt such that Mt(f, x, x′, ε) = Mt(f, x′, x, −ε),

VT(C1:T) ≤ 2 sup_{(x,x′)∈T} Eε[ sup_{f∈F} Σ_{t=1}^T εt ( f(xt(ε)) − Mt(f, x, x′, ε) ) ].

Armed with these results, we can recover and extend some known results on online learning against budgeted adversaries. The first result says that if the adversary is not allowed to move by more than σt away from the average of its previous decisions, the player has a strategy that exploits this fact to obtain lower regret. For the ℓ2-norm, such "total variation" bounds have been achieved in [4] up to a log T factor. Our analysis seamlessly incorporates variance measured in arbitrary norms, not just ℓ2. We emphasize that such certificates of learnability are not possible with the analysis of [10].

Proposition 8 (Variance Bound). Consider the online linear optimization setting with F = {f : Ψ(f) ≤ R²} for a λ-strongly convex function Ψ : F → R+ on F, and X = {x : ∥x∥∗ ≤ 1}. Let f(x) = ⟨f, x⟩ for any f ∈ F and x ∈ X. Consider the sequence of constraints {Ct}_{t=1}^T given by Ct(x1, . . . , xt−1, x) = 1 if ∥x − (1/(t−1)) Σ_{τ=1}^{t−1} xτ∥∗ ≤ σt and 0 otherwise. Then

VT(C1:T) ≤ 2√2 R √( λ^{−1} Σ_{t=1}^T σt² ).

In particular, we obtain the following ℓ2 variance bound. Consider the case when Ψ : F → R+ is given by Ψ(f) = ½∥f∥², F = {f : ∥f∥2 ≤ 1} and X = {x : ∥x∥2 ≤ 1}. Consider the constrained game where the move xt played by the adversary at time t satisfies ∥xt − (1/(t−1)) Σ_{τ=1}^{t−1} xτ∥2 ≤ σt. In this case we can conclude that VT(C1:T) ≤ 2√2 √( Σ_{t=1}^T σt² ). We can also derive a variance bound over the simplex. Let Ψ(f) = Σ_{i=1}^d fi log(d fi) be defined over the d-simplex F, and X = {x : ∥x∥∞ ≤ 1}. Consider the constrained game where the move xt played by the adversary at time t satisfies max_{j∈[d]} | xt[j] − (1/(t−1)) Σ_{τ=1}^{t−1} xτ[j] | ≤ σt. For any f ∈ F, Ψ(f) ≤ log(d), and so we conclude that VT(C1:T) ≤ 2√2 √( log(d) Σ_{t=1}^T σt² ).
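A small sketch (ours) of the ℓ2 variance constraint from Proposition 8: it checks that an adversary's moves stay within σt of the running average of past moves, and evaluates the resulting bound 2√2·√(Σt σt²):

```python
import math, random

def variance_constraint_ok(xs, sigmas):
    """C_t = 1 for all t >= 2 iff each move lies within sigma_t (l2)
    of the average of the preceding moves."""
    d = len(xs[0])
    for t in range(1, len(xs)):
        mean = [sum(x[j] for x in xs[:t]) / t for j in range(d)]
        dist = math.sqrt(sum((xs[t][j] - mean[j]) ** 2 for j in range(d)))
        if dist > sigmas[t] + 1e-12:
            return False
    return True

rng = random.Random(0)
T, d = 100, 3
sigmas = [0.1] * T
xs = [[rng.uniform(-0.1, 0.1) for _ in range(d)]]       # first move: unconstrained
for t in range(1, T):
    mean = [sum(x[j] for x in xs) / t for j in range(d)]
    delta = [rng.uniform(-1, 1) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in delta))
    r = sigmas[t] * rng.random() / norm                 # stay strictly within sigma_t
    xs.append([m + r * v for m, v in zip(mean, delta)])

assert variance_constraint_ok(xs, sigmas)
bound = 2 * math.sqrt(2) * math.sqrt(sum(s * s for s in sigmas))
assert 2.8 < bound < 2.9                                # 2*sqrt(2)*sqrt(100*0.01)
```

For this sequence the guaranteed value 2√2·√(Σ σt²) ≈ 2.83 is far below the generic √T rate, illustrating the "certificate of learnability" the restriction buys.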
The next proposition gives a bound whenever the adversary is constrained to choose its decision from a small ball around the previous decision.

Proposition 9 (Slowly-Changing Decisions). Consider the online linear optimization setting where the adversary's move at any time is close to its move during the previous time step. Let F = {f : Ψ(f) ≤ R²}, where Ψ : F → R+ is a λ-strongly convex function on F, and X = {x : ∥x∥∗ ≤ B}. Let f(x) = ⟨f, x⟩ for any f ∈ F and x ∈ X. Consider the sequence of constraints {Ct}_{t=1}^T given by Ct(x1, . . . , xt−1, x) = 1 if ∥x − xt−1∥∗ ≤ δ and 0 otherwise. Then

VT(C1:T) ≤ 2Rδ √(2T/λ).

In particular, consider the case of a Euclidean-norm restriction on the moves. Let Ψ : F → R+ be given by Ψ(f) = ½∥f∥², F = {f : ∥f∥2 ≤ 1} and X = {x : ∥x∥2 ≤ 1}. Consider the constrained game where the move xt played by the adversary at time t satisfies ∥xt − xt−1∥2 ≤ δ. In this case we can conclude that VT(C1:T) ≤ 2δ√(2T). For decision-making on the simplex, we obtain the following result. Let Ψ(f) = Σ_{i=1}^d fi log(d fi) be defined over the d-simplex F, and X = {x : ∥x∥∞ ≤ 1}. Consider the constrained game where the move xt played by the adversary at time t satisfies ∥xt − xt−1∥∞ ≤ δ. Since Ψ(f) ≤ log(d) for any f ∈ F, we conclude that VT(C1:T) ≤ 2δ √(2T log(d)).

5 Application: Smoothed Adversaries

The development of smoothed analysis over the past decade is arguably one of the landmarks in the study of the complexity of algorithms. In contrast to the overly optimistic average-case complexity and the overly pessimistic worst-case complexity, smoothed complexity can be seen as a more realistic measure of an algorithm's performance. In their groundbreaking work, Spielman and Teng [13] showed that the smoothed running-time complexity of the simplex method is polynomial. This result explains the good performance of the method in practice despite its exponential-time worst-case complexity. In this section, we consider the effect of smoothing on learnability.
It is well-known that there is a gap between the i.i.d. and the worst-case scenarios. In fact, we do not need to go far for an example: a simple class of threshold functions on a unit interval is learnable in the i.i.d. supervised learning scenario, yet difficult in the online worst-case model [8, 2, 9]. This fact is reflected in the corresponding combinatorial dimensions: the Vapnik-Chervonenkis dimension is one, whereas the Littlestone dimension is infinite. The proof of the latter fact, however, reveals that the infinite number of mistakes on the part of the player is due to the infinite resolution of the carefully chosen adversarial sequence. We can argue that this infinite precision is an unreasonable assumption on the power of a real-world opponent. The idea of limiting the power of the malicious adversary through perturbing the sequence can be traced back to Posner and Kulkarni [9]. The authors considered on-line learning of functions of bounded variation, but in the so-called realizable setting (that is, when labels are given by some function in the given class). We define the smoothed online learning model as the following T-round interaction between the learner and the adversary. On round t, the learner chooses ft ∈F; the adversary simultaneously chooses xt ∈X, which is then perturbed by some noise st ∼σ, yielding a value ˜xt = ω(xt, st); and the player suffers ft(˜xt). Regret is defined with respect to the perturbed sequence. Here ω : X × S 7→X is some measurable mapping; for instance, additive disturbances can be written as ˜x = ω(x, s) = x + s. If ω keeps xt unchanged, that is ω(xt, st) = xt, the setting is precisely the standard online learning model. In the full information version, we assume that the choice ˜xt is revealed to the player at the end of round t. We now recognize that the setting is nothing but a particular way to restrict the adversary. 
That is, the choice xt ∈X defines a parameter of a mixed strategy from which a actual move ω(xt, st) is drawn; for instance, for additive zero-mean Gaussian noise, xt defines the center of the distribution from which xt + st is drawn. In other words, noise does not allow the adversary to play any desired mixed strategy. 7 The value of the smoothed online learning game (as defined in (1)) can be equivalently written as VT = inf q1 sup x1 E f1∼q1 s1∼σ inf q2 sup x2 E f2∼q2 s2∼σ · · · inf qT sup xT E fT ∼qT sT ∼σ " T X t=1 ft(ω(xt, st)) −inf f∈F T X t=1 f(ω(xt, st)) # where the infima are over qt ∈Q and the suprema are over xt ∈X. Using sequential symmetrization, we deduce the following upper bound on the value of the smoothed online learning game. Theorem 10. The value of the smoothed online learning game is bounded above as VT ≤2 sup x1∈X E s1∼σEǫ1 . . . sup xT ∈X E sT ∼σEǫT " sup f∈F T X t=1 ǫtf(ω(xt, st)) # We now demonstrate how Theorem 10 can be used to show learnability for smoothed learning of threshold functions. First, consider the supervised game with threshold functions on a unit interval (that is, non-homogenous hyperplanes). The moves of the adversary are pairs x = (z, y) with z ∈[0, 1] and y ∈{0, 1}, and the binary-valued function class F is defined by F = {fθ(z, y) = |y −1 {z < θ}| : θ ∈[0, 1]} , (9) that is, every function is associated with a threshold θ ∈[0, 1]. The class F has infinite Littlestone’s dimension and is not learnable in the worst-case online framework. Consider a smoothed scenario, with the z-variable of the adversarial move (z, y) perturbed by an additive uniform noise σ = Unif[−γ/2, γ/2] for some γ ≥0. That is, the actual move revealed to the player at time t is (zt + st, yt), with st ∼σ. Any non-trivial upper bound on regret has to depend on particular noise assumptions, as γ = 0 corresponds to the case with infinite Littlestone dimension. 
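A sketch (ours) of the smoothed threshold game: the adversary picks (zt, yt), the z-coordinate is perturbed by Unif[−γ/2, γ/2] noise, and the learner runs exponential weights over a grid of N candidate thresholds (the grid size here is our choice; the paper's discretization uses N = T^a bins, per Lemma 11). Against any perturbed sequence, the Hedge guarantee bounds this learner's expected regret relative to the best grid threshold by √((T/2) ln N):

```python
import math, random

T, gamma = 500, 0.01
N = 50                                  # threshold grid (our choice, not T^a)
grid = [(i + 0.5) / N for i in range(N)]
rng = random.Random(1)

def loss(theta, z, y):
    """f_theta(z, y) = |y - 1{z < theta}|, as in (9)."""
    return abs(y - (1 if z < theta else 0))

w = [1.0] * N
cum = [0.0] * N
eta = math.sqrt(8 * math.log(N) / T)
expected_loss = 0.0
for _ in range(T):
    z, y = rng.random(), rng.choice([0, 1])                  # adversary's (z_t, y_t)
    zt = min(1.0, max(0.0, z + rng.uniform(-gamma / 2, gamma / 2)))  # smoothing
    Z = sum(w)
    expected_loss += sum(wi / Z * loss(th, zt, y) for wi, th in zip(w, grid))
    for i, th in enumerate(grid):
        l = loss(th, zt, y)
        cum[i] += l
        w[i] *= math.exp(-eta * l)

regret_vs_grid = expected_loss - min(cum)
assert regret_vs_grid <= math.sqrt(T * math.log(N) / 2)      # Hedge guarantee
```

This bounds regret only against the discretized comparators; Lemma 11 is what lets one pass from the grid to all thresholds with high probability.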
For the uniform disturbance, the intuition tells us that noise implies a margin, and we should expect a 1/γ complexity parameter appearing in the bounds. The next lemma quantifies the intuition that additive noise limits precision of the adversary. Lemma 11. Let θ1, . . . , θN be obtained by discretizing the interval [0, 1] into N = T a bins [θi, θi+1) of length T −a, for some a ≥3. Then, for any sequence z1, . . . , zT ∈[0, 1], with probability at least 1 − 1 γT a−2 , no two elements of the sequence z1 + s1, . . . , zT + sT belong to the same interval [θi, θi+1), where s1, . . . , sT are i.i.d. Unif[−γ/2, γ/2]. We now observe that, conditioned on the event in Lemma 11, the upper bound on the value in Theorem 10 is a supremum of N martingale difference sequences! We then arrive at: Proposition 12. For the problem of smoothed online learning of thresholds in 1-D, the value is VT ≤2 + p 2T (4 log T + log(1/γ)) What we found is somewhat surprising: for a problem which is not learnable in the online worstcase scenario, an exponentially small noise added to the moves of the adversary yields a learnable problem. This shows, at least in the given example, that the worst-case analysis and Littlestone’s dimension are brittle notions which might be too restrictive in the real world, where some noise is unavoidable. It is comforting that small additive noise makes the problem learnable! The proof for smoothed learning of half-spaces in higher dimension follows the same route as the one-dimensional exposition. For simplicity, assume the hyperplanes are homogenous and Z = Sd−1 ⊂Rd, Y = {−1, 1}, X = Z ×Y. Define F = {fθ(z, y) = 1 {y ⟨z, θ⟩> 0} : θ ∈Sd−1}, and assume that the noise is distributed uniformly on a square patch with side-length γ on the surface of the sphere Sd−1. We can also consider other distributions, possibly with support on a d-dimensional ball instead. Proposition 13. 
For the problem of smoothed online learning of half-spaces,

V_T = O( √(dT (log(1/γ) + 3(d − 1) log T)) + v_{d−2} · (1/γ)^{3(d−1)} )

where v_{d−2} is a constant depending only on the dimension d. We conclude that half-spaces are online learnable in the smoothed model, since the upper bound of Proposition 13 guarantees the existence of an algorithm which achieves this regret. In fact, for the two examples considered in this section, the Exponential Weights Algorithm on the discretization given by Lemma 11 is a (computationally infeasible) algorithm achieving the bound.

References
[1] J. Abernethy, A. Agarwal, P. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT, 2009.
[2] S. Ben-David, D. Pal, and S. Shalev-Shwartz. Agnostic online learning. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[3] J. O. Berger. Statistical decision theory and Bayesian analysis. Springer, 1985.
[4] E. Hazan and S. Kale. Better algorithms for benign bandits. In SODA, 2009.
[5] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. NIPS, 22, 2008.
[6] A. Lazaric and R. Munos. Hybrid stochastic-adversarial on-line learning. In COLT, 2009.
[7] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer-Verlag, New York, 1991.
[8] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988.
[9] S. Posner and S. Kulkarni. On-line learning of functions of bounded variation under various sampling schemes. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 439–445. ACM, 1993.
[10] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In NIPS, 2010. Full version available at arXiv:1006.1138.
[11] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Beyond regret. In COLT, 2011.
Full version available at arXiv:1011.3168.
[12] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. JMLR, 11:2635–2670, 2010.
[13] D. A. Spielman and S. H. Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385–463, 2004.
[14] A. W. Van Der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series, 1996.
Similarity-based Learning via Data Driven Embeddings

Purushottam Kar, Indian Institute of Technology Kanpur, INDIA (purushot@cse.iitk.ac.in)
Prateek Jain, Microsoft Research India, Bangalore, INDIA (prajain@microsoft.com)

Abstract

We consider the problem of classification using similarity/distance functions over data. Specifically, we propose a framework for defining the goodness of a (dis)similarity function with respect to a given learning task and propose algorithms that have guaranteed generalization properties when working with such good functions. Our framework unifies and generalizes the frameworks proposed by [1] and [2]. An attractive feature of our framework is its adaptability to data: we do not promote a fixed notion of goodness but rather let the data dictate it. We show, by giving theoretical guarantees, that the goodness criterion best suited to a problem can itself be learned, which makes our approach applicable to a variety of domains and problems. We propose a landmarking-based approach to obtaining a classifier from such learned goodness criteria. We then provide a novel diversity-based heuristic to perform task-driven selection of landmark points instead of random selection. We demonstrate the effectiveness of our goodness criteria learning method as well as the landmark selection heuristic on a variety of similarity-based learning datasets and benchmark UCI datasets, on which our method consistently outperforms existing approaches by a significant margin.

1 Introduction

Machine learning algorithms have found applications in diverse domains such as computer vision, bio-informatics and speech recognition. Working in such heterogeneous domains often involves handling data that is not presented as explicit features embedded into vector spaces. However, in many domains, for example co-authorship graphs, it is natural to devise similarity/distance functions over pairs of points.
While classical techniques like decision trees and the linear perceptron cannot handle such data, several modern machine learning algorithms such as the support vector machine (SVM) can be kernelized and are thereby capable of using kernels or similarity functions. However, most of these algorithms require the similarity functions to be positive semi-definite (PSD), which essentially implies that the similarity stems from an (implicit) embedding of the data into a Hilbert space. Unfortunately, in many domains the most natural notion of similarity does not satisfy this condition; moreover, verifying this condition is usually a non-trivial exercise. Take for example the case of images, on which the most natural notions of distance (Euclidean, Earth-mover) [3] do not form PSD kernels. Co-authorship graphs give another such example. Consequently, there have been efforts to develop algorithms that do not make assumptions about the PSD-ness of the similarity functions used. One can discern three main approaches in this area. The first approach tries to coerce a given similarity measure into a PSD one by either clipping or shifting the spectrum of the kernel matrix [4, 5]. However, these approaches are mostly restricted to transductive settings and are not applicable to large-scale problems due to eigenvector computation requirements. The second approach consists of algorithms that either adapt classical methods like k-NN to handle non-PSD similarity/distance functions and consequently offer slow test times [5], or are forced to solve non-convex formulations [6, 7]. The third approach, which has been investigated recently in a series of papers [1, 2, 8, 9], uses the similarity function to embed the domain into a low-dimensional Euclidean space. More specifically, these algorithms choose landmark points in the domain which then give the embedding.
Assuming a certain "goodness" property (that is formally defined) for the similarity function, these models offer both generalization guarantees, in terms of how well-suited the similarity function is to the classification task, and the ability to use fast algorithmic techniques such as linear SVM [10] on the landmarked space. The model proposed by Balcan and Blum in [1] gives sufficient conditions for a similarity function to be well suited to such a landmarking approach. Wang et al. in [2], on the other hand, provide goodness conditions for dissimilarity functions that enable landmarking algorithms. Informally, a similarity (or distance) function can be said to be good if, in some sense, points with similar labels are closer to each other than points with different labels. Both the models described above restrict themselves to a fixed goodness criterion, which need not hold for the underlying data. We observe that this might be too restrictive in many situations and present a framework that allows us to tune the goodness criterion itself to the classification problem at hand. Our framework consequently unifies and generalizes those presented in [1] and [2]. We first prove generalization bounds corresponding to landmarked embeddings under a fixed goodness criterion. We then provide a uniform-convergence bound that enables us to learn the best goodness criterion for a given problem. We further generalize our framework by giving it the ability to incorporate any Lipschitz loss function into our goodness criterion, which allows us to give guarantees for the use of various algorithms such as C-SVM and logistic regression on the landmarked space. Now, similar to [1, 2], our framework requires random sampling of training points to create the embedding space¹. However, in practice, random sampling is inefficient and requires sampling of a large number of points to form a useful embedding, thereby increasing training and test time.
To address this issue, [2] proposes a heuristic to select the points that are to be used as landmarks. However, their scheme is tied to their optimization algorithm and is computationally inefficient for large-scale data. In contrast, we propose a general heuristic for selecting informative landmarks based on a novel notion of diversity, which can then be applied to any instantiation of our model. Finally, we apply our methods to a variety of benchmark datasets for similarity learning as well as ones from the UCI repository. We empirically demonstrate that our learning model and landmark selection heuristic consistently offer significant improvements over the existing approaches. In particular, for a small number of landmark points, which is a practically important scenario as it is expensive to compute similarity function values at test time, our method provides, on average, accuracy boosts of up to 5% over existing methods. We also note that our methods can be applied on top of any strategy used to learn the similarity measure (e.g. MKL techniques [11]) or the distance measure (e.g. [12]) itself. Akin to [1], our techniques can also be extended to learn a combination of (dis)similarity functions, but we do not explore these extensions in this paper.

2 Methodology

Let D be a fixed but unknown distribution over the labeled input domain X and let ℓ : X → {−1, +1} be a labeling over the domain. Given a (potentially non-PSD) similarity function² K : X × X → R, the goal is to learn a classifier ℓ̂ : X → {−1, +1} from a finite number of i.i.d. samples from D that has bounded generalization error over D. Now, learning a reasonable classifier seems unlikely if the given similarity function does not have any inherent "goodness" property. Intuitively, the goodness of a similarity function should be its suitability to the classification task at hand. For PSD kernels, the notion of goodness is defined in terms of the margin offered in the RKHS [13].
However, a more basic requirement is that the similarity function should preserve affinities among similarly labeled points; that is to say, a good similarity function should not, on average, assign higher similarity values to dissimilarly labeled points than to similarly labeled points. This intuitive notion of goodness turns out to be rather robust in the sense that all PSD kernels that offer a good margin in their respective RKHSs satisfy some form of this goodness criterion as well [14].

Recently there has been some interest in studying different realizations of this general notion of goodness and developing corresponding algorithms that allow for efficient learning with similarity/distance functions. Balcan and Blum [1] present a goodness criterion in which a good similarity function is considered to be one that, for most points, assigns a greater average similarity to similarly labeled points than to dissimilarly labeled points. More specifically, a similarity function is (ϵ, γ)-good if there exists a weighing function w : X → R such that at least a (1 − ϵ) probability mass of examples x ∼ D satisfies

E_{x′∼D} [w(x′) K(x, x′) | ℓ(x′) = ℓ(x)] ≥ E_{x′∼D} [w(x′) K(x, x′) | ℓ(x′) ≠ ℓ(x)] + γ,     (1)

where, instead of average similarity, one considers an average weighted similarity to allow the definition to be more general. Wang et al. [2] define a distance function d to be good if a large fraction of the domain is, on average, closer to similarly labeled points than to dissimilarly labeled points. They allow these averages to be calculated based on some distribution distinct from D, one that may be more suited to the learning problem.

¹Throughout the paper, we use the terms embedding space and landmarked space interchangeably.
²Results described in this section hold for distance functions as well; we present results with respect to similarity functions for the sake of simplicity.
However, it turns out that their definition is equivalent to one in which one again assigns weights to domain elements, as done by [1], and the following holds:

E_{x′,x′′∼D×D} [w(x′) w(x′′) sgn(d(x, x′′) − d(x, x′)) | ℓ(x′) = ℓ(x), ℓ(x′′) ≠ ℓ(x)] > γ     (2)

Assuming their respective goodness criteria, [1] and [2] provide efficient algorithms to learn classifiers with bounded generalization error. However, these notions of goodness with a single fixed criterion may be too restrictive, in the sense that the data and the (dis)similarity function may not satisfy the underlying criterion. This is, for example, likely in situations with high intra-class variance. Thus there is a need to make the goodness criterion more flexible and data-dependent. To this end, we unify and generalize both the above criteria to give a notion of goodness that is more data-dependent. Although the above goodness criteria (1) and (2) seem disparate at first, they can be shown to be special cases of a generalized framework where an antisymmetric function is used to compare intra- and inter-class affinities. We use this observation to define our novel goodness criterion using arbitrary bounded antisymmetric functions, which we refer to as transfer functions. This allows us to define a family of goodness criteria of which (1) and (2) form special cases ((1) uses the identity function and (2) uses the sign function as the transfer function). Moreover, the resulting definition of a good similarity function is more flexible and data-dependent. In the rest of the paper we shall always assume that our similarity functions are normalized, i.e. for the domain of interest X, sup_{x,y∈X} K(x, y) ≤ 1.

Definition 1 (Good Similarity Function).
A similarity function K : X × X → R is said to be an (ϵ, γ, B)-good similarity for a learning problem, where ϵ, γ, B > 0, if for some antisymmetric transfer function f : R → R and some weighing function w : X × X → [−B, B], at least a (1 − ϵ) probability mass of examples x ∼ D satisfies

E_{x′,x′′∼D×D} [w(x′, x′′) f(K(x, x′) − K(x, x′′)) | ℓ(x′) = ℓ(x), ℓ(x′′) ≠ ℓ(x)] ≥ C_f γ     (3)

where C_f = sup_{x,x′∈X} f(K(x, x′)) − inf_{x,x′∈X} f(K(x, x′)).

As mentioned before, the above goodness criterion generalizes the previous notions of goodness³ and is adaptive to changes in data, since it allows us, as shall be shown later, to learn the best possible criterion for a given classification task by choosing the most appropriate transfer function from a parameterized family of functions. We stress that the property of antisymmetry for the transfer function is crucial to the definition in order to provide a uniform treatment to points of all classes, as will be evident in the proof⁴ of Theorem 2. As in [1, 2], our goodness criterion lends itself to a simple learning algorithm which consists of choosing a set of d random pairs of points from the domain, P = {(x_i^+, x_i^-)}_{i=1}^d (which we refer to as landmark pairs), and defining an embedding of the domain into a landmarked space using these landmarks:

Φ_L : X → R^d,   Φ_L(x) = ( f(K(x, x_i^+) − K(x, x_i^-)) )_{i=1}^d ∈ R^d.

The advantage of performing this embedding is the guaranteed existence of a large-margin classifier in the landmarked space, as shown below.

³We refer the reader to the supplementary material (Section 2) for a discussion.
⁴Due to lack of space we relegate all proofs to the supplementary material.

Theorem 2.
If K is an (ϵ, γ, B)-good similarity with respect to transfer function f and weight function w, then for any ϵ₁ > 0, with probability at least 1 − δ over the choice of d = (8/γ²) ln(2/δϵ₁) positive and negative samples, {x_i^+}_{i=1}^d ⊂ D^+ and {x_i^-}_{i=1}^d ⊂ D^- respectively, the classifier h(x) = sgn[g(x)], where

g(x) = (1/d) Σ_{i=1}^d w(x_i^+, x_i^-) f(K(x, x_i^+) − K(x, x_i^-)),

has error no more than ϵ + ϵ₁ at margin γ/2.

However, there are two hurdles to obtaining this large-margin classifier. Firstly, the existence of this classifier itself is predicated on the use of the correct transfer function, something which is unknown. Secondly, even if an optimal transfer function is known, the above formulation cannot be converted into an efficient learning algorithm for discovering the (unknown) weights, since the formulation seeks to minimize the number of misclassifications, which is an intractable problem in general. We overcome these two hurdles by proposing a nested learning problem. First of all, we assume that for some fixed loss function L, given any transfer function and any set of landmark pairs, it is possible to obtain a large-margin classifier in the corresponding landmarked space that minimizes L. Having made this assumption, we address below the issue of learning the optimal transfer function for a given learning task. However, as we have noted before, this assumption is not valid for arbitrary loss functions. This is why, subsequently in Section 2.2, we shall show it to be valid for a large class of loss functions by incorporating surrogate loss functions into our goodness criterion.

2.1 Learning the transfer function

In this section we present results that allow us to learn a near-optimal transfer function from a family of transfer functions.
We shall assume, for some fixed loss function L, the existence of an efficient routine, which we refer to as TRAIN, that shall return, for any landmarked space indexed by a set of landmark pairs P, a large-margin classifier minimizing L. The routine TRAIN is allowed to make use of additional training data to come up with this classifier. An immediate algorithm for choosing the best transfer function is to simply search the set of possible transfer functions (in an algorithmically efficient manner) and choose the one offering lowest training error. We show here that given enough landmark pairs, this simple technique, which we refer to as FTUNE (see Algorithm 2), is guaranteed to return a near-best transfer function. For this we prove a uniform-convergence type guarantee on the space of transfer functions.

Let F ⊂ [−1, 1]^R be a class of antisymmetric functions and W = [−B, B]^{X×X} be a class of weight functions. For two real-valued functions f and g defined on X, let ∥f − g∥_∞ := sup_{x∈X} |f(x) − g(x)|, and let B_∞(f, r) := {f′ ∈ F | ∥f − f′∥_∞ < r}. Let L be a C_L-Lipschitz loss function. Let P = {(x_i^+, x_i^-)}_{i=1}^d be a set of (random) landmark pairs. For any f ∈ F, w ∈ W, define

G_{(f,w)}(x) = E_{x′,x′′∼D×D} [w(x′, x′′) f(K(x, x′) − K(x, x′′)) | ℓ(x′) = ℓ(x), ℓ(x′′) ≠ ℓ(x)]

g_{(f,w)}(x) = (1/d) Σ_{i=1}^d w(x_i^+, x_i^-) f(K(x, x_i^+) − K(x, x_i^-))

Theorem 5 (see Section 2.2) guarantees us that for any fixed f and any ϵ₁ > 0, if d is large enough then E_x[L(g_{(f,w)}(x))] ≤ E_x[L(G_{(f,w)}(x))] + ϵ₁. We now show that a similar result holds even if one is allowed to vary f. Before stating the result, we develop some notation. For any transfer function f and arbitrary choice of landmark pairs P, let w_{(g,f)} be the best weighing function for this choice of transfer function and landmark pairs, i.e. let w_{(g,f)} = arg min_{w∈[−B,B]^d} E_{x∼D}[L(g_{(f,w)}(x))]⁵. Similarly, let w_{(G,f)} be the best weighing function corresponding to G, i.e. w_{(G,f)} = arg min_{w∈W} E_{x∼D}[L(G_{(f,w)}(x))].
Then we can ensure the following:

Theorem 3. Let F be a compact class of transfer functions with respect to the infinity norm and let ϵ₁, δ > 0. Let N(F, r) be the size of the smallest ϵ-net over F with respect to the infinity norm at scale r = ϵ₁/(4C_L B). Then if one chooses d = (64B²C_L²/ϵ₁²) ln(16B · N(F, r)/(δϵ₁)) random landmark pairs, we have the following with probability greater than (1 − δ):

sup_{f∈F} [ E_{x∼D}[L(g_{(f,w_{(g,f)})}(x))] − E_{x∼D}[L(G_{(f,w_{(G,f)})}(x))] ] ≤ ϵ₁

This result tells us that in a large enough landmarked space we shall, for each function f ∈ F, recover close to the best classifier possible for that transfer function. Thus, if we iterate over the set of transfer functions (or use some gradient-descent-based optimization routine), we are bound to select a transfer function that is capable of giving a classifier that is close to the best.

⁵Note that the function g_{(f,w)}(x) is dictated by the choice of the set of landmark pairs P.

2.2 Working with surrogate loss functions

The formulation of a good similarity function suggests a simple learning algorithm that involves the construction of an embedding of the domain into a landmarked space on which the existence of a large-margin classifier having low misclassification rate is guaranteed. However, in order to exploit this guarantee we would have to learn the weights w(x_i^+, x_i^-) associated with this classifier by minimizing the empirical misclassification rate on some training set. Unfortunately, this problem is not only intractable but also hard to solve approximately [15, 16]. Thus what we require is for the landmarked space to admit a classifier that has low error with respect to a loss function that can also be efficiently minimized on any training set. In such a situation, minimizing the loss on a random training set would, with very high probability, give us weights that give similar performance guarantees to the ones used in the goodness criterion.
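The overall pipeline (landmark pairs, transfer-function embedding, surrogate-loss minimization) can be made concrete in a few lines. The snippet below is a minimal sketch under assumed data and constants, not the authors' implementation: it uses a Gaussian similarity, a ramp transfer function, and plain gradient descent on the logistic surrogate loss to learn the weights w.

```python
import numpy as np

# Illustrative sketch: build Phi_L(x) = ( f(K(x, x_i^+) - K(x, x_i^-)) )_{i=1..d}
# for randomly drawn positive/negative landmark pairs, then minimize a
# surrogate (logistic) loss over the weights w by gradient descent.
rng = np.random.default_rng(1)

def gaussian_sim(A, B, width=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def ramp(t, slope=5.0):                      # antisymmetric transfer function
    return np.clip(slope * t, -1.0, 1.0)

# Two Gaussian blobs as a toy binary problem (an assumption for the demo).
Xp = rng.normal(loc=+1.0, scale=0.5, size=(60, 2))
Xn = rng.normal(loc=-1.0, scale=0.5, size=(60, 2))
X = np.vstack([Xp, Xn])
y = np.hstack([np.ones(60), -np.ones(60)])

d = 20                                       # number of landmark pairs
pos = Xp[rng.choice(60, d)]                  # x_i^+
neg = Xn[rng.choice(60, d)]                  # x_i^-
Phi = ramp(gaussian_sim(X, pos) - gaussian_sim(X, neg))   # n x d embedding

w = np.zeros(d)
for _ in range(300):                         # gradient descent on logistic loss
    margins = y * (Phi @ w)
    grad = -(Phi * (y / (1 + np.exp(margins)))[:, None]).mean(0)
    w -= 0.5 * grad

acc = np.mean(np.sign(Phi @ w) == y)
print(acc)
```

In practice one would replace the hand-rolled descent with any off-the-shelf linear solver operating on the embedded points, which is exactly what makes the landmarking approach fast at training time.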
With a similar objective in mind, [1] offers variants of its goodness criterion tailored to the hinge loss function, which can be efficiently optimized on large training sets (for example using LIBSVM [17]). Here we give a general notion of goodness that can be tailored to any arbitrary Lipschitz loss function.

Definition 4. A similarity function K : X × X → R is said to be an (ϵ, B)-good similarity for a learning problem with respect to a loss function L : R → R⁺, where ϵ > 0, if for some transfer function f : R → R and some weighing function w : X × X → [−B, B], we have E_{x∼D}[L(G(x))] ≤ ϵ, where

G(x) = E_{x′,x′′∼D×D} [w(x′, x′′) f(K(x, x′) − K(x, x′′)) | ℓ(x′) = ℓ(x), ℓ(x′′) ≠ ℓ(x)]

One can see that taking the loss function to be L(x) = 1_{x < C_f γ} gives us Equation 3, which defines a good similarity under the 0-1 loss function. It turns out that we can, for any Lipschitz loss function, give similar guarantees on the performance of the classifier in the landmarked space.

Theorem 5. If K is an (ϵ, B)-good similarity function with respect to a C_L-Lipschitz loss function L, then for any ϵ₁ > 0, with probability at least 1 − δ over the choice of d = (16B²C_L²/ϵ₁²) ln(4B/δϵ₁) positive and negative samples from D⁺ and D⁻ respectively, the expected loss of the classifier g(x) with respect to L satisfies E_x[L(g(x))] ≤ ϵ + ϵ₁, where g(x) = (1/d) Σ_{i=1}^d w(x_i^+, x_i^-) f(K(x, x_i^+) − K(x, x_i^-)).

If the loss function is the hinge loss at margin γ, then C_L = 1/γ. The 0-1 loss function and the loss function L(x) = 1_{x < γ} (implicitly used in Definition 1 and Theorem 2) are not Lipschitz, and hence this proof technique does not apply to them.

2.3 Selecting informative landmarks

Recall that the generalization guarantees described in the previous section rely on random selection of landmark pairs from a fixed distribution over the domain.
However, in practice, a totally random selection might require one to select a large number of landmarks, thereby leading to an inefficient classifier in terms of training as well as test times. For typical domains such as computer vision, similarity function computation is an expensive task, and hence selection of a small number of landmarks should lead to a significant improvement in test times. For this reason, we propose a landmark pair selection heuristic which we call DSELECT (see Algorithm 1).

Algorithm 1 DSELECT
Require: A training set T, landmarking size d.
Ensure: A set of d landmark pairs/singletons.
 1: L ← get-random-element(T), P_FTUNE ← ∅
 2: for j = 2 to d do
 3:   z ← arg min_{x∈T} Σ_{x′∈L} K(x, x′)
 4:   L ← L ∪ {z}, T ← T \ {z}
 5: end for
 6: for j = 1 to d do
 7:   Sample z₁, z₂ randomly from L with replacement s.t. ℓ(z₁) = 1, ℓ(z₂) = −1
 8:   P_FTUNE ← P_FTUNE ∪ {(z₁, z₂)}
 9: end for
10: return L (for BBS), P_FTUNE (for FTUNE)

Algorithm 2 FTUNE
Require: A family of transfer functions F, a similarity function K and a loss function L.
Ensure: An optimal transfer function f* ∈ F.
 1: Select d landmark pairs P.
 2: for all f ∈ F do
 3:   w_f ← TRAIN(P, L), L_f ← L(w_f)
 4: end for
 5: f* ← arg min_{f∈F} L_f
 6: return (f*, w_{f*})

The heuristic generalizes naturally to multi-class problems and can also be applied to the classification model of Balcan-Blum that uses landmark singletons instead of pairs. At the core of our heuristic is a novel notion of diversity among landmarks. Assuming K is a normalized similarity kernel, we call a set of points S ⊂ X diverse if the average inter-point similarity is small, i.e.

(1/(|S|(|S| − 1))) Σ_{x,y∈S, x≠y} K(x, y) ≪ 1

(in case we are working with a distance kernel we would instead require large inter-point distances). The key observation behind DSELECT is that a non-diverse set of landmarks would cause all data points to receive identical embeddings, and linear separation would be impossible.
Small inter-landmark similarity, on the other hand, would imply that the landmarks are well spread in the domain and can capture novel patterns in the data. Similar notions of diversity have been used in the past for ensemble classifiers [18] and k-NN classifiers [5]. Here we use this notion to achieve a better embedding into the landmarked space. Experimental results demonstrate that the heuristic offers significant performance improvements over random landmark selection (see Figure 1). One can easily extend Algorithm 1 to multiclass problems by selecting a fixed number of landmarks from each class.

3 Empirical results

In this section, we empirically study the performance of our proposed methods on a variety of benchmark datasets. We refer to the algorithmic formulation presented in [1] as BBS and its augmentation using DSELECT as BBS+D. We refer to the formulation presented in [2] as DBOOST. We refer to our transfer function learning based formulation as FTUNE and its augmentation using DSELECT as FTUNE+D. In multi-class classification scenarios we use a one-vs-all formulation, which presents us with an opportunity to further exploit the transfer function by learning a separate transfer function per class (i.e. per one-vs-all problem). We shall refer to our formulation using a single (resp. multiple) transfer function as FTUNE+D-S (resp. FTUNE+D-M). We take the class of ramp functions indexed by a slope parameter as our set of transfer functions, using 6 different values of the slope parameter: {1, 5, 10, 50, 100, 1000}. Note that these functions (approximately) include both the identity function (used by [1]) and the sign function (used by [2]). Our goal in this section is two-fold: 1) to show that our FTUNE method is able to learn a more suitable transfer function for the underlying data than the existing methods BBS and DBOOST, and 2) to show that our diversity-based heuristic for landmark selection performs better than random selection.
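The greedy diversity step of DSELECT (Algorithm 1, lines 1-5) translates directly into code. The sketch below is our illustration, not the authors' implementation: the toy clustered data, the Gaussian similarity and all names are assumptions, but the selection rule is exactly the one above (add the point whose total similarity to the landmarks chosen so far is smallest), followed by the pair-sampling step of lines 6-9.

```python
import numpy as np

def dselect(K, labels, d, rng):
    """K: n x n normalized similarity matrix; labels in {+1, -1}.
    Returns d diverse landmark indices and d (positive, negative) pairs."""
    n = K.shape[0]
    remaining = list(range(n))
    seed = int(rng.integers(n))          # line 1: random seed landmark
    L = [seed]
    remaining.remove(seed)
    for _ in range(d - 1):               # lines 2-5: greedy diversity step
        # total similarity of each remaining point to current landmarks
        scores = K[np.ix_(remaining, L)].sum(axis=1)
        z = remaining[int(np.argmin(scores))]
        L.append(z)
        remaining.remove(z)
    pos = [i for i in L if labels[i] == 1]
    neg = [i for i in L if labels[i] == -1]
    # lines 6-9: sample one positive and one negative landmark per pair
    pairs = [(pos[rng.integers(len(pos))], neg[rng.integers(len(neg))])
             for _ in range(d)]
    return L, pairs

rng = np.random.default_rng(0)
# Toy data: two tight, well-separated clusters with opposite labels. A diverse
# selection should alternate between clusters rather than exhaust one first.
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
labels = np.array([1] * 10 + [-1] * 10)
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
L, pairs = dselect(K, labels, d=6, rng=rng)
print(L, pairs)
```

Because cross-cluster similarities are nearly zero here, the second landmark is always drawn from the opposite cluster, which is the behavior the diversity notion is designed to produce.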
To this end, we perform experiments on a few benchmark datasets for learning with similarity (non-PSD) functions [5] as well as on a variety of standard UCI datasets where the similarity function used is the Gaussian kernel function. For our experiments, we implemented our methods FTUNE and FTUNE+D as well as BBS and BBS+D using MATLAB, while using LIBLINEAR [10] for SVM classification. For DBOOST, we use the C++ code provided by the authors of [2]. On all the datasets we randomly selected a fixed percentage of data for training, validation and testing. Except for DBOOST, we selected the SVM penalty constant C from the set {1, 10, 100, 1000} using validation. For each method and dataset, we report classification accuracies averaged over 20 runs. We compare accuracies obtained by different methods using a t-test at the 95% significance level.

(a) 30 Landmarks
Dataset/Method   BBS          DBOOST       FTUNE+D-S
AmazonBinary     0.73(0.13)   0.77(0.10)   0.84(0.12)
AuralSonar       0.82(0.08)   0.81(0.08)   0.80(0.08)
Patrol           0.51(0.06)   0.34(0.11)   0.58(0.06)
Voting           0.95(0.03)   0.94(0.03)   0.94(0.04)
Protein          0.98(0.02)   1.00(0.01)   0.98(0.02)
Mirex07          0.12(0.01)   0.21(0.03)   0.28(0.03)
Amazon47         0.39(0.06)   0.07(0.04)   0.61(0.08)
FaceRec          0.20(0.04)   0.12(0.03)   0.63(0.04)

(b) 300 Landmarks
Dataset/Method   BBS          DBOOST       FTUNE+D-S
AmazonBinary     0.78(0.11)   0.82(0.10)   0.88(0.07)
AuralSonar       0.88(0.06)   0.85(0.07)   0.85(0.07)
Patrol           0.79(0.05)   0.55(0.12)   0.79(0.07)
Voting           0.97(0.02)   0.97(0.01)   0.97(0.02)
Protein          0.98(0.02)   0.99(0.02)   0.98(0.02)
Mirex07          0.17(0.02)   0.31(0.04)   0.35(0.02)
Amazon47         0.40(0.13)   0.07(0.05)   0.66(0.07)
FaceRec          0.27(0.05)   0.19(0.03)   0.64(0.04)

Table 1: Accuracies for benchmark similarity learning datasets for embedding dimensionality = 30, 300. Bold numbers indicate the best performance at the 95% confidence level.
[Figure 1: four panels (AmazonBinary, Amazon47, Mirex07, FaceRec) plotting accuracy against the number of landmarks (50-300) for FTUNE+D, FTUNE, BBS+D, BBS and DBOOST.]

Figure 1: Accuracy obtained by various methods on four different datasets as the number of landmarks used increases. Note that for a small number of landmarks (30, 50) our diversity-based landmark selection criterion increases accuracy for both BBS and our method FTUNE-S significantly.

3.1 Similarity learning datasets

First, we conduct experiments on a few similarity learning datasets [5]; these datasets provide a (non-PSD) similarity matrix along with class labels. For each of the datasets, we randomly select 70% of the data for training, 10% for validation and the remaining for testing purposes. We then apply our FTUNE-S, FTUNE+D-S and BBS+D methods along with BBS and DBOOST with varying numbers of landmark pairs. Note that we do not apply our FTUNE-M method to these datasets as it overfits heavily on them, since they are typically small in size. We first compare the accuracy achieved by FTUNE+D-S with the existing methods. Table 1 compares the accuracies achieved by our FTUNE+D-S method with those of BBS and DBOOST over different datasets when using landmark sets of sizes 30 and 300. Numbers in brackets denote standard deviation over different runs. Note that in both tables FTUNE+D-S is one of the best methods (up to the 95% significance level) on all but one dataset.
Furthermore, for datasets with a large number of classes such as Amazon47 and FaceRec, our method outperforms BBS and DBOOST by at least 20%. Also, note that some of the datasets have multiple bold-faced methods, which means that the two-sample t-test (at the 95% level) rejects the hypothesis that their means are different. Next, we evaluate the effectiveness of our landmark selection criterion for both BBS and our method. Figure 1 shows the accuracies achieved by various methods on four different datasets with increasing numbers of landmarks. Note that on all the datasets, our diversity-based landmark selection criterion increases the classification accuracy by around 5-6% for small numbers of landmarks.

3.2 UCI benchmark datasets

We now compare our FTUNE method against existing methods on a variety of UCI datasets [19]. We ran experiments with FTUNE and FTUNE+D, but the latter did not provide any advantage, so for lack of space we drop it from our presentation and only show results for FTUNE-S (FTUNE with a single transfer function) and FTUNE-M (FTUNE with one transfer function per class). Similar to [2], we use the Gaussian kernel function as the similarity function for evaluating our method. We set the "width" parameter in the Gaussian kernel to be the mean of all pairwise training data distances, a standard heuristic. For all the datasets, we randomly select 50% of the data for training, 20% for validation and the remaining for testing. We report accuracy values averaged over 20 runs for each method with varying numbers of landmark pairs.
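The bandwidth heuristic mentioned above is simple to state in code. In this sketch the function names and the exact form of the Gaussian, exp(−∥x−y∥²/(2σ²)), are our assumptions; the text only specifies that the width is the mean pairwise training distance.

```python
import numpy as np

def mean_pairwise_width(X):
    """Mean Euclidean distance over all distinct pairs of training points."""
    diff = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    n = X.shape[0]
    return dists.sum() / (n * (n - 1))   # exclude the zero diagonal

def gaussian_kernel(X, width):
    """Gaussian similarity matrix K(x, y) = exp(-||x - y||^2 / (2 width^2))."""
    diff = X[:, None, :] - X[None, :, :]
    return np.exp(-(diff ** 2).sum(-1) / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))             # stand-in for a training set
w = mean_pairwise_width(X)
K = gaussian_kernel(X, w)
print(w, K.shape)
```

The resulting K is normalized (values in (0, 1], diagonal equal to 1), matching the normalization assumption sup K(x, y) ≤ 1 made in Section 2.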
(a) 30 Landmarks

Dataset/Method   BBS         DBOOST      FTUNE-S     FTUNE-M
Cod-rna          0.93(0.01)  0.89(0.01)  0.93(0.01)  0.93(0.01)
Isolet           0.81(0.01)  0.67(0.01)  0.84(0.01)  0.83(0.01)
Letters          0.67(0.02)  0.58(0.01)  0.69(0.01)  0.68(0.02)
Magic            0.82(0.01)  0.81(0.01)  0.84(0.01)  0.84(0.01)
Pen-digits       0.94(0.01)  0.93(0.01)  0.97(0.01)  0.97(0.00)
Nursery          0.91(0.01)  0.91(0.01)  0.90(0.01)  0.90(0.00)
Faults           0.70(0.01)  0.68(0.02)  0.70(0.02)  0.71(0.02)
Mfeat-pixel      0.94(0.01)  0.91(0.01)  0.95(0.01)  0.94(0.01)
Mfeat-zernike    0.79(0.02)  0.72(0.02)  0.79(0.02)  0.79(0.02)
Opt-digits       0.92(0.01)  0.89(0.01)  0.94(0.01)  0.94(0.01)
Satellite        0.85(0.01)  0.86(0.01)  0.86(0.01)  0.87(0.01)
Segment          0.90(0.01)  0.93(0.01)  0.92(0.01)  0.92(0.01)

(b) 300 Landmarks

Dataset/Method   BBS         DBOOST      FTUNE-S     FTUNE-M
Cod-rna          0.94(0.00)  0.93(0.00)  0.94(0.00)  0.94(0.00)
Isolet           0.91(0.01)  0.89(0.01)  0.93(0.01)  0.93(0.00)
Letters          0.72(0.01)  0.84(0.01)  0.83(0.01)  0.83(0.01)
Magic            0.84(0.01)  0.84(0.00)  0.85(0.01)  0.85(0.01)
Pen-digits       0.96(0.00)  0.99(0.00)  0.99(0.00)  0.99(0.00)
Nursery          0.93(0.01)  0.97(0.00)  0.96(0.00)  0.97(0.00)
Faults           0.72(0.02)  0.74(0.02)  0.73(0.02)  0.73(0.02)
Mfeat-pixel      0.96(0.01)  0.97(0.01)  0.97(0.01)  0.97(0.01)
Mfeat-zernike    0.81(0.01)  0.79(0.01)  0.82(0.02)  0.82(0.01)
Opt-digits       0.95(0.01)  0.97(0.00)  0.98(0.00)  0.98(0.00)
Satellite        0.85(0.01)  0.90(0.01)  0.89(0.01)  0.89(0.01)
Segment          0.90(0.01)  0.96(0.01)  0.96(0.01)  0.96(0.01)

Table 2: Accuracies for the Gaussian kernel with embedding dimensionality 30. Bold numbers indicate the best performance at the 95% confidence level. Note that both our methods, especially FTUNE-S, perform significantly better than the existing methods.
[Figure 2 here: four panels (Isolet, Letters, Pen-digits, Opt-digits) plotting Accuracy vs. Number of Landmarks for FTUNE (Single), FTUNE (Multiple), BBS, and DBOOST.]

Figure 2: Accuracy achieved by various methods on four different UCI repository datasets as the number of landmarks used increases. Note that both FTUNE-S and FTUNE-M perform significantly better than BBS and DBOOST for small numbers of landmarks (30, 50).

Table 2 compares the accuracies obtained by our FTUNE-S and FTUNE-M methods with those of BBS and DBOOST when applied to different UCI benchmark datasets. Note that FTUNE-S is one of the best on most of the datasets for both landmarking sizes. Also, BBS performs reasonably well for small landmarking sizes while DBOOST performs well for large landmarking sizes. In contrast, our method consistently outperforms the existing methods in both scenarios. Next, we study the accuracies obtained by our method for different landmarking sizes. Figure 2 shows the accuracies obtained by various methods as the number of landmarks selected increases. Note that the accuracy curve of our method dominates the accuracy curves of all the other methods, i.e., our method is consistently better than the existing methods for all the landmarking sizes considered.

3.3 Discussion

We note that since FTUNE selects its output by way of validation, it is susceptible to over-fitting on small datasets but, at the same time, capable of giving performance boosts on large ones.
We observe a similar trend in our experiments: on smaller datasets (such as those in Table 1, with average dataset size 660), FTUNE over-fits and performs worse than BBS and DBOOST. However, even in these cases, DSELECT (intuitively) removes redundancies in the landmark points, thus allowing FTUNE to recover the best transfer function. In contrast, for larger datasets like those in Table 2 (average size 13200), FTUNE is itself able to recover better transfer functions than the baseline methods, and hence both FTUNE-S and FTUNE-M perform significantly better than the baselines. Note that DSELECT does not provide any advantage here: since the dataset sizes are large, greedy selection actually ends up hurting accuracy.

Acknowledgments

We thank the authors of [2] for providing us with the C++ code of their implementation. P. K. is supported by Microsoft Corporation and Microsoft Research India under a Microsoft Research India Ph.D. fellowship award. Most of this work was done while P. K. was visiting Microsoft Research Labs India, Bangalore.

References

[1] Maria-Florina Balcan and Avrim Blum. On a Theory of Learning with Similarity Functions. In International Conference on Machine Learning, pages 73-80, 2006.
[2] Liwei Wang, Cheng Yang, and Jufu Feng. On Learning with Dissimilarity Functions. In International Conference on Machine Learning, pages 991-998, 2007.
[3] Piotr Indyk and Nitin Thaper. Fast Image Retrieval via Embeddings. In International Workshop on Statistical and Computational Theories of Vision, 2003.
[4] Elżbieta Pękalska and Robert P. W. Duin. On Combining Dissimilarity Representations. In Multiple Classifier Systems, pages 359-368, 2001.
[5] Yihua Chen, Eric K. Garcia, Maya R. Gupta, Ali Rahimi, and Luca Cazzanti. Similarity-based Classification: Concepts and Algorithms. Journal of Machine Learning Research, 10:747-776, 2009.
[6] Cheng Soon Ong, Xavier Mary, Stéphane Canu, and Alexander J. Smola. Learning with Non-Positive Kernels.
In International Conference on Machine Learning, 2004.
[7] Bernard Haasdonk. Feature Space Interpretation of SVMs with Indefinite Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4):482-492, 2005.
[8] Thore Graepel, Ralf Herbrich, Peter Bollmann-Sdorra, and Klaus Obermayer. Classification on Pairwise Proximity Data. In Neural Information Processing Systems, pages 438-444, 1998.
[9] Maria-Florina Balcan, Avrim Blum, and Nathan Srebro. Improved Guarantees for Learning via Similarity Functions. In 21st Annual Conference on Computational Learning Theory, pages 287-298, 2008.
[10] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9:1871-1874, 2008.
[11] Manik Varma and Bodla Rakesh Babu. More Generality in Efficient Multiple Kernel Learning. In 26th Annual International Conference on Machine Learning, pages 1065-1072, 2009.
[12] Prateek Jain, Brian Kulis, Jason V. Davis, and Inderjit S. Dhillon. Metric and Kernel Learning Using a Linear Transformation. To appear, Journal of Machine Learning Research (JMLR), 2011.
[13] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. Kernels as Features: On Kernels, Margins, and Low-dimensional Mappings. Machine Learning, 65(1):79-94, 2006.
[14] Nathan Srebro. How Good Is a Kernel When Used as a Similarity Measure? In 20th Annual Conference on Computational Learning Theory, pages 323-335, 2007.
[15] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, 1979.
[16] Sanjeev Arora, László Babai, Jacques Stern, and Z. Sweedyk. The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations. Journal of Computer and System Sciences, 54(2):317-331, April 1997.
[17] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A Library for Support Vector Machines.
ACM Transactions on Intelligent Systems and Technology, 2(3):27:1-27:27, 2011.
[18] Krithika Venkataramani and B. V. K. Vijaya Kumar. Designing Classifiers for Fusion-Based Biometric Verification. In Plataniotis, Boulgouris, and Micheli-Tzanakou, editors, Biometrics: Theory, Methods and Applications. Springer, 2009.
[19] A. Frank and Arthur Asuncion. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2010. University of California, Irvine, School of Information and Computer Sciences.
Maximum Margin Multi-Label Structured Prediction Christoph H. Lampert IST Austria (Institute of Science and Technology Austria) Am Campus 1, 3400 Klosterneuburg, Austria http://www.ist.ac.at/∼chl chl@ist.ac.at Abstract We study multi-label prediction for structured output sets, a problem that occurs, for example, in object detection in images, secondary structure prediction in computational biology, and graph matching with symmetries. Conventional multilabel classification techniques are typically not applicable in this situation, because they require explicit enumeration of the label set, which is infeasible in case of structured outputs. Relying on techniques originally designed for single-label structured prediction, in particular structured support vector machines, results in reduced prediction accuracy, or leads to infeasible optimization problems. In this work we derive a maximum-margin training formulation for multi-label structured prediction that remains computationally tractable while achieving high prediction accuracy. It also shares most beneficial properties with single-label maximum-margin approaches, in particular formulation as a convex optimization problem, efficient working set training, and PAC-Bayesian generalization bounds. 1 Introduction The recent development of conditional random fields (CRFs) [1], max-margin Markov networks (M3Ns) [2], and structured support vector machines (SSVMs) [3] has triggered a wave of interest in the prediction of complex outputs. Typically, these are formulated as graph labeling or graph matching tasks in which each input has a unique correct output. 
However, not all problems encountered in real applications are reflected well by this assumption: machine translation in natural language processing, secondary structure prediction in computational biology, and object detection in computer vision are examples of tasks in which more than one prediction can be "correct" for each data sample, and that are therefore more naturally formulated as multi-label prediction tasks. In this paper, we study multi-label structured prediction, defining the task and introducing the necessary notation in Section 2. Our main contribution is a formulation of a maximum-margin training problem, named MLSP, which we introduce in Section 3. Once trained, it allows the prediction of multiple structured outputs from a single input, as well as abstaining from a decision. We study the generalization properties of MLSP in the form of a generalization bound in Section 3.2, and we introduce a working set optimization procedure in Section 3.3. The main insight from these is that MLSP behaves similarly to a single-label SSVM in terms of efficient use of training data and computational effort during training, despite the increased complexity of the problem setting. In Section 4 we discuss MLSP's relation to existing methods for multi-label prediction with simple label sets, and to single-label structured prediction. We furthermore compare MLSP to a multi-label structured prediction method within the SSVM framework in Section 4.1. In Section 5 we compare the different approaches experimentally, and we conclude in Section 6 by summarizing and discussing our contribution.

2 Multi-label structured prediction

We first recall some background and establish the notation necessary to discuss multi-label classification and structured prediction in a maximum margin framework. Our overall task is predicting outputs y ∈ Y for inputs x ∈ X in a supervised learning setting.
In ordinary (single-label) multi-class prediction we use a prediction function g : X → Y for this, which we learn from i.i.d. example pairs {(x^i, y^i)}_{i=1,...,n} ⊂ X × Y. Adopting a maximum-margin setting, we set

g(x) := argmax_{y ∈ Y} f(x, y)   for a compatibility function   f(x, y) := ⟨w, ψ(x, y)⟩.   (1)

The joint feature map ψ : X × Y → H maps input-output pairs into a Hilbert space H with inner product ⟨·, ·⟩. It is defined either explicitly, or implicitly through a joint kernel function k : (X × Y) × (X × Y) → R. We measure the quality of predictions by a task-dependent loss function ∆ : Y × Y → R_+, where ∆(y, ȳ) specifies what cost occurs if we predict an output ȳ while the correct prediction is y. Structured output prediction can be seen as a generalization of the above setting, where one wants to make not only one, but several dependent decisions at the same time, for example, deciding for each pixel of an image to which out of several semantic classes it belongs. Equivalently, one can interpret the same task as a special case of supervised single-label prediction, where inputs and outputs consist of multiple parts. In the above example, a whole image is one input sample, and a segmentation mask with as many entries as the image has pixels is an output. Having a choice of M ≥ 2 classes per pixel of a (w×h)-sized image leads to an output set of M^{w·h} elements. Enumerating all of these is out of the question, and collecting training examples for each of them even more so. Consequently, structured output prediction requires specialized techniques that avoid enumerating all possible outputs, and that can generalize between labels in the output set. A popular technique for this task is the structured (output) support vector machine (SSVM) [3]. To train it, one has to solve a quadratic program subject to n|Y| linear constraints. If an efficient separation oracle is available, i.e.
a technique for identifying the currently most violated linear constraints, then working set training, in particular cutting plane [4] or bundle methods [5], allows SSVM training to arbitrary precision in polynomial time. Multi-label prediction is a generalization of single-label prediction that gives up the condition of a functional relation between inputs and outputs. Instead, each input object can be associated with any (finite) number of outputs, including none. Formally, we are given pairs {(x^i, Y^i)}_{i=1,...,n} ⊂ X × P(Y), where P denotes the power set operation, and we want to determine a set-valued function G : X → P(Y). Often it is convenient to use indicator vectors instead of variable-size subsets. We say that v ∈ {±1}^Y represents the subset Y ∈ P(Y) if v_y = +1 for y ∈ Y and v_y = −1 otherwise. Where no confusion arises, we use both representations interchangeably, e.g., we write either Y^i or v^i for a label set in the training data. To measure the quality of a predicted set we use a set loss function ∆_ML : P(Y) × P(Y) → R. Note that multi-label prediction can also be interpreted as ordinary single-output prediction with P(Y) taking the place of the original output set Y. We will come back to this view in Section 4.1 when discussing related work. Multi-label structured prediction combines the aspects of multi-label prediction and structured output sets: we are given a training set {(x^i, Y^i)}_{i=1,...,n} ⊂ X × P(Y), where Y is a structured output set of potentially very large size, and we would like to learn a prediction function G : X → P(Y) with the ability to generalize also in the output set. In the following, we will take the structured prediction point of view, deriving expressions for predicting multiple structured outputs instead of single ones.
Alternatively, the same conclusions could be reached by interpreting the task as performing multi-label prediction with binary output vectors that are too large to store or enumerate explicitly, but that have an internal structure allowing generalization between the elements.

3 Maximum margin multi-label structured prediction

In this section we propose a learning technique designed for multi-label structured prediction that we call MLSP. It makes set-valued predictions by[1]

G(x) := {y ∈ Y : f(x, y) > 0}   for   f(x, y) := ⟨w, ψ(x, y)⟩.   (2)

Note that the compatibility function f(x, y) acts on individual inputs and outputs, as in single-label prediction (1), but the prediction step consists of collecting all outputs with positive score instead of finding the output of maximal score. By including a constant entry in the joint feature map ψ(x, y) we can model a bias term, thereby avoiding the need for a threshold during prediction (2). We can also add further flexibility by a data-independent, but label-dependent, term. Note that our setup differs from SSVM training in this regard. There, a bias term, or a constant entry of the feature map, would have no influence, because during training only pairwise differences of function values are considered, and during prediction a bias does not affect the argmax decision in Equation (1). We learn the weight vector w for the MLSP compatibility function in a maximum-margin framework that is derived from regularized risk minimization. As the risk depends on the loss function chosen, we first study the possibilities we have for the set loss ∆_ML : P(Y) × P(Y) → R_+.

[1] More complex prediction rules exist in the multi-label literature, see, e.g., [6]. We restrict ourselves to per-label thresholding, because more advanced rules complicate the learning and prediction problems even further.
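To make the contrast with single-label prediction concrete, here is a minimal NumPy sketch (hypothetical names; the label set is a toy enumerable one, unlike the structured sets the paper targets) of the argmax rule (1) versus the thresholded set rule (2):

```python
import numpy as np

def f(w, psi, x, y):
    """Compatibility f(x, y) = <w, psi(x, y)> for a joint feature map psi."""
    return float(np.dot(w, psi(x, y)))

def predict_single(w, psi, x, labels):
    """Single-label rule (Eq. 1): the one output of maximal score."""
    return max(labels, key=lambda y: f(w, psi, x, y))

def predict_set(w, psi, x, labels):
    """MLSP rule (Eq. 2): all outputs with positive score (possibly none)."""
    return {y for y in labels if f(w, psi, x, y) > 0}
```

For example, with a joint feature map that places x in the coordinate of label y, the set rule can return several labels or none, while the argmax rule always commits to exactly one.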
There are no established functions for this in the structured prediction setting, but it turns out that two canonical set losses are consistent with the following first principles.

Positivity: ∆_ML(Y, Ȳ) ≥ 0, with equality only if Y = Ȳ.
Modularity: ∆_ML should decompose over the elements of Y (in order to facilitate efficient computation).
Monotonicity: ∆_ML should reflect that making a wrong decision about some element y ∈ Y can never reduce the loss.

The last criterion we formalize as

∆_ML(Y, Ȳ ∪ {ȳ}) ≥ ∆_ML(Y, Ȳ)   for any ȳ ∉ Y, and   (3)
∆_ML(Y ∪ {y}, Ȳ) ≥ ∆_ML(Y, Ȳ)   for any y ∉ Ȳ.   (4)

Two candidates that fulfill these criteria are the sum loss, ∆_sum(Y, Ȳ) := Σ_{y ∈ Y ⊖ Ȳ} λ(Y, y), and the max loss, ∆_max(Y, Ȳ) := max_{y ∈ Y ⊖ Ȳ} λ(Y, y), where Y ⊖ Ȳ := (Y \ Ȳ) ∪ (Ȳ \ Y) is the symmetric set difference, and λ : P(Y) × Y → R_+ is a task-dependent per-label misclassification cost. Assuming that a set Y is the correct prediction, λ(Y, ȳ) specifies either the cost of predicting ȳ although ȳ ∉ Y, or of not predicting ȳ when really ȳ ∈ Y. In the special case of λ ≡ 1, the sum loss is known as the symmetric difference loss, and it coincides with the Hamming loss of the binary indicator vector representation. The max loss becomes the 0/1-loss between sets in this case. In the general case, λ typically expresses partial correctness, generalizing the single-label structured loss ∆(y, ȳ). Note that in evaluating λ(Y, ȳ) one has access to the whole set Y, not just single elements. Therefore, a flexible penalization of multiple errors is possible, e.g., submodular behavior. While in the small-scale multi-label situation the sum loss is more common, we argue in this work that the max loss has advantages in the structured prediction situation. For one, the sum loss has a scaling problem. Because it adds potentially exponentially many terms, the ratio in loss between making few mistakes and making many mistakes is very large.
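In code, the two set losses over the symmetric difference read as follows (a hypothetical Python sketch: label sets are modeled as Python sets and λ as a callable `cost`):

```python
def sum_loss(Y, Y_hat, cost=lambda Y, y: 1.0):
    """Delta_sum: add the per-label cost over the symmetric set difference."""
    return sum(cost(Y, y) for y in Y ^ Y_hat)

def max_loss(Y, Y_hat, cost=lambda Y, y: 1.0):
    """Delta_max: largest per-label cost in the symmetric difference (0 if Y == Y_hat)."""
    return max((cost(Y, y) for y in Y ^ Y_hat), default=0.0)
```

With the default cost λ ≡ 1, `sum_loss` is the Hamming loss of the indicator vectors and `max_loss` the 0/1 set loss, matching the special case discussed above.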
If used in the unnormalized form given above, this can result in impractically large values. Normalizing the expression by multiplying with 1/|Y| stabilizes the upper value range, but it leads to a situation where ∆_sum(Y, Ȳ) ≈ 0 in the common situation that Ȳ differs from Y in only a few elements. The value range of the max loss, on the other hand, is the same as the value range of λ and is therefore easy to keep reasonable. A second advantage of the max loss is that it leads to an efficient constraint generation technique during training, as we will see in Section 3.3.

3.1 Maximum margin multi-label structured prediction (MLSP)

To learn the parameters w of the compatibility function f(x, y) we follow a regularized risk minimization framework: given i.i.d. training examples {(x^i, Y^i)}_{i=1,...,n}, we would like to minimize (1/2)‖w‖² + (C/n) Σ_i ∆_max(Y^i, G(x^i)). Using the definition of ∆_max, this is equivalent to minimizing (1/2)‖w‖² + (C/n) Σ_i ξ_i, subject to ξ_i ≥ λ(Y^i, y) for all y ∈ Y with v^i_y f(x^i, y) ≤ 0. Upper bounding the inequalities by a hinge construction yields the following maximum-margin training problem:

(w*, ξ*) = argmin_{w ∈ H, ξ_1,...,ξ_n ∈ R_+}   (1/2)‖w‖² + (C/n) Σ_{i=1}^{n} ξ_i   (5)

subject to, for i = 1, ..., n,

ξ_i ≥ λ(Y^i, y) [1 − v^i_y f(x^i, y)],   for all y ∈ Y.   (6)

Note that making per-label decisions through thresholding does not rule out the sharing of information between labels. In the terminology of [7], Equation (2) corresponds to a conditional label independence assumption. Through the joint feature function ψ(x, y) the proposed model can still learn unconditional dependence between labels, which relates more closely to an intuition of the form "Label A tends to co-occur with label B". Besides this slack-rescaled variant, one can also form margin-rescaled training using the constraints

ξ_i ≥ λ(Y^i, y) − v^i_y f(x^i, y),   for all y ∈ Y.   (7)

Both variants coincide in the case of 0/1 set loss, λ(Y^i, y) ≡ 1.
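For a label set small enough to enumerate, the slack variable implied by the slack-rescaled constraints (6) can be computed directly (a hypothetical sketch; `f` maps a label y to its score f(x_i, y)):

```python
def slack_rescaled_xi(Y_true, labels, f, cost=lambda Y, y: 1.0):
    """xi_i from Eq. (6): the largest cost-weighted hinge violation over all
    labels, with sign v_y = +1 iff y belongs to the ground-truth set Y_true."""
    xi = 0.0
    for y in labels:
        v = 1.0 if y in Y_true else -1.0
        xi = max(xi, cost(Y_true, y) * max(0.0, 1.0 - v * f(y)))
    return xi
```

A slack of zero means every label is on the correct side of the threshold with margin at least one; any label inside the margin, or on the wrong side, raises xi in proportion to its cost.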
The main difference between slack- and margin-rescaled training is how they treat the case of λ(Y^i, y) = 0 for some y ∈ Y. In slack rescaling, the corresponding outputs have no effect on the training at all, whereas for margin rescaling, no margin is enforced for such examples, but a penalization still occurs whenever f(x^i, y) > 0 for y ∉ Y^i, or f(x^i, y) < 0 for y ∈ Y^i.

3.2 Generalization Properties

Maximum margin structured learning has become successful not only because it provides a powerful framework for solving practical prediction problems, but also because it comes with certain theoretical guarantees, in particular generalization bounds. We expect that many of these results will have multi-label analogues. As an initial step, we formulate and prove a generalization bound for slack-rescaled MLSP similar to the single-label SSVM analysis in [8]. Let G_w(x) := {y ∈ Y : f_w(x, y) > 0} for f_w(x, y) = ⟨w, ψ(x, y)⟩. We assume |Y| < r and ‖ψ(x, y)‖ < s for all (x, y) ∈ X × Y, and λ(Y, y) ≤ Λ for all (Y, y) ∈ P(Y) × Y. For any distribution Q_w over weight vectors, which may depend on w, we denote by L(Q_w, P) the expected ∆_max-risk for P-distributed data,

L(Q_w, P) = E_{w̄ ∼ Q_w}[ R_{P,∆_max}(G_w̄) ] = E_{w̄ ∼ Q_w, (x,Y) ∼ P}[ ∆_max(Y, G_w̄(x)) ].   (8)

The following theorem bounds the expected risk in terms of the total margin violations.

Theorem 1. With probability at least 1 − σ over the sample S of size n, the following inequality holds simultaneously for all weight vectors w:

L(Q_w, P) ≤ (1/n) Σ_{i=1}^{n} ℓ(x^i, Y^i, f) + ‖w‖²/n + [ ( s²‖w‖² ln(rn/‖w‖²) + ln(n/σ) ) / ( 2(n−1) ) ]^{1/2}   (9)

for ℓ(x^i, Y^i, f) := max_{y ∈ Y} λ(Y^i, y) ⟦v^i_y f(x^i, y) < 1⟧, where v^i is the binary indicator vector of Y^i.

Proof. The argument follows [8, Section 11.6]. It can be found in the supplemental material.

A main insight from Theorem 1 is that the number of samples needed for good generalization grows only logarithmically with r, i.e., the size of Y.
This is the same complexity as for single-label prediction using SSVMs, despite the fact that multi-label prediction formally maps into P(Y), i.e., an exponentially larger output set.

3.3 Numeric Optimization

The numeric solution of MLSP training resembles SSVM training. For explicitly given joint feature maps ψ(x, y), we can solve the optimization problem (5) in the primal, for example using subgradient descent. To solve MLSP in a kernelized setup we introduce Lagrange multipliers (α^i_y)_{i=1,...,n; y∈Y} for the constraints (7)/(6). For the margin-rescaled variant we obtain the dual

max_{α^i_y ∈ R_+}   −(1/2) Σ_{(i,y),(ī,ȳ)} v^i_y v^ī_ȳ α^i_y α^ī_ȳ k((x^i, y), (x^ī, ȳ)) + Σ_{(i,y)} λ^i_y α^i_y   (10)

subject to   Σ_y α^i_y ≤ C/n,   for i = 1, ..., n.   (11)

For slack-rescaled MLSP, the dual is computed analogously as

max_{α^i_y ∈ R_+}   −(1/2) Σ_{(i,y),(ī,ȳ)} v^i_y v^ī_ȳ α^i_y α^ī_ȳ k((x^i, y), (x^ī, ȳ)) + Σ_{(i,y)} α^i_y   (12)

subject to   Σ_y α^i_y / λ^i_y ≤ C/n,   for i = 1, ..., n,   (13)

with the convention that only terms with λ^i_y ≠ 0 enter the summation. In both cases, the compatibility function becomes

f(x, y) = Σ_{(i,ȳ)} α^i_ȳ v^i_ȳ k((x^i, ȳ), (x, y)).   (14)

Comparing the optimization problems (10)/(11) and (12)/(13) to the ordinary SVM dual, we see that MLSP couples |Y| binary SVM problems through the joint kernel function and the summed-over box constraints. In particular, whenever only a feasibly small subset of variables has to be considered, we can solve the problem using a general-purpose QP solver, or a slightly modified SVM solver. Overall, however, there are infeasibly many constraints in the primal, or variables in the dual. Analogously to the SSVM situation we therefore apply iterative working set training, which we explain here using the terminology of the primal. We start with an arbitrary, e.g. empty, working set S. Then, in each step we solve the optimization using only the constraints indicated by the working set.
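This working set loop can be sketched generically as follows (a hypothetical sketch: `solve(S)` fits the restricted problem over working set S, `violation(model, c)` measures how strongly constraint c is violated, and `pool` stands in for the constraint set, which in the real problem is only queried through a separation oracle):

```python
def working_set_train(pool, solve, violation, eps=1e-3, max_iter=100):
    """Cutting-plane loop: grow a working set S until no constraint in the
    pool is violated by more than eps, re-solving the restricted problem."""
    S = []
    model = solve(S)
    for _ in range(max_iter):
        # separation step: the most violated constraint outside S
        rest = [c for c in pool if c not in S]
        if not rest:
            break
        worst = max(rest, key=lambda c: violation(model, c))
        if violation(model, worst) <= eps:
            break  # eps-optimal: the working-set solution is feasible enough
        S.append(worst)
        model = solve(S)  # re-solve the restricted problem
    return model, S
```

The loop is deliberately generic; instantiating `solve` with the dual QP and `violation` with loss-augmented inference recovers the procedure described in the text.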
For the resulting solution (w_S, ξ_S) we check whether any constraints of the full set (6)/(7) are violated by more than a target precision ϵ. If not, we have found the optimal parameters. Otherwise, we add the most violated constraint to S and start the next iteration. The same monotonicity argument as in [3] shows that we reach an objective value ϵ-close to the optimal one within O(1/ϵ) steps. Consequently, MLSP training is roughly comparable in computational complexity to SSVM training. The crucial step in working set training is the identification of violated constraints. Note that constraints in MLSP are determined by pairs of samples and single labels, not pairs of samples and sets of labels. This allows us to reuse existing methods for loss-augmented single-label inference. In practice, it is safe to assume that the sets Y^i are feasibly small, since they are given to us explicitly. Consequently, we can identify violated "positive" constraints by explicitly checking the inequalities (7)/(6) for y ∈ Y^i. Identifying violated "negative" constraints requires loss-augmented prediction over Y \ Y^i. We are not aware of a general-purpose solution for this task, but at least all problems that allow efficient K-best MAP prediction can be handled by iteratively performing loss-augmented prediction within Y until a violating example from Y \ Y^i is found, or it is confirmed that no such example exists. Note that K-best versions of most standard MAP prediction methods have been developed, including max-flow [9], loopy BP [10], LP-relaxations [11], and sampling [12].

3.4 Prediction problem

After training, Equation (2) specifies the rule to predict output sets for new input data. In contrast to single-label SSVM prediction, this requires not only a maximization over all elements of Y, but the collection of all elements y ∈ Y with positive score. The structure of the output set is not as immediately helpful for this as it is, e.g., in MAP prediction.
Task-specific solutions exist, however, for example branch-and-bound search for object detection [13]. Also, it is often possible to establish an upper bound on the number of desired outputs, and then K-best prediction techniques can again be applied. This makes MLSP of potential use for several classical tasks, such as parsing and chunking in natural language processing, secondary structure prediction in computational biology, or human pose estimation in computer vision. In general situations, evaluating (2) might require approximate structured prediction techniques, e.g., iterative greedy selection [14]. Note that the use of approximation algorithms is less problematic here because, in contrast to training, the prediction step is not performed in an iterative manner, so errors do not accumulate.

4 Related Work

Multi-label classification is an established field of research in machine learning and several established techniques are available, most of which fall into one of three categories: 1) Multi-class reformulations [15] treat every possible label subset, Y ∈ P(Y), as a new class in an independent multi-class classification scenario. 2) Per-label decomposition [16] trains one classifier for each output label and makes an independent decision for each of those. 3) Label ranking [17] learns a function that ranks all potential labels for an input sample. Given the size of Y, 1) is not a promising direction for multi-label structured prediction. Straight-forward applications of 2) and 3) are also infeasible if Y is too large to enumerate. However, MLSP resembles both approaches by sharing their prediction rule (2). MLSP can be seen as a way to make a combination of these approaches applicable to the situation of structured prediction by incorporating the ability to generalize in the label set.
Besides the general concepts above, many specific techniques for multi-label prediction have been proposed, several of them making use of structured prediction techniques: [18] introduces an SSVM formulation that allows direct optimization of the average precision ranking loss when the label set can be enumerated. [19] relies on a counting framework for this purpose, and [20] proposes an SSVM formulation for enforcing diversity between the labels. [21] and [22] identify shared subspaces between sets of labels, [23] encodes linear label relations by a change of the SSVM regularizer, and [24] handles the case of tree- and DAG-structured dependencies between possible outputs. All these methods work in the multi-class setup and require an explicit enumeration of the label set. They use a structured prediction framework to encode dependencies between the individual output labels, of which there are relatively few. MLSP, on the other hand, aims at predicting multiple structured objects, i.e., the structured prediction framework is not just a tool to improve multi-class classification with multiple output labels, but is required as a core component for predicting even a single output. Some previous methods target multi-label prediction with large output sets, in particular using label compression [25] or a label hierarchy [26]. This allows handling thousands of potential output classes, but a direct application to the structured prediction situation is not possible, because these methods still require explicit handling of the output label vectors, or cannot predict labels that were not part of the training set. The actual task of predicting multiple structured outputs has so far not appeared explicitly in the literature. The situation of multiple inputs during training has, however, received some attention: [27] introduces a one-class SVM based training technique for learning with ambiguous ground truth data.
[13] trains an SSVM for the same task by defining a task-adapted loss function ∆_min(Y, ȳ) = min_{y ∈ Y} ∆(y, ȳ). [28] uses a similar min-loss in a CRF setup to overcome problems with incomplete annotation. Note that ∆_min(Y, ȳ) has the right signature to be used as a misclassification cost λ(Y, ȳ) in MLSP. The compatibility functions learned by the maximum-margin techniques [13, 27] have the same functional form as f(x, y) in MLSP, so they can, in principle, be used to predict multiple outputs using Equation (2). However, our experiments of Section 5 show that this leads to low multi-label prediction accuracy, because the training setup is not designed for this evaluation procedure.

4.1 Structured Multi-label Prediction in the SSVM Framework

At first sight, it appears unnecessary to go beyond the standard structured prediction framework at all in trying to predict subsets of Y. As mentioned in Section 3, multi-label prediction into Y can be interpreted as single-label prediction into P(Y), so a straight-forward approach to multi-label structured prediction would be to use an ordinary SSVM with output set P(Y). We will call this setup P-SSVM. It has previously been proposed for classical multi-label prediction, for example in [23]. Unfortunately, as we will show in this section, the P-SSVM setup is not well suited to the structured prediction situation. A P-SSVM learns a prediction function G(x) := argmax_{Y ∈ P(Y)} F(x, Y) with a linearly parameterized compatibility function F(x, Y) := ⟨w, ψ(x, Y)⟩ by solving the optimization problem

argmin_{w ∈ H, ξ_1,...,ξ_n ∈ R_+}   (1/2)‖w‖² + (C/n) Σ_{i=1}^{n} ξ_i,   subject to   ξ_i ≥ ∆_ML(Y^i, Y) + F(x^i, Y) − F(x^i, Y^i),   (15)

for i = 1, ..., n, and for all Y ∈ P(Y). The main problem with this general form is that identifying violated constraints of (15) requires loss-augmented maximization of F over P(Y), i.e., an exponentially larger set than Y.
To better understand this problem, we analyze what happens when making the same simplifying assumptions as for MLSP in Section 3.1. First, we assume additivity of F over Y, i.e. F(x, Y) := Σ_{y∈Y} f(x, y) for f(x, y) := ⟨w, ψ(x, y)⟩. This turns the argmax-evaluation for G(x) exactly into the prediction rule (2), and the constraint set in (15) simplifies to

  ξ_i ≥ ∆_ML(Y^i, Y) − Σ_{y ∈ Y ⊖ Y^i} v^i_y f(x^i, y),   for i = 1, . . . , n, and for all Y ∈ P(Y).   (16)

Choosing ∆_ML as max loss does not allow us to further simplify this expression, but choosing the sum loss does: with ∆_ML(Y^i, Y) = Σ_{y ∈ Y ⊖ Y^i} λ(Y^i, y), we obtain an explicit expression for the label set maximizing the right hand side of the constraint (16), namely

  Y^i_viol = {y ∈ Y^i : f(x^i, y) < λ(Y^i, y)} ∪ {y ∈ Y \ Y^i : f(x^i, y) > −λ(Y^i, y)}.   (17)

Thus, we avoid having to maximize a function over P(Y). Unfortunately, the set Y^i_viol in Equation (17) can contain exponentially many terms, rendering a numeric computation of F(x^i, Y^i_viol) or its gradient still infeasible in general. Note that this is not just a rare, easily avoidable case. Because w, and thereby f, are learned iteratively, they typically go through phases of low prediction quality, i.e. large Y^i_viol. In fact, starting the optimization with w = 0 would already lead to Y^i_viol = Y for all i = 1, . . . , n. Consequently, we presume that P-SSVM training is intractable for structured prediction problems, except for the case of a small label set. Note that while computational complexity is the most prominent problem of P-SSVM training, it is not the only one. For example, even if we did find a polynomial-time training algorithm to solve (15), the generalization ability of the resulting predictor would be unclear: the SSVM generalization bounds [8] suggest that training sets of size O(log |P(Y)|) = O(|Y|) will be required, compared to the O(log |Y|) bound we established for MLSP in Section 3.2.
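To make Equation (17) concrete, here is a minimal sketch of computing the most violating label set, assuming the scores f(x_i, ·) and costs λ(Y^i, ·) are available as plain dictionaries (the function and argument names are ours, not from the paper):

```python
def most_violated_set(scores, costs, positives):
    """Label set maximizing the right hand side of constraint (16), as in Eq. (17).

    scores:    dict label -> f(x_i, label)      (hypothetical input format)
    costs:     dict label -> lambda(Y_i, label)
    positives: the ground-truth label set Y_i
    """
    viol = set()
    for y in scores:
        if y in positives:
            # correct labels whose score fell below their misclassification cost
            if scores[y] < costs[y]:
                viol.add(y)
        else:
            # incorrect labels scoring above minus their cost
            if scores[y] > -costs[y]:
                viol.add(y)
    return viol
```

With w = 0 all scores are zero, so every label enters the set, which matches the observation above that the violating set equals the full label set at initialization.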
5 Experimental Evaluation
To show the practical use of MLSP we performed experiments on multi-label hierarchical classification and object detection in natural images. The complete protocol of training a miniature toy example can be found in the supplemental material (available from the author's homepage).
5.1 Multi-label hierarchical classification
We use hierarchical classification as an illustrative example that in particular allows us to compare MLSP to alternative, less scalable, methods. On the one hand, it is straight-forward to model as a structured prediction task, see e.g. [3, 29, 30, 31]. On the other hand, its output set is small enough that we can also compare MLSP against other approaches that cannot handle very large output sets, in particular P-SSVM and independent per-label training. The task in hierarchical classification is to classify samples into a number of discrete classes, where each class corresponds to a path in a tree. Classes are considered related if they share a path in the tree, and this is reflected by sharing parts of the joint feature representations. In our experiments, we use the PASCAL VOC2006 dataset that contains 5304 images, each belonging to between 1 and 4 out of 10 classes. We represent each image x by 960-dimensional GIST features φ(x) and use the same 19-node hierarchy κ and joint feature function, ψ(x, y) = vec(φ(x) ⊗ κ(y)), as in [30]. As baselines we use P-SSVM [23], JKSE [27], and an SSVM trained with the normal, single-label objective, but evaluated by Equation (2). We follow the pre-defined data splits, doing model selection using the train and val parts to determine C ∈ {2^−1, . . . , 2^14} (MLSP, P-SSVM, SSVM) or ν ∈ {0.05, 0.10, . . . , 0.95} (JKSE). We then retrain on the combination of train and val and test on the test part of the dataset. As the label set is small, we use exhaustive search over Y to identify violated constraints during training and to perform the final predictions.
We report results in Table 1a). As there is no single established multi-label error measure, and because it illustrates the effect of training with different loss functions, we report several common measures. The results show nicely how the assumptions made during training influence the prediction characteristics. Qualitatively, MLSP achieves the best prediction accuracy in the max loss, while P-SSVM is better if we judge by the sum loss. This exactly reflects the loss functions they are trained with. Independent training achieves very good results with respect to both measures, justifying its common use for multi-label prediction with small label sets and many training examples per label.² Ordinary SSVM training does not achieve good max- or sum-loss scores, but it performs well if quality is measured by the average of the area under the precision-recall curves across labels for each individual test example. This is also plausible, as SSVM training uses a ranking-like loss: all potential labels for each input are enforced to be in the right order (correct labels have higher score than incorrect ones), but nothing in the objective encourages a cut-off point at 0. As a consequence, too few or too many labels are predicted by Equation (2). In Table 1a) it appears to be too many, visible as high recall but low precision. JKSE does not achieve competitive results in max loss, mAUC or F1-score. Potentially this is because we use it with a linear kernel to stay comparable with the other methods, whereas [27] reported good results mainly for nonlinear kernels. Qualitatively, MLSP and P-SSVM show comparable prediction quality. We take this as an indication that both training with sum loss and training with max loss make sense conceptually. However, of
² For ∆sum this is not surprising: independent training is known to be the optimal setup if enough data is available [32]. For ∆max, the multi-class reformulation would be the optimal setup.
The problem in multi-label structured prediction is solely that |Y| is too large, and training data too scarce, to use either of these setups.

Table 1: Multi-label structured prediction results. ∆max/∆sum: max/sum loss (lower is better), mAUC: mean area under per-sample precision-recall curve, prec/rec/F1: precision, recall, F1-score (higher is better). Methods printed in italics are infeasible for general structured output sets.

(a) Hierarchical classification results.
          ∆max   ∆sum   mAUC   F1   ( prec / rec )
  MLSP    0.73   1.59   0.82   0.42 ( 0.40 / 0.46 )
  JKSE    1.00   1.91   0.54   0.23 ( 0.14 / 0.76 )
  SSVM    0.88   3.86   0.84   0.37 ( 0.24 / 0.88 )
  P-SSVM  0.75   1.11   0.83   0.44 ( 0.48 / 0.41 )
  indep.  0.73   1.07   0.84   0.46 ( 0.61 / 0.38 )

(b) Object detection results.
          ∆max   ∆sum   F1   ( prec / rec )
  MLSP    0.66   1.31   0.46 ( 0.60 / 0.52 )
  JKSE    0.99   7.29   0.09 ( 0.60 / 0.16 )
  SSVM    0.93   3.71   0.21 ( 0.79 / 0.33 )
  P-SSVM  infeasible
  indep.  infeasible

the five methods, only MLSP, JKSE and SSVM generalize to the more general structured prediction setting, as they do not require exhaustive enumeration of the label set. Amongst these, MLSP is preferable, except if one is only interested in ranking the labels, for which SSVM also works well.
5.2 Object class detection in natural images
Object detection can be solved as a structured prediction problem where natural images are the inputs and coordinate tuples of bounding boxes are the outputs. The label set is of quadratic size in the number of image pixels and thus cannot be searched exhaustively. However, efficient (loss-augmented) argmax-prediction can be performed by branch-and-bound search [33]. Object detection is also inherently a multi-label task, because natural images contain different numbers of objects. We perform experiments on the public UIUC-Cars dataset [34]. Following the experimental setup of [27], we use the multiscale part of the dataset for training and the single-scale part for testing.
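For reference, the set-based evaluation measures reported in Table 1 can be computed as follows; this is a small sketch of our own that assumes unit misclassification cost (λ ≡ 1) and represents label sets as Python sets:

```python
def sum_loss(truth, pred):
    # Delta_sum with unit cost: number of labels in the symmetric difference
    return len(truth ^ pred)

def max_loss(truth, pred):
    # Delta_max with unit cost: 1 if the predicted set is wrong at all, else 0
    return 0 if truth == pred else 1

def prf1(truth, pred):
    # precision, recall and F1-score of a predicted label set
    tp = len(truth & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(truth) if truth else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

The general costs λ(Y, y) from Section 3.1 would replace the unit contributions in `sum_loss` and `max_loss`.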
The additional set of pre-cropped car and background images serves as validation set for model selection. We use the localization kernel k((x, y), (x̄, ȳ)) = φ(x|y)^⊤ φ(x̄|ȳ), where φ(x|y) is a 1000-dimensional bag of visual words representation of the region y within the image x [13]. As misclassification cost we use λ(Y, y) := 1 for y ∈ Y, and λ(Y, y) := min_{ȳ∈Y} A(ȳ, y) otherwise, where A(ȳ, y) := 0 if area(ȳ ∩ y)/area(ȳ ∪ y) ≥ 0.5, and A(ȳ, y) := 1 otherwise. This is a common measure in object detection, which reflects the intuition that all objects in an image should be identified, and that an object's position is acceptable if it overlaps sufficiently with at least one ground truth object. P-SSVM and independent training are not applicable in this setup, so we compare MLSP against JKSE and SSVM. For each method we train models on the training set and choose the C or ν value that maximizes the F1 score over the validation set of pre-cropped object and background images. Prediction is performed using branch-and-bound optimization with greedy non-maximum suppression [35]. Table 1b) summarizes the results on the test set (we do not report the mAUC measure, as computing it would require summing over the complete output set). One sees that MLSP achieves the best results amongst the three methods. SSVM as well as JKSE suffer particularly from low recall, and their predictions also have higher sum loss as well as max loss.
6 Summary and Discussion
We have studied multi-label classification for structured output sets. Existing multi-label techniques cannot be applied directly to this task because of the large size of the output set, and our analysis showed that formulating multi-label structured prediction in a set-valued structured support vector machine framework also leads to infeasible training problems.
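The overlap-based detection cost λ(Y, y) used in the experiments above can be sketched as follows, with bounding boxes encoded as (x1, y1, x2, y2) corner tuples (an encoding we choose for illustration; the paper does not specify one):

```python
def iou(b1, b2):
    # intersection-over-union of axis-aligned boxes given as (x1, y1, x2, y2)
    ix = max(0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    iy = max(0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = ix * iy
    union = ((b1[2] - b1[0]) * (b1[3] - b1[1])
             + (b2[2] - b2[0]) * (b2[3] - b2[1]) - inter)
    return inter / union if union else 0.0

def detection_cost(truth_boxes, y):
    # lambda(Y, y): 1 for boxes already in Y; otherwise the minimum of
    # A(t, y), which is 0 if y overlaps some ground-truth box with
    # IoU >= 0.5 and 1 otherwise
    if y in truth_boxes:
        return 1.0
    return min(0.0 if iou(t, y) >= 0.5 else 1.0 for t in truth_boxes)
```

The 0.5 IoU threshold is the standard acceptance criterion in object detection benchmarks, as stated in the text.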
Instead, we proposed a new maximum-margin formulation, MLSP, that remains computationally tractable by using the max loss instead of the sum loss between sets, and that shows several of the advantageous properties known from other maximum-margin based techniques, in particular a convex training problem and PAC-Bayesian generalization bounds. Our experiments showed that MLSP has higher prediction accuracy than baseline methods that remain applicable in structured output settings. For small label sets, where both concepts are applicable, MLSP performs comparably to the set-valued SSVM formulation. Besides these promising initial results, we believe that there are still several aspects of multi-label structured prediction that need to be better understood, in particular the prediction problem at test time. Collecting all elements of positive score is a natural criterion, but it is costly to perform exactly if the output set is very large. Therefore, it would be desirable to develop sparsity-enforcing variations of Equation (2), for example by adopting ideas from compressed sensing [25].
References
[1] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[2] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[3] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6, 2006.
[4] T. Joachims, T. Finley, and C. N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1), 2009.
[5] C. H. Teo, S. V. N. Vishwanathan, A. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. JMLR, 11, 2010.
[6] G. Tsoumakas and I. Katakis. Multi-label classification: An overview. International Journal of Data Warehousing and Mining, 3(3), 2007.
[7] K. Dembczynski, W. Cheng, and E. Hüllermeier.
Bayes optimal multilabel classification via probabilistic classifier chains. In ICML, 2010.
[8] D. McAllester. Generalization bounds and consistency for structured labeling. In G. Bakır, T. Hofmann, B. Schölkopf, A. J. Smola, and B. Taskar, editors, Predicting Structured Data. MIT Press, 2007.
[9] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2), 1998.
[10] C. Yanover and Y. Weiss. Finding the M most probable configurations using loopy belief propagation. In NIPS, 2004.
[11] M. Fromer and A. Globerson. An LP view of the M-best MAP problem. In NIPS, 2009.
[12] J. Porway and S.-C. Zhu. C4: Exploring multiple solutions in graphical models by cluster sampling. PAMI, 33(9), 2011.
[13] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In ECCV, 2008.
[14] A. Bordes, N. Usunier, and L. Bottou. Sequence labelling SVMs trained in one pass. In ECML PKDD, 2008.
[15] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9), 2004.
[16] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In ECML, 1998.
[17] R. E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2–3), 2000.
[18] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In ACM SIGIR, 2007.
[19] T. Gärtner and S. Vembu. On structured output training: Hard cases and an efficient alternative. Machine Learning, 76(2):227–242, 2009.
[20] Y. Yue and T. Joachims. Predicting diverse subsets using structural SVMs. In ICML, 2008.
[21] S. Ji, L. Tang, S. Yu, and J. Ye. Extracting shared subspaces for multi-label classification. In ACM SIGKDD, 2008.
[22] P. Rai and H. Daumé III. Multi-label prediction via sparse infinite CCA. In NIPS, 2009.
[23] B. Hariharan, L.
Zelnik-Manor, S. V. N. Vishwanathan, and M. Varma. Large scale max-margin multilabel classification with priors. In ICML, 2010.
[24] W. Bi and J. Kwok. Multi-label classification on tree- and DAG-structured hierarchies. In ICML, 2011.
[25] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, 2009.
[26] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains with large number of labels. In ECML PKDD, 2008.
[27] C. H. Lampert and M. B. Blaschko. Structured prediction by joint kernel support estimation. Machine Learning, 77(2–3), 2009.
[28] J. Petterson, T. S. Caetano, J. J. McAuley, and J. Yu. Exponential family graph matching and ranking. In NIPS, 2009.
[29] J. Rousu, C. Saunders, S. Szedmak, and J. Shawe-Taylor. Kernel-based learning of hierarchical multilabel classification models. JMLR, 7, 2006.
[30] A. Binder, K.-R. Müller, and M. Kawanabe. On taxonomies for multi-class image categorization. IJCV, 2011.
[31] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In CIKM, 2004.
[32] K. Dembczynski, W. Cheng, and E. Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML, 2010.
[33] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound framework for object localization. PAMI, 31(12), 2009.
[34] S. Agarwal, A. Awan, and D. Roth. Learning to detect objects in images via a sparse, part-based representation. PAMI, 26(11), 2004.
[35] C. H. Lampert. An efficient divide-and-conquer cascade for nonlinear object detection. In CVPR, 2010.
Active Ranking using Pairwise Comparisons
Kevin G. Jamieson, University of Wisconsin, Madison, WI 53706, USA, kgjamieson@wisc.edu
Robert D. Nowak, University of Wisconsin, Madison, WI 53706, USA, nowak@engr.wisc.edu
Abstract
This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of n objects can be identified by standard sorting methods using n log₂ n pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a d-dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in R^d. We show that under this assumption the number of possible rankings grows like n^{2d} and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than d log n adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct. Experimental studies with synthetic and real datasets support the conclusions of our theoretical analysis.
1 Introduction
This paper addresses the problem of ranking a set of objects based on a limited number of pairwise comparisons (rankings between pairs of the objects). A ranking over a set of n objects Θ = (θ1, θ2, . . . , θn) is a mapping σ : {1, . . . , n} → {1, . . . , n} that prescribes an order

  σ(Θ) := θ_{σ(1)} ≺ θ_{σ(2)} ≺ · · · ≺ θ_{σ(n−1)} ≺ θ_{σ(n)}   (1)

where θi ≺ θj means θi precedes θj in the ranking. A ranking uniquely determines the collection of pairwise comparisons between all pairs of objects.
The primary objective here is to bound the number of pairwise comparisons needed to correctly determine the ranking when the objects (and hence rankings) satisfy certain known structural constraints. Specifically, we suppose that the objects may be embedded into a low-dimensional Euclidean space such that the ranking is consistent with distances in the space. We wish to exploit such structure in order to discover the ranking using a very small number of pairwise comparisons. To the best of our knowledge, this is a previously open and unsolved problem. There are practical and theoretical motivations for restricting our attention to pairwise rankings that are discussed in Section 2. We begin by assuming that every pairwise comparison is consistent with an unknown ranking. Each pairwise comparison can be viewed as a query: is θi before θj? Each query provides 1 bit of information about the underlying ranking. Since the number of rankings is n!, in general, specifying a ranking requires Θ(n log n) bits of information. This implies that at least this many pairwise comparisons are required without additional assumptions about the ranking. In fact, this lower bound can be achieved with a standard adaptive sorting algorithm like binary sort [1]. In large-scale problems or when humans are queried for pairwise comparisons, obtaining this many pairwise comparisons may be impractical and therefore we consider situations in which the space of rankings is structured and thereby less complex. A natural way to induce a structure on the space of rankings is to suppose that the objects can be embedded into a d-dimensional Euclidean space so that the distances between objects are consistent with the ranking. This may be a reasonable assumption in many applications, and for instance the audio dataset used in our experiments is believed to have a 2 or 3 dimensional embedding [2]. We further discuss motivations for this assumption in Section 2.
It is not difficult to show (see Section 3) that the number of full rankings that could arise from n objects embedded in R^d grows like n^{2d}, and so specifying a ranking from this class requires only O(d log n) bits. The main results of the paper show that under this assumption a randomly selected ranking can be determined using O(d log n) pairwise comparisons selected in an adaptive and sequential fashion, but almost all \binom{n}{2} pairwise rankings are needed if they are picked randomly rather than selectively. In other words, actively selecting the most informative queries has a tremendous impact on the complexity of learning the correct ranking.
1.1 Problem statement
Let σ denote the ranking to be learned. The objective is to learn the ranking by querying the reference for pairwise comparisons of the form

  qi,j := {θi ≺ θj}.   (2)

The response or label of qi,j is binary and denoted yi,j := 1{qi,j}, where 1 is the indicator function; ties are not allowed. The main results quantify the minimum number of queries or labels required to determine the reference's ranking, and they are based on two key assumptions.
A1 Embedding: The set of n objects are embedded in R^d (in general position) and we will also use θ1, . . . , θn to refer to their (known) locations in R^d. Every ranking σ can be specified by a reference point rσ ∈ R^d, as follows. The Euclidean distances between the reference and objects are consistent with the ranking in the following sense: if σ ranks θi ≺ θj, then ‖θi − rσ‖ < ‖θj − rσ‖. Let Σn,d denote the set of all possible rankings of the n objects that satisfy this embedding condition. The interpretation of this assumption is that we know how the objects are related (in the embedding), which limits the space of possible rankings. The ranking to be learned, specified by the reference (e.g., preferences of a human subject), is unknown. Many have studied the problem of finding an embedding of objects from data [3, 4, 5].
This is not the focus here, but it could certainly play a supporting role in our methodology (e.g., the embedding could be determined from known similarities between the n objects, as is done in our experiments with the audio dataset). We assume the embedding is given and our interest is in minimizing the number of queries needed to learn the ranking, and for this we require a second assumption.
A2 Consistency: Every pairwise comparison is consistent with the ranking to be learned. That is, if the reference ranks θi ≺ θj, then θi must precede θj in the (full) ranking.
As we will discuss later in Section 3.2, these two assumptions alone are not enough to rule out pathological arrangements of objects in the embedding for which at least Ω(n) queries must be made to recover the ranking. However, because such situations are not representative of what is typically encountered, we analyze the problem in the framework of average-case analysis [6].
Definition 1. With each ranking σ ∈ Σn,d we associate a probability πσ such that Σ_{σ∈Σn,d} πσ = 1. Let π denote these probabilities and write σ ∼ π for shorthand. The uniform distribution corresponds to πσ = |Σn,d|^{−1} for all σ ∈ Σn,d, and we write σ ∼ U for this special case.
Definition 2. If Mn(σ) denotes the number of pairwise comparisons requested by an algorithm to identify the ranking σ, then the average query complexity with respect to π is denoted by Eπ[Mn].
The main results are proven for the special case of π = U, the uniform distribution, to make the analysis more transparent and intuitive. However, the results can easily be extended to general distributions π that satisfy certain mild conditions [7]. All results henceforth, unless otherwise noted, will be given in terms of (uniform) average query complexity and we will say such results hold "on average." Our main results can be summarized as follows.
If the queries are chosen deterministically or randomly in advance of collecting the corresponding pairwise comparisons, then we show that almost all \binom{n}{2} pairwise comparison queries are needed to identify a ranking under the assumptions above. However, if the queries are selected in an adaptive and sequential fashion according to the algorithm in Figure 1, then we show that the number of pairwise rankings required to identify a ranking is no more than a constant multiple of d log n, on average.

Query Selection Algorithm
  input: n objects in R^d
  initialize: objects θ1, . . . , θn in uniformly random order
  for j = 2, . . . , n
    for i = 1, . . . , j − 1
      if qi,j is ambiguous, request qi,j's label from the reference;
      else impute qi,j's label from previously labeled queries.
  output: ranking of n objects

Figure 1: Sequential algorithm for selecting queries. See Figure 2 and Section 4.2 for the definition of an ambiguous query.

Figure 2: Objects θ1, θ2, θ3 and queries. The reference rσ lies in the shaded region (consistent with the labels of q1,2, q1,3, q2,3). The dotted (dashed) lines represent new queries whose labels are (are not) ambiguous given those labels.

The algorithm requests a query if and only if the corresponding pairwise ranking is ambiguous (see Section 4.2), meaning that it cannot be determined from previously collected pairwise comparisons and the locations of the objects in R^d. The efficiency of the algorithm is due to the fact that most of the queries are unambiguous when considered in a sequential fashion. For this very same reason, picking queries in a non-adaptive or random fashion is very inefficient. The algorithm is also computationally efficient, with an overall complexity no greater than O(n poly(d) poly(log n)) [7]. In Section 5 we present a robust version of the algorithm of Figure 1 that is tolerant to a fraction of errors in the pairwise comparison queries.
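The sequential selection idea of Figure 1 can be illustrated with a one-dimensional toy sketch. Instead of the exact geometric ambiguity test of Section 4.2, this illustration approximates it by Monte Carlo sampling: a comparison counts as ambiguous when randomly sampled candidate reference points consistent with the answers so far disagree on it. All names are ours, not the authors':

```python
import functools
import random

def rank_with_queries(objects, truth_ref, n_samples=2000, seed=0):
    """Toy 1-d sketch of sequential query selection (not the paper's code):
    impute a comparison when all surviving candidate references agree on it,
    otherwise query the true reference and filter the candidates."""
    rng = random.Random(seed)
    lo, hi = min(objects) - 1.0, max(objects) + 1.0
    candidates = [rng.uniform(lo, hi) for _ in range(n_samples)]
    queries_asked = 0

    def closer(r, i, j):
        # is object i closer to candidate reference r than object j?
        return abs(objects[i] - r) < abs(objects[j] - r)

    def cmp(i, j):
        nonlocal queries_asked, candidates
        votes = {closer(r, i, j) for r in candidates}
        if len(votes) == 1:
            ans = votes.pop()                    # unambiguous: impute the label
        else:
            ans = closer(truth_ref, i, j)        # ambiguous: ask the reference
            queries_asked += 1
            candidates = [r for r in candidates if closer(r, i, j) == ans]
        return -1 if ans else 1

    order = sorted(range(len(objects)), key=functools.cmp_to_key(cmp))
    return [objects[i] for i in order], queries_asked
```

On a small example, most comparisons are imputed and only a few are actually asked, which is the source of the d log n average query complexity.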
In the case of persistent errors (see Section 5) we show that at least Ω(n/ log n) objects can be correctly ranked in a partial ranking with high probability by requesting just O(d log² n) pairwise comparisons. This allows us to handle situations in which either or both of the assumptions, A1 and A2, are reasonable approximations to the situation at hand, but do not hold strictly (which is the case in our experiments with the audio dataset). Proving the main results involves an uncommon marriage of ideas from the ranking and statistical learning literatures. Geometrical interpretations of our problem derive from the seminal works of [8] in ranking and [9] in learning. From this perspective our problem bears a strong resemblance to the halfspace learning problem, with two crucial distinctions. In the ranking problem, the underlying halfspaces are not in general position and have strong dependencies with each other. These dependencies invalidate many of the typical analyses of such problems [10, 11]. One popular method of analysis in exact learning involves the use of something called the extended teaching dimension [12]. However, because of the possible pathological situations alluded to earlier, it is easy to show that the extended teaching dimension must be at least Ω(n), making that sort of worst-case analysis uninteresting. These differences present unique challenges to learning.
2 Motivation and related work
The problem of learning a ranking from few pairwise comparisons is motivated by what we perceive as a significant gap in the theory of ranking and permutation learning. Most work in ranking assumes a passive approach to learning; pairwise comparisons or partial rankings are collected in a random or non-adaptive fashion and then aggregated to obtain a full ranking (cf. [13, 14, 15, 16]). However, this may be quite inefficient in terms of the number of pairwise comparisons or partial rankings needed to learn the (full) ranking.
This inefficiency was recently noted in the related area of social choice theory [17]. Furthermore, empirical evidence suggests that, even under complex ranking models, adaptively selecting pairwise comparisons can reduce the number needed to learn the ranking [18]. This is a cause for concern, since in many applications it is expensive and time-consuming to obtain pairwise comparisons. For example, psychologists and market researchers collect pairwise comparisons to gauge human preferences over a set of objects, for scientific understanding or product placement. The scope of these experiments is often very limited simply due to the time and expense required to collect the data. This suggests the consideration of more selective and judicious approaches to gathering inputs for ranking. We are interested in taking advantage of underlying structure in the set of objects in order to choose more informative pairwise comparison queries. From a learning perspective, our work adds an active learning component to a problem domain that has primarily been treated from a passive learning mindset. We focus on pairwise comparison queries for two reasons. First, pairwise comparisons admit a halfspace representation in embedding spaces, which allows for a geometrical approach to learning in such structured ranking spaces. Second, pairwise comparisons are the most common form of queries in many applications, especially those involving human subjects. For example, consider the problem of finding the most highly ranked object, as illustrated by the following familiar task. Suppose a patient needs a new pair of prescription eye lenses. Faced with literally millions of possible prescriptions, the doctor will present candidate prescriptions in a sequential fashion followed by the query: better or worse? Even if certain queries are repeated to account for possible inaccurate answers, the doctor can locate an accurate prescription with just a handful of queries.
This is possible presumably because the doctor understands (at least intuitively) the intrinsic space of prescriptions and can efficiently search through it using only binary responses from the patient. We assume that the objects can be embedded in R^d and that the distances between objects and the reference are consistent with the ranking (Assumption A1). The problem of learning a general function f : R^d → R using just pairwise comparisons that correctly ranks the objects embedded in R^d has previously been studied in the passive setting [13, 14, 15, 16]. The main contributions of this paper are theoretical bounds for the specific case when f(x) = ‖x − rσ‖, where rσ ∈ R^d is the reference point. This is a standard model used in multidimensional unfolding and psychometrics [8, 19]. We are unaware of any existing query-complexity bounds for this problem. We do not assume a generative model is responsible for the relationship between rankings and embeddings, but one could. For example, the objects might have an embedding (in a feature space) and the ranking is generated by distances in this space. Or alternatively, structural constraints on the space of rankings could be used to generate a consistent embedding. Assumption A1, while arguably quite natural/reasonable in many situations, significantly constrains the set of possible rankings.
3 Geometry of rankings from pairwise comparisons
The embedding assumption A1 gives rise to geometrical interpretations of the ranking problem, which are developed in this section. The pairwise comparison qi,j can be viewed as the membership query: is θi ranked before θj in the (full) ranking σ? The geometrical interpretation is that qi,j asks whether the reference rσ is closer to object θi or object θj in R^d. Consider the line connecting θi and θj in R^d. The hyperplane that bisects this line and is orthogonal to it defines two halfspaces: one containing points closer to θi and the other the points closer to θj.
Thus, qi,j is a membership query about which halfspace rσ is in, and there is an equivalence between each query, each pair of objects, and the corresponding bisecting hyperplane. The set of all possible pairwise comparison queries can be represented as \binom{n}{2} distinct halfspaces in R^d. The intersections of these halfspaces partition R^d into a number of cells, and each one corresponds to a unique ranking of Θ. Arbitrary rankings are not possible due to the embedding assumption A1, and recall that the set of rankings possible under A1 is denoted by Σn,d. The cardinality of Σn,d is equal to the number of cells in the partition. We will refer to these cells as d-cells (to indicate they are subsets in d-dimensional space) since at times we will also refer to lower dimensional cells; e.g., (d − 1)-cells.
3.1 Counting the number of possible rankings
The following lemma determines the cardinality of the set of rankings, Σn,d, under assumption A1.
Lemma 1. [8] Assume A1-2. Let Q(n, d) denote the number of d-cells defined by the hyperplane arrangement of pairwise comparisons between these objects (i.e., Q(n, d) = |Σn,d|). Q(n, d) satisfies the recursion

  Q(n, d) = Q(n − 1, d) + (n − 1) Q(n − 1, d − 1),   where Q(1, d) = 1 and Q(n, 0) = 1.   (3)

In the hyperplane arrangement induced by the n objects in d dimensions, each hyperplane is intersected by every other and is partitioned into Q(n − 1, d − 1) subsets or (d − 1)-cells. The recursion above arises by considering the addition of one object at a time. Using this lemma in a straightforward fashion, we prove the following corollary in [7].
Corollary 1. Assume A1-2. There exist positive real numbers k1 and k2 such that

  k1 n^{2d} / (2^d d!) < Q(n, d) < k2 n^{2d} / (2^d d!)   for n > d + 1.

If n ≤ d + 1 then Q(n, d) = n!. For n sufficiently large, k1 = 1 and k2 = 2 suffice.
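The recursion of Lemma 1 is easy to evaluate numerically; a short memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Q(n, d):
    # number of d-cells (realizable rankings), via the recursion of Lemma 1:
    # Q(n, d) = Q(n-1, d) + (n-1) Q(n-1, d-1), Q(1, d) = 1, Q(n, 0) = 1
    if n == 1 or d == 0:
        return 1
    return Q(n - 1, d) + (n - 1) * Q(n - 1, d - 1)
```

For example, Q(5, 4) = 120 = 5!, consistent with Q(n, d) = n! for n ≤ d + 1, while Q(4, 2) = 18 < 4!, showing how the embedding assumption prunes the space of rankings.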
3.2 Lower bounds on query complexity Since the cardinality of the set of possible rankings is |Σn,d| = Q(n, d), we have a simple lower bound on the number of queries needed to determine the ranking. Theorem 1. Assume A1-2. To reconstruct an arbitrary ranking σ ∈ Σn,d any algorithm will require at least log2 |Σn,d| = Θ(2d log2 n) pairwise comparisons. Proof. By Corollary 1, |Σn,d| = Θ(n^{2d}), and so at least 2d log2 n bits are needed to specify a ranking. Each pairwise comparison provides at most one bit. If each query provides a full bit of information about the ranking, then we achieve this lower bound. For example, in the one-dimensional case (d = 1) the objects can be ordered and binary search can be used to select pairwise comparison queries, achieving the lower bound. This is generally impossible in higher dimensions. Even in two dimensions there are placements of the objects (still in general position) that produce d-cells in the partition induced by queries that have n − 1 faces (i.e., bounded by n − 1 hyperplanes), as shown in [7]. It follows that the worst case situation may require at least n − 1 queries in dimensions d ≥ 2. In light of this, we conclude that worst case bounds may be overly pessimistic indications of the typical situation, and so we instead consider the average case performance introduced in Section 1.1. 3.3 Inefficiency of random queries The geometrical representation of the ranking problem reveals that randomly choosing pairwise comparison queries is inefficient relative to the lower bound above. To see this, suppose m queries were chosen uniformly at random from the possible (n choose 2). The answers to m queries narrow the set of possible rankings to a d-cell in Rd. This d-cell may consist of one or more of the d-cells in the partition induced by all queries. If it contains more than one of the partition cells, then the underlying ranking is ambiguous. Theorem 2. Assume A1-2. Let N = (n choose 2).
Suppose m pairwise comparisons are chosen uniformly at random without replacement from the possible (n choose 2). Then for all positive integers N ≥ m ≥ d, the probability that the m queries yield a unique ranking is (m choose d)/(N choose d) ≤ (em/N)^d. Proof. No fewer than d hyperplanes bound each d-cell in the partition of Rd induced by all possible queries. The probability of selecting d specific queries in a random draw of m is equal to (N−d choose m−d) / (N choose m) = (m choose d) / (N choose d) ≤ (m^d/d!)(d^d/N^d) = (m/N)^d (d^d/d!) ≤ (em/N)^d. □ Note that (m choose d)/(N choose d) < 1/2 unless m = Ω(n^2). Therefore, if the queries are randomly chosen, then we will need to ask almost all queries to guarantee that the inferred ranking is probably correct. 4 Analysis of sequential algorithm for query selection Now consider the basic sequential process of the algorithm in Figure 1. Suppose we have ranked k − 1 of the n objects. Call these objects 1 through k − 1. This places the reference rσ within a d-cell (defined by the labels of the comparison queries between objects 1, . . . , k − 1). Call this d-cell Ck−1. Now suppose we pick another object at random and call it object k. A comparison query between object k and one of objects 1, . . . , k − 1 can only be informative (i.e., ambiguous) if the associated hyperplane intersects this d-cell Ck−1 (see Figure 2). If k is significantly larger than d, then it turns out that the cell Ck−1 is probably quite small and the probability that one of the queries intersects Ck−1 is very small; in fact, the probability is on the order of 1/k^2. 4.1 Hyperplane-point duality Consider a hyperplane h = (h0, h1, . . . , hd) with (d + 1) parameters in Rd and a point p = (p1, . . . , pd) ∈ Rd that does not lie on the hyperplane. Checking which halfspace p falls in, i.e., h1p1 + h2p2 + · · · + hdpd + h0 ≷ 0, has a dual interpretation: h is a point in Rd+1 and p is a hyperplane in Rd+1 passing through the origin (i.e., with d free parameters).
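To get a feel for how weak random query selection is, the Theorem 2 bound can be evaluated numerically. A small sketch (the helper name is ours): even after asking half of all possible queries when d = 3, the bound on the probability of having pinned down the ranking is still only about (1/2)^3.

```python
from math import comb

def unique_ranking_prob_bound(n, d, m):
    """Theorem 2: with N = C(n, 2) possible queries, m uniformly random queries
    yield a unique ranking with probability at most C(m, d) / C(N, d)."""
    N = comb(n, 2)
    return comb(m, d) / comb(N, d)

# Half of all C(100, 2) = 4950 queries, d = 3: the bound is roughly (1/2)^3.
assert unique_ranking_prob_bound(100, 3, comb(100, 2) // 2) < 0.13
# Asking every query trivially determines the ranking.
assert unique_ranking_prob_bound(100, 3, comb(100, 2)) == 1.0
```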
Recall that each possible ranking can be represented by a reference point rσ ∈ Rd. Our problem is to determine the ranking, or equivalently the vector of responses to the (n choose 2) queries represented by hyperplanes in Rd. Using the above observation, we see that our problem is equivalent to finding a labeling over (n choose 2) points in Rd+1 with as few queries as possible. We will refer to this alternative representation as the dual and the former as the primal. 4.2 Characterization of an ambiguous query The characterization of an ambiguous query has interpretations in both the primal and dual spaces. We will now describe the interpretation in the dual, which will be critical to our analysis of the sequential algorithm of Figure 1. Definition 3. [9] Let S be a finite subset of Rd, let S+ ⊂ S be the points labeled +1 and S− = S \ S+ be the points labeled −1, and let x be any other point except the origin. If there exist two homogeneous linear separators of S+ and S− that assign different labels to the point x, then the label of x is said to be ambiguous with respect to S. Lemma 2. [9, Lemma 1] The label of x is ambiguous with respect to S if and only if S+ and S− are homogeneously linearly separable by a (d − 1)-dimensional subspace containing x. Let us consider the implications of this lemma for our scenario. Assume that we have labels for all the pairwise comparisons of k − 1 objects. Next consider a new object called object k. In the dual, the pairwise comparison between object k and object i, for some i ∈ {1, . . . , k − 1}, is ambiguous if and only if there exists a hyperplane that still separates the original points and also passes through this new point. In the primal, this separating hyperplane corresponds to a point lying on the hyperplane defined by the associated pairwise comparison.
4.3 The probability that a query is ambiguous An essential component of the sequential algorithm of Figure 1 is the initial random order of the objects; every sequence in which it could consider objects is equally probable. This allows us to state a nontrivial fact about the partial rankings of the first k objects observed in this sequence. Lemma 3. Assume A1-2 and σ ∼ U. Consider the subset S ⊂ Θ with |S| = k that is randomly selected from Θ such that all (n choose k) subsets are equally probable. If Σk,d denotes the set of possible rankings of these k objects, then every σ ∈ Σk,d is equally probable. Proof. Let a k-partition denote the partition of Rd into Q(k, d) d-cells induced by k objects for 1 ≤ k ≤ n. In the n-partition, each d-cell is weighted uniformly and is equal to 1/Q(n, d). If we uniformly at random select k objects from the possible n and consider the k-partition, each d-cell in the k-partition will contain one or more d-cells of the n-partition. If we select one of these d-cells from the k-partition, on average there will be Q(n, d)/Q(k, d) d-cells from the n-partition contained in this cell. Therefore the probability mass in each d-cell of the k-partition is equal to the number of cells from the n-partition in this cell multiplied by the probability of each of those cells from the n-partition: Q(n, d)/Q(k, d) × 1/Q(n, d) = 1/Q(k, d), and |Σk,d| = Q(k, d). As described above, for 1 ≤ i ≤ k some of the pairwise comparisons qi,k+1 may be ambiguous. The algorithm chooses a random sequence of the n objects in its initialization and does not use the labels of q1,k+1, . . . , qj−1,k+1, qj+1,k+1, . . . , qk,k+1 to make a determination of whether or not qj,k+1 is ambiguous. It follows that the events of requesting the label of qi,k+1 for i = 1, 2, . . . , k are independent and identically distributed (conditionally on the results of queries from previous steps). Therefore it makes sense to talk about the probability of requesting any one of them. Lemma 4.
Assume A1-2 and σ ∼ U. Let A(k, d, U) denote the probability of the event that the pairwise comparison qi,k+1 is ambiguous for i = 1, 2, . . . , k. Then there exist positive real constants a1 and a2, independent of k, such that for k > 2d, a1 (2d/k^2) ≤ A(k, d, U) ≤ a2 (2d/k^2). Proof. By Lemma 2, a point in the dual (pairwise comparison) is ambiguous if and only if there exists a separating hyperplane that passes through this point. This implies that the hyperplane representation of the pairwise comparison in the primal intersects the cell containing rσ (see Figure 2 for an illustration of this concept). Consider the partition of Rd generated by the hyperplanes corresponding to pairwise comparisons between objects 1, . . . , k. Let P(k, d) denote the number of d-cells in this partition that are intersected by a hyperplane corresponding to one of the queries qi,k+1, i ∈ {1, . . . , k}. Then it is not difficult to show that P(k, d) is bounded above and below by constants, independent of n and k, times k^{2(d−1)}/(2^{d−1}(d−1)!) [7]. By Lemma 3, every d-cell in the partition induced by the k objects corresponds to an equally probable ranking of those objects. Therefore, the probability that a query is ambiguous is the number of cells intersected by the corresponding hyperplane divided by the total number of d-cells, and therefore A(k, d, U) = P(k, d)/Q(k, d). The result follows immediately from the bounds on P(k, d) and Corollary 1. Because the individual events of requesting each query are conditionally independent, the total number of queries requested by the algorithm is just Mn = Σ_{k=1}^{n−1} Σ_{i=1}^{k} 1{Request qi,k+1}. Using the results above, it is straightforward to prove the main theorem below (see [7]). Theorem 3. Assume A1-2 and σ ∼ U. Let the random variable Mn denote the number of pairwise comparisons that are requested in the algorithm of Figure 1; then EU[Mn] ≤ 2d log2 2d + 2d a2 log n. Furthermore, if σ ∼ π and maxσ∈Σn,d πσ ≤ c|Σn,d|^{−1} for some c > 0, then Eπ[Mn] ≤ c EU[Mn].
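The 1/k^2 ambiguity probability, and the resulting O(log n) query count, are easiest to see in a small simulation for d = 1. The sketch below is our own d = 1 specialization of the algorithm in Figure 1 (not the paper's general-d code): the object positions are known and only the reference is queried, the set of references consistent with the answers so far is always an interval, and a query is ambiguous exactly when its bisecting midpoint falls inside that interval.

```python
import random

def rank_1d(xs, r):
    """d = 1 specialization (illustrative) of the sequential algorithm of Figure 1.

    Object positions xs are known; only the unknown reference r is queried.
    The references consistent with the answers so far form an interval (lo, hi),
    and a query q_{i,k} is ambiguous iff the midpoint of xs[i] and xs[k] lies
    inside it. Returns the number of requested queries."""
    lo, hi = float("-inf"), float("inf")
    requested = 0
    for k in range(1, len(xs)):
        for i in range(k):
            mid = (xs[i] + xs[k]) / 2.0
            if lo < mid < hi:        # ambiguous: request this label
                requested += 1
                if r < mid:          # the answer shrinks the interval
                    hi = mid
                else:
                    lo = mid
    return requested

random.seed(0)
n = 500
counts = [rank_1d([random.random() for _ in range(n)], random.random())
          for _ in range(20)]
avg = sum(counts) / len(counts)
# Only on the order of log n of the C(500, 2) = 124750 queries are requested.
assert 0 < avg < 200
```

Summing the ambiguity probability a·2/k^2 over the k queries for each newly inserted object gives roughly a constant times the harmonic sum, i.e., O(log n) requests in total, which is what the simulation exhibits.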
5 Robust sequential algorithm for query selection We now extend the algorithm of Figure 1 to situations in which the response to each query is only probably correct. If the correct label of a query qi,j is yi,j, we denote the possibly incorrect response by Yi,j. The probability that Yi,j = yi,j is at least 1 − p, p < 1/2. The robust algorithm operates in the same fashion as the algorithm in Figure 1, with the exception that when an ambiguous query is encountered, several (equivalent) queries are made and a decision is based on the majority vote. This voting procedure allows us to construct a ranking (or partial ranking) that is correct with high probability by requesting just O(d log2 n) queries, where the extra log factor comes from voting. First consider the case in which each query can be repeated to obtain multiple independent responses (votes) for each comparison query. This random noise model arises, for example, in social choice theory where the “reference” is a group of people, each casting a vote. The elementary proof of the next theorem is given in [7]. Theorem 4. Assume A1-2 and σ ∼ U, but that each query response is a realization of an i.i.d. Bernoulli random variable Yi,j with P(Yi,j ≠ yi,j) ≤ p < 1/2. If all ambiguous queries are decided by the majority vote of R independent responses to each such query, then with probability greater than 1 − 2n log2(n) exp(−(1/2)(1 − 2p)^2 R) this procedure correctly identifies the correct ranking and requests no more than O(Rd log n) queries on average. In other situations, if we ask the same query multiple times we may get the same, possibly incorrect, response each time. This persistent noise model is natural, for example, if the reference is a single human. Under this model, if two rankings differ by only a single pairwise comparison, then they cannot be distinguished with probability greater than 1 − p. So, in general, exact recovery of the ranking cannot be guaranteed with high probability.
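Under the repeated-query (random noise) model, the voting step of Theorem 4 is simple to implement. In this sketch (our names, not the paper's), each ambiguous query is answered by a majority over R independent p-noisy responses, and the factor exp(−(1/2)(1 − 2p)^2 R) from the theorem, which is the Hoeffding bound on a single vote failing, upper-bounds the empirical failure rate.

```python
import math
import random

def majority_vote(true_label, p, R, rng):
    """Resolve one ambiguous query by majority over R independent responses,
    each flipped with probability p (sketch of the Theorem 4 voting step)."""
    correct = sum(rng.random() >= p for _ in range(R))  # count correct responses
    return true_label if correct > R / 2 else (not true_label)

def vote_failure_bound(p, R):
    """Hoeffding bound on a single majority vote failing: exp(-(1/2)(1-2p)^2 R)."""
    return math.exp(-0.5 * (1.0 - 2.0 * p) ** 2 * R)

rng = random.Random(3)
p, R = 0.3, 101  # R odd avoids ties
trials = 2000
errors = sum(majority_vote(True, p, R, rng) is not True for _ in range(trials))
assert errors / trials <= vote_failure_bound(p, R) + 0.05
```

Taking R on the order of log n and applying a union bound over the O(d n log n) expected ambiguous queries yields the overall success probability quoted in the theorem.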
The best we can hope for is to exactly recover a partial ranking of the objects (i.e., the ranking over a subset of the objects). Henceforth, we will assume the noise is persistent and aim to exactly recover a partial ranking of the objects. The key ingredient in the persistent noise setting is the design of a voting set for each ambiguous query encountered. Suppose that at the jth object in the algorithm in Figure 1 the query qi,j is ambiguous. In principle, a voting set could be constructed using objects ranked between i and j. If object k is between i and j, then note that yi,j = yi,k = yk,j. In practice, we cannot identify the subset of objects ranked between i and j, but it is contained within the set Ti,j, defined to be the subset of objects θk such that qi,k, qk,j, or both are ambiguous. Furthermore, Lemma 3 implies that each object in Ti,j is ranked between i and j with probability at least 1/3 [7]. Ti,j will be our voting set. Note, however, that if objects i and j are closely ranked, then Ti,j may be rather small, and so it is not always possible to find a sufficiently large voting set. Therefore, we must specify a size-threshold R ≥ 1. If the size of Ti,j is at least R, then we decide the label for qi,j by voting over the responses to {qi,k, qk,j : k ∈ Ti,j} and qi,j; otherwise we pass over object j and move on to the next object in the list.

[Figure 3: Mean and standard deviation of requested queries (solid) in the noiseless case for n = 100; log2 |Σn,d| is a lower bound (dashed).]

Table 1: Statistics for the algorithm robust to persistent noise of Section 5 with respect to all (n choose 2) pairwise comparisons. Recall y is the noisy response vector, ỹ is the embedding's solution, and ŷ is the output of the robust algorithm.

                                 Dimension 2   Dimension 3
 % of queries requested (mean)   14.5          18.5
 % of queries requested (std)    5.3           6
 Average error d(y, ỹ)           0.23          0.21
 Average error d(y, ŷ)           0.31          0.29
This allows us to construct a probably correct ranking of the objects that are not passed over. The theorem below proves that a large portion of objects will not be passed over. At the end of the process, some objects that were passed over may then be unambiguously ranked (based on queries made after they were passed over) or they can be ranked without voting (and without guarantees). The proof of the next theorem is provided in the longer version of this paper [7]. Theorem 5. Assume A1-2, σ ∼ U, and P(Yi,j ≠ yi,j) = p. For any size-threshold R ≥ 1, with probability greater than 1 − 2n log2(n) exp(−(2/9)(1 − 2p)^2 R) the procedure above correctly ranks at least n/(2R + 1) objects and requests no more than O(Rd log n) queries on average. 6 Empirical results In this section we present empirical results for both the noiseless algorithm of Figure 1 and the robust algorithm of Section 5. For the noiseless algorithm, n = 100 points, representing the objects to be ranked, were simulated uniformly at random from the unit hypercube [0, 1]^d for d = 1, 10, 20, . . . , 100. The reference was simulated from the same distribution. For each value of d the experiment was repeated 25 times using a new simulation of points and the reference. Because responses are noiseless, exact identification of the ranking is guaranteed. The number of requested queries is plotted in Figure 3 with the lower bound of Theorem 1 for reference. The number of requested queries never exceeds twice the lower bound, which agrees with the result of Theorem 3. The robust algorithm in Section 5 was evaluated using a symmetric similarity matrix dataset available at [20] whose (i, j)th entry, denoted si,j, represents the human-judged similarity between audio signals i and j for all i ≠ j ∈ {1, . . . , 100}. If we consider the kth row of this matrix, we can rank the other signals with respect to their similarity to the kth signal; we define q(k)i,j := {sk,i > sk,j} and y(k)i,j := 1{q(k)i,j}.
Since the similarities were derived from human subjects, the derived labels may be erroneous. Moreover, there is no possibility of repeating queries here, and so the noise is persistent. The analysis of this dataset in [2] suggests that the relationship between signals can be well approximated by an embedding in 2 or 3 dimensions. We used non-metric multidimensional scaling [5] to find an embedding of the signals: θ1, . . . , θ100 ∈ Rd for d = 2 and 3. For each object θk, we use the embedding to derive pairwise comparison labels between all other objects as follows: ỹ(k)i,j := 1{||θk − θi|| < ||θk − θj||}, which can be considered as the best approximation to the labels y(k)i,j (defined above) in this embedding. The output of the robust sequential algorithm, which uses only a small fraction of the similarities, is denoted by ŷ(k)i,j. We set R = 15 using Theorem 5 as a rough guide. Using the popular Kendall-tau distance d(y(k), ŷ(k)) = (n choose 2)^{−1} Σ_{i<j} 1{y(k)i,j ≠ ŷ(k)i,j} [21] for each object k, we denote the average of this metric over all objects by d(y, ŷ) and report this statistic and the number of queries requested in Table 1. Because the average error of ŷ is only 0.07 higher than that of ỹ, this suggests that the algorithm is doing almost as well as we could hope. Also, note that 2R · 2d log n / (n choose 2) is equal to 11.4% and 17.1% for d = 2 and 3, respectively, which agrees well with the experimental values. References [1] D. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, 1998. [2] Scott Philips, James Pitton, and Les Atlas. Perceptual feature identification for active sonar echoes. In OCEANS 2006, 2006. [3] B. McFee and G. Lanckriet. Partial order embedding with multiple kernels. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 721–728. ACM, 2009. [4] I. Gormley and T. Murphy. A latent space model for rank data.
Statistical Network Analysis: Models, Issues, and New Directions, pages 90–102, 2007. [5] M.A.A. Cox and T.F. Cox. Multidimensional scaling. Handbook of Data Visualization, pages 315–347, 2008. [6] J.F. Traub. Information-Based Complexity. John Wiley and Sons Ltd., 2003. [7] Kevin G. Jamieson and Robert D. Nowak. Active ranking using pairwise comparisons. arXiv:1109.3701v1, 2011. [8] C.H. Coombs. A theory of data. Psychological Review, 67(3):143–159, 1960. [9] T.M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, 14(3):326–334, 1965. [10] S. Dasgupta, A.T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. The Journal of Machine Learning Research, 10:281–299, 2009. [11] S. Hanneke. Theoretical foundations of active learning. PhD thesis, Citeseer, 2009. [12] Tibor Hegedüs. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, COLT '95, pages 108–117, New York, NY, USA, 1995. ACM. [13] Y. Freund, R. Iyer, R.E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. The Journal of Machine Learning Research, 4:933–969, 2003. [14] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, pages 89–96. ACM, 2005. [15] Z. Zheng, K. Chen, G. Sun, and H. Zha. A regression framework for learning ranking functions using relative relevance judgments. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 287–294. ACM, 2007. [16] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression. In Artificial Neural Networks, 1999. ICANN 99. Ninth International Conference on (Conf.
Publ. No. 470), volume 1, pages 97–102. IET, 1999. [17] T. Lu and C. Boutilier. Robust approximation and incremental elicitation in voting protocols. IJCAI-11, Barcelona, 2011. [18] W. Chu and Z. Ghahramani. Extensions of Gaussian processes for ranking: semi-supervised and active learning. Learning to Rank, page 29, 2005. [19] J.F. Bennett and W.L. Hays. Multidimensional unfolding: Determining the dimensionality of ranked preference data. Psychometrika, 25(1):27–43, 1960. [20] Similarity Learning. Aural Sonar dataset. [http://idl.ee.washington.edu/SimilarityLearning]. University of Washington Information Design Lab, 2011. [21] J.I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall/CRC, 1995.
2011
Selecting Receptive Fields in Deep Networks Adam Coates Department of Computer Science Stanford University Stanford, CA 94305 acoates@cs.stanford.edu Andrew Y. Ng Department of Computer Science Stanford University Stanford, CA 94305 ang@cs.stanford.edu Abstract Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer. Unfortunately, for such large architectures the number of parameters can grow quadratically in the width of the network, thus necessitating hand-coded “local receptive fields” that limit the number of connections from lower level features to higher ones (e.g., based on spatial locality). In this paper we propose a fast method to choose these connections that may be incorporated into a wide variety of unsupervised training methods. Specifically, we choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric. This approach allows us to harness the advantages of local receptive fields (such as improved scalability, and reduced data requirements) when we do not know how to specify such receptive fields by hand or where our unsupervised training algorithm has no obvious generalization to a topographic setting. We produce results showing how this method allows us to use even simple unsupervised training algorithms to train successful multi-layered networks that achieve state-of-the-art results on CIFAR and STL datasets: 82.0% and 60.1% accuracy, respectively. 1 Introduction Much recent research has focused on training deep, multi-layered networks of feature extractors applied to challenging visual tasks like object recognition. An important practical concern in building such networks is to specify how the features in each layer connect to the features in the layers beneath. 
Traditionally, the number of parameters in networks for visual tasks is reduced by restricting higher level units to receive inputs only from a “receptive field” of lower-level inputs. For instance, in the first layer of a network used for object recognition it is common to connect each feature extractor to a small rectangular area within a larger image instead of connecting every feature to the entire image [14, 15]. This trick dramatically reduces the number of parameters that must be trained and is a key element of several state-of-the-art systems [4, 19, 6]. In this paper, we propose a method to automatically choose such receptive fields in situations where we do not know how to specify them by hand—a situation that, as we will explain, is commonly encountered in deep networks. There are now many results in the literature indicating that large networks with thousands of unique feature extractors are top competitors in applications and benchmarks (e.g., [4, 6, 9, 19]). A major obstacle to scaling up these representations further is the blowup in the number of network parameters: for n input features, a complete representation with n features requires a matrix of n^2 weights—one weight for every feature and input. This blowup leads to a number of practical problems: (i) it becomes difficult to represent, and even more difficult to update, the entire weight matrix during learning, (ii) feature extraction becomes extremely slow, and (iii) many algorithms and techniques (like whitening and local contrast normalization) are difficult to generalize to large, unstructured input domains. As mentioned above, we can solve this problem by limiting the “fan in” to each feature by connecting each feature extractor to a small receptive field of inputs. In this work, we will propose a method that chooses these receptive fields automatically during unsupervised training of deep networks.
The scheme can operate without prior knowledge of the underlying data and is applicable to virtually any unsupervised feature learning or pre-training pipeline. In our experiments, we will show that when this method is combined with a recently proposed learning system, we can construct highly scalable architectures that achieve accuracy on CIFAR-10 and STL datasets beyond the best previously published. It may not be clear yet why it is necessary to have an automated way to choose receptive fields since, after all, it is already common practice to pick receptive fields simply based on prior knowledge. However, this type of solution is insufficient for large, deep representations. For instance, in local receptive field architectures for image data, we typically train a bank of linear filters that apply only to a small image patch. These filters are then convolved with the input image to yield the first layer of features. As an example, if we train 100 5-by-5 pixel filters and convolve them with a 32-by-32 pixel input, then we will get a 28-by-28-by-100 array of features. Each 2D grid of 28-by-28 feature responses for a single filter is frequently called a “map” [14, 4]. Though there are still spatial relationships amongst the feature values within each map, it is not clear how two features in different maps are related. Thus when we train a second layer of features we must typically resort to connecting each feature to every input map or to a random subset of maps [12, 4] (though we may still take advantage of the remaining spatial organization within each map). At even higher layers of deep networks, this problem becomes extreme: our array of responses will have very small spatial resolution (e.g., 1-by-1) yet will have a large number of maps and thus we can no longer make use of spatial receptive fields. This problem is exacerbated further when we try to use very large numbers of maps, which are often necessary to achieve top performance [4, 5].
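The arithmetic behind the 28-by-28-by-100 example is just the "valid" convolution output size (stride 1, no padding). A trivial sketch with a hypothetical helper name:

```python
def valid_conv_output(image_size, filter_size, num_filters):
    """Shape of the feature array from convolving num_filters square filters of
    side filter_size with a square image of side image_size (valid, stride 1)."""
    out = image_size - filter_size + 1
    return (out, out, num_filters)

# 100 5x5 filters over a 32x32 image -> a 28x28x100 array: 100 "maps" of 28x28.
assert valid_conv_output(32, 5, 100) == (28, 28, 100)
# At higher layers the spatial resolution can collapse to 1x1, leaving only maps.
assert valid_conv_output(28, 28, 16) == (1, 1, 16)
```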
In this work we propose a way to address the problem of choosing receptive fields that is not only a flexible addition to unsupervised learning and pre-training pipelines, but that can scale up to the extremely large networks used in state-of-the-art systems. In our method we select local receptive fields that group together (pre-trained) lower-level features according to a pairwise similarity metric between features. Each receptive field is constructed using a greedy selection scheme so that it contains features that are similar according to the similarity metric. Depending on the choice of metric, we can cause our system to choose receptive fields that are similar to those that might be learned implicitly by popular learning algorithms like ICA [11]. Given the learned receptive fields (groups of features) we can subsequently apply an unsupervised learning method independently over each receptive field. Thus, this method frees us to use any unsupervised learning algorithm to train the weights of the next layer. Using our method in conjunction with the pipeline proposed by [6], we demonstrate the ability to train multi-layered networks using only vector quantization as our unsupervised learning module. All of our results are achieved without supervised fine-tuning (i.e., backpropagation), and thus rely heavily on the success of the unsupervised learning procedure. Nevertheless, we attain the best known performances on the CIFAR-10 and STL datasets. We will now discuss some additional work related to our approach in Section 2. Details of our method are given in Section 3 followed by our experimental results in Section 4. 2 Related Work While much work has focused on different representations for deep networks, an orthogonal line of work has investigated the effect of network structure on performance of these systems. 
Much of this line of inquiry has sought to identify the best choices of network parameters such as size, activation function, pooling method and so on [12, 5, 3, 16, 19]. Through these investigations a handful of key factors have been identified that strongly influence performance (such as the type of pooling, activation function, and number of features). These works, however, do not address the finer-grained questions of how to choose the internal structure of deep networks directly. Other authors have tackled the problem of architecture selection more generally. One approach is to search for the best architecture. For instance, Saxe et al. [18] propose using randomly initialized networks (forgoing the expense of training) to search for a high-performing structure. Pinto et al. [17], on the other hand, use a screening procedure to choose from amongst large numbers of randomly composed networks, collecting the best performing networks. More powerful modeling and optimization techniques have also been used for learning the structure of deep networks in-situ. For instance, Adams et al. [1] use a non-parametric Bayesian prior to jointly infer the depth and number of hidden units at each layer of a deep belief network during training. Zhang and Chan [21] use an L1 penalization scheme to zero out many of the connections in an otherwise bipartite structure. Unfortunately, these methods require optimizations that are as complex or expensive as the algorithms they augment, thus making it difficult to achieve computational gains from any architectural knowledge discovered by these systems. In this work, the receptive fields will be built by analyzing the relationships between feature responses rather than relying on prior knowledge of their organization. A popular alternative solution is to impose topographic organization on the feature outputs during training.
In general, these learning algorithms train a set of features (usually linear filters) such that features nearby in a pre-specified topography share certain characteristics. The Topographic ICA algorithm [10], for instance, uses a probabilistic model that implies that nearby features in the topography have correlated variances (i.e., energies). This statistical measure of similarity is motivated by empirical observations of neurons and has been used in other analytical models [20]. Similar methods can be obtained by imposing group sparsity constraints so that features within a group tend to be on or off at the same time [7, 8]. These methods have many advantages but require us to specify a topography first, then solve a large-scale optimization problem in order to organize our features according to the given topographic layout. This will typically involve many epochs of training and repeated feature evaluations in order to succeed. In this work, we perform this procedure in reverse: our features are pre-trained using whatever method we like, then we will extract a useful grouping of the features post-hoc. This approach has the advantage that it can be scaled to large distributed clusters and is very generic, allowing us to potentially use different types of grouping criteria and learning strategies in the future with few changes. In that respect, part of the novelty in our approach is to convert existing notions of topography and statistical dependence in deep networks into a highly scalable “wrapper method” that can be re-used with other algorithms. 
3 Algorithm Details In this section we will describe our approach to selecting the connections between high-level features and their lower-level inputs (i.e., how to “learn” the receptive field structure of the high-level features) from an arbitrary set of data based on a particular pairwise similarity metric: square correlation of feature responses. We will then explain how our method integrates with a typical learning pipeline and, in particular, how to couple our algorithm with the feature learning system proposed in [6], which we adopt since it has been shown previously to perform well on image recognition tasks. In what follows, we assume that we are given a dataset X of feature vectors x(i), i ∈ {1, . . . , m}, with elements x(i)j. These vectors may be raw features (e.g., pixel values) but will usually be features generated by lower layers of a deep network. 3.1 Similarity of Features In order to group features together, we must first define a similarity metric between features. Ideally, we should group together features that are closely related (e.g., because they respond to similar patterns or tend to appear together). By putting such features in the same receptive field, we allow their relationship to be modeled more finely by higher level learning algorithms. Meanwhile, it also makes sense to model seemingly independent subsets of features separately, and thus we would like such features to end up in different receptive fields. A number of criteria might be used to quantify this type of relationship between features. One popular choice is “square correlation” of feature responses, which partly underpins the Topographic ICA [10] algorithm. The idea is that if our dataset X consists of linearly uncorrelated features (as can be obtained by applying a whitening procedure), then a measure of the higher-order dependence between two features can be obtained by looking at the correlation of their energies (squared responses).
In particular, if we have $\mathbb{E}[x] = 0$ and $\mathbb{E}[xx^\top] = I$, then we will define the similarity between features $x_j$ and $x_k$ as the correlation between the squared responses:

$$S[x_j, x_k] = \mathrm{corr}(x_j^2, x_k^2) = \frac{\mathbb{E}[x_j^2 x_k^2] - 1}{\sqrt{(\mathbb{E}[x_j^4] - 1)(\mathbb{E}[x_k^4] - 1)}}.$$

This metric is easy to compute by first whitening our input dataset with ZCA whitening2 [2], then computing the pairwise similarities between all of the features:

$$S_{j,k} \equiv S_X[x_j, x_k] \equiv \frac{\sum_i \big(x^{(i)}_j\big)^2 \big(x^{(i)}_k\big)^2 - 1}{\sqrt{\sum_i \big(\big(x^{(i)}_j\big)^4 - 1\big)\, \sum_i \big(\big(x^{(i)}_k\big)^4 - 1\big)}}. \qquad (1)$$

Footnote 1: Though we use this metric throughout, and propose some extensions, it can be replaced by many other choices such as the mutual information between two features.

This computation is completely practical for fewer than 5000 input features. For fewer than 10000 features it is feasible but somewhat arduous: we must not only hold a 10000-by-10000 matrix in memory but we must also whiten our 10000-feature dataset, which requires a singular value or eigenvalue decomposition. We will explain how this expense can be avoided in Section 3.3, after we describe our receptive field learning procedure.

3.2 Selecting Local Receptive Fields

We now assume that we have available to us the matrix of pairwise similarities between features $S_{j,k}$ computed as above. Our goal is to construct "receptive fields": sets of features $R_n$, $n = 1, \ldots, N$, whose responses will become the inputs to one or more higher-level features. We would like for each $R_n$ to contain pairs of features with large values of $S_{j,k}$. We might achieve this using various agglomerative or spectral clustering methods, but we have found that a simple greedy procedure works well: we choose one feature as a seed, and then group it with its nearest neighbors according to the similarities $S_{j,k}$. In detail, we first select N rows, $j_1, \ldots, j_N$, of the matrix S at random (corresponding to a random choice of features $x_{j_n}$ to be the seed of each group).
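As a concrete illustration of Equation (1), the following sketch (with made-up data sizes; this is our own illustration, not the paper's implementation) ZCA-whitens a small synthetic dataset and then computes the full square-correlation matrix S:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for lower-layer feature responses: m examples, d features.
m, d = 2000, 8
X = rng.standard_normal((m, d)) @ rng.standard_normal((d, d))

# ZCA whitening, so that E[x] = 0 and E[x x^T] = I afterwards.
X = X - X.mean(axis=0)
Sigma = X.T @ X / m
V, D, _ = np.linalg.svd(Sigma)
P = V @ np.diag(1.0 / np.sqrt(D + 1e-8)) @ V.T
Xw = X @ P

# Square-correlation similarity (Equation 1): correlation of energies.
E2 = Xw ** 2
num = E2.T @ E2 / m - 1.0                 # E[x_j^2 x_k^2] - 1
var = (Xw ** 4).mean(axis=0) - 1.0        # E[x_j^4] - 1
S = num / np.sqrt(np.outer(var, var) + 1e-12)
```

The resulting S is symmetric with ones (up to the small stabilizer) on its diagonal; the small epsilons guard against degenerate features and are our own choice.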
We then construct a receptive field $R_n$ that contains the features $x_k$ corresponding to the top T values of $S_{j_n,k}$. We typically use T = 200, though our results are not too sensitive to this parameter. Upon completion, we have N (possibly overlapping) receptive fields $R_n$ that can be used during training of the next layer of features.

3.3 Approximate Similarity

Computing the similarity matrix $S_{j,k}$ using square correlation is practical for fairly large numbers of features using the obvious procedure given above. However, if we want to learn receptive fields over huge numbers of features (as arise, for instance, when we use hundreds or thousands of maps), we may often be unable to compute S directly. For instance, as explained above, if we use square correlation as our similarity criterion then we must perform whitening over a large number of features. Note, however, that the greedy grouping scheme we use requires only N rows of the matrix. Thus, provided we can compute $S_{j,k}$ for a single pair of features, we can avoid storing the entire matrix S. To avoid performing the whitening step for all of the input features, we can instead perform pair-wise whitening between features. Specifically, to compute the squared correlation of $x_j$ and $x_k$, we whiten the jth and kth columns of X together (independently of all other columns), then compute the square correlation between the whitened values $\hat{x}_j$ and $\hat{x}_k$. Though this procedure is not equivalent to performing full whitening, it appears to yield effective estimates for the squared correlation between two features in practice. For instance, for a given "seed", the receptive field chosen using this approximation typically overlaps with the "true" receptive field (computed with full whitening) by 70% or more. More importantly, our final results (Section 4) are unchanged compared to the exact procedure.
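The greedy selection just described can be sketched in a few lines (the matrix size, N, and T below are toy values of our own choosing, not the paper's settings):

```python
import numpy as np

def select_receptive_fields(S, N, T, rng):
    """Greedy selection from Section 3.2: pick N random seed rows of the
    similarity matrix S, then group each seed with the features having
    the top-T similarity values in that row."""
    seeds = rng.choice(S.shape[0], size=N, replace=False)
    fields = []
    for j in seeds:
        order = np.argsort(-S[j])       # most similar features first
        fields.append(set(order[:T]))   # fields may overlap
    return seeds, fields

rng = np.random.default_rng(1)
S = rng.random((100, 100))
S = (S + S.T) / 2                       # similarities are symmetric
np.fill_diagonal(S, 1.0)                # a feature is most similar to itself
seeds, fields = select_receptive_fields(S, N=4, T=10, rng=rng)
```

Because the diagonal dominates, each seed always lands in its own receptive field.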
Compared to the "brute force" computation of the similarity matrix, the approximation described above is very fast and easy to distribute across a cluster of machines. Specifically, the 2x2 ZCA whitening transform for a pair of features can be computed analytically, and thus we can express the pair-wise square correlations analytically as a function of the original inputs without having to numerically perform the whitening on all pairs of features. If we assume that all of the input features $x^{(i)}$ are zero-mean and unit variance, then we have:

$$\hat{x}^{(i)}_j = \tfrac{1}{2}\big((\gamma_{jk} + \beta_{jk})\, x^{(i)}_j + (\gamma_{jk} - \beta_{jk})\, x^{(i)}_k\big)$$
$$\hat{x}^{(i)}_k = \tfrac{1}{2}\big((\gamma_{jk} - \beta_{jk})\, x^{(i)}_j + (\gamma_{jk} + \beta_{jk})\, x^{(i)}_k\big)$$

where $\beta_{jk} = (1 - \alpha_{jk})^{-1/2}$, $\gamma_{jk} = (1 + \alpha_{jk})^{-1/2}$ and $\alpha_{jk}$ is the correlation between $x_j$ and $x_k$. Substituting $\hat{x}^{(i)}$ for $x^{(i)}$ in Equation 1 and expanding yields an expression for the similarity $S_{j,k}$ in terms of the pair-wise moments of each feature (up to fourth order). We can typically implement these computations in a single pass over the dataset that accumulates the needed statistics and then selects the receptive fields based on the results. Many alternative methods (e.g., Topographic ICA) would require some form of distributed optimization algorithm to achieve a similar result, which requires many feed-forward and feed-back passes over the dataset. In contrast, the above method is typically less expensive than a single feed-forward pass (to compute the feature values $x^{(i)}$) and is thus very fast compared to other conceivable solutions.

Footnote 2: If $\mathbb{E}[xx^\top] = \Sigma = VDV^\top$, ZCA whitening uses the transform $P = VD^{-1/2}V^\top$ to compute the whitened vector $\hat{x}$ as $\hat{x} = Px$.

3.4 Learning Architecture

We have adopted the architecture of [6], which has previously been applied with success to image recognition problems.
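To sanity-check the closed-form 2x2 whitening, the sketch below (our own illustration on synthetic data) computes the pairwise-whitened square correlation for a pair of features that share a common "energy" source, and for an independent pair; only the first pair should score highly:

```python
import numpy as np

def pairwise_sqcorr(xj, xk):
    """Square correlation after pairwise (2x2 ZCA) whitening,
    using the closed-form transform from the text."""
    xj = (xj - xj.mean()) / xj.std()
    xk = (xk - xk.mean()) / xk.std()
    a = np.mean(xj * xk)                  # correlation alpha_jk
    b = (1.0 - a) ** -0.5                 # beta_jk
    g = (1.0 + a) ** -0.5                 # gamma_jk
    wj = 0.5 * ((g + b) * xj + (g - b) * xk)
    wk = 0.5 * ((g - b) * xj + (g + b) * xk)
    num = np.mean(wj**2 * wk**2) - 1.0
    den = np.sqrt((np.mean(wj**4) - 1.0) * (np.mean(wk**4) - 1.0))
    return num / den

rng = np.random.default_rng(2)
n = 50000
# Two features driven by a shared "energy" s: uncorrelated values,
# but strongly correlated energies.
s = np.abs(rng.standard_normal(n)) + 0.5
xj, xk = s * rng.standard_normal(n), s * rng.standard_normal(n)
dependent = pairwise_sqcorr(xj, xk)
independent = pairwise_sqcorr(rng.standard_normal(n),
                              rng.standard_normal(n))
```

The dependent pair yields a clearly positive score while the independent pair sits near zero, mirroring the variance-correlation intuition behind Topographic ICA.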
In this section we will briefly review this system as it is used in conjunction with our receptive field learning approach, but it should be noted that our basic method is equally applicable to many other choices of processing pipeline and unsupervised learning method. The architecture proposed by [6] works by constructing a feature representation of a small image patch (say, a 6-by-6 pixel region) and then extracting these features from many overlapping patches within a larger image (much like a convolutional neural net). Let $X \in \mathbb{R}^{m \times 108}$ be a dataset composed of a large number of 3-channel (RGB), 6-by-6 pixel image patches extracted from random locations in unlabeled training images and let $x^{(i)} \in \mathbb{R}^{108}$ be the vector of RGB pixel values representing the ith patch. Then the system in [6] applies the following procedure to learn a new representation of an image patch:

1. Normalize each example $x^{(i)}$ by subtracting out the mean and dividing by the norm. Apply a ZCA whitening transform to $x^{(i)}$ to yield $\hat{x}^{(i)}$.
2. Apply an unsupervised learning algorithm (e.g., K-means or sparse coding) to obtain a (normalized) set of linear filters (a "dictionary"), D.
3. Define a mapping from the whitened input vectors $\hat{x}^{(i)}$ to output features given the dictionary D. We use a soft threshold function that computes each feature $f^{(i)}_j$ as $f^{(i)}_j = \max\{0, D(j)^\top \hat{x}^{(i)} - t\}$ for a fixed threshold t.

The computed feature values for each example, $f^{(i)}$, become the new representation for the patch $x^{(i)}$. We can now apply the learned feature extractor produced by this method to a larger image, say, a 32-by-32 pixel RGB color image. This large image can be represented generally as a long vector with 32 × 32 × 3 = 3072 elements.
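The soft-threshold encoding in step 3 can be sketched as follows (the data and dictionary here are random stand-ins; the paper learns D with K-means or sparse coding):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical whitened patch data: 200 patches of 6x6x3 = 108 dims.
Xw = rng.standard_normal((200, 108))

# Stand-in "dictionary" of K normalized linear filters.
K = 16
D = rng.standard_normal((K, 108))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Soft-threshold mapping: f_j = max(0, D(j)^T x_hat - t).
t = 0.25
F = np.maximum(0.0, Xw @ D.T - t)
```

The threshold t zeroes out weak filter responses, producing a sparse, non-negative feature matrix F with one row per patch.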
To compute its feature representation we simply extract features from every overlapping patch within the image (using a stride of 1 pixel between patches) and then concatenate all of the features into a single vector, yielding a (usually large) new representation of the entire image. Clearly we can modify this procedure to use choices of receptive fields other than 6-by-6 patches of images. Concretely, given the 32-by-32 pixel image, we could break it up into arbitrary choices of overlapping sets $R_n$ where each $R_n$ includes a subset of the RGB values of the whole image. Then we apply the procedure outlined above to each set of features $R_n$ independently, followed by concatenating all of the extracted features. In general, if X is now any training set (not necessarily image patches), we can define $X_{R_n}$ as the training set X reduced to include only the features in one receptive field, $R_n$ (that is, we simply discard all of the columns of X that do not correspond to features in $R_n$). We may then apply the feature learning and extraction methods above to each $X_{R_n}$ separately, just as we would for the hand-chosen patch receptive fields used in previous work.

3.5 Network Details

The above components, conceptually, allow us to lump together arbitrary types and quantities of data into our unlabeled training set and then automatically partition them into receptive fields in order to learn higher-level features. The automated receptive field selection can choose receptive fields that span multiple feature maps, but the receptive fields will often span only small spatial areas (since features extracted from locations far apart tend to appear nearly independent). Thus, we will also exploit spatial knowledge to enable us to use large numbers of maps rather than trying to treat the entire input as unstructured data.
Note that this is mainly to reduce the expense of feature extraction and to allow us to use spatial pooling (which introduces some invariance between layers of features); the receptive field selection method itself can be applied to hundreds of thousands of inputs. We now detail the network structure used in our experiments, which incorporates these considerations. First, there is little point in applying the receptive field learning method to the raw pixel layer. Thus, we use 6-by-6 pixel receptive fields with a stride (step) of 1 pixel between them for the first layer of features. If the first layer contains K1 maps (i.e., K1 filters), then a 32-by-32 pixel color image takes on a 27-by-27-by-K1 representation after the first layer of (convolutional) feature extraction. Second, depending on the unsupervised learning module, it can be difficult to learn features that are invariant to image transformations like translation. This is handled traditionally by incorporating "pooling" layers [3, 14]. Here we use average pooling over adjacent, disjoint 3-by-3 spatial blocks. Thus, applied to the 27-by-27-by-K1 representation from layer 1, this yields a 9-by-9-by-K1 pooled representation. After extracting the 9-by-9-by-K1 pooled representation from the first two layers, we apply our receptive field selection method. We could certainly apply the algorithm to the entire high-dimensional representation. As explained above, however, it is useful to retain spatial structure so that we can perform spatial pooling and convolutional feature extraction. Thus, rather than applying our algorithm to the entire input, we apply the receptive field learning to 2-by-2 spatial regions within the 9-by-9-by-K1 pooled representation, so the receptive field learning algorithm must find receptive fields to cover 2 × 2 × K1 inputs. The next layer of feature learning then operates on each receptive field within the 2-by-2 spatial regions separately.
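The dimensions quoted above can be verified with a short sketch (our own illustration; it uses 4 maps instead of K1 = 1600 so that it runs quickly):

```python
import numpy as np

def conv_output_size(image_size, patch_size, stride=1):
    """Spatial size after dense (valid) patch extraction."""
    return (image_size - patch_size) // stride + 1

conv = conv_output_size(32, 6)   # 6-by-6 patches on a 32-by-32 image
pooled = conv // 3               # disjoint 3-by-3 average pooling

# Average pooling over disjoint 3x3 blocks of a conv x conv x maps volume.
maps = 4
resp = np.random.default_rng(4).standard_normal((conv, conv, maps))
pooled_resp = resp.reshape(pooled, 3, pooled, 3, maps).mean(axis=(1, 3))
```

This reproduces the 32 → 27 → 9 progression described in the text; each pooled unit is the mean of one 3-by-3 block of first-layer responses.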
This is similar to the structure commonly employed by prior work [4, 12], but here we are able to choose receptive fields that span several feature maps in a deliberate way while also exploiting knowledge of the spatial structure. In our experiments we will benchmark our system on image recognition datasets using K1 = 1600 first layer maps and K2 = 3200 second layer maps learned from N = 32 receptive fields. When we use three layers, we apply an additional 2-by-2 average pooling stage to the layer 2 outputs (with stride of 1) and then train K3 = 3200 third layer maps (again with N = 32 receptive fields). To construct a final feature representation for classification, the outputs of the first and second layers of trained features are average-pooled over quadrants as is done by [6]. Thus, our first layer of features results in 1600 × 4 = 6400 values in the final feature vector, and our second layer of features results in 3200 × 4 = 12800 values. When using a third layer, we use average pooling over the entire image to yield 3200 additional feature values. The features for all layers are then concatenated into a single long vector and used to train a linear classifier (L2-SVM).

4 Experimental Results

We have applied our method to several benchmark visual recognition problems: the CIFAR-10 and STL datasets. In addition to training on the full CIFAR training set, we also provide results of our method when we use only 400 training examples per class to compare with other single-layer results in [6]. The CIFAR-10 examples are all 32-by-32 pixel color images. For the STL dataset, we downsample the (96 pixel) images to 32 pixels. We use the pipeline detailed in Section 3.4, with vector quantization (VQ) as the unsupervised learning module to train up to 3 layers.
For each set of experiments we provide test results for 1 to 3 layers of features, where the receptive fields for the 2nd and 3rd layers of features are learned using the method of Section 3.2 and square-correlation for the similarity metric. For comparison, we also provide test results in each case using several alternative receptive field choices. In particular, we have also tested architectures where we use a single receptive field (N = 1) where $R_1$ contains all of the inputs, and random receptive fields (N = 32) where $R_n$ is filled according to the same algorithm as in Section 3.2, but where the matrix S is set to random values. The first method corresponds to the "completely connected", brute-force case described in the introduction, while the second is the "randomly connected" case. Note that in these cases we use the same spatial organization outlined in Section 3.5. For instance, the completely-connected layers are connected to all the maps within a 2-by-2 spatial window. Finally, we will also provide test results using a larger 1st layer representation (K1 = 4800 maps) to verify that the performance gains we achieve are not merely the result of passing more projections of the data to the supervised classification stage.

4.1 CIFAR-10

4.1.1 Learned 2nd-layer Receptive Fields and Features

Before we look at classification results, we first inspect the learned features and their receptive fields from the second layer (i.e., the features that take the pooled first-layer responses as their input). Figure 1 shows two typical examples of receptive fields chosen by our method when using square-correlation as the similarity metric. In both of the examples, the receptive field incorporates filters with similar orientation tuning but varying phase, frequency and, sometimes, varying color. The position of the filters within each window indicates its location in the 2-by-2 region considered by the learning algorithm.
As we might expect, the filters in each group are visibly similar to those placed together by topographic methods like TICA that use related criteria.

Figure 1: Two examples of receptive fields chosen from 2-by-2-by-1600 image representations. Each box shows the low-level filter and its position (ignoring pooling) in the 2-by-2 area considered by the algorithm. Only the most strongly dependent features from the T = 200 total features are shown. (Best viewed in color.)

Figure 2: Most inhibitory (left) and excitatory (right) filters for two 2nd-layer features. (Best viewed in color.)

We also visualize some of the higher-level features constructed by the vector quantization algorithm when applied to these two receptive fields. The filters obtained from VQ assign weights to each of the lower level features in the receptive field. Those with a high positive weight are "excitatory" inputs (tending to lead to a high response when these input features are active) and those with a large negative weight are "inhibitory" inputs (tending to result in low filter responses). The 5 most inhibitory and excitatory inputs for two learned features are shown in Figure 2 (one from each receptive field in Figure 1). For instance, the two most excitatory filters of feature (a) tend to select for long, narrow vertical bars, inhibiting responses of wide bars.

4.1.2 Classification Results

We have tested our method on the task of image recognition using the CIFAR training and testing labels. Table 1 details our results using the full CIFAR dataset with various settings. We first note the comparison of our 2nd layer results with the alternative of a single large 1st layer using an equivalent number of maps (4800) and see that, indeed, our 2nd layer created with learned receptive fields performs better (81.2% vs. 80.6%). We also see that the random and single receptive field choices work poorly, barely matching the smaller single-layer network.
This appears to confirm our belief that grouping together similar features is necessary to allow our unsupervised learning module (VQ) to identify useful higher-level structure in the data. Finally, with a third layer of features, we achieve the best result to date on the full CIFAR dataset with 82.0% accuracy.

Table 1: Results on CIFAR-10 (full)

    Architecture            Accuracy (%)
    1 Layer                 78.3%
    1 Layer (4800 maps)     80.6%
    2 Layers (Single RF)    77.4%
    2 Layers (Random RF)    77.6%
    2 Layers (Learned RF)   81.2%
    3 Layers (Learned RF)   82.0%
    VQ (6000 maps) [6]      81.5%
    Conv. DBN [13]          78.9%
    Deep NN [4]             80.49%

Table 2: Results on CIFAR-10 (400 ex. per class)

    Architecture                 Accuracy (%)
    1 Layer                      64.6% (±0.8%)
    1 Layer (4800 maps)          63.7% (±0.7%)
    2 Layers (Single RF)         65.8% (±0.3%)
    2 Layers (Random RF)         65.8% (±0.9%)
    2 Layers (Learned RF)        69.2% (±0.7%)
    3 Layers (Learned RF)        70.7% (±0.7%)
    Sparse coding (1 layer) [6]  66.4% (±0.8%)
    VQ (1 layer) [6]             64.4% (±1.0%)

It is difficult to assess the strength of feature learning methods on the full CIFAR dataset because the performance may be attributed to the success of the supervised SVM training and not the unsupervised feature training. For this reason we have also performed classification using 400 labeled examples per class.3 Our results for this scenario are in Table 2. There we see that our 2-layer architecture significantly outperforms our 1-layer system as well as the two 1-layer architectures developed in [6]. As with the full CIFAR dataset, we note that it was not possible to achieve equivalent performance by merely expanding the first layer or by using either of the alternative receptive field structures (which, again, make minimal gains over a single layer).
4.2 STL-10

Finally, we also tested our algorithm on the STL-10 dataset [5]. Compared to CIFAR, STL provides many fewer labeled training examples (allowing 100 labeled instances per class for each training fold). Instead of relying on labeled data, one tries to learn from the provided unlabeled dataset, which contains images from a distribution that is similar to the labeled set but broader. We used the same architecture for this dataset as for CIFAR, but rather than train our features each time on the labeled training fold (which is too small), we use 20000 examples taken from the unlabeled dataset. Our results are reported in Table 3.

Table 3: Classification Results on STL-10

    Architecture                 Accuracy (%)
    1 Layer                      54.5% (±0.8%)
    1 Layer (4800 maps)          53.8% (±1.6%)
    2 Layers (Single RF)         55.0% (±0.8%)
    2 Layers (Random RF)         54.4% (±1.2%)
    2 Layers (Learned RF)        58.9% (±1.1%)
    3 Layers (Learned RF)        60.1% (±1.0%)
    Sparse coding (1 layer) [6]  59.0% (±0.8%)
    VQ (1 layer) [6]             54.9% (±0.4%)

Here we see increasing performance with higher levels of features once more, achieving state-of-the-art performance with our 3-layered model. This is especially notable since the higher level features have been trained purely from unlabeled data. We note, one more time, that none of the alternative architectures (which roughly represent common practice for training deep networks) makes significant gains over the single layer system.

5 Conclusions

We have proposed a method for selecting local receptive fields in deep networks. Inspired by the grouping behavior of topographic learning methods, our algorithm selects qualitatively similar groups of features directly using arbitrary choices of similarity metric, while also being compatible with any unsupervised learning algorithm we wish to use.
For one metric in particular (square correlation) we have employed our algorithm to choose receptive fields within multi-layered networks that lead to successful image representations for classification while still using only vector quantization for unsupervised learning, a relatively simple but highly scalable learning module. Among our results, we have achieved the best published accuracy on the CIFAR-10 and STL datasets. These performances are strengthened by the fact that they did not require the use of any supervised backpropagation algorithms. We expect that the method proposed here is a useful new tool for managing extremely large, higher-level feature representations where more traditional spatio-temporal local receptive fields are unhelpful or impossible to employ successfully.

Footnote 3: Our networks are still trained unsupervised from the entire training set.

References

[1] R. Adams, H. Wallach, and Z. Ghahramani. Learning the structure of deep sparse graphical models. In International Conference on AI and Statistics, 2010.
[2] A. Bell and T. J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37, 1997.
[3] Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning mid-level features for recognition. In Computer Vision and Pattern Recognition, 2010.
[4] D. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. Pre-print, 2011. http://arxiv.org/abs/1102.0183.
[5] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In International Conference on AI and Statistics, 2011.
[6] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, 2011.
[7] P. Garrigues and B. Olshausen. Group sparse coding with a Laplacian scale mixture prior. In Advances in Neural Information Processing Systems, 2010.
[8] K. Gregor and Y. LeCun. Emergence of complex-like cells in a temporal product network with local receptive fields, 2010.
[9] F. Huang and Y. LeCun. Large-scale learning with SVM and convolutional nets for generic object categorization. In Computer Vision and Pattern Recognition, 2006.
[10] A. Hyvärinen, P. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[11] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411–430, 2000.
[12] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, 2009.
[13] A. Krizhevsky. Convolutional Deep Belief Networks on CIFAR-10. Unpublished manuscript, 2010.
[14] Y. LeCun, F. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004.
[15] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning, 2009.
[16] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning, 2010.
[17] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Comput Biol, 2009.
[18] A. Saxe, P. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On random weights and unsupervised feature learning. In International Conference on Machine Learning, 2011.
[19] D. Scherer, A. Müller, and S. Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks, 2010.
[20] E. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically derived normalization model. Advances in Neural Information Processing Systems, 1998.
[21] K. Zhang and L. Chan. ICA with sparse connections. Intelligent Data Engineering and Automated Learning, 2006.
Learning Auto-regressive Models from Sequence and Non-sequence Data

Tzu-Kuo Huang, Machine Learning Department, Carnegie Mellon University, tzukuoh@cs.cmu.edu
Jeff Schneider, Robotics Institute, Carnegie Mellon University, schneide@cs.cmu.edu

Abstract

Vector Auto-regressive models (VAR) are useful tools for analyzing time series data. In quite a few modern time series modelling tasks, the collection of reliable time series turns out to be a major challenge, either due to the slow progression of the dynamic process of interest, or inaccessibility of repetitive measurements of the same dynamic process over time. In those situations, however, we observe that it is often easier to collect a large amount of non-sequence samples, or snapshots of the dynamic process of interest. In this work, we assume a small amount of time series data are available, and propose methods to incorporate non-sequence data into penalized least-square estimation of VAR models. We consider non-sequence data as samples drawn from the stationary distribution of the underlying VAR model, and devise a novel penalization scheme based on the Lyapunov equation concerning the covariance of the stationary distribution. Experiments on synthetic and video data demonstrate the effectiveness of the proposed methods.

1 Introduction

Vector Auto-regressive models (VAR) are an important class of models for analyzing multivariate time series data. They have proven to be very useful in capturing and forecasting the dynamic properties of time series in a number of domains, such as finance and economics [18, 13]. Recently, researchers in computational biology applied VAR models in the analysis of genomic time series [12], and found interesting results that were unknown previously. In quite a few scientific modeling tasks, a major difficulty turns out to be the collection of reliable time series data.
In some situations, the dynamic process of interest may evolve slowly over time, such as the progression of Alzheimer's or Parkinson's diseases, and researchers may need to spend months or even years tracking the dynamic process to obtain enough time series data for analysis. In other situations, the dynamic process of interest may not be able to undergo repetitive measurements, so researchers have to measure multiple instances of the same process while maintaining synchronization among these instances. One such example is gene expression time series. In their study, [19] measured expression profiles of yeast genes along consecutive metabolic cycles. Due to the destructive nature of the measurement technique, they collected expression data from multiple yeast cells. In order to obtain reliable time series data, they spent a lot of effort developing a stable environment to synchronize the cells during the metabolic cycles. Yet, they point out in their discussion that such a synchronization scheme may not work for other species, e.g., certain bacteria and fungi, as effectively as for yeast. While obtaining reliable time series can be difficult, we observe that it is often easier to collect non-sequence samples, or snapshots of the dynamic process of interest.1 For example, a scientist studying Alzheimer's or Parkinson's can collect samples from his or her current pool of patients, each of whom may be in a different stage of the disease. Or in gene expression analysis, current technology already enables large-scale collection of static gene expression data. Previously [6] investigated ways to extract dynamics from such static gene expression data, and more recently [8, 9] proposed methods for learning first-order dynamic models from general non-sequence data.

Footnote 1: In several disciplines, such as social and medical sciences, the former is usually referred to as a longitudinal study, while the latter is similar to what is called a cross-sectional study.
However, most of these efforts suffer from a fundamental limitation: due to lack of temporal information, multiple dynamic models may fit the data equally well and hence certain characteristics of dynamics, such as the step size of a discrete-time model and the overall temporal direction, become non-identifiable. In this work, we aim to combine these two types of data to improve learning of dynamic models. We assume that a small amount of sequence samples and a large amount of non-sequence samples are available. Our aim is to rely on the few sequence samples to obtain a rough estimate of the model, while refining this rough estimate using the non-sequence samples. We consider the following first-order p-dimensional vector auto-regressive model:

$$x^{t+1} = x^t A + \epsilon^{t+1}, \qquad (1)$$

where $x^t \in \mathbb{R}^{1 \times p}$ is the state vector at time t, $A \in \mathbb{R}^{p \times p}$ is the transition matrix, and $\epsilon^t$ is a white-noise process with a time-invariant variance $\sigma^2 I$. Given a sequence sample, a common estimation method for A is the least-square estimator, whose properties have been studied extensively (see e.g., [7]). We assume that the process (1) is stable, i.e., the eigenvalues of A have modulus less than one. As a result, the process (1) has a stationary distribution, whose covariance Q is determined by the following discrete-time Lyapunov equation:

$$A^\top Q A + \sigma^2 I = Q. \qquad (2)$$

Linear quadratic Lyapunov theory (see e.g., [1]) gives that Q is uniquely determined if and only if $\lambda_i(A)\lambda_j(A) \neq 1$ for $1 \le i, j \le p$, where $\lambda_i(A)$ is the i-th eigenvalue of A. If the noise process $\epsilon^t$ follows a normal distribution, the stationary distribution also follows a normal distribution, with covariance Q determined as above. Since our goal is to estimate A, a more relevant perspective is viewing (2) as a system of constraints on A. What motivates this work is that the estimation of Q requires only samples drawn from the stationary distribution rather than sequence data.
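The relationship between the model (1) and the Lyapunov equation (2) is easy to check numerically. The sketch below (our own illustration, with a toy transition matrix) solves (2) by fixed-point iteration and compares Q against the empirical covariance of a long simulated trajectory:

```python
import numpy as np

# A small stable VAR(1) transition matrix (spectral radius < 1);
# the values are a toy choice, not from the paper.
A = np.array([[0.5, 0.2],
              [-0.3, 0.4]])
sigma2 = 1.0

# Solve A^T Q A + sigma^2 I = Q by fixed-point iteration; stability
# of the process makes this map a contraction.
Q = np.zeros((2, 2))
for _ in range(500):
    Q = A.T @ Q @ A + sigma2 * np.eye(2)

# Simulate x^{t+1} = x^t A + eps (row-vector convention) and compare.
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = np.empty((100000, 2))
for t in range(100000):
    x = x @ A + rng.standard_normal(2)
    samples[t] = x
Q_emp = np.cov(samples[1000:].T)   # drop burn-in before estimating
```

The empirical covariance of the trajectory converges to the Q solving (2), which is exactly why a non-sequence sample from the stationary distribution carries information about A.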
However, even if we have the true Q and $\sigma^2$, we still cannot uniquely determine A because (2) is an underdetermined system2 of A. We thus rely on the few sequence samples to resolve the ambiguity. We describe the proposed methods in Section 2, and demonstrate their performance through experiments on synthetic and video data in Section 3. Our finding in short is that when the amount of sequence data is small and our VAR model assumption is valid, the proposed methods of incorporating non-sequence data into estimation significantly improve over standard methods, which use only the sequence data. We conclude this work and discuss future directions in Section 4.

2 Proposed Methods

Let $\{x^i\}_{i=1}^T$ be a sequence of observations generated by the process (1). The standard least-square estimator for the transition matrix A is the solution to the following minimization problem:

$$\min_A \; \|Y - XA\|_F^2, \qquad (3)$$

where $Y^\top := [(x^2)^\top (x^3)^\top \cdots (x^T)^\top]$, $X^\top := [(x^1)^\top (x^2)^\top \cdots (x^{T-1})^\top]$, and $\|\cdot\|_F$ denotes the matrix Frobenius norm. When p > T, which is often the case in modern time series modeling tasks, the least square problem (3) has multiple solutions all achieving zero squared error, and the resulting estimator overfits the data. A common remedy is adding a penalty term on A to (3) and minimizing the resulting regularized sum of squared errors. Usual penalty terms include the ridge penalty $\|A\|_F^2$ and the sparse penalty $\|A\|_1 := \sum_{i,j} |A_{ij}|$. Now suppose we also have a set of non-sequence observations $\{z_i\}_{i=1}^n$ drawn independently from the stationary distribution of (1). Note that we use superscripts for time indices and subscripts for data indices. As described in Section 1, the size n of the non-sequence sample can usually be much larger than the size T of the sequence data.
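A minimal sketch of the least-square and ridge estimators in the p > T regime (toy sizes and noise level of our own choosing) illustrates the overfitting issue: the minimum-norm solution fits the short sequence exactly, while the ridge estimator trades training error for stability.

```python
import numpy as np

rng = np.random.default_rng(1)
p, T = 10, 8                         # p > T: more states than time steps
A_true = rng.standard_normal((p, p)) * 0.5 / np.sqrt(p)

# One short trajectory x^{t+1} = x^t A + eps (row vectors).
traj = [rng.standard_normal(p)]
for _ in range(T - 1):
    traj.append(traj[-1] @ A_true + 0.1 * rng.standard_normal(p))
X = np.array(traj[:-1])              # (T-1) x p
Y = np.array(traj[1:])               # (T-1) x p

# Minimum-norm least-squares solution: zero training error when p > T.
A_lsq = np.linalg.pinv(X) @ Y

# Ridge estimator: regularization makes the problem well-posed.
lam = 0.1
A_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
```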
Footnote 2: If we further require A to be symmetric, (2) would be a simplified continuous-time algebraic Riccati equation, which has a unique solution under some conditions (cf. [1]).

Figure 1: Level sets of different functions in a bivariate AR example. (a) SSE and Ridge; (b) Lyap; (c) SSE + Ridge + (1/2)Lyap.

To incorporate the non-sequence observations into the estimation procedure, we first obtain a covariance estimate $\hat{Q}$ of the stationary distribution from the non-sequence sample, and then turn the Lyapunov equation (2) into a regularization term on A. More precisely, in addition to the usual ridge or sparse penalty terms, we also consider the following regularization:

$$\|A^\top \hat{Q} A + \sigma^2 I - \hat{Q}\|_F^2, \qquad (4)$$

which we refer to as the Lyapunov penalty. To compare (4) with the ridge penalty and the sparse penalty, we consider (3) as a multiple-response regression problem and view the i-th column of A as the regression coefficient vector for the i-th output dimension. From this viewpoint, we immediately see that both the ridge and the sparse penalizations treat the p regression problems as unrelated. On the contrary, the Lyapunov penalty incorporates relations between pairs of columns of A by using a covariance estimate $\hat{Q}$. In other words, although the non-sequence sample does not provide direct information about the individual regression problems, it does reveal how the regression problems are related to one another. To illustrate how the Lyapunov penalty may help to improve learning, we give an example in Figure 1. The true transition matrix is

$$A = \begin{pmatrix} -0.4280 & 0.5723 \\ -1.0428 & -0.7144 \end{pmatrix} \qquad (5)$$

and $\epsilon^t \sim \mathcal{N}(0, I)$. We generate a sequence of 4 points, draw a non-sequence sample of 20 points independently from the stationary distribution and obtain the sample covariance $\hat{Q}$.
We fix the second column of $A$ but vary the first, and plot in Figure 1(a) the resulting level sets of the sum of squared errors on the sequence (SSE) and of the ridge penalty (Ridge), and in Figure 1(b) the level sets of the Lyapunov penalty (Lyap). We also mark the coordinates of the true $[A_{11}\ A_{21}]^\top$ and the minima of SSE, Ridge, and Lyap, respectively. To see the behavior of ridge regression, we trace out a path of the ridge regression solution by varying the penalization parameter, as indicated by the red-to-black curve in Figure 1(a). This path stays far from the true model, due to insufficient sequence data. The Lyapunov penalty has two local minima: one is very close to the true model, while the other, which is also the global minimum, is very far from it. Thus, neither ridge regression nor the Lyapunov penalty can be used on its own to estimate the true model well. But as shown in Figure 1(c), the combined objective, SSE + Ridge + (1/2) Lyap, has its global minimum very close to the true model. This demonstrates how ridge regression and the Lyapunov penalty may complement each other: the former by itself gives an inaccurate estimate of the true model, but is just enough to identify a good model among the candidate local minima provided by the latter. In the following we describe our proposed methods for incorporating the Lyapunov penalty (4) into ridge and sparse least-squares estimation. We also discuss robust estimation of the covariance $Q$.

2.1 Ridge and Lyapunov penalty

Here we estimate $A$ by solving the following problem:
$$\min_A \; \frac{1}{2}\|Y - XA\|_F^2 + \frac{\lambda_1}{2}\|A\|_F^2 + \frac{\lambda_2}{4}\|A^\top \hat Q A + \sigma^2 I - \hat Q\|_F^2, \qquad (6)$$
where $\hat Q$ is a covariance estimate obtained from the non-sequence sample. We treat $\lambda_1$, $\lambda_2$ and $\sigma^2$ as hyperparameters and determine their values on a validation set. Given these hyperparameters, we solve (6) by gradient descent with back-tracking line search for the step size.
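A sketch of the objective (6) and its analytic gradient (NumPy assumed; the finite-difference check is ours), which can be dropped into any gradient-descent loop:

```python
import numpy as np

def objective(A, X, Y, Q, lam1, lam2, sigma2):
    """Objective (6): 0.5*SSE + ridge term + Lyapunov penalty."""
    p = A.shape[0]
    R = A.T @ Q @ A + sigma2 * np.eye(p) - Q
    return (0.5 * np.sum((Y - X @ A) ** 2)
            + 0.5 * lam1 * np.sum(A ** 2)
            + 0.25 * lam2 * np.sum(R ** 2))

def gradient(A, X, Y, Q, lam1, lam2, sigma2):
    """Analytic gradient of the objective (uses symmetry of Q and of
    the Lyapunov residual R)."""
    p = A.shape[0]
    R = A.T @ Q @ A + sigma2 * np.eye(p) - Q
    return -X.T @ Y + X.T @ X @ A + lam1 * A + lam2 * Q @ A @ R

# finite-difference check that the analytic gradient is correct
rng = np.random.default_rng(1)
p = 3
X = rng.standard_normal((5, p))
Y = rng.standard_normal((5, p))
M = rng.standard_normal((p, p))
Q = M @ M.T + np.eye(p)          # a symmetric positive-definite "Q hat"
A = rng.standard_normal((p, p))
G = gradient(A, X, Y, Q, 1.0, 1.0, 0.5)
eps = 1e-6
G_fd = np.zeros_like(A)
for i in range(p):
    for j in range(p):
        E = np.zeros((p, p))
        E[i, j] = eps
        G_fd[i, j] = (objective(A + E, X, Y, Q, 1.0, 1.0, 0.5)
                      - objective(A - E, X, Y, Q, 1.0, 1.0, 0.5)) / (2 * eps)
```

The central-difference approximation agrees with the analytic gradient to well below numerical tolerance, confirming the derivative of the quartic Lyapunov term.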
The gradient of the objective function is given by
$$-X^\top Y + X^\top X A + \lambda_1 A + \lambda_2 \hat Q A \big(A^\top \hat Q A + \sigma^2 I - \hat Q\big). \qquad (7)$$
As mentioned before, (6) is a non-convex problem and thus requires good initialization. We use the following two initial estimates of $A$:
$$\hat A_{\mathrm{lsq}} := (X^\top X)^\dagger X^\top Y \quad \text{and} \quad \hat A_{\mathrm{ridge}} := (X^\top X + \lambda_1 I)^{-1} X^\top Y, \qquad (8)$$
where $(\cdot)^\dagger$ denotes the Moore-Penrose pseudo-inverse of a matrix, making $\hat A_{\mathrm{lsq}}$ the minimum-norm solution to the least-squares problem (3). We run the gradient descent algorithm from these two initial estimates, and choose the estimated $A$ that gives the smaller objective.

2.2 Sparse and Lyapunov penalty

Sparse learning for vector auto-regressive models has become a useful tool in many modern time-series modeling tasks, where the number $p$ of states in the system is usually larger than the length $T$ of the time series. For example, an important problem in computational biology is to understand the progression of certain biological processes from measurements such as temporal gene expression data. Using an idea similar to (6), we estimate $A$ by
$$\min_A \; \frac{1}{2}\|Y - XA\|_F^2 + \frac{\lambda_2}{4}\|A^\top \hat Q A + \sigma^2 I - \hat Q\|_F^2, \quad \text{s.t.} \quad \|A\|_1 \le \lambda_1. \qquad (9)$$
Instead of adding a sparse penalty on $A$ to the objective function, we impose a constraint on the $\ell_1$ norm of $A$. Both the penalty and the constraint formulations have been considered in the sparse-learning literature, and shown to be equivalent in the case of a convex objective. Here we choose the constraint formulation because it can be solved by a simple projected gradient descent method. The penalty formulation, in contrast, leads to a non-smooth and non-convex optimization problem, which is difficult to solve with standard methods for sparse learning. In particular, the soft-thresholding-based coordinate descent method for LASSO does not apply, due to the Lyapunov regularization term.
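Projected gradient descent on (9) requires the Euclidean projection onto the $\ell_1$ ball $\|A\|_1 \le \lambda_1$. The paper uses the expected-linear-time method of [5]; the simpler sort-based variant below (our sketch, NumPy assumed) computes the same projection in $O(n \log n)$:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a vector or matrix onto the l1 ball
    {a : ||a||_1 <= radius}, via sorting."""
    shape = v.shape
    v = np.asarray(v, dtype=float).ravel()
    if np.abs(v).sum() <= radius:
        return v.reshape(shape)
    u = np.sort(np.abs(v))[::-1]            # sorted magnitudes, descending
    cssv = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * k > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    w = np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)  # soft-threshold
    return w.reshape(shape)

P = project_l1_ball(np.array([[3.0, -1.0], [0.5, 0.0]]), 2.0)
P_inside = project_l1_ball(np.array([0.3, -0.2]), 1.0)  # already feasible
```

The projection reduces to soft-thresholding with a data-dependent threshold $\theta$, which is also why the result is sparse: entries with magnitude below $\theta$ are zeroed.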
Moreover, most of the common methods for non-smooth optimization, such as bundle methods, solve convex problems and need non-trivial modification to handle non-convex problems [14]. Let $J(A)$ denote the objective function in (9) and $A^{(k)}$ the intermediate solution at the $k$-th iteration. Our projected gradient method updates $A^{(k)}$ to $A^{(k+1)}$ by the following rule:
$$A^{(k+1)} \leftarrow \Pi\big(A^{(k)} - \eta^{(k)} \nabla J(A^{(k)})\big), \qquad (10)$$
where $\eta^{(k)} > 0$ denotes a proper step size, $\nabla J(A^{(k)})$ denotes the gradient of $J(\cdot)$ at $A^{(k)}$, and $\Pi(\cdot)$ denotes the projection onto the feasible region $\|A\|_1 \le \lambda_1$. More precisely, for any $p \times p$ real matrix $V$ we define
$$\Pi(V) := \arg\min_{\|A\|_1 \le \lambda_1} \|A - V\|_F^2. \qquad (11)$$
To compute the projection, we use the efficient $\ell_1$ projection technique given in Figure 2 of [5], whose expected running time is linear in the size of $V$. To choose a proper step size $\eta^{(k)}$, we use the simple and effective Armijo rule along the projection arc described in [2]. This procedure is given in Algorithm 1; the main idea is to ensure a sufficient decrease in the objective value per iteration, cf. (13). [2] proved that there always exists $\eta^{(k)} = \beta^{r_k} > 0$ satisfying (13), and that every limit point of $\{A^{(k)}\}_{k=0}^{\infty}$ is a stationary point of (9). In our experiments we set $c = 0.01$ and $\beta = 0.1$, both typical values for gradient descent. As in the previous section, we need good initializations for the projected gradient descent method. Here we use the two initial estimates
$$\hat A_{\mathrm{lsq}'} := \arg\min_{\|A\|_1 \le \lambda_1} \|A - \hat A_{\mathrm{lsq}}\|_F^2 \quad \text{and} \quad \hat A_{\mathrm{sp}} := \arg\min_{\|A\|_1 \le \lambda_1} \frac{1}{2}\|Y - XA\|_F^2, \qquad (12)$$
where $\hat A_{\mathrm{lsq}}$ is defined in (8), and then choose the one that leads to the smaller objective value.

Algorithm 1: Armijo's rule along the projection arc
Input: $A^{(k)}$, $\nabla J(A^{(k)})$, $0 < \beta < 1$, $0 < c < 1$.
Output: $A^{(k+1)}$
1. Find $\eta^{(k)} = \max\{\beta^{r_k} \mid r_k \in \{0, 1, \ldots\}\}$ such that $A^{(k+1)} := \Pi\big(A^{(k)} - \eta^{(k)} \nabla J(A^{(k)})\big)$ satisfies
$$J(A^{(k+1)}) - J(A^{(k)}) \le c \, \mathrm{trace}\big(\nabla J(A^{(k)})^\top (A^{(k+1)} - A^{(k)})\big). \qquad (13)$$

2.3 Robust estimation of covariance matrices

To obtain a good estimator for $A$ with the proposed methods, we need a good estimator for the covariance of the stationary distribution of (1). Given an independent sample $\{z_i\}_{i=1}^n$ drawn from the stationary distribution, the sample covariance is defined as
$$S := \frac{1}{n-1} \sum_{i=1}^n (z_i - \bar z)^\top (z_i - \bar z), \quad \text{where} \quad \bar z := \frac{1}{n} \sum_{i=1}^n z_i. \qquad (14)$$
Although unbiased, the sample covariance is known to be vulnerable to outliers, and ill-conditioned when the number of sample points $n$ is smaller than the dimension $p$. Both issues arise in many real-world problems, and the latter is particularly common in gene expression analysis. Therefore, researchers in many fields, such as statistics [17, 20, 11], finance [10], signal processing [3, 4], and recently computational biology [15], have investigated robust estimators of covariances. Most of these results originate from the idea of shrinkage estimators, which shrink the covariance matrix towards some target covariance with a simple structure, such as a diagonal matrix. It has been shown, e.g. in [17, 10], that shrinking the sample covariance can achieve a smaller mean-squared error (MSE). More specifically, [10] considers the linear shrinkage
$$\hat Q = (1 - \alpha) S + \alpha F \qquad (15)$$
for $0 < \alpha < 1$ and some target covariance $F$, and derives a formula for the optimal $\alpha$ that minimizes the mean-squared error:
$$\alpha^* := \arg\min_{0 \le \alpha \le 1} E\big(\|\hat Q - Q\|_F^2\big), \qquad (16)$$
which involves unknown quantities such as the true covariances of $S$. [15] proposed to estimate $\alpha^*$ by replacing all the population quantities appearing in $\alpha^*$ with their unbiased empirical estimates, and derived the resulting estimator $\hat\alpha^*$ for several types of target $F$. For the experiments in this paper we use the estimator proposed in [15] with the following $F$:
$$F_{ij} = \begin{cases} S_{ij}, & \text{if } i = j, \\ 0, & \text{otherwise,} \end{cases} \qquad 1 \le i, j \le p. \qquad (17)$$
Denoting the sample correlation matrix by $R$, the final estimator $\hat Q$ (Table 1 in [15]) is
$$\hat Q_{ij} := \begin{cases} S_{ij}, & \text{if } i = j, \\ \hat R_{ij} \sqrt{S_{ii} S_{jj}}, & \text{otherwise,} \end{cases} \qquad \hat R_{ij} := \begin{cases} 1, & \text{if } i = j, \\ R_{ij} \min(1, \max(0, 1 - \hat\alpha^*)), & \text{otherwise,} \end{cases} \qquad (18)$$
$$\hat\alpha^* := \frac{\sum_{i \ne j} \widehat{\mathrm{Var}}(R_{ij})}{\sum_{i \ne j} R_{ij}^2} = \frac{\sum_{i \ne j} \frac{n}{(n-1)^3} \sum_{k=1}^n (w_{kij} - \bar w_{ij})^2}{\sum_{i \ne j} R_{ij}^2}, \qquad (19)$$
where
$$w_{kij} := (\tilde z_k)_i (\tilde z_k)_j, \qquad \bar w_{ij} := \frac{1}{n} \sum_{k=1}^n w_{kij}, \qquad (20)$$
and $\{\tilde z_i\}_{i=1}^n$ are the standardized non-sequence samples.

Figure 2: Testing performances (a-c) and eigenvalues in modulus (d) for the dense model.

3 Experiments

To evaluate the proposed methods, we conduct experiments on synthetic and video data. In both sets of experiments we use the following two performance measures for a learned model $\hat A$:
$$\text{Normalized error:} \quad \frac{1}{T-1} \sum_{t=1}^{T-1} \frac{\|x^{t+1} - x^t \hat A\|^2}{\|x^{t+1} - x^t\|^2}, \qquad \text{Cosine score:} \quad \frac{1}{T-1} \sum_{t=1}^{T-1} \frac{(x^{t+1} - x^t)^\top (x^t \hat A - x^t)}{\|x^{t+1} - x^t\| \, \|x^t \hat A - x^t\|}.$$
To give an idea of how a good estimate $\hat A$ would perform under these two measures, note that the constant prediction $\hat x^{t+1} = x^t$ gives a normalized error of exactly 1, and a random-walk prediction $\hat x^{t+1} = x^t + \epsilon^{t+1}$, with $\epsilon^{t+1}$ a white-noise process, gives a nearly zero cosine score. Thus, when the true model is more than a simple random walk, a good estimate $\hat A$ should achieve a normalized error much smaller than 1 and a cosine score well above 0. We also note that the cosine score is upper-bounded by 1. In the experiments on synthetic data we have the true transition matrix $A$, so we consider a third criterion, the matrix error $\|\hat A - A\|_F / \|A\|_F$. In all our experiments, we have a training sequence, a testing sequence, and a non-sequence sample. To choose the hyperparameters $\lambda_1$, $\lambda_2$ and $\sigma^2$, we split the training sequence into two halves and use the second half as the validation sequence. Once we find the best hyperparameters according to the validation performance, we train a model on the full training sequence and predict on the testing sequence.
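The robust covariance estimator (18)-(20) used throughout the experiments can be sketched as follows (NumPy assumed; our implementation of the formulas from [15]):

```python
import numpy as np

def shrinkage_covariance(Z):
    """Shrinkage estimator (18)-(19): shrink the off-diagonal sample
    correlations toward zero by an estimated factor, keep the sample
    variances. Rows of Z are the non-sequence samples."""
    n, p = Z.shape
    S = np.cov(Z, rowvar=False)              # sample covariance (14)
    sd = np.sqrt(np.diag(S))
    Zs = (Z - Z.mean(axis=0)) / sd           # standardized samples
    R = np.corrcoef(Z, rowvar=False)
    # unbiased estimate of Var(R_ij) from the products w_kij in (20)
    W = Zs[:, :, None] * Zs[:, None, :]                       # n x p x p
    var_R = n / (n - 1.0) ** 3 * ((W - W.mean(axis=0)) ** 2).sum(axis=0)
    off = ~np.eye(p, dtype=bool)
    alpha = var_R[off].sum() / (R[off] ** 2).sum()            # (19)
    shrink = min(1.0, max(0.0, 1.0 - alpha))
    R_hat = R * shrink
    np.fill_diagonal(R_hat, 1.0)
    return R_hat * np.outer(sd, sd)                           # (18)

rng = np.random.default_rng(2)
Z = rng.standard_normal((40, 10))
Q_hat = shrinkage_covariance(Z)
```

By construction the diagonal of $\hat Q$ equals the sample variances, while off-diagonal entries are attenuated, which is what improves conditioning when $n$ is small relative to $p$.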
For $\lambda_1$ and $\lambda_2$, we adopt the usual grid-search scheme over a suitable range of values. For $\sigma^2$, we observe that (2) implies that $\hat Q - \sigma^2 I$ should be positive semidefinite, and thus search the set $\{0.9^j \min_i \lambda_i(\hat Q) \mid 1 \le j \le 3\}$. In most of our experiments, we find that the proposed methods are much less sensitive to $\sigma^2$ than to $\lambda_1$ and $\lambda_2$.

3.1 Synthetic Data

We consider the following two VAR models with a Gaussian white-noise process $\epsilon^t \sim N(0, I)$:
Dense model: $A = \dfrac{0.95\,M}{\max_i |\lambda_i(M)|}$, $\quad M_{ij} \sim N(0, 1)$, $\quad 1 \le i, j \le 200$;
Sparse model: $A = \dfrac{0.95\,(M \odot B)}{\max_i |\lambda_i(M \odot B)|}$, $\quad M_{ij} \sim N(0, 1)$, $\ B_{ij} \sim \mathrm{Bern}(1/8)$, $\quad 1 \le i, j \le 200$;
where $\mathrm{Bern}(h)$ is the Bernoulli distribution with success probability $h$, and $\odot$ denotes the entrywise product of two matrices. By setting $h = 1/8$, we make the sparse transition matrix $A$ have roughly $40000/8 = 5000$ non-zero entries. Both models are stable, and the stationary distribution of each model is a zero-mean Gaussian. We obtain the covariance $Q$ of each stationary distribution by solving the Lyapunov equation (2). For a single experiment, we generate a training sequence and a testing sequence, both initialized from the stationary distribution, and draw a non-sequence sample independently from the stationary distribution. We set the length of the testing sequence to be 800, and vary the training sequence length $T$ and the non-sequence sample size $n$: for the dense model, $T \in \{50, 100, 150, 200, 300, 400, 600, 800\}$ and $n \in \{50, 400, 1600\}$; for the sparse model, $T \in \{25, 75, 150, 400\}$ and $n \in \{50, 400, 1600\}$. Under each combination of $T$ and $n$, we compare the proposed Lyapunov penalization method with the baseline approach of penalized least squares, which uses only the sequence data. To investigate the limit of the proposed methods, we also use the true $Q$ for the Lyapunov penalization.

Figure 3: Testing performances (a-c) and eigenvalues in modulus (d) for the sparse model.
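The normalized error and cosine score defined in Section 3 can be computed as follows (our NumPy sketch; a noiseless VAR sequence is used as a sanity check, since its own transition matrix predicts every step perfectly):

```python
import numpy as np

def normalized_error(x, A_hat):
    """Mean over t of ||x^{t+1} - x^t A_hat||^2 / ||x^{t+1} - x^t||^2;
    the constant prediction xhat^{t+1} = x^t scores exactly 1."""
    num = np.sum((x[1:] - x[:-1] @ A_hat) ** 2, axis=1)
    den = np.sum((x[1:] - x[:-1]) ** 2, axis=1)
    return np.mean(num / den)

def cosine_score(x, A_hat):
    """Mean cosine between the observed step x^{t+1} - x^t and the
    predicted step x^t A_hat - x^t; upper-bounded by 1."""
    true_step = x[1:] - x[:-1]
    pred_step = x[:-1] @ A_hat - x[:-1]
    num = np.sum(true_step * pred_step, axis=1)
    den = (np.linalg.norm(true_step, axis=1)
           * np.linalg.norm(pred_step, axis=1))
    return np.mean(num / den)

# a noiseless stable VAR(1) sequence
A = np.array([[0.9, 0.1], [0.0, 0.8]])
x = np.zeros((50, 2))
x[0] = [1.0, -1.0]
for t in range(1, 50):
    x[t] = x[t - 1] @ A
```

On this sequence the true $A$ attains a normalized error near 0 and a cosine score of 1, while the identity matrix (i.e., the constant prediction) attains a normalized error of exactly 1, matching the reference points discussed in Section 3.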
We run 10 such experiments for the dense model and 5 for the sparse model, and report the overall performances of both the proposed and the baseline methods.

3.1.1 Experimental results for the dense model

We give boxplots of the three performance measures over the 10 experiments in Figures 2(a) to 2(c). The ridge regression approach and the proposed Lyapunov penalization method (6) are abbreviated as Ridge and Lyap, respectively. For normalized error and cosine score, we also report the performance of the true $A$ on the testing sequences. We observe that Lyap improves over Ridge most significantly when the training sequence length $T$ is small ($\le 200$) and the non-sequence sample size $n$ is large ($\ge 400$). When $T$ is large, Ridge already performs quite well and Lyap does not improve the performance much. But with the true stationary covariance $Q$, Lyap outperforms Ridge significantly for all $T$. When $n$ is small, the covariance estimate $\hat Q$ is far from the true $Q$ and the Lyapunov penalty does not provide useful information about $A$. In this case, the value of $\lambda_2$ determined by the validation performance is usually quite small (0.5 or 1) compared to $\lambda_1$ (256), so the two methods perform similarly on the testing sequences. We note that if we use the sample covariance instead of the robust covariance estimate in (18) and (19), the performance of Lyap can be marginally worse than Ridge when $n$ is small. A precise statement of how the estimation error in $Q$ affects $\hat A$ is worth studying in the future. As a qualitative assessment of the estimated transition matrices, in Figure 2(d) we plot the eigenvalues in modulus of the true $A$ and of the $\hat A$'s obtained by different methods when $T = 50$ and $n = 1600$. The eigenvalues are sorted by modulus. Both Ridge and Lyap severely under-estimate the eigenvalue moduli, but Lyap preserves the spectrum much better than Ridge.
3.1.2 Experimental results for the sparse model

We give boxplots of the performance measures over the 5 experiments in Figures 3(a) to 3(c), and the eigenvalues in modulus of the true $A$ and of some $\hat A$'s in Figure 3(d). The sparse least-squares method and the proposed method (9) are abbreviated as Sparse and Lyap, respectively. We observe the same type of improvement as in the dense model: Lyap improves over Sparse more significantly when $T$ is small and $n$ is large. But the largest improvement occurs at $T = 75$, not at the shortest training sequence length $T = 25$. A major difference lies in the impact of the Lyapunov penalization on the spectrum of $\hat A$, as revealed in Figure 3(d). When $T$ is as small as 25, the sparse least-squares method shrinks all the eigenvalues but still keeps most of them non-zero, while Lyap with a non-sequence sample of size 1600 over-estimates the first few largest eigenvalue moduli and shrinks the rest to very small moduli. In contrast, Lyap with the true $Q$ preserves the spectrum much better. We may thus need an even better covariance estimate for the sparse model.

Figure 4: Results on the pendulum video data. (a) A frame of the pendulum; (b) normalized error for $T \in \{6, 10, 20, 50\}$ (Ridge vs. Lyap); (c) cosine score.

3.2 Video Data

We test our methods on a video sequence of a periodically swinging pendulum (see footnote 3), which consists of 500 frames of 75-by-80 grayscale images. One such frame is shown in Figure 4(a). The period is about 23 frames. To reduce the dimension we take second-level Gaussian pyramids, resulting in images of size 9-by-11. We then treat each reduced image as a 99-dimensional vector, and normalize each dimension to zero mean and unit standard deviation. We analyze this sequence with a 99-dimensional first-order VAR model.
To check whether a VAR model is a suitable choice, we estimate a transition matrix from the first 400 frames by ridge regression, choosing the penalization parameter on the next 50 frames, and predict on the last 50 frames. The best penalization parameter is 0.0156, and the testing normalized error and cosine score are 0.33 and 0.97, respectively, suggesting that the dynamics of the video sequence are well captured by a VAR model. We compare the proposed method (6) with ridge regression for four lengths of the training sequence, $T \in \{6, 10, 20, 50\}$, and treat the last 50 frames as the testing sequence. For both methods, we split the training sequence into two halves and use the second half as a validation sequence. For the proposed method, we simulate a non-sequence sample by randomly choosing 300 frames from between the $(T+1)$-st frame and the 450-th frame without replacement; we repeat this 10 times. The testing normalized errors and cosine scores of both methods are given in Figures 4(b) and 4(c). For the proposed method, we report the mean performance over the 10 simulated non-sequence samples, with standard deviations. When $T \le 20$, which is close to the period, the proposed method outperforms ridge regression very significantly, except that at $T = 10$ the cosine score of Lyap is only barely better than that of Ridge. However, when we increase $T$ to 50, the difference between the two methods vanishes, even though there is still much room for improvement, as indicated by the result of our model sanity check above. This may be due to our use of dependent data as the non-sequence sample, or simply to insufficient non-sequence data. As for $\lambda_1$ and $\lambda_2$, their values decrease from 512 and 2,048, respectively, to less than 32 as $T$ increases; but since we fix the amount of non-sequence data, the interaction between their value changes is less clear than on the synthetic data.
4 Conclusion

We propose to improve penalized least-squares estimation of VAR models by incorporating non-sequence data, which are assumed to be samples drawn from the stationary distribution of the underlying VAR model. We construct a novel penalization term based on the discrete-time Lyapunov equation for the covariance (estimate) of the stationary distribution. Preliminary experimental results demonstrate that our methods can improve significantly over standard penalized least-squares methods when there are only few sequence data but abundant non-sequence data, and when the model assumption is valid. In the future, we would like to investigate the impact of $\hat Q$ on $\hat A$ in a precise manner. We may also consider noise processes $\epsilon^t$ with more general covariances, and incorporate the noise covariance estimation into the proposed Lyapunov penalization scheme. Finally, and most importantly, we aim to apply the proposed methods to real scientific time-series data and provide a more effective tool for those modeling tasks.

Footnote 3: A similar video sequence has been used in [16].

References
[1] P. Antsaklis and A. Michel. Linear Systems. Birkhauser, 2005.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, second edition, 1999.
[3] Y. Chen, A. Wiesel, Y. C. Eldar, and A. O. Hero. Shrinkage algorithms for MMSE covariance estimation. IEEE Transactions on Signal Processing, 58:5016-5029, 2010.
[4] Y. Chen, A. Wiesel, and A. O. Hero. Robust shrinkage estimation of high-dimensional covariance matrices. Technical report, arXiv:1009.5331v1 [stat.ME], September 2010.
[5] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272-279, 2008.
[6] A. Gupta and Z. Bar-Joseph. Extracting dynamics from static cancer expression data.
IEEE/ACM Transactions on Computational Biology and Bioinformatics, 5:172-182, 2008.
[7] J. Hamilton. Time Series Analysis. Princeton University Press, 1994.
[8] T.-K. Huang and J. Schneider. Learning linear dynamical systems without sequence information. In Proceedings of the 26th International Conference on Machine Learning, pages 425-432, 2009.
[9] T.-K. Huang, L. Song, and J. Schneider. Learning nonlinear dynamic models from non-sequenced data. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[10] O. Ledoit and M. Wolf. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of Empirical Finance, 10:603-621, 2003.
[11] O. Ledoit and M. Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88:365-411, 2004.
[12] A. Lozano, N. Abe, Y. Liu, and S. Rosset. Grouped graphical Granger modeling for gene expression regulatory networks discovery. Bioinformatics, 25(12):i110, 2009.
[13] T. C. Mills. The Econometric Modelling of Financial Time Series. Cambridge University Press, second edition, 1999.
[14] D. Noll, O. Prot, and A. Rondepierre. A proximity control algorithm to minimize nonsmooth and nonconvex functions. Pacific Journal of Optimization, 4(3):569-602, 2008.
[15] J. Schäfer and K. Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4, 2005.
[16] S. M. Siddiqi, B. Boots, and G. J. Gordon. Reduced-rank hidden Markov models. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[17] C. Stein. Estimation of a covariance matrix. In Rietz Lecture, 39th Annual Meeting, Atlanta, GA, 1975.
[18] R. S. Tsay. Analysis of Financial Time Series. Wiley-Interscience, 2005.
[19] B. P. Tu, A. Kudlicki, M. Rowicka, and S. L.
McKnight. Logic of the yeast metabolic cycle: Temporal compartmentalization of cellular processes. Science, 310(5751):1152-1158, 2005.
[20] R. Yang and J. O. Berger. Estimation of a covariance matrix using the reference prior. Annals of Statistics, 22:1195-1211, 1994.
Multi-View Learning of Word Embeddings via CCA

Paramveer S. Dhillon (Computer & Information Science), Dean Foster (Statistics), Lyle Ungar (Computer & Information Science)
University of Pennsylvania, Philadelphia, PA, U.S.A
{dhillon|ungar}@cis.upenn.edu, foster@wharton.upenn.edu

Abstract

Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations which can then be used as features in supervised classifiers for NLP tasks. However, most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding. In this paper, we present a new learning method, Low Rank Multi-View Learning (LR-MVL), which uses a fast spectral method to estimate low-dimensional, context-specific word representations from unlabeled data. These representation features can then be used with any supervised learner. LR-MVL is extremely fast, gives guaranteed convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.

1 Introduction and Related Work

Over the past decade there has been increased interest in using unlabeled data to supplement the labeled data in semi-supervised learning settings, in order to overcome the inherent data sparsity and obtain improved generalization accuracies in high-dimensional domains like NLP. Approaches like [1, 2] have been empirically very successful and have achieved excellent accuracies on a variety of NLP tasks. However, it is often difficult to adapt these approaches for use in conjunction with an existing supervised NLP system, as they enforce a particular choice of model. An increasingly popular alternative is to learn representational embeddings for words from a large collection of unlabeled data (typically using a generative model), and to use these embeddings to augment the feature set of a supervised learner.
Embedding methods produce features in low-dimensional spaces or over a small vocabulary size, unlike the traditional approach of working in the original high-dimensional vocabulary space with only one dimension "on" at a given time. Broadly, these embedding methods fall into two categories:
1. Clustering-based word representations: Clustering methods, often hierarchical, are used to group distributionally similar words based on their contexts. The two dominant approaches are Brown Clustering [3] and [4]. As recently shown, HMMs can also be used to induce a multinomial distribution over possible clusters [5].
2. Dense representations: These representations are dense, low-dimensional and real-valued. Each dimension captures latent information about a combination of syntactic and semantic word properties. They can be induced either using neural networks, like C&W embeddings [6] and Hierarchical log-linear (HLBL) embeddings [7], or by eigen-decomposition of the word co-occurrence matrix, e.g. Latent Semantic Analysis/Latent Semantic Indexing (LSA/LSI) [8].
Unfortunately, most of these representations 1) are slow to train, 2) are sensitive to the scaling of the embeddings (especially ℓ2-based approaches like LSA/PCA), 3) can get stuck in local optima (like an EM-trained HMM), and 4) learn a single embedding for a given word type; i.e., all occurrences of the word "bank" receive the same embedding, irrespective of whether the context suggests it means "a financial institution" or "a river bank". In this paper, we propose a novel context-specific word embedding method called Low Rank Multi-View Learning, LR-MVL, which is fast to train and is guaranteed to converge to the optimal solution. As presented here, our LR-MVL embeddings are context-specific, but context-oblivious embeddings (like the ones used by [6, 7]) can be trivially obtained from our model.
Furthermore, building on recent advances in spectral learning for sequence models like HMMs [9, 10, 11], we show that LR-MVL has strong theoretical grounding. In particular, we show that LR-MVL estimates low-dimensional context-specific word embeddings which preserve all the information in the data if the data were generated by an HMM. Moreover, because LR-MVL is linear, it does not face the danger of getting stuck in local optima, as an EM-trained HMM does. LR-MVL falls into category (2) above: it learns real-valued context-specific word embeddings by performing Canonical Correlation Analysis (CCA) [12] between the past and future views of low-rank approximations of the data. However, LR-MVL is more general than those methods, which work on bigram or trigram co-occurrence matrices, in that it uses longer word-sequence information to estimate context-specific embeddings, and also for the reasons mentioned in the last paragraph. The remainder of the paper is organized as follows. In the next section we give a brief overview of CCA, which forms the core of our method. Section 3 describes our proposed LR-MVL algorithm in detail and gives theory supporting its performance. Section 4 demonstrates the effectiveness of LR-MVL on the NLP tasks of Named Entity Recognition and Chunking. We conclude with a brief summary in Section 5.

2 Brief Review: Canonical Correlation Analysis (CCA)

CCA [12] is the analog of Principal Component Analysis (PCA) for pairs of matrices. PCA computes the directions of maximum covariance between elements in a single matrix, whereas CCA computes the directions of maximal correlation between a pair of matrices. Unlike PCA, CCA does not depend on how the observations are scaled. This invariance of CCA to linear data transformations allows proofs that keeping the dominant singular vectors (those with largest singular values) will faithfully capture any state information.
More specifically, given a set of $n$ paired observation vectors $\{(l_1, r_1), \ldots, (l_n, r_n)\}$ (in our case the two matrices are the left ($L$) and right ($R$) context matrices of a word) we would like to simultaneously find the directions $\Phi_l$ and $\Phi_r$ that maximize the correlation of the projections of $L$ onto $\Phi_l$ with the projections of $R$ onto $\Phi_r$. This is expressed as
$$\max_{\Phi_l, \Phi_r} \; \frac{E[\langle L, \Phi_l \rangle \langle R, \Phi_r \rangle]}{\sqrt{E[\langle L, \Phi_l \rangle^2] \, E[\langle R, \Phi_r \rangle^2]}}, \qquad (1)$$
where $E$ denotes the empirical expectation. We use the notation $C_{lr}$ ($C_{ll}$) to denote the cross (auto) covariance matrices between $L$ and $R$ (i.e. $L^\top R$ and $L^\top L$, respectively). The left and right canonical correlates are the solutions $\langle \Phi_l, \Phi_r \rangle$ of the equations
$$C_{ll}^{-1} C_{lr} C_{rr}^{-1} C_{rl} \Phi_l = \lambda \Phi_l, \qquad C_{rr}^{-1} C_{rl} C_{ll}^{-1} C_{lr} \Phi_r = \lambda \Phi_r. \qquad (2)$$

3 Low Rank Multi-View Learning (LR-MVL)

In LR-MVL, we compute the CCA between the past and future views of the data on a large unlabeled corpus to find the common latent structure, i.e., the hidden state associated with each token. These induced representations of the tokens can then be used as features in a supervised classifier (typically discriminative). The context around a word, consisting of the $h$ words to the right and left of it, sits in a high-dimensional space, since for a vocabulary of size $v$, each of the $h$ words in the context requires an indicator function of dimension $v$. The key move in LR-MVL is to project the $v$-dimensional word space down to a $k$-dimensional state space. Thus, all eigenvector computations are done in a space that is $v/k$ times smaller than the original space. Since a typical vocabulary contains at least 50,000 words, and we use state spaces of order $k \approx 50$ dimensions, this gives a 1,000-fold reduction in the size of the calculations needed. The core of our LR-MVL algorithm is a fast spectral method for learning a $v \times k$ matrix $A$ which maps each of the $v$ words in the vocabulary to a $k$-dimensional state vector. We call this matrix the "eigenfeature dictionary".
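A numerical sketch of (1)-(2) (NumPy assumed; the small ridge term and the SVD-of-whitened-cross-covariance formulation are our additions, but are standard and equivalent to the eigenproblems in (2)):

```python
import numpy as np

def cca(L, R, k, reg=1e-8):
    """CCA via the whitened cross-covariance: returns the top-k canonical
    directions (Phi_l, Phi_r) and the canonical correlations. 'reg' is a
    small ridge for numerical stability (our addition)."""
    L = L - L.mean(axis=0)
    R = R - R.mean(axis=0)
    n = L.shape[0]
    Cll = L.T @ L / n + reg * np.eye(L.shape[1])
    Crr = R.T @ R / n + reg * np.eye(R.shape[1])
    Clr = L.T @ R / n

    def inv_sqrt(C):  # C^{-1/2} for a symmetric positive-definite C
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wl, Wr = inv_sqrt(Cll), inv_sqrt(Crr)
    U, s, Vt = np.linalg.svd(Wl @ Clr @ Wr)
    return Wl @ U[:, :k], Wr @ Vt[:k].T, s[:k]

# two noisy views of a shared 2-dimensional hidden state: CCA recovers
# high correlations, and they are invariant to rescaling a view
rng = np.random.default_rng(3)
H = rng.standard_normal((500, 2))
L = H @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((500, 5))
R = H @ rng.standard_normal((2, 4)) + 0.1 * rng.standard_normal((500, 4))
_, _, s1 = cca(L, R, 2)
_, _, s2 = cca(10.0 * L, R, 2)
```

The unchanged correlations under rescaling of $L$ illustrate the scale-invariance property of CCA emphasized in Section 2, which PCA-style methods lack.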
We now describe the LR-MVL method, give a theorem that provides intuition into how it works, and formally present the LR-MVL algorithm. The Experiments section then shows that this low-rank approximation allows us to achieve state-of-the-art performance on NLP tasks.

3.1 The LR-MVL method

Given an unlabeled token sequence $w = \{w_0, w_1, \ldots, w_n\}$ we want to learn a low ($k$)-dimensional state vector $\{z_0, z_1, \ldots, z_n\}$ for each observed token. The key is to find a $v \times k$ matrix $A$ (Algorithm 1) that maps each of the $v$ words in the vocabulary to a reduced-rank $k$-dimensional state vector, which is later used to induce context-specific embeddings for the tokens (Algorithm 2). For supervised learning, these context-specific embeddings are supplemented with other information about each token $w_t$, such as its identity, orthographic features such as prefixes and suffixes, or membership in domain-specific lexicons, and used as features in a classifier. Section 3.4 gives the algorithm more formally, but the key steps in the algorithm are, in general terms:
• Take the $h$ words to the left and to the right of each target word $w_t$ (the "Left" and "Right" contexts), and project them each down to $k$ dimensions using $A$.
• Take the CCA between the reduced-rank left and right contexts, and use the resulting model to estimate a $k$-dimensional state vector (the "hidden state") for each token.
• Take the CCA between the hidden states and the tokens $w_t$. The singular vectors associated with $w_t$ form a new estimate of the eigenfeature dictionary.
LR-MVL can be viewed as a type of co-training [13]: the state of each token $w_t$ is similar to that of the tokens both before and after it, and it is also similar to the states of the other occurrences of the same word elsewhere in the document (used in the outer iteration).
LR-MVL takes advantage of these two different types of similarity by alternately estimating word state using CCA on the smooths of the states of the words before and after each target token, and using the average over the states associated with all other occurrences of that word.

3.2 Theoretical Properties of LR-MVL

We now present the theory behind the LR-MVL algorithm; in particular, we show that the reduced-rank matrix $A$ allows a significant data reduction while preserving the information in our data, and that the estimated state does the best possible job of capturing any label information that can be inferred by a linear model. Let $L$ be an $n \times hv$ matrix giving the words in the left context of each of the $n$ tokens, where the context is of length $h$, let $R$ be the corresponding $n \times hv$ matrix for the right context, and let $W$ be an $n \times v$ matrix of indicator functions for the words themselves. We will use the following assumptions at various points in our proof:

Assumption 1. $L$, $W$, and $R$ come from a rank-$k$ HMM, i.e., it has a rank-$k$ observation matrix and a rank-$k$ transition matrix, both of which have the same domain. For example, if the dimension of the hidden state is $k$ and the vocabulary size is $v$, then the observation matrix, which is $k \times v$, has rank $k$. This rank condition is similar to the one used by [10].

Assumption 1A. For the three views $L$, $W$ and $R$, assume that there exists a "hidden state" $H$ of dimension $n \times k$, where each row $H_i$ has the same non-singular variance-covariance matrix, and such that $E(L_i|H_i) = H_i \beta_L^\top$, $E(R_i|H_i) = H_i \beta_R^\top$ and $E(W_i|H_i) = H_i \beta_W^\top$, where all the $\beta$'s are of rank $k$, and $L_i$, $R_i$ and $W_i$ are the rows of $L$, $R$ and $W$, respectively. Assumption 1A follows from Assumption 1.

Assumption 2. $\rho(L, W)$, $\rho(L, R)$ and $\rho(W, R)$ all have rank $k$, where $\rho(X_1, X_2)$ is the expected correlation between $X_1$ and $X_2$. Assumption 2 is a rank condition similar to that in [9].

Assumption 3. $\rho([L, R], W)$ has $k$ distinct singular values.
Assumption 3 just makes the proof a little cleaner, since if there are repeated singular values, the singular vectors are not unique. Without it, we would have to phrase results in terms of subspaces with identical singular values. We also need to define the CCA function that computes the left and right singular vectors for a pair of matrices:

Definition 1 (CCA). Compute the CCA between two matrices X1 and X2. Let Φ_X1 be a matrix containing the d largest singular vectors for X1 (sorted from the largest on down); likewise for Φ_X2. Define the function CCA_d(X1, X2) = [Φ_X1, Φ_X2]. When we want just one of these Φ's, we will write CCA_d(X1, X2)_left = Φ_X1 for the left singular vectors and CCA_d(X1, X2)_right = Φ_X2 for the right singular vectors. Note that the resulting singular vectors [Φ_X1, Φ_X2] can be used to give two redundant estimates, X1 Φ_X1 and X2 Φ_X2, of the "hidden" state relating X1 and X2, if such a hidden state exists.

Definition 2. Define the symbol "≈" to mean X1 ≈ X2 ⟺ lim_{n→∞} X1 = lim_{n→∞} X2, where n is the sample size.

Lemma 1. Define A by the following limit of the right singular vectors: CCA_k([L, R], W)_right ≈ A. Then, under Assumptions 1A, 2 and 3, if CCA_k(L, R) ≡ [Φ_L, Φ_R], we have CCA_k([L Φ_L, R Φ_R], W)_right ≈ A.

Lemma 1 shows that instead of finding the CCA between the full context and the words, we can take the CCA between the left and right contexts, estimate a k-dimensional state from them, and take the CCA of that state with the words, and get the same result. See the supplementary material for the proof.

Let Ã_h denote a matrix formed by stacking h copies of A on top of each other. Right-multiplying L or R by Ã_h projects each of the words in that context into the k-dimensional reduced rank space. The following theorem addresses the core of the LR-MVL algorithm, showing that there is an A which gives the desired dimensionality reduction. Specifically, it shows that the previous lemma also holds in the reduced rank space.

Theorem 1.
Under Assumptions 1, 2 and 3, there exists a unique matrix A such that if CCA_k(L Ã_h, R Ã_h) ≡ [Φ̃_L, Φ̃_R], then CCA_k([L Ã_h Φ̃_L, R Ã_h Φ̃_R], W)_right ≈ A, where Ã_h is the stacked form of A. See the supplementary material for the proof. (It is worth noting that our matrix A corresponds to the matrix Û used by [9, 10]. They showed that U is sufficient to compute the probability of a sequence of words generated by an HMM; although we do not show it here due to limited space, our A provides a more statistically efficient estimate of U than their Û, and hence can also be used to estimate the sequence probabilities.)

Under the above assumptions, there is asymptotically (in the limit of infinite data) no benefit to first estimating state by finding the CCA between the left and right contexts and then finding the CCA between the estimated state and the words; one could instead just directly find the CCA between the combined left and right contexts and the words. However, because of the Zipfian distribution of words, many words are rare or even unique, and hence one is not in the asymptotic limit. In this case, the CCA between the rare words and their contexts will not be informative, whereas finding the CCA between the left and right contexts gives a good state vector estimate even for unique words. One can then fruitfully find the CCA between the contexts and the estimated state vectors for their associated words.

3.3 Using Exponential Smooths

In practice, we replace the projected left and right contexts with exponential smooths of them at a few different time scales (an exponential smooth is a weighted average of the previous (or next) token's state, Z_{t−1} (or Z_{t+1}), and the previous (or next) token's smoothed state, S_{t−1} (or S_{t+1})), thus giving a further dimension reduction by a factor of the context length h (say 100 words) divided by the number of smooths (often 5-7).
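The stacked projection in Theorem 1 is easy to make concrete. In this toy sketch (dimensions and names are ours), Ã_h is h copies of A stacked vertically; right-multiplying a row of the n × hv context matrix by it maps every context word into the k-dimensional space at once.

```python
import numpy as np

# Toy illustration (ours) of right-multiplying a context row by the stacked
# eigenfeature dictionary A_h from Theorem 1.
v, k, h = 3, 2, 2
rng = np.random.default_rng(2)
A = rng.standard_normal((v, k))
A_h = np.vstack([A] * h)             # (h*v, k): h copies of A stacked

# One token whose two left-context words have ids 1 and 2:
L_row = np.zeros(h * v)
L_row[0 * v + 1] = 1.0               # first context slot holds word 1
L_row[1 * v + 2] = 1.0               # second slot holds word 2
proj = L_row @ A_h                   # equals A[1] + A[2]
print(proj.shape)                    # (2,)
```

Because the context slots are indicator blocks, the product is just the sum of the context words' k-dimensional eigenfeature vectors.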
We use a mixture of both very short and very long contexts, which capture the short- and long-range dependencies required by NLP problems such as NER, chunking, and WSD. Since exponential smooths are linear, we preserve the linearity of our method.

3.4 The LR-MVL Algorithm

The LR-MVL algorithm (using exponential smooths) is given in Algorithm 1; it computes the pair of CCAs described above in Theorem 1.

Algorithm 1 LR-MVL Algorithm - Learning from Large Amounts of Unlabeled Data
1: Input: Token sequence W (n × v), state space size k, smoothing rates α_j
2: Initialize the eigenfeature dictionary A to random values N(0, 1).
3: repeat
4: Set the state Z_t (1 < t ≤ n) of each token w_t to the eigenfeature vector of the corresponding word: Z_t = (A_w : w = w_t)
5: Smooth the state estimates before and after each token to get a pair of views for each smoothing rate α_j:
S_t^(l,j) = (1 − α_j) S_{t−1}^(l,j) + α_j Z_{t−1}   // left view L
S_t^(r,j) = (1 − α_j) S_{t+1}^(r,j) + α_j Z_{t+1}   // right view R
where the t-th rows of L and R are, respectively, concatenations of the smooths S_t^(l,j) and S_t^(r,j) over the α_j's.
6: Find the left and right canonical correlates, which are the eigenvectors Φ_l and Φ_r of
(L′L)^{−1} L′R (R′R)^{−1} R′L Φ_l = λ Φ_l,
(R′R)^{−1} R′L (L′L)^{−1} L′R Φ_r = λ Φ_r.
7: Project the left and right views onto the space spanned by the top k/2 left and right CCAs respectively: X_l = L Φ_l^(k/2) and X_r = R Φ_r^(k/2), where Φ_l^(k/2), Φ_r^(k/2) are matrices composed of the singular vectors of Φ_l, Φ_r with the k/2 largest-magnitude singular values. Estimate the state for each word w_t as the union of the left and right estimates: Z = [X_l, X_r]
8: Estimate the eigenfeatures of each word type w as the average of the states estimated for that word: A_w = avg(Z_t : w_t = w)
9: Compute the change in A from the previous iteration
10: until |ΔA| < ε
11: Output: Φ_l^k, Φ_r^k, A

A few iterations (∼5) of the above algorithm are sufficient to converge to the solution.
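The smoothing recursion in Step 5 of Algorithm 1 can be sketched directly. This is our own minimal implementation (function name and toy values are ours): each token's left view is a decayed average of the states strictly before it, and the right view mirrors it in the other direction.

```python
import numpy as np

# Sketch (ours) of the exponential smooths from Step 5:
#   S_t = (1 - alpha) * S_{t-1} + alpha * Z_{t-1}   (left view)
#   S_t = (1 - alpha) * S_{t+1} + alpha * Z_{t+1}   (right view)
def smooth(Z, alpha, direction="left"):
    n, k = Z.shape
    S = np.zeros((n, k))
    idx = range(1, n) if direction == "left" else range(n - 2, -1, -1)
    step = -1 if direction == "left" else 1
    for t in idx:
        S[t] = (1 - alpha) * S[t + step] + alpha * Z[t + step]
    return S

Z = np.array([[1.0], [2.0], [3.0]])
S_left = smooth(Z, 0.5, "left")      # states 0, 0.5, 1.25
S_right = smooth(Z, 0.5, "right")    # states 1.75, 1.5, 0
```

Running the recursion once per smoothing rate α_j and concatenating the results gives the rows of L and R.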
(Since the problem is convex, there is a single solution, so there is no issue of local minima.) As [14] show for PCA, one can start with a random matrix that is only slightly larger than the true rank k of the correlation matrix, and with extremely high likelihood converge in a few iterations to within a small distance of the true principal components. In our case, if the assumptions detailed above (1, 1A, 2 and 3) are satisfied, our method converges equally rapidly to the true canonical variates.

As mentioned earlier, we get further dimensionality reduction in Step 5 by replacing the left and right context matrices with a set of exponentially smoothed values of the reduced rank projections of the context words. Step 6 finds the CCA between the left and right contexts. Step 7 estimates the state by combining the estimates from the left and right contexts, since we don't know which will best estimate the state. Step 8 takes the CCA between the estimated state Z and the matrix of words W. Because W is a vector of indicator functions, this CCA takes the trivial form of a set of averages.

Once we have estimated the CCA model, it is used to generate context-specific embeddings for the tokens from the training, development and test sets (as described in Algorithm 2). These embeddings are further supplemented with other baseline features and used in a supervised learner to predict the label of each token.
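Step 8's "trivial CCA" reduces to per-word averaging, which the following sketch makes explicit (our own helper; names and toy values are ours, not the paper's code).

```python
import numpy as np

# Step 8 in miniature (our sketch): because W consists of indicator columns,
# the CCA with the estimated states reduces to averaging the states of each
# word type.
def update_dictionary(tokens, Z, v):
    k = Z.shape[1]
    A = np.zeros((v, k))
    counts = np.zeros(v)
    for wid, z in zip(tokens, Z):
        A[wid] += z
        counts[wid] += 1
    nonzero = counts > 0
    A[nonzero] /= counts[nonzero, None]
    return A

tokens = [0, 1, 0]
Z = np.array([[2.0], [5.0], [4.0]])
A_new = update_dictionary(tokens, Z, v=2)   # word 0 -> mean(2, 4) = 3; word 1 -> 5
```

Each outer iteration of Algorithm 1 thus alternates a CCA-based state estimate with this averaging update of A.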
Algorithm 2 LR-MVL Algorithm - Inducing Context-Specific Embeddings for Train/Dev/Test Data
1: Input: Model (Φ_l^k, Φ_r^k, A) output from the above algorithm, and token sequences W_train, (W_dev, W_test)
2: Project the left and right views L and R, after smoothing, onto the space spanned by the top k left and right CCAs respectively: X_l = L Φ_l^k and X_r = R Φ_r^k; and project the words onto the eigenfeature dictionary: X_w = W_train A
3: Form the final embedding matrix X_train:embed by concatenating these three estimates of state: X_train:embed = [X_l, X_w, X_r]
4: Output: The embedding matrices X_train:embed, (X_dev:embed, X_test:embed) with context-specific representations for the tokens.

These embeddings are augmented with the baseline set of features mentioned in Sections 4.1.1 and 4.1.2 before learning the final classifier. Note that we can also get context-"oblivious" embeddings, i.e., one embedding per word type, just by using the eigenfeature dictionary (A, v × k) output by Algorithm 1.

4 Experimental Results

In this section we present the experimental results of LR-MVL on Named Entity Recognition (NER) and syntactic chunking tasks. We compare LR-MVL to state-of-the-art semi-supervised approaches like [1] (Alternating Structure Optimization (ASO)) and [2] (a semi-supervised extension of CRFs), as well as to embeddings like C&W, HLBL and Brown clustering.

4.1 Datasets and Experimental Setup

For the NER experiments we used the data from the CoNLL 2003 shared task, and for the chunking experiments we used the CoNLL 2000 shared task data, with standard training, development and test set splits. The CoNLL '03 and CoNLL '00 datasets had ∼204K/51K/46K and ∼212K/−/47K tokens, respectively, for the Train/Dev./Test sets.

4.1.1 Named Entity Recognition (NER)

We use the same set of baseline features as used by [15, 16] in their experiments.
The detailed list of features is as follows:
• Current word w_i; its type information: all-capitalized, is-capitalized, all-digits and so on; prefixes and suffixes of w_i.
• Word tokens in a window of 2 around the current word, i.e., d = (w_{i−2}, w_{i−1}, w_i, w_{i+1}, w_{i+2}), and the capitalization pattern in the window.
• Previous two predictions y_{i−1} and y_{i−2}, and the conjunction of d and y_{i−1}.
• Embedding features (LR-MVL, C&W, HLBL, Brown etc.) in a window of 2 around the current word (if applicable).

Following [17], we use a regularized averaged perceptron model with the above set of baseline features for the NER task. We also used their BILOU text chunk representation and fast greedy inference, as these were shown to give superior performance. (More details about the data and competitions are available at http://www.cnts.ua.ac.be/conll2003/ner/ and http://www.cnts.ua.ac.be/conll2000/chunking/.)

We also augment the above set of baseline features with gazetteers, as is standard practice in NER experiments. We tuned our free parameter, namely the size of the LR-MVL embedding, on the development set, and scaled our embedding features to have an ℓ2 norm of 1 for each token, further multiplying them by a normalization constant (also chosen by cross-validation) so that, when used in conjunction with other categorical features in a linear classifier, they do not exert extra influence. The size of LR-MVL embeddings (state space) that gave the best performance on the development set was k = 50 (50 each for X_l, X_w, X_r in Algorithm 2), i.e., the total size of the embeddings was 50 × 3, and the best normalization constant was 0.5. We omit validation plots due to paucity of space.

4.1.2 Chunking

For our chunking experiments we use a similar base set of features as above:
• Current word w_i and word tokens in a window of 2 around the current word, i.e., d = (w_{i−2}, w_{i−1}, w_i, w_{i+1}, w_{i+2});
• POS tags t_i in a window of 2 around the current word.
• Word conjunction features w_i ∩ w_{i+1}, i ∈ {−1, 0}, and tag conjunction features t_i ∩ t_{i+1}, i ∈ {−2, −1, 0, 1}, and t_i ∩ t_{i+1} ∩ t_{i+2}, i ∈ {−2, −1, 0}.
• Embedding features in a window of 2 around the current word (when applicable).

Since the CoNLL '00 chunking data does not have a development set, we randomly sampled 1000 sentences from the training data (8936 sentences) for development. So, we trained our chunking models on 7936 training sentences, evaluated their F1 score on the 1000 development sentences, and used a CRF as the supervised classifier. We tuned the size of the embedding and the magnitude of the ℓ2 regularization penalty of the CRF on the development set, and took the log (or −log of the magnitude) of the value of the features. The regularization penalty that gave the best performance on the development set was 2, and here again the best size of LR-MVL embeddings (state space) was k = 50. Finally, we trained the CRF on the entire ("original") training data, i.e., 8936 sentences.

4.1.3 Unlabeled Data and Induction of Embeddings

For inducing the embeddings we used the RCV1 corpus, containing Reuters newswire from Aug '96 to Aug '97, with about 63 million tokens in 3.3 million sentences. Case was left intact, and we did not do the "cleaning" done by [18, 16], i.e., removing all sentences which are less than 90% lowercase a-z, as our multi-view learning approach is robust to such noisy data, like news byline text (mostly all caps), which does not correlate strongly with the text of the article. We induced our LR-MVL embeddings over a period of 3 days (70 core hours on a 3.0 GHz CPU) on the entire RCV1 data by performing 4 iterations, with a vocabulary size of 300k, and using a variety of smoothing rates (α in Algorithm 1) to capture correlations between shorter and longer contexts: α = [0.005, 0.01, 0.05, 0.1, 0.5, 0.9]. In principle we could tune the smoothing parameters on the development set, but we found this mixture of long- and short-term dependencies to work well in practice.
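The embedding rescaling described in the NER setup (unit ℓ2 norm per token, then a scalar chosen by cross-validation; 0.5 was the paper's best value) can be sketched as follows. The helper name is ours.

```python
import numpy as np

# Our sketch of the normalization: rescale each token's embedding to unit l2
# norm, then multiply by a constant so the dense features do not dominate the
# categorical features in the linear classifier.
def normalize_embeddings(X, c=0.5):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # leave all-zero rows untouched
    return c * X / norms

X = np.array([[3.0, 4.0], [0.0, 0.0]])
X_scaled = normalize_embeddings(X)     # row 0 becomes (0.3, 0.4); zero row stays zero
```

After this step each non-zero token embedding has ℓ2 norm exactly c.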
As far as the other embeddings are concerned (C&W, HLBL and Brown clusters), we downloaded them from http://metaoptimize.com/projects/wordreprs. The details about their induction and parameter tuning can be found in [16]; we report their best numbers here. It is also worth noting that the unsupervised training of LR-MVL was more than 1.5 times faster than that of the other embeddings; some of those embeddings were trained on GPGPUs, which makes our method even faster comparatively.

(Footnotes from this section: the CRF implementation used was http://www.chokkan.org/software/crfsuite/. Our embeddings are learnt using a linear model whereas the CRF is a log-linear model, so to keep things on the same scale we applied the log normalization described above. We chose the RCV1 dataset to make a fair comparison with [1, 16], who report results using RCV1 as unlabeled data.)

4.2 Results

The results for NER and chunking are shown in Tables 1 and 2, respectively, which show that LR-MVL performs significantly better than state-of-the-art competing methods on both NER and chunking tasks.

Table 1: NER results (F1 scores).

Embedding/Model              | Dev. Set | Test Set
-- No Gazetteers --
Baseline                     | 90.03    | 84.39
C&W, 200-dim                 | 92.46    | 87.46
HLBL, 100-dim                | 92.00    | 88.13
Brown, 1000 clusters         | 92.32    | 88.52
Ando & Zhang '05             | 93.15    | 89.31
Suzuki & Isozaki '08         | 93.66    | 89.36
LR-MVL (CO), 50 × 3-dim      | 93.11    | 89.55
LR-MVL, 50 × 3-dim           | 93.61    | 89.91
-- With Gazetteers --
HLBL, 100-dim                | 92.91    | 89.35
C&W, 200-dim                 | 92.98    | 88.88
Brown, 1000 clusters         | 93.25    | 89.41
LR-MVL (CO), 50 × 3-dim      | 93.91    | 89.89
LR-MVL, 50 × 3-dim           | 94.41    | 90.06

Notes: 1) LR-MVL (CO) denotes the context-oblivious embeddings obtained from A in Algorithm 1. 2) The F1 score is the harmonic mean of precision and recall. 3) The current state of the art for this NER task is 90.90 (test set), but it uses 700 billion tokens of unlabeled data [19].

Table 2: Chunking results.

Embedding/Model              | Test Set F1
Baseline                     | 93.79
HLBL, 50-dim                 | 94.00
C&W, 50-dim                  | 94.10
Brown, 3200 clusters         | 94.11
Ando & Zhang '05             | 94.39
Suzuki & Isozaki '08         | 94.67
LR-MVL (CO), 50 × 3-dim      | 95.02
LR-MVL, 50 × 3-dim           | 95.44
It is important to note that in problems like NER, the final accuracy depends on performance on rare-words and since LR-MVL is robustly able to correlate past with future views, it is able to learn better representations for rare words resulting in overall better accuracy. On rare-words (occurring < 10 times in corpus), we got 11.7%, 10.7% and 9.6% relative reduction in error over C&W, HLBL and Brown respectively for NER; on chunking the corresponding numbers were 6.7%, 7.1% and 8.7%. Also, it is worth mentioning that modeling the context in embeddings gives decent improvements in accuracies on both NER and Chunking problems. For the case of NER, the polysemous words were mostly like Chicago, Wales, Oakland etc., which could either be a location or organization (Sports teams, Banks etc.), so when we don’t use the gazetteer features, (which are known lists of cities, persons, organizations etc.) we got higher increase in F-score by modeling context, compared to the case when we already had gazetteer features which captured most of the information about polysemous words for NER dataset and modeling the context didn’t help as much. The polysemous words for Chunking dataset were like spot (VP/NP), never (VP/ADVP), more (NP/VP/ADVP/ADJP) etc. and in this case embeddings with context helped significantly, giving 3.1 −6.5% relative improvement in accuracy over context oblivious embeddings. 5 Summary and Conclusion In this paper, we presented a novel CCA-based multi-view learning method, LR-MVL, for large scale sequence learning problems such as arise in NLP. LR-MVL is a spectral method that works in low dimensional state-space so it is computationally efficient, and can be used to train using large amounts of unlabeled data; moreover it does not get stuck in local optima like an EM trained HMM. The embeddings learnt using LR-MVL can be used as features with any supervised learner. 
LR-MVL has strong theoretical grounding, is much simpler and faster than competing methods, and achieves state-of-the-art accuracies on NER and chunking problems.

Acknowledgements: The authors would like to thank Alexander Yates, Ted Sandler and the three anonymous reviewers for providing valuable feedback. We would also like to thank Lev Ratinov and Joseph Turian for answering our questions regarding their paper [16].

References
[1] Ando, R., Zhang, T.: A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research 6 (2005) 1817–1853
[2] Suzuki, J., Isozaki, H.: Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In: ACL. (2008)
[3] Brown, P., deSouza, P., Mercer, R., Pietra, V.D., Lai, J.: Class-based n-gram models of natural language. Comput. Linguist. 18 (December 1992) 467–479
[4] Pereira, F., Tishby, N., Lee, L.: Distributional clustering of English words. In: 31st Annual Meeting of the ACL. (1993) 183–190
[5] Huang, F., Yates, A.: Distributional representations for handling sparsity in supervised sequence-labeling. In: ACL '09, Stroudsburg, PA, USA, Association for Computational Linguistics (2009) 495–503
[6] Collobert, R., Weston, J.: A unified architecture for natural language processing: deep neural networks with multitask learning. In: ICML '08, New York, NY, USA, ACM (2008) 160–167
[7] Mnih, A., Hinton, G.: Three new graphical models for statistical language modelling. In: ICML '07, New York, NY, USA, ACM (2007) 641–648
[8] Dumais, S., Furnas, G., Landauer, T., Deerwester, S., Harshman, R.: Using latent semantic analysis to improve access to textual information. In: SIGCHI Conference on Human Factors in Computing Systems, ACM (1988) 281–285
[9] Hsu, D., Kakade, S., Zhang, T.: A spectral algorithm for learning hidden Markov models. In: COLT. (2009)
[10] Siddiqi, S., Boots, B., Gordon, G.J.: Reduced-rank hidden Markov models. In: AISTATS.
(2010)
[11] Song, L., Boots, B., Siddiqi, S.M., Gordon, G.J., Smola, A.J.: Hilbert space embeddings of hidden Markov models. In: ICML. (2010)
[12] Hotelling, H.: Canonical correlation analysis (CCA). Journal of Educational Psychology (1935)
[13] Blum, A., Mitchell, T.: Combining labeled and unlabeled data with co-training. In: COLT '98. (1998) 92–100
[14] Halko, N., Martinsson, P.G., Tropp, J.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. (Dec 2010)
[15] Zhang, T., Johnson, D.: A robust risk minimization based named entity recognition system. In: CoNLL '03 (2003) 204–207
[16] Turian, J., Ratinov, L., Bengio, Y.: Word representations: a simple and general method for semi-supervised learning. In: ACL '10, Stroudsburg, PA, USA, Association for Computational Linguistics (2010) 384–394
[17] Ratinov, L., Roth, D.: Design challenges and misconceptions in named entity recognition. In: CoNLL. (2009) 147–155
[18] Liang, P.: Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology (2005)
[19] Lin, D., Wu, X.: Phrase clustering for discriminative learning. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. ACL '09, Stroudsburg, PA, USA, Association for Computational Linguistics (2009) 1030–1038
Projection onto A Nonnegative Max-Heap

Jun Liu, Arizona State University, Tempe, AZ 85287, USA, j.liu@asu.edu
Liang Sun, Arizona State University, Tempe, AZ 85287, USA, sun.liang@asu.edu
Jieping Ye, Arizona State University, Tempe, AZ 85287, USA, jieping.ye@asu.edu

Abstract

We consider the problem of computing the Euclidean projection of a vector of length p onto a non-negative max-heap, an ordered tree where the values of the nodes are all nonnegative and the value of any parent node is no less than the value(s) of its child node(s). This Euclidean projection plays a building-block role in optimization problems with a non-negative max-heap constraint. Such a constraint is desirable when the features follow an ordered tree structure, that is, a given feature is selected for the given regression/classification task only if its parent node is selected. In this paper, we show that this Euclidean projection problem admits an analytical solution, and we develop a top-down algorithm where the key operation is to find the so-called maximal root-tree of the subtree rooted at each node. A naive approach to finding the maximal root-tree is to enumerate all possible root-trees, which, however, does not scale well. We reveal several important properties of the maximal root-tree, based on which we design a bottom-up algorithm with merge for efficiently finding the maximal root-tree. The proposed algorithm has a (worst-case) linear time complexity for a sequential list, and O(p^2) for a general tree. We report simulation results showing the effectiveness of the max-heap for regression with an ordered tree structure. Empirical results show that the proposed algorithm has an expected linear time complexity for many special cases, including a sequential list, a full binary tree, and a tree with depth 1.
1 Introduction

In many regression/classification problems, the features exhibit certain hierarchical or structural relationships, the use of which can yield an interpretable model with improved regression/classification performance [25]. Recently, there has been increasing interest in structured sparsity, with various approaches for incorporating structure; see [7, 8, 9, 17, 24, 25] and references therein. In this paper, we consider an ordered tree structure: a given feature is selected for the given regression/classification task only if its parent node is selected. To incorporate such an ordered tree structure, we assume that the model parameter x ∈ R^p follows the non-negative max-heap structure (to deal with negative model parameters, one can make use of the technique employed in [24], which solves the scaled version of the least squares estimate):

P = {x : x ≥ 0, x_i ≥ x_j ∀ (x_i, x_j) ∈ E^t},   (1)

where T^t = (V^t, E^t) is a target tree with V^t = {x_1, x_2, . . . , x_p} containing all the nodes and E^t all the edges. The constraint set P implies that if x_i is the parent node of a child node x_j, then the value of x_i is no less than the value of x_j. In other words, if a parent node x_i is 0, then any of its child nodes x_j is also 0. Figure 1 illustrates three special tree structures: 1) a full binary tree, 2) a sequential list, and 3) a tree with depth 1.

Figure 1: Illustration of a non-negative max-heap as depicted in (1). Plots (a), (b), and (c) correspond to a full binary tree, a sequential list, and a tree with depth 1, respectively.

The set P defined in (1) induces the so-called "heredity principle" [3, 6, 18, 24], which has been proven effective for high-dimensional variable selection. In a recent study [12], Li et al. conducted a meta-analysis of 113 data sets from published factorial experiments and concluded that an overwhelming majority of these real studies conform with the heredity principles.
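The constraint set P in (1) is simple to check directly. The helper below is our own illustration (not from the paper): the tree is encoded as a parent array, and membership in P requires every entry to be nonnegative and no child to exceed its parent.

```python
# Our helper: check that a vector lies in the constraint set P of Eq. (1).
# The tree is given as a parent array (-1 marks the root).
def in_heap(x, parent, tol=1e-9):
    for i, p in enumerate(parent):
        if x[i] < -tol:
            return False                 # nonnegativity violated
        if p >= 0 and x[p] < x[i] - tol:
            return False                 # child exceeds its parent
    return True

# The sequential list x1 >= x2 >= x3 >= 0 from Figure 1(b), with p = 3:
parent = [-1, 0, 1]
print(in_heap([3.0, 2.0, 0.0], parent))   # True
print(in_heap([3.0, 4.0, 0.0], parent))   # False: node 2 exceeds its parent
```

The same encoding works for all three tree shapes in Figure 1; only the parent array changes.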
The ordered tree structure is a special case of the non-negative garrote discussed in [24] when the hierarchical relationship is depicted by a tree; therefore, the asymptotic properties established in [24] are applicable to the ordered tree structure. Several related approaches that can incorporate the ordered tree structure include the Wedge approach [17] and the hierarchical group Lasso [25]. The Wedge approach incorporates the ordering information by designing a penalty for the model parameter x as

Ω(x|P) = inf_{t∈P} (1/2) Σ_{i=1}^p (x_i^2 / t_i + t_i),

with the tree being a sequential list. By imposing the mixed ℓ1-ℓ2 norm on each group formed by the nodes in the subtree of a parent node, the hierarchical group Lasso is able to incorporate such ordering information; it has been applied to multi-task learning in [11] with a tree structure, and its efficient computation was discussed in [10, 15]. Compared to Wedge and the hierarchical group Lasso, the max-heap in (1) incorporates the ordering information directly, and our simulation results show that the max-heap can achieve lower reconstruction error than both approaches.

In estimating the model parameter satisfying the ordered tree structure, one needs to solve the following constrained optimization problem:

min_{x∈P} f(x)   (2)

for some convex function f(·). Problem (2) can be solved via many approaches, including subgradient descent, the cutting plane method, gradient descent, accelerated gradient descent, etc. [19, 20]. In applying these approaches, a key building block is the Euclidean projection of a vector v onto the convex set P:

π_P(v) = argmin_{x∈P} (1/2) ||x − v||_2^2,   (3)

which ensures that the solution belongs to the constraint set P. For some special sets P (e.g., a hyperplane, halfspace, or rectangle), the Euclidean projection admits a simple analytical solution; see [2].
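For the simple sets just mentioned, the projection in (3) really is a one-liner, which is what makes the tree-structured case interesting by contrast. The two textbook cases below are our own illustration (names are ours), not part of the paper.

```python
import numpy as np

# Two classical closed-form projections (our sketch), for contrast with the
# tree-structured set P of Eq. (1):
def project_box(v, lo, hi):
    """Projection onto the rectangle [lo, hi]^p: coordinate-wise clipping."""
    return np.clip(v, lo, hi)

def project_halfspace(v, a, b):
    """Projection onto the halfspace {x : a.x <= b}."""
    slack = a @ v - b
    if slack <= 0:
        return v.copy()                     # already feasible
    return v - (slack / (a @ a)) * a        # step back along the normal

v = np.array([2.0, -1.0, 5.0])
p_box = project_box(v, 0.0, 3.0)            # (2, 0, 3)
p_half = project_halfspace(v, np.array([1.0, 0.0, 0.0]), 1.0)   # (1, -1, 5)
```

No such coordinate-wise formula exists for the max-heap set P, which is why the rest of the paper develops a dedicated algorithm.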
In the literature, researchers have developed efficient Euclidean projection algorithms for the ℓ1-ball [5, 14], the ℓ1/ℓ2-ball [1], and polyhedra [4, 22]. When P is induced by a sequential list, a linear time algorithm was recently proposed in [26]. Without the non-negativity constraints, problem (3) is the so-called isotonic regression problem [16, 21]. Our major technical contribution in this paper is the efficient computation of (3) for the set P defined in (1). In Section 2, we show that the Euclidean projection admits an analytical solution, and we develop a top-down algorithm where the key operation is to find the so-called maximal root-tree of the subtree rooted at each node. In Section 3, we design a bottom-up algorithm with merge for efficiently finding the maximal root-tree by using its properties. We provide empirical results for the proposed algorithm in Section 4, and conclude the paper in Section 5.

2 Atda: A Top-Down Algorithm

In this section, we develop an algorithm in a top-down manner, called Atda, for solving (3). From the target tree T^t = (V^t, E^t), we construct the input tree T = (V, E) for the input vector v, where V = {v_1, v_2, . . . , v_p} and E = {(v_i, v_j) | (x_i, x_j) ∈ E^t}. For the convenience of presenting our proposed algorithm, we begin with several definitions; we also provide some examples elaborating the definitions in the supplementary file A.1.

Definition 1. For a non-empty tree T = (V, E), we define its root-tree as any non-empty tree T̃ = (Ṽ, Ẽ) that satisfies: 1) Ṽ ⊆ V, 2) Ẽ ⊆ E, and 3) T̃ shares the same root as T.

Definition 2. For a non-empty tree T = (V, E), we define R(T) as the root-tree set containing all its root-trees.

Definition 3. For a non-empty tree T = (V, E), we define

m(T) = max( (Σ_{v_i∈V} v_i) / |V|, 0 ),   (4)

which equals the mean of all the nodes in T if that mean is non-negative, and 0 otherwise.

Definition 4.
For a non-empty tree T = (V, E), we define its maximal root-tree as

M_max(T) = argmax_{T̃=(Ṽ,Ẽ): T̃∈R(T), m(T̃)=m_max(T)} |Ṽ|,   (5)

where

m_max(T) = max_{T̃∈R(T)} m(T̃)   (6)

is the maximal value over all the root-trees of the tree T. Note that if two root-trees share the same maximal value, (5) selects the one with the largest tree size. When T̃ = (Ṽ, Ẽ) is part of a "larger" tree T = (V, E), i.e., Ṽ ⊆ V and Ẽ ⊆ E, we can treat T̃ as a "super-node" of the tree T with value m(T̃). Thus, we have the following definition of a super-tree (note that a super-tree provides a disjoint partition of the given tree):

Definition 5. For a non-empty tree T = (V, E), we define its super-tree as S = (V_S, E_S), which satisfies: 1) each node in V_S = {T_1, T_2, . . . , T_n} is a non-empty tree with T_i = (V_i, E_i), 2) V_i ⊆ V and E_i ⊆ E, 3) V_i ∩ V_j = ∅ for i ≠ j and V = ∪_{i=1}^n V_i, and 4) (T_i, T_j) ∈ E_S if and only if there exists a node in T_j whose parent node is in T_i.

2.1 Proposed Algorithm

We present the pseudo-code for solving (3) in Algorithm 1. The key idea is that, in the i-th call, we find T_i = M_max(T), the maximal root-tree of T, set the entries of x̃ corresponding to the nodes of T_i to m_i = m_max(T) = m(T_i), remove T_i from the tree T, and apply Atda to the resulting trees one by one recursively.

Algorithm 1 A Top-Down Algorithm: Atda
Input: the tree structure T = (V, E), i
Output: x̃ ∈ R^p
1: Set i = i + 1
2: Find the maximal root-tree of T, denoted by T_i = (V_i, E_i), and set m_i = m(T_i)
3: if m_i > 0 then
4: Set x̃_j = m_i, ∀ v_j ∈ V_i
5: Remove the root-tree T_i from T, denote the resulting trees as T̃_1, T̃_2, . . . , T̃_{r_i}, and apply Atda(T̃_j, i), ∀ j = 1, 2, . . . , r_i
6: else
7: Set x̃_j = m_i, ∀ v_j ∈ V_i
8: end if

2.2 Illustration & Justification

For a better illustration and justification of the proposed algorithm, we provide an analysis of Atda for a special case, the sequential list, in the supplementary file A.2. Let us analyze Algorithm 1 for the general tree.
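As a concrete (and deliberately naive) companion to the pseudo-code, the sketch below implements Atda numerically. It is our own construction, not the paper's efficient bottom-up algorithm with merge: the maximal root-tree is found parametrically, using the fact that g(λ) = max over root-trees of Σ(v_i − λ) is a decreasing function of λ computable by a bottom-up pass (keep a child's subtree only if it helps), whose root is m_max(T); bisection recovers it.

```python
# Naive numerical sketch of Atda (ours). Tree: node values + children lists.
def best_value(lam, root, values, children, keep=None):
    """Bottom-up value of the best root-tree for trial mean lam."""
    total = values[root] - lam
    for c in children[root]:
        bc = best_value(lam, c, values, children, keep)
        if bc >= -1e-9:                       # keeping this subtree helps (or ties)
            total += bc
            if keep is not None:
                keep.add(c)
        elif keep is not None:
            keep.discard(c)
    return total

def atda(values, children, root=0, x=None):
    if x is None:
        x = [0.0] * len(values)
    lo, hi = 0.0, max(max(values), 0.0) + 1.0
    for _ in range(80):                       # bisection for the root of g
        mid = 0.5 * (lo + hi)
        if best_value(mid, root, values, children) >= 0:
            lo = mid
        else:
            hi = mid
    m = lo if lo > 1e-9 else 0.0              # m_max(T), clipped at 0
    keep = {root}
    best_value(m, root, values, children, keep)
    tree, stack = set(), [root]               # nodes of the maximal root-tree
    while stack:
        u = stack.pop()
        tree.add(u)
        stack.extend(c for c in children[u] if c in keep)
    for u in tree:
        x[u] = m
    if m > 0:                                  # recurse on the remaining forest
        for u in tree:
            for c in children[u]:
                if c not in tree:
                    atda(values, children, c, x)
    return x

# Depth-1 tree: root value 2 with children -4 and 4 (cf. Figure 1(c)).
children = {0: [1, 2], 1: [], 2: []}
x = atda([2.0, -4.0, 4.0], children)
print([round(v, 4) for v in x])               # [3.0, 0.0, 3.0]
```

Here the maximal root-tree is {2, 4} with mean 3, so those two entries are pooled to 3 and the negative child is clipped to 0, exactly the behavior Algorithm 1 prescribes.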
Figure 2 illustrates solving (3) via Algorithm 1 for a tree with depth 3. Plot (a) shows a target tree T^t, and plot (b) denotes the input tree T. The dashed frame of plot (b) shows M_max(T), the maximal root-tree of T, and we have m_max(T) = 3. Thus, we set the corresponding entries of x̃ to 3. Plot (c) depicts the resulting trees after removing the maximal root-tree in plot (b), and plot (d) shows the maximal root-trees (enclosed by dashed frames) generated by the algorithm. Treating each generated maximal root-tree as a super-node with the value defined in Definition 3, plot (d) is a super-tree of the input tree T. In addition, the super-tree is a max-heap, i.e., the value of a parent node is no less than the values of its child nodes. Plot (e) gives the solution x̃ ∈ R^15. The edges of plot (f) correspond to the values of the dual variables, from which we can also obtain the optimal solution x̃ ∈ R^15. Finally, we can observe that the non-zero entries of x̃ constitute a cut of the original tree.

Figure 2: Illustration of Algorithm 1 for solving (3) for a tree with depth 3. Plot (a) shows the target tree T^t, and plots (b-e) illustrate Atda. Specifically, plot (b) denotes the input tree T, with the dashed frame displaying its maximal root-tree; plot (c) depicts the resulting trees after removing the maximal root-tree in plot (b); plot (d) shows the resulting super-tree (we treat each tree enclosed by a dashed frame as a super-node); plot (e) gives the solution x̃ ∈ R^15; and the edges of plot (f) show the dual variables, from which we can also obtain the optimal solution x̃ (refer to the proof of Theorem 1).
We verify the correctness of Algorithm 1 for the general tree in the following theorem; the proof makes use of the KKT conditions and a variational inequality [20].

Theorem 1. x̃ = Atda(T, 0) provides the unique optimal solution to (3).

Proof: As the objective function of (3) is strictly convex and the constraints are affine, the problem admits a unique solution. After running Algorithm 1, we obtain the sequences {T_i}_{i=1}^k and {m_i}_{i=1}^k, where 1 ≤ k ≤ p. It is easy to verify that the trees T_i, i = 1, 2, . . . , k constitute a disjoint partition of the input tree T. With the sequences {T_i}_{i=1}^k and {m_i}_{i=1}^k, we can construct a super-tree of the input tree T as follows: 1) we treat T_i as a super-node with value m_i, and 2) we put an edge between T_i and T_j if there is an edge between the nodes of T_i and T_j in the input tree T. With Algorithm 1, we can verify that the resulting super-tree has the property that the value of a parent node is no less than that of its child nodes. Therefore, x̃ = Atda(T, 0) satisfies x̃ ∈ P.

Let x^l and v^l denote the subsets of x and v corresponding to the indices appearing in the subtree T_l. Denote P^l = {x^l : x^l ≥ 0, x_i ≥ x_j ∀ (v_i, v_j) ∈ E_l}, I_1 = {l : m_l > 0}, and I_2 = {l : m_l = 0}. Our proof is based on the following inequality:

min_{x∈P} (1/2) ||x − v||_2^2 ≥ Σ_{l∈I_1} min_{x^l∈P^l} (1/2) ||x^l − v^l||_2^2 + Σ_{l∈I_2} min_{x^l∈P^l} (1/2) ||x^l − v^l||_2^2,   (7)

which holds because the left-hand side has additional inequality constraints compared to the right-hand side. Our methodology is to show that x̃ = Atda(T, 0) provides the optimal solution to the right-hand side of (7), i.e.,

x̃^l = argmin_{x^l∈P^l} (1/2) ||x^l − v^l||_2^2, ∀ l ∈ I_1,   (8)
x̃^l = argmin_{x^l∈P^l} (1/2) ||x^l − v^l||_2^2, ∀ l ∈ I_2,   (9)

which, together with the facts that (1/2) ||x̃ − v||_2^2 ≥ min_{x∈P} (1/2) ||x − v||_2^2 and x̃ ∈ P, leads to our main argument. Next, we prove (8) by the KKT conditions, and (9) by the variational inequality [20].
Firstly, ∀l ∈I1, we introduce the dual variable yij for the edge (vi, vj) ∈El, and yii if vi ∈Ll, where Ll contains all the leaf nodes of the tree Tl. Denote the root of Tl by vrl. For all vi ∈Vl, vi ̸= vrl , we denote its parent node by vji, and for the root vrl, we denote jrl = rl. We let Cl i = {j|vj is a child node of vi in the tree Tl}. Rl i = {j|vj is in the subtree of Tl rooted at vi}. To prove (8), we verify that the primal variable ˜x = Atda(T, 0) and the dual variable ˜y satisfy the following KKT conditions: ∀(vi, vj) ∈El, ˜xi ≥˜xj ≥ 0 (10) ∀(vi, vj) ∈El, (˜xi −˜xj)˜yij = 0 (11) ∀vi ∈Ll, ˜yii˜xi = 0 (12) ∀vi ∈Vl, ˜xi −vi − X j∈Cl i ˜yij + ˜yjii = 0 (13) ∀(vi, vj) ∈El, ˜yij ≥ 0 (14) ∀vi ∈Ll, ˜yii ≥ 0, (15) where ˜yjrlrl = 0 (Note that ˜yjrlrl is a dual variable, and it is introduced for the simplicity of presenting (12)), and the dual variable ˜y is set as: ˜yii = 0, ∀i ∈Ll, (16) ˜yjii = vi −ml + X j∈Cl i ˜yij, ∀vi ∈Vl. (17) According to Algorithm 1, ˜xi = ml > 0, ∀vi ∈Vl, l ∈I1. Thus, we have (10)-(12) and (15). It follows from (17) that (13) holds. According to (16) and (17), we have ˜yjii = X j∈Rl i vj −|Rl i|ml, ∀vi ∈Vl, (18) where |Rl i| denotes the number of elements in Rl i, the subtree of Tl rooted at vi. From the nature of the maximal root-tree Tl, l ∈I1, we have P j∈Rl i vj ≥|Rl i|ml. Otherwise, if P j∈Rl i vj < |Rl i|ml, we can construct from Tl a new root-tree ¯Tl by removing the subtree of Tl rooted at vi, so that ¯Tl achieves a larger value than Tl. This contradicts with the argument that Tl, l ∈I1 is the maximal root-tree of the working tree T. Therefore, it follows from (18) that (14) holds. Secondly, we prove (9) by verifying the following optimality condition: ⟨xl −˜xl, ˜xl −vl⟩≥0, ∀xl ∈P l, l ∈I2, (19) which is the so-called variational inequality condition for ˜xl being the optimal solution to (9). According to Algorithm 1, if l ∈I2, we have ˜xi = 0, ∀vi ∈Vl. Thus, (19) is equivalent to ⟨xl, vl⟩≤0, ∀xl ∈P l, l ∈I2. 
(20) For a given xl ∈P l, if xi = 0, ∀vi ∈V l, (20) naturally holds. Next, we consider xl ̸= 0. Denote by ¯xl 1 the minimal nonzero element in xl, and T 1 l = (V 1 l , E1 l ) a tree constructed by removing the nodes corresponding to the indices in the set {i : xl i = 0, vi ∈Vl} from Tl. It is clear that T 1 l shares the same root as Tl. It follows from Algorithm 1 that P i:vi∈V 1 l vi ≤0. Thus, we have ⟨xl, vl⟩= ¯xl 1 X i:vi∈V 1 l vi + X i:vi∈V 1 l (xi −¯xl 1)vi ≤ X i:vi∈V 1 l (xi −¯xl 1)vi. 5 If xl i = ¯xl 1, ∀vi ∈V 1 l , we arrive at (20). Otherwise, we set r = 2; denote by ¯xl r the minimal nonzero element in the set {xi −Pr−1 j=1 ¯xl j : vi ∈V r−1 l }, and T r l = (V r l , Er l ) a subtree of T r−1 l by removing those nodes with the indices in the set {i : xl i −Pr−1 j=1 ¯xl j = 0, vi ∈V r−1 l }. It is clear that T r l shares the same root as T r−1 l and Tl as well, so that it follows from Algorithm 1 that P i:vi∈V r l vi ≤0. Therefore, we have X i:vi∈V r−1 l (xi − r−1 X j=1 ¯xl j)vi = ¯xl r X i:vi∈V r l vi + X i:vi∈V r l (xi − r X j=1 ¯xl j)vi ≤ X i:vi∈V r l (xi − r X j=1 ¯xl j)vi. (21) Repeating the above process until V r l is empty, we can verify that (20) holds. □ For a better understanding of the proof, we make use of the edges of Figure 2 (f) to show the dual variables, where the edge connecting vi and vj corresponds to the dual variable ˜yij, and the edge starting from the leaf node vi corresponds to the dual variable ˜yii. With the dual variables, we can compute ˜x via (13). We note that, for the maximal root-tree with a positive value, the associated dual variables are unique, but for the maximal root-tree with zero value, the associated dual variables may not be unique. For example, in Figure 2 (f), we set ˜yii = 1 for i = 12, ˜yii = 0 for i = 13, ˜yij = 2 for i = 6, j = 12, and ˜yij = 2 for i = 6, j = 13. 
It is easy to check that the dual variables can also be set as follows: ˜yii = 0 for i = 12, ˜yii = 1 for i = 13, ˜yij = 1 for i = 6, j = 12, and ˜yij = 3 for i = 6, j = 13. 3 Finding the Maximal Root-Tree A key operation of Algorithm 1 is to find the maximal root-tree used in Step 2. A naive approach for finding the maximal root-tree of a tree T is to enumerate all possible roottrees in the root-tree set R(T), and identify the maximal root-tree via (5). We call such an approach Anae, which stands for a naive algorithm with enumeration. Although Anae is simple to describe, it has a very high time complexity (see the analysis given in supplementary file A.3). To this end, we develop Abuam (A Bottom-Up Algorithm with Merge). The underlying idea is to make use of the special structure of the maximal root-tree defined in (5) for avoiding the enumeration of all possible root-trees. We begin the discussion with some key properties of the maximal root-tree, and the proof is given in the supplementary file A.4. Lemma 1. For a non-empty tree T = (V, E), denote its maximal root-tree as Tmax = (Vmax, Emax). Let ˜T = ( ˜V , ˜E) be a root-tree of Tmax. Assume that there are n nodes vi1, . . . , vin, which satisfy: 1) vij /∈˜V , 2) vij ∈V , and 3) the parent node of vij is in ˜V . If n ≥1, we denote the subtree of T rooted at vij as T j = (V j, Ej), j = 1, 2, . . . , n, T j max = (V j max, Ej max) as the maximal root-trees of T j, and ˜m = maxj=1,2,...,n m(T j max). Then, the followings hold: (1) If n = 0, then Tmax = ˜T = T; (2) If n ≥1, m( ˜T) = 0, and ˜m = 0, then Tmax = T; (3) If n ≥1, m( ˜T) > 0, and m( ˜T) > ˜m, then Tmax = ˜T; (4) If n ≥1, m( ˜T) > 0, and m( ˜T) ≤˜m, then V j max ⊆Vmax, Ej max ⊆Emax and (vi0, vij) ∈Emax, ∀j : m(T j max) = ˜m; and (5) If n ≥1, m( ˜T) = 0, and ˜m > 0, then V j max ⊆Vmax, Ej max ⊆ Emax and (vi0, vij) ∈Emax, ∀j : m(T j max) = ˜m. 
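Before turning to Abuam, the naive baseline Anae can be made concrete with a short sketch: enumerate every root-tree (a connected subtree containing the root) and keep the one with the largest value. For illustration we assume that the value m(T') of a root-tree T' is the mean of its node values (Definition 5 is not reproduced above; this choice is consistent with the inequality Σ_{j∈R} vj ≥ |R|·m in the proof of Theorem 1), and we break ties in favor of the larger root-tree. The exponential enumeration is exactly why Abuam is preferable.

```python
# Tree as {node: [children]}; vals maps node -> value.
# Anae: enumerate all root-trees and keep the one with the largest value,
# taken here to be the mean of the node values (an assumption, see above).

def root_trees(tree, node):
    """Yield the node sets of all root-trees of the subtree rooted at `node`."""
    kids = tree.get(node, [])
    # For each child, either drop its subtree entirely (empty set) or keep
    # one of the child's own root-trees; combine all choices.
    options = [[set()] + list(root_trees(tree, k)) for k in kids]
    def combine(i, acc):
        if i == len(options):
            yield {node} | acc
            return
        for choice in options[i]:
            yield from combine(i + 1, acc | choice)
    yield from combine(0, set())

def maximal_root_tree(tree, vals, root):
    return max(root_trees(tree, root),
               key=lambda s: (sum(vals[v] for v in s) / len(s), len(s)))

tree = {0: [1, 2]}                       # depth-1 tree: root 0, children 1, 2
vals = {0: 1.0, 1: 4.0, 2: -3.0}
print(maximal_root_tree(tree, vals, 0))  # {0, 1}: mean 2.5 beats the others
```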
For the convenience of presenting our proposed algorithm, we define the operation "merge" as follows:

Definition 6. Let T = (V, E) be a non-empty tree, and let T1 = (V1, E1) and T2 = (V2, E2) be two trees that satisfy: 1) they are composed of a subset of the nodes and edges of T, i.e., V1 ⊆ V, V2 ⊆ V, E1 ⊆ E, and E2 ⊆ E; 2) they do not overlap, i.e., V1 ∩ V2 = ∅ and E1 ∩ E2 = ∅; and 3) in the tree T, vi2, the root node of T2, is a child of vi1, a leaf node of T1. We define the operation "merge" as ˜T = merge(T1, T2, T), where ˜T = ( ˜V, ˜E) with ˜V = V1 ∪ V2 and ˜E = E1 ∪ E2 ∪ {(vi1, vi2)}.

Next, we make use of Lemma 1 to efficiently compute the maximal root-tree, and present the pseudo-code for Abuam in Algorithm 2. We provide an illustration of the proposed algorithm and an analysis of its computational cost in the supplementary file, Sections A.5 and A.6, respectively.

Algorithm 2 A Bottom-Up Algorithm with Merge: Abuam
Input: the input tree T = (V, E)
Output: the maximal root-tree Tmax = (Vmax, Emax)
1: Set T0 = (V0, E0), where V0 = {vi0} and E0 = ∅
2: if vi0 does not have a child node in T then
3:   Set Tmax = T0, return
4: end if
5: while 1 do
6:   Set ˜m = 0, denote by vi1, . . . , vin the n nodes that satisfy: 1) vij ∉ V0, 2) vij ∈ V, and 3) the parent node of vij is in V0, and denote by T^j = (V^j, E^j), j = 1, 2, . . . , n the subtree of T rooted at vij
7:   if n = 0 then
8:     Set Tmax = T0 = T, return
9:   end if
10:  for j = 1 to n do
11:    Set T^j_max = Abuam(T^j), and ˜m = max(m(T^j_max), ˜m)
12:  end for
13:  if m(T0) = ˜m = 0 then
14:    Set Tmax = T, return
15:  else if m(T0) > 0 and m(T0) > ˜m then
16:    Set Tmax = T0, return
17:  else
18:    Set T0 = merge(T0, T^j_max, T), ∀j : m(T^j_max) = ˜m
19:  end if
20: end while

Making use of the fact that T0 is always a valid root-tree of Tmax, the maximal root-tree of T, we can easily prove the following result using Lemma 1.

Theorem 2. Tmax returned by Algorithm 2 is the maximal root-tree of the input tree T.
4 Numerical Simulations

Effectiveness of the Max-Heap Structure. We test the effectiveness of the max-heap structure for linear regression b = Ax, following the same experimental setting as in [17]. Specifically, the elements of A ∈ R^{n×p} are generated i.i.d. from a zero-mean Gaussian distribution, and the columns of A are then normalized to have unit length. The regression vector x has p = 127 nonincreasing elements, where the first 10 elements are set as x*_i = 11 − i, i = 1, 2, . . . , 10, and the rest are zeros. We compare with the following three approaches: Lasso [23], Group Lasso [25], and Wedge [17]. Lasso makes no use of such ordering, while Wedge incorporates the structure by using an auxiliary ordered variable. For Group Lasso and Max-Heap, we try binary-tree grouping and list-tree grouping, where the associated trees are a full binary tree and a sequential list, respectively. The regression vector is placed on the tree so that the closer a node is to the root, the larger the element placed on it. In Group Lasso, the nodes appearing in the same subtree form a group. For the compared approaches, we use the implementations provided in [17]2; for Max-Heap, we solve (2) with f(x) = (1/2)∥Ax − b∥²₂ + ρ∥x∥₁ for some small ρ = r × ∥Aᵀb∥∞ (we set r = 10⁻⁴ and 10⁻⁸ for the binary-tree grouping and the list-tree grouping, respectively) and apply the accelerated gradient descent approach [19] with our proposed Euclidean projection. We compute the average model error ∥x − x*∥₂ over 50 independent runs, and report the results for a varying sample size n in Figure 3 (a) & (b). As expected, GL-binary, MH-binary, Wedge, GL-list and MH-list outperform Lasso, which does not incorporate such ordering information. MH-binary performs better than GL-binary, and MH-list performs better than Wedge and GL-list, due to the direct usage of such ordering information.
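The optimization pipeline used above (smooth loss plus an ℓ1 term, minimized by a gradient method with the Euclidean projection as the key subroutine) can be sketched for the list-tree (chain) case. The snippet below is a hedged sketch, not the paper's code: it uses plain projected gradient rather than the accelerated method of [19], a pool-adjacent-violators projection onto the chain-ordered non-negative cone, and synthetic data mimicking the described setup. Note that on this cone ∥x∥₁ = Σᵢ xᵢ, so the ℓ1 term contributes the constant ρ to the gradient.

```python
import numpy as np

def project_chain(v):
    # Projection onto {x1 >= x2 >= ... >= xp >= 0}: decreasing isotonic
    # regression by pool-adjacent-violators, then clipping at zero.
    blocks = []                        # (sum, count) of merged blocks
    for val in v:
        s, c = float(val), 1
        while blocks and blocks[-1][0] / blocks[-1][1] < s / c:
            ps, pc = blocks.pop()      # previous block violates the ordering
            s, c = s + ps, c + pc
        blocks.append((s, c))
    return np.concatenate([np.full(c, max(s / c, 0.0)) for s, c in blocks])

def fit(A, b, rho, steps=3000):
    # Projected gradient for 0.5*||Ax - b||^2 + rho*||x||_1 over the chain
    # cone; on that cone ||x||_1 = sum(x), so the l1 gradient is constant.
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = project_chain(x - lr * (A.T @ (A @ x - b) + rho))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
A /= np.linalg.norm(A, axis=0)                    # unit-length columns
x_true = np.r_[np.arange(10.0, 0.0, -1.0), np.zeros(10)]
b = A @ x_true
x_hat = fit(A, b, rho=1e-4 * np.max(np.abs(A.T @ b)))
print(np.linalg.norm(x_hat - x_true) < 0.5)  # True: small model error
```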
In addition, the list-tree grouping performs better than the binary-tree grouping, as it makes better use of the ordering information.

2 http://www.cs.ucl.ac.uk/staff/M.Pontil/software/sparsity.html

Figure 3: Simulation results. In plots (a) and (b), we show the average model error ∥x − x*∥₂ over 50 independent runs for the different approaches with the full binary-tree ordering and the list-tree ordering, respectively. In plots (c) and (d), we report the computational time (in seconds) of the proposed Atda (averaged over 100 runs) with different randomly initialized inputs v. In plots (e) and (f), we show the computational time of Atda over 100 runs.

Efficiency of the Proposed Projection. We test the efficiency of the proposed Atda approach for solving the Euclidean projection onto the non-negative max-heap, equipped with our proposed Abuam approach for finding the maximal root-trees.
In the experiments, we make use of the three tree structures depicted in Figure 1, and try two different distributions for randomly and independently generating the entries of the input v ∈ R^p: 1) a Gaussian distribution with zero mean, and 2) a uniform distribution on [0, 1]. In Figure 3 (c) & (d), we report the average computational time (in seconds) over 100 runs for different values of p = 2^{d+1} − 1, where d = 10, 12, . . . , 20. We can observe that the proposed algorithm scales linearly with p. In Figure 3 (e) & (f), we report the computational time of Atda over 100 runs when the ordered tree structure is a full binary tree. The results show that the computational time of the proposed algorithm is relatively stable across runs, especially for larger d or p. Note that the source code for our proposed algorithm has been included in the SLEP package [13].

5 Conclusion

In this paper, we have developed an efficient algorithm for the computation of the Euclidean projection onto a non-negative max-heap. The proposed algorithm has a (worst-case) linear time complexity for a sequential list, and O(p²) for a general tree. Empirical results show that: 1) the proposed approach deals with the ordering information better than existing approaches, and 2) the proposed algorithm has an expected linear time complexity for the sequential list, the full binary tree, and the tree of depth 1. It will be interesting to explore whether the proposed Abuam has a worst-case linear (or linearithmic) time complexity for the binary tree. We plan to apply the proposed algorithms to real-world applications with an ordered tree structure. We also plan to extend our proposed approaches to the general hierarchical structure.

Acknowledgments

This work was supported by NSF IIS-0812551, IIS-0953662, MCB-1026710, CCF-1025177, NGA HM1582-08-1-0016, and NSFC 60905035, 61035003.

References

[1] E. Berg, M. Schmidt, M. P. Friedlander, and K. Murphy.
Group sparsity via linear-time projection. Tech. Rep. TR-2008-09, Department of Computer Science, University of British Columbia, Vancouver, July 2008. [2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004. [3] N. Choi, W. Li, and J. Zhu. Variable selection with the strong heredity constraint and its oracle property. Journal of the American Statistical Association, 105:354–364, 2010. [4] Z. Dost´al. Box constrained quadratic programming with proportioning and projections. SIAM Journal on Optimization, 7(3):871–887, 1997. [5] J. Duchi, S. Shalev-Shwartz, Y. Singer, and C. Tushar. Efficient projection onto the ℓ1-ball for learning in high dimensions. In International Conference on Machine Learning, 2008. [6] M. Hamada and C. Wu. Analysis of designed experiments with complex aliasing. Journal of Quality Technology, 24:130–137, 1992. [7] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In International Conference on Machine Learning. 2009. [8] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In International Conference on Machine Learning, 2009. [9] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, arXiv:0904.3523v2, 2009. [10] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In International Conference on Machine Learning, 2010. [11] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In International Conference on Machine Learning, 2010. [12] X. Li, N. Sundarsanam, and D. Frey. Regularities in data from factorial experiments. Complexity, 11:32–45, 2006. [13] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009. [14] J. Liu and J. Ye. Efficient Euclidean projections in linear time. In International Conference on Machine Learning, 2009. [15] J. Liu and J. 
Ye. Moreau-Yosida regularization for grouped tree structure learning. In Advances in Neural Information Processing Systems, 2010. [16] R. Luss, S. Rosset, and M. Shahar. Decomposing isotonic regression for efficiently solving large problems. In Advances in Neural Information Processing Systems, 2010. [17] C. Micchelli, J. Morales, and M. Pontil. A family of penalty functions for structured sparsity. In Advances in Neural Information Processing Systems 23, pages 1612–1623, 2010. [18] J. Nelder. The selection of terms in response-surface models—how strong is the weak-heredity principle? Annals of Applied Statistics, 52:315–318, 1998. [19] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994. [20] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004. [21] P. M. Pardalos and G. Xue. Algorithms for a class of isotonic regression problems. Algorithmica, 23:211–222, 1999. [22] S. Shalev-Shwartz and Y. Singer. Efficient learning of label ranking by soft projections onto polyhedra. Journal of Machine Learning Research, 7:1567–1599, 2006. [23] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58(1):267–288, 1996. [24] M. Yuan, V. R. Joseph, and H. Zou. Structured variable selection and estimation. Annals of Applied Statistics, 3:1738–1757, 2009. [25] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468–3497, 2009. [26] L. W. Zhong and J. T. Kwok. Efficient sparse modeling with automatic feature grouping. In International Conference on Machine Learning, 2011.
Phase transition in the family of p-resistances

Morteza Alamgir, Max Planck Institute for Intelligent Systems, Tübingen, Germany, morteza@tuebingen.mpg.de

Ulrike von Luxburg, Max Planck Institute for Intelligent Systems, Tübingen, Germany, ulrike.luxburg@tuebingen.mpg.de

Abstract

We study the family of p-resistances on graphs for p ≥ 1. This family generalizes the standard resistance distance. We prove that for any fixed graph, for p = 1 the p-resistance coincides with the shortest path distance, for p = 2 it coincides with the standard resistance distance, and for p → ∞ it converges to the inverse of the minimal s-t-cut in the graph. Secondly, we consider the special case of random geometric graphs (such as k-nearest neighbor graphs) when the number n of vertices in the graph tends to infinity. We prove that an interesting phase transition takes place. There exist two critical thresholds p* and p** such that if p < p*, then the p-resistance depends on meaningful global properties of the graph, whereas if p > p**, it only depends on trivial local quantities and does not convey any useful information. We can explicitly compute the critical values: p* = 1 + 1/(d − 1) and p** = 1 + 1/(d − 2), where d is the dimension of the underlying space (we believe that the fact that there is a small gap between p* and p** is an artifact of our proofs). We also relate our findings to Laplacian regularization and suggest to use q-Laplacians as regularizers, where q satisfies 1/p* + 1/q = 1.

1 Introduction

The graph Laplacian is a popular tool for unsupervised and semi-supervised learning problems on graphs. It is used in the context of spectral clustering, as a regularizer for semi-supervised learning, or to compute the resistance distance on graphs. However, it has been observed that under certain circumstances, standard Laplacian-based methods show undesired artifacts. In the semi-supervised learning setting Nadler et al.
(2009) showed that as the number of unlabeled points increases, the solution obtained by Laplacian regularization degenerates to a non-informative function. von Luxburg et al. (2010) proved that as the number of points increases, the resistance distance converges to a meaningless limit function. Independently of these observations, a number of authors suggested to generalize Laplacian methods. The observation was that the "standard" Laplacian methods correspond to a vector space setting with L2-norms, and that it might be beneficial to work in a more general Lp setting for p ≠ 2 instead. See Bühler and Hein (2009) for an application to clustering and Herbster and Lever (2009) for an application to label propagation. In this paper we take up several of these loose ends and connect them. The main object under study in this paper is the family of p-resistances, which is a generalization of the standard resistance distance. Our first major result proves that the family of p-resistances is very rich and contains several special cases. The general picture is that the smaller p is, the more the resistance is concentrated on "short paths". In particular, the case p = 1 corresponds to the shortest path distance in the graph, the case p = 2 to the standard resistance distance, and the case p → ∞ to the inverse s-t-mincut. Second, we study the behavior of p-resistances in the setting of random geometric graphs like lattice graphs, ε-graphs or k-nearest neighbor graphs. We prove that as the sample size n increases, there are two completely different regimes of behavior. Namely, there exist two critical thresholds p* and p** such that if p < p*, the p-resistances convey useful information about the global topology of the data (such as its cluster properties), whereas for p > p** the resistance distances approximate a limit that does not convey any useful information. We can explicitly compute the value of the critical thresholds p* := 1 + 1/(d − 1) and p** := 1 + 1/(d − 2).
This result even holds independently of the exact construction of the geometric graph. Third, as we will see in Section 5, our results also shed light on the Laplacian regularization and semi-supervised learning setting. As there is a tight relationship between p-resistances and graph Laplacians, we can reformulate the artifacts described in Nadler et al. (2009) in terms of p-resistances. Taken together, our results suggest that standard Laplacian regularization should be replaced by q-Laplacian regularization (where q is such that 1/p* + 1/q = 1).

2 Intuition and main results

Consider an undirected, weighted graph G = (V, E) with n vertices. As is standard in machine learning, the edge weights are supposed to indicate similarity of the adjacent points (not distances). Denote the weight of edge e by we ≥ 0 and the degree of vertex u by du. The length of a path γ in the weighted graph is defined as Σ_{e∈γ} 1/we. In the electrical network interpretation, a graph is considered as a network where each edge e ∈ E has resistance re = 1/we. The effective resistance (or resistance distance) R(s, t) between two vertices s and t in the network is defined as the overall resistance one obtains when connecting a unit volt battery to s and t. It can be computed in many ways, but the one most useful for our paper is the following representation in terms of flows (cf. Section IX.1 of Bollobas, 1998):

R(s, t) = min { Σ_{e∈E} re · ie² : i = (ie)_{e∈E} unit flow from s to t }.   (1)

In von Luxburg et al. (2010) it has been proved that in many random graph models, the resistance distance R(s, t) between two vertices s and t converges to the trivial limit expression 1/ds + 1/dt as the size of the graph increases. We now want to present some intuition as to how this problem can be resolved in a natural way. For a subset M ⊂ E of edges we define the contribution of M to the resistance R(s, t) as the part of the sum in (1) that runs over the edges in M. Let i* be a flow minimizing (1).
To explain our intuition we separate this flow into two parts: R(s, t) = R(s, t)_local + R(s, t)_global. The part R(s, t)_local stands for the contribution of i* that stems from the edges in small neighborhoods around s and t, whereas R(s, t)_global is the contribution of the remaining edges (exact definition given below). A useful distance function is supposed to encode the global geometry of the graph, for example its cluster properties. Hence, R(s, t)_global should be the most important part in this decomposition. However, in case of the standard resistance distance the contribution of the global part becomes negligible as n → ∞ (for many different models of graph construction). This effect happens because as the graph increases, there are so many different paths between s and t that once the flow has left the neighborhood of s, electricity can flow "without considerable resistance". The "bottleneck" for the flow is the part that comes from the edges in the local neighborhoods of s and t, because here the flow has to concentrate on relatively few edges. So the dominating part is R(s, t)_local. In order to define a useful distance function, we have to ensure that the global part has a significant contribution to the overall resistance. To this end, we have to avoid that the flow is distributed over "too many paths". In machine learning terms, we would like to achieve a flow that is "sparser" in the number of paths it uses. From this point of view, a natural attempt is to replace the 2-norm optimization problem (1) by a p-norm optimization problem for some p < 2. Based on this intuition, our idea is to replace the squares in the flow problem (1) by a general exponent p ≥ 1 and define the following new distance function on the graph.

Definition 1 (p-resistance) On any weighted graph G, for any p ≥ 1 we define

Rp(s, t) := min { Σ_{e∈E} re · |ie|^p : i = (ie)_{e∈E} unit flow from s to t }.
(*)

As it turns out, our newly defined distance function Rp is closely related but not completely identical to the p-resistance Rp^H defined by Herbster and Lever (2009). A discussion of this issue can be found in Section 6.1.

Figure 1: The s-t-flows minimizing (*) in a two-dimensional grid for different values of p: (a) p = 2, (b) p = 1.33, (c) p = 1.1. The smaller p, the more the flow concentrates along the shortest path.

In toy simulations we can observe that the desired effect of concentrating the flow on fewer paths takes place indeed. In Figure 1 we show how the optimal flow between two points s and t gets propagated through the network. We can see that the smaller p is, the more the flow is concentrated along the shortest path between s and t. We are now going to formally investigate the influence of the parameter p. Our first question is how the family Rp(s, t) behaves as a function of p (that is, on a fixed graph and for fixed s, t). The answer is given in the following theorem.

Theorem 2 (Family of p-resistances) For any weighted graph G the following statements are true:
1. For p = 1, the p-resistance coincides with the shortest path distance on the graph.
2. For p = 2, the p-resistance reduces to the standard resistance distance.
3. For p → ∞, Rp(s, t)^{p−1} converges to 1/m where m is the unweighted s-t-mincut.

This theorem shows that our intuition as outlined above was exactly the right one. The smaller p is, the more flow is concentrated along straight paths. The extreme case is p = 1, which yields the shortest path distance. In the other direction, the larger p is, the more widely distributed the flow is. Moreover, the theorem above suggests that for p close to 1, Rp encodes global information about the part of the graph that is concentrated around the shortest path.
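Definition 1 can be explored numerically on a small graph by treating (*) as a constrained minimization over edge flows. The sketch below is illustrative only (the triangle graph, names, and solver settings are our own, not from the paper); for p = 2 it recovers the classical effective resistance 2/3 of a triangle with unit resistances.

```python
import numpy as np
from scipy.optimize import minimize

# Unit-resistance triangle graph; edge e = (u, v) carries flow i_e from u to v.
edges = [(0, 1), (1, 2), (0, 2)]
n_nodes, s, t = 3, 0, 2

def p_resistance(p):
    """Solve the flow problem (*): min sum_e r_e |i_e|^p over unit s-t flows."""
    B = np.zeros((n_nodes, len(edges)))          # signed incidence matrix
    for e, (u, v) in enumerate(edges):
        B[u, e], B[v, e] = 1.0, -1.0
    demand = np.zeros(n_nodes)
    demand[s], demand[t] = 1.0, -1.0
    # Flow conservation; one node's constraint is redundant and dropped.
    cons = {"type": "eq", "fun": lambda i: B[:-1] @ i - demand[:-1]}
    res = minimize(lambda i: np.sum(np.abs(i) ** p),
                   x0=np.full(len(edges), 0.3), method="SLSQP",
                   constraints=[cons])
    return res.fun

print(round(p_resistance(2.0), 3))  # 0.667: the triangle's effective resistance 2/3
```

For p = 2 the optimal flow splits 1/3 along the two-edge path and 2/3 along the direct edge, exactly as the series/parallel circuit analysis predicts.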
As p increases, global information is still present, but now describes a larger portion of the graph, say, its cluster structure. This is the regime that is most interesting for machine learning. The larger p becomes, the less global information is present in Rp (because flows even use extremely long paths that take long detours), and in the extreme case p → ∞ we are left with nothing but the information about the minimal s-t-cut. In many large graphs, the latter just contains local information about one of the points s or t (see the discussion at the end of this section). An illustration of the different behaviors can be found in Figure 2. The next question, inspired by the results of von Luxburg et al. (2010), is what happens to Rp(s, t) if we fix p but consider a family (Gn)_{n∈N} of graphs such that the number n of vertices in Gn tends to ∞. Let us consider geometric graphs such as k-nearest neighbor graphs or ε-graphs. We now give exact definitions of the local and global contributions to the p-resistance. Let r and R be real numbers that depend on n (they will be specified in Section 4) and C ≥ R/r a constant. We define the local neighborhood N(s) of vertex s as the ball with radius C · r around s. We will see later that the condition C ≥ R/r ensures that N(s) contains at least all vertices adjacent to s. By abuse of notation we also write e ∈ N(s) if both endpoints of edge e are contained in N(s). Let i* be the optimal flow in Problem (*). We define

Rp^local(s) := Σ_{e∈N(s)} re |i*_e|^p,   Rp^local(s, t) := Rp^local(s) + Rp^local(t),   Rp^global(s, t) := Rp(s, t) − Rp^local(s, t).

Our next result conveys that the behavior of the family of p-resistances shows an interesting phase transition. The statements involve a term τn that should be interpreted as the average degree in the graph Gn (exact definition see later).
Figure 2: Heat plots of the Rp distance matrices for a mixture of two Gaussians in R^10, for (a) p = 1, (b) p = 1.11, (c) p = 1.5, and (d) p = 2. We can see that the larger p is, the less pronounced the "global information" about the cluster structure is.

Theorem 3 (Phase transition for p-resistances in large geometric graphs) Consider a family (Gn)_{n∈N} of unweighted geometric graphs on R^d, d > 2, that satisfies some general assumptions (see Section 4 for definitions and details). Fix two vertices s and t. Define the two critical values p* := 1 + 1/(d − 1) and p** := 1 + 1/(d − 2). Then, as n → ∞, the following statements hold:
1. If p < p* and τn is sub-polynomial in n, then Rp^global(s, t)/Rp^local(s, t) → ∞, that is, the global contribution dominates the local one.
2. If p > p** and τn → ∞, then Rp^local(s, t)/Rp^global(s, t) → ∞ and Rp(s, t) → 1/ds^{p−1} + 1/dt^{p−1}, that is, all global information vanishes.

This result is interesting. It shows that there exists a non-trivial point of phase transition in the behavior of p-resistances: if p < p*, then p-resistances are informative about the global topology of the graph, whereas if p > p** the p-resistances converge to trivial distance functions that do not depend on any global properties of the graph. In fact, we believe that p** should be 1 + 1/(d − 1) as well, but our current proof leaves the tiny gap between p* = 1 + 1/(d − 1) and p** = 1 + 1/(d − 2). Theorem 3 is a substantial extension of the work of von Luxburg et al. (2010), in several respects. First, and most importantly, it shows the complete picture of the full range of p ≥ 1, and not just the single snapshot at p = 2.
We can see that there is a range of values for p for which p-resistance distances convey very important information about the global topology of the graph, even in extremely large graphs. Also note how nicely Theorems 2 and 3 fit together. It is well-known that as n → ∞, the shortest path distance corresponding to p = 1 converges to the (geodesic) distance of s and t in the underlying space (Tenenbaum et al., 2000), which of course conveys global information. von Luxburg et al. (2010) proved that the standard resistance distance (p = 2) converges to the trivial local limit. Theorem 3 now identifies the point of phase transition p* between the boundary cases p = 1 and p = 2. Finally, for p → ∞, we know by Theorem 2 that the p-resistance converges to the inverse of the s-t-min-cut. It is widely believed that the minimal s-t cut in geometric graphs converges to the minimum of the degrees of s and t as n → ∞ (even though a formal proof has yet to be presented and we cannot point to any reference). This is in alignment with the result of Theorem 3 that the p-resistance converges to 1/ds^{p−1} + 1/dt^{p−1}. As p → ∞, only the smaller of the two degrees contributes to the local part, which agrees with the limit for the s-t-mincut.

3 Equivalent optimization problems and proof of Theorem 2

In this section we will consider different optimization problems that are inherently related to p-resistances. All graphs in this section are considered to be weighted.

3.1 Equivalent optimization problems

Consider the following two optimization problems for p > 1:

Flow problem: Rp(s, t) := min { Σ_{e∈E} re |ie|^p : i = (ie)_{e∈E} unit flow from s to t }   (*)

Potential problem: Cp(s, t) := min { Σ_{e=(u,v)} |φ(u) − φ(v)|^{1+1/(p−1)} / re^{1/(p−1)} : φ(s) − φ(t) = 1 }   (**)

It is well known that these two problems are equivalent for p = 2 (see Section 1.3 of Doyle and Snell, 2000). We will now extend this result to general p > 1.
Proposition 4 (Equivalent optimization problems) For p > 1, the following statements are true:
1. The flow problem (*) has a unique solution.
2. The solutions of (*) and (**) satisfy R_p(s, t) = (C_p(s, t))^{−(p−1)}.
To prove this proposition, we derive the Lagrange dual of problem (*) and use the homogeneity of the variables to convert it to the form of problem (**). Details can be found in the supplementary material. With this proposition we can now easily see why Theorem 2 is true.
Proof of Theorem 2. Part (1). If we set p = 1, problem (*) coincides with the well-known linear programming formulation of the shortest path problem, see Chapter 12 of Bazaraa et al. (2010). Part (2). For p = 2, we get the well-known formula for the effective resistance. Part (3). For p → ∞, the objective function in the dual problem (**) converges to C_∞(s, t) := min { Σ_{e = (u,v)} |φ(u) − φ(v)|  |  φ(s) − φ(t) = 1 }. This coincides with the well-known linear programming formulation of the min-cut problem in unweighted graphs. Using Proposition 4 we finally obtain
lim_{p → ∞} R_p(s, t)^{1/(p−1)} = lim_{p → ∞} 1/C_p(s, t) = 1/C_∞(s, t) = 1/(s-t-mincut).
4 Geometric graphs and the Proof of Theorem 3
In this section we consider the class of geometric graphs. The vertices of such graphs consist of points X_1, ..., X_n ∈ R^d, and vertices are connected by edges if the corresponding points are "close" (for example, if they are k-nearest neighbors of each other). In most cases, we consider the set of points as drawn i.i.d. from some density on R^d. Consider the following general assumptions.
General Assumptions: Consider a family (G_n)_{n ∈ N} of unweighted geometric graphs where G_n is based on X_1, ..., X_n ∈ M ⊂ R^d, d > 2. We assume that there exist 0 < r ≤ R (depending on n and d) such that the following statements about G_n hold simultaneously for all x ∈ {X_1, ..., X_n}:
1. Distribution of points: For ρ ∈ {r, R}, the number of sample points in B(x, ρ) is of the order Θ(n · ρ^d).
2.
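The limiting behaviors in Theorem 2 are easy to check numerically on a toy graph. The sketch below is our own illustration (not from the paper): on an unweighted triangle with vertices {s, v, t}, a unit s-t flow sends a fraction x over the direct edge and 1 − x over the detour s–v–t, so the flow problem (*) reduces to a one-dimensional minimization.

```python
import numpy as np

def triangle_Rp(p, grid=200001):
    # R_p(s,t) on an unweighted triangle: a unit s-t flow sends x over
    # the direct edge and 1-x over each of the two detour edges, so
    # R_p = min_x |x|^p + 2|1-x|^p  (all resistances r_e = 1).
    x = np.linspace(0.0, 1.0, grid)
    return float(np.min(np.abs(x) ** p + 2 * np.abs(1 - x) ** p))

# p = 1 recovers the shortest path distance (here: 1),
# p = 2 the effective resistance (1 and 2 ohms in parallel: 2/3),
# and R_p^{1/(p-1)} approaches 1/mincut = 1/2 as p grows.
```

For p = 20, for instance, triangle_Rp(20) ** (1/19) is already within about 0.01 of the min-cut limit 1/2.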
Graph connectivity: x is connected to all sample points inside B(x, r), and x is not connected to any sample point outside B(x, R).
3. Geometry of M: M is a compact, connected set such that M \ ∂M is still connected. The boundary ∂M is regular in the sense that there exist positive constants α > 0 and ε_0 > 0 such that if ε < ε_0, then for all points x ∈ ∂M we have vol(B_ε(x) ∩ M) ≥ α vol(B_ε(x)) (where vol denotes the Lebesgue volume). Essentially this condition just excludes the situation where the boundary has arbitrarily thin spikes.
It is a straightforward consequence of these assumptions that there exists some function τ(n) =: τ_n such that r and R are both of the order Θ((τ_n/n)^{1/d}) and all degrees in the graph are of order Θ(τ_n).
4.1 Lower and upper bounds and the proof of Theorem 3
To prove Theorem 3 we need to study the balance between R_p^local and R_p^global. We introduce the shorthand notation
T_1 = Θ( 1 / (n^{p(1−1/d)−1} τ_n^{p(1+1/d)−1}) ),   T_2 = Θ( (1/τ_n^{2(p−1)}) Σ_{k=1}^{1/r} 1/k^{(d−2)(p−1)} ).
Theorem 5 (General bounds on R_p^local and R_p^global) Consider a family (G_n)_{n ∈ N} of unweighted geometric graphs that satisfies the general assumptions. Then the following statements are true for any fixed pair s, t of vertices in G_n:
4C > R_p^local(s, t) ≥ 1/d_s^{p−1} + 1/d_t^{p−1}   and   T_1 + T_2 ≥ R_p^global(s, t) ≥ T_1.
Note that by taking the sum of the two inequalities, this theorem also leads to upper and lower bounds for R_p(s, t) itself. The proof of Theorem 5 consists of several parts. To derive lower bounds on R_p(s, t) we construct a second graph G'_n which is a contracted version of G_n. Lower bounds can then be obtained by Rayleigh's monotonicity principle. To get upper bounds on R_p(s, t) we exploit the fact that the p-resistance in an unweighted graph can be upper bounded by Σ_{e ∈ E} i_e^p, where i is any unit flow from s to t. We construct a particular flow that leads to a good upper bound.
Finally, investigating the properties of the lower and upper bounds, we can derive the individual bounds on R_p^local and R_p^global. Details can be found in the supplementary material. Theorem 3 can now be derived from Theorem 5 by straightforward computations.
4.2 Applications
Our general results can directly be applied to many standard geometric graph models.
The ε-graph. We assume that X_1, ..., X_n have been drawn i.i.d. from some underlying density f on R^d, where M := supp(f) satisfies Part (3) of the general assumptions. Points are connected by unweighted edges in the graph if their Euclidean distances are smaller than ε. Exploiting standard results on ε-graphs (cf. the appendix in von Luxburg et al., 2010), it is easy to see that the general assumptions (1) and (2) are satisfied with probability at least 1 − c_1 n exp(−c_2 n ε^d) (where c_1, c_2 are constants independent of n and d) with r = R = ε and τ_n = Θ(n ε^d). The probability converges to 1 if n → ∞, ε → 0 and n ε^d / log(n) → ∞.
k-nearest neighbor graphs. We assume that X_1, ..., X_n have been drawn i.i.d. from some underlying density f on R^d, where M := supp(f) satisfies Part (3) of the general assumptions. We connect each point to its k nearest neighbors by an undirected, unweighted edge. Exploiting standard results on kNN-graphs (cf. the appendix in von Luxburg et al., 2010), it is easy to see that the general assumptions (1) and (2) are satisfied with probability at least 1 − c_1 k exp(−c_2 k) with r = Θ((k/n)^{1/d}), R = Θ((k/n)^{1/d}), and τ_n = k. The probability converges to 1 if n → ∞, k → ∞, and k/log(n) → ∞.
Lattice graphs. Consider uniform lattices such as the square lattice or triangular lattice in R^d. These lattices have constant degrees, which means that τ_n = Θ(1). If we denote the edge length of the grid by ε, the total number of nodes in the support will be of the order n = Θ(1/ε^d). This means that the general assumptions hold for r = R = ε = Θ(1/n^{1/d}) and τ_n = Θ(1).
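For concreteness, the ε-graph construction just described can be sketched in a few lines (an illustrative brute-force version of our own; real implementations would use spatial data structures for large n):

```python
import numpy as np

def eps_graph(X, eps):
    # Unweighted eps-graph: connect two sample points iff their
    # Euclidean distance is smaller than eps. X has shape (n, d).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = (D < eps).astype(float)
    np.fill_diagonal(A, 0.0)  # no self-loops
    return A
```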
Note that while the lower bounds of Theorem 3 can be applied to the lattice case, our current upper bounds do not hold because they require that τ_n → ∞.
5 Regularization by p-Laplacians
One of the most popular methods for semi-supervised learning on graphs is based on Laplacian regularization. In Zhu et al. (2003) the label assignment problem is formulated as
φ* = argmin_φ C(φ)   subject to   φ(x_i) = y_i,  i = 1, ..., l   (2)
where y_i ∈ {±1} and C(φ) := φ^T L φ is the energy function involving the standard (p = 2) graph Laplacian L. This formulation is appealing and works well for small sample problems. However, Nadler et al. (2009) showed that the method is not well posed when the number of unlabeled data points is very large. In this setting, the solution of the optimization problem converges to a constant function with "spikes" at the labeled points. We now present a simple theorem that connects these findings to those concerning the resistance distance.
Theorem 6 (Laplacian regularization in terms of resistance distance) Consider a semi-supervised classification problem with one labeled point per class: φ(s) = 1, φ(t) = −1. Denote the solution of (2) by φ*, and let v be an unlabeled data point. Then
φ*(v) − φ*(t) > φ*(s) − φ*(v)  ⇔  R_2(v, t) > R_2(v, s).
Proof. It is easy to verify that φ* = L†(e_s − e_t) and R_2(s, t) = (e_s − e_t)^T L† (e_s − e_t), where L† is the pseudo-inverse of the Laplacian matrix L. Therefore we have φ*(v) = e_v^T L† (e_s − e_t) and
φ*(v) − φ*(t) > φ*(s) − φ*(v)
⇔ (e_v − e_t)^T L† (e_s − e_t) > (e_s − e_v)^T L† (e_s − e_t)
(a) ⇔ (e_v − e_t)^T L† (e_v − e_t) > (e_v − e_s)^T L† (e_v − e_s)
⇔ R_2(v, t) > R_2(v, s).
Here, in step (a), we use the symmetry of L† to state that e_v^T L† e_s = e_s^T L† e_v.
What does this theorem mean? We have seen above that in the case p = 2, if n → ∞, then R_2(v, t) ≈ 1/d_v + 1/d_t and R_2(v, s) ≈ 1/d_v + 1/d_s. Hence, the theorem states that if we threshold the function φ* at 0 to separate the two classes, then all the points will be assigned to the labeled vertex with larger degree.
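The two identities used in the proof, φ* = L†(e_s − e_t) and the pseudo-inverse formula for R_2, are easy to verify numerically. The sketch below is our own illustration using numpy, not code from the paper:

```python
import numpy as np

def laplacian_quantities(A, s, t):
    # phi* = L^+ (e_s - e_t), and R_2(u, v) = (e_u - e_v)^T L^+ (e_u - e_v),
    # where L^+ is the pseudo-inverse of the graph Laplacian L = D - A.
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    b = np.zeros(len(A)); b[s], b[t] = 1.0, -1.0
    phi = Lp @ b
    def R2(u, v):
        e = np.zeros(len(A)); e[u], e[v] = 1.0, -1.0
        return float(e @ Lp @ e)
    return phi, R2
```

On a path graph 0–1–2–3 with s = 0, t = 3, the unlabeled point v = 1 satisfies φ*(v) − φ*(t) > φ*(s) − φ*(v) exactly when R_2(v, t) > R_2(v, s), as Theorem 6 predicts.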
Our conjecture is that an analogue of Theorem 6 also holds for general p. For a precise formulation, define the gradient matrix ∇ by
(∇)_{i,j} = φ(i) − φ(j) if i ~ j, and 0 otherwise,
and introduce the matrix norm ||A||_{m,n} = ( Σ_i ( (Σ_j a_{ij}^m)^{1/m} )^n )^{1/n}. Consider q such that 1/p + 1/q = 1. We conjecture that if we used ||∇||_{q,q} as a regularizer for semi-supervised learning, then the corresponding solution φ* would satisfy
φ*(v) − φ*(t) > φ*(s) − φ*(v)  ⇔  R_p(v, t) > R_p(v, s).
That is, the solution of the q-regularized problem would assign labels according to the R_p-distances. In particular, using q-regularization for the value q with 1/q + 1/p* = 1 would resolve the artifacts of Laplacian regularization described in Nadler et al. (2009). It is worth mentioning that this regularization is different from others in the literature. The usual Laplacian regularization term as in Zhu et al. (2003) coincides with ||∇||_{2,2}, Zhou and Schölkopf (2005) use the ||∇||_{2,p} norm, and our conjecture is that the ||∇||_{q,q} norm would be a good candidate. Proving whether this conjecture is right or wrong is a subject of future work.
6 Related families of distance functions on graphs
In this section we sketch some relations between p-resistances and other families of distances.
6.1 Comparing Herbster's and our definition of p-resistances
For p ≥ 2, Herbster and Lever (2009) introduced the following definition of p-resistances:
R^H_{p'}(s, t) := 1 / C^H_{p'}(s, t)   with   C^H_{p'}(s, t) := min { Σ_{e = (u,v)} |φ(u) − φ(v)|^{p'} / r_e  |  φ(s) − φ(t) = 1 }.
In Section 3.1 we have seen that the potential and flow optimization problems are duals of each other. Based on this derivation we believe that the natural way of relating R^H and C^H would be to replace the p' in Herbster's potential formulation by q' such that 1/p' + 1/q' = 1. That is, one would have to consider C^H_{q'} and then define R̂^H_{p'} := 1/C^H_{q'}.
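For reference, the mixed matrix norm ||A||_{m,n} used in the conjecture can be computed directly. This is a small helper of our own for illustration (entries are taken in absolute value, as is standard for norms):

```python
import numpy as np

def mixed_norm(A, m, n):
    # ||A||_{m,n} = ( sum_i ( (sum_j |a_ij|^m)^{1/m} )^n )^{1/n}:
    # an l_m norm over each row, followed by an l_n norm over the rows.
    row_norms = (np.abs(A) ** m).sum(axis=1) ** (1.0 / m)
    return float((row_norms ** n).sum() ** (1.0 / n))
```

Note that ||A||_{2,2} is the Frobenius norm; applied to the gradient matrix ∇, its square is the usual Laplacian energy Σ (φ(i) − φ(j))².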
In particular, reducing Herbster's p' towards 1 has the same influence as increasing our p to infinity, and makes R^H_{p'} converge to the minimal s-t-cut. To ease further comparison, let us assume for now that we use "our" p in the definition of Herbster's resistances. Then one can see, by similar arguments as in Section 3.1, that R^H_p can be rewritten as
R^H_p(s, t) := min { Σ_{e ∈ E} r_e^{p−1} |i_e|^p  |  i = (i_e)_{e ∈ E} unit flow from s to t }.   (H)
Now it is easy to see that the main difference between Herbster's definition (H) and our definition (*) is that (H) takes the power p − 1 of the resistances r_e, while we keep the resistances with power 1. In many respects, R_p and R^H_p have properties that are similar to each other: they satisfy slightly different versions (with different powers or weights) of the triangle inequality, Rayleigh's monotonicity principle, the laws for resistances in series and in parallel, and so on. We will not discuss further details due to space constraints.
6.2 Other families of distances
There also exist other families of distances on graphs that share some of the properties of p-resistances. We will only discuss the ones that are most related to our work; for more references see von Luxburg et al. (2010). The first such family was introduced by Yen et al. (2008), where the authors use a statistical physics approach to reduce the influence of long paths on the distance. This family is parameterized by a parameter θ, and contains the shortest path distance at one end (θ → ∞) and the standard resistance distance at the other (θ → 0). However, the construction is somewhat ad hoc, and the resulting distances cannot be computed in closed form and do not even satisfy the triangle inequality. A second family is that of "logarithmic forest distances" by Chebotarev (2011).
Even though its derivation is complicated, it has a closed form solution and can be interpreted intuitively: the contribution of a path to the overall distance is "discounted" by a factor (1/α)^l, where l is the length of the path. For α → 0, the logarithmic forest distance converges to the shortest path distance; for α → ∞, it converges to the resistance distance. At the time of writing this paper, the major disadvantage of both the families introduced by Yen et al. (2008) and Chebotarev (2011) is that it is unknown how their distances behave as the size of the graph increases. It is clear that at one end (shortest path) they convey global information, whereas at the other end (resistance distance) they depend on local quantities only when n → ∞. But what happens for all intermediate parameter values? Do all of them lead to meaningless distances as n → ∞, or is there some interesting phase transition as well? As long as this question has not been answered, one should be careful when using these distances. In particular, it is unclear how the parameters (θ and α, respectively) should be chosen, and it is hard to get an intuition about this.
7 Conclusions
We proved that the family of p-resistances has a wide range of behaviors. In particular, for p = 1 it coincides with the shortest path distance, for p = 2 with the standard resistance distance, and for p → ∞ it is related to the minimal s-t-cut. Moreover, an interesting phase transition takes place: in large geometric graphs such as k-nearest neighbor graphs, the p-resistance is governed by meaningful global properties as long as p < p* := 1 + 1/(d−1), whereas it converges to the trivial local quantity 1/d_s^{p−1} + 1/d_t^{p−1} if p > p** := 1 + 1/(d−2). Our suggestion for practice is to use p-resistances with p ≈ p*. For this value of p, the p-resistances encode those global properties of the graph that are most important for machine learning, namely the cluster structure of the graph.
Our findings are interesting on their own, but also help in explaining several artifacts discussed in the literature. They go much beyond the work of von Luxburg et al. (2010) (which only studied the case p = 2) and lead to an intuitive explanation of the artifacts of Laplacian regularization discovered in Nadler et al. (2009). An interesting line of future research will be to connect our results to the ones about p-eigenvectors of p-Laplacians (Bühler and Hein, 2009). For p = 2, the resistance distance can be expressed in terms of the eigenvalues and eigenvectors of the Laplacian. We are curious to see whether a refined theory of p-eigenvalues can lead to similarly tight relationships for general values of p.
Acknowledgements
We would like to thank the anonymous reviewers who discovered an inconsistency in our earlier proof, and Bernhard Schölkopf for helpful discussions.
References
M. Bazaraa, J. Jarvis, and H. Sherali. Linear Programming and Network Flows. Wiley-Interscience, 2010.
B. Bollobas. Modern Graph Theory. Springer, 1998.
T. Bühler and M. Hein. Spectral clustering based on the graph p-Laplacian. In Proceedings of the International Conference on Machine Learning (ICML), pages 81–88, 2009.
P. Chebotarev. A class of graph-geodetic distances generalizing the shortest path and the resistance distances. Discrete Applied Mathematics, 159:295–302, 2011.
P. G. Doyle and J. L. Snell. Random walks and electric networks, 2000. URL http://www.citebase.org/abstract?id=oai:arXiv.org:math/0001057.
M. Herbster and G. Lever. Predicting the labelling of a graph via minimum p-seminorm interpolation. In Conference on Learning Theory (COLT), 2009.
B. Nadler, N. Srebro, and X. Zhou. Semi-supervised learning with the graph Laplacian: The limit of infinite unlabelled data. In Advances in Neural Information Processing Systems (NIPS), 2009.
J. Tenenbaum, V. de Silva, and J. Langford.
Supplementary material to "A Global Geometric Framework for Nonlinear Dimensionality Reduction". Science, 290:2319–2323, 2000. URL http://isomap.stanford.edu/BdSLT.pdf.
U. von Luxburg, A. Radl, and M. Hein. Getting lost in space: Large sample analysis of the commute distance. In Neural Information Processing Systems (NIPS), 2010.
L. Yen, M. Saerens, A. Mantrach, and M. Shimbo. A family of dissimilarity measures between nodes generalizing both the shortest-path and the commute-time distances. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–793, 2008.
D. Zhou and B. Schölkopf. Regularization on discrete spaces. In DAGM-Symposium, pages 361–368, 2005.
X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.
2011
Learning to Learn with Compound HD Models Ruslan Salakhutdinov Department of Statistics, University of Toronto rsalakhu@utstat.toronto.edu Joshua B. Tenenbaum Brain and Cognitive Sciences, MIT jbt@mit.edu Antonio Torralba CSAIL, MIT torralba@mit.edu Abstract We introduce HD (or "Hierarchical-Deep") models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian models. Specifically we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples, by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets. 1 Introduction "Learning to learn", or the ability to learn abstract representations that support transfer to novel but related tasks, lies at the core of many problems in computer vision, natural language processing, cognitive science, and machine learning. In typical applications of machine classification algorithms today, learning curves are measured in tens, hundreds or thousands of training examples. For human learners, however, just one or a few examples are often sufficient to grasp a new category and make meaningful generalizations to novel instances [25, 16]. The architecture we describe here takes a step towards this "one-shot learning" ability by learning several forms of abstract knowledge that support transfer of useful representations from previously learned concepts to novel ones.
We call our architectures compound HD models, where "HD" stands for "Hierarchical-Deep", because they are derived by composing hierarchical nonparametric Bayesian models with deep networks, two influential approaches from the recent unsupervised learning literature with complementary strengths. Recently introduced deep learning models, including Deep Belief Networks [5], Deep Boltzmann Machines [14], deep autoencoders [10], and others [12, 11], have been shown to learn useful distributed feature representations for many high-dimensional datasets. The ability to automatically learn in multiple layers allows deep models to construct sophisticated domain-specific features without the need to rely on precise human-crafted input representations, which is increasingly important with the proliferation of data sets and application domains. While the features learned by deep models can enable more rapid and accurate classification learning, deep networks themselves are not well suited to one-shot learning of novel classes. All units and parameters at all levels of the network are engaged in representing any given input and are adjusted together during learning. In contrast, we argue that one-shot learning of new classes will be easier in architectures that can explicitly identify only a small number of degrees of freedom (latent variables and parameters) that are relevant to the new concept being learned, and thereby achieve more appropriate and flexible transfer of learned representations to new tasks. This ability is the hallmark of hierarchical Bayesian (HB) models, recently proposed in computer vision, statistics, and cognitive science [7, 25, 4, 13] for learning to learn from few examples. Unlike deep networks, these HB models explicitly represent category hierarchies that admit sharing the appropriate abstract knowledge about the new class's parameters via a prior abstracted from related classes.
HB approaches, however, have complementary weaknesses relative to deep networks. They typically rely on domain-specific hand-crafted features [4, 1] (e.g. GIST, SIFT features in computer vision, MFCC features in speech perception domains). Committing to the a-priori defined feature representations, instead of learning them from data, can be detrimental. Moreover, many HB approaches often assume a fixed hierarchy for sharing parameters [17, 3] instead of learning the hierarchy in an unsupervised fashion. In this work we investigate compound HD (hierarchical-deep) architectures that integrate these deep models with structured hierarchical Bayesian models. In particular, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine (DBM), coming to represent both a layered hierarchy of increasingly abstract features, and a tree-structured hierarchy of classes. Our model depends minimally on domain-specific representations and achieves state-of-the-art one-shot learning performance by unsupervised discovery of three components: (a) low-level features that abstract from the raw high-dimensional sensory input (e.g. pixels, or 3D joint angles); (b) high-level part-like features that express the distinctive perceptual structure of a specific class, in terms of class-specific correlations over low-level features; and (c) a hierarchy of super-classes for sharing abstract knowledge among related classes. We evaluate the compound HDP-DBM model on three different perceptual domains. We also illustrate the advantages of having a full generative model, extending from highly abstract concepts all the way down to sensory inputs: we can not only generalize class labels but also synthesize new examples in novel classes that look reasonably natural, and we can significantly improve classification performance by learning parameters at all levels jointly by maximizing a joint log-probability score. 
2 Deep Boltzmann Machines (DBMs)
A Deep Boltzmann Machine is a network of symmetrically coupled stochastic binary units. It contains a set of visible units v ∈ {0, 1}^D, and a sequence of layers of hidden units h1 ∈ {0, 1}^F1, h2 ∈ {0, 1}^F2, ..., hL ∈ {0, 1}^FL. There are connections only between hidden units in adjacent layers, as well as between visible and hidden units in the first hidden layer. Consider a DBM with three hidden layers1 (i.e. L = 3). The probability of a visible input v is:
P(v; ψ) = (1/Z(ψ)) Σ_h exp( Σ_{ij} W(1)_ij v_i h1_j + Σ_{jl} W(2)_jl h1_j h2_l + Σ_{lm} W(3)_lm h2_l h3_m ),   (1)
where h = {h1, h2, h3} is the set of hidden units, and ψ = {W(1), W(2), W(3)} are the model parameters, representing visible-to-hidden and hidden-to-hidden symmetric interaction terms.
Approximate Learning: Exact maximum likelihood learning in this model is intractable, but efficient approximate learning of DBMs can be carried out by using mean-field inference to estimate data-dependent expectations, and an MCMC-based stochastic approximation procedure to approximate the model's expected sufficient statistics [14]. In particular, consider approximating the true posterior P(h|v; ψ) with a fully factorized approximating distribution over the three sets of hidden units:
Q(h|v; µ) = Π_{j=1}^{F1} Π_{k=1}^{F2} Π_{m=1}^{F3} q(h1_j|v) q(h2_k|v) q(h3_m|v),
where µ = {µ1, µ2, µ3} are the mean-field parameters with q(hl_i = 1) = µl_i for l = 1, 2, 3. In this case, we can write down the variational lower bound on the log-probability of the data, which takes a particularly simple form:
log P(v; ψ) ≥ v⊤W(1)µ1 + µ1⊤W(2)µ2 + µ2⊤W(3)µ3 − log Z(ψ) + H(Q),   (2)
where H(·) is the entropy functional. Learning proceeds by finding the value of µ that maximizes this lower bound for the current value of the model parameters ψ, which results in a set of mean-field fixed-point equations.
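The mean-field fixed-point equations alternate logistic updates over the three layers, each layer conditioning on the current estimates of its neighbors. Below is a minimal numpy sketch of this inference loop for a binary three-layer DBM (our own illustration with arbitrary weights, not the authors' code; damping and convergence checks are omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(v, W1, W2, W3, n_iters=50):
    # Fixed-point iteration for the fully factorized posterior Q(h|v)
    # of a 3-layer binary DBM. Shapes: v (D,), W1 (D,F1), W2 (F1,F2),
    # W3 (F2,F3); mu_l holds q(h^l_i = 1 | v).
    mu1 = np.full(W1.shape[1], 0.5)
    mu2 = np.full(W2.shape[1], 0.5)
    mu3 = np.full(W3.shape[1], 0.5)
    for _ in range(n_iters):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)   # layer 1: input from v and h2
        mu2 = sigmoid(mu1 @ W2 + mu3 @ W3.T)  # layer 2: input from h1 and h3
        mu3 = sigmoid(mu2 @ W3)               # layer 3: input from h2 only
    return mu1, mu2, mu3
```

With all weights zero the iteration stays at the maximum-entropy point µ = 0.5, as expected from the update equations.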
Given the variational parameters µ, the model parameters ψ are then updated to maximize the variational bound using stochastic approximation (for details see [14, 22, 26]).
Multinomial DBMs: To allow DBMs to express more information and introduce more structured hierarchical priors, we will use a conditional multinomial distribution to model the activities of the top-level units. Specifically, we will use M softmax units, each with "1-of-K" encoding (so that each unit contains a set of K weights). All M separate softmax units will share the same set of weights, connecting them to binary hidden units at the lower level (Fig. 1). A key observation is that M separate copies of softmax units that all share the same set of weights can be viewed as a single multinomial unit that is sampled M times [15, 19]. A pleasing property of using softmax units is that the mathematics underlying the learning algorithm for binary-binary DBMs remains the same.
1 For clarity, we use three hidden layers. Extensions to models with more than three layers are trivial.
Figure 1: Left: Multinomial DBM model: the top layer represents M softmax hidden units h3, which share the same set of weights. Middle: A different interpretation: M softmax units are replaced by a single multinomial unit which is sampled M times. Right: Hierarchical Dirichlet Process prior over the states of h3.
3 Compound HDP-DBM model
After a DBM model has been learned, we have an undirected model that defines the joint distribution P(v, h1, h2, h3).
One way to express what has been learned is the conditional model P(v, h1, h2|h3) and a prior term P(h3). We can therefore rewrite the variational bound as:
log P(v) ≥ Σ_{h1,h2,h3} Q(h|v; µ) log P(v, h1, h2|h3) + H(Q) + Σ_{h3} Q(h3|v; µ) log P(h3).   (3)
This particular decomposition lies at the core of the greedy recursive pretraining algorithm: we keep the learned conditional model P(v, h1, h2|h3), but maximize the variational lower bound of Eq. 3 with respect to the last term [5]. Instead of adding an additional undirected layer (e.g. a restricted Boltzmann machine) to model P(h3), we can place a hierarchical Dirichlet process prior over h3, which will allow us to learn category hierarchies and, more importantly, useful representations of classes that contain few training examples. The part we keep, P(v, h1, h2|h3), represents a conditional DBM model, which can be viewed as a two-layer DBM but with bias terms given by the states of h3:
P(v, h1, h2|h3) = (1/Z(ψ, h3)) exp( Σ_{ij} W(1)_ij v_i h1_j + Σ_{jl} W(2)_jl h1_j h2_l + Σ_{lm} W(3)_lm h2_l h3_m ).   (4)
3.1 A Hierarchical Bayesian Topic Prior
In a typical hierarchical topic model, we observe a set of N documents, each of which is modeled as a mixture over topics that are shared among documents. Let there be K words in the vocabulary. A topic t is a discrete distribution over the K words with probability vector φ_t. Each document n has its own distribution over topics given by the probabilities θ_n. In our compound HDP-DBM model, we will use a hierarchical topic model as a prior over the activities of the DBM's top-level features. Specifically, the term "document" will refer to the top-level multinomial unit h3, and the M "words" in the document will represent the M samples, or active DBM top-level features, generated by this multinomial unit. Words in each document are drawn by choosing a topic t with probability θ_nt, and then choosing a word w with probability φ_tw.
We will often refer to topics as our learned higher-level features, each of which defines a topic-specific distribution over the DBM's h3 features. Let h3_in be the ith word in document n, and x_in be its topic:
θ_n|π ∼ Dir(απ),  φ_t|τ ∼ Dir(βτ),  x_in|θ_n ∼ Mult(θ_n),  h3_in|x_in, φ_{x_in} ∼ Mult(φ_{x_in}),   (5)
where π is the global distribution over topics, τ is the global distribution over the K words, and α and β are concentration parameters.
Let us further assume that we are presented with a fixed two-level category hierarchy. Suppose that N documents, or objects, are partitioned into C basic-level categories (e.g. cow, sheep, car). We represent such a partition by a vector zb of length N, each entry of which is zb_n ∈ {1, ..., C}. We also assume that our C basic-level categories are partitioned into S super-categories (e.g. animal, vehicle), represented by a vector zs of length C, with zs_c ∈ {1, ..., S}. These partitions define a fixed two-level tree hierarchy (see Fig. 1). We will relax this assumption later. The hierarchical topic model can be readily extended to modeling the above hierarchy. For each document n that belongs to basic category c, we place a common Dirichlet prior over θ_n with parameters π(1)_c. The Dirichlet parameters π(1) are themselves drawn from a Dirichlet prior with parameters π(2), and so on (see Fig. 1). Specifically, we define the following prior over h3:
π(2)_s | π(3)_g ∼ Dir(α(3) π(3)_g), for each super-category s = 1, ..., S   (6)
π(1)_c | π(2)_{zs_c} ∼ Dir(α(2) π(2)_{zs_c}), for each basic category c = 1, ..., C
θ_n | π(1)_{zb_n} ∼ Dir(α(1) π(1)_{zb_n}), for each document n = 1, ..., N
x_in | θ_n ∼ Mult(θ_n), for each word i = 1, ..., M
φ_t | τ ∼ Dir(βτ),  h3_in | x_in, φ_{x_in} ∼ Mult(φ_{x_in}),
where π(3)_g is the global distribution over topics, π(2)_s is the super-category-specific and π(1)_c the class-specific distribution over topics, or higher-level features. These high-level features, in turn, define topic-specific distributions over h3 features, or "words", in the DBM model.
For a fixed number of topics T, the above model represents a hierarchical extension of LDA. We typically do not know the number of topics a-priori. It is therefore natural to consider a nonparametric extension based on the HDP model [21], which allows for a countably infinite number of topics. In the standard hierarchical Dirichlet process notation, we have:
G(3)_g ∼ DP(γ, Dir(βτ)),  G(2)_s ∼ DP(α(3), G(3)_g),  G(1)_c ∼ DP(α(2), G(2)_{zs_c}),   (7)
G_n ∼ DP(α(1), G(1)_{zb_n}),  φ*_in | G_n ∼ G_n,  h3_in | φ*_in ∼ Mult(φ*_in),
where Dir(βτ) is the base distribution, and each φ* is a factor associated with a single observation h3_in. Making use of the topic index variables x_in, we denote φ*_in = φ_{x_in} (see Eq. 6). Using a stick-breaking representation we can write:
G(3)_g(φ) = Σ_{t=1}^∞ π(3)_{gt} δ_{φ_t},  G(2)_s(φ) = Σ_{t=1}^∞ π(2)_{st} δ_{φ_t},  G(1)_c(φ) = Σ_{t=1}^∞ π(1)_{ct} δ_{φ_t},  G_n(φ) = Σ_{t=1}^∞ θ_{nt} δ_{φ_t},
which represent sums of point masses. We also place Gamma priors over the concentration parameters as in [21]. The overall generative model is shown in Fig. 1. To generate a sample we first draw M words, or activations of the top-level features, from the HDP prior over h3 given by Eq. 7. Conditioned on h3, we sample the states of v from the conditional DBM model given by Eq. 4.
3.2 Modeling the number of super-categories
So far we have assumed that our model is presented with a two-level partition z = {zs, zb}. If, however, we are not given any level-1 or level-2 category labels, we need to infer the distribution over possible category structures. We place a nonparametric two-level nested Chinese restaurant prior (CRP) [2] over z, which defines a prior over tree structures and is flexible enough to learn arbitrary hierarchies. The main building block of the nested CRP is the Chinese restaurant process, a distribution on partitions of integers.
Imagine a process by which customers enter a restaurant with an unbounded number of tables, where the nth customer occupies a table k drawn from:
P(z_n = k | z_1, ..., z_{n−1}) = n_k / (n − 1 + η) if n_k > 0, and η / (n − 1 + η) if k is new,   (8)
where n_k is the number of previous customers at table k and η is the concentration parameter. The nested CRP, nCRP(η), extends the CRP to a nested sequence of partitions, one for each level of the tree. In this case each observation n is first assigned to the super-category zs_n using Eq. 8. Its assignment to the basic-level category zb_n, which is placed under the super-category zs_n, is again recursively drawn from Eq. 8. We also place a Gamma prior Γ(1, 1) over η. The proposed model thus allows for both a nonparametric prior over a potentially unbounded number of global topics, or higher-level features, and a nonparametric prior that allows learning an arbitrary tree taxonomy.
4 Inference
Inferences about model parameters at all levels of the hierarchy can be performed by MCMC. When the tree structure z of the model is not given, the inference process alternates between fixing z while sampling the space of model parameters, and vice versa.
Sampling HDP parameters: Given the category assignment vectors z and the states of the top-level DBM features h3, we use the posterior representation sampler of [20]. In particular, the HDP sampler maintains the stick-breaking weights {θ_n}_{n=1}^N and {π(1)_c, π(2)_s, π(3)_g}, and the topic indicator variables x (the parameters φ can be integrated out). The sampler alternates between: (a) sampling cluster indices x_in using Gibbs updates in the Chinese restaurant franchise (CRF) representation of the HDP; (b) sampling the weights at all three levels conditioned on x using the usual posterior of a DP2.
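The CRP conditional in Eq. 8 is straightforward to simulate. The sketch below (an illustration of our own, not the authors' sampler) computes the table probabilities for the next customer and draws an assignment:

```python
import numpy as np

def crp_probs(z, eta):
    # Table probabilities for the next customer under CRP(eta), Eq. (8):
    # existing table k with prob n_k / (n-1+eta), a new table with
    # prob eta / (n-1+eta), where n-1 = len(z) customers seated so far.
    counts = np.bincount(np.asarray(z, dtype=int))
    probs = np.append(counts, eta).astype(float)
    return probs / probs.sum()

def crp_assign(z, eta, rng):
    # Sample a table index; len(crp_probs(z, eta)) - 1 means "new table".
    probs = crp_probs(z, eta)
    return int(rng.choice(len(probs), p=probs))
```

For example, with three seated customers z = [0, 0, 1] and η = 1, the next customer joins table 0 with probability 2/4, table 1 with probability 1/4, and opens a new table with probability 1/4.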
Sampling category assignments z: Given the current instantiation of the stick-breaking weights, a defining property of the DP gives, for each input n,

(\theta_{1,n}, \ldots, \theta_{T,n}, \theta_{\mathrm{new},n}) \sim \mathrm{Dir}\big(\alpha^{(1)}\pi^{(1)}_{z_n,1}, \ldots, \alpha^{(1)}\pi^{(1)}_{z_n,T}, \alpha^{(1)}\pi^{(1)}_{z_n,\mathrm{new}}\big). \quad (9)

Combining this likelihood term with the CRP prior (Eq. 8), the posterior over the category assignment can be calculated as

p(z_n \mid \theta_n, z_{-n}, \pi^{(1)}) \propto p(\theta_n \mid \pi^{(1)}, z_n) \, p(z_n \mid z_{-n}), \quad (10)

where z_{-n} denotes the variables z for all observations other than n. When computing the probability of placing θ_n under a newly created category, the category's parameters are sampled from the prior.

Sampling the DBM's hidden units: Given the states of the DBM's top-level multinomial unit h^3, conditional samples from P(h^1_n, h^2_n | h^3_n, v_n) can be obtained by running a Gibbs sampler that alternates between sampling the states of h^1_n independently given h^2_n, and vice versa. Conditioned on the topic assignments x_{in} and h^2_n, the states of the multinomial unit h^3_n for each input n are sampled using the Gibbs conditionals

P(h^3_{in} \mid h^2_n, h^3_{-in}, x_n) \propto P(h^2_n \mid h^3_n) \, P(h^3_{in} \mid x_{in}), \quad (11)

where the first term is given by a product of logistic functions (see Eq. 4),

P(h^2 \mid h^3) = \prod_l P(h^2_l \mid h^3), \quad \text{with} \quad P(h^2_l = 1 \mid h^3) = \frac{1}{1 + \exp\!\big(-\sum_m W^{(3)}_{lm} h^3_m\big)}, \quad (12)

and the second term P(h^3_{in} | x_{in}) is given by the multinomial Mult(φ_{x_{in}}) (see Eq. 7; in our conjugate setting, the parameters φ can be integrated out).

Fine-tuning the DBM: More importantly, conditioned on h^3, we can further fine-tune the low-level DBM parameters ψ = {W^{(1)}, W^{(2)}, W^{(3)}} by applying approximate maximum likelihood learning (see Section 2) to the conditional DBM model of Eq. 4. For the stochastic approximation algorithm, since the partition function depends on the states of h^3, we maintain one "persistent" Markov chain per data point (for details see [22, 14]).

Making predictions: Given a test input v_t, we can quickly infer the approximate posterior over h^3_t using the mean-field of Eq.
2, followed by running the full Gibbs sampler to obtain approximate samples from the posterior over the category assignments. In practice, for faster inference, we fix the learned topics φ_t and approximate the marginal likelihood that h^3_t belongs to category z_t by assuming that the document-specific DP can be well approximated by the class-specific DP,3 G_t ≈ G^{(1)}_{z_t} (see Fig. 1):

P(h^3_t \mid z_t, G^{(1)}, \phi) = \int P(h^3_t \mid \phi, G_t) \, P(G_t \mid G^{(1)}_{z_t}) \, dG_t \approx P(h^3_t \mid \phi, G^{(1)}_{z_t}). \quad (13)

Combining this likelihood term with the nCRP prior P(z_t | z_{-t}) (Eq. 8) allows us to efficiently infer an approximate posterior over category assignments.4

2 Conditioned on the draw of the super-class DP G^{(2)}_s and the state of the CRF, the posteriors over G^{(1)}_c become independent. We can easily speed up inference by sampling from these conditionals in parallel.
3 We note that G^{(1)}_{z_t} = E[G_t | G^{(1)}_{z_t}].
4 In all of our experimental results, computing this approximate posterior takes a fraction of a second.

Figure 2: A random subset of the 1st- and 2nd-layer DBM features, and higher-level class-sensitive HDP features/topics.

Figure 3: A typical partition of the 100 basic-level categories: 1. bed, chair, clock, couch, dinosaur, lawn mower, table, telephone, television, wardrobe; 2. bus, house, pickup truck, streetcar, tank, tractor, train; 3. crocodile, kangaroo, lizard, snake, spider, squirrel; 4. hamster, mouse, rabbit, raccoon, possum, bear; 5. apple, orange, pear, sunflower, sweet pepper; 6. baby, boy, girl, man, woman; 7. dolphin, ray, shark, turtle, whale; 8. otter, porcupine, shrew, skunk; 9. beaver, camel, cattle, chimpanzee, elephant; 10. fox, leopard, lion, tiger, wolf; 11. maple tree, oak tree, pine tree, willow tree; 12. flatfish, seal, trout, worm; 13. butterfly, caterpillar, snail; 14. bee, crab, lobster; 15. bridge, castle, road, skyscraper; 16. bicycle, keyboard, motorcycle, orchid, palm tree; 17. bottle, bowl, can, cup, lamp; 18. cloud, plate, rocket; 19. mountain, plain, sea; 20. poppy, rose, tulip; 21. aquarium fish, mushroom; 22. beetle, cockroach; 23. forest.

5 Experiments

We present experimental results on the CIFAR-100 [8], handwritten character [9], and human motion capture recognition datasets. For all datasets, we first pretrain a DBM model in unsupervised fashion on raw sensory input (e.g. pixels, or 3D joint angles), followed by fitting an HDP prior, which is run for 200 Gibbs sweeps. We then run 200 additional Gibbs steps in order to fine-tune the parameters of the entire compound HDP-DBM model; this was sufficient to reach convergence and obtain good performance. Across all datasets, we also assume that the basic-level category labels are given, but no super-category labels are available. The training set includes many examples of familiar categories but only a few examples of a novel class, and our goal is to generalize well on the novel class. In all experiments we compare the performance of the HDP-DBM to the following alternative models: stand-alone Deep Boltzmann Machines, Deep Belief Networks [5], a "Flat HDP-DBM" model that always uses a single super-category, SVMs, and k-NN. The Flat HDP-DBM approach could potentially identify a set of useful high-level features common to all categories. Finally, to evaluate the performance of DBMs (and DBNs), we follow [14]. Note that using HDPs on top of raw sensory input (i.e. pixels, or even image-specific GIST features) performs far worse than HDP-DBM.

5.1 CIFAR-100 dataset

The CIFAR-100 image dataset [8] contains 50,000 training and 10,000 test images of 100 object categories (100 per class), with 32 × 32 × 3 RGB pixels. Extreme variability in scale, viewpoint, illumination, and cluttered background makes the object recognition task for this dataset quite difficult. Similar to [8], in order to learn good generic low-level features, we first train a two-layer DBM in completely unsupervised fashion using 4 million tiny images5 [23]. We use a conditional Gaussian distribution to model observed pixel values [8, 6].
The first DBM layer contained 10,000 binary hidden units, and the second layer contained M=1000 softmax units, each defining a distribution over 10,000 second-layer features.6 We then fit an HDP prior over h2 to the 100 object classes. Fig. 2 displays a random subset of the 1st- and 2nd-layer DBM features, as well as the higher-level class-sensitive features, or topics, learned by the HDP model. To visualize a particular higher-level feature, we first sample M words from a fixed topic φt, followed by sampling RGB pixel values from the conditional DBM model. While the DBM features capture mostly low-level structure, including edges and corners, the HDP features tend to capture higher-level structure, including contours, shapes, color components, and surface boundaries. More importantly, features at all levels of the hierarchy evolve without incorporating any image-specific priors. Fig. 3 shows a typical partition over the 100 classes that our model learns, with many super-categories containing semantically similar classes. We next illustrate the ability of the HDP-DBM to generalize from a single training example of a "pear" class. We trained the model on 99 classes containing 500 training images each, but only one training example of the "pear" class. Fig. 4 shows the kind of transfer our model is performing: the model discovers that pears are like apples and oranges, and not like other classes of images, such as dolphins, that reside in very different parts of the hierarchy. Hence the novel category can inherit

5 The dataset contains random images of natural scenes downloaded from the web.
6 We also experimented with a 3-layer DBM model, as well as various softmax parameters: M = 500 and M = 2000. The difference in performance was not significant.
the prior distribution over similar high-level shape and color features, allowing the HDP-DBM to generalize considerably better to new instances of the "pear" class.

Figure 4: Left: Training examples along with the eight most probable topics φt, ordered by hand. Right: Performance of HDP-DBM, DBM, and SVMs for all object classes when learning with 3 examples (2*AUROC-1 vs. sorted class index, for the Characters and CIFAR datasets). Object categories are sorted by their performance.

Table 1: Classification performance on the test set using 2*AUROC-1. Columns give the number of training examples of the novel class. The results in bold correspond to ROCs that are statistically indistinguishable from the best (the difference is not statistically significant).

                   CIFAR                          Handwritten Characters    Motion Capture
Model              1     3     5     10    50     1     3     5     10      1     3     5     10    50
Tuned HDP-DBM      0.36  0.41  0.46  0.53  0.62   0.67  0.78  0.87  0.93    0.67  0.84  0.90  0.93  0.96
HDP-DBM            0.34  0.39  0.45  0.52  0.61   0.65  0.76  0.85  0.92    0.66  0.82  0.88  0.93  0.96
Flat HDP-DBM       0.27  0.37  0.42  0.50  0.61   0.58  0.73  0.82  0.89    0.63  0.79  0.86  0.91  0.96
DBM                0.26  0.36  0.41  0.48  0.61   0.57  0.72  0.81  0.89    0.61  0.79  0.85  0.91  0.95
DBN                0.25  0.33  0.37  0.45  0.60   0.51  0.72  0.81  0.89    0.61  0.79  0.84  0.92  0.96
SVM                0.18  0.27  0.31  0.38  0.61   0.41  0.66  0.77  0.86    0.54  0.78  0.84  0.91  0.96
1-NN               0.17  0.18  0.19  0.20  0.32   0.43  0.65  0.73  0.81    0.58  0.75  0.81  0.88  0.93
GIST               0.27  0.31  0.33  0.39  0.58

Table 1 quantifies performance using the area under the ROC curve (AUROC) for classifying 10,000 test images as belonging to the novel vs. all other 99 classes (we report 2*AUROC-1, so zero corresponds to the classifier that makes random predictions).
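The reported metric can be sketched as follows (an illustrative computation of 2*AUROC-1 via the Mann-Whitney statistic; this is the standard rank-based estimator, not code from the paper):

```python
def gini_from_scores(pos_scores, neg_scores):
    """2*AUROC - 1 via the Mann-Whitney statistic: 0 for a random
    classifier, 1 for perfect separation of novel-class positives."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    auroc = (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))
    return 2.0 * auroc - 1.0

assert gini_from_scores([0.9, 0.8], [0.1, 0.2]) == 1.0   # perfect ranking
assert gini_from_scores([0.5], [0.5]) == 0.0             # chance level
```

The naive double loop is O(|pos| * |neg|); sorting-based implementations bring this down to O(n log n) for large test sets.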
The results are averaged over 100 classes using “leave-one-out” test format. Based on a single example, the HDP-DBM model achieves an AUROC of 0.36, significantly outperforming DBMs, DBNs, SVMs, as well as 1-NN using standard image-specific GIST features [24] that achieve an AUROC of 0.26, 0.25, 0.18 and 0.27 respectively. Table 1 also shows that fine-tuning parameters of all layers jointly as well as learning super-category hierarchy significantly improves model performance. As the number of training examples increases, the HDP-DBM model still consistently outperforms alternative methods. Fig. 4 further displays performance of HDP-DBM, DBM, and SVM models for all object categories when learning with only three examples. Observe that over 40 classes benefit in various degrees from learning a hierarchy. 5.2 Handwritten Characters The handwritten characters dataset [9] can be viewed as the “transpose” of MNIST. Instead of containing 60,000 images of 10 digit classes, the dataset contains 30,000 images of 1500 characters (20 examples each) with 28 × 28 pixels. These characters are from 50 alphabets from around the world, including Bengali, Cyrillic, Arabic, Sanskrit, Tagalog (see Fig. 5). We split the dataset into 15,000 training and 15,000 test images (10 examples of each class). Similar to the CIFAR dataset, we pretrain a two-layer DBM model, with the first layer containing 1000 hidden units, and the second layer containing M=100 softmax units, each defining a distribution over 1000 second layer features. Fig. 2 displays a random subset of training images, along with the 1st and 2nd layer DBM features, as well as higher-level class-sensitive HDP features. The HDP features tend to capture higher-level parts, many of which resemble pen “strokes”. Table 1 further shows results for classifying 15,000 test images as belonging to the novel vs. all other 1,499 character classes. 
The HDP-DBM model significantly outperforms other methods, particularly when learning characters with few training examples. Fig. 6 further displays the learned super-classes along with examples of entirely novel characters that have been generated by the model for the same super-class, as well as conditional samples when learning with only three training examples (we note that using Deep Belief Networks instead of DBMs produced far inferior generative samples). Remarkably, many samples look realistic, containing coherent, long-range structure, while at the same time being different from existing training images (see Supplementary Materials for a much richer class of generated samples).

Figure 5: A random subset of the training images along with 1st- and 2nd-layer DBM features, as well as higher-level class-sensitive HDP features/topics.

Figure 6: Left: Learned super-classes (by row) along with examples of novel characters, generated by the model for the same super-class. Right: Three training examples along with 8 conditional samples.

5.3 Motion capture

We next applied our model to human motion capture data consisting of sequences of 3D joint angles plus body orientation and translation [18]. The dataset contains 10 walking styles, including normal, drunk, graceful, gangly, sexy, dinosaur, chicken, old person, cat, and strong. There are 2500 frames of each style at 60 fps, where each time step is represented by a vector of 58 real-valued numbers. The dataset was split at random into 1500 training and 1000 test frames of each style. We further preprocessed the data by treating each window of 10 consecutive frames as a single 58 × 10 = 580-dimensional data vector.
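The windowing step can be sketched as follows (a minimal sketch; the text does not specify whether the windows overlap, so this version uses non-overlapping windows):

```python
import numpy as np

def frame_windows(frames, window=10):
    """Stack each window of `window` consecutive frames into one flat vector.
    `frames` has shape (num_frames, frame_dim); windows do not overlap here."""
    num_frames, frame_dim = frames.shape
    usable = (num_frames // window) * window   # drop any trailing partial window
    return frames[:usable].reshape(-1, window * frame_dim)

frames = np.zeros((2500, 58))              # 2500 frames, 58 values per frame
vectors = frame_windows(frames, window=10)
assert vectors.shape == (250, 580)         # 58 * 10 = 580-dimensional vectors
```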
For the two-layer DBM model, the first layer contained 500 hidden units, and the second layer contained M=50 softmax units, each defining a distribution over 500 second-layer features. As expected, Table 1 shows that the HDP-DBM model performs much better than the other models when discriminating between the nine existing walking styles and a novel walking style. The difference is particularly large in the regime where we observe only a handful of training examples of the novel walking style.

6 Conclusions

We developed a compositional architecture that learns an HDP prior over the activities of the top-level features of the DBM model. The resulting compound HDP-DBM model is able to learn low-level features from raw sensory input, high-level features, and a category hierarchy for parameter sharing. Our experimental results show that the proposed model can acquire new concepts from very few examples in a diverse set of application domains. The compositional model considered in this paper was directly inspired by the architecture of the DBM and HDP, but it need not be: any other deep learning module, including Deep Belief Networks, sparse auto-encoders, or any other hierarchical Bayesian model, can be adapted. This perspective opens a space of compositional models that may be more suitable for capturing the human-like ability to learn from few examples.

Acknowledgments: This research was supported by NSERC, ONR (MURI Grant 1015GNA126), ONR N00014-07-1-0937, ARO W911NF-08-1-0242, and Qualcomm.

References
[1] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR, pages 1–8, 2008.
[2] David M. Blei, Thomas L. Griffiths, and Michael I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. J. ACM, 57(2), 2010.
[3] Kevin R. Canini and Thomas L. Griffiths. Modeling human transfer learning with the hierarchical Dirichlet process.
In NIPS 2009 workshop: Nonparametric Bayes, 2009.
[4] Li Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(4):594–611, April 2006.
[5] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[6] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[7] C. Kemp, A. Perfors, and J. Tenenbaum. Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10(3):307–321, 2006.
[8] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Dept. of Computer Science, University of Toronto, 2009.
[9] Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Josh Tenenbaum. One-shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[10] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep neural networks. Journal of Machine Learning Research, 10:1–40, 2009.
[11] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th International Conference on Machine Learning, pages 609–616, 2009.
[12] M. A. Ranzato, Y. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. Advances in Neural Information Processing Systems, 2008.
[13] A. Rodriguez, D. Dunson, and A. Gelfand. The nested Dirichlet process. Journal of the American Statistical Association, 103:1131–1144, 2008.
[14] R. R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 12, 2009.
[15] R. R. Salakhutdinov and G. E. Hinton. Replicated softmax: an undirected topic model.
In Advances in Neural Information Processing Systems, volume 22, 2010.
[16] L. B. Smith, S. S. Jones, B. Landau, L. Gershkoff-Stowe, and L. Samuelson. Object name learning provides on-the-job training for attention. Psychological Science, pages 13–19, 2002.
[17] E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky. Describing visual scenes using transformed objects and parts. International Journal of Computer Vision, 77(1-3):291–330, 2008.
[18] G. Taylor, G. E. Hinton, and S. T. Roweis. Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems. MIT Press, 2006.
[19] Y. W. Teh and G. E. Hinton. Rate-coded restricted Boltzmann machines for face recognition. In Advances in Neural Information Processing Systems, volume 13, 2001.
[20] Y. W. Teh and M. I. Jordan. Hierarchical Bayesian nonparametric models with applications. In Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, 2010.
[21] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[22] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML. ACM, 2008.
[23] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large dataset for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
[24] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[25] Fei Xu and Joshua B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 114(2), 2007.
[26] L. Younes. On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates, March 17, 2000.
2011
Object Detection with Grammar Models Ross B. Girshick Dept. of Computer Science University of Chicago Chicago, IL 60637 rbg@cs.uchicago.edu Pedro F. Felzenszwalb School of Engineering and Dept. of Computer Science Brown University Providence, RI 02912 pff@brown.edu David McAllester TTI-Chicago Chicago, IL 60637 mcallester@ttic.edu Abstract Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data. 1 Introduction The idea that images can be hierarchically parsed into objects and their parts has a long history in computer vision, see for example [15]. Image parsing has also been of considerable recent interest [11, 13, 21, 22, 24]. However, it has been difficult to demonstrate that sophisticated compositional models lead to performance advantages on challenging metrics such as the PASCAL object detection benchmark [9]. In this paper we achieve new levels of performance for person detection using a grammar model that is richer than previous models used in high-performance systems. We also introduce a general framework for learning discriminative models from weakly-labeled data. Our models are based on the object detection grammar formalism in [11]. Objects are represented in terms of other objects through compositional rules. 
Deformation rules allow for the parts of an object to move relative to each other, leading to hierarchical deformable part models. Structural variability provides choice between multiple part subtypes (effectively creating mixture models throughout the compositional hierarchy) and also enables optional parts. In this formalism parts may be reused both within an object category and across object categories. Our baseline and departure point is the UoC-TTI object detector [10, 12]. This system represents a class of objects with three different pictorial structure models. Although these models are learned automatically, making semantic interpretation unclear, it seems that the three components for the person class differ in how much of the person is taken to be visible: just the head and shoulders, the head and shoulders together with the upper body, or the whole standing person. Each of the three components has independently trained parts. For example, each component has a head part trained independently from the head part of the other components. Here we construct a single grammar model that allows more flexibility in describing the amount of the person that is visible. The grammar model avoids dividing the training data between different components and thus uses the training data more efficiently. The parts in the model, such as the head part, are shared across different interpretations of the degree of visibility of the person. The grammar model also includes subtype choice at the part level to accommodate greater appearance variability across object instances.
Our approach differs from that of Jin and Geman [13] in that theirs focuses on whole scene interpretation with generative models, while we focus on discriminatively trained models of individual objects. We also make Markovian restrictions not made in [13]. Our work is more similar to that of Zhu et al. [21], who impose similar Markovian restrictions. However, our training method, image features, and grammar design are substantially different. The model presented here is designed to accurately capture the visible portion of a person. There has been recent related work on occlusion modeling in pedestrian and person images [7, 18]. In [7], Enzweiler et al. assume access to depth and motion information in order to estimate occlusion boundaries. In [18], Wang et al. rely on the observation that the scores of individual filter cells (using the Dalal and Triggs detector [5]) can reliably predict occlusion in the INRIA pedestrian data. This does not hold for the harder PASCAL person data. In addition to developing a grammar model for detecting people, we develop new training methods which contribute to our boost in performance. Training data for vision is often assigned weak labels such as bounding boxes or just the names of objects occurring in the image. In contrast, an image analysis system will often produce strong predictions such as a segmentation or a pose. Existing structured prediction methods, such as structural SVM [16, 17] and latent structural SVM [19], do not directly support weak labels together with strong predictions. We develop the notion of a weak-label structural SVM, which generalizes structural SVMs and latent-structural SVMs. The key idea is to introduce a loss L(y, s) for making a strong prediction s when the weak training label is y. A formalism for learning from weak labels was also developed in [2].
One important difference is that [2] generalizes ranking SVMs.1 Our framework also allows for softer relations between weak labels and strong predictions.

2 Grammar models

Object detection grammars [11] represent objects recursively in terms of other objects. Let N be a set of nonterminal symbols and T be a set of terminal symbols. We can think of the terminals as the basic building blocks that can be found in an image. The nonterminals define abstract objects whose appearance is defined in terms of expansions into terminals. Let Ω be a set of possible locations for a symbol within an image. A placed symbol, Y(ω), specifies a placement of Y ∈ N ∪ T at a location ω ∈ Ω. The structure of a grammar model is defined by a set, R, of weighted productions of the form

X(\omega_0) \xrightarrow{\,s\,} \{\, Y_1(\omega_1), \ldots, Y_n(\omega_n) \,\}, \quad (1)

where X ∈ N, Y_i ∈ N ∪ T, ω_i ∈ Ω, and s ∈ ℝ is a score. We denote the score of r ∈ R by s(r). We can expand a placed nonterminal to a bag of placed terminals by repeatedly applying productions. An expansion of X(ω) leads to a derivation tree T rooted at X(ω). The leaves of T are labeled with placed terminals, and the internal nodes of T are labeled with placed nonterminals and with the productions used to replace those symbols. We define appearance models for the terminals using a function score(A, ω) that computes a score for placing the terminal A at location ω. This score depends implicitly on the image data. We define the score of a derivation tree T to be the sum of the scores of the productions used to generate T, plus the scores of placing the terminals associated with T's leaves in their respective locations:

\mathrm{score}(T) = \sum_{r \in \mathrm{internal}(T)} s(r) + \sum_{A(\omega) \in \mathrm{leaves}(T)} \mathrm{score}(A, \omega) \quad (2)

To generalize the models from [10], we let Ω be positions and scales within a feature map pyramid H. We define the appearance models for terminals by associating a filter F_A with each terminal A.
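Eq. 2 can be illustrated with a small recursive computation over a toy derivation tree (the tree encoding below is our own; the paper computes these scores with dynamic programming over the grammar):

```python
def derivation_score(tree):
    """Eq. 2: sum of internal production scores plus terminal appearance
    scores at the leaves.  Toy encoding: internal nodes are
    {'score': s, 'children': [...]}, leaves are {'leaf_score': v}."""
    if 'leaf_score' in tree:
        return tree['leaf_score']
    return tree['score'] + sum(derivation_score(c) for c in tree['children'])

toy = {'score': 1.0, 'children': [
    {'leaf_score': 2.0},
    {'score': 0.5, 'children': [{'leaf_score': -1.0}]},
]}
assert derivation_score(toy) == 2.5   # 1.0 + 2.0 + (0.5 - 1.0)
```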
1 [2] claims the ranking framework overcomes a loss in performance when the number of background examples is increased. In contrast, we don't use a ranking framework but always observed a performance improvement when increasing the number of background examples.

Figure 1: Shallow grammar model. This figure illustrates a shallow version of our grammar model (Section 2.1). This model has six person parts and an occlusion model ("occluder"), each of which comes in one of two subtypes. A detection places one subtype of each visible part at a location and scale in the image. If the derivation does not place all parts it must place the occluder. Parts are allowed to move relative to each other, but their displacements are constrained by deformation penalties.

Then score(A, ω) = F_A · φ(H, ω) is the dot product between the filter coefficients and the features in a subwindow of the feature map pyramid, φ(H, ω). We use the variant of histogram of oriented gradient (HOG [5]) features described in [10]. We consider models with productions specified by two kinds of schemas (a schema is a template for generating productions). A structure schema specifies one production for each placement ω ∈ Ω,

X(\omega) \xrightarrow{\,s\,} \{\, Y_1(\omega \oplus \delta_1), \ldots, Y_n(\omega \oplus \delta_n) \,\}. \quad (3)

Here the δ_i specify constant displacements within the feature map pyramid. Structure schemas can be used to define decompositions of objects into other objects. Let ∆ be the set of possible displacements within a single scale of a feature map pyramid. A deformation schema specifies one production for each placement ω ∈ Ω and displacement δ ∈ ∆,

X(\omega) \xrightarrow{\,\alpha \cdot \phi(\delta)\,} \{\, Y(\omega \oplus \delta) \,\}. \quad (4)

Here φ(δ) is a feature vector and α is a vector of deformation parameters. Deformation schemas can be used to define deformable models.
We define φ(δ) = (dx, dy, dx², dy²) so that deformation scores are quadratic functions of the displacements. The parameters of our models are defined by a weight vector w with entries for the score of each structure schema, the deformation parameters of each deformation schema, and the filter coefficients associated with each terminal. Then score(T) = w · Φ(T), where Φ(T) is the sum of the (sparse) feature vectors associated with each placed terminal and production in T.

2.1 A grammar model for detecting people

Each component in the person model learned by the voc-release4 system [12] is tuned to detect people under a prototypical visibility pattern. Based on this observation we designed, by hand, the structure of a grammar that models visibility by using structural variability and optional parts. For clarity, we begin by describing a shallow model (Figure 1) that places all filters at the same resolution in the feature map pyramid. After explaining this model, we describe a deeper model that includes deformable subparts at higher resolutions.

Fine-grained occlusion: Our grammar model has a start symbol Q that can be expanded using one of six possible structure schemas. These choices model different degrees of visibility, ranging from heavy occlusion (only the head and shoulders are visible) to no occlusion at all. Beyond modeling fine-grained occlusion patterns when compared to the mixture models from [7] and [12], our grammar model is also richer in a number of ways. In Section 5 we show that each of the following modeling choices improves detection performance.

Occlusion model: If a person is occluded, then there must be some cause of the occlusion: either the edge of the image or an occluding object, such as a desk or dinner table. We use a nontrivial model to capture the appearance of the stuff that occludes people.

Part subtypes: The mixture model from [12] has two subtypes for each mixture component.
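The quadratic deformation score α·φ(δ) defined above can be sketched directly (the α values below are hypothetical; in the model they are learned):

```python
def deformation_score(alpha, dx, dy):
    """Score contributed by a deformation production (Eq. 4) with the
    quadratic features phi(delta) = (dx, dy, dx^2, dy^2)."""
    phi = (dx, dy, dx * dx, dy * dy)
    return sum(a * f for a, f in zip(alpha, phi))

# Hypothetical parameters: negative quadratic terms penalize large displacements.
alpha = (0.0, 0.0, -0.1, -0.1)
assert deformation_score(alpha, 0, 0) == 0.0
assert deformation_score(alpha, 2, 0) < deformation_score(alpha, 1, 0)
```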
The subtypes are forced to be mirror images of each other and correspond roughly to left-facing versus right-facing people. Our grammar model has two subtypes for each part, which are also forced to be mirror images of each other. But in the case of our grammar model, the decision of which part subtype to instantiate at detection time is made independently for each part.

The shallow person grammar model is defined by the following grammar, where the indices p (for part), t (for subtype), and k have the ranges p ∈ {1, ..., 6}, t ∈ {L, R}, and k ∈ {1, ..., 5}:

Q(\omega) \xrightarrow{\,s_k\,} \{\, Y_1(\omega \oplus \delta_1), \ldots, Y_k(\omega \oplus \delta_k), O(\omega \oplus \delta_{k+1}) \,\}
Q(\omega) \xrightarrow{\,s_6\,} \{\, Y_1(\omega \oplus \delta_1), \ldots, Y_6(\omega \oplus \delta_6) \,\}
Y_p(\omega) \xrightarrow{\,0\,} \{\, Y_{p,t}(\omega) \,\}
Y_{p,t}(\omega) \xrightarrow{\,\alpha_{p,t} \cdot \phi(\delta)\,} \{\, A_{p,t}(\omega \oplus \delta) \,\}
O(\omega) \xrightarrow{\,0\,} \{\, O_t(\omega) \,\}
O_t(\omega) \xrightarrow{\,\alpha_t \cdot \phi(\delta)\,} \{\, A_t(\omega \oplus \delta) \,\}

The grammar has a start symbol Q with six alternate choices that derive people under varying degrees of visibility (occlusion). Each part has a corresponding nonterminal Y_p that is placed at some ideal position relative to Q. Derivations with occlusion include the occlusion symbol O. A derivation selects a subtype and displacement for each visible part. The parameters of the grammar (production scores, deformation parameters, and filters) are learned with the discriminative procedure described in Section 4. Figure 1 illustrates the filters in the resulting model and some example detections.

Deeper model: We extend the shallow model by adding deformable subparts at two scales: (1) the same as, and (2) twice the resolution of the start symbol Q. When detecting large objects, high-resolution subparts capture fine image details. However, when detecting small objects, high-resolution subparts cannot be used because they "fall off the bottom" of the feature map pyramid. The model uses derivations with low-resolution subparts when detecting small objects. We begin by replacing the productions from Y_{p,t} in the grammar above, and then adding new productions.
Recall that p indexes the top-level parts and t indexes subtypes. In the following schemas, the indices r (for resolution) and u (for subpart) have the ranges: r ∈ {H, L}, u ∈ {1, . . . , N_p}, where N_p is the number of subparts in a top-level part Y_p.

Y_{p,t}(ω) −α_{p,t}·φ(δ)→ { Z_{p,t}(ω ⊕ δ) }
Z_{p,t}(ω) −0→ { A_{p,t}(ω), W_{p,t,r,1}(ω ⊕ δ_{p,t,r,1}), . . . , W_{p,t,r,N_p}(ω ⊕ δ_{p,t,r,N_p}) }
W_{p,t,r,u}(ω) −α_{p,t,r,u}·φ(δ)→ { A_{p,t,r,u}(ω ⊕ δ) }

We note that as in [23] our model has hierarchical deformations. The part terminal A_{p,t} can move relative to Q and the subpart terminal A_{p,t,r,u} can move relative to A_{p,t}. The displacements δ_{p,t,H,u} place the symbols W_{p,t,H,u} one octave below Z_{p,t} in the feature map pyramid. The displacements δ_{p,t,L,u} place the symbols W_{p,t,L,u} at the same scale as Z_{p,t}. We add subparts to the first two top-level parts (p = 1 and 2), with the number of subparts set to N_1 = 3 and N_2 = 2. We find that adding additional subparts does not improve detection performance.

2.2 Inference and test time detection

Inference involves finding high scoring derivations. At test time, because images may contain multiple instances of an object class, we compute the maximum scoring derivation rooted at Q(ω), for each ω ∈ Ω. This can be done efficiently using a standard dynamic programming algorithm [11]. We retain only those derivations that score above a threshold, which we set low enough to ensure high recall. We use box(T) to denote a detection window associated with a derivation T. Given a set of candidate detections, we apply nonmaximal suppression to produce a final set of detections. We define box(T) by assigning a detection window size, in feature map coordinates, to each structure schema that can be applied to Q. This leads to detections with one of six possible aspect ratios, depending on which production was used in the first step of the derivation. The absolute location and size of a detection depends on the placement of Q.
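The dynamic programming computation combines, at each placement, filter responses with deformation scores. A minimal 1-D sketch of that inner maximization (the responses and α values are illustrative; real implementations work over 2-D feature maps and use distance transforms for efficiency):

```python
def best_placement_scores(responses, alpha, max_disp=1):
    # For each anchor position i, compute
    #   max over dx of  responses[i + dx] + alpha . (dx, dx^2)
    # a 1-D stand-in for the per-symbol DP step over a feature map.
    n = len(responses)
    scores = []
    for i in range(n):
        best = float("-inf")
        for dx in range(-max_disp, max_disp + 1):
            j = i + dx
            if 0 <= j < n:
                best = max(best, responses[j] + alpha[0] * dx + alpha[1] * dx * dx)
        scores.append(best)
    return scores
```

With responses [0, 5, 0] and alpha = (0, -1), the strong response at the middle position "spreads" to its neighbors at a deformation cost of 1, giving scores [4, 5, 4].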
For the first five production schemas, the ideal location of the occlusion part, O, is outside of box(T).

3 Learning from weakly-labeled data

Here we define a general formalism for learning functions from weakly-labeled data. Let X be an input space, Y be a label space, and S be an output space. We are interested in learning functions f : X → S based on a set of training examples {(x_1, y_1), . . . , (x_n, y_n)} where (x_i, y_i) ∈ X × Y. In contrast to the usual supervised learning setting, we do not assume that the label space and the output space are the same. In particular there may be many output values that are compatible with a label, and we can think of each example as being only weakly labeled. It will also be useful to associate a subset of possible outputs, S(x) ⊆ S, with an example x. In this case f(x) ∈ S(x).

A connection between labels and outputs can be made using a loss function L : Y × S → R. L(y, s) associates a cost with the prediction s ∈ S on an example labeled y ∈ Y. Let D be a distribution over X × Y. Then a natural goal is to find a function f with low expected loss E_D[L(y, f(x))].

A simple example of a weakly-labeled training problem comes from learning sliding window classifiers on the PASCAL object detection dataset. The training data specifies pixel-accurate bounding boxes for the target objects, while a sliding window classifier reports boxes with a fixed aspect ratio and at a finite number of scales. The output space is, therefore, a subset of the label space.

As usual, we assume f is parameterized by a vector of model parameters w and generates predictions by maximizing a linear function of a joint feature map Φ(x, s), f(x) = argmax_{s ∈ S(x)} w · Φ(x, s). We can train w by minimizing a regularized risk on the training set. We define a weak-label structural SVM (WL-SSVM) by the following training equation,

E(w) = (1/2)||w||^2 + C Σ_{i=1}^{n} L'(w, x_i, y_i).   (5)

The surrogate training loss L' is defined in terms of two different loss-augmented predictions.
L'(w, x, y) = max_{s ∈ S(x)} [w · Φ(x, s) + L_margin(y, s)]   (6a)
            − max_{s ∈ S(x)} [w · Φ(x, s) − L_output(y, s)]   (6b)      (6)

L_margin encourages high-loss outputs to "pop out" of (6a), so that their scores get pushed down. L_output suppresses high-loss outputs in (6b), so the score of a low-loss prediction gets pulled up. It is natural to take L_margin = L_output = L. In this case L' becomes a type of ramp loss [4, 6, 14]. Alternatively, taking L_margin = L and L_output = 0 gives the ramp loss that has been shown to be consistent in [14]. As we discuss below, the choice of L_output can have a significant effect on the computational difficulty of the training problem.

Several popular learning frameworks can be derived as special cases of WL-SSVM. For the examples below, let I(a, b) = 0 when a = b, and I(a, b) = ∞ when a ≠ b.

Structural SVM. Let S = Y, L_margin = L and L_output(y, ŷ) = I(y, ŷ). Then L'(w, x, y) is the hinge loss used in a structural SVM [17]. In this case L' is convex in w because the maximization in (6b) disappears. We note, however, that this choice of L_output may be problematic and lead to inconsistent training problems. Consider the following situation. A training example (x, y) may be compatible with a different label ŷ ≠ y, in the sense that L(y, ŷ) = 0. But even in this case a structural SVM pushes the score w · Φ(x, y) to be above w · Φ(x, ŷ). This issue can be addressed by relaxing L_output to include a maximization over labels in (6b).

Latent structural SVM. Now let Z be a space of latent values, S = Y × Z, L_margin = L and L_output(y, (ŷ, ẑ)) = I(y, ŷ). Then L'(w, x, y) is the hinge loss used in a latent structural SVM [19]. In this case L' is not convex in w due to the maximization over latent values in (6b). As in the previous example, this choice of L_output can be problematic because it "requires" that the training labels be predicted exactly. This can be addressed by relaxing L_output, as in the previous example.
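Over a finite candidate set S(x), the surrogate loss in Equation (6) is straightforward to evaluate. A minimal sketch (the candidate outputs, feature vectors, and loss values in the test below are illustrative):

```python
def dot(w, f):
    return sum(a * b for a, b in zip(w, f))

def wl_ssvm_loss(w, feats, L_margin, L_output):
    # L'(w, x, y): difference of two loss-augmented maximizations, Eq. (6).
    # feats maps each candidate output s in S(x) to its feature vector
    # Phi(x, s); L_margin and L_output map s to its loss against label y.
    term_a = max(dot(w, feats[s]) + L_margin[s] for s in feats)  # (6a)
    term_b = max(dot(w, feats[s]) - L_output[s] for s in feats)  # (6b)
    return term_a - term_b
```

If the highest-scoring output already has zero loss under both L_margin and L_output, the two terms coincide and L' = 0; the loss becomes positive when a high-loss output outscores all low-loss ones by a margin.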
4 Training grammar models

Now we consider learning the parameters of an object detection grammar using the training data in the PASCAL VOC datasets with the WL-SSVM framework. For two rectangles a and b let overlap(a, b) = area(a ∩ b) / area(a ∪ b). We will use this measure of overlap in our loss functions.

For training, we augment our model's output space (the set of all derivation trees) with the background output ⊥. We define Φ(x, ⊥) to be the zero vector, as was done in [1]. Thus the score of a background hypothesis is zero, independent of the model parameters w.

The training data specifies a bounding box for each instance of an object in a set of training images. We construct a set of weakly-labeled examples {(x_1, y_1), . . . , (x_n, y_n)} as follows. For each training image I, and for each bounding box B in I, we define a foreground example (x, y), where y = B, x specifies the image I, and the set of valid predictions S(x) includes:

1. Derivations T with overlap(box(T), B) ≥ 0.1 and overlap(box(T), B') < 0.5 for all B' in I such that B' ≠ B.
2. The background output ⊥.

The overlap requirements in (1) ensure that we consider only predictions that are relevant for a particular object instance, while avoiding interactions with other objects in the image.

We also define a very large set of background examples. For simplicity, we use images that do not contain any bounding boxes. For each background image I, we define a different example (x, y) for each position and scale ω within I. In this case y = ⊥, x specifies the image I, and S(x) includes derivations T rooted at Q(ω) and the background output ⊥. The set of background examples is very large because the number of positions and scales within each image is typically around 250K.

4.1 Loss functions

The PASCAL benchmark requires a correct detection to have at least 50% overlap with a ground-truth bounding box. We use this rule to define our loss functions.
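The overlap measure defined above (intersection over union for axis-aligned rectangles) can be sketched as:

```python
def area(box):
    # box = (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def overlap(a, b):
    # overlap(a, b) = area(a ∩ b) / area(a ∪ b)
    inter_box = (max(a[0], b[0]), max(a[1], b[1]),
                 min(a[2], b[2]), min(a[3], b[3]))
    inter = area(inter_box)              # zero when the boxes are disjoint
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give overlap 1, disjoint boxes give 0, and a box shifted by half its width against an equal-sized box gives 1/3.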
First, define L_{l,τ}(y, s) as follows:

L_{l,τ}(y, s) = l  if y = ⊥ and s ≠ ⊥
              = 0  if y = ⊥ and s = ⊥
              = l  if y ≠ ⊥ and overlap(y, s) < τ
              = 0  if y ≠ ⊥ and overlap(y, s) ≥ τ.   (7)

Following the PASCAL VOC protocol we use L_margin = L_{1,0.5}. For a foreground example this pushes down the score of detections that don't overlap with the bounding box label by at least 50%. Instead of using L_output = L_margin, we let L_output = L_{∞,0.7}. For a foreground example this ensures that the maximizer of (6b) is a detection with high overlap with the bounding box label. For a background example, the maximizer of (6b) is always ⊥. Later we discuss how this simplifies our optimization algorithm. While our choice of L_output does not produce a convex objective, it does tightly limit the range of outputs, making our optimization less prone to reaching bad local optima.

4.2 Optimization

Since L' is not convex, the WL-SSVM objective (5) leads to a nonconvex optimization problem. We follow [19], in which the CCCP procedure [20] was used to find a local optimum of a similar objective. CCCP is an iterative algorithm that uses a decomposition of the objective into a sum of convex and concave parts, E(w) = E_convex(w) + E_concave(w):

E_convex(w) = (1/2)||w||^2 + C Σ_{i=1}^{n} max_{s ∈ S(x_i)} [w · Φ(x_i, s) + L_margin(y_i, s)]   (8)

E_concave(w) = −C Σ_{i=1}^{n} max_{s ∈ S(x_i)} [w · Φ(x_i, s) − L_output(y_i, s)]   (9)

In each iteration, CCCP computes a linear upper bound to E_concave based on a current weight vector w_t. The bound depends on subgradients of the summands in (9). For each summand, we take the subgradient Φ(x_i, s_i(w_t)), where s_i(w) = argmax_{s ∈ S(x_i)} [w · Φ(x_i, s) − L_output(y_i, s)] is a loss-augmented prediction. We note that computing s_i(w_t) for each training example can be costly. But from our definition of L_output, we have that s_i(w) = ⊥ for a background example, independent of w. Therefore, for a background example Φ(x_i, s_i(w_t)) = 0.

Table 1: PASCAL 2010 results. UoC-TTI and our method compete in comp3.
Poselets competes in comp4 due to its use of detailed pose and visibility annotations and non-PASCAL images.

        Grammar  +bbox  +context  UoC-TTI [9]  +bbox  +context  Poselets [9]
   AP   47.5     47.6   49.5      44.4         45.2   47.5      48.5

Table 2: Training objective and model structure evaluation on PASCAL 2007.

        Grammar LSVM  Grammar WL-SSVM  Mixture LSVM  Mixture WL-SSVM
   AP   45.3          46.7             42.6          43.2

After computing s_i(w_t) and Φ(x_i, s_i(w_t)) for all examples (implicitly for background examples), the weight vector is updated by minimizing a convex upper bound on the objective E(w):

w_{t+1} = argmin_w (1/2)||w||^2 + C Σ_{i=1}^{n} ( max_{s ∈ S(x_i)} [w · Φ(x_i, s) + L_margin(y_i, s)] − w · Φ(x_i, s_i(w_t)) ).   (10)

The optimization subproblem defined by Equation (10) is similar in form to a structural SVM optimization. Given the size and nature of our training dataset, we opt to solve this subproblem using stochastic subgradient descent and a modified form of the data mining procedure from [10]. As in [10], we data mine over background images to collect support vectors for background examples. However, unlike in the binary LSVM setting considered in [10], we also need to apply data mining to foreground examples. This would be slow because it requires performing relatively expensive inference (more than 1 second per image) on thousands of images. Instead of applying data mining to the foreground examples, each time we compute s_i(w_t) for a foreground example, we also compute the top M scoring outputs s ∈ S(x_i) of w_t · Φ(x_i, s) + L_margin(y_i, s), and place the corresponding feature vectors in the data mining cache. This is efficient since much of the required computation is shared with computation already necessary for computing s_i(w_t). While this is only a heuristic approximation to true data mining, it leads to an improvement over training with binary LSVM (see Section 5). In practice, we find that M = 1 is sufficient for improved performance and that increasing M beyond 1 does not improve our results.
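The loss L_{l,τ} from Equation (7) is simple to implement once an overlap function is available. A minimal sketch, with ⊥ represented as None and overlap passed in as a callable; the convention that a foreground label paired with a background prediction incurs loss l (treating their overlap as 0) is our assumption, not stated in the text:

```python
BACKGROUND = None  # stands in for the background output ⊥

def weak_label_loss(l, tau, y, s, overlap):
    # L_{l,tau}(y, s) as in Equation (7).
    if y is BACKGROUND:
        return 0.0 if s is BACKGROUND else l
    # Assumption: a foreground label with a background prediction
    # behaves like overlap 0 < tau, so it incurs loss l.
    if s is BACKGROUND or overlap(y, s) < tau:
        return l
    return 0.0
```

L_margin then corresponds to l = 1, tau = 0.5, and L_output to l = float("inf"), tau = 0.7.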
4.3 Initialization

Using CCCP requires an initial model or a heuristic for selecting the initial outputs s_i(w_0). Inspired by the methods in [10, 12], we train a single filter for fully visible people using a standard binary SVM. To define the SVM's training data, we select vertically elongated examples. We apply the orientation clustering method in [12] to further divide these examples into two sets that approximately correspond to left-facing versus right-facing orientations. Examples from one of these two sets are then anisotropically rescaled so their HOG feature maps match the dimensions of the filter. These form the positive examples. For negative examples, random patches are extracted from background images. After training the initial filter, we slice it into subfilters (one 8 × 8 and five 3 × 8) that form the building blocks of the grammar model. We mirror these six filters to get subtypes, and then add subparts using the energy covering heuristic in [10, 12].

5 Experimental results

We evaluated the performance of our person grammar and training framework on the PASCAL VOC 2007 and 2010 datasets [8, 9]. We used the standard PASCAL VOC comp3 test protocol, which measures detection performance by average precision (AP) over different recall levels. Figure 2 shows some qualitative results, including failure cases.

Figure 2: Example detections. Parts are blue. The occlusion part, if used, is dashed cyan. (a) Full visibility: detections of fully visible people. (b) Occlusion boundaries: examples where the occlusion part detects an occlusion boundary. (c) Early termination: detections where there is no occlusion, but a partial person is appropriate. (d) Mistakes: cases where the model did not detect occlusion properly.

PASCAL VOC 2010. Our results on the 2010 dataset are presented in Table 1 in the context of two strong baselines. The first, UoC-TTI, won the person category in the comp3 track of the 2010 competition [9]. The 2010 entry of the UoC-TTI method extended [12] by adding an extra octave to the HOG feature map pyramid, which allows the detector to find smaller objects. We report the AP score of the UoC-TTI "raw" person detector, as well as the scores after applying the bounding box prediction and context rescoring methods described in [10]. Comparing raw detector outputs, our grammar model significantly outperforms the mixture model: 47.5 vs. 44.4. We also applied the two post-processing steps to the grammar model, and found that unlike with the mixture model, the grammar model does not benefit from bounding box prediction. This is likely because our fine-grained occlusion model reduces the number of near misses that are fixed by bounding box prediction. To test context rescoring, we used the UoC-TTI detection data for the other 19 object classes. Context rescoring boosts our final score to 49.5.

The second baseline is the poselets system described in [3]. Their system requires detailed pose and visibility annotations, in contrast to our grammar model, which was trained only with bounding box labels. Prior to context rescoring, our model scores one point lower than the poselets model, and after rescoring it scores one point higher.

Structure and training. We evaluated several aspects of our model structure and training objective on the PASCAL VOC 2007 dataset. In all of our experiments we set the regularization constant to C = 0.006. In Table 2 we compare the WL-SSVM framework developed here with the binary LSVM framework from [10]. WL-SSVM improves performance of the grammar model by 1.4 AP points over binary LSVM training. WL-SSVM also improves results obtained using a mixture of part-based models, by 0.6 points. To investigate model structure, we evaluated the effect of part subtypes and occlusion modeling. Removing subtypes reduces the score of the grammar model from 46.7 to 45.5. Removing the occlusion part also decreases the score from 46.7 to 45.5.
The shallow model (no subparts) achieves a score of 40.6.

6 Discussion

Our results establish grammar-based methods as a high-performance approach to object detection by demonstrating their effectiveness on the challenging task of detecting people in the PASCAL VOC datasets. To do this, we carefully designed a flexible grammar model that can detect people under a wide range of partial occlusion, pose, and appearance variability. Automatically learning the structure of grammar models remains a significant challenge for future work. We hope that our empirical success will provide motivation for pursuing this goal, and that the structure of our handcrafted grammar will yield insights into the properties that an automatically learned grammar might require. We also develop a structured training framework, weak-label structural SVM, that naturally handles learning a model with strong outputs, such as derivation trees, from data with weak labels, such as bounding boxes. Our training objective is nonconvex and we use a strong loss function to avoid bad local optima. We plan to explore making this loss softer, in an effort to make learning more robust to outliers.

Acknowledgments

This research has been supported by NSF grant IIS-0746569.

References

[1] M. Blaschko and C. Lampert. Learning to localize objects with structured output regression. In ECCV, 2008.
[2] M. Blaschko, A. Vedaldi, and A. Zisserman. Simultaneous object detection and ranking with weak supervision. In NIPS, 2010.
[3] L. Bourdev, S. Maji, T. Brox, and J. Malik. Detecting people using mutually consistent poselet activations. In ECCV, 2010.
[4] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In ICML, 2006.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] C. Do, Q. Le, C. Teo, O. Chapelle, and A. Smola. Tighter bounds for structured estimation. In NIPS, 2008.
[7] M. Enzweiler, A. Eigenstetter, B. Schiele, and D. M.
Gavrila. Multi-cue pedestrian classification with partial occlusion handling. In CVPR, 2010.
[8] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results. http://www.pascalnetwork.org/challenges/VOC/voc2010/workshop/index.html.
[10] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 2009.
[11] P. Felzenszwalb and D. McAllester. Object detection grammars. University of Chicago, CS Dept., Tech. Rep. 2010-02.
[12] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models, release 4. http://people.cs.uchicago.edu/~pff/latent-release4/.
[13] Y. Jin and S. Geman. Context and hierarchy in a probabilistic image model. In CVPR, 2006.
[14] D. McAllester and J. Keshet. Generalization bounds and consistency for latent structural probit and ramp loss. In NIPS, 2011.
[15] Y. Ohta, T. Kanade, and T. Sakai. An analysis system for scenes containing objects with substructures. In ICPR, 1978.
[16] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 2006.
[18] X. Wang, T. Han, and S. Yan. An HOG-LBP human detector with partial occlusion handling. In ICCV, 2009.
[19] C.-N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. In ICML, 2009.
[20] A. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 2003.
[21] L. Zhu, Y. Chen, A. Torralba, W. Freeman, and A. Yuille.
Part and appearance sharing: Recursive compositional models for multi-view multi-object detection. In CVPR, 2010.
[22] L. Zhu, Y. Chen, and A. Yuille. Unsupervised learning of probabilistic grammar-Markov models for object categories. PAMI, 2009.
[23] L. Zhu, Y. Chen, A. Yuille, and W. Freeman. Latent hierarchical structural learning for object detection. In CVPR, 2010.
[24] S. Zhu and D. Mumford. A stochastic grammar of images. Foundations and Trends in Computer Graphics and Vision, 2006.
Inductive reasoning about chimeric creatures

Charles Kemp
Department of Psychology
Carnegie Mellon University
ckemp@cmu.edu

Abstract

Given one feature of a novel animal, humans readily make inferences about other features of the animal. For example, winged creatures often fly, and creatures that eat fish often live in the water. We explore the knowledge that supports these inferences and compare two approaches. The first approach proposes that humans rely on abstract representations of dependency relationships between features, and is formalized here as a graphical model. The second approach proposes that humans rely on specific knowledge of previously encountered animals, and is formalized here as a family of exemplar models. We evaluate these models using a task where participants reason about chimeras, or animals with pairs of features that have not previously been observed to co-occur. The results support the hypothesis that humans rely on explicit representations of relationships between features.

Suppose that an eighteenth-century naturalist learns about a new kind of animal that has fur and a duck's bill. Even though the naturalist has never encountered an animal with this pair of features, he should be able to make predictions about other features of the animal—for example, the animal could well live in water but probably does not have feathers. Although the platypus exists in reality, from an eighteenth-century perspective it qualifies as a chimera, or an animal that combines two or more features that have not previously been observed to co-occur.

Here we describe a probabilistic account of inductive reasoning and use it to account for human inferences about chimeras. The inductive problems we consider are special cases of the more general problem in Figure 1a where a reasoner is given a partially observed matrix of animals by features then asked to infer the values of the missing entries.
This general problem has been previously studied and is addressed by computational models of property induction, categorization, and generalization [1–7]. A challenge faced by all of these models is to capture the background knowledge that guides inductive inferences. Some accounts rely on similarity relationships between animals [6, 8], others rely on causal relationships between features [9, 10], and others incorporate relationships between animals and relationships between features [11]. We will evaluate graphical models that capture both kinds of relationships (Figure 1a), but will focus in particular on relationships between features.

Psychologists have previously suggested that humans rely on explicit mental representations of relationships between features [12–16]. Often these representations are described as theories—for example, theories that specify a causal relationship between having wings and flying, or living in the sea and eating fish. Relationships between features may take several forms: for example, one feature may cause, enable, prevent, be inconsistent with, or be a special case of another feature. For simplicity, we will treat all of these relationships as instances of dependency relationships between features, and will capture them using an undirected graphical model.

Previous studies have used graphical models to account for human inferences about features but typically these studies consider toy problems involving a handful of novel features such as "has gene X14" or "has enzyme Y132" [9, 11]. Participants might be told, for example, that gene X14 leads to the production of enzyme Y132, then asked to use this information when reasoning about novel animals. Here we explore whether a graphical model approach can account for inferences about familiar features.

Figure 1: Inductive reasoning about animals and features. (a) Inferences about the features of a new animal o_new that flies may draw on similarity relationships between animals (the new animal is similar to sparrows and robins but not hippos and rhinos), and on dependency relationships between features (flying and having wings are linked). (b) A graph product produced by combining the two graph structures in (a).

Working with familiar features raises a methodological challenge, since participants have a substantial amount of knowledge about these features and can reason about them in multiple ways. Suppose, for example, that you learn that a novel animal can fly (Figure 1a). To conclude that the animal probably has wings, you might consult a mental representation similar to the graph at the top of Figure 1a that specifies a dependency relationship between flying and having wings. On the other hand, you might reach the same conclusion by thinking about flying creatures that you have previously encountered (e.g. sparrows and robins) and noticing that these creatures have wings. Since the same conclusion can be reached in two different ways, judgments about arguments of this kind provide little evidence about the mental representations involved.

The challenge of working with familiar features directly motivates our focus on chimeras. Inferences about chimeras draw on rich background knowledge but require the reasoner to go beyond past experience in a fundamental way. For example, if you learn that an animal flies and has no legs, you cannot make predictions about the animal by thinking of flying, no-legged creatures that you have previously encountered. You may, however, still be able to infer that the novel animal has wings if you understand the relationship between flying and having wings. We propose that graphical models over features can help to explain how humans make inferences of this kind, and evaluate our approach by comparing it to a family of exemplar models.
The next section introduces these models, and we then describe two experiments designed to distinguish between the models.

1 Reasoning about objects and features

Our models make use of a binary matrix D where the rows {o_1, . . . , o_129} correspond to objects, and the columns {f^1, . . . , f^56} correspond to features. A subset of the objects is shown in Figure 2a, and the full set of features is shown in Figure 2b and its caption. Matrix D was extracted from the Leuven natural concept database [17], which includes 129 animals and 757 features in total. We chose a subset of these features that includes a mix of perceptual and behavioral features, and that includes many pairs of features that depend on each other. For example, animals that "live in water" typically "can swim," and animals that have "no legs" cannot "jump far."

Matrix D can be used to formulate problems where a reasoner observes one or two features of a new object (i.e. animal o_130) and must make inferences about the remaining features of the animal. The next two sections describe graphical models that can be used to address this problem. The first graphical model O captures relationships between objects, and the second model F captures relationships between features. We then discuss how these models can be combined, and introduce a family of exemplar-style models that will be compared with our graphical models.

A graphical model over objects. Many accounts of inductive reasoning focus on similarity relationships between objects [6, 8]. Here we describe a tree-structured graphical model O that captures these relationships. The tree was constructed from matrix D using average linkage clustering and the Jaccard similarity measure, and part of the resulting structure is shown in Figure 2a.
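The Jaccard similarity used to build the tree can be sketched for a pair of binary feature vectors as follows (the toy vectors in the test are illustrative, not rows of the Leuven matrix):

```python
def jaccard_similarity(a, b):
    # Jaccard similarity between two binary feature vectors:
    # |features both have| / |features either has|
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0
```

Average linkage clustering then repeatedly merges the pair of clusters with the highest mean pairwise similarity until a single tree remains.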
Figure 2: Graph structures used to define graphical models O and F. (a) A tree that captures similarity relationships between animals. The full tree includes 129 animals, and only part of the tree is shown here. The grey points along the branches indicate locations where a novel animal o_130 could be attached to the tree. (b) A network capturing pairwise dependency relationships between features. The edges capture both positive and negative dependencies. All edges in the network are shown, and the network also includes 20 isolated nodes for the following features: is black, is blue, is green, is grey, is pink, is red, is white, is yellow, is a pet, has a beak, stings, stinks, has a long neck, has feelers, sucks blood, lays eggs, makes a web, has a hump, has a trunk, and is cold-blooded.

The subtree in Figure 2a includes clusters corresponding to amphibians and reptiles, aquatic creatures, and land mammals, and the subtree omitted for space includes clusters for insects and birds.

We assume that the features in matrix D (i.e.
the columns) are generated independently over O:

P(D | O, π, λ) = ∏_i P(f^i | O, π_i, λ_i).

The distribution P(f^i | O, π_i, λ_i) is based on the intuition that nearby nodes in O tend to have the same value of f^i. Previous researchers [8, 18] have used a directed graphical model where the distribution at the root node is based on the baserate π_i, and any other node v with parent u has the following conditional probability distribution:

P(v = 1 | u) = π_i + (1 − π_i) e^{−λ_i l}   if u = 1
             = π_i − π_i e^{−λ_i l}         if u = 0   (1)

where l is the length of the branch joining node u to node v. The variability parameter λ_i captures the extent to which feature f^i is expected to vary over the tree. Note, for example, that any node v must take the same value as its parent u when λ_i = 0. To avoid free parameters, the feature baserates π_i and variability parameters λ_i are set to their maximum likelihood values given the observed values of the features {f^i} in the data matrix D. The conditional distributions in Equation 1 induce a joint distribution over all of the nodes in graph O, and the distribution P(f^i | O, π_i, λ_i) is computed by marginalizing out the values of the internal nodes. Although we described O as a directed graphical model, the model can be converted into an equivalent undirected model with a potential for each edge in the tree and a potential for the root node. Here we use the undirected version of the model, which is a natural counterpart to the undirected model F described in the next section.

The full version of structure O in Figure 2a includes 129 familiar animals, and our task requires inferences about a novel animal o_130 that must be slotted into the structure. Let D' be an expanded version of D that includes a row for o_130, and let O' be an expanded version of O that includes a node for o_130. The edges in Figure 2a are marked with evenly spaced gray points, and we use a uniform prior P(O') over all trees that can be created by attaching o_130 to one of these points.
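The conditional distribution in Equation (1) can be sketched directly:

```python
import math

def p_child_is_one(u, pi, lam, l):
    # Equation (1): P(v = 1 | u) for a node v whose parent u sits at
    # branch length l; pi is the feature baserate, lam the variability.
    if u == 1:
        return pi + (1 - pi) * math.exp(-lam * l)
    return pi - pi * math.exp(-lam * l)
```

With lam = 0 the child copies its parent exactly (the probabilities become 1 and 0), and as lam grows both conditionals approach the baserate pi, matching the behavior described in the text.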
Some of these trees have identical topologies, since some edges in Figure 2a have multiple gray points. Predictions about o_130 can be computed using:

P(D' | D) = Σ_{O'} P(D' | O', D) P(O' | D) ∝ Σ_{O'} P(D' | O', D) P(D | O') P(O').   (2)

Equation 2 captures the basic intuition that the distribution of features for o_130 is expected to be consistent with the distribution observed for previous animals. For example, if o_130 is known to fly then the trees with high posterior probability P(O' | D) will be those where o_130 is near other flying creatures (Figure 1a), and since these creatures have wings, Equation 2 predicts that o_130 probably also has wings. As this example suggests, model O captures dependency relationships between features implicitly, and therefore stands in contrast to models like F that rely on explicit representations of relationships between features.

A graphical model over features. Model F is an undirected graphical model defined over features. The graph shown in Figure 2b was created by identifying pairs where one feature depends directly on another. The author and a research assistant both independently identified candidate sets of pairwise dependencies, and Figure 2b was created by merging these sets and reaching agreement about how to handle any discrepancies. As previous researchers have suggested [13, 15], feature dependencies can capture several kinds of relationships. For example, wings enable flying, living in the sea leads to eating fish, and having no legs rules out jumping far. We work with an undirected graph because some pairs of features depend on each other but there is no clear direction of causal influence. For example, there is clearly a dependency relationship between being nocturnal and seeing in the dark, but no obvious sense in which one of these features causes the other.
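Equation (2) is a standard Bayesian model average over the candidate attachment structures O'. With a finite list of candidates it reduces to a weighted sum, sketched here with illustrative numbers rather than actual tree likelihoods:

```python
def posterior_predictive(pred_given_tree, lik, prior):
    # P(D'|D) = sum over candidate trees O' of
    #   P(D'|O', D) * P(D|O') * P(O') / P(D),  as in Equation (2).
    weights = [p * q for p, q in zip(lik, prior)]   # P(D|O') P(O')
    z = sum(weights)                                # normalizer P(D)
    return sum(pred * w for pred, w in zip(pred_given_tree, weights)) / z
```

When all candidate trees explain the data equally well, the prediction is a plain average; trees with higher likelihood pull the prediction toward their own conditional.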
We assume that the rows of the object-feature matrix D are generated independently from an undirected graphical model F defined over the feature structure in Figure 2b: P(D|F) = ∏i P(oi|F). Model F includes potential functions for each node and for each edge in the graph. These potentials were learned from matrix D using the UGM toolbox for undirected graphical models [19]. The learned potentials capture both positive and negative relationships: for example, animals that live in the sea tend to eat fish, and tend not to eat berries. Some pairs of feature values never occur together in matrix D (there are no creatures that fly but do not have wings). We therefore chose to compute maximum a posteriori values of the potential functions rather than maximum likelihood values, and used a diffuse Gaussian prior with a variance of 100 on the entries in each potential. After learning the potentials for model F, we can make predictions about a new object o130 using the distribution P(o130|F). For example, if o130 is known to fly (Figure 1a), model F predicts that o130 probably has wings because the learned potentials capture a positive dependency between flying and having wings. Combining object and feature relationships There are two simple ways to combine models O and F in order to develop an approach that incorporates both relationships between features and relationships between objects. The output combination model computes the predictions of both models in isolation, then combines these predictions using a weighted sum. The resulting model is similar to a mixture-of-experts model, and to avoid free parameters we use a mixing weight of 0.5. The structure combination model combines the graph structures used by the two models and relies on a set of potentials defined over the resulting graph product. An example of a graph product is shown in Figure 1b, and the potential functions for this graph are inherited from the component models in the natural way. Kemp et al.
[11] use a similar approach to combine a functional causal model with an object model O, but note that our structure combination model uses an undirected model F rather than a functional causal model over features. Both combination models capture the intuition that inductive inferences rely on relationships between features and relationships between objects. The output combination model has the virtue of simplicity, and the structure combination model is appealing because it relies on a single integrated representation that captures both relationships between features and relationships between objects. To preview our results, our data suggest that the combination models perform better overall than either O or F in isolation, and that both combination models perform about equally well. Exemplar models We will compare the family of graphical models already described with a family of exemplar models. The key difference between these model families is that the exemplar models do not rely on explicit representations of relationships between objects and relationships between features. Comparing the model families can therefore help to establish whether human inferences rely on representations of this sort. Consider first a problem where a reasoner must predict whether object o130 has feature k after observing that it has feature i. An exemplar model addresses the problem by retrieving all previously observed objects with feature i and computing the proportion that have feature k: P(ok = 1|oi = 1) = |f k & f i| / |f i| (3) where |f k| is the number of objects in matrix D that have feature k, and |f k & f i| is the number that have both feature k and feature i. Note that we have streamlined our notation by using ok instead of o130 k to refer to the kth feature value for object o130. Suppose now that the reasoner observes that object o130 has features i and j.
The natural generalization of Equation 3 is: P(ok = 1|oi = 1, oj = 1) = |f k & f i & f j| / |f i & f j| (4) Because we focus on chimeras, |f i & f j| = 0 and Equation 4 is not well defined. We therefore evaluate an exemplar model that computes predictions for the two observed features separately then computes the weighted sum of these predictions: P(ok = 1|oi = 1, oj = 1) = wi |f k & f i| / |f i| + wj |f k & f j| / |f j|, (5) where the weights wi and wj must sum to one. We consider four ways in which the weights could be set. The first strategy sets wi = wj = 0.5. The second strategy sets wi ∝ |f i|, and is consistent with an approach where the reasoner retrieves all exemplars in D that are most similar to the novel animal and reports the proportion of these exemplars that have feature k. The third strategy sets wi ∝ 1/|f i|, and captures the idea that features should be weighted by their distinctiveness [20]. The final strategy sets weights according to the coherence of each feature [21]. A feature is coherent if objects with that feature tend to resemble each other overall, and we define the coherence of feature i as the expected Jaccard similarity between two randomly chosen objects from matrix D that both have feature i. Note that the final three strategies are all consistent with previous proposals from the psychological literature, and each one might be expected to perform well. Because exemplar models and prototype models are often compared, it is natural to consider a prototype model [22] as an additional baseline. A standard prototype model would partition the 129 animals into categories and would use summary statistics for these categories to make predictions about the novel animal o130. We will not evaluate this model because it corresponds to a coarser version of model O, which organizes the animals into a hierarchy of categories. The key characteristic shared by both models is that they explicitly capture relationships between objects but not features.
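Concretely, Equations 3–5 amount to counting co-occurrences in the binary object-feature matrix. The following sketch (ours, on a toy matrix; the paper uses the 129-animal Leuven data) implements the single-premise proportion and the equal-weight combination:

```python
import numpy as np

def single_premise(D, i, k):
    """Equation 3: |f^k & f^i| / |f^i|, the proportion of objects
    with feature i that also have feature k."""
    has_i = D[:, i] == 1
    return D[has_i, k].mean()

def two_premise(D, i, j, k, wi=0.5):
    """Equation 5: weighted sum of the two single-premise predictions
    (wi = wj = 0.5 is the first weighting strategy)."""
    return wi * single_premise(D, i, k) + (1 - wi) * single_premise(D, j, k)

# Toy object-feature matrix: rows are objects, columns are features.
D = np.array([[1, 0, 1],
              [1, 0, 1],
              [1, 0, 0],
              [0, 1, 1]])
print(single_premise(D, 0, 2))  # 2/3: two of three feature-0 objects have feature 2
print(two_premise(D, 0, 1, 2))  # 5/6: average of 2/3 and 1.0
```

The remaining strategies only change the weights, e.g. wi ∝ |f i| or wi ∝ 1/|f i| before normalizing so that wi + wj = 1.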
2 Experiment 1: Chimeras Our first experiment explores how people make inferences about chimeras, or novel animals with features that have not previously been observed to co-occur. Inferences about chimeras raise challenges for exemplar models, and therefore help to establish whether humans rely on explicit representations of relationships between features. Each argument can be represented as f i, f j →f k, where f i and f j are the premises (e.g. “has no legs” and “can fly”) and f k is the conclusion (e.g. “has wings”). We are especially interested in conflict cases where the premises f i and f j lead to opposite conclusions when taken individually: for example, most animals with no legs do not have wings, but most animals that fly do have wings. Figure 3: Argument ratings for Experiment 1 plotted against the predictions of six models (two exemplar variants with wi = 0.5 and wi ∝ |f i|, feature model F, object model O, and the output and structure combination models; rows show all, conflict, edge, and other cases). The y-axis in each panel shows human ratings on a seven point scale, and the x-axis shows probabilities according to one of the models. Correlation coefficients are shown for each plot.
Our models that incorporate feature structure F can resolve this conflict since F includes a dependency between “wings” and “can fly” but not between “wings” and “has no legs.” Our models that do not include F cannot resolve the conflict and predict that humans will be uncertain about whether the novel animal has wings. Materials. The object-feature matrix D includes 447 feature pairs {f i, f j} such that none of the 129 animals has both f i and f j. We selected 40 pairs (see the supporting material) and created 400 arguments in total by choosing 10 conclusion features for each pair. The arguments can be assigned to three categories. Conflict cases are arguments f i, f j →f k such that the single-premise arguments f i →f k and f j →f k lead to incompatible predictions. For our purposes, two single-premise arguments with the same conclusion are deemed incompatible if one leads to a probability greater than 0.9 according to Equation 3, and the other leads to a probability less than 0.1. Edge cases are arguments f i, f j →f k such that the feature network in Figure 2b includes an edge between f k and either f i or f j. Note that some arguments are both conflict cases and edge cases. All arguments that do not fall into either one of these categories will be referred to as other cases. The 400 arguments for the experiment include 154 conflict cases, 153 edge cases, and 120 other cases. 34 arguments are both conflict cases and edge cases. We chose these arguments based on three criteria. First, we avoided premise pairs that did not co-occur in matrix D but that co-occur in familiar animals that do not belong to D. For example, “is pink” and “has wings” do not co-occur in D but “flamingo” is a familiar animal that has both features. Second, we avoided premise pairs that specified two different numbers of legs—for example, {“has four legs,” “has six legs”}. Finally, we aimed to include roughly equal numbers of conflict cases, edge cases, and other cases. Method.
16 undergraduates participated for course credit. The experiment was carried out using a custom-built computer interface, and one argument was presented on screen at a time. Participants rated the probability of the conclusion on a seven point scale where the endpoints were labeled “very unlikely” and “very likely.” The ten arguments for each pair of premises were presented in a block, but the order of these blocks and the order of the arguments within these blocks were randomized across participants. Results. Figure 3 shows average human judgments plotted against the predictions of six models. The plots in the first row include all 400 arguments in the experiment, and the remaining rows show results for conflict cases, edge cases, and other cases. The previous section described four exemplar models, and the two shown in Figure 3 are the best performers overall. Even though the graphical models include more numerical parameters than the exemplar models, recall that these parameters are learned from matrix D rather than fit to the experimental data. Matrix D also serves as the basis for the exemplar models, which means that all of the models can be compared on equal terms. The first row of Figure 3 suggests that the three models which include feature structure F perform better than the alternatives. The output combination model is the worst of the three models that incorporate F, yet even its correlation is significantly greater than the correlation achieved by the best exemplar model (p < 0.001, using the Fisher transformation to convert correlation coefficients to z scores). Our data therefore suggest that explicit representations of relationships between features are needed to account for inductive inferences about chimeras.
The model that includes the feature structure F alone performs better than the two models that combine F with the object structure O, which may not be surprising since Experiment 1 focuses specifically on novel animals that do not slot naturally into structure O. Rows two through four suggest that the conflict arguments in particular raise challenges for the models which do not include feature structure F. Since these conflict cases are arguments f i, f j → f k where f i →f k has strength greater than 0.9 and f j →f k has strength less than 0.1, the first exemplar model averages these strengths and assigns an overall strength of around 0.5 to each argument. The second exemplar model is better able to differentiate between the conflict arguments, but still performs substantially worse than the three models that include structure F. The exemplar models perform better on the edge arguments, but are outperformed by the models that include F. Finally, all models achieve roughly the same level of performance on the other arguments. Although the feature model F performs best overall, the predictions of this model still leave room for improvement. The two most obvious outliers in the third plot in the top row represent the arguments {is blue, lives in desert →lives in woods} and {is pink, lives in desert →lives in woods}. Our participants sensibly infer that any animal which lives in the desert cannot simultaneously live in the woods. In contrast, the Leuven database indicates that eight of the twelve animals that live in the desert also live in the woods, and the edge in Figure 2b between “lives in the desert” and “lives in the woods” therefore represents a positive dependency relationship according to model F. This discrepancy between model and participants reflects the fact that participants made inferences about individual animals but the Leuven database is based on features of animal categories. 
Note, for example, that any individual animal is unlikely to live in the desert and the woods, but that some animal categories (including snakes, salamanders, and lizards) are found in both environments. 3 Experiment 2: Single-premise arguments Our results so far suggest that inferences about chimeras rely on explicit representations of relationships between features but provide no evidence that relationships between objects are important. It would be a mistake, however, to conclude that relationships between objects play no role in inductive reasoning. Previous studies have used object structures like the example in Figure 2a to account for inferences about novel features [11]—for example, given that alligators have enzyme Y132 in their blood, it seems likely that crocodiles also have this enzyme. Inferences about novel objects can also draw on relationships between objects rather than relationships between features. For example, given that a novel animal has a beak you will probably predict that it has feathers, not because there is any direct dependency between these two features, but because the beaked animals that you know tend to have feathers. Our second experiment explores inferences of this kind. Materials and Method. 32 undergraduates participated for course credit. The task was identical to Experiment 1 with the following exceptions. 
Each two-premise argument f i, f j →f k from Experiment 1 was converted into two one-premise arguments f i →f k and f j →f k, and these one-premise arguments were randomly assigned to two sets. 16 participants rated the 400 arguments in the first set, and the other 16 rated the 400 arguments in the second set. Figure 4: Argument ratings and model predictions for Experiment 2 (exemplar model, feature model F, object model O, and the output and structure combination models; rows show all, edge, and other cases). Results. Figure 4 shows average human ratings for the 800 arguments plotted against the predictions of five models. Unlike Figure 3, Figure 4 includes a single exemplar model since there is no need to consider different feature weightings in this case. Unlike Experiment 1, the feature model F performs worse than the other alternatives (p < 0.001 in all cases). Not surprisingly, this model performs relatively well for edge cases f j →f k where f j and f k are linked in Figure 2b, but the final row shows that the model performs poorly across the remaining set of arguments. Taken together, Experiments 1 and 2 suggest that relationships between objects and relationships between features are both needed to account for human inferences. Experiment 1 rules out an exemplar approach but models that combine graph structures over objects and features perform relatively well in both experiments. We considered two methods for combining these structures and both performed equally well. Combining the knowledge captured by these structures appears to be important, and future studies can explore in detail how humans achieve this combination.
4 Conclusion This paper proposed that graphical models are useful for capturing knowledge about animals and their features and showed that a graphical model over features can account for human inferences about chimeras. A family of exemplar models and a graphical model defined over objects were unable to account for our data, which suggests that humans rely on mental representations that explicitly capture dependency relationships between features. Psychologists have previously used graphical models to capture relationships between features, but our work is the first to focus on chimeras and to explore models defined over a large set of familiar features. Although a simple undirected model accounted relatively well for our data, this model is only a starting point. The model incorporates dependency relationships between features, but people know about many specific kinds of dependencies, including cases where one feature causes, enables, prevents, or is inconsistent with another. An undirected graph with only one class of edges cannot capture this knowledge in full, and richer representations will ultimately be needed in order to provide a more complete account of human reasoning. Acknowledgments I thank Madeleine Clute for assisting with this research. This work was supported in part by the Pittsburgh Life Sciences Greenhouse Opportunity Fund and by NSF grant CDI-0835797. References [1] R. N. Shepard. Towards a universal law of generalization for psychological science. Science, 237:1317–1323, 1987. [2] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409–429, 1991. [3] E. Heit. A Bayesian analysis of some forms of inductive reasoning. In M. Oaksford and N. Chater, editors, Rational models of cognition, pages 248–274. Oxford University Press, Oxford, 1998. [4] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24:629–641, 2001. [5] C. Kemp and J. B.
Tenenbaum. Structured statistical models of inductive reasoning. Psychological Review, 116(1):20–58, 2009. [6] D. N. Osherson, E. E. Smith, O. Wilkie, A. Lopez, and E. Shafir. Category-based induction. Psychological Review, 97(2):185–200, 1990. [7] D. J. Navarro. Learning the context of a category. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1795–1803. 2010. [8] C. Kemp, T. L. Griffiths, S. Stromsten, and J. B. Tenenbaum. Semi-supervised learning with trees. In Advances in Neural Information Processing Systems 16, pages 257–264. MIT Press, Cambridge, MA, 2004. [9] B. Rehder. A causal-model theory of conceptual representation and categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:1141–1159, 2003. [10] B. Rehder and R. Burnett. Feature inference and the causal structure of categories. Cognitive Psychology, 50:264–314, 2005. [11] C. Kemp, P. Shafto, and J. B. Tenenbaum. An integrated account of generalization across objects and features. Cognitive Psychology, in press. [12] S. E. Barrett, H. Abdi, G. L. Murphy, and J. McCarthy Gallagher. Theory-based correlations and their role in children’s concepts. Child Development, 64:1595–1616, 1993. [13] S. A. Sloman, B. C. Love, and W. Ahn. Feature centrality and conceptual coherence. Cognitive Science, 22(2):189–228, 1998. [14] D. Yarlett and M. Ramscar. A quantitative model of counterfactual reasoning. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 123–130. MIT Press, Cambridge, MA, 2002. [15] W. Ahn, J. K. Marsh, C. C. Luhmann, and K. Lee. Effect of theory-based feature correlations on typicality judgments. Memory and Cognition, 30(1):107–118, 2002. [16] C. McNorgan, R. A. Kotack, D. C. Meehan, and K. McRae. Feature-feature causal relations and statistical co-occurrences in object concepts.
Memory and Cognition, 35(3):418–431, 2007. [17] S. De Deyne, S. Verheyen, E. Ameel, W. Vanpaemel, M. J. Dry, W. Voorspoels, and G. Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods, 40(4):1030–1048, 2008. [18] J. P. Huelsenbeck and F. Ronquist. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics, 17(8):754–755, 2001. [19] M. Schmidt. UGM: A Matlab toolbox for probabilistic undirected graphical models. 2007. Available at http://people.cs.ubc.ca/~schmidtm/Software/UGM.html. [20] L. J. Nelson and D. T. Miller. The distinctiveness effect in social categorization: you are what makes you unusual. Psychological Science, 6:246–249, 1995. [21] A. L. Patalano, S. Chin-Parker, and B. H. Ross. The importance of being coherent: category coherence, cross-classification and reasoning. Journal of Memory and Language, 54:407–424, 2006. [22] S. K. Reed. Pattern recognition and categorization. Cognitive Psychology, 3:393–407, 1972.
Empirical models of spiking in neural populations Jakob H. Macke Gatsby Computational Neuroscience Unit University College London, UK jakob@gatsby.ucl.ac.uk Lars Büsing Gatsby Computational Neuroscience Unit University College London, UK lars@gatsby.ucl.ac.uk John P. Cunningham Department of Engineering University of Cambridge, UK jpc74@cam.ac.uk Byron M. Yu ECE and BME Carnegie Mellon University byronyu@cmu.edu Krishna V. Shenoy Department of Electrical Engineering Stanford University shenoy@stanford.edu Maneesh Sahani Gatsby Computational Neuroscience Unit University College London, UK maneesh@gatsby.ucl.ac.uk Abstract Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either Gaussian or point-process assumptions, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.
1 Introduction Multi-electrode array recording and similar methods provide measurements of activity from dozens of neurons simultaneously, and thus allow unprecedented insights into the statistical structure of neural population activity. To exploit this potential we need methods that identify the temporal dynamics of population activity and link it to external stimuli and observed behaviour. These statistical models of population activity are essential for understanding neural coding at a population level [1] and can have practical applications for Brain Machine Interfaces [2]. Two frameworks for modelling the temporal dynamics of cortical population recordings have recently become popular. Generalised Linear spike-response Models (GLMs) [1, 3, 4, 5] model the influence of spiking history, external stimuli or other neural signals on the firing of a neuron. Here, the interdependence of different neurons is modelled by terms that link the instantaneous firing rate of each neuron to the recent spiking history of the population. The parameters of the GLM can be learned efficiently by convex optimisation [3, 4, 5, 6]. Such models have been successful in a range of studies and systems, including retinal [1] and cortical [7] population recordings. An alternative is provided by latent variable models such as Gaussian Process Factor Analysis [8] or other state-space models [9, 10, 11]. In this approach, shared variability (or ‘noise correlation’) is modelled by an unobserved process driving the population, which is sometimes characterised as ‘common input’ [12, 13]. One advantage of this approach is that the trajectories of the latent state provide a compact, low-dimensional representation of the population which can be used to visualise population activity, and link it to observed behaviour [14].
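To make the latent-variable picture concrete, the following sketch (our toy illustration, not the model fitted later in the paper) simulates a low-dimensional Gaussian latent process whose state drives conditionally Poisson spike counts in a larger population, so that shared variability arises from the common latent input:

```python
import numpy as np

rng = np.random.default_rng(0)
q, p, T = 20, 2, 200          # neurons, latent dimension, time bins

A = 0.95 * np.eye(p)          # stable latent dynamics
Q = 0.1 * np.eye(p)           # innovation covariance
C = rng.normal(0.0, 0.5, size=(q, p))  # per-neuron loadings on the state
d = np.full(q, -1.0)          # baseline log firing rates

x = np.zeros(p)
counts = np.empty((T, q), dtype=int)
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(p), Q)
    rate = np.exp(C @ x + d)       # all neurons see the same latent state,
    counts[t] = rng.poisson(rate)  # so their counts covary at zero lag
```

Because every neuron's rate depends on the same state x at the same time bin, the model induces instantaneous correlations of the kind discussed in the next section.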
1.1 Comparing coupled generalised linear models and latent variable models Three lines of argument suggest that latent dynamical models may provide a better fit to cortical population data than the spike-response GLM. First, prevalent recording apparatus, such as extracellular grid electrodes, sample neural populations very sparsely making it unlikely that much of the observed shared variability is a consequence of direct physical interaction. Hence, the coupling filters of a GLM rather reflect statistical interactions (sometimes called functional connectivity). Without direct synaptic coupling, it is unlikely that variability is shared exclusively by particular pairs of units; instead, it will generally be common to many cells—an assumption explicit in the latent variable approach, where shared variability results from the model of cortical dynamics. Second, most cortical population recordings find that shared variability across neurons is dominated by a central peak at zero time lag (i.e. the strongest correlation is instantaneous) [15, 16], and has broad, positive, sometimes asymmetric flanks, decaying slowly with lag time. Correlations with these properties arise naturally in dynamical system models. The common input from the latent state induces instantaneous correlations, and the evolution of the latent system typically yields positive temporal correlations over moderate timescales. By contrast, GLMs couple instantaneous rate to the recent spiking of other neurons, but not to their simultaneous activity, making zero-lag correlation hard to model. (As we show below in “Methods”, the inclusion of simultaneous terms would lead to invalid models.) Instead, the common approach is to discretise time very finely so that an off-zero peak can be brought close to simultaneity. This increases computational load, and often requires discretisation finer than the time-scale of interest, perhaps even finer than the recording resolution (e.g. for 2-photon calcium imaging). 
In addition, positive history coupling in a GLM may lead to loops of self-excitation, predicting unrealistically high firing rates—a trend that must be countered by long-term negative self-coupling. Thus, while it is certainly not impossible to reproduce neural correlation structure with GLMs [1], they do not seem to be the natural choice for modelling time series of spike counts with instantaneous correlations. Third, recording time, and therefore the data available to fit a model, is usually limited in vivo, especially in behaving animals. This paucity of data places strong constraints on the number of parameters that can be identified. In dynamical system models, the parameter count grows linearly with population size (for a constant latent dimension), whereas the parameters of a coupled GLM depend quadratically on the number of neurons. Thus, GLMs may have many more parameters, and depend on aggressive regularisation techniques to avoid over-fitting to small datasets. Here we show that population activity in monkey motor cortex is better fit by a dynamical system model than by a spike-response GLM; and that the dynamical system, but not a GLM of the same temporal resolution, accurately reproduces the temporal structure of cross-correlations in these data. 1.2 Comparing dynamical system models with spiking or Gaussian observations Many studies of population latent variable models assume Gaussian observation noise [8, 17] (but see, e.g. [2, 11, 13, 18]). Given that spikes are discrete events in time, it seems more natural to use a Poisson [10] or other point-process model [19] or, at coarser timescales, a count-process model. However, it is unclear what (if any) advantage such more realistic models confer. For example, Poisson decoding models do not always outperform Gaussian ones [2, 11]. Here, we describe a latent linear dynamical system whose count distribution, when conditioned on all past observations, is Poisson.
(the Poisson linear dynamical system, or PLDS). Using a co-smoothing metric, we show that this (computationally more expensive) count model predicts spike counts in our data better than a Gaussian linear dynamical system (GLDS). The two models give substantially different population spike-count distributions, and the count approach is also more accurate on this measure than either the GLDS or GLM. 2 Methods 2.1 Dynamical systems with count observations and time-varying mean rates We first consider the count-process latent dynamical model (PLDS). Denote by yi kt the observed spike-count of neuron i ∈ {1 . . . q} at time bin t ∈ {1 . . . T} of trial k ∈ {1 . . . N}, and by yk = vec(yk,i=1:q,t=1:T) the qT × 1 vector of all data observed on trial k. Neurons are assumed to be conditionally independent given the low-dimensional latent state xkt (of dimensionality p with p < q). Thus, correlated neural variability arises from variations of this latent population state, and not from direct interaction between neurons. Conditioned on x and the recent spiking history st, the activity of neuron i at time t is given by a Poisson distribution with mean E[yi kt|skt, xkt] = exp([Cxkt + d + Dskt]i), (1) where the q × p matrix C determines how each neuron is related to the latent state xkt, and the q-dimensional vector d controls the mean firing rates of the population. The history term st is a vector of all relevant recent spiking in the population [1, 3, 7, 20]. For example, one choice to model spike refractoriness would set skt to the counts at the previous time point skt = yk,(t−1), and D to a diagonal matrix of size q × q with negative diagonal entries. In general, however, s and D may contain entries that reflect temporal dependence on a longer time-scale. However, to maintain the conditional independence of neurons given the latent state, the matrix D (of size q × dim(s)) is constrained to have zero values at all entries corresponding to cross-neuron couplings.
The exponential nonlinearity ensures that the conditional firing rate of each neuron is positive. Furthermore, while conditioned on the latent state and the recent spiking history the count in each bin is Poisson distributed (hence the model name), samples from the model are not Poisson as they are affected both by variations in the underlying state and the single-neuron history. We assume that the latent population state xkt evolves according to driven linear Gaussian dynamics: xk1 ∼ N(xo, Qo) (2) xk(t+1)|xkt ∼ N(Axkt + bt, Q) (3) Here, xo and Qo denote the average value and the covariance of the initial state x1 of each trial. The p × p matrix A specifies the deterministic component of the evolution from one state to the next, and the matrix Q gives the covariance of the innovations that perturb the latent state at each time step. The ‘driving inputs’ bt, which add to the latent state, allow the model to capture time-varying structure in the firing rates that is consistent across trials. Such time-varying mean firing rates are usually characterised by the peri-stimulus time histogram (PSTH), which requires q × T parameters to estimate for each stimulus. Here, by contrast, time-varying means are captured by the driving inputs into the latent state, and so only p × T parameters are needed to describe all the PSTHs. 2.2 Expectation-Maximisation for the PLDS model We use an EM algorithm, similar to those described before [10, 11, 12], to learn the parameters Θ = {C, D, d, A, Q, Qo, xo}. The E-step requires the posterior distribution P(¯xk|yk, Θ) over the latent trajectories ¯xk = vec(xk,1:T) given the data and our current estimate of the parameters Θ. As this distribution is not available in closed form, we approximate it by a multivariate Gaussian, P(¯xk|yk, Θ) ≈ N(µk, Σk). Since ¯xk is a vector of length pT, µk and Σk are of size pT × 1 and pT × pT, respectively.
We find the mean $\mu_k$ and covariance $\Sigma_k$ of this Gaussian via a global Laplace approximation [21], i.e. by maximising the log-posterior of each trial over $\bar{x}_k$, setting $\mu_k = \arg\max_{\bar{x}} P(\bar{x} \mid y_k, \Theta)$ to be the latent trajectory that achieves this maximum, and $\Sigma_k = -\left(\nabla\nabla_{\bar{x}} \log P(\bar{x} \mid y_k, \Theta)\big|_{\bar{x}=\mu_k}\right)^{-1}$ to be the negative inverse Hessian of the log-posterior at its maximum. The log-posterior on trial $k$ is given by

$$\log P(\bar{x}_k \mid y_k, \Theta) = \mathrm{const} + \sum_{t=1}^{T} \left( y_{kt}^\top (C x_{kt} + D s_{kt} + d) - \sum_{i=1}^{q} \exp\!\left([C x_{kt} + D s_{kt} + d]_i\right) \right) - \frac{1}{2}(x_{k1} - x_o)^\top Q_o^{-1} (x_{k1} - x_o) - \frac{1}{2}\sum_{t=1}^{T-1} (x_{k,t+1} - A x_{kt} - b_t)^\top Q^{-1} (x_{k,t+1} - A x_{kt} - b_t) \qquad (4)$$

Log-posteriors of this type are concave and hence unimodal [5, 6], and the Markov structure of the latent dynamics makes it possible to compute a Newton update in $O(T)$ time [22]. Furthermore, it has previously been observed that the Laplace approximation performs well for similar models with Poisson observations [23]. We checked the quality of the Laplace approximation for our parameter settings by drawing samples from the true posterior in a few cases. The agreement was generally good, with only some minor deviations between the approximated and sampled covariances.

The M-step requires optimisation of the expected joint log-likelihood with respect to the parameters $\Theta$, i.e. $\Theta_{\mathrm{new}} = \arg\max_{\Theta'} L(\Theta')$ with

$$L(\Theta') = \sum_k \int \left[\log P(y_k \mid x, \Theta') + \log P(x \mid \Theta')\right] \mathcal{N}(x \mid \mu_k, \Sigma_k)\, dx. \qquad (5)$$

This integral can be evaluated in closed form and efficiently optimised over the parameters: $L(\Theta')$ is jointly concave in the parameters $C$, $d$, $D$, and the updates with respect to the dynamics parameters $A$, $Q$, $Q_o$, $x_o$ and the driving inputs $b_t$ can be calculated analytically. Our use of the Laplace approximation in the E-step breaks the usual guarantee of non-decreasing likelihoods in EM. Furthermore, the full likelihood of the model can only be approximated using sampling techniques [11].
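The Newton-based mode finding that underlies the Laplace approximation can be sketched for a single time bin, with prior $x \sim \mathcal{N}(0, I)$ and counts $y \sim \mathrm{Poisson}(\exp(Cx + d))$; the full E-step applies the same scheme to entire trajectories, exploiting the Markov structure for $O(T)$ updates. This is a toy illustration with assumed parameter values, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
q, p = 6, 2
C = 0.4 * rng.standard_normal((q, p))
d = np.full(q, -1.0)
y = rng.poisson(1.0, size=q).astype(float)

x = np.zeros(p)                       # initialise at the prior mean
for _ in range(50):                   # Newton ascent on the concave log-posterior
    lam = np.exp(C @ x + d)           # conditional rates
    grad = C.T @ (y - lam) - x        # gradient of log P(x | y)
    hess = -(C.T * lam) @ C - np.eye(p)   # Hessian: -C' diag(lam) C - I
    step = np.linalg.solve(hess, grad)
    x = x - step
    if np.linalg.norm(step) < 1e-10:
        break

mu = x                                # posterior mode
lam = np.exp(C @ mu + d)
hess = -(C.T * lam) @ C - np.eye(p)
Sigma = np.linalg.inv(-hess)          # negative inverse Hessian at the mode
```

Because the Hessian is everywhere negative definite, the iteration converges to the unique mode without a line search in this small example.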
We therefore monitored convergence using the leave-one-neuron-out prediction score [8] that we also used for comparisons with alternative methods (see below): for each trial in the test set, and for each neuron $i$, we calculate its most likely firing rate given the activity of the other neurons $y^{-i}_{k,1:T}$, and then compare this prediction against the observed activity. If implemented naively, this requires $q$ inferences of the latent state from the activity of $q-1$ neurons. However, this computation can be sped up by an order of magnitude by first finding the most likely state given all neurons, and then performing one Newton update for each held-out neuron from this initial state. While this approximate approach yielded accurate results, we only used it for tracking convergence of the algorithm, not for reporting the results in Section 3.1.

2.3 Alternative models: Generalised Linear Models and Gaussian dynamical systems

The spike-response GLM models the instantaneous rate of neuron $i$ at time $t$ by a generalised linear form [4] with input covariates representing stimulus (or time) and population spike history:

$$\lambda^i_{kt} = \mathbb{E}\!\left[y^i_{kt} \mid s_{kt}\right] = \exp\!\left([b_t + d + D s_{kt}]_i\right). \qquad (6)$$

The coupling matrix $D$ describes dependence both on the history of firing in the same neuron and on spiking in other neurons, and the $q \times 1$ vectors $b_t$ model time-varying mean firing rates. The parameters are estimated by minimising the negative log-likelihood $L_{\mathrm{dat}} = -\sum_{k,t,i} \left(y^i_{kt} \log \lambda^i_{kt} - \lambda^i_{kt}\right)$. While equation (6) is similar to the definition of the PLDS model in equation (1), the models differ in their treatment of shared variability: the GLM has no latent state $x_t$, so shared variance is modelled through the cross-coupling terms of the matrix $D$, which are set to 0 in the PLDS. As the number of parameters in the GLM is quadratic in population size, it may be prone to overfitting on small datasets.
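As an illustration of Eq. (6) and the data term $L_{\mathrm{dat}}$, the following sketch evaluates GLM rates and the negative Poisson log-likelihood (up to the $y!$ term, which is constant in the parameters) on toy data with self-history couplings only; all values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
q, T = 5, 100
b = np.zeros((T, q))                     # time-varying log-rate offsets b_t
d = np.full(q, -2.0)                     # baseline log-rates
D = -0.5 * np.eye(q)                     # self-history couplings only (toy)
y = rng.poisson(0.2, size=(T, q)).astype(float)

# One-bin history: s_t = y_{t-1}, with zeros at t = 0
s = np.vstack([np.zeros(q), y[:-1]])
lam = np.exp(b + d + s @ D.T)            # conditional rates lambda_t^i, Eq. (6)

# Negative Poisson log-likelihood, minimised during fitting
L_dat = -np.sum(y * np.log(lam) - lam)
```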
To improve the generalisation ability of the GLM, we added a sparsity-inducing L1 prior on the coupling parameters and a smoothness prior on the PSTH parameters $b_t$, and minimised the (convex) cost function using methods described in [24]:

$$L(b, d, D) = L_{\mathrm{dat}} + \eta_1 \sum_{ij} |D_{ij}| + \frac{1}{2\eta_2} \sum_t b_t^\top K_{\eta_3}^{-1} b_t. \qquad (7)$$

Here, the regularization parameter $\eta_1$ determines the sparsity of the solution $D$, $\eta_2$ is the prior variance of the smoothing prior, and $K_{\eta_3}(t, s) = \exp\!\left(-(s-t)^2/\eta_3^2\right)$ is a squared-exponential prior on the time-varying firing rates $b_t$ which ensures their smoothness over time.

It is important to note that GLMs with Poisson conditionals cannot easily be extended to allow for instantaneous couplings between neurons. Suppose that we sought a model whose couplings were only instantaneous, with conditional distributions $y_{it} \mid y_{-i,t} \sim \mathrm{Poiss}\!\left(\exp\!\left(D_{(i,-i)}\, y_{-i,t}\right)\right)$. It can be verified that the model $P(y) = \frac{1}{Z} \exp\!\left(y^\top J y\right) / \prod_i y_i!$, which could be regarded as the Poisson equivalent of the Ising model [25], would provide such a structure (as long as $J$ has a zero diagonal). In this model, $P(y_{it} \mid y_{-i,t}) \propto \exp\!\left(y_{it} \sum_{j \neq i} D_{ij} y_{jt}\right) / y_{it}!$. One might imagine that the parameters $J$ could be learnt by maximizing each of the conditional likelihoods over a row of $J$ (effectively maximising the pseudo-likelihood), and one could sample counts by Gibbs sampling, again exploiting the fact that the conditional distributions are all Poisson. However, an obvious prerequisite would be that a $Z$ exists for which the model is normalised. Unfortunately, this becomes impossible as soon as any entry of $J$ is positive. For example, if entry $J_{ij}$ is positive, then we can easily construct a firing pattern $y$ for which probabilities diverge. Let the pattern $y^{(n)}$ have value $n$ at entries $i$ and $j$, and zeros otherwise. Then, for large $n$, we find that $\log P(y^{(n)}) \propto n^2 J_{ij} - 2 \log(n!)$, which is dominated by the quadratic term and therefore diverges, rendering the model unnormalizable.
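The divergence argument can be checked numerically: with any positive coupling $J_{ij}$, the unnormalised log-probability of the pattern $y^{(n)}$ grows like $n^2 J_{ij} - 2\log(n!)$ and is eventually dominated by the quadratic term, so no normaliser $Z$ can exist.

```python
import math

def log_unnorm(n, J_ij=0.1):
    # Unnormalised log-probability of y^(n): n spikes in each of neurons i, j.
    # log(n!) = lgamma(n + 1); the n^2 term eventually dominates 2*log(n!).
    return n * n * J_ij - 2.0 * math.lgamma(n + 1.0)

vals = [log_unnorm(n) for n in (10, 100, 1000)]
```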
Thus, this “Poisson equivalent” of the Ising model cannot model positive interactions between neurons, limiting its value.

The Poisson likelihood of the PLDS requires approximation and is computationally cumbersome. An apparently less veridical alternative is to model counts as conditionally Gaussian given the latent state. We used the EM algorithm [9] to fit a linear dynamical system model with Gaussian noise and driving inputs [17] (GLDS). In comparison with the Poisson model, the GLDS has an additional set of $q$ parameters corresponding to the variances of the Gaussian observation noise. Finally, we also compared PLDS to Gaussian Process Factor Analysis (GPFA) [8], a Gaussian model in which the latent trajectories are drawn not from a linear dynamical system but from a more general Gaussian process with (here) a squared-exponential kernel. We did not include the driving inputs $b_t$ in this model, and we used the full model for co-smoothing, i.e. we did not orthogonalise its filters as was done in [8].

We quantified goodness-of-fit using two measures of 'leave-one-neuron-out prediction' accuracy on test data (see [8] for more detail). Each neuron's firing rate was first predicted using the activity of all other neurons on each test trial. For the GLM (but not PLDS), the predictions reported were based on the past activity of other neurons, but also used the observed past activity of the neuron being predicted (results exploiting all data from other neurons were similar). We then calculated the difference between the total variance and the residual variance around this prediction,

$$M_{i,k} = \mathrm{var}\!\left(y^i_{k,1:T}\right) - \mathrm{MSE}\!\left(y^i_{k,1:T}, y^{\mathrm{pred}}\right).$$

Here, the predicted firing rate is a vector of length $T$, and the variance is computed over all times $t = 1, \dots, T$ in trial $k$. Positive values indicate that the prediction is more accurate than a constant prediction equal to the true mean activity of that neuron on that trial.
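The prediction score $M_{i,k}$ can be sketched directly from its definition, on toy data rather than the recordings analysed here. A perfect prediction yields a positive score, while a constant prediction equal to the trial mean yields exactly zero:

```python
import numpy as np

def prediction_score(y_true, y_pred):
    # M = var(y) - MSE(y, y_pred): positive when the prediction beats a
    # constant equal to the neuron's mean activity on that trial.
    return np.var(y_true) - np.mean((y_true - y_pred) ** 2)

y_true = np.array([0., 1., 0., 2., 1., 0.])
score_perfect = prediction_score(y_true, y_true)
score_constant = prediction_score(y_true, np.full(6, y_true.mean()))
```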
We also constructed a receiver operating characteristic (ROC) for deciding, based on the predicted firing rates, which bins were likely to contain at least one spike, and measured the area under this curve (AUC) [7, 26]. This measure ranges between 0.5 and 1, with a value of 1 reflecting correct identification of spike-containing bins, even if the predicted number of spikes is incorrect.

2.4 Details of neural recordings and choice of parameters

We evaluated the methods described above on multi-electrode recordings from the motor cortex of a behaving monkey. The details of the data are described elsewhere [8]. Briefly, spikes were recorded with a 96-electrode array (Blackrock, Salt Lake City, UT) implanted into motor areas of a rhesus macaque (monkey G) performing a delayed center-out reach task. For the analyses presented here, data came from 108 trials on which the monkey was instructed to reach to one target. We used 1200 ms of data from each trial, from 200 ms before target onset until the cue to move was presented. We included 92 units (single and multi-units) with robust delay activity. Spike trains were binned at 10 ms, which resulted in 8.13% of bins containing at least one spike and 0.61% of bins containing more than one spike. For goodness-of-fit analyses, we performed 4-fold cross-validation, splitting the data into four non-overlapping test folds with 27 trials each. For the PLDS model, the dimensionality of the latent state varied from 1 to 20. Models either had no direct history-dependence (i.e. $D = 0$), or used spike history mapped to a set of 4 basis functions formed by orthogonalising decaying exponentials with time constants 0.1, 10, 20, 40 ms (similar to those used in [1]). The history term $s_t$ was then obtained by projecting spike counts in the previous 100 ms onto each of these functions. The exponential with 0.1 ms time constant effectively covered only the previous time bin and was thus able to model refractoriness.
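A sketch of this history basis, assuming a 100 ms window of 10 ms bins and using a QR decomposition for the orthogonalisation step (the text does not specify the orthogonalisation method, so QR is an assumption), followed by projection of one neuron's recent counts onto the basis:

```python
import numpy as np

bins = np.arange(10) * 10.0                 # 0, 10, ..., 90 ms into the past
taus = np.array([0.1, 10.0, 20.0, 40.0])    # time constants in ms (from the text)
B = np.exp(-bins[None, :] / taus[:, None])  # 4 x 10 raw decaying exponentials
Q_basis, _ = np.linalg.qr(B.T)              # orthonormalise the 4 exponentials
B_orth = Q_basis.T                          # 4 x 10 orthonormal basis

# Project the previous 100 ms of spike counts onto the basis -> 4 features
recent_counts = np.array([1., 0., 0., 2., 0., 1., 0., 0., 0., 1.])
s_features = B_orth @ recent_counts
```

The 0.1 ms exponential is essentially an indicator of the immediately preceding bin, which is what allows it to capture refractoriness.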
In this case, $D$ was of size $q \times 4q$, with only 4 non-zero elements in each row. For the GLM, we varied the sparsity parameter $\eta_1$ from 0 to 1 (yielding estimates of $D$ that ranged from a dense matrix to entirely 0), and computed prediction performance at each prior setting. After exploratory runs, the parameters of the smoothness prior were set to $\eta_2 = 0.1$ and $\eta_3 = 20$ ms.

3 Results

3.1 Goodness-of-fit of dynamical system models and GLMs

We first compared the goodness-of-fit of PLDS with $p = 5$ latent dimensions against those of GLMs. For all choices of the regularization parameter $\eta_1$ tested, we found that the prediction performance of GLMs was inferior to that of PLDS (Fig. 1A). This was true for GLMs with history terms of length 10 ms, 100 ms or 150 ms (with 1, 4 or 5 basis functions each, equivalent to the history functions used for the spiking history in the dynamical system model, with an additional 80 ms time-constant exponential as the 5th basis function).

Figure 1: Quantifying goodness-of-fit. A) Prediction performance (variance minus mean-squared error on the test set) of various coupled GLMs (10 ms history; 2 variants with 100 ms history; 150 ms history) plotted against sparsity in the filter matrix $D$ generated by different choices of $\eta_1$. For all $\eta_1$, GLM prediction was poorer than that of PLDS with $p = 5$. Error bars on PLDS performance are standard errors of the mean across trials. B) As A, measuring performance by area under the ROC curve (AUC). C) Prediction performance of different latent variable models (GPFA, and LDSs with Gaussian, Poisson or history-dependent Poisson noise) on the test set. PLDS outperforms alternatives, and performance plateaus at small latent dimensionalities. Black dots indicate dimensionalities where PLDS with 100 ms history is significantly better than GLDS (p < 0.05, pairwise comparisons of trials). D) As C, but using AUC to quantify prediction performance. The ordering of the methods (at the optimal dimensionality) is similar, but there is no advantage of PLDS for higher-dimensional models.

To ensure that this difference in performance is not due to the GLM over-fitting the terms $b_t$ (which have $q \times T$ parameters for the GLM, but only $p \times T$ parameters for PLDS), we fitted both GLMs and PLDS without those filters. In this case, the prediction performance of both models decreased slightly, but the latent variable models still had substantially better prediction performance. Our performance metric based on the mean-squared error is sensitive both to the prediction of which bins contain spikes and to how many they contain. To quantify the accuracy with which our models predicted only the absence or presence of spikes, we calculated the area under the curve (AUC) of the receiver operating characteristic [7]. As can be seen in Fig. 1B, the PLDS outperformed the GLMs over all choices of the regularization parameter $\eta_1$. Next, we investigated whether a more realistic spiking noise model would further improve the performance of the dynamical system model, and how this would depend on the latent dimensionality $p$. We therefore compared our models (GPFA, GLDS, PLDS, PLDS with 100 ms history) for different choices of the latent dimensionality $p$.
When quantifying prediction performance using the mean-squared error, we found that for all four models, prediction performance on the test set increased strongly with dimensionality for small dimensions, but plateaued at about 8 to 10 dimensions (see Fig. 1C). Thus, of the models considered here, a low-dimensional latent variable provides the best fit to the data.

Figure 2: Temporal structure of cross-correlations. A) Average temporal cross-correlation in four groups of neurons (color-coded from most to least correlated), and comparison with correlations captured by the dynamical system models with Gaussian, Poisson or history-dependent Poisson noise. All three model correlations agree well with the data. B) Comparison of GLMs with differing history-dependence with the cortical recordings; the correlations of the models differ markedly from those of the data, and do not have a peak at zero time-lag.

We also found that models with the more realistic spiking noise model (PLDS, and PLDS 100 ms) had a small but consistent performance benefit over the computationally more efficient Gaussian models (GLDS, GPFA). However, for the dataset and comparison considered here (which was based on predicting the mean activity averaged over all possible spiking histories), we found only a small advantage of also adding single-neuron dynamics (i.e. the spike-history filters in $D$) to the spiking noise model. If we compared the models on their ability to predict population activity at the next time-step from the observed population history, single-neuron filters did have an effect. In this prediction task, PLDS with history filters performed best, in particular better than GLMs.
When using AUC rather than mean-squared error to quantify prediction performance, we found similar results: low-dimensional models showed the best performance, spiking models slightly outperformed Gaussian ones, and adding single-neuron dynamics yielded only a small benefit. In addition, when using AUC, the performance benefit of PLDS over GLDS was smaller, and was significant only at those state-dimensionalities for which overall prediction performance was best. Finally, both GPFA and GLDS at $p = 5$ outperformed all GLMs, both when using AUC and when using mean-squared error. Thus, all four of our latent variable models provided superior fits to the dataset compared with GLMs.

3.2 Reproducing the correlations of cortical population activity

In the introduction, we argued that dynamical system models would be more appropriate for capturing the typical temporal structure of cross-neural correlations in cortical multi-cell recordings. We explicitly tested this claim on our cortical recordings. First, we subtracted the time-varying mean firing rate (PSTH) of each neuron to eliminate correlations induced by similarity in mean firing rates. Then, we calculated time-lagged cross-correlations for each pair of neurons, using 10 ms bins. For display purposes, we divided neurons into 4 groups (color-coded in Fig. 2) according to their total correlation (using summed correlation coefficients with all other neurons), and calculated the average pairwise correlation in each group. Fig. 2A shows the resulting average time-lagged correlations, and shows that the dynamical system models accurately capture this aspect of the correlation structure of the data. In contrast, Fig. 2B shows that the temporal correlations of the GLM differ markedly from the real data¹. As mentioned before, this GLM was also fit at 10 ms resolution, leaving open the possibility that fitting it at a finer temporal resolution would yield samples which more closely reflect the recorded correlations.
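The cross-correlation analysis described above can be sketched on toy data in which two neurons share an instantaneous latent rate fluctuation: after PSTH subtraction, the lagged correlation peaks at zero lag, as in Fig. 2A. All simulation parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 40, 120                       # trials, time bins (10 ms bins assumed)
shared = rng.standard_normal((N, T)) # shared instantaneous latent fluctuation
lam = np.exp(-2.0 + shared)          # common rate modulation for both neurons
y1 = rng.poisson(lam).astype(float)
y2 = rng.poisson(lam).astype(float)

r1 = y1 - y1.mean(axis=0)            # subtract PSTH (trial-averaged rate)
r2 = y2 - y2.mean(axis=0)

def lagged_corr(a, b, lag):
    # Correlation between a at time t and b at time t + lag, pooled over trials.
    if lag >= 0:
        x, z = a[:, :T - lag], b[:, lag:]
    else:
        x, z = a[:, -lag:], b[:, :T + lag]
    return np.corrcoef(x.ravel(), z.ravel())[0, 1]

lags = list(range(-10, 11))          # lags in bins: -100 ms ... +100 ms
xcorr = [lagged_corr(r1, r2, l) for l in lags]
```

Because the shared fluctuation is instantaneous and white in time, `xcorr` is near zero at all non-zero lags and peaks sharply at zero lag, which is the signature the GLM samples in Fig. 2B fail to reproduce.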
3.3 Reproducing the distribution of spike counts across the population

In the above, we showed that the PLDS model outperforms both Gaussian models and GLMs with respect to our performance metric, and that samples from both dynamical systems accurately capture the temporal correlation structure of the data. Finally, we looked at an aggregate measure of population activity, namely the distribution of population spike counts, i.e. the distribution of the total number of spikes across the population per time bin. This distribution is influenced both by the single-neuron spike-count distributions and by second- and higher-order correlations across neurons.

¹We used $\eta_1 = 0$, i.e. no regularization, for this figure; results with $\eta_1$ optimized for prediction performance vastly underestimate the correlations in the data.

Figure 3: Modeling population spike counts. Distribution of the population spike counts, and comparison with distributions from PLDS, GLDS and two versions of the GLM with 150 ms history dependence (GLM with no regularization, GLM2 with optimal sparsity).

Fig. 3 shows that the PLDS model accurately reproduces the spike-count distribution in the data, whereas the other two models do not. The GLDS model underestimates the frequency of high spike counts, despite accurately matching both the mean and the variance of the distribution. For the GLM (using 150 ms history, and either no regularization or optimal regularization), the frequency of rare events is either over- or under-estimated. This could be a further indication that the GLM does not fully capture the fact that variability is shared across many cells in the population.

4 Discussion

We explored a statistical model of cortical population recordings based on a latent dynamical system with count-process observations.
We argued that such a model provides a more natural modelling choice than coupled spike-response GLMs for cortical array recordings; and indeed, this model fit our motor-cortical multi-unit recordings better, and more faithfully reproduced the temporal structure of cross-neural correlations. GLMs have many attractive properties, and given the flexibility of the model class, it is impossible to rule out that some coupled GLM with finer temporal resolution, possibly nonlinear history dependencies, and cleverly chosen regularization would yield better cross-validation performance. Here we argued that latent variable models yield a more appropriate model of cross-neural correlations with zero-lag peaks: in GLMs, one has to use a fine discretization of the time axis (which can be computationally intensive) or work in continuous time to achieve this. Thus, GLMs might constitute good point-process models at fine time-scales, but arguably not the right count-process model for neural recordings at coarser time-scales. We also showed that a model with count-process observations yields better fits to our data than ones with a Gaussian noise model, and that it has a more realistic distribution of population spike counts. Given that spiking data is discrete and therefore non-Gaussian, this might not seem surprising. However, it is important to note that the Gaussian model has free parameters for the single-neuron variability, whereas the conditional variance of the Poisson model is constrained to equal the mean. For data in which this assumption is invalid, other count models, such as a negative binomial distribution, might be more appropriate. In addition, fitting the PLDS model requires simplifying approximations, and these approximations could offset any gain in prediction performance.
As measured by our co-smoothing metrics, the performance advantage of our count-process model over the Gaussian noise model was small, and the question of whether this advantage would justify the considerable additional computational cost of the count-process model will depend on the application at hand. In addition, any comparison of statistical models depends on the data used, as different methods are appropriate for datasets with different properties. For the recordings we considered here, a dynamical system model with count-process observations worked best, but there will be datasets for which either GLMs, or GLDS, or GPFA provide the most appropriate model. Finally, the choice of the most appropriate model depends on the analysis or prediction question of interest. While we used a co-smoothing metric to quantify model performance, different models might be more suitable for decoding reaching movements from population activity [11], or for inferring the underlying anatomical connectivity from extracellular recordings.

Acknowledgements

We acknowledge support from the Gatsby Charitable Foundation, an EU Marie Curie Fellowship to JHM, EPSRC EP/H019472/1 to JPC, the Defense Advanced Research Projects Agency (DARPA) through "Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR; N66001-10-C-2010)", NIH CRCNS R01NS054283 to KVS and MS, as well as NIH Pioneer 1DP1OD006409 to KVS.

References

[1] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[2] G. Santhanam, B. M. Yu, V. Gilja, S. I. Ryu, A. Afshar, M. Sahani, and K. V. Shenoy. Factor-analysis methods for higher-performance neural prostheses. J Neurophysiol, 102(2):1315–1330, 2009.
[3] E. S. Chornoboy, L. P. Schramm, and A. F. Karr. Maximum likelihood identification of neural point process systems. Biological Cybernetics, 59(4):265–275, 1988.
[4] P. McCullagh and J. Nelder. Generalized Linear Models. Chapman and Hall, London, 1989.
[5] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network, 15(4):243–262, 2004.
[6] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ Press, 2004.
[7] W. Truccolo, L. R. Hochberg, and J. P. Donoghue. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nat Neurosci, 13(1):105–111, 2010.
[8] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol, 102(1):614–635, 2009.
[9] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Comput, 11(2):305–345, 1999.
[10] A. C. Smith and E. N. Brown. Estimating a state-space model from point process observations. Neural Comput, 15(5):965–91, 2003.
[11] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski. Population decoding of motor cortical activity using a generalized linear model with hidden states. J Neurosci Methods, 189(2):267–280, 2010.
[12] J. E. Kulkarni and L. Paninski. Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems, 18(4):375–407, 2007.
[13] M. Vidne, Y. Ahmadian, J. Shlens, J. W. Pillow, J. Kulkarni, E. J. Chichilnisky, E. P. Simoncelli, and L. Paninski. A common-input model of a complete network of ganglion cells in the primate retina. In Computational and Systems Neuroscience, 2010.
[14] M. M. Churchland, B. M. Yu, M. Sahani, and K. V. Shenoy. Techniques for extracting single-trial activity patterns from large-scale neural recordings. Current Opinion in Neurobiology, 17(5):609–618, 2007.
[15] D. Y. Tso, C. D. Gilbert, and T. N. Wiesel. Relationships between horizontal interactions and functional architecture in cat striate cortex revealed by cross-correlation analysis.
J Neurosci, 6(4):1160–1170, 1986.
[16] A. Jackson, V. J. Gee, S. N. Baker, and R. N. Lemon. Synchrony between neurons with similar muscle fields in monkey motor cortex. Neuron, 38(1):115–125, 2003.
[17] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Comput, 18(1):80–118, 2006.
[18] B. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. In Advances in Neural Information Processing Systems, volume 18, pages 1545–1552. MIT Press, Cambridge, 2006.
[19] J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Inferring neural firing rates from spike trains using Gaussian processes. Advances in Neural Information Processing Systems, 20:329–336, 2008.
[20] U. T. Eden, L. M. Frank, R. Barbieri, V. Solo, and E. N. Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput, 16(5):971–98, 2004.
[21] B. Yu, J. Cunningham, K. Shenoy, and M. Sahani. Neural decoding of movements: From linear to nonlinear trajectory models. In Neural Information Processing, pages 586–595. Springer, 2008.
[22] L. Paninski, Y. Ahmadian, D. G. Ferreira, S. Koyama, K. Rahnama Rad, M. Vidne, J. Vogelstein, and W. Wu. A new look at state-space models for neural data. J Comput Neurosci, 29(1-2):107–126, 2010.
[23] Y. Ahmadian, J. W. Pillow, and L. Paninski. Efficient Markov chain Monte Carlo methods for decoding neural spike trains. Neural Comput, 23(1):46–96, 2011.
[24] G. Andrew and J. Gao. Scalable training of L1-regularized log-linear models. In Proceedings of the 24th International Conference on Machine Learning, pages 33–40. ACM, 2007.
[25] E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–12, 2006.
[26] T. D. Wickens. Elementary Signal Detection Theory.
Oxford University Press, 2002.
An Exact Algorithm for F-Measure Maximization

Krzysztof Dembczyński, Institute of Computing Science, Poznań University of Technology, Poznań, 60-695 Poland, kdembczynski@cs.put.poznan.pl
Willem Waegeman, Mathematical Modelling, Statistics and Bioinformatics, Ghent University, Ghent, 9000 Belgium, willem.waegeman@ugent.be
Weiwei Cheng, Mathematics and Computer Science, Philipps-Universität Marburg, Marburg, 35032 Germany, cheng@mathematik.uni-marburg.de
Eyke Hüllermeier, Mathematics and Computer Science, Philipps-Universität Marburg, Marburg, 35032 Germany, eyke@mathematik.uni-marburg.de

Abstract

The F-measure, originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction. Optimizing this measure remains a statistically and computationally challenging problem, since no closed-form maximizer exists. Current algorithms are approximate and typically rely on additional assumptions regarding the statistical distribution of the binary response variables. In this paper, we present an algorithm which is not only computationally efficient but also exact, regardless of the underlying distribution. The algorithm requires only a quadratic number of parameters of the joint distribution (with respect to the number of binary responses). We illustrate its practical performance by means of experimental results for multi-label classification.

1 Introduction

While being rooted in information retrieval [1], the so-called F-measure is nowadays routinely used as a performance metric for different types of prediction problems, including binary classification, multi-label classification (MLC), and certain applications of structured output prediction, like text chunking and named entity recognition.
Compared to measures like error rate in binary classification and Hamming loss in MLC, it enforces a better balance between performance on the minority and the majority class, and is therefore more suitable in the case of imbalanced data. Given a prediction $h = (h_1, \dots, h_m) \in \{0,1\}^m$ of an $m$-dimensional binary label vector $y = (y_1, \dots, y_m)$ (e.g., the class labels of a test set of size $m$ in binary classification, or the label vector associated with a single instance in MLC), the F-measure is defined as follows:

$$F(y, h) = \frac{2 \sum_{i=1}^m y_i h_i}{\sum_{i=1}^m y_i + \sum_{i=1}^m h_i} \in [0, 1], \qquad (1)$$

where $0/0 = 1$ by definition. This measure essentially corresponds to the harmonic mean of precision (prec) and recall (rec):

$$\mathrm{prec}(y, h) = \frac{\sum_{i=1}^m y_i h_i}{\sum_{i=1}^m h_i}, \qquad \mathrm{rec}(y, h) = \frac{\sum_{i=1}^m y_i h_i}{\sum_{i=1}^m y_i}.$$

One can generalize the F-measure to a weighted harmonic average of these two values, but for the sake of simplicity, we stick to the unweighted mean, which is often referred to as the F1-score or the F1-measure.

Despite its popularity in experimental settings, only a few methods for training classifiers that directly optimize the F-measure have been proposed so far. In binary classification, the existing algorithms are extensions of support vector machines [2, 3] or logistic regression [4]. However, the most popular methods, including [5], rely on explicit threshold adjustment. Some algorithms have also been proposed for structured output prediction [6, 7, 8] and MLC [9, 10, 11]. In these two application domains, three different aggregation schemes of the F-measure can be distinguished, namely instance-wise, micro-, and macro-averaging. One should carefully distinguish these versions, as algorithms optimized for a given objective usually perform suboptimally for other (target) evaluation measures. All of the above algorithms intend to optimize the F-measure during the training phase.
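As a concrete illustration of Eq. (1), including its $0/0 = 1$ convention, a minimal sketch:

```python
import numpy as np

def f_measure(y, h):
    # F(y, h) = 2 * sum(y_i h_i) / (sum(y_i) + sum(h_i)), with 0/0 = 1.
    y, h = np.asarray(y), np.asarray(h)
    denom = y.sum() + h.sum()
    return 1.0 if denom == 0 else 2.0 * (y * h).sum() / denom
```

For example, `f_measure([1, 1, 0], [1, 0, 0])` gives 2/3, the harmonic mean of precision 1 and recall 1/2, and the all-zero case returns 1 by the convention in the text.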
Conversely, in this article we investigate the orthogonal problem of inference from a probabilistic model. Modeling the ground truth as a random variable $Y$, i.e., assuming an underlying probability distribution $p(Y)$ on $\{0,1\}^m$, the prediction $h^*_F$ that maximizes the expected F-measure is given by

$$h^*_F = \arg\max_{h \in \{0,1\}^m} \mathbb{E}_{y \sim p(Y)}\left[F(y, h)\right] = \arg\max_{h \in \{0,1\}^m} \sum_{y \in \{0,1\}^m} p(Y = y)\, F(y, h). \qquad (2)$$

As discussed in Section 2, this setting was mainly examined before by [12], under the assumption of independence of the $Y_i$, i.e., $p(Y = y) = \prod_{i=1}^m p_i^{y_i} (1 - p_i)^{1 - y_i}$ with $p_i = p(Y_i = 1)$. Indeed, finding the maximizer (2) is in general a difficult problem. Apparently, there is no closed-form expression, and a brute-force search is infeasible (it would require checking all $2^m$ combinations of the prediction vector $h$). At first sight, it also seems that information about the entire joint distribution $p(Y)$ is needed to maximize the F-measure. Yet, as will be shown in this paper, the problem can be solved more efficiently. In Section 3, we present a general algorithm for maximizing the F-measure that requires only $m^2 + 1$ parameters of the joint distribution. If these parameters are given, the exact solution can be obtained in time $o(m^3)$. This result holds regardless of the underlying distribution. In particular, unlike algorithms such as [12], we do not require independence of the binary response variables (labels). While being natural for problems like binary classification, this assumption is indeed not tenable in domains like MLC and structured output prediction. A discussion of existing methods for F-measure maximization, along with results indicating their shortcomings, is provided in Section 2. An experimental comparison in the context of MLC is presented in Section 4.

2 Existing Algorithms for F-Measure Maximization

Current algorithms for solving (2) make different assumptions to simplify the problem.
First of all, the algorithms operate on a constrained hypothesis space, sometimes justified by theoretical arguments. Secondly, they guarantee optimality only for specific distributions p(Y).

2.1 Algorithms Based on Label Independence

By assuming independence of the random variables Y_1, ..., Y_m, the optimization problem (2) can be substantially simplified. It has been shown independently in [13] and [12] that the optimal solution always contains the labels with the highest marginal probabilities p_i, or no labels at all. As a consequence, only a few hypotheses h (m + 1 instead of 2^m) need to be examined, and the computation of the expected F-measure can be performed efficiently. Lewis [13] showed that the expected F-measure can be approximated by the following expression under the assumption of independence:¹

E_{y \sim p(Y)}[F(y, h)] \simeq \begin{cases} \prod_{i=1}^m (1 - p_i), & \text{if } h = 0 \\[4pt] \dfrac{2 \sum_{i=1}^m p_i h_i}{\sum_{i=1}^m p_i + \sum_{i=1}^m h_i}, & \text{if } h \neq 0 \end{cases}

This approximation is exact for h = 0, while for h ≠ 0, an upper bound on the error can easily be determined [13]. Jansche [12], however, has proposed an exact procedure, called the maximum expected utility framework (MEUF), that takes the marginal probabilities p_1, p_2, ..., p_m as inputs and solves (2) in time O(m^4). He noticed that (2) can be solved via an outer and an inner maximization. Namely, (2) can be transformed into an inner maximization

h^{(k)*} = \arg\max_{h \in H_k} E_{y \sim p(Y)}[F(y, h)],   (3)

where H_k = \{h \in \{0,1\}^m \mid \sum_{i=1}^m h_i = k\}, followed by an outer maximization

h^*_F = \arg\max_{h \in \{h^{(0)*}, \ldots, h^{(m)*}\}} E_{y \sim p(Y)}[F(y, h)].   (4)

The outer maximization (4) can be done by simply checking all m + 1 possibilities. The main effort is then devoted to solving the inner maximization (3). According to Theorem 2.1, to solve (3) for a given k, we need to check only one vector h, in which h_i = 1 for the k labels with the highest marginal probabilities p_i.

¹In the following, we denote by 0 and 1 the vectors containing all zeros and all ones, respectively.
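Under the independence assumption, Lewis' approximation and the restriction to m + 1 candidate vectors can be sketched as follows (helper names are ours):

```python
def lewis_expected_f(p_marg, h):
    """Lewis' approximation of E[F(Y, h)] under label independence,
    given marginal probabilities p_marg[i] = P(Y_i = 1)."""
    if not any(h):
        val = 1.0
        for p in p_marg:
            val *= (1.0 - p)
        return val  # exact for h = 0
    num = 2 * sum(p * hi for p, hi in zip(p_marg, h))
    den = sum(p_marg) + sum(h)
    return num / den

def meuf_style_candidates(p_marg):
    """Candidate predictions h(0), ..., h(m): for each k, set h_i = 1 for
    the k labels with the highest marginals (Theorem 2.1)."""
    m = len(p_marg)
    order = sorted(range(m), key=lambda i: -p_marg[i])
    cands = []
    for k in range(m + 1):
        h = [0] * m
        for i in order[:k]:
            h[i] = 1
        cands.append(h)
    return cands
```

The outer maximization then just scores each of the m + 1 candidates (with Lewis' approximation, or with Jansche's exact expectation) and keeps the best one.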
The remaining problem is the computation of the expected F-measure in (3). This expectation cannot be computed naively, as the sum runs over exponentially many terms. But the F-measure is a function of bounded integer counts, so it can only assume a much smaller number of distinct values: the cardinality of its domain is exponential in m, but the cardinality of its range is polynomial in m, so the expectation can be computed in polynomial time. As a result, Jansche [12] obtains a procedure that is cubic in m for computing (3). He also presents approximate variants of this procedure, reducing its complexity from cubic to quadratic or even to linear. The results of the quadratic-time approximation are, according to [12], almost indistinguishable in practice from the exact algorithm; but still, the overall complexity of the approach is O(m^3).

If the independence assumption is violated, the above methods may produce predictions that are far away from the optimal one. The following result shows this concretely for the method of Jansche.²

Proposition 2.1. Let h_J be the vector of predictions obtained by MEUF. Then the worst-case regret converges to one in the limit of m, i.e.,

\lim_{m \to \infty} \sup_p \left( E_Y \left[ F(Y, h^*_F) - F(Y, h_J) \right] \right) = 1,

where the supremum is taken over all possible distributions p(Y). Additionally, one can easily construct families of probability distributions that exhibit a relatively fast convergence rate as a function of m.

2.2 Algorithms Based on the Multinomial Distribution

Solving (2) becomes straightforward in the case of a specific distribution in which the probability mass is distributed over vectors y containing only a single positive label, i.e., \sum_{i=1}^m y_i = 1, corresponding to the multinomial distribution. This case was studied in [14] in the setting of so-called non-deterministic classification.

Theorem 2.2 (Del Coz et al. [14]). Denote by y(i) a vector for which y_i = 1 and all other entries are zero.
Assume that p(Y) is a joint distribution such that p(Y = y(i)) = p_i. The maximizer h^*_F of (2) consists of the k labels with the highest marginal probabilities, where k is the first integer for which

\sum_{j=1}^{k} p_j \ge (1 + k) \, p_{k+1};

if there is no such integer, then h = 1.

2.3 Algorithms Based on Thresholding on Ordered Marginal Probabilities

Since all the methods so far rely on the fact that the optimal solution contains ones for the labels with the highest marginal probabilities (or consists of a vector of zeros), one may expect that thresholding on the marginal probabilities (h_i = 1 for p_i ≥ θ, and h_i = 0 otherwise) will provide a solution to (2) in general. Obviously, finding an optimal threshold θ requires access to the entire joint distribution. However, this is not the main problem here, since in the next section we will show that only a polynomial number of parameters of the joint distribution is needed. What is more interesting is the observation that the F-maximizer is in general not consistent with the order of the marginal label probabilities. In fact, the regret can be substantial, as shown by the following result.

Proposition 2.3. Let h_T be the vector of predictions obtained by putting a threshold on the sorted marginal probabilities in the optimal way. Then the worst-case regret is lower bounded by

\sup_p \left( E_Y \left[ F(Y, h^*_F) - F(Y, h_T) \right] \right) \ge \max\!\left(0, \; \frac{1}{6} - \frac{2}{m + 4}\right),

where the supremum is taken over all possible distributions p(Y).³

This is a rather surprising result in light of the existence of many algorithms that rely on finding a threshold for maximizing the F-measure [5, 9, 10]. While justified by Theorems 2.1 and 2.2 for specific applications, this approach does not yield optimal predictions in general.

²Some of the proofs have been attached to the paper as supplementary material and will also be provided later with the extended version of the paper.
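Theorem 2.2 translates directly into a short procedure; a sketch (assuming `p` holds the multinomial probabilities p_i, not necessarily sorted):

```python
def multinomial_f_maximizer(p):
    """F-maximizer under a multinomial distribution over single-label
    vectors, per Theorem 2.2: p[i] = P(Y = y(i))."""
    m = len(p)
    order = sorted(range(m), key=lambda i: -p[i])  # labels by decreasing p_i
    ps = [p[i] for i in order]
    cum = 0.0
    k = m  # if the condition never holds, predict all labels (h = 1)
    for j in range(m - 1):
        cum += ps[j]
        if cum >= (1 + (j + 1)) * ps[j + 1]:  # sum_{j<=k} p_j >= (1+k) p_{k+1}
            k = j + 1
            break
    h = [0] * m
    for i in order[:k]:
        h[i] = 1
    return h
```

For p = (0.7, 0.2, 0.1), the condition already holds at k = 1 (0.7 ≥ 2·0.2), so only the most probable label is predicted; for the flatter p = (0.4, 0.3, 0.3) it never holds and all labels are predicted.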
3 An Exact Algorithm for F-Measure Maximization

We now introduce an exact and efficient algorithm for computing the F-maximizer without using any additional assumption on the probability distribution p(Y). While adopting the idea of decomposing the problem into an outer and an inner maximization, our algorithm differs from Jansche's in the way the inner maximization is solved. As a key element, we consider equivalence classes for the labels in terms of the number of ones in the vectors h and y. The optimization of the F-measure can be substantially simplified by using these equivalence classes, since h and y then only appear in the numerator of the objective function. First, we show that only m^2 + 1 parameters of the joint distribution p(Y) are needed to compute the F-maximizer.

Theorem 3.1. Let s_y = \sum_{i=1}^m y_i. The solution of (2) can be computed by solely using p(Y = 0) and the values

p_{is} = p(Y_i = 1, s_y = s), \quad i, s \in \{1, \ldots, m\},

which constitute an m × m matrix P.

Proof. The inner optimization problem (3) can be formulated as follows:

h^{(k)*} = \arg\max_{h \in H_k} E_{y \sim p(Y)}[F(y, h)] = \arg\max_{h \in H_k} \sum_{y \in \{0,1\}^m} p(y) \, \frac{2 \sum_{i=1}^m y_i h_i}{s_y + k}.

The sums can be swapped, resulting in

h^{(k)*} = \arg\max_{h \in H_k} 2 \sum_{i=1}^m h_i \sum_{y \in \{0,1\}^m} \frac{p(y) \, y_i}{s_y + k}.   (5)

Furthermore, one can sum up the probabilities p(y) for all y with an equal value of s_y. By using

p_{is} = \sum_{y \in \{0,1\}^m : \, s_y = s} y_i \, p(y),

one can transform (5) into the following expression:

h^{(k)*} = \arg\max_{h \in H_k} 2 \sum_{i=1}^m h_i \sum_{s=1}^m \frac{p_{is}}{s + k}.   (6)

As a result, one does not need the whole distribution to solve (3), but only the values p_{is}, which can be given in the form of an m × m matrix P with entries p_{is}. For the special case k = 0, we have h^{(k)*} = 0 and E_{y \sim p(Y)}[F(y, 0)] = p(Y = 0).

³Finding the exact value of the supremum is an interesting open question.

Algorithm 1: General F-measure Maximizer
INPUT: matrix P and probability p(Y = 0)
define matrix W with elements given by Eq.
(7); compute F = PW
for k = 1 to m do
    solve the inner optimization problem (3), reformulated as
        h^{(k)*} = \arg\max_{h \in H_k} 2 \sum_{i=1}^m h_i f_{ik},
    by setting h_i = 1 for the top k elements in the k-th column of matrix F, and h_i = 0 for the rest;
    store the value E_{y \sim p(Y)}[F(y, h^{(k)*})] = 2 \sum_{i=1}^m h^{(k)*}_i f_{ik};
end for
for k = 0, take h^{(k)*} = 0 and E_{y \sim p(Y)}[F(y, 0)] = p(Y = 0);
solve the outer optimization problem (4):
    h^*_F = \arg\max_{h \in \{h^{(0)*}, \ldots, h^{(m)*}\}} E_{y \sim p(Y)}[F(y, h)];
return h^*_F and E_{y \sim p(Y)}[F(y, h^*_F)];

If the matrix P is given, the solution of (2) is straightforward. To simplify the notation, let us introduce an m × m matrix W with elements

w_{sk} = \frac{1}{s + k}, \quad s, k \in \{1, \ldots, m\}.   (7)

The resulting algorithm, referred to as the General F-measure Maximizer (GFM), is summarized in Algorithm 1, and its time complexity is analyzed in the following theorem.

Theorem 3.2. Algorithm 1 solves problem (2) in time o(m^3), assuming that the matrix P of m^2 parameters and p(Y = 0) are given.

Proof. We can notice in (6) that the sum s + k assumes at most m + 1 values (it varies from s to s + m). By introducing the matrix W with elements (7), we can simplify (6) to

h^{(k)*} = \arg\max_{h \in H_k} 2 \sum_{i=1}^m h_i f_{ik},   (8)

where the f_{ik} are the elements of the matrix F = PW. To solve (8), it is enough to find the top k elements (i.e., the elements with the highest values) in the k-th column of matrix F, which can be carried out in linear time [15]. The solution of the outer optimization problem (4) is then straightforward. Consequently, the complexity of the algorithm is dominated by a matrix multiplication, which is solved naively in O(m^3), but faster algorithms working in O(m^{2.376}) are known [16].⁴

Let us briefly discuss the properties of our algorithm in comparison to the other algorithms discussed in Section 2. First of all, MEUF is characterized by a much higher time complexity, being O(m^4) for the exact version. The recommended approximate variant reduces this complexity to O(m^3).
In turn, the GFM algorithm has a complexity of o(m^3). In addition, let us remark that this complexity can be further decreased if the number of distinct values of s_y with non-zero probability mass is smaller than m. Moreover, the MEUF framework will not deliver an exact F-maximizer if the assumption of independence is violated. On the other hand, MEUF relies on a smaller number of parameters (m values representing the marginal probabilities). Our approach needs m^2 + 1 parameters, but then computes the maximizer exactly. Since estimating a larger number of parameters is statistically more difficult, it is a priori unclear which method performs better in practice.

Our algorithm can also be tailored to finding an optimal threshold. It is then simplified, since the number of hypotheses is constrained. Instead of finding the top k elements in the k-th column, it is enough to rely on the order of the marginal probabilities p_i = \sum_{s=1}^m p_{is}. As a result, there is no need to compute the entire matrix F; only the elements that correspond to the k highest marginal probabilities for each column k are needed. Of course, the thresholding can be further simplified by verifying only a small number t < m of thresholds.

⁴The complexity of the Coppersmith–Winograd algorithm [16] is mainly of theoretical significance, since in practice this algorithm outperforms the naïve method only for huge matrices.
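Given the matrix P and p(Y = 0), Algorithm 1 fits in a few lines of NumPy. The following is a minimal sketch of the pseudocode above (it uses a full sort per column instead of linear-time selection, and breaks ties arbitrarily):

```python
import numpy as np

def gfm(P, p_zero):
    """General F-measure Maximizer (Algorithm 1).

    P      : m x m matrix with P[i, s-1] = p(Y_i = 1, s_y = s)
    p_zero : p(Y = 0)
    Returns the maximizer h* and its expected F-measure.
    """
    m = P.shape[0]
    s = np.arange(1, m + 1)
    W = 1.0 / (s[:, None] + s[None, :])        # W[s-1, k-1] = 1/(s+k), Eq. (7)
    F = P @ W                                  # F[i, k-1] = f_ik
    best_h = np.zeros(m, dtype=int)            # k = 0 case: h = 0
    best_val = p_zero                          # E[F(Y, 0)] = p(Y = 0)
    for k in range(1, m + 1):
        col = F[:, k - 1]
        top = np.argsort(-col)[:k]             # top-k entries of column k
        val = 2.0 * col[top].sum()             # Eq. (8)
        if val > best_val:
            best_h = np.zeros(m, dtype=int)
            best_h[top] = 1
            best_val = val
    return best_h, best_val
```

For m = 2 with p((0,0)) = 0.1, p((1,0)) = 0.5, p((0,1)) = 0.1, p((1,1)) = 0.3, the matrix is P = [[0.5, 0.3], [0.1, 0.3]] and p(Y = 0) = 0.1; the optimum value is 0.7, matching a brute-force evaluation of (2).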
To this end, we train a classifier h(x) on a training set \{(x_i, y_i)\}_{i=1}^N and perform inference for a given test vector x so as to deliver an optimal prediction under the F-measure (1). Thus, we optimize the performance for each instance individually (the instance-wise F-measure), in contrast to macro- and micro-averaging of the F-measure. We follow an approach similar to Conditional Random Fields (CRFs) [17, 18], which estimates the joint conditional distribution p(Y | x). This approach has the additional advantage that one can easily sample from the estimated distribution. The underlying idea is to repeatedly apply the product rule of probability to the joint distribution of the labels Y = (Y_1, ..., Y_m):

p(Y = y \mid x) = \prod_{k=1}^m p(Y_k = y_k \mid x, y_1, \ldots, y_{k-1})   (9)

This approach, referred to as Probabilistic Classifier Chains (PCC), has proved to yield state-of-the-art performance in MLC [19]. Learning in this framework can be considered as a procedure that constructs probabilistic classifiers for estimating p(Y_k = y_k | x, y_1, ..., y_{k-1}), independently for each k = 1, ..., m. To sample from the conditional joint distribution p(Y | x), one follows the chain and picks the value of each label y_k by tossing a biased coin with the probability given by the k-th classifier. Based on a sample of observations generated in this way, our GFM algorithm can be used to perform optimal inference under the F-measure.

In the experiments, we train PCC using linear regularized logistic regression. By plugging the log-linear model into (9), it can be shown that pairwise dependencies between labels y_i and y_j can be modeled. We tune the regularization parameter using 3-fold cross-validation. To perform inference, we draw for each test example a sample of 200 observations from the estimated conditional distribution. We then apply five inference methods.
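The inputs that GFM requires, the matrix P and p(Y = 0), can be estimated from such a sample by simple counting; a minimal sketch (assuming the sample is given as a 0/1 array, one drawn label vector per row):

```python
import numpy as np

def estimate_gfm_inputs(samples):
    """Estimate GFM's inputs from a sample of label vectors drawn from
    p(Y | x), e.g. via a probabilistic classifier chain.

    samples : (n, m) 0/1 array.
    Returns (P, p_zero) with P[i, s-1] = empirical p(Y_i = 1, s_y = s).
    """
    samples = np.asarray(samples)
    n, m = samples.shape
    s_y = samples.sum(axis=1)                  # number of ones per sample
    p_zero = np.mean(s_y == 0)
    P = np.zeros((m, m))
    for s in range(1, m + 1):
        rows = samples[s_y == s]
        if len(rows):
            # joint frequency of (Y_i = 1 and s_y = s) over all n samples
            P[:, s - 1] = rows.sum(axis=0) / n
    return P, p_zero
```

The estimates plugged into GFM are consistent as the sample grows, which is why GFM's disadvantage of needing m^2 + 1 parameters shrinks with increasing sample size in the synthetic experiments below.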
The first one (H) estimates the marginal probabilities p_i(x) and predicts 1 for labels with \hat{p}_i(x) ≥ 0.5; this is an optimal strategy for the Hamming loss. The second method (MEUF) applies the MEUF procedure to the estimates \hat{p}_i(x). If the labels are independent, this method computes the F-maximizer exactly. As a third method, we use the approximate cubic-time variant of MEUF with the parameters suggested in the original paper [12]. Finally, we use GFM and its variant that finds the optimal threshold (GFM-T).

Before showing the results of PCC on benchmark datasets, let us discuss results for two synthetic models, one with independent and another one with dependent labels. Plots and a description of the models are given in Fig. 1. As can be observed, MEUF performs best for independent labels, while GFM approaches its performance as the sample size increases. This is coherent with our theoretical analysis, since GFM needs to estimate more parameters. However, in the case of dependent labels, MEUF performs poorly, even for a larger sample size, since the underlying assumption is not satisfied. Interestingly, both approximate variants perform very similarly to the original algorithms. We also see that GFM has a huge advantage over MEUF regarding the time complexity.⁵

⁵All the computations are performed on a typical desktop machine.

Figure 1: Performance under the F-measure of the inference methods GFM, its thresholding variant GFM-T, MEUF, and its approximate version MEUF Approx. [Plots omitted.] Left: performance as a function of the sample size generated from an independent distribution with p_i = 0.12 and m = 25 labels.
Center: as above, but the distribution is defined according to (9), where each p(Y_i = y_i | y_1, ..., y_{i-1}) is given by a logistic model with linear part -\frac{1}{2}(i-1) + \sum_{j=1}^{i-1} y_j. Right: running times as a function of the number of labels, with a sample size of 200. All results are averaged over 50 trials.

Table 1: Experimental results on four benchmark datasets. For each dataset, we give the number of labels (m) and the sizes of the training and test sets (in parentheses: training/test). A "–" indicates that an algorithm did not complete the computations in a reasonable amount of time (several days). In bold: the best results for a given dataset and performance measure.

SCENE: m = 6 (1211/1169)
METHOD            HAMMING  MACRO-F  MICRO-F  F       INFERENCE TIME [S]
PCC H             0.1030   0.6673   0.6675   0.5779  0.969
PCC GFM           0.1341   0.7159   0.6915   0.7101  0.985
PCC GFM-T         0.1343   0.7154   0.6908   0.7094  1.031
PCC MEUF APPROX.  0.1323   0.7131   0.6910   0.6977  1.406
PCC MEUF          0.1323   0.7131   0.6910   0.6977  1.297
BR                0.1023   0.6591   0.6602   0.5542  1.125
BR MEUF APPROX.   0.1140   0.7048   0.6948   0.6468  1.579
BR MEUF           0.1140   0.7048   0.6948   0.6468  2.094

YEAST: m = 14 (1500/917)
METHOD            HAMMING  MACRO-F  MICRO-F  F       INFERENCE TIME [S]
PCC H             0.2046   0.3633   0.6391   0.6160  3.704
PCC GFM           0.2322   0.4034   0.6554   0.6479  3.796
PCC GFM-T         0.2324   0.4039   0.6553   0.6476  3.907
PCC MEUF APPROX.  0.2295   0.4030   0.6551   0.6469  10.000
PCC MEUF          0.2292   0.4034   0.6557   0.6477  11.453
BR                0.1987   0.3349   0.6299   0.6039  0.640
BR MEUF APPROX.   0.2248   0.4098   0.6601   0.6527  7.110
BR MEUF           0.2263   0.4096   0.6591   0.6523  10.031

ENRON: m = 53 (1123/579)
METHOD            HAMMING  MACRO-F  MICRO-F  F       INFERENCE TIME [S]
PCC H             0.0471   0.1141   0.5185   0.4892  195.061
PCC GFM           0.0521   0.1618   0.5943   0.6006  194.889
PCC GFM-T         0.0521   0.1619   0.5948   0.6011  196.030
PCC MEUF APPROX.  0.0523   0.1612   0.5932   0.6007  1081.837
PCC MEUF          0.0523   0.1612   0.5932   0.6007  6676.145
BR                0.0468   0.1049   0.5223   0.4821  8.594
BR MEUF APPROX.   0.0513   0.1554   0.5969   0.5947  850.494
BR MEUF           0.0513   0.1554   0.5969   0.5947  7014.453

MEDIAMILL: m = 101 (30999/12914)
METHOD            HAMMING  MACRO-F  MICRO-F  F       INFERENCE TIME [S]
PCC H             0.0304   0.0931   0.5577   0.5429  1405.772
PCC GFM           0.0348   0.1491   0.5849   0.5734  1420.663
PCC GFM-T         0.0348   0.1499   0.5854   0.5737  1464.147
PCC MEUF APPROX.  0.0350   0.1504   0.5871   0.5740  308582.019
PCC MEUF          –        –        –        –       –
BR                0.0304   0.1429   0.5623   0.5462  207.655
BR MEUF APPROX.   0.3508   0.1917   0.5889   0.5744  258431.125
BR MEUF           –        –        –        –       –

The results on four commonly used benchmark datasets⁶ with known training and test sets are presented in Table 1, which also includes some basic statistics of these datasets. We additionally present results of the binary relevance (BR) approach, which trains an independent classifier for each label (we used the same base learner as in PCC). We also apply the MEUF method to the marginals delivered by BR; this is the best one can do if only the marginals are known. From the results on the F-measure, we can clearly state that all approaches tailored for this measure obtain better results. However, there is no clear winner among them. It seems that in practical applications, the theoretical results concerning the worst-case scenario do not directly apply. Also, the number of parameters to be estimated does not play an important role. However, GFM drastically outperforms MEUF in terms of computational cost. For the Mediamill dataset, the exact version of the MEUF algorithm did not complete the computations in a reasonable amount of time. The running times of the approximate version are already unacceptably high for this dataset. We also report results for the Hamming loss and the macro- and micro-averaged F-measure. We can see, for example, that approaches appropriate for the Hamming loss obtain the best results on this measure. The macro and micro F-measures are presented mainly as a reference. The former is computed by averaging the F-measure label-wise, while the latter concatenates all test examples and computes a single value over all predictions.
These two variants of the F-measure are not directly optimized by the algorithms used in the experiment.

⁶These datasets are taken from the MULAN (http://mulan.sourceforge.net/datasets.html) and LibSVM (http://www.csie.ntu.edu.tw/∼cjlin/libsvmtools/datasets/multilabel.html) repositories.

5 Discussion

The GFM algorithm can also be considered for maximizing the macro F-measure, for example, in a setting similar to [10], where a specific Bayesian online model is used. In order to maximize the macro F-measure, the authors sample from the graphical model to find an optimal threshold. The GFM algorithm may solve this problem optimally, since, as stated by the authors, the independence of the labels is lost after integrating out the model parameters. Theoretically, one may also consider a direct maximization of the micro F-measure with GFM, but the computational burden is rather high in this case.

Interestingly, there are no other MLC algorithms that maximize the F-measure in an instance-wise manner. We also cannot refer to other results already published in the literature, since usually only the micro- and macro-averaged F-measures are reported [20, 11]. This is rather surprising, especially since some closely related measures are often computed in an instance-wise manner in empirical studies. For example, the Jaccard distance (sometimes referred to as accuracy [21]), which differs from the F-measure by an additional term in the denominator, is commonly used in this way.

The situation is slightly different in structured output prediction, where algorithms for instance-wise maximization of the F-measure do exist. These include, for example, struct SVM [6], SEARN [8], and a specific variant of CRFs [7]. Usually, these algorithms are based on additional assumptions, like label independence in struct SVM. The GFM algorithm can also easily be tailored for maximizing the instance-wise F-measure in structured output prediction, in a similar way as presented above.
If the structured output classifier is able to model a joint distribution from which observations can easily be sampled, then the use of the algorithm is straightforward. An application of this kind is planned as future work. Surprisingly, in both papers [8] and [6], experimental results are reported in terms of the micro F-measure, although the algorithms maximize the instance-wise F-measure on the training set. Needless to say, one should not expect such an approach to result in optimal performance for the micro-averaged F-measure. Despite being related to each other, these two measures coincide only in the specific case where \sum_{i=1}^m (y_i + h_i) is constant over all test examples. The discrepancy between the measures strongly depends on the nature of the data and on the classifier used. For high variability in \sum_{i=1}^m (y_i + h_i), a significant difference between the values of these two measures is to be expected.

The use of the GFM algorithm in binary classification seems superfluous, since in this case the assumption of label independence is rather reasonable. MEUF seems to be the right choice for probabilistic classifiers, unless its application is prevented by its computational complexity. Thresholding methods [5] or learning algorithms optimizing the F-measure directly [2, 3, 4] are probably the most appropriate solutions here.

6 Conclusions

In contrast to other performance measures commonly used in experimental studies, such as misclassification error rate, squared loss, and AUC, the F-measure has so far been investigated less thoroughly from a theoretical point of view. In this paper, we analyzed the problem of optimal predictive inference from the joint distribution under the F-measure. While partial results were already known from the literature, we completed the picture by presenting the solution for the general case without any distributional assumptions.
Our GFM algorithm requires only a polynomial number of parameters of the joint distribution and delivers the exact solution in polynomial time. From a theoretical perspective, GFM should be preferred to existing approaches, which typically perform threshold maximization on marginal probabilities, often relying on the assumption of (conditional) independence of the labels.

Acknowledgments. Krzysztof Dembczyński started this work during his post-doctoral stay at Philipps-Universität Marburg, supported by the German Research Foundation (DFG), and finalized it at Poznań University of Technology under grant 91-515/DS of the Polish Ministry of Science and Higher Education. Willem Waegeman is supported as a postdoc by the Research Foundation of Flanders (FWO-Vlaanderen). Part of this work was done during his visit at Philipps-Universität Marburg. Weiwei Cheng and Eyke Hüllermeier are supported by the DFG. We also thank the anonymous reviewers for their valuable comments.

References

[1] C. J. van Rijsbergen. Foundation of evaluation. Journal of Documentation, 30(4):365–373, 1974.
[2] David R. Musicant, Vipin Kumar, and Aysel Ozgur. Optimizing F-measure with support vector machines. In FLAIRS-16, pages 356–360, 2003.
[3] Thorsten Joachims. A support vector method for multivariate performance measures. In ICML 2005, pages 377–384, 2005.
[4] Martin Jansche. Maximum expected F-measure training of logistic regression models. In HLT/EMNLP 2005, pages 736–743, 2005.
[5] Sathiya Keerthi, Vikas Sindhwani, and Olivier Chapelle. An efficient method for gradient-based adaptation of hyperparameters in SVM models. In Advances in Neural Information Processing Systems 19, 2007.
[6] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453–1484, 2005.
[7] Jun Suzuki, Erik McDermott, and Hideki Isozaki.
Training conditional random fields with multivariate evaluation measures. In ACL, pages 217–224, 2006.
[8] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75:297–325, 2009.
[9] Rong-En Fan and Chih-Jen Lin. A study on threshold selection for multi-label classification. Technical report, Department of Computer Science, National Taiwan University, 2007.
[10] Xinhua Zhang, Thore Graepel, and Ralf Herbrich. Bayesian online learning for multi-label and multi-variate performance measures. In AISTATS 2010, pages 956–963, 2010.
[11] James Petterson and Tiberio Caetano. Reverse multi-label learning. In Advances in Neural Information Processing Systems 23, pages 1912–1920, 2010.
[12] Martin Jansche. A maximum expected utility framework for binary sequence labeling. In ACL 2007, pages 736–743, 2007.
[13] David Lewis. Evaluating and optimizing autonomous text classification systems. In SIGIR 1995, pages 246–254, 1995.
[14] Juan José del Coz, Jorge Díez, and Antonio Bahamonde. Learning nondeterministic classifiers. J. Mach. Learn. Res., 10:2273–2293, 2009.
[15] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, 2nd edition. MIT Press, 2001.
[16] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3):251–280, 1990.
[17] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML 2001, pages 282–289, 2001.
[18] Nadia Ghamrawi and Andrew McCallum. Collective multi-label classification. In CIKM 2005, pages 195–200, 2005.
[19] Krzysztof Dembczyński, Weiwei Cheng, and Eyke Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML 2010, pages 279–286, 2010.
[20] Piyush Rai and Hal Daumé III. Multi-label prediction via sparse infinite CCA.
In Advances in Neural Information Processing Systems 22, pages 1518–1526, 2009.
[21] Matthew R. Boutell, Jiebo Luo, Xipeng Shen, and Christopher M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
Exploiting spatial overlap to efficiently compute appearance distances between image windows

Bogdan Alexe (ETH Zurich), Viviana Petrescu (ETH Zurich), Vittorio Ferrari (ETH Zurich)

Abstract

We present a computationally efficient technique to compute the distance of high-dimensional appearance descriptor vectors between image windows. The method exploits the relation between appearance distance and spatial overlap. We derive an upper bound on appearance distance given the spatial overlap of two windows in an image, and use it to bound the distances of many pairs between two images. We propose algorithms that build on these basic operations to efficiently solve tasks relevant to many computer vision applications, such as finding all pairs of windows between two images with distance smaller than a threshold, or finding the single pair with the smallest distance. In experiments on the PASCAL VOC 07 dataset, our algorithms accurately solve these problems while greatly reducing the number of appearance distances computed, and achieve larger speedups than approximate nearest neighbour algorithms based on trees [18] and on hashing [21]. For example, our algorithm finds the most similar pair of windows between two images while computing only 1% of all distances on average.

1 Introduction

Computing the appearance distance between two windows is a fundamental operation in a wide variety of computer vision techniques. Algorithms for weakly supervised learning of object classes [7, 11, 16] typically compare large sets of windows between images trying to find recurring patterns of appearance. Sliding-window object detectors based on kernel SVMs [13, 24] compute appearance distances between the support vectors and a large number of windows in the test image. In human pose estimation, [22] computes the color histogram dissimilarity between many candidate windows for lower and upper arms.
In image retrieval, the user can search a large image database for a query object specified by an image window [20]. Finally, many tracking algorithms [4, 5] compare a window around the target object in the current frame to all windows in a surrounding region of the next frame. In most cases one is not interested in computing the distance between all pairs of windows from two sets, but in a small subset of low distances, such as all pairs below a given threshold, or the single best pair. Because of this, computer vision researchers often rely on efficient nearest neighbour algorithms [2, 6, 10, 17, 18, 21]. Exact nearest neighbour algorithms organize the appearance descriptors into trees which can be efficiently searched [17]. However, these methods work well only for descriptors of small dimensionality n (typically n < 20), and their speedup vanishes for larger n (e.g. the popular GIST descriptor [19] has n = 960). Locality sensitive hashing (LSH [2, 10, 21]) techniques hash the descriptors into bins, so that similar descriptors are mapped to the same bins with high probability. LSH is typically used for efficiently finding approximate nearest neighbours in high dimensions [2, 6].

All the above methods consider windows only as points in appearance space. However, windows exist also as points in the geometric space defined by their 4D coordinates in the image they lie in. In this geometric space, a natural distance between two windows is their spatial overlap (fig. 1). In this paper we propose to take advantage of an important relation between the geometric and appearance spaces: the appearance distance between two windows decreases as their spatial overlap increases. We derive an upper bound on the appearance distance between two windows in the same image, given their spatial overlap (sec. 2). We then use this bound in conjunction with the triangle inequality to bound the appearance distances of many pairs of windows between two images, given the distance of just one pair. Building on these basic operations, we design algorithms to efficiently find all pairs with distance smaller than a threshold (sec. 3) and to find the single pair with the smallest distance (sec. 4). The techniques we propose reduce computation by minimizing the number of times appearance distances are computed. They are complementary to methods for reducing the cost of computing one distance, such as dimensionality reduction [15] or Hamming embeddings [14, 23]. We experimentally demonstrate in sec. 5 that the proposed algorithms accurately solve the above problems while greatly reducing the number of appearance distances computed. We compare to approximate nearest neighbour algorithms based on trees [18], as well as to the recent LSH technique [21]. The results show our techniques outperform them in the setting we consider, where the datapoints are embedded in a space with additional overlap structure.

Fig. 1: Relation between spatial overlap and appearance distance. Windows w_1, w_2 in an image I are embedded in geometric space and in appearance space. All windows overlapping more than r with w_1 are at most at distance B(r) in appearance space. The bound B(r) decreases as the overlap increases (i.e. as r decreases).

2 Relation between spatial overlap and appearance distance

Windows w in an image I are embedded in two spaces at the same time (fig. 1). In geometric space, w is represented by its 4 spatial coordinates (e.g. x, y center, width, height). The distance between two windows is defined based on their spatial overlap

o(w_1, w_2) = \frac{|w_1 \cap w_2|}{|w_1 \cup w_2|} \in [0, 1],

where |w_1 ∩ w_2| denotes the area of the intersection and |w_1 ∪ w_2| the area of the union. In appearance space, w is represented by a high-dimensional vector describing the pixel pattern inside it, as computed by a function f_app(w) : I → R^n (e.g. the GIST descriptor has n = 960 dimensions).
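The overlap measure above is the standard intersection-over-union; a minimal sketch (we assume corner coordinates (x1, y1, x2, y2) rather than the center/width/height parametrization used in the text):

```python
def overlap(w1, w2):
    """Intersection-over-union of two windows given as (x1, y1, x2, y2)."""
    ix = max(0, min(w1[2], w2[2]) - max(w1[0], w2[0]))  # intersection width
    iy = max(0, min(w1[3], w2[3]) - max(w1[1], w2[1]))  # intersection height
    inter = ix * iy
    a1 = (w1[2] - w1[0]) * (w1[3] - w1[1])
    a2 = (w2[2] - w2[0]) * (w2[3] - w2[1])
    union = a1 + a2 - inter
    return inter / union if union > 0 else 0.0
```

The value is 1 for identical windows, 0 for disjoint ones, and falls off smoothly in between, which is what makes it a natural geometric distance.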
In appearance space, two windows are compared using a distance d(f_app(w1), f_app(w2)). Two overlapping windows w1, w2 in an image I share the pixels contained in their intersection (fig. 1). The spatial overlap of the two windows correlates with the proportion of common pixels input to f_app when computing the descriptor for each window. In general, f_app varies smoothly with the geometry of w, so that windows of similar geometry are close in appearance space. Consequently, the spatial overlap o and the appearance distance d are related. In this paper we exploit this relation to derive an upper bound B(o(w1, w2)) on the appearance distance between two overlapping windows. We present here the general form of the bound B and its main properties, and explain why it is useful. In subsections 2.1 and 2.2 we derive the actual bound itself. To simplify the notation we use d(w1, w2) to denote the appearance distance d(f_app(w1), f_app(w2)). We refer to it simply as distance, and we say overlap for spatial overlap. The upper bound B is a function of the overlap o(w1, w2), and has the following property

d(w1, w2) ≤ B(o(w1, w2))   ∀ w1, w2   (1)

Moreover, B is a monotonically decreasing function

B(o1) ≤ B(o2)   ∀ o1 ≥ o2   (2)

[Fig. 2: Triangle inequality in appearance space. The triangle inequality (4) holds for any three points f_app(w1), f_app(w2) and f_app(w3) in appearance space. (a) General case; (b) Lower bound case: |d(w1, w2) − d(w2, w3)| = d(w1, w3); (c) Upper bound case: d(w1, w3) = d(w1, w2) + d(w2, w3).]

This property means B continuously decreases as overlap increases. Therefore, all pairs of windows within an overlap radius r (i.e. o(w1, w2) ≥ r) have distance below B(r) (fig. 1)

d(w1, w2) ≤ B(o(w1, w2)) ≤ B(r)   ∀ w1, w2 with o(w1, w2) ≥ r   (3)

As defined above, B bounds the appearance distance between two windows in the same image. Now we show how it can be used to derive a bound on the distances between windows in two different images I1, I2.
Given two windows w1, w2 in I1 and a window w3 in I2, we use the triangle inequality (fig. 2)

|d(w1, w2) − d(w2, w3)| ≤ d(w1, w3) ≤ d(w1, w2) + d(w2, w3)   (4)

Using the bound B in eq. (4) we obtain

max(0, d(w2, w3) − B(o(w1, w2))) ≤ d(w1, w3) ≤ B(o(w1, w2)) + d(w2, w3)   (5)

Eq. (5) delivers lower and upper bounds for d(w1, w3) without explicitly computing it (given that d(w2, w3) and o(w1, w2) are known). These bounds will form the basis of our algorithms for reducing the number of times the appearance distance is computed when solving two classic tasks (secs. 3 and 4). In the next subsection we estimate B for arbitrary window descriptors (e.g. color histograms, bags of visual words, GIST [19], HOG [8]) from a set of images (no human annotation required). In subsection 2.2 we derive exact bounds in closed form for histogram descriptors (e.g. color histograms, bags of visual words [25]).

2.1 Statistical bounds for arbitrary window descriptors

We estimate B_α from training data so that eq. (1) holds with probability α

P( d(w1, w2) ≤ B_α(o(w1, w2)) ) = α   ∀ w1, w2   (6)

B_α is estimated from a set of M training images I = {I_m}. For each image I_m we sample N windows {w_i^m}, and then compute for all window pairs their overlap o_ij^m = o(w_i^m, w_j^m) and distance d_ij^m = d(w_i^m, w_j^m). The overall training dataset D is composed of (o_ij^m, d_ij^m) for every window pair

D = { (o_ij^m, d_ij^m) | m ∈ {1, ..., M}, i, j ∈ {1, ..., N} }   (7)

We now quantize the overlap values into 100 bins and estimate B_α(o) for each bin o separately. For a bin o, we consider the set D_o of all distances d_ij^m for which o_ij^m falls in the bin. We choose B_α(o) as the α-quantile of D_o (fig. 3a)

B_α(o) = q_α(D_o)   (8)

B_1(o) is the largest distance d_ij^m for which o_ij^m falls in bin o. Fig. 3a shows the binned distance-overlap pairs and the bound B_0.95 for GIST descriptors [19]. The data comes from 100 windows sampled from more than 1000 images (details in sec. 5).
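A minimal sketch of this estimation procedure: bin the training overlaps into 100 bins, take the per-bin α-quantile of the distances (eq. 8), and derive from the resulting table the smallest overlap whose bound falls below a given ε (this is the lookup table o_min of eq. 10 used by the algorithms later). Function and variable names are illustrative, not from the paper's code:

```python
import numpy as np

def estimate_bound(overlaps, dists, alpha=0.95, n_bins=100):
    """Return bin edges and B_alpha evaluated on n_bins overlap bins (NaN if a bin is empty)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(overlaps, edges) - 1, 0, n_bins - 1)
    B = np.full(n_bins, np.nan)
    for b in range(n_bins):
        d = dists[bins == b]
        if d.size:
            B[b] = np.quantile(d, alpha)   # alpha-quantile of the per-bin distances D_o
    return edges, B

def o_min(B, edges, eps):
    """Smallest overlap whose bound is <= eps; None if no bin qualifies."""
    ok = np.where(B <= eps)[0]             # NaN compares False, so empty bins are skipped
    return edges[ok[0]] if ok.size else None
```

On synthetic data where distance decreases deterministically with overlap, the estimated bound is decreasing and o_min returns the expected threshold overlap.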
Each column of this matrix is roughly Gaussian distributed, and its mean continuously decreases with increasing overlap, confirming our assumptions about the relation between overlap and distance (sec. 2). In particular, note how the mean distance decreases fastest between 50% and 80% overlap.

[Fig. 3: Estimating B_0.95(o) and o_min(ε). (a) The estimated B_0.95(o) (white line) for the GIST [19] appearance descriptor. (b) Using B_0.95(o) we derive o_min(ε).]

Given a window w1 and a distance ε, we can use B_α to find windows w2 overlapping with w1 that are at most at distance ε from w1. This will be used extensively by our algorithms presented in secs. 3 and 4. From B_α we can derive the smallest overlap o_min(ε) such that all pairs of windows overlapping more than o_min(ε) have distance smaller than ε (with probability at least α). Formally

P( d(w1, w2) ≤ ε ) ≥ α   ∀ w1, w2 with o(w1, w2) ≥ o_min(ε)   (9)

and o_min(ε) is defined as the smallest overlap o for which the bound is smaller than ε (fig. 3b)

o_min(ε) = min{ o | B_α(o) ≤ ε }   (10)

2.2 Exact bounds for histogram descriptors

The statistical bounds of the previous subsection can be estimated from images for any appearance descriptor. In contrast, in this subsection we derive exact bounds in closed form for histogram descriptors (e.g. color histograms, bags of visual words [25]). Our derivation applies to L1-normalized histograms and the χ2 distance. For simplicity of presentation, we assume every pixel contributes one feature to the histogram of the window (as in color histograms). The derivation is very similar for features computed on another regular grid (e.g. dense SURF bag-of-words [11]). We present here the main idea behind the bound and give the full derivation in the supplementary material [1]. The upper bound B for two windows w1 and w2 corresponds to the limit case where the three regions w1 ∩ w2, w1 \ w2 and w2 \ w1 contain three disjoint sets of colors (or visual words in general).
Therefore, the upper bound B is

B(w1, w2) = |w1 \ w2| / |w1| + |w2 \ w1| / |w2| + |w1 ∩ w2| · (1/|w1| − 1/|w2|)^2 / (1/|w1| + 1/|w2|)   (11)

Expressing the terms in (11) through the window overlap o = o(w1, w2) = |w1 ∩ w2| / |w1 ∪ w2|, we obtain a closed form for the upper bound B that depends only on o

B(w1, w2) = B(o(w1, w2)) = B(o) = 2 − 4·o / (o + 1)   (12)

In practice, this exact bound is typically much looser than its corresponding statistical bound learned from data (sec. 2.1). Therefore, we use the statistical bound for the experiments in sec. 5.

3 Efficiently computing all window pairs with distance smaller than ε

In this section we present an algorithm to efficiently find all pairs of windows between two images I1, I2 with distance smaller than a threshold ε. Formally, given an input set of windows W1 = {w_i^1} in image I1 and a set W2 = {w_j^2} in image I2, the algorithm should return the set of pairs P_ε = { (w_i^1, w_j^2) | d(w_i^1, w_j^2) ≤ ε }.

Algorithm overview. Algorithm 1 summarizes our technique. Block 1 randomly samples a small set of seed pairs, for which it explicitly computes distances. The core of the algorithm (Block 3) explores pairs overlapping with a seed, looking for all appearance distances smaller than ε. When

Algorithm 1: Efficiently computing all distances smaller than ε
Input: windows W_m = {w_i^m}, threshold ε, lookup table o_min, number of initial samples F
Output: set P_ε of all pairs p with d(p) ≤ ε
1. Compute seed pairs P_F
   (a) sample F random pairs p_ij = (w_i^1, w_j^2) from P = W1 × W2, giving P_F
   (b) compute d_ij = d(w_i^1, w_j^2) for all p_ij ∈ P_F
2. Determine a sequence S of all pairs from P (gives the schedule of Block 3 below)
   (a) sort the seed pairs in P_F in order of decreasing distance
   (b) set S(1 : F) = P_F
   (c) fill S((F + 1) : end) with random pairs from P \ P_F
3. For p_c = S(1 : end) (explore the pairs in the order S)
   (a) compute d(p_c)
   (b) if d(p_c) ≤ ε
       i.   let r = o_min(ε − d(p_c))
       ii.  let N = overlap_neighborhood(p_c, r)
       iii. for all pairs p ∈ N: compute d(p)
       iv.
       update P_ε ← P_ε ∪ { p ∈ N | d(p) ≤ ε }
   (c) else
       i.   let r = o_min(d(p_c) − ε)
       ii.  let N = overlap_neighborhood(p_c, r)
       iii. discard all pairs in N from S: S ← S \ N

overlap_neighborhood
Input: pair p_ij = (w_i^1, w_j^2), overlap radius r
Output: overlap neighborhood N of p_ij
N = { (w_i^1, w_v^2) | o(w_j^2, w_v^2) ≥ r } ∪ { (w_u^1, w_j^2) | o(w_i^1, w_u^1) ≥ r }

compute
Input: pair p_ij
Output: if d(w_i^1, w_j^2) was never computed before, then compute it and store it in a table D; if d(w_i^1, w_j^2) is already in D, then directly return it.

exploring a seed, the algorithm can decide to discard many pairs overlapping with it, as the bound predicts that their distance cannot be lower than ε. This causes the computational saving (step 3.c). Before starting Block 3, Block 2 establishes the sequence in which to explore the seeds, i.e. in order of decreasing distance. The remaining pairs are appended afterwards in random order.

Algorithm core. Block 3 takes one of two actions based on the distance of the pair p_c currently being explored. If d(p_c) ≤ ε, then all pairs in the overlap neighborhood N of p_c have distance smaller than ε. This overlap neighborhood has a radius r = o_min(ε − d(p_c)) predicted by the bound lookup table o_min (fig. 4a). Therefore, Block 3 computes the distance of all pairs in N (step 3.b). Instead, if d(p_c) > ε, Block 3 determines the radius r = o_min(d(p_c) − ε) of the overlap neighborhood containing only pairs with distance greater than ε, and then discards all pairs in it (step 3.c).

Overlap neighborhood. The overlap neighborhood of a pair p_ij = (w_i^1, w_j^2) with radius r contains all pairs (w_i^1, w_v^2) such that o(w_j^2, w_v^2) ≥ r, and all pairs (w_u^1, w_j^2) such that o(w_i^1, w_u^1) ≥ r (fig. 4a).

4 Efficiently computing the single window pair with the smallest distance

We give an algorithm to efficiently find the single pair of windows with the smallest appearance distance between two images.
Given as input the two sets of windows W1, W2, the algorithm should return the pair p* = (w_{i*}^1, w_{j*}^2) with the smallest distance: d(w_{i*}^1, w_{j*}^2) = min_{ij} d(w_i^1, w_j^2).

[Fig. 4: Overlap neighborhoods. (a) The overlap neighborhood of radius r of a pair (w_i^1, w_j^2) contains all blue pairs. (b) The joint overlap neighborhood of radius s of a pair (w_i^1, w_j^2) contains all blue and green pairs.]

Algorithm overview. Algorithm 2 is analogous to Algorithm 1. Block 1 computes distances for the seed pairs and selects the pair with the smallest distance as the initial approximation to p*. Block 3 explores pairs overlapping with a seed, looking for a distance smaller than d(p*). When exploring a seed, the algorithm can decide to discard many pairs overlapping with it, as the bound predicts they cannot be better than p*. Block 2 organizes the seeds in order of increasing distance. In this way, the algorithm can rapidly refine p* towards smaller and smaller values. This is useful because in step 3.c the number of discarded pairs grows as d(p*) gets smaller. Therefore, this seed ordering maximises the number of discarded pairs (i.e. minimizes the number of distances computed).

Algorithm core. Block 3 takes one of two actions based on d(p_c). If d(p_c) ≤ d(p*) + B_α(s), then there might be a better pair than p* within radius s in the joint overlap neighborhood of p_c. Therefore, the algorithm computes the distance of all pairs in this neighborhood (step 3.b). The radius s is an input parameter. Instead, if d(p_c) > d(p*) + B_α(s), the algorithm determines the radius r = o_min(d(p_c) − d(p*)) of the overlap neighborhood that contains only pairs with distance greater than d(p*), and then discards all pairs in it (step 3.c).

Joint overlap neighborhood. The joint overlap neighborhood of a pair p_ij = (w_i^1, w_j^2) with radius s contains all pairs (w_u^1, w_v^2) such that o(w_i^1, w_u^1) ≥ s and o(w_j^2, w_v^2) ≥ s.
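The two neighborhood constructions above can be sketched as follows. Windows are referred to by their index into the lists W1, W2, and an `overlap` function is passed in; names are illustrative:

```python
def overlap_neighborhood(W1, W2, i, j, r, overlap):
    """Pairs sharing one window with (i, j) whose partner overlaps it by >= r."""
    return ({(i, v) for v in range(len(W2)) if overlap(W2[j], W2[v]) >= r} |
            {(u, j) for u in range(len(W1)) if overlap(W1[i], W1[u]) >= r})

def joint_overlap_neighborhood(W1, W2, i, j, s, overlap):
    """Pairs (u, v) with o(w1_i, w1_u) >= s and o(w2_j, w2_v) >= s."""
    us = [u for u in range(len(W1)) if overlap(W1[i], W1[u]) >= s]
    vs = [v for v in range(len(W2)) if overlap(W2[j], W2[v]) >= s]
    return {(u, v) for u in us for v in vs}
```

Note that the overlap neighborhood varies one window of the pair at a time, while the joint version varies both, matching the definitions in secs. 3 and 4.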
5 Experiments and conclusions

We present experiments on a test set composed of 1000 image pairs from the PASCAL VOC 07 dataset [12], randomly sampled under the constraint that the two images in a pair contain at least one object of the same class (out of 6 classes: aeroplane, bicycle, bus, boat, horse, motorbike). This setting is relevant for various applications, such as object detection [13, 24], and ensures a balanced distribution of appearance distances in each image pair (some pairs of windows will have low distances while others high distances). We experiment with three appearance descriptors: GIST [19] (960D), color histograms (CHIST, 4000D), and bag-of-words [11, 25] on the dense SURF descriptor [3] (BOW, 2000D). As appearance distances we use the Euclidean distance for GIST, and χ2 for CHIST and SURF BOW. The bound tables B_α for each descriptor were estimated beforehand from a separate set of 1300 images of other classes (sec. 2.1).

Task 1: all pairs of windows with distance smaller than ε. The task is to find all pairs of windows with distance smaller than a user-defined threshold ε between two images I1, I2 (sec. 3). This task occurs in weakly supervised learning of object classes [7, 11, 16], where algorithms search for recurring patterns over training images containing thousands of overlapping windows, and in human pose estimation [22], which compares many overlapping candidate body part locations. We randomly sample 3000 windows in each image (|W1| = |W2| = 3000) and set ε so that 10% of all distances are below it. This makes the task meaningful for any image pair, regardless of the range of distances it contains. For each image pair we quantify performance with two measures: (i) cost: the number of computed distances divided by the total number of window pairs (9 million); (ii) accuracy: Σ_{p ∈ P_ε} (ε − d(p)) / Σ_{p ∈ W1×W2 : d(p) ≤ ε} (ε − d(p)), where P_ε is the set of window pairs returned by the algorithm and the denominator sums over all distances truly below ε.
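The two measures can be computed as in this sketch; names are illustrative, with `returned` standing for the algorithm's output P_ε and `all_dists` holding the ground-truth distances of all window pairs:

```python
def task1_cost(n_computed, n_pairs):
    """Fraction of all window pairs whose distance was actually computed."""
    return n_computed / n_pairs

def task1_accuracy(returned, all_dists, eps):
    """Weighted recall of pairs below eps; pairs much below eps weigh more."""
    num = sum(eps - all_dists[p] for p in returned if all_dists[p] <= eps)
    den = sum(eps - d for d in all_dists.values() if d <= eps)
    return num / den if den > 0 else 1.0
```

A perfect result (all truly-below-ε pairs returned) gives accuracy 1, and missing only pairs barely below ε costs little, as intended by the weighting.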
The lowest possible cost while still achieving 100% accuracy is 10%. We compare to LSH [2, 6, 10] using [21] as a hash function. It maps descriptors to binary strings, such that the Hamming distance between two strings is related to the value of a Gaussian kernel between the original descriptors [21]. As recommended in [6, 10], we generate T separate (random) encodings and build T hash tables, each with 2^C bins, where C is the number of bits in the encoding.

Algorithm 2: Efficiently computing the smallest distance
Input: windows W_m = {w_i^m}, lookup table o_min, search radius s, number of initial samples F
Output: pair p* with the smallest distance
1. Compute seed pairs P_F (as Block 1 of Algorithm 1) and estimate the current best pair: p* = argmin_{p_ij ∈ P_F} d_ij
2. Determine a sequence S of all pairs (as Block 2 of Algorithm 1)
3. For p_c = S(1 : end) (explore the pairs in the order S)
   (a) compute d(p_c)
   (b) if d(p_c) ≤ d(p*) + B_α(s)
       i.   let N = joint_overlap_neighborhood(p_c, s)
       ii.  for all pairs p ∈ N: compute d(p)
       iii. update p* ← argmin { {d(p*)} ∪ { d(p) | p ∈ N } }
   (c) else
       i.   let r = o_min(d(p_c) − d(p*))
       ii.  let N = overlap_neighborhood(p_c, r)
       iii. discard all pairs in N from S: S ← S \ N

joint_overlap_neighborhood
Input: pair p_ij = (w_i^1, w_j^2), overlap radius s
Output: joint overlap neighborhood N of p_ij
N = { (w_u^1, w_v^2) | o(w_i^1, w_u^1) ≥ s, o(w_j^2, w_v^2) ≥ s }

To perform Task 1, we loop over each table t and do: (H1) hash all w_j^2 ∈ W2 into table t; (H2) for each w_i^1 ∈ W1 do: (H2.1) hash w_i^1 into its bin b_{t,i}^1; (H2.2) compute all distances d in the original space between w_i^1 and all windows w_j^2 ∈ b_{t,i}^1 (unless already computed when inspecting a previous table); (H3) return all computed d(w_i^1, w_j^2) ≤ ε. We also compare to approximate nearest neighbours based on kd-trees, using the ANN library [18]. To perform Task 1, we do: (A1) for each w_i^1 ∈ W1: (A1.1) compute the ε-NN between w_i^1 and all windows w_j^2 ∈ W2 and return them all.
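The LSH baseline loop (steps H1-H3) can be sketched as follows. Note that the paper hashes with the shift-invariant kernel codes of [21]; a plain sign-random-projection hash stands in here purely for illustration, and the function and parameter names are ours:

```python
import numpy as np

def lsh_task1(X1, X2, eps, dist, T=5, C=12, seed=0):
    """Return all (i, j) with dist(X1[i], X2[j]) <= eps found via T hash tables,
    plus the number of distance computations performed (the 'cost')."""
    rng = np.random.default_rng(seed)
    n = X1.shape[1]
    computed, hits = {}, set()            # distances are cached across tables
    for _ in range(T):
        P = rng.standard_normal((C, n))   # one random C-bit code per table
        code = lambda x: tuple((P @ x) > 0)
        table = {}
        for j, x in enumerate(X2):        # H1: hash database windows into bins
            table.setdefault(code(x), []).append(j)
        for i, x in enumerate(X1):        # H2: hash each query, probe its bin
            for j in table.get(code(x), []):
                if (i, j) not in computed:    # H2.2: compute each distance once
                    computed[(i, j)] = dist(X1[i], X2[j])
                if computed[(i, j)] <= eps:   # H3: collect pairs below eps
                    hits.add((i, j))
    return hits, len(computed)
```

Identical descriptors always share a code, so exact duplicates are always found; near neighbours are found with a probability that grows with T.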
The notion of cost above is not defined for ANN methods based on trees. Instead, we measure wall-clock runtime and report as cost the ratio of the runtime of approximate NN over the runtime of exact NN (also computed using the ANN library [18]). This gives a meaningful indication of speedup, which can be compared to the cost we report for our method and LSH. As the ANN library supports only the Euclidean distance, we report results only for GIST. The results table reports cost and accuracy averaged over the test set.

Our method from sec. 3 performs very well for all three descriptors. On average it achieves 98% accuracy at 16% cost. This is a considerable speedup over exhaustive search, as it means only 7% of the 90% of distances greater than ε have been computed. The behavior of LSH depends on T and C. The higher T, the higher the accuracy, but also the cost (because there are more collisions; the same holds for lower C). To compare fairly, we evaluate LSH over T ∈ {1, 20} and C ∈ {2, 30} and report results for the T, C that deliver the accuracy closest to our method. As the table shows, on average over the three descriptors, for the same accuracy LSH has 92% cost, substantially worse than our method. The behavior of ANN depends on the degree of approximation, which we set so as to get the accuracy closest to our method. At 92% accuracy, ANN has 72% of the runtime of exact NN. This shows that, if high accuracy is desired, ANN offers only a modest speedup (compared to our 18% cost for GIST).

Task 2: all windows closer than ε to a query. This is a special case of Task 1, where W1 contains just one window. Hence, it becomes an ε-nearest-neighbours task where W1 acts as a query and W2 as the retrieval database. This task occurs in many applications, e.g. object detectors based on kernel SVMs compare a support vector (query) to a large set of overlapping windows in the test image [13, 24]. As this is expensive, many detectors resort to linear kernels [9].
Results:

Task 1     GIST + Euclidean        CHIST + χ2              SURF BOW + χ2
method     cost      accuracy      cost      accuracy      cost      accuracy
our        18.0%     97.3%         15.7%     97.7%         15.2%     98.5%
LSH        86.2%     95.4%         93.7%     97.2%         96.8%     98.5%
ANN        71.8%     91.9%         -         -             -         -

Task 2
our        30.2%     87.1%         30.3%     96.2%         28.6%     94.0%
LSH        73.4%     83.5%         96.9%     95.1%         88.7%     92.1%
ANN        72.6%     87.7%         -         -             -         -

Task 3     cost    ratio   rank    cost    ratio   rank    cost    ratio   rank
our        2.3%    1.02    1.39    0.4%    1.01    1.12    0.7%    1.01    1.19
LSH        16.4%   1.03    2.72    37.5%   1.02    33.5    46.5%   1.01    9.62
ANN        58.6%   1.01    1.48    -       -       -       -       -       -

Our algorithms offer the option to use more complex kernels while retaining a practical speed. Other applications include tracking in video [4, 5] and image retrieval [20] (see the beginning of sec. 1). As the table shows, our method is somewhat less efficient than on Task 1. This makes sense, as it can only exploit the overlap structure in one of the two input sets. Yet, for a similar accuracy it offers a greater speedup than LSH and ANN.

Task 3: single pair of windows with smallest distance. The task is to find the single pair of windows with the smallest distance between I1 and I2, out of 3000 windows in each image (sec. 4), and has similar applications as Task 1. We quantify performance with three measures: (i) cost: as in all other tasks; (ii) distance ratio: the ratio between the smallest distance returned by the algorithm and the true smallest distance (the best possible value is 1, and higher values are worse); (iii) rank: the rank of the returned distance among all 9 million. To perform Task 3 with LSH, we simply modify step (H3) of the procedure given for Task 1 to: return the smallest distance among all those computed. To perform Task 3 with ANN we replace step (A1.1) with: compute the NN of w_i^1 in W2; at the end of loop (A1), return the smallest distance among all those computed.
As the table shows, on average over the three descriptors, our method from sec. 4 achieves a distance ratio of 1.01 at 1.1% cost, which is almost 100× faster than exhaustive search. The average rank of the returned distance is 1.25 out of 9 million, which is almost a perfect result. When compared at a similar distance ratio, our method is considerably more efficient than LSH and ANN: LSH computes 33.3% of all distances, while ANN brings only a speedup of factor 2 over exact NN.

Runtime considerations. While we have measured only the number of computed appearance distances, our algorithms also compute spatial overlaps. Crucially, spatial overlaps are computed in the 4D geometric space, compared to 1000+ dimensions for the appearance space. Therefore, computing spatial overlaps has a negligible impact on the total runtime of the algorithms. In practice, when using 5000 windows per image with 4000D dense SURF BOW descriptors, the total runtime of our algorithms is 71s for Task 1 and 16s for Task 3, compared to 335s for exhaustive search. Importantly, the cost of computing the descriptors is small compared to the cost of evaluating distances, as it is roughly linear in the number of windows and can be implemented very rapidly. In practice, computing dense SURF BOW for 5000 windows in two images takes 5 seconds.

Conclusions. We have proposed efficient algorithms for computing distances between appearance descriptors of two sets of image windows, by taking advantage of the overlap structure in the sets. Our experiments demonstrate that these algorithms greatly reduce the number of appearance distances computed when solving several tasks relevant to computer vision, and that they outperform LSH and ANN on these tasks. Our algorithms could be useful in various applications.
For example, improving the spatial accuracy of weakly supervised learners [7, 11] by using thousands of windows per image, using more complex kernels and detecting more classes in kernel SVM object detectors [13, 24], and enabling image retrieval systems to search at the window level with any descriptor, rather than returning entire images or being constrained to bag-of-words descriptors [20]. To encourage these applications, we release our source code at http://www.vision.ee.ethz.ch/~calvin.

References
[1] B. Alexe, V. Petrescu, and V. Ferrari. Exploiting spatial overlap to efficiently compute appearance distances between image windows - supplementary material. In NIPS, 2011. Also available at http://www.vision.ee.ethz.ch/~calvin/publications.html.
[2] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, 2008.
[3] H. Bay, A. Ess, T. Tuytelaars, and L. van Gool. SURF: Speeded up robust features. CVIU, 110(3):346-359, 2008.
[4] C. Bibby and I. Reid. Robust real-time visual tracking using pixel-wise posteriors. In ECCV, 2008.
[5] S. Birchfield. Elliptical head tracking using intensity gradients and color histograms. In CVPR, 1998.
[6] O. Chum, J. Philbin, M. Isard, and A. Zisserman. Scalable near identical image and shot detection. In CIVR, 2007.
[7] O. Chum and A. Zisserman. An exemplar model for learning object classes. In CVPR, 2007.
[8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, volume 2, pages 886-893, 2005.
[9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[10] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In SCG, 2004.
[11] T. Deselaers, B. Alexe, and V. Ferrari. Localizing objects while learning their appearance. In ECCV, 2010.
[12] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman.
The PASCAL Visual Object Classes Challenge 2007 Results, 2007.
[13] H. Harzallah, F. Jurie, and C. Schmid. Combining efficient object localization and image classification. In ICCV, 2009.
[14] H. Jegou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large-scale image search. In ECCV, 2008.
[15] Y. Ke and R. Sukthankar. PCA-SIFT: A more distinctive representation for local image descriptors. In CVPR, 2004.
[16] G. Kim and A. Torralba. Unsupervised detection of regions of interest using iterative link analysis. In NIPS, 2009.
[17] N. Kumar, L. Zhang, and S. Nayar. What is a good nearest neighbors algorithm for finding similar patches in images? In ECCV, 2008.
[18] D. M. Mount and S. Arya. ANN: A library for approximate nearest neighbor searching, August 2006.
[19] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42(3):145-175, 2001.
[20] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In CVPR, 2007.
[21] M. Raginsky and S. Lazebnik. Locality sensitive binary codes from shift-invariant kernels. In NIPS, 2009.
[22] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV, 2010.
[23] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In CVPR, 2008.
[24] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV, 2009.
[25] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: a comprehensive study. IJCV, 2007.
Signal Estimation Under Random Time-Warpings and Nonlinear Signal Alignment
Sebastian Kurtek, Anuj Srivastava, Wei Wu
Department of Statistics, Florida State University, Tallahassee, FL 32306
skurtek,anuj,wwu@stat.fsu.edu

Abstract

While signal estimation under random amplitudes, phase shifts, and additive noise is studied frequently, the problem of estimating a deterministic signal under random time-warpings has been relatively unexplored. We present a novel framework for estimating the unknown signal that utilizes the action of the warping group to form an equivalence relation between signals. First, we derive an estimator for the equivalence class of the unknown signal using the notion of the Karcher mean on the quotient space of equivalence classes. This step requires the use of the Fisher-Rao Riemannian metric and a square-root representation of signals to enable computations of distances and means under this metric. Then, we define a notion of the center of a class and show that the center of the estimated class is a consistent estimator of the underlying unknown signal. This estimation algorithm has many applications: (1) registration/alignment of functional data, (2) separation of phase/amplitude components of functional data, (3) joint demodulation and carrier estimation, and (4) sparse modeling of functional data. Here we demonstrate only (1) and (2): given signals are temporally aligned using nonlinear warpings and, thus, separated into their phase and amplitude components. The proposed method for signal alignment is shown to have state-of-the-art performance on Berkeley growth, handwritten signature, and neuroscience spike train data.

1 Introduction

Consider the problem of estimating a signal from noisy observations under the model: f(t) = c g(a t − φ) + e(t), where the random quantities are c ∈ R (scale), a ∈ R (rate), φ ∈ R (phase shift), and e(t) ∈ R (additive noise).
There has been an elaborate theory for the estimation of the underlying signal g given one or several observations of the function f. Often one assumes that g takes a parametric form, e.g. a superposition of Gaussians or exponentials with different parameters, and estimates these parameters from the observed data [12]. For instance, the estimation of sinusoids or exponentials in additive Gaussian noise is a classical problem in signal and speech processing. In this paper we consider a related but fundamentally different estimation problem where the observed functional data is modeled as: for t ∈ [0, 1],

f_i(t) = c_i g(γ_i(t)) + e_i,   i = 1, 2, ..., n.   (1)

Here γ_i : [0, 1] → [0, 1] are diffeomorphisms with γ_i(0) = 0 and γ_i(1) = 1. The f_i represent observations of an unknown, deterministic signal g under random warpings γ_i, scalings c_i and vertical translations e_i ∈ R. (A more general model would use full functions for the additive noise, but that requires further discussion due to identifiability issues; thus, we restrict to the above model in this paper.) This problem is interesting because in many situations, including speech, SONAR, RADAR, NMR, fMRI, and MEG applications, the noise can actually affect the instantaneous phase of the signal, resulting in an observation that is a phase (or frequency) modulation of the original signal. This problem is challenging because of the nonparametric, random nature of the warping functions γ_i.

[Figure 1: Separation of phase and amplitude variability in functional data. Panels: original data; mean ± STD before warping; mean ± STD after warping; warping functions; phase components; amplitude components.]
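A sketch of sampling from model (1). The signal g, the one-parameter warp family γ_a(t) = t^a (a > 0, which fixes 0 and 1 and is increasing), and the distributions of c_i, a_i, e_i are all illustrative choices, not from the paper:

```python
import numpy as np

def sample_observations(n, T=200, seed=0):
    """Draw n observations f_i = c_i * g(gamma_i(t)) + e_i on a grid of T points."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, T)
    g = lambda s: np.exp(-(s - 0.5) ** 2 / 0.02)      # the unknown signal (a bump)
    fs, gammas = [], []
    for _ in range(n):
        a = np.exp(rng.normal(0.0, 0.3))              # random warp gamma(t) = t^a
        c = np.exp(rng.normal(0.0, 0.1))              # random positive scale
        e = rng.normal(0.0, 0.05)                     # random vertical translation
        gamma = t ** a
        fs.append(c * g(gamma) + e)
        gammas.append(gamma)
    return t, np.array(fs), np.array(gammas)
```

Each sampled warp satisfies the boundary and monotonicity constraints of the γ_i in the model, so the drawn curves share one shape but have misaligned peaks.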
It seems difficult to recover g when its observations have been time-warped nonlinearly in a random fashion. Past papers have either restricted to linear warpings (e.g. γ_i(t) = a_i t − φ_i) or assumed a known g (e.g. g(t) = cos(t)). It turns out that without any further restrictions on the γ_i one can recover g only up to an arbitrary warping function. This is easy to see since g ◦ γ_i = (g ◦ γ) ◦ (γ^{-1} ◦ γ_i) for any warping function γ. (As described later, the warping functions are restricted to be automorphisms of a domain and, hence, form a group.) Under an additional condition related to the mean of the (inverses of the) γ_i, we can recover the exact signal g, as demonstrated in this paper. In fact, this model describes several related, some even equivalent, problems with distinct applications:

Problem 1: Joint Phase Demodulation and Carrier Estimation. One can view this problem as that of phase (or frequency) demodulation, but without knowledge of the carrier signal g. Thus, it becomes a problem of joint estimation of the carrier signal (g) and phase demodulation (γ_i^{-1}) of signals that share the same carrier. In case the carrier signal g is known, e.g. g is a sinusoid, then it is relatively easy to estimate the warping functions using dynamic time warping or other estimation-theoretic methods [15, 13]. So, we consider the problem of estimating g from {f_i} under the model given in Eqn. 1.

Problem 2: Phase-Amplitude Separation. Consider the set of signals shown in the top-left panel of Fig. 1. These functions differ from each other in both the heights and the locations of their peaks and valleys. One would like to separate the variability associated with the heights, called the amplitude variability, from the variability associated with the locations, termed the phase variability. Although this problem has been studied for almost two decades in the statistics community, see e.g. [7, 9, 4, 11, 8], it is still considered an open problem.
Extracting the amplitude variability implies temporally aligning the given functions using nonlinear time warping, with the result shown in the bottom right. The corresponding set of warping functions, shown in the top right, represents the phase variability. The phase component can also be illustrated by applying these warping functions to the same function, also shown in the top right. The main reason for separating functional data into these components is to better preserve the structure of the observed data, since a separate modeling of amplitude and phase variability is more natural, parsimonious and efficient. It may not be obvious, but the solution to this separation problem is intimately connected to the estimation of g in Eqn. 1.

Problem 3: Multiple Signal/Image Registration. The problem of phase-amplitude separation is intrinsically the same as the problem of jointly registering multiple signals. The problem here is: given a set of observed signals {f_i}, estimate the corresponding points in their domains. In other words, what are the γ_i such that, for any t0, the values f_i(γ_i^{-1}(t0)) correspond to each other? The bottom-right panels of Fig. 1 show the registered signals. Although this problem is more commonly studied for images, its one-dimensional version is non-trivial and helps in understanding the basic challenges. We will study the 1D problem in this paper but, at least conceptually, the solutions extend to higher-dimensional problems as well.

In this paper we provide the following specific contributions. We study the problem of estimating g given a set {f_i} under the model given in Eqn. 1 and propose a consistent estimator for this problem, along with the supporting asymptotic theory. We also illustrate the use of this solution in the automated alignment of sets of given signals. Our framework is based on an equivalence relation between signals, defined as follows.
Two signals are deemed equivalent if one can be time-warped into the other; since the warping functions form a group, each equivalence class is an orbit of the warping group. This relation partitions the set of signals into equivalence classes, and the set of equivalence classes (orbits) forms a quotient space. Our estimation of g is based on two steps. First, we estimate the equivalence class of g using the notion of a Karcher mean on the quotient space which, in turn, requires a distance on this quotient space. This distance should respect the equivalence structure, i.e. the distance between two elements should be zero if and only if they are in the same class. We propose to use the distance that results from the Fisher-Rao Riemannian metric. This metric was introduced in 1945 by C. R. Rao [10] and studied rigorously in the 70s and 80s by Amari [1], Efron [3], Kass [6], Cencov [2], and others. While those earlier efforts focused on analyzing parametric families, we use the nonparametric version of the Fisher-Rao Riemannian metric in this paper. The difficulty in using this metric directly is that it is not straightforward to compute geodesics (recall that geodesic lengths provide the desired distances). However, a simple square-root transformation converts this metric into the standard L2 metric, and the distance is then obtainable as a simple L2 norm between the square-root forms of the functions. Second, given an estimate of the equivalence class of g, we define the notion of the center of an orbit and use it to derive an estimator for g. 2 Background Material We introduce some notation. Let Γ be the set of orientation-preserving diffeomorphisms of the unit interval [0, 1]: Γ = {γ : [0, 1] → [0, 1] | γ(0) = 0, γ(1) = 1, γ is a diffeomorphism}. Elements of Γ form a group: (1) for any γ1, γ2 ∈ Γ, their composition γ1 ◦ γ2 ∈ Γ; and (2) for any γ ∈ Γ, its inverse γ^{-1} ∈ Γ, where the identity element is the self-mapping γid(t) = t. We will use ∥f∥ to denote the L2 norm (∫_0^1 |f(t)|^2 dt)^{1/2}.
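The group structure of Γ is easy to experiment with numerically. In this sketch (Python; the two example warps and the grid are our own choices), composition and inversion are carried out by linear interpolation on a grid:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
g1 = t**2                                  # gamma1(t) = t^2, a valid warp
g2 = (np.exp(2*t) - 1) / (np.exp(2) - 1)   # gamma2: increasing, gamma2(0)=0, gamma2(1)=1

comp = np.interp(g2, t, g1)                # composition gamma1 o gamma2
inv1 = np.interp(t, g1, t)                 # gamma1^{-1}, by inverting the graph

print(comp[0] == 0.0 and comp[-1] == 1.0)  # True: endpoints are preserved
print(bool(np.all(np.diff(comp) > 0)))     # True: the composition is increasing
# gamma1(gamma1^{-1}(t)) = t, up to floating-point error
print(float(np.max(np.abs(np.interp(inv1, t, g1) - t))) < 1e-6)   # True
```

The inversion trick works because the piecewise-linear interpolant through the points (γ(t_k), t_k) is exactly the inverse of the interpolant through (t_k, γ(t_k)).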
2.1 Representation Space of Functions Let f be a real-valued function on the interval [0, 1]. We restrict attention to those f that are absolutely continuous on [0, 1]; let F denote the set of all such functions. We define a mapping Q : R → R according to: Q(x) ≡ x/√|x| if x ≠ 0, and Q(x) ≡ 0 otherwise. Note that Q is a continuous map. For the purpose of studying the function f, we will represent it using its square-root velocity function (SRVF), defined as q : [0, 1] → R, where q(t) ≡ Q(ḟ(t)) = ḟ(t)/√|ḟ(t)|. It can be shown that if the function f is absolutely continuous, then the resulting SRVF is square integrable. Thus, we will take L2([0, 1], R) (or simply L2) to be the set of all SRVFs. For every q ∈ L2 there exists a function f (unique up to a constant, or a vertical translation) such that the given q is the SRVF of that f. If we warp a function f by γ, the SRVF of f ◦ γ is given by: q̃(t) = (d/dt)(f ◦ γ)(t) / √|(d/dt)(f ◦ γ)(t)| = (q ◦ γ)(t) √γ̇(t). We will denote this transformation by (q, γ) = (q ◦ γ)√γ̇. 2.2 Elastic Riemannian Metric Definition 1 For any f ∈ F and v1, v2 ∈ Tf(F), where Tf(F) is the tangent space to F at f, the Fisher-Rao Riemannian metric is defined as the inner product: ⟨⟨v1, v2⟩⟩f = (1/4) ∫_0^1 v̇1(t) v̇2(t) (1/|ḟ(t)|) dt. (2) This metric has many fundamental advantages, including the fact that it is the only Riemannian metric that is invariant to domain warping [2]. The metric is somewhat complicated since it changes from point to point on F, and it is not straightforward to derive equations for computing geodesics in F. However, a small transformation provides an enormous simplification of this task. This motivates the use of SRVFs for representing and aligning elastic functions. Lemma 1 Under the SRVF representation, the Fisher-Rao Riemannian metric becomes the standard L2 metric.
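Numerically, the SRVF is simple to approximate on a grid. The sketch below (Python; finite differences stand in for ḟ) checks it on f(t) = t^2, whose SRVF is q(t) = √(2t):

```python
import numpy as np

def srvf(f, t):
    """SRVF q(t) = Q(f'(t)) = f'(t)/sqrt(|f'(t)|), with f' approximated
    by finite differences on the grid t."""
    df = np.gradient(f, t, edge_order=2)
    return np.sign(df) * np.sqrt(np.abs(df))   # x/sqrt(|x|) = sign(x)*sqrt(|x|)

t = np.linspace(0.0, 1.0, 1001)
q = srvf(t**2, t)                  # f(t) = t^2 has f'(t) = 2t, so q(t) = sqrt(2t)
# t = 0 is excluded from the comparison: the square root amplifies
# finite-difference noise at points where f' vanishes
print(np.allclose(q[1:], np.sqrt(2.0 * t[1:])))   # True
```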
This result (Lemma 1) can be used to compute the distance dFR between any two functions by computing the L2 distance between the corresponding SRVFs, that is, dFR(f1, f2) = ∥q1 − q2∥. The next question is: what is the effect of warping on dFR? This is answered by the following isometry result. Lemma 2 For any two SRVFs q1, q2 ∈ L2 and γ ∈ Γ, ∥(q1, γ) − (q2, γ)∥ = ∥q1 − q2∥. 2.3 Elastic Distance on Quotient Space Our next step is to define an elastic distance between functions as follows. The orbit of an SRVF q ∈ L2 is given by: [q] = closure{(q, γ) | γ ∈ Γ}. It is the set of SRVFs associated with all warpings of a function, together with their limit points. Let S denote the set of all such orbits. To compare any two orbits we need a metric on S. We will use the Fisher-Rao distance to induce a distance between orbits, and we can do so only because, under this metric, the action of Γ is by isometries. Definition 2 For any two functions f1, f2 ∈ F and the corresponding SRVFs q1, q2 ∈ L2, we define the elastic distance d on the quotient space S to be: d([q1], [q2]) = inf_{γ∈Γ} ∥q1 − (q2, γ)∥. Note that the distance d between a function and its domain-warped version is zero. However, it can be shown that if two SRVFs belong to different orbits, then the distance between them is non-zero. Thus, d is a proper distance (i.e. it satisfies non-negativity, symmetry, and the triangle inequality) on S, but only a pseudo-distance on L2 itself. 3 Signal Estimation Method Our estimation is based on the model fi = ci(g ◦ γi) + ei, i = 1, . . . , n, where g, fi ∈ F, ci ∈ R+, γi ∈ Γ and ei ∈ R. Given {fi}, our goal is to identify the warping functions {γi} so as to reconstruct g. We will do so in three steps: 1) For a given collection of functions {fi}, with SRVFs {qi}, we compute the mean of the corresponding orbits {[qi]} in the quotient space S; we call it [µ]n. 2) We compute an appropriate element of this mean orbit to define a template µn in L2.
The optimal warping functions {γi} are estimated by aligning the individual functions to match the template µn. 3) The estimated warping functions are then used to align {fi} and reconstruct the underlying signal g. 3.1 Pre-step: Karcher Mean of Points in Γ In this section we define a Karcher mean of a set of warping functions {γi}, under the Fisher-Rao metric, using the differential geometry of Γ. Analysis on Γ is not straightforward because it is a nonlinear manifold. To understand its geometry, we represent an element γ ∈ Γ by the square-root of its derivative, ψ = √γ̇. Note that this is the same as the SRVF defined earlier for elements of F, except that γ̇ > 0 here. Since γ(0) = 0, the mapping from γ to ψ is a bijection, and one can reconstruct γ from ψ using γ(t) = ∫_0^t ψ(s)^2 ds. An important advantage of this transformation is that, since ∥ψ∥^2 = ∫_0^1 ψ(t)^2 dt = ∫_0^1 γ̇(t) dt = γ(1) − γ(0) = 1, the set of all such ψ is S∞, the unit sphere in the Hilbert space L2. In other words, the square-root representation simplifies the complicated geometry of Γ to the unit sphere. Recall that the distance between any two points on the unit sphere, under the Euclidean metric, is simply the length of the shortest arc of a great circle connecting them. Using Lemma 1, the Fisher-Rao distance between any two warping functions is found to be dFR(γ1, γ2) = cos^{-1}(∫_0^1 √γ̇1(t) √γ̇2(t) dt). Now that we have a proper distance on Γ, we can define a Karcher mean as follows. Definition 3 For a given set of warping functions γ1, γ2, . . . , γn ∈ Γ, define their Karcher mean to be γ̄n = argmin_{γ∈Γ} Σ_{i=1}^n dFR(γ, γi)^2. The search for this minimum is performed using a standard iterative algorithm that is not repeated here to save space. 3.2 Step 1: Karcher Mean of Points in S = L2/Γ Next we consider the problem of finding means of points in the quotient space S.
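Before moving on, note that the Fisher-Rao distance on Γ from Section 3.1 reduces to arc length on the unit sphere and is easy to compute. A short numerical check (Python; trapezoidal integration is an implementation choice of ours):

```python
import numpy as np

def fr_dist(dg1, dg2, t):
    """Fisher-Rao distance between two warps, given their derivatives dg1, dg2,
    via psi = sqrt(gamma') on the unit sphere: d = arccos(<psi1, psi2>)."""
    h = np.sqrt(dg1 * dg2)                               # psi1 * psi2 pointwise
    inner = 0.5 * np.sum((h[1:] + h[:-1]) * np.diff(t))  # trapezoidal integral
    return np.arccos(np.clip(inner, -1.0, 1.0))

t = np.linspace(0.0, 1.0, 5001)
one = np.ones_like(t)                       # derivative of gamma_id
print(fr_dist(one, one, t) < 1e-6)          # True: zero distance to itself
d = fr_dist(one, 2*t, t)                    # gamma_id vs gamma(t) = t^2
# here inner = 2*sqrt(2)/3, so the exact distance is arcsin(1/3)
print(abs(d - np.arcsin(1/3)) < 1e-3)       # True
```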
Definition 4 Define the Karcher mean [µ]n of the given SRVF orbits {[qi]} in the space S as a local minimum of the sum of squared elastic distances: [µ]n = argmin_{[q]∈S} Σ_{i=1}^n d([q], [qi])^2. (3) We emphasize that the Karcher mean [µ]n is an orbit of functions, rather than a single function. The full algorithm for computing the Karcher mean in S is given next. Algorithm 1: Karcher Mean of {[qi]} in S 1. Initialization Step: Select µ = qj, where j is any index in argmin_{1≤i≤n} ∥qi − (1/n) Σ_{k=1}^n qk∥. 2. For each qi find γ*i by solving: γ*i = argmin_{γ∈Γ} ∥µ − (qi, γ)∥. The solution to this optimization comes from a dynamic programming algorithm on a discretized domain. 3. Compute the aligned SRVFs q̃i ← (qi, γ*i). 4. If the increment ∥(1/n) Σ_{i=1}^n q̃i − µ∥ is small, then stop. Otherwise, update the mean using µ ← (1/n) Σ_{i=1}^n q̃i and return to step 2. The iterative update in Steps 2-4 is based on the gradient of the cost function given in Eqn. 3. Denote the estimated mean in the kth iteration by µ(k). In the kth iteration, let γ(k)i denote the optimal domain warping from qi to µ(k) and let q̃(k)i = (qi, γ(k)i). Then, Σ_{i=1}^n d([µ(k)], [qi])^2 = Σ_{i=1}^n ∥µ(k) − q̃(k)i∥^2 ≥ Σ_{i=1}^n ∥µ(k+1) − q̃(k)i∥^2 ≥ Σ_{i=1}^n d([µ(k+1)], [qi])^2. Thus, the cost function decreases at every iteration and, since zero is a lower bound, Σ_{i=1}^n d([µ(k)], [qi])^2 always converges. 3.3 Step 2: Center of an Orbit Here we find a particular element of this mean orbit so that it can be used as a template to align the given functions. Definition 5 For a given set of SRVFs q1, q2, . . . , qn and q, define an element q̃ of [q] as the center of [q] with respect to the set {qi} if the warping functions {γi}, where γi = argmin_{γ∈Γ} ∥q̃ − (qi, γ)∥, have the Karcher mean γid. We will prove the existence of such an element by construction. Algorithm 2: Finding the Center of an Orbit: WLOG, let q be any element of the orbit [q]. 1. For each qi find γi by solving: γi = argmin_{γ∈Γ} ∥q − (qi, γ)∥. 2.
Compute the mean γ̄n of all the {γi} (Definition 3). The center of [q] with respect to {qi} is given by q̃ = (q, γ̄n^{-1}). We need to show that the q̃ resulting from Algorithm 2 satisfies the mean condition in Definition 5. Note that γi is chosen to minimize ∥q − (qi, γ)∥, and also that ∥q̃ − (qi, γ)∥ = ∥(q, γ̄n^{-1}) − (qi, γ)∥ = ∥q − (qi, γ ◦ γ̄n)∥. Therefore, γ*i = γi ◦ γ̄n^{-1} minimizes ∥q̃ − (qi, γ)∥. That is, γ*i is a warping that aligns qi to q̃. To verify the Karcher mean of the {γ*i}, we compute the sum of squared distances Σ_{i=1}^n dFR(γ, γ*i)^2 = Σ_{i=1}^n dFR(γ, γi ◦ γ̄n^{-1})^2 = Σ_{i=1}^n dFR(γ ◦ γ̄n, γi)^2. As γ̄n is already the mean of the {γi}, this sum of squares is minimized when γ = γid. That is, the mean of the {γ*i} is γid. We apply this setup to our problem by finding the center of [µ]n with respect to the SRVFs {qi}. Figure 2: Example of consistent estimation. (Panels, left to right: the true g; the data {fi}; the aligned functions {f̃i}; the estimate of g overlaid on the true g; the estimation error as a function of n.) 3.4 Steps 1-3: Complete Estimation Algorithm Consider the observation model fi = ci(g ◦ γi) + ei, i = 1, . . . , n, where g is an unknown signal, and ci ∈ R+, γi ∈ Γ and ei ∈ R are random. Given the observations {fi}, the goal is to estimate the signal g. To make the system identifiable, we need some constraints on γi, ci, and ei. In this paper, the constraints are: 1) the population mean of the {γi^{-1}} is the identity γid, and 2) the population means of {ci} and {ei} are known, denoted by E(c̄) and E(ē), respectively. We can now use Algorithms 1 and 2 to state the full procedure for function alignment and signal estimation. Complete Estimation Algorithm: Given a set of functions {fi}, i = 1, . . . , n, on [0, 1], and the population means E(c̄) and E(ē), let {qi} denote the SRVFs of the {fi}. 1. Compute the Karcher mean of {[qi]} in S using Algorithm 1; denote it by [µ]n. 2. Find the center of [µ]n with respect to {qi} using Algorithm 2; call it µn. 3. For i = 1, 2, . . .
, n, find γ*i by solving: γ*i = argmin_{γ∈Γ} ∥µn − (qi, γ)∥. 4. Compute the aligned SRVFs q̃i = (qi, γ*i) and the aligned functions f̃i = fi ◦ γ*i. 5. Return the warping functions {γ*i} and the estimated signal ĝ = ((1/n) Σ_{i=1}^n f̃i − E(ē)) / E(c̄). Illustration. We illustrate the estimation process using the quadratically-enveloped sine-wave function g(t) = (1 − (1 − 2t)^2) sin(5πt), t ∈ [0, 1]. We randomly generate n = 50 warping functions {γi} such that the {γi^{-1}} are i.i.d. with mean γid. We also generate i.i.d. sequences {ci} and {ei} from the exponential distribution with mean 1 and the standard normal distribution, respectively. We then compute the functions fi = ci(g ◦ γi) + ei to form the functional data. In Fig. 2, the first panel shows the function g, and the second panel shows the data {fi}. The Complete Estimation Algorithm produces the aligned functions {f̃i = fi ◦ γ*i} shown in the third panel of Fig. 2. In this case, E(c̄) = 1 and E(ē) = 0. The estimated g (red) and the true g (blue) are shown in the fourth panel. Note that the estimate is very successful despite the large variability in the raw data. Finally, we examine the performance of the estimator with respect to the sample size, by repeating this estimation for n equal to 5, 10, 20, 30, and 40. The estimation errors, computed using the L2 norm between the estimated g's and the true g, are shown in the last panel. As we show in the following theoretical development, this estimate converges to the true g as the sample size n grows large. 4 Estimator Consistency and Asymptotics In this section we mathematically demonstrate that the algorithms proposed in Section 3 provide a consistent estimator for the underlying function g. This problem and related ones have been considered previously in several papers, including [14, 9], but we are not aware of any formal statistical solution. First, we establish the following useful result.
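As a complement to the illustration above, the core alignment loop can be exercised end-to-end in a few lines. In the sketch below (Python), the dynamic-programming step of Algorithm 1 is replaced by a brute-force search over the one-parameter family γa(t) = t^a, an assumption made purely to keep the sketch short (this family is closed under composition, since (t^b)^a = t^{ab}):

```python
import numpy as np

t = np.linspace(1e-6, 1.0, 2000)
dt = t[1] - t[0]
nrm = lambda f: np.sqrt(np.sum(f**2) * dt)          # discrete L2 norm

def warp(q, a):
    # group action (q, gamma_a) = (q o gamma_a)*sqrt(gamma_a') for gamma_a(t)=t^a
    return np.interp(t**a, t, q) * np.sqrt(a * t**(a - 1))

# three warped copies of the SRVF of a common signal
qs = [np.sin(2*np.pi*t**a) * np.sqrt(a * t**(a - 1)) for a in (0.7, 1.0, 1.5)]
grid = np.linspace(0.5, 2.0, 151)                   # candidate warps (includes a = 1)

mu = np.mean(qs, axis=0)
cost0 = sum(nrm(q - mu)**2 for q in qs)
for _ in range(10):                                 # steps 2-4 of Algorithm 1
    aligned = [warp(q, grid[np.argmin([nrm(mu - warp(q, a)) for a in grid])])
               for q in qs]
    mu = np.mean(aligned, axis=0)
cost = sum(nrm(q - mu)**2 for q in aligned)
print(cost < 0.5 * cost0)                           # the objective drops sharply
```

The alternating structure (align to the template, then update the template) guarantees that the objective never increases, exactly as in the convergence argument for Algorithm 1.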
Lemma 3 For any q1, q2 ∈ L2 and a constant c > 0, we have argmin_{γ∈Γ} ∥q1 − (q2, γ)∥ = argmin_{γ∈Γ} ∥cq1 − (q2, γ)∥. Corollary 1 For any function q ∈ L2 and constant c > 0, we have γid ∈ argmin_{γ∈Γ} ∥cq − (q, γ)∥. Moreover, if the set {t ∈ [0, 1] : q(t) = 0} has (Lebesgue) measure 0, then γid = argmin_{γ∈Γ} ∥cq − (q, γ)∥. Based on Lemma 3 and Corollary 1, we have the following result on the Karcher mean in the quotient space S. Theorem 1 For a function g, consider a sequence of functions fi(t) = ci g(γi(t)) + ei, where ci is a positive constant, ei is a constant, and γi is a time warping, i = 1, . . . , n. Denote by qg and qi the SRVFs of g and fi, respectively, and let s̄ = (1/n) Σ_{i=1}^n √ci. Then, the Karcher mean of {[qi], i = 1, 2, . . . , n} in S is s̄[qg]. That is, [µ]n ≡ argmin_{[q]} (Σ_{i=1}^n d^2([qi], [q])) = s̄[qg] = s̄{(qg, γ) : γ ∈ Γ}. Next, we present a simple fact about the Karcher mean (see Definition 3) of warping functions. Lemma 4 Given a set {γi ∈ Γ : i = 1, . . . , n} and a γ0 ∈ Γ, if the Karcher mean of {γi} is γ̄, then the Karcher mean of {γi ◦ γ0} is γ̄ ◦ γ0. Theorem 1 ensures that [µ]n belongs to the orbit of [qg] (up to a scale factor), but we are interested in estimating g itself, rather than its orbit. We show in two steps (Theorems 2 and 3) that finding the center of the orbit [µ]n leads to a consistent estimator for g. Theorem 2 Under the same conditions as in Theorem 1, let µ = (s̄qg, γ0), for γ0 ∈ Γ, denote an arbitrary element of the Karcher mean class [µ]n = s̄[qg]. Assume that the set {t ∈ [0, 1] : ġ(t) = 0} has Lebesgue measure zero. If the population Karcher mean of the {γi^{-1}} is γid, then the center of the orbit [µ]n, denoted by µn, satisfies lim_{n→∞} µn = E(s̄)qg. This result shows that asymptotically one can recover the SRVF of the original signal from the Karcher mean of the SRVFs of the observed signals. Next, in Theorem 3, we show that one can also reconstruct g using the aligned functions {f̃i} generated by the Complete Estimation Algorithm in Section 3.
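The scale invariance in Lemma 3 can be sanity-checked numerically before stating Theorem 3. The sketch below (Python) restricts the minimization to the same one-parameter family γa(t) = t^a on a grid, again an assumption made only for tractability; the discrete minimizer undoes the warp and is unaffected by the scale c:

```python
import numpy as np

t = np.linspace(1e-6, 1.0, 4000)
dt = t[1] - t[0]
q1 = np.sin(2*np.pi*t)
q2 = np.sin(2*np.pi*t**1.3) * np.sqrt(1.3 * t**0.3)   # q1 warped by gamma(t) = t^1.3

def cost(scale, a):
    # squared distance ||scale*q1 - (q2, gamma_a)||^2 on the grid
    warped = np.interp(t**a, t, q2) * np.sqrt(a * t**(a - 1))
    return np.sum((scale*q1 - warped)**2) * dt

grid = np.linspace(0.5, 2.0, 301)
best1 = grid[np.argmin([cost(1.0, a) for a in grid])]
best5 = grid[np.argmin([cost(5.0, a) for a in grid])]
print(abs(best1 - 1/1.3) < 0.02)   # True: the minimizer undoes the warp, a ~ 1/1.3
print(abs(best1 - best5) < 1e-9)   # True: it does not depend on the scale c
```

The reason is visible in the expansion ∥cq1 − (q2, γ)∥^2 = c^2∥q1∥^2 + ∥q2∥^2 − 2c⟨q1, (q2, γ)⟩: since the action preserves norms, only the inner product depends on γ.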
Theorem 3 Under the same conditions as in Theorem 2, let γ*i = argmin_{γ∈Γ} ∥(qi, γ) − µn∥ and f̃i = fi ◦ γ*i. If we denote c̄ = (1/n) Σ_{i=1}^n ci and ē = (1/n) Σ_{i=1}^n ei, then lim_{n→∞} (1/n) Σ_{i=1}^n f̃i = E(c̄)g + E(ē). 5 Application to Signal Alignment In this section we focus on function alignment and compare the alignment performance with that of some previous methods on several datasets. In this case, the given signals are viewed as the {fi} in the previous setup; we estimate the center of the orbit and then use it to align all the signals. The datasets include three real experimental applications, listed below and shown in Column 1 of Fig. 3. 1. Real Data 1. Berkeley Growth Data: The Berkeley growth dataset for 39 male subjects [11]. For better illustration, we use the first derivatives of the growth curves (i.e. the growth velocities) as the functions {fi} in our analysis. 2. Real Data 2. Handwriting Signature Data: 20 handwritten signatures and the acceleration functions along the signature curves [8]. Let (x(t), y(t)) denote the x and y coordinates of a signature traced as a function of time t. We study the acceleration functions f(t) = √(ẍ(t)^2 + ÿ(t)^2) of the signatures. 3. Real Data 3. Neural Spike Data: Spiking activity of one motor cortical neuron in a Macaque monkey, recorded during arm-movement behavior [16]. The spike trains, smoothed using a Gaussian kernel, over 10 movement trials are used in this alignment analysis. There are no standard criteria for evaluating function alignment in the current literature. Here we use the following three criteria so that, together, they provide a comprehensive evaluation; fi and f̃i, i = 1, . . . , N, denote the original and the aligned functions, respectively. 1. Least Squares: ls = (1/N) Σ_{i=1}^N [∫ (f̃i(t) − (1/(N−1)) Σ_{j≠i} f̃j(t))^2 dt] / [∫ (fi(t) − (1/(N−1)) Σ_{j≠i} fj(t))^2 dt]. ls measures the cross-sectional variance of the aligned functions, relative to the original values.
The smaller the value of ls, the better the alignment is in general. Figure 3: Empirical evaluation of four methods on 3 real datasets, with the alignment performance computed using the three criteria (ls, pc, sls); the best cases are shown in boldface. (Plots omitted; the (ls, pc, sls) values are:)
            PACE [11]           SMR [4]             MBM [5]             F-R
Growth-male (0.91, 1.09, 0.68)  (0.45, 1.17, 0.77)  (0.70, 1.17, 0.62)  (0.64, 1.18, 0.31)
Signature   (0.91, 1.18, 0.84)  (0.62, 1.59, 0.31)  (0.64, 1.57, 0.46)  (0.56, 1.79, 0.31)
Neural data (0.87, 1.35, 1.10)  (0.69, 2.54, 0.95)  (0.48, 3.06, 0.40)  (0.40, 3.77, 0.28)
2. Pairwise Correlation: pc = [Σ_{i≠j} cc(f̃i, f̃j)] / [Σ_{i≠j} cc(fi, fj)], where cc(f, g) is the pairwise Pearson correlation between functions. Large values of pc indicate good synchronization. 3. Sobolev Least Squares: sls = [Σ_{i=1}^N ∫ (f̃i′(t) − (1/N) Σ_{j=1}^N f̃j′(t))^2 dt] / [Σ_{i=1}^N ∫ (fi′(t) − (1/N) Σ_{j=1}^N fj′(t))^2 dt]. This criterion measures the total cross-sectional variance of the derivatives of the aligned functions, relative to the original value. The smaller the value of sls, the better the synchronization the method achieves. We compare our Fisher-Rao (F-R) method with the Tang-Müller method [11], provided in the Principal Analysis by Conditional Expectation (PACE) package, the self-modeling registration (SMR) method presented in [4], and the moment-based matching (MBM) technique presented in [5]. Fig. 3 summarizes the values of (ls, pc, sls) for these four methods on the 3 real datasets. From the results, we can see that the F-R method does uniformly well in functional alignment under all the evaluation metrics.
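All three criteria are straightforward to compute from sampled functions. The sketch below (Python; the helper names and the shifted-sine test data are our own) evaluates ls, pc, and sls on a toy example in which the aligned functions are identical:

```python
import numpy as np

def tint(g, t):
    # trapezoidal integral along the last axis
    return 0.5 * np.sum((g[..., 1:] + g[..., :-1]) * np.diff(t), axis=-1)

def ls(F, Fa, t):          # least squares (rows: original / aligned functions)
    def terms(G):
        N = G.shape[0]
        return np.array([tint((G[i] - (G.sum(0) - G[i]) / (N - 1))**2, t)
                         for i in range(N)])
    return np.mean(terms(Fa) / terms(F))

def pc(F, Fa):             # pairwise correlation
    tot = lambda G: np.corrcoef(G).sum() - G.shape[0]   # sum over i != j
    return tot(Fa) / tot(F)

def sls(F, Fa, t):         # Sobolev least squares, on the derivatives
    def tot(G):
        D = np.gradient(G, t, axis=1)
        return np.sum(tint((D - D.mean(0))**2, t))
    return tot(Fa) / tot(F)

t = np.linspace(0.0, 1.0, 800)
F = np.stack([np.sin(2*np.pi*(t + s)) for s in (0.0, 0.1, 0.2)])  # shifted sines
Fa = np.stack([np.sin(2*np.pi*t)] * 3)                            # perfectly aligned
print(ls(F, Fa, t) < 1e-12, sls(F, Fa, t) < 1e-12)   # True True: near-zero residual
print(pc(F, Fa) > 1.0)                               # True: correlation improves
```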
We have found that the ls criterion is sometimes misleading, in the sense that a low value can result even when the functions are not well aligned. This is the case, for example, for the male growth data under the SMR method: there ls = 0.45, while for our method ls = 0.64, even though it is easy to see that the latter produces a better alignment. On the other hand, the sls criterion seems to correlate best with a visual evaluation of the alignment. The neural spike train data is the most challenging, and no method except ours does a good job on it. 6 Summary In this paper we have described a parameter-free approach for reconstructing an underlying signal from given functions with random warpings, scalings, and translations. The basic idea is to use the Fisher-Rao Riemannian metric and the resulting geodesic distance to define a proper distance, called the elastic distance, between the warping orbits of SRVFs. This distance is used to compute a Karcher mean of the orbits, and a template is selected from the mean orbit using the additional condition that the mean of the warping functions be the identity. By applying these warpings to the original functions, we obtain a consistent estimator of the underlying signal. One interesting application of this framework is in aligning functions with significant x-variability. We show that the proposed Fisher-Rao method provides better alignment performance than state-of-the-art methods on several real experimental datasets. References [1] S. Amari. Differential Geometric Methods in Statistics. Lecture Notes in Statistics, Vol. 28. Springer, 1985. [2] N. N. Čencov. Statistical Decision Rules and Optimal Inferences, volume 53 of Translations of Mathematical Monographs. AMS, Providence, USA, 1982. [3] B. Efron. Defining the curvature of a statistical problem (with applications to second order efficiency). Ann. Statist., 3:1189–1242, 1975. [4] D. Gervini and T. Gasser. Self-modeling warping functions.
Journal of the Royal Statistical Society, Ser. B, 66:959–971, 2004. [5] G. James. Curve alignment by moments. Annals of Applied Statistics, 1(2):480–501, 2007. [6] R. E. Kass and P. W. Vos. Geometric Foundations of Asymptotic Inference. John Wiley & Sons, Inc., 1997. [7] A. Kneip and T. Gasser. Statistical tools to analyze data representing a sample of curves. The Annals of Statistics, 20:1266–1305, 1992. [8] A. Kneip and J. O. Ramsay. Combining registration and fitting for functional models. Journal of the American Statistical Association, 103(483), 2008. [9] J. O. Ramsay and X. Li. Curve registration. Journal of the Royal Statistical Society, Ser. B, 60:351–363, 1998. [10] C. R. Rao. Information and accuracy attainable in the estimation of statistical parameters. Bulletin of the Calcutta Mathematical Society, 37:81–91, 1945. [11] R. Tang and H. G. Müller. Pairwise curve synchronization for functional data. Biometrika, 95(4):875–889, 2008. [12] H. L. Van Trees. Detection, Estimation, and Modulation Theory, vol. I. John Wiley, N.Y., 1971. [13] M. Tsang, J. H. Shapiro, and S. Lloyd. Quantum theory of optical temporal phase and instantaneous frequency. Phys. Rev. A, 78(5):053820, Nov. 2008. [14] K. Wang and T. Gasser. Alignment of curves by dynamic time warping. Annals of Statistics, 25(3):1251–1276, 1997. [15] A. Willsky. Fourier series and estimation on the circle with applications to synchronous communication–I: Analysis. IEEE Transactions on Information Theory, 20(5):577–583, Sep. 1974. [16] W. Wu and A. Srivastava. Towards statistical summaries of spike train data. Journal of Neuroscience Methods, 195:107–110, 2011.
Multi-armed bandits on implicit metric spaces Aleksandrs Slivkins Microsoft Research Silicon Valley Mountain View, CA 94043 slivkins at microsoft.com Abstract The multi-armed bandit (MAB) setting is a useful abstraction of many online learning tasks which focuses on the trade-off between exploration and exploitation. In this setting, an online algorithm has a fixed set of alternatives (“arms”), and in each round it selects one arm and then observes the corresponding reward. While the case of small number of arms is by now well-understood, a lot of recent work has focused on multi-armed bandits with (infinitely) many arms, where one needs to assume extra structure in order to make the problem tractable. In particular, in the Lipschitz MAB problem there is an underlying similarity metric space, known to the algorithm, such that any two arms that are close in this metric space have similar payoffs. In this paper we consider the more realistic scenario in which the metric space is implicit – it is defined by the available structure but not revealed to the algorithm directly. Specifically, we assume that an algorithm is given a tree-based classification of arms. For any given problem instance such a classification implicitly defines a similarity metric space, but the numerical similarity information is not available to the algorithm. We provide an algorithm for this setting, whose performance guarantees (almost) match the best known guarantees for the corresponding instance of the Lipschitz MAB problem. 1 Introduction In a multi-armed bandit (MAB) problem, a player is presented with a sequence of trials. In each round, the player chooses one alternative from a set of alternatives (“arms”) based on the past history, and receives the payoff associated with this alternative. The goal is to maximize the total payoff of the chosen arms. 
The multi-armed bandit setting was introduced in the 1950s and has since been studied intensively in Operations Research, Economics and Computer Science; see e.g. [8] for background. This setting is often used to model the tradeoff between exploration and exploitation, which is the principal issue in sequential decision-making under uncertainty. One standard way to evaluate the performance of a multi-armed bandit algorithm is regret, defined as the difference between the expected payoff of an optimal arm and that of the algorithm. By now the multi-armed bandit problem with a small, finite number of arms is quite well understood (e.g. see [22, 3, 2]). However, if the set of arms is exponentially or infinitely large, the problem becomes intractable unless we make further assumptions about the problem instance. Essentially, an MAB algorithm needs to find a needle in a haystack: for each algorithm there are inputs on which it performs as badly as random guessing. Bandit problems with large sets of arms have received considerable attention, e.g. [1, 5, 23, 12, 21, 10, 24, 25, 11, 4, 16, 20, 7, 19]. The common theme in these works is to assume a certain structure on the payoff functions. Assumptions of this type are natural in many applications, and often lead to efficient learning algorithms; e.g. see [18, 8] for background. In particular, the line of work [1, 17, 4, 20, 7, 19] considers the Lipschitz MAB problem, a broad and natural bandit setting in which the structure is induced by a metric on the set of arms.1 In this setting an algorithm is given a metric space (X, D), where X is the set of arms, which represents the available similarity information (information on the similarity between arms). Payoffs are stochastic: the payoff from choosing arm x is an independent random sample with expectation µ(x). The metric space is related to payoffs via the following Lipschitz condition:2 |µ(x) − µ(y)| ≤ D(x, y) for all x, y ∈ X.
(1) Performance guarantees consider the regret R(t) as a function of time t, and focus on the asymptotic dependence of R(·) on a suitably defined "dimensionality" of the problem instance (X, D, µ). Various upper and lower bounds of the form R(t) = Θ̃(t^γ), γ < 1, have been proved. We relax an important assumption in Lipschitz MAB, namely that the available similarity information provides numerical values in the sense of (1).3 Specifically, following [21, 24, 25], we assume that an algorithm is (only) given a taxonomy on arms: a tree-based classification modeled by a rooted tree T whose leaf set is X. The idea is that any two arms in the same subtree are likely to have similar payoffs. Motivations include contextual advertising and web search with topical taxonomies, e.g. [25, 6, 29, 27], Monte-Carlo planning [21, 24], and Computer Go [13, 14]. We call the above formulation the Taxonomy MAB problem; a problem instance is a triple (X, T, µ). Crucially, in Taxonomy MAB no numerical similarity information is explicitly revealed. All prior algorithms for Lipschitz MAB (and in particular, all algorithms in [20, 7]) are parameterized by some numerical similarity information, and therefore do not directly apply to Taxonomy MAB. One natural way to quantify the extent of similarity between the arms in a given subtree is via the maximum difference in expected payoffs. Specifically, for each internal node v we define the width of the corresponding subtree T(v) to be W(v) = sup_{x,y∈X(v)} |µ(x) − µ(y)|, where X(v) is the set of leaves in T(v). Note that the subtree widths are non-increasing from root to leaves. A standard notion of distance induced by the subtree widths, henceforth called the implicit distance, is as follows: Dimp(x, y) is the width of the subtree rooted at the least common ancestor of the leaves x, y. It is immediate that this is indeed a metric, and moreover that it satisfies (1). In fact, Dimp(x, y) is the smallest "width-based" distance that satisfies (1).
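The implicit distance is easy to compute on a small example. In the sketch below (Python; the tree and the payoffs µ are invented for illustration), Dimp(x, y) is the width of the subtree rooted at the least common ancestor, and the Lipschitz condition (1) can be checked for every pair of leaves:

```python
# A toy taxonomy: internal nodes map to their children; leaves carry payoffs mu.
tree = {"root": ["A", "B"], "A": ["x1", "x2"], "B": ["x3", "x4"]}
mu = {"x1": 0.875, "x2": 0.75, "x3": 0.25, "x4": 0.125}

def leaves(v):
    return [v] if v not in tree else [l for c in tree[v] for l in leaves(c)]

def width(v):
    vals = [mu[l] for l in leaves(v)]
    return max(vals) - min(vals)

def lca(x, y, v="root"):
    for c in tree.get(v, []):
        if x in leaves(c) and y in leaves(c):
            return lca(x, y, c)
    return v

def d_imp(x, y):
    return 0.0 if x == y else width(lca(x, y))

print(d_imp("x1", "x2"))   # 0.125, the width of subtree A
print(d_imp("x1", "x3"))   # 0.75, the width of the root
# the Lipschitz condition (1): |mu(x) - mu(y)| <= D_imp(x, y) for all pairs
print(all(abs(mu[x] - mu[y]) <= d_imp(x, y) for x in mu for y in mu))   # True
```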
If the widths are strictly decreasing, T can be reconstructed from Dimp. Thus, an instance (X, T , µ) of Taxonomy MAB naturally induces an instance (X, Dimp, µ) of Lipschitz MAB which (assuming the widths are strictly decreasing) encodes all relevant information. The crucial distinction is that in Taxonomy MAB the metric space (X, Dimp) is implicit: the subtree widths are not revealed to the algorithm. In particular, the algorithms in [20, 7] do not apply. We view Lipschitz MAB as a performance benchmark for Taxonomy MAB. We are concerned with the following question: can an algorithm for Taxonomy MAB perform as if it was given the implicit metric space (X, Dimp)? More formally, we ask whether it is possible to obtain guarantees for Taxonomy MAB that (almost) match the state-of-art for Lipschitz MAB. We answer this question in the affirmative as long as the implicit metric space (X, Dimp) has a small doubling constant (see Section 2 for a milder condition). We provide an algorithm with guarantees that are almost identical to those for the zooming algorithm in [20].4 Our algorithm proceeds by estimating subtree widths of near-optimal subtrees. Thus, we encounter a two-pronged exploration-exploitation trade-off: samples from a given subtree reveal information not only about payoffs but also about the width, whereas in Lipschitz MAB we only need to worry about the payoffs. Dealing with this more complicated trade-off is the main new conceptual hurdle (which leads to some technical complications such as the proof of Lemma 4.4). These complications aside, our algorithm is similar to those in [17, 20] in that it maintains a partition of the space of arms into regions (in this case, subtrees) so that each region is treated as a “meta-arm”, and this partition is adapted to the high-payoff regions. 1This problem has been explicitly defined in [20]. 
Preceding work [1, 17, 9, 4] considered a few special cases, such as a one-dimensional real interval with a metric defined by D(x, y) = |x − y|^α, α ∈ (0, 1]. 2 The Lipschitz constant is cLip = 1 without loss of generality: else, one could rescale the metric to cLip × D. 3 In the full version of [20] the setting is relaxed so that (1) needs to hold only if x is optimal, and the distances between non-optimal points do not need to be explicitly known; [7] provides a similar result. 4 The guarantees in [7] are similar but slightly different technically. 1.1 Preliminaries The Taxonomy MAB problem and the implicit metric space (X, Dimp) are defined as in Section 1. We assume stochastic payoffs [2]: in each round t the algorithm chooses a point x = xt ∈ X and observes an independent random sample from a payoff distribution Ppayoff(x) with support [0, 1] and expectation µ(x).5 The payoff function µ : X → [0, 1] is not revealed to the algorithm. The goal is to minimize the regret with respect to the best arm: R(T) ≜ µ* T − E[Σ_{t=1}^T µ(xt)] = E[Σ_{t=1}^T ∆(xt)], (2) where µ* ≜ sup_{x∈X} µ(x) is the maximal expected payoff, and ∆(x) ≜ µ* − µ(x) is the "badness" of arm x. An arm x ∈ X is called optimal if µ(x) = µ*. We will assume that the number of arms is finite (but possibly very large). The extension to infinitely many arms (which does not require new algorithmic ideas) is not included, to simplify the presentation. Also, we will assume a known time horizon (total number of rounds), denoted Thor. Our guarantees are in terms of the zooming dimension [20] of (X, Dimp, µ), a concept that takes into account both the dimensionality of the metric space and the "goodness" of the payoff function. Below we specialize the definition from [20] to Taxonomy MAB. Definition 1.1 (zooming dimension). For X′ ⊂ X, define the covering number Ncov_δ(X′) as the smallest number of subtrees of width at most δ that cover X′. Let Xδ ≜ {x ∈ X : 0 < ∆(x) ≤ δ}.
The zooming dimension of a problem instance I = (X, T, µ), with multiplier c, is

ZoomDim(I, c) ≜ inf{ d ≥ 0 : N^cov_{δ/8}(X_δ) ≤ c δ^{−d} for all δ > 0 }.   (3)

In other words, we consider a covering property N^cov_{δ/8}(X_δ) ≤ c δ^{−d}, and define the zooming dimension as the smallest d such that this covering property holds for all δ > 0. The zooming dimension essentially coincides with the covering dimension of (X, D)^6 for the worst-case payoff function µ, but can be (much) smaller when µ is "benign". In particular, the zooming dimension would "ignore" a subtree with high covering dimension but significantly sub-optimal payoffs. The doubling constant c_DBL of a metric space is the smallest k such that any ball can be covered by k balls of half the radius. (In our case, any subtree can be covered by k subtrees of half the width.) The doubling constant has been a standard notion in the theoretical computer science literature since [15]; since then, it has been used to characterize tractable problem instances for a variety of problems. It is known that c_DBL = O(2^d) for any bounded subset S ⊂ R^{d′} of linear dimension d, under any metric ℓ_p, p ≥ 1. Moreover, c_DBL ≥ c 2^d if d is the covering dimension with multiplier c.

2 Statement of results

We will prove that our algorithm (TaxonomyZoom) satisfies the following regret bound: for each instance I of Taxonomy MAB, each c > 0 and each T ≤ T_hor,

R(T) ≤ O(c K_I log T_hor)^{1/(2+d)} × T^{1−1/(2+d)}, where d = ZoomDim(I, c).   (4)

We will bound the factor K_I below. For K_I = 1 this is the guarantee for the zooming algorithm in [20] for the corresponding instance (X, D_imp, µ) of Lipschitz MAB. Note that the definition of the zooming dimension allows a trade-off between c and d, and we obtain the optimal trade-off since (4) holds for all values of c at once. Following the prior work on Lipschitz MAB, we identify the exponent in (4) as the crucial parameter, as long as the multiplier c is sufficiently small.^7 Our first (and crude) bound for K_I is in terms of the doubling constant of (X, D_imp).
Theorem 2.1 (Crude bound). Given an upper bound c′_DBL on the doubling constant of (X, D_imp), TaxonomyZoom achieves (4) with K_I = f(c′_DBL) log |X|, where f(n) = n 2^n.

Footnote 5: Other than the support and expectation, the "shape" of P_payoff(x) is not essential for this paper.
Footnote 6: The covering dimension is defined as in (3), replacing N^cov_{δ/8}(X_δ) with N^cov_δ(X).
Footnote 7: One can reduce ZoomDim by making c huge, e.g. ZoomDim = 0 for c = |X|. However, this is not likely to lead to useful regret bounds. A similar trade-off (dimension vs. multiplier) is implicit in [7].

Our main result (which implies Theorem 2.1) uses a more efficient bound for K_I. Recall that in Taxonomy MAB subtree widths are not revealed, and the algorithm has to use sampling to estimate them. Informally, the taxonomy is useful for our purposes if and only if subtree widths can be efficiently estimated using random sampling. We quantify this as a parameter called quality, and bound K_I in terms of this parameter. We use simple random sampling: start at a tree node v and choose a branch uniformly at random at each junction. Let P(u|v) be the probability that node u is reached starting from v. The probabilities P(·|v) induce a distribution on X(v), the leaf set of subtree T(v). A sample from this distribution is called a random sample from T(v), with expected payoff µ(v) ≜ ∑_{x∈X(v)} µ(x) P(x|v).

Definition 2.2. The quality of the taxonomy for a given problem instance is the largest number q ∈ (0, 1) with the following property: for each subtree T(v) containing an optimal arm, there exist tree nodes u, u′ ∈ T(v) such that P(u|v) and P(u′|v) are at least q and

|µ(u) − µ(u′)| ≥ ½ W(v).   (5)

One could use the pair u, u′ in Definition 2.2 to obtain reliable estimates for W(v). The definition focuses on the difficulty of obtaining such a pair via random sampling from T(v).
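The random-descent sampling just described is easy to make concrete. The following sketch (a toy tree and function names of our own, not from the paper) draws a random sample from T(v) and computes the reachability probability P(u|v) as a product of inverse branching factors along the path, and then µ(v) per its definition above:

```python
import random

# Toy taxonomy: node -> list of children (leaves map to []).
# TREE, random_leaf, reach_prob are illustrative names, not the paper's.
TREE = {
    "root": ["a", "b"],
    "a": ["x1", "x2"],
    "b": ["x3", "x4", "x5"],
    "x1": [], "x2": [], "x3": [], "x4": [], "x5": [],
}
PARENT = {c: p for p, kids in TREE.items() for c in kids}

def random_leaf(v):
    """A random sample from T(v): choose a branch uniformly at each junction."""
    while TREE[v]:
        v = random.choice(TREE[v])
    return v

def reach_prob(u, v):
    """P(u|v): probability that the random descent from v passes through u."""
    p = 1.0
    while u != v:
        if u not in PARENT:
            return 0.0  # u does not lie in the subtree T(v)
        u = PARENT[u]
        p /= len(TREE[u])
    return p

# mu(v) = sum over leaves x of mu(x) * P(x|v), per the definition above.
MU = {"x1": 0.9, "x2": 0.7, "x3": 0.4, "x4": 0.3, "x5": 0.2}
mu_root = sum(MU[x] * reach_prob(x, "root") for x in MU)
```

Repeated calls to `random_leaf(v)` with empirical averaging then give unbiased estimates of µ(v), which is exactly the sampling primitive the algorithm in Section 3 relies on.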
The definition is flexible: it allows u and u′ to be at different depths (which is useful if node degrees are large and non-uniform), and the widths of other internal nodes in T(v) cannot adversely impact quality. The constant ½ in (5) is arbitrary; we fix it for convenience. For a particularly simple example, consider a binary taxonomy such that for each subtree T(v) containing an optimal arm there exist grandchildren u, u′ of v that satisfy (5). For instance, such u, u′ exist if the width of each grandchild of v is at most ¼ W(v). Then quality ≥ ¼.

Theorem 2.3 (Main result). Assume a lower bound q ≤ quality(I) is known. Then TaxonomyZoom achieves (4) with K_I = (deg/q) log |X|, where deg is the degree of the taxonomy.

Theorem 2.1 follows because, letting c_DBL be the doubling constant of (X, D_imp), all node degrees are at most c_DBL and moreover quality ≥ 2^{−c_DBL} (we omit the proof from this version).

Discussion. The guarantee in Theorem 2.3 is instance-dependent: it depends on deg/quality and ZoomDim, and is meaningful only if these quantities are small compared to the number of arms (informally, we will call such problem instances "benign"). Also, the algorithm needs to know a non-trivial lower bound on quality; very conservative estimates would not suffice. However, underestimating quality (and likewise overestimating T_hor) is relatively inexpensive as long as the "influence" of these parameters on regret is eventually dominated by the T^{1−1/(2+d)} term. For benign problem instances, the benefit of using the taxonomy is the vastly improved dependence on the number of arms. Without a taxonomy or any other structure, the regret of any algorithm for stochastic MAB scales linearly in the number of (near-optimal) arms, for a sufficiently large t. Specifically, let N_δ be the number of arms x such that δ/2 < ∆(x) ≤ δ. Then the worst-case regret (over all problem instances) cannot be better than R(t) = min(δt, Ω(N_δ/δ)).
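The covering numbers behind these bounds (Definition 1.1) can be computed for an explicit instance by a greedy top-down pass, provided the subtree widths are known to an offline evaluator (in Taxonomy MAB the algorithm itself never sees them). A minimal sketch, with a toy instance and identifiers of our own:

```python
# Toy taxonomy with known widths W(v); widths are NOT revealed to the
# algorithm in Taxonomy MAB -- this is only for evaluating an instance offline.
TREE = {"r": ["a", "b"], "a": ["x1", "x2"], "b": ["x3", "x4"],
        "x1": [], "x2": [], "x3": [], "x4": []}
WIDTH = {"r": 1.0, "a": 0.3, "b": 0.4,
         "x1": 0.0, "x2": 0.0, "x3": 0.0, "x4": 0.0}

def leaves(v):
    return [v] if not TREE[v] else [x for c in TREE[v] for x in leaves(c)]

def covering_number(X_prime, delta, v="r"):
    """N^cov_delta(X_prime): fewest subtrees of width <= delta covering X_prime.
    Greedy is exact here: subtrees are nested or disjoint, so every maximal
    subtree of width <= delta that meets X_prime belongs to a minimum cover."""
    if not (set(leaves(v)) & X_prime):
        return 0
    if WIDTH[v] <= delta:
        return 1
    return sum(covering_number(X_prime, delta, c) for c in TREE[v])
```

Sweeping δ and checking the covering property N^cov_{δ/8}(X_δ) ≤ c δ^{−d} of (3) then estimates ZoomDim for a given instance and multiplier c.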
An alternative approach to MAB problems on trees (without knowing the "widths") is given by the "tree bandit algorithms" explored in [21, 24]. Here, for each tree node v there is a separate, independent copy of UCB1 [2] or a UCB1-style index algorithm (call it A_v), so that the "arms" for A_v correspond to children u of v, and selecting a child u in a given round corresponds to playing A_u in this round. [21, 24] report successful empirical performance of such algorithms on some examples. However, regret bounds for these algorithms do not scale as well with the number of arms: even if the tree widths are given, then letting ∆_min ≜ min_{x∈X: ∆(x)>0} ∆(x), the regret bound is proportional to |X_δ|/∆_min (where X_δ is as in Definition 1.1), whereas the regret bound in Theorem 2.3 is (essentially) in terms of the covering numbers N^cov_{δ/8}(X_δ).

Footnote 8: This is implicit from the lower-bounding analysis in [22] and [3].

3 Main algorithm

Our algorithm TaxonomyZoom(T_hor, q) is parameterized by the time horizon T_hor and the quality parameter q ≤ quality. In each round the algorithm selects one of the tree nodes, say v, and plays a randomly sampled arm x from T(v). We say that a subtree T(u) is hit in this round if u ∈ T(v) and x ∈ T(u). For each tree node v and time t, let n_t(v) be the number of times the subtree T(v) has been hit by the algorithm before time t, and let µ_t(v) be the corresponding average reward. Note that E[µ_t(v) | n_t(v) > 0] = µ(v). Define the confidence radius of v at time t as

rad_t(v) ≜ √( 8 log(T_hor |X|) / (2 + n_t(v)) ).   (6)

The meaning of the confidence radius is that |µ_t(v) − µ(v)| ≤ rad_t(v) with high probability. For each tree node v and time t, define the index of v at time t as

I_t(v) ≜ µ_t(v) + (1 + 2 k_A) rad_t(v), where k_A ≜ 4 √(2/q).   (7)

Here we posit µ_t(v) = 0 if n_t(v) = 0. Define the width estimate^9

W_t(v) ≜ max(0, U_t(v) − L_t(v)), where U_t(v) ≜ max_{u∈T(v), s≤t} ( µ_s(u) − rad_s(u) ) and L_t(v) ≜ min_{u∈T(v), s≤t} ( µ_s(u) + rad_s(u) ).
(8)

Here U_t(v) is the best available lower confidence bound on max_{x∈X(v)} µ(x), and L_t(v) is the best available upper confidence bound on min_{x∈X(v)} µ(x). If both bounds hold, then W_t(v) ≤ W(v). Throughout the phase, some tree nodes are designated active. We maintain the following invariant:

W_t(v) < k_A rad_t(v) for each active internal node v.   (9)

TaxonomyZoom(T_hor, q) operates as follows. Initially the only active tree node is the root. In each round, the algorithm performs the following three steps:

(S1) While Invariant (9) is violated by some v, de-activate v and activate all its children.
(S2) Select an active tree node v with the maximal index (7), breaking ties arbitrarily.
(S3) Play a randomly sampled arm from T(v).

Note that each node is activated and deactivated at most once.

Implementation details. If an explicit representation of the taxonomy can be stored in memory, then the following simple implementation is possible. For each tree node v, we store several statistics: n_t, µ_t, U_t and L_t. Further, we maintain a linked list of active nodes, sorted by the index. Suppose in a given round t, a subtree v is chosen, and an arm x is played. We update the statistics by going up the x → v path in the tree (note that only the statistics on this path need to be updated). This update can be done in time O(depth(x)). Then one can check whether Invariant (9) holds for a given node in time O(1). So step (S1) of the algorithm can be implemented in time O(1 + N), where N is the number of nodes activated during this step. Finally, the linked list of active nodes can be updated in time O(1 + N). Then the selections in steps (S2) and (S3) are done in time O(1).

Lemma 3.1. TaxonomyZoom can be implemented with O(1) storage per tree node, so that in each round the time complexity is O(N + depth(x)), where N is the number of nodes activated in step (S1), and x is the arm chosen in step (S3).
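The loop (S1)-(S3), together with the quantities (6)-(8), can be sketched end-to-end. This is our own minimal single-phase rendering on a toy instance (the tree, the Bernoulli payoff model, and all identifiers are illustrative, not the authors' code):

```python
import math
import random

random.seed(0)

# Toy instance: node -> children; leaf -> expected Bernoulli payoff.
TREE = {"r": ["a", "b"], "a": ["x1", "x2"], "b": ["x3", "x4"],
        "x1": [], "x2": [], "x3": [], "x4": []}
MU = {"x1": 0.9, "x2": 0.7, "x3": 0.4, "x4": 0.2}
THOR, Q = 3000, 0.25
KA = 4 * math.sqrt(2 / Q)                    # k_A from eq. (7)

n = {v: 0 for v in TREE}                     # n_t(v): hits of subtree T(v)
tot = {v: 0.0 for v in TREE}                 # summed rewards for T(v)
U = {v: 0.0 for v in TREE}                   # running bound U_t(v), eq. (8)
L = {v: 1.0 for v in TREE}                   # running bound L_t(v), eq. (8)

def rad(v):                                  # confidence radius, eq. (6)
    return math.sqrt(8 * math.log(THOR * len(MU)) / (2 + n[v]))

def index(v):                                # index, eq. (7); mu_t = 0 if unvisited
    avg = tot[v] / n[v] if n[v] else 0.0
    return avg + (1 + 2 * KA) * rad(v)

active = ["r"]
for t in range(THOR):
    changed = True
    while changed:                           # (S1): restore invariant (9)
        changed = False
        for v in list(active):
            if TREE[v] and max(0.0, U[v] - L[v]) >= KA * rad(v):
                active.remove(v)
                active += TREE[v]
                changed = True
    v = max(active, key=index)               # (S2): maximal index
    path = [v]                               # (S3): random descent from v
    while TREE[path[-1]]:
        path.append(random.choice(TREE[path[-1]]))
    reward = float(random.random() < MU[path[-1]])
    for u in path:                           # every node on the path is "hit"
        n[u] += 1
        tot[u] += reward
    for i, u in enumerate(path):             # update width estimates, eq. (8)
        lcb, ucb = tot[u] / n[u] - rad(u), tot[u] / n[u] + rad(u)
        for a in path[:i + 1]:               # u lies in T(a) for these a
            U[a] = max(U[a], lcb)
            L[a] = min(L[a], ucb)
```

Note that with the theoretically mandated constants, k_A rad_t(v) exceeds any possible width estimate for a horizon this small, so the root is never de-activated on this toy run; this echoes the remark in Section 5 that shrinking the exploration constant is what makes such index algorithms zoom in practice.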
Sometimes it may be feasible (and more space-efficient) to represent the taxonomy implicitly, so that a tree node is expanded only when needed. Specifically, suppose the following interface is provided: given a tree node v, return all its children and an arbitrary arm x ∈ T(v). Then TaxonomyZoom can be implemented so that it only stores the statistics for each node u such that P(u|v) ≥ q for some active node v (rather than for all tree nodes).^10 The running times are as in Lemma 3.1.

Footnote 9: Defining U_t, L_t in (8) via s ≤ t (rather than s = t) improves performance, but is not essential for the analysis.
Footnote 10: The algorithm needs to be modified slightly; we leave the details to the full version.

4 Analysis: proof of Theorem 2.3

First, let us fix some notation. We will focus on the regret up to a fixed time T ≤ T_hor. In what follows, let d = ZoomDim(I, c) for some fixed c > 0. Recall the notation X_δ ≜ {x ∈ X : 0 < ∆(x) ≤ δ} from Definition 1.1. Here δ is the "distance scale"; we will be interested in δ ≥ δ_0, for

δ_0 ≜ (K/T)^{1/(d+2)}, where K ≜ O(c · deg · k_A^2 log T_hor).   (10)

We identify a certain high-probability behavior of the algorithm, and argue deterministically conditional on the event that this behavior actually holds.

Definition 4.1. An execution of TaxonomyZoom is called clean if for each time t ≤ T and all tree nodes v the following two properties hold:

(P1) |µ_t(v) − µ(v)| ≤ rad_t(v) as long as n_t(v) > 0.
(P2) If u ∈ T(v), then n_t(v) P(u|v) ≥ 8 log T implies n_t(u) ≥ ½ n_t(v) P(u|v).

Note that in a clean execution the quantities in (8) satisfy the desired high-confidence bounds: U_t(v) ≤ max_{x∈X(v)} µ(x) and L_t(v) ≥ min_{x∈X(v)} µ(x), which implies W(v) ≥ W_t(v).

Lemma 4.2. An execution of TaxonomyZoom is clean with probability at least 1 − 2 T_hor^{−2}.

Proof. For part (P1), fix a tree node v and let ζ_j be the payoff in the j-th round that v has been hit. Then { ∑_{j=1}^{n} (ζ_j − µ(v)) }_{n=1..T} is a martingale.^11 Let ζ̄_n ≜ (1/n) ∑_{j=1}^{n} ζ_j be the n-th average.
Then, by the Azuma-Hoeffding inequality, for any n ≤ T_hor we have

Pr[ |ζ̄_n − µ(v)| > r(n) ] ≤ (T_hor |X|)^{−2}, where r(n) ≜ √( 8 log(T_hor |X|) / (2 + n) ).   (11)

Note that rad_t(v) = r(n_t(v)). We obtain (P1) by taking the union bound for (11) over all nodes v and all n ≤ T. (This is the only place where we use the log |X| term in (6).) Part (P2) is proved via a similar application of martingales and the Azuma-Hoeffding inequality.

From now on we will argue about a clean execution. Recall that, by the definition of W(·),

µ(v) ≤ µ(u) + W(v) for any tree node u ∈ T(v).   (12)

The crux of the proof of Theorem 2.3 is that at all times the maximal index is at least µ*.

Lemma 4.3. Consider a clean execution of TaxonomyZoom(T_hor, q). Then the following holds: in any round t ≤ T_hor, at any point in the execution such that the invariant (9) holds, there exists an active tree node v* such that I_t(v*) ≥ µ*.

Proof. Fix an optimal arm x* ∈ X. Note that in each round t, there exists an active tree node v*_t such that x* ∈ T(v*_t). (One can prove this by induction on t, using the (de)activation rule (S1) in TaxonomyZoom.) Fix round t and the corresponding tree node v* = v*_t. By Definition 2.2, there exist nodes v_0, v_1 ∈ T(v*) with P(v_0|v*), P(v_1|v*) ≥ q such that |µ(v_1) − µ(v_0)| ≥ W(v*)/2. Assume that ∆ ≜ W(v*) > 0, and define f(∆) = 8^3 log(T_hor) ∆^{−2}. Then for each tree node v,

rad_t(v) ≤ ∆/8 ⟺ n_t(v) ≥ f(∆).   (13)

Now, for the sake of contradiction, suppose that n_t(v*) ≥ (¼ k_A)^2 f(∆). By (13), this is equivalent to ∆ ≥ 2 k_A rad_t(v*). Note that n_t(v*) ≥ (2/q) f(∆) by our definition of k_A, so by property (P2) in the definition of a clean execution, for each node v_j, j ∈ {0, 1}, we have n_t(v_j) ≥ f(∆), which implies rad_t(v_j) ≤ ∆/8. Therefore (8) gives a good estimate of W(v*), namely W_t(v*) ≥ ∆/4. It follows that W_t(v*) ≥ k_A rad_t(v*), which violates Invariant (9). We have proved that W(v*) ≤ 2 k_A rad_t(v*).
Using (12), we have ∆(v*) ≤ W(v*) < 2 k_A rad_t(v*) and

I_t(v*) ≥ µ(v*) + 2 k_A rad_t(v*) ≥ µ*,   (14)

where the first inequality in (14) holds by definition (7) and property (P1) of a clean execution.

Footnote 11: To make ζ_n well-defined for any n ≤ T_hor, consider a hypothetical algorithm which coincides with TaxonomyZoom for the first T_hor rounds and then proceeds so that each tree node is selected T_hor times.

We use Lemma 4.3 to show that the algorithm does not activate too many tree nodes with large badness ∆(·), and that each such node is not played too often. For each tree node v, let N(v) be the number of times node v was selected in step (S2) of the algorithm. Call v positive if N(v) > 0. We partition all positive tree nodes and all deactivated tree nodes into sets

S_i = { positive tree nodes v : 2^{−i} < ∆(v) ≤ 2^{−i+1} },
S*_i = { deactivated tree nodes v : 2^{−i} < 4 W(v) ≤ 2^{−i+1} }.

Lemma 4.4. Consider a clean execution of algorithm TaxonomyZoom(T_hor, q).
(a) For each tree node v we have N(v) ≤ O(k_A^2 log T_hor) ∆^{−2}(v).
(b) If node v is de-activated at some point in the execution, then ∆(v) ≤ 4 W(v).
(c) For each i, |S*_i| ≤ 2 K_i, where K_i ≜ c 2^{(i+1) d}.
(d) For each i, |S_i| ≤ O(deg · K_{i+1}).

Proof. For part (a), fix an arbitrary tree node v and let t be the last time v was selected in step (S2) of the algorithm. By Lemma 4.3, at that point in the execution there was a tree node v* such that I_t(v*) ≥ µ*. Then, using the selection rule (step (S2)) and the definition of the index (7), we have

µ* ≤ I_t(v*) ≤ I_t(v) ≤ µ(v) + (2 + 2 k_A) rad_t(v), and hence ∆(v) ≤ (2 + 2 k_A) rad_t(v),   (15)

which gives N(v) ≤ n_t(v) ≤ O(k_A^2 log T_hor) ∆^{−2}(v).

For part (b), suppose tree node v was de-activated at time s. Let t be the last round in which v was selected. Then

W(v) ≥ W_s(v) ≥ k_A rad_s(v) ≥ ⅓ (2 + 2 k_A) rad_t(v) ≥ ⅓ ∆(v).
(16)

Indeed, the first inequality in (16) holds since we are in a clean execution, the second inequality holds because v was de-activated, the third inequality holds because n_s(v) = n_t(v) + 1, and the last inequality holds by (15).

For part (c), fix i and define Y_i = { x ∈ X : ∆(x) ≤ 2^{−i+1} }. By Definition 1.1, this set can be covered by K_i subtrees T(v_1), . . . , T(v_{K_i}), each of width < 2^{−i}/4. Fix a deactivated tree node v ∈ S*_i. For each arm x ∈ X in the subtree T(v) we have, by part (b), ∆(x) ≤ ∆(v) + W(v) ≤ 4 W(v) ≤ 2^{−i+1} (using the stronger bound ∆(v) ≤ 3 W(v) given by (16)), so x ∈ Y_i and therefore is contained in some T(v_j). Note that v_j ∈ T(v), since W(v) > W(v_j). It follows that the subtrees T(v_1), . . . , T(v_{K_i}) cover the leaf set of T(v). Consider the graph G on the node set S*_i ∪ {v_1, . . . , v_{K_i}}, where two nodes u, v are connected by a directed edge (u, v) if there is a path from u to v in the tree T. This is a directed forest of out-degree at least 2 whose leaf set is a subset of {v_1, . . . , v_{K_i}}. Since in any directed tree of out-degree ≥ 2 the number of nodes is at most twice the number of leaves, G contains at most K_i internal nodes. Thus |S*_i| ≤ 2 K_i, proving part (c).

For part (d), fix i and consider a positive tree node u ∈ S_i. Since N(u) > 0, either u is active at time T_hor, or it was deactivated in some round before T_hor. In the former case, let v be the parent of u; in the latter case, let v = u. Then by part (b) we have 2^{−i} ≤ ∆(u) ≤ ∆(v) + W(v) ≤ 4 W(v), so v ∈ S*_j for some j ≤ i + 1. For each tree node v, define its family as the set consisting of v itself and all its children. We have proved that each positive node u ∈ S_i belongs to the family of some deactivated node v ∈ ∪_{j=1}^{i+1} S*_j. Since each family consists of at most 1 + deg nodes, it follows that |S_i| ≤ (1 + deg) ∑_{j=1}^{i+1} K_j ≤ O(deg · K_{i+1}).

Proof of Theorem 2.3: The theorem follows from Lemma 4.4(a-d). Let us assume a clean execution.
(Recall that by Lemma 4.2 the failure probability is small enough to be neglected.) Then

∑_{v∈S_i} N(v) ∆(v) ≤ O(k_A^2 log T_hor) ∑_{v∈S_i} 1/∆(v) ≤ O(k_A^2 log T_hor) |S_i| 2^i ≤ K 2^{(i+2)(1+d)},

where K is defined in (10). For any δ_0 = 2^{−i_0} we have

R(T) ≤ ∑_{tree nodes v} N(v) ∆(v) = ( ∑_{v: ∆(v)<δ_0} N(v) ∆(v) ) + ( ∑_{v: ∆(v)≥δ_0} N(v) ∆(v) )
     ≤ δ_0 T + ∑_{i≤i_0} ( ∑_{v∈S_i} N(v) ∆(v) )
     ≤ δ_0 T + ∑_{i≤i_0} K 2^{(i+2)(1+d)}
     ≤ δ_0 T + O(K) (8/δ_0)^{1+d}.

We obtain the desired regret bound (4) by setting δ_0 as in (10).

5 (De)parameterizing the algorithm

Recall that TaxonomyZoom needs to be parameterized by T_hor and q. The dependence on the parameters can be removed using a suitable version of the standard doubling trick: consider a "meta-algorithm" that proceeds in phases, so that in each phase i = 1, 2, 3, . . . a fresh instance of TaxonomyZoom(2^i, q_i) is run for 2^i rounds, where q_i slowly decreases with i. For instance, if we take q_i = 2^{−αi} for some α ∈ (0, 1), then this meta-algorithm has regret

R(T) ≤ O(c · deg · log T)^{1/(2+d)} × T^{1−(1−α)/(2+d)} for all T ≥ quality^{−1/α},   (17)

where d = ZoomDim(I, c), for any given c > 0. While the doubling trick is very useful in the theory of online decision problems, its practical importance is questionable, as running a fresh algorithm instance in each phase seems unnecessarily wasteful. We conjecture that in practice one could run a single instance of the algorithm while gradually increasing T_hor and decreasing q. However, providing provable guarantees for this modified algorithm seems beyond the current techniques. In particular, extending the much simpler analysis of the zooming algorithm [20] to an arbitrary time horizon remains a challenge.^12 Further, we conjecture that TaxonomyZoom will typically work in practice even if the parameters are misspecified, i.e., even if T_hor is too low and q is too optimistic. Indeed, recall that our algorithm is index-based, in the style of UCB1 [2].
The only place where the parameters are invoked is in the definition of the index (7), namely in the constant in front of the exploration term. It has been observed in [28, 29] that, in a related MAB setting, reducing this constant from the theoretically mandated Θ(log T)-type term to 1 actually improves the algorithms' performance in simulations.

6 Conclusions

In this paper, we have extended previous work on multi-armed bandit learning algorithms to settings with large numbers of available strategies. Whereas the most effective previous approaches rely on explicitly knowing the distances between available strategies, we consider the case where the distances are implicit in a hierarchy of available strategies. We have provided a learning algorithm for this setting, and shown that its performance almost matches the best known guarantees for the Lipschitz MAB problem. Further, we have shown how our approach yields stronger provable guarantees than alternative algorithms such as tree bandit algorithms [21, 24]. We conjecture that the dependence on quality (or some version thereof) is necessary for worst-case regret bounds, even if ZoomDim is low. It is an open question whether there are non-trivial families of problem instances with low quality for which one could achieve low regret. Our results suggest some natural extensions. Most interestingly, a number of applications recently posed as MAB problems over large sets of arms – including learning to rank online advertisements or web documents (e.g. [26, 29]) – naturally involve choosing among arms (e.g. ads) that can be classified according to any of a number of hierarchies (e.g. by class of product sold, geographic location, etc.). In particular, such different hierarchies may be of different usefulness. Selecting among, or combining from, a set of available hierarchical representations of arms poses interesting challenges.
More generally, we would like to generalize Theorem 2.3 to other structures that implicitly define a metric space on arms (in the sense of (1)). One specific target would be directed acyclic graphs. While our algorithm is well-defined for this setting, the theoretical analysis does not apply.

Footnote 12: However, [7] obtains similar guarantees for an arbitrary time horizon, with a different algorithm.

References

[1] Rajeev Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33(6):1926–1951, 1995.
[2] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002. Preliminary version in 15th ICML, 1998.
[3] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002. Preliminary version in 36th IEEE FOCS, 1995.
[4] Peter Auer, Ronald Ortner, and Csaba Szepesvári. Improved Rates for the Stochastic Continuum-Armed Bandit Problem. In 20th COLT, pages 454–468, 2007.
[5] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. J. of Computer and System Sciences, 74(1):97–114, February 2008. Preliminary version in 36th ACM STOC, 2004.
[6] Andrei Broder, Marcus Fontoura, Vanja Josifovski, and Lance Riedel. A semantic approach to contextual advertising. In 30th SIGIR, pages 559–566, 2007.
[7] Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. Online Optimization in X-Armed Bandits. J. of Machine Learning Research (JMLR), 12:1587–1627, 2011. Preliminary version in NIPS 2008.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge Univ. Press, 2006.
[9] Eric Cope. Regret and convergence bounds for immediate-reward reinforcement learning with continuous action spaces. IEEE Trans. on Automatic Control, 54(6):1243–1253, 2009. A manuscript from 2004.
[10] Varsha Dani and Thomas P. Hayes.
Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In 17th ACM-SIAM SODA, pages 937–943, 2006.
[11] Varsha Dani, Thomas P. Hayes, and Sham Kakade. The Price of Bandit Information for Online Optimization. In 20th NIPS, 2007.
[12] Abraham Flaxman, Adam Kalai, and H. Brendan McMahan. Online Convex Optimization in the Bandit Setting: Gradient Descent without a Gradient. In 16th ACM-SIAM SODA, pages 385–394, 2005.
[13] Sylvain Gelly and David Silver. Combining online and offline knowledge in UCT. In 24th ICML, 2007.
[14] Sylvain Gelly and David Silver. Achieving master level play in 9x9 computer go. In 23rd AAAI, 2008.
[15] Anupam Gupta, Robert Krauthgamer, and James R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In 44th IEEE FOCS, pages 534–543, 2003.
[16] Sham M. Kakade, Adam T. Kalai, and Katrina Ligett. Playing Games with Approximation Algorithms. In 39th ACM STOC, 2007.
[17] Robert Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In 18th NIPS, 2004.
[18] Robert Kleinberg. Online Decision Problems with Large Strategy Sets. PhD thesis, MIT, 2005.
[19] Robert Kleinberg and Aleksandrs Slivkins. Sharp Dichotomies for Regret Minimization in Metric Spaces. In 21st ACM-SIAM SODA, 2010.
[20] Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-Armed Bandits in Metric Spaces. In 40th ACM STOC, pages 681–690, 2008.
[21] Levente Kocsis and Csaba Szepesvári. Bandit Based Monte-Carlo Planning. In 17th ECML, pages 282–293, 2006.
[22] T. L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[23] H. Brendan McMahan and Avrim Blum. Online Geometric Optimization in the Bandit Setting Against an Adaptive Adversary. In 17th COLT, pages 109–123, 2004.
[24] Rémi Munos and Pierre-Arnaud Coquelin. Bandit algorithms for tree search. In 23rd UAI, 2007.
[25] Sandeep Pandey, Deepak Agarwal, Deepayan Chakrabarti, and Vanja Josifovski. Bandits for Taxonomies: A Model-based Approach. In SDM, 2007.
[26] Sandeep Pandey, Deepayan Chakrabarti, and Deepak Agarwal. Multi-armed Bandit Problems with Dependent Arms. In 24th ICML, 2007.
[27] Paul N. Bennett, Krysta Marie Svore, and Susan T. Dumais. Classification-enhanced ranking. In 19th WWW, pages 111–120, 2010.
[28] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In 25th ICML, pages 784–791, 2008.
[29] Aleksandrs Slivkins, Filip Radlinski, and Sreenivas Gollapudi. Learning optimally diverse rankings over large document collections. In 27th ICML, pages 983–990, 2010.
2011
Predicting response time and error rates in visual search

Bo Chen, Caltech, bchen3@caltech.edu
Vidhya Navalpakkam, Yahoo! Research, nvidhya@yahoo-inc.com
Pietro Perona, Caltech, perona@caltech.edu

Abstract

A model of human visual search is proposed. It predicts both response time (RT) and error rates (ER) as a function of image parameters such as target contrast and clutter. The model is an ideal observer, in that it optimizes the Bayes ratio of target present vs. target absent. The ratio is computed on the firing pattern of V1/V2 neurons, modeled by Poisson distributions. The optimal mechanism for integrating information over time is shown to be a ‘soft max’ of diffusions, computed over the visual field by ‘hypercolumns’ of neurons that share the same receptive field and have different response properties to image features. An approximation of the optimal Bayesian observer, based on integrating local decisions rather than diffusions, is also derived; it is shown experimentally to produce very similar predictions to the optimal observer in common psychophysics conditions. A psychophysics experiment is proposed that may discriminate between which mechanism is used in the human brain.

Figure 1: Visual search. (A) Clutter and camouflage make visual search difficult. (B,C) Psychologists and neuroscientists build synthetic displays to study visual search. In (B) the target ‘pops out’ (∆θ = 45°), while in (C) the target requires more time to be detected (∆θ = 10°) [1].

1 Introduction

Animals and humans often use vision to find things: mushrooms in the woods, keys on a desk, a predator hiding in tall grass. Visual search is challenging because the location of the object that one is looking for is not known in advance, and surrounding clutter may generate false alarms. The three ecologically relevant performance parameters of visual search are the two error rates (ER), false alarms (FA) and false rejects (FR), and the response time (RT).
The design of a visual system is crucial in obtaining low ER and RT. These parameters may be traded off by manipulating suitable thresholds [2, 3, 4]. Psychologists and physiologists have long been interested in understanding the performance and the mechanisms of visual search. In order to approach this difficult problem they present human subjects with synthetic stimuli composed of a variable number of ‘items’ which may include a ‘target’ and multiple ‘distractors’ (see Fig. 1). By varying the number of items one may vary the amount of clutter; by designing different target-distractor pairs one may probe different visual cues (contrast, orientation, color, motion); and by varying the visual distinctiveness of the target vis-à-vis the distractors one may study the effect of the signal-to-noise ratio (SNR). Several studies since the 1980s have investigated how RT and ER are affected by the complexity of the stimulus (number of distractors), and by target-distractor discriminability with different visual cues. One early observation is that when the target and distractor features are widely separated in feature space (e.g., a red target among green distractors), the target ‘pops out’. In these situations the ER is nearly zero, and the slope of RT vs. set size is flat, i.e., the RT to find the target is independent of the number of items in the display [1]. Decreasing the discriminability between the target and distractor increases error rates, and increases the slope of RT vs. set size [5]. Moreover, it was found that the RT for displays with no target is longer than when the target is present (see review in [6]). Recent studies investigated the shape of RT distributions in visual search [7, 8]. Neurophysiologically plausible models have been recently proposed to predict RTs in visual discrimination tasks [9] and various other 2AFC tasks [10] at a single spatial location in the visual field.
They are based on sequential tests of statistical hypotheses (target present vs. target absent) [11] computed on the response of stimulus-tuned neurons [2, 3]. We do not yet have satisfactory models for explaining RTs in visual search, which is harder as it involves integrating information across several locations in the visual field, as well as over time. Existing models predicting RT in visual search are either qualitative (e.g. [12]) or descriptive (e.g., the drift-diffusion model [13, 14, 15]), and do not attempt to predict experimental results with new set sizes, target and distractor settings. We propose a Bayesian model of visual search that predicts both ER and RT. Our study makes a number of contributions. First, while visual search has been modeled using signal-detection theory to predict ER [16], our model is based on neuron-like mechanisms and predicts both ER and RT. Second, our model is an optimal observer, given a physiologically plausible front-end of the visual system. Third, our model shows that in visual search the optimal computation is not a diffusion, as one might believe by analogy with single-location discrimination models [17, 18]; rather, it is a ‘soft max’ nonlinear combination of locally-computed diffusions. Fourth, we study a physiologically parsimonious approximation to the optimal observer; we show that it is almost optimal when the characteristics of the task are known in advance and held constant, and we explore whether there are psychophysical experiments that could discriminate between the two models. Our model is based on a number of simplifying assumptions. First, we assume that stimulus items are centered on cortical hypercolumns [19] and that, at locations where there is no item, neuronal firing is negligible. Second, retinal and cortical magnification [19] are ignored, since psychophysicists have developed displays that sidestep this issue (by placing items on a constant-eccentricity ring, as shown in Fig. 1).
Third, we do not account for overt and covert attentional shifts. Overt attentional shifts are manifested by saccades (eye motions), which happen every 200 ms or so. Since the post-decision motor response to a stimulus by pressing a button takes about 250-300 ms, one does not need to worry about eye motions when response times are shorter than 500 ms. For longer RTs, one may enforce eye fixation at the center of the display so as to prevent overt attentional shifts. Furthermore, our model explains serial search without the need to invoke covert attentional shifts [20], which are difficult to prove neurophysiologically.

2 Target discrimination at a single location with Poisson neurons

We first consider probabilistic reasoning at one location, where two possible stimuli may appear. The stimuli differ in one respect, e.g. they have different orientations θ(1) and θ(2). We will call them distractor (D) and target (T), also labeled C = 1 and C = 2 (call c ∈ {1, 2} the generic value of C). Based on the response of N neurons (a hypercolumn) we will decide whether the stimulus was a target or a distractor. Crucially, a decision should be reached as soon as possible, i.e. as soon as there is sufficient evidence for T or D [11]. Given the evidence T (defined further below in terms of the neurons' activity) we wish to decide whether the stimulus was of type 1 or 2. We may do so when the probability P(C = 1|T) of the stimulus being of type 1, given the observations in T, exceeds a given threshold T1 (e.g. T1 = 0.99). We may instead decide in favor of C = 2, e.g. when P(C = 1|T) < T2 (e.g. T2 = 0.01).
Figure 2: (Left three panels) Model of a hypercolumn in V1/V2 cortex composed of four orientation-tuned neurons (our simulations use 32). The left panel shows the neurons' tuning curve λ(θ) representing the expected Poisson firing rate when the stimulus has orientation θ. The middle plot shows the expected firing rate of the population of neurons for two stimuli whose orientation is indicated with a red (distractor) and a green (target) vertical line. The third plot shows the step-change in the value of the diffusion when an action potential is registered from a given neuron. (Right panel) Diagram of the decision models. (A) One-location Bayesian observer. The action potentials of a hypercolumn of neurons (top) are integrated in time to produce a diffusion. When the diffusion reaches either an upper bound T1 or a lower bound T0, the decision is taken that the target is either present (1) or absent (0). (B-D) Multi-location ideal Bayesian observer. (B) While not a diffusion, it may be seen as a 'soft maximum' combination of local diffusions: the local diffusions are first exponentiated, then averaged; the log of the result is compared to two thresholds to reach a decision. (C) The 'Max approximation' is a simplified approximation of the ideal observer, where the maximum of the local diffusions replaces the soft maximum. (D) Equivalently, in the Max approximation decisions are reached locally and combined by logical operators. The white AND in a dark field indicates an inverted AND of multiple inverted inputs.
If P(C = 1|T) ∈ (T2, T1) we will wait for more evidence. Thus, we need to compute P(C = 1|T):
\[ P(C=1 \mid T) = \frac{1}{1 + \frac{P(C=2 \mid T)}{P(C=1 \mid T)}} = \frac{1}{1 + R(T)\,\frac{P(C=2)}{P(C=1)}}, \qquad \text{where } R(T) = \frac{P(T \mid C=2)}{P(T \mid C=1)} = \frac{P(C=2 \mid T)}{P(C=1 \mid T)}\,\frac{P(C=1)}{P(C=2)} \tag{1} \]
where P(C = 1) = 1 − P(C = 2) is the prior probability of C = 1. Thus, it is equivalent to take decisions by thresholding log R(T) (footnote 1); we will elaborate on this in Sec. 3. We will model the firing rate of the neurons with a Poisson pdf: the number n of action potentials observed during one second is distributed as P(n|λ) = λ^n e^{−λ}/n!. The constant λ is the expectation of the number of action potentials per second. Each neuron i ∈ {1, ..., N} is tuned to a different orientation θ_i; for the sake of simplicity we will assume that the width of the tuning curve is the same for all neurons, i.e. each neuron i will respond to stimulus c with expectation λ^i_c = f(|θ(c) − θ_i|) (in spikes per second), determined by the distance between the neuron's preferred orientation θ_i and the stimulus orientation θ(c). Let T_i = {t^i_k} be the set of action potentials from neuron i produced starting at t = 0 and until the end of the observation period t = T. Indicate with T = {t_k} = ∪_i T_i the complete set of action potentials from all neurons (where the t_k are sorted). We will indicate with i(k) the index of the neuron that fired the action potential at time t_k. Call I_k = (t_k, t_{k+1}) the intervals of time between action potentials, where I_0 = (0, t_1). These intervals are open, i.e. they do not contain the boundaries, hence they do not contain the action potentials.
The signal coming from the neurons is thus a concatenation of 'spikes' and 'intervals', and the interval (0, T) may be viewed as the union of the instants t_k and the open intervals (t_k, t_{k+1}), i.e. (0, T) = I_0 ∪ {t_1} ∪ I_1 ∪ {t_2} ∪ ⋯. Since the spike trains T_i and T are Poisson processes, once we condition on the class of the stimulus the spike times are independent. This implies that P(T|C = c) = Π_k P(I_k|C = c) P(t_k|C = c). This may be proven by dividing up (0, T) into smaller and smaller intervals and taking the limit for the size of the intervals going to zero. (Footnote 1: We use base 10 for all our logarithms and exponentials, i.e. log(x) ≡ log10(x) and exp(x) ≡ 10^x.) The intervals containing action potentials converge to the t_i, and the intervals not containing action potentials may be merged into the intervals I_i. Let us analyze separately the log likelihood ratio for the intervals and for the spikes.
Diffusion drift during the intervals. During the intervals no neuron spiked. The ratio is therefore computed as a function of the Poisson probability P(n = 0|λ) of a zero spike count. The Poisson expectation has to be multiplied by the time-length of the interval; call Δt_k = t_{k+1} − t_k the length of the interval I_k. Assuming that the neurons i = 1, ..., N are independent we obtain:
\[ \log R(I_k) = \log\frac{P(n=0 \mid C=2,\, t\in I_k)}{P(n=0 \mid C=1,\, t\in I_k)} = \log\frac{\prod_{i=1}^{N} P(n=0 \mid \lambda^i_2 \Delta t_k)}{\prod_{i=1}^{N} P(n=0 \mid \lambda^i_1 \Delta t_k)} = \Delta t_k \sum_{i=1}^{N} (\lambda^i_1 - \lambda^i_2) \tag{2} \]
Thus, during the time intervals where no action potential is observed, the diffusion drifts linearly with a slope equal to the sum over all neurons of the difference between the expected firing rate with stimulus 1 and the expected firing rate with stimulus 2.
Notice that if there are neurons that fire equally well to targets and distractors, and if the population of neurons is large and made of neurons whose tuning curves have identical shape and whose preferred orientations θ_i are regularly spaced, then Σ_i λ^i_1 ≈ Σ_i λ^i_2; thus the diffusion drifts with a slope close to zero and the drift term may be ignored. In this case the intervals carry no information.
Diffusion jump at the action potentials. If the neurons are uncorrelated, then the probability of two or more action potentials happening at the same time is zero. Thus, at any time t_k there is only one action potential from one neuron. We can compute the likelihood ratio by taking the limit for the length δt of the interval t ∈ (t_k − δt/2, t_k + δt/2) going to zero. As seen in the previous section, the contribution from the neurons that did not register a spike is δt(λ^i_1 − λ^i_2) and goes to zero as δt → 0. Thus we are only left with the contribution of the neuron i(k) whose spike happened at time t_k:
\[ \log R(t_k) = \lim_{\delta t \to 0} \log\frac{P(n=1 \mid \lambda^{i(k)}_2 \delta t)}{P(n=1 \mid \lambda^{i(k)}_1 \delta t)} = \lim_{\delta t \to 0} \log\frac{(\lambda^{i(k)}_2 \delta t)^1 e^{-\lambda^{i(k)}_2 \delta t}}{(\lambda^{i(k)}_1 \delta t)^1 e^{-\lambda^{i(k)}_1 \delta t}} = \log\frac{\lambda^{i(k)}_2}{\lambda^{i(k)}_1} \tag{3} \]
As a result, at each action potential t_k the diffusion jumps by an amount equal to the log of the ratio of the expected firing rates of neuron i(k) in response to the target vs. the distractor. Thus: 1. Neurons that are equally tuned to target and distractor, whether they respond strongly or not, will not contribute to the diffusion, while neurons whose response is very different for target and distractor will contribute substantially to the diffusion. 2. A larger number of neurons will produce more action potentials and thus a faster action-potential-driven drift in the diffusion.
Diffusion overall.
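To make Eqs. (2)-(4) concrete, here is a small Python sketch (not the authors' code) that simulates the single-location diffusion. The Gaussian shape of the tuning curve f, the orientation grid and all parameter values are illustrative assumptions; logarithms are base 10, following footnote 1.

```python
import math
import random

random.seed(1)

# Hypothetical hypercolumn: the paper only requires lambda_c^i = f(|theta(c) - theta_i|);
# the Gaussian shape of f and all parameter values below are illustrative assumptions.
N = 32
LAM_MAX, LAM_MIN = 10.0, 1.0      # max/min expected rates (spikes/s)
WIDTH = math.pi / 8               # assumed tuning width
LN10 = math.log(10.0)             # footnote 1: logs are base 10

def f(d):
    """Expected firing rate as a function of distance to the preferred orientation."""
    return LAM_MIN + (LAM_MAX - LAM_MIN) * math.exp(-d * d / (2 * WIDTH ** 2))

thetas = [i * math.pi / N for i in range(N)]               # preferred orientations
theta_D, theta_T = math.pi / 2, math.pi / 2 + math.pi / 6  # distractor, target
lam1 = [f(abs(th - theta_D)) for th in thetas]             # rates under C = 1
lam2 = [f(abs(th - theta_T)) for th in thetas]             # rates under C = 2

drift = sum(l1 - l2 for l1, l2 in zip(lam1, lam2)) / LN10    # Eq. (2), per second
jumps = [math.log10(l2 / l1) for l1, l2 in zip(lam1, lam2)]  # Eq. (3), per spike

def poisson(lam):
    """Knuth's Poisson sampler (fine for small rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def log_R(T, c):
    """Eq. (4): linear drift over the whole interval plus one jump per spike."""
    lam_true = lam1 if c == 1 else lam2
    return drift * T + sum(j * poisson(l * T) for j, l in zip(jumps, lam_true))

# Expected per-second slope of the full diffusion of Eq. (4) under each stimulus:
slope = {c: drift + sum(j * l for j, l in zip(jumps, lam1 if c == 1 else lam2))
         for c in (1, 2)}

trials_T = [log_R(1.0, 2) for _ in range(200)]   # target shown
trials_D = [log_R(1.0, 1) for _ in range(200)]   # distractor shown
```

On average the diffusion rises (slope under C = 2 is positive) when the target is shown and falls (slope under C = 1 is negative) for a distractor; this opposite drift is what makes the two-threshold stopping rule work.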
Given the analysis presented above:
\[ \log R(T) = \sum_k \Delta t_k \sum_i (\lambda^i_1 - \lambda^i_2) + \sum_k \log\frac{\lambda^{i(k)}_2}{\lambda^{i(k)}_1} = |T| \sum_i (\lambda^i_1 - \lambda^i_2) + \sum_k \log\frac{\lambda^{i(k)}_2}{\lambda^{i(k)}_1} \tag{4} \]
Ignoring the diffusion during the intervals, the diffusion at a single location where the stimulus is of type c can be described as:
\[ \log R(T) \sim \sum_{i=1}^{N} \Big(\log\frac{\lambda^i_2}{\lambda^i_1}\Big)\, \mathrm{Poiss}(\lambda^i_c |T|) \tag{5} \]
\[ \mathbb{E}[\log R(T)] = a_c |T|, \qquad \mathbb{V}[\log R(T)] = b^2_c |T| \tag{6} \]
where Poiss(λ) denotes a Poisson distributed variable with mean λ, a_c ≡ Σ_{i=1}^N (log(λ^i_2/λ^i_1)) λ^i_c and b²_c ≡ Σ_{i=1}^N (log(λ^i_2/λ^i_1))² λ^i_c. The mean and variance of the diffusion grow linearly with time.
Figure 3: (A) Diffusions realized at 10 spatial locations when the target is absent (black). The ideal observer Bayes ratio is shown in green, the max-model approximation is shown in red. Thresholds Θ1 = −2, Θ2 = 2 are shown, which produce 1% error rates in the ideal observer. (B) Target present case. Notice that the decision takes longer when the target is absent (see also Fig. 4). (C) Error rates vs. number of items and (D) vs. target contrast when decision thresholds are held constant. Decision thresholds were chosen to obtain 5% error rates in the condition M = 10, ∆θ = π/6. As we change target contrast and the number of targets the optimal observer has constant error rates, while the Max approximation produces variable error rates.
Testing human subjects with a mix of stimuli with different values of M and ∆θ may prevent them from adjusting decision thresholds between stimuli; thus, one would expect constant error rates if the visual system uses the ideal observer and variable error rates if it uses the Max approximation.
3 Visual search: detection across M locations with Poisson neurons
We now consider the case with M locations with N Poisson neurons each. At each location we may either have a target T or a distractor D. In the whole display we have two hypotheses: no target (C = 1) or one target at a location l (C = 2). The second hypothesis may be broken up into the target being at any of the M locations l. Because of this, the numerator of the likelihood ratio is the sum of M terms. The Bayesian observer must integrate the action potentials from each unit to a central unit that computes the posterior beliefs. The multi-location Bayesian observer may be computed by observing that the target-present event is the union of the target-present events at each one of the locations, while the target-absent event implies that each location has no target. Thus, the likelihood can be computed as the weighted sum of local likelihoods at each location in the display. We assume that:
1. The likelihood at each location is independent from the rest when the stimulus type at that location is known, i.e. P(T | C^l, ∀l) = Π_l P(T^l | C^l).
2. The target, if present, is equally likely to occur at any location in the display, i.e. ∀l, P(C^l = 2, C^l̄ = 1 | C = 2) = 1/M.
Calling l a location and l̄ the complement of that location (i.e. all locations but l) we have:
\[ P(T \mid C=1) = \prod_{l=1}^{M} P(T^l \mid C^l = 1) \]
\[ P(T \mid C=2) = \sum_{l=1}^{M} P(T \mid C^l = 2, C^{\bar l} = 1)\, P(C^l = 2, C^{\bar l} = 1 \mid C = 2) = \frac{1}{M} \Big( \prod_{l=1}^{M} P(T^l \mid C^l = 1) \Big) \sum_{l=1}^{M} R^l(T^l) \]
\[ \log R_{tot}(T) = \log\frac{P(T \mid C=2)}{P(T \mid C=1)} = \log\frac{1}{M}\sum_{l=1}^{M} R^l(T^l) = \log\frac{1}{M}\sum_{l=1}^{M} \exp(\log R^l(T^l)) \tag{7} \]
Eqn. 7 tells us two things: 1.
The process log R_tot is not a diffusion, in that log R_tot at time t + 1 cannot be computed by incrementing its value at time t by a term that depends only on the interval (t, t + 1). 2. The process log R_tot may be computed easily from the local diffusions log R^l(T^l) (in Sec. 4 we find an approximation that has a natural neural implementation). Now that we know how to compute log R(T) for the single- and multi-location Bayesian observer, we may take our decision by thresholding log R(T) (Eqn. 1). Specifically, we choose separate thresholds for making the target-absent and the target-present decision, and adjust the thresholds based on tolerance levels for the false positive rate (FPR) and the false negative rate (FNR). We keep accumulating evidence until either decision can be made. The relationship between FPR, FNR and the two thresholds can be derived using an analysis similar to [11]. When log R_tot(T) reaches the target present threshold (Θ2), the target is present with probability P(C = 2|T) and absent with probability P(C = 1|T), i.e. FPR = P(C = 1|T) and 1 − FNR = P(C = 2|T). We have:
\[ \Theta_2 = \log R_{tot}(T) = \log\frac{P(C=2 \mid T)}{P(C=1 \mid T)} = \log\frac{1-\mathrm{FNR}}{\mathrm{FPR}} \tag{8} \]
Similarly, when log R(T) reaches the target absent threshold (Θ1), we have:
\[ \Theta_1 = \log R_{tot}(T) = \log\frac{P(C=2 \mid T)}{P(C=1 \mid T)} = \log\frac{\mathrm{FPR}}{1-\mathrm{FNR}} \tag{9} \]
Therefore, given the desired FPR and FNR, we can analytically compute the upper and lower thresholds for the Full Bayesian model using Eqn. 8 and 9.
4 Max approximation
An alternative, more economical approach to the full Bayesian decision is to approximate the global belief using the largest local diffusion and suppress the rest. This is because, in the limit where |T| is large, the diffusion at the location where the target is present will dominate over the other diffusions and is thus a good approximation of the sum in Eq. 7. We will call this approach the "max approximation" or the "Max model".
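Eqs. (7)-(9) translate directly into code. The sketch below (illustrative, not the authors' implementation) follows footnote 1 in using base-10 logarithms, evaluates Eq. (7) with a numerically stable log-sum-exp, and computes the two thresholds from tolerated error rates; the local diffusion values at the end are hypothetical.

```python
import math

def log_R_tot(local_log_R):
    """Eq. (7): log R_tot = log10( (1/M) * sum_l 10**(log R_l) ),
    computed stably by factoring out the largest local diffusion."""
    M = len(local_log_R)
    m = max(local_log_R)
    return m + math.log10(sum(10.0 ** (x - m) for x in local_log_R)) - math.log10(M)

def thresholds(fpr, fnr):
    """Eqs. (8)-(9): upper (target present) and lower (target absent)
    decision thresholds for tolerated error rates."""
    theta2 = math.log10((1.0 - fnr) / fpr)
    theta1 = math.log10(fpr / (1.0 - fnr))
    return theta1, theta2

# With 1% tolerated errors the thresholds are close to -2 and +2 (cf. Fig. 3):
t1, t2 = thresholds(0.01, 0.01)

# When one local diffusion dominates, Eq. (7) reduces to the Max
# approximation of Sec. 4, log R_tot ~ max_l log R_l - log M:
vals = [5.0, -1.0, -1.5, -0.5]           # hypothetical local diffusions
exact = log_R_tot(vals)
approx = max(vals) - math.log10(len(vals))
```

Factoring out the maximum before exponentiating avoids overflow when a local diffusion is large, without changing the result of Eq. (7).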
In this scheme, at each location a diffusion based on the local Bayesian observer is computed. If any location 'detects' a target, then a target is declared. If all locations detect a distractor, then the 'no target' condition is declared. This may not be the optimal method, but it has the advantage of requiring only two low-frequency communication lines between each location and the central decision unit. Equivalently, the max approximation can be implemented by computing the maximum of the local diffusions and comparing it to an adjusted high and low threshold for the target present/absent decision (see Fig. 2). More specifically, let l* denote the location of maximum diffusion in the display, and log R^{l*} denote the maximum diffusion (i.e., log R^{l*} = max_{l=1,...,M} log R^l(T^l)). From Eqn. 7 we know that the global likelihood ratio is the average of the local likelihood ratios and, equivalently, that the log likelihood ratio is the soft-max of the local diffusions:
\[ \log R_{tot}(T) = \log\Big(\frac{1}{M}\sum_{l=1}^{M}\exp(\log R^l(T^l))\Big) = \log R^{l^*} + \log\Big(\frac{1}{M}\Big(1 + \sum_{l \neq l^*}\exp(\log R^l - \log R^{l^*})\Big)\Big) \tag{10} \]
Target present – When the target is present in the display, if the target is different from the distractor, the diffusion at the target's location will frequently become much higher than at the other locations, and the terms corresponding to R^l / R^{l*} may be safely ignored.
Thus, the total log likelihood ratio may be approximated by the maximum of the local diffusions minus a constant:
\[ \log R_{tot} \approx \log R^{l^*} - \log M \quad \text{if } R^l \ll R^{l^*} \tag{11} \]
Figure 4: (A) Histogram of response times (RT) when the target is present (green) and when the target is absent (red) for M = 10 and different values of target contrast (∆θ). Response times are longer when the contrast is smaller (see Fig. 1). Also, they are longer when the target is absent (see Fig. 3). Notice that the response times have a Gaussian-like distribution when time is plotted on a log scale, and the width of the distribution does not change significantly as the difficulty of the task changes; thus, the mean and median response time are equally informative statistics of RT. (B) Mean RT as a function of the number M of items for different values of target contrast; the curves appear linear as a function of log M [21]. Notice that the RT slope is almost zero ('parallel search') when the target has high contrast, while when target contrast is low RT increases significantly with M ('serial search') [1]. The response times observed using the Max approximation are almost identical to those obtained with the ideal observer. (C) Error vs. RT tradeoff curves obtained by systematically changing the value of the decision threshold. The mean RT ± σ is shown. The ideal Bayesian observer (blue) and the Max approximation (cyan) are almost identical, indicating that the Max approximation's performance is almost as good as that of the optimal observer.
From Eqn. 5 and 6 we know that the difference in diffusion value between the target location and a distractor location grows linearly in time. Thus, the longer the process lasts, the better the approximation. Conversely, when t = |T| is small, the approximation is unreliable, and a different approximation must be introduced (see supplementary material² for the derivation):
\[ \log R_{tot} \approx \log R^{l^*} - \Big[ a_2 t + \log\Big(\frac{1}{M} + \frac{M-1}{M}\exp\Big(\big(a_1 - a_2 + \tfrac{b_1^2 + b_2^2}{2}\big)t\Big)\Big) \Big] \quad \text{if } R^l \approx R^{l^*} \tag{12} \]
Target absent – When the target is absent from the display, the values of all the local diffusions at time t are distributed according to the same density. According to Eqn. 6, the standard deviation grows as √t, hence the expected value of log R^{l*} − log R^l is monotonically increasing. When this expected difference is large enough, we can make the same approximation as in Eqn. 11:
\[ \log R_{tot} \approx \log R^{l^*} - \log M \quad \text{if } R^l \ll R^{l^*} \tag{13} \]
On the other hand, when |T| is small, we resort to another approximation (see supplementary material for the derivation):
\[ \log R_{tot} \approx \log R^{l^*} - \mu_M b_1 \sqrt{t} + \frac{b_1^2 t}{2} - \frac{1}{2}\log\Big(\exp(b_1^2 t) + \frac{M-1}{M}\Big) \quad \text{if } R^l \approx R^{l^*} \tag{14} \]
where µ_M ≡ M ∫_{−∞}^{∞} z Φ^{M−1}(z) N(z) dz, and N(z) and Φ(z) denote the pdf and cdf of the standard normal distribution. Since the max diffusion does not represent the global log likelihood ratio, its thresholds cannot be computed directly from the error rates. Nonetheless we can first compute analytically the thresholds for the Bayesian observer (Eqn. 8 and 9), and adjust them based on the approximations stated above (Eqn. 11, 12, 13 and 14). Finally, we threshold the maximum local diffusion log R^{l*} with respect to the adjusted upper and lower thresholds to make our decision.
5 Experiments
Experiment 1. - Overall model predictions.
In this experiment we explore the model's predictions of response time over a series of interesting conditions. (Footnote 2: http://vision.caltech.edu/~bchen3/nips2011/supplementary.pdf) The default parameters are: the number of neurons per location N = 32, the tuning width of each neuron π/8, and the maximum expected firing rate (λ = 10 action potentials per second) and minimum expected firing rate (λ = 1 a.p./s) of a neuron, which reflect the signal-to-noise ratio of the neurons' tuning curves, plus the number of items (locations) in the display M = 10 and the stimulus contrast ∆θ = π/6. Both M and ∆θ refer to the display, while the other parameters refer to the brain. We will focus on how the predictions change when the display parameters are varied over a set of discrete settings: M ∈ {3, 10, 30} and ∆θ ∈ {π/18, π/6, π/2}. For each setting of the parameters, we simulate the Bayesian and the Max model for 1000 runs. The length of the simulation is set to a large value (4 seconds) to make sure that all decisions are made before the simulation terminates. We are also interested in the trade-off between RT and ER for target error rates η ∈ {1%, 5%, 10%}. For each η we search for the pair of upper and lower thresholds that achieves FNR ≈ FPR ≈ η. We search over the interval [0, 3.5] for the optimal upper threshold and over [−3.5, 0] for the optimal lower threshold (an upper threshold of 3.5 corresponds to a FPR of 0.03%). The search is conducted exhaustively over an 80 × 80 discretization of the joint space of the thresholds. We record the response time distributions for all parameter settings and for all values of η (Fig. 4).
Experiment 2. - Conditions where Bayesian and Max models differ maximally
In this experiment we test the robustness of the Bayesian and Max models with respect to a fixed threshold. For a Bayesian observer, the thresholds yielding a given error rate can be computed exactly, independent of the display (Eqn. 9 and 8).
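The exhaustive 80 × 80 threshold search of Experiment 1 can be sketched as follows. `toy_error_rates` is a hypothetical stand-in for the 1000-run simulation (any function mapping a threshold pair to simulated FPR and FNR would do); its closed form is illustrative only.

```python
import math

def grid(lo, hi, n=80):
    """n evenly spaced candidate thresholds spanning [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [lo + k * step for k in range(n)]

def search_thresholds(error_rates, eta, n=80):
    """Exhaustive search over an n x n discretization of [-3.5, 0] x [0, 3.5]
    for the (lower, upper) threshold pair whose FPR and FNR are both closest
    to the target error rate eta."""
    best, best_cost = None, float("inf")
    for t1 in grid(-3.5, 0.0, n):
        for t2 in grid(0.0, 3.5, n):
            fpr, fnr = error_rates(t1, t2)
            cost = abs(fpr - eta) + abs(fnr - eta)
            if cost < best_cost:
                best, best_cost = (t1, t2), cost
    return best

def toy_error_rates(t1, t2):
    """Hypothetical stand-in for the simulation: error rates that fall off
    with the magnitude of each base-10 threshold (cf. Eqs. 8-9)."""
    fpr = 1.0 / (1.0 + 10.0 ** t2)
    fnr = 10.0 ** t1 / (1.0 + 10.0 ** t1)
    return fpr, fnr

t1, t2 = search_thresholds(toy_error_rates, 0.05)
```

With an 80-point grid the step is about 0.044 in threshold units, so the selected pair lands within a fraction of a percent of the target error rate in this toy setting.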
On the contrary, in order for the Max model to achieve equivalent performance, its threshold must be adjusted differently depending on the number of items M and the target contrast ∆θ (Eqn. 11-14). As a result, if a constant threshold is used for all conditions, we would expect the Bayesian observer's ER to be roughly constant, whereas the Max model would show considerable ER variability. The error rates are shown in Fig. 3 as we vary M and ∆θ. The threshold is set to the optimal threshold that produces 5% error for the Bayesian observer at a single location (M = 1) with ∆θ = π/18.
6 Discussion and conclusions
We presented a Bayesian ideal observer model of visual search. To the best of our knowledge, this is the first model that can predict the statistics of both response times (RT) and error rates (ER) purely from physiologically relevant constants (number, tuning width and signal-to-noise ratio of cortical mechanisms) and from image parameters (target contrast and number of distractors). Neurons are modeled as Poisson units and the model has only four free parameters: the number of neurons per hypercolumn, the tuning width of their response curve, and the maximum and minimum firing rates of each neuron. The model predicts qualitatively the main phenomena observed in visual search: serial vs. parallel search [1], the Gaussian-like shape of the response time histograms in log time [7] and the faster response times when the target is present [3]. The model is easily adaptable to predictions involving multiple targets, different image features and conjunctions of features. Unlike the case of binary detection/decision, the ideal observer may not be implemented by a diffusion. However, it may be implemented using a precisely defined 'soft-max' combination of diffusions, each one of which is computed at a different location across the visual field.
We discuss an approximation of the ideal observer, the Max model, which has two natural and simple implementations in neural hardware. The Max model is found experimentally to have a performance that is very close to that of the ideal observer when the task parameters do not change. We explored whether any combination of target contrast and number of distractors would produce significantly different predictions for the ideal observer vs. the Max model approximation, and found none in the case where the visual system can estimate decision thresholds in advance. However, our simulations predict different error rates when interleaving images containing diverse contrast levels and distractor numbers.
Acknowledgements: We thank the three anonymous referees for many insightful comments and suggestions; thanks to M. Shadlen for a tutorial discussion on the history of discrimination models. This research was supported by the California Institute of Technology.
References
[1] A.M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12(1):97–136, 1980.
[2] W.T. Newsome, K.H. Britten, and J.A. Movshon. Neuronal correlates of a perceptual decision. Nature, 341(6237):52–54, 1989.
[3] P. Verghese. Visual search and attention: A signal detection theory approach. Neuron, 31(4):523–535, 2001.
[4] V. Navalpakkam and L. Itti. Search goal tunes visual features optimally. Neuron, 53(4):605–617, 2007.
[5] J. Duncan and G.W. Humphreys. Visual search and stimulus similarity. Psychological Review, 96(3):433, 1989.
[6] J.M. Wolfe. Visual search. In H. Pashler, editor, Attention, pages 13–73. University College London Press, London, U.K., 1998.
[7] J.M. Wolfe, E.M. Palmer, and T.S. Horowitz. Reaction time distributions constrain models of visual search. Vision Research, 50(14):1304–1311, 2010.
[8] E.M. Palmer, T.S. Horowitz, A. Torralba, and J.M. Wolfe. What are the shapes of response time distributions in visual search?
Journal of Experimental Psychology: Human Perception and Performance, 37(1):58, 2011.
[9] J.M. Beck, W.J. Ma, R. Kiani, T. Hanks, A.K. Churchland, J. Roitman, M.N. Shadlen, P.E. Latham, and A. Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142–1152, 2008.
[10] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J.D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700, 2006.
[11] A. Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2):117–186, 1945.
[12] M.M. Chun and J.M. Wolfe. Just say no: How are visual searches terminated when there is no target present? Cognitive Psychology, 30(1):39–78, 1996.
[13] R. Ratcliff. A theory of memory retrieval. Psychological Review, 85(2):59–108, 1978.
[14] P.L. Smith and R. Ratcliff. Psychology and neurobiology of simple decisions. Trends in Neurosciences, 27(3):161–168, 2004.
[15] R. Ratcliff and G. McKoon. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4):873–922, 2008.
[16] D.G. Pelli. Uncertainty explains many aspects of visual contrast detection and discrimination. JOSA A, 2(9):1508–1531, 1985.
[17] R. Ratcliff. A theory of order relations in perceptual matching. Psychological Review, 88(6):552, 1981.
[18] J.I. Gold and M.N. Shadlen. The neural basis of decision making. Annual Review of Neuroscience, 30:535–574, 2007.
[19] R.L. De Valois and K.K. De Valois. Spatial Vision. Oxford University Press, USA, 1990.
[20] M.I. Posner, Y. Cohen, and R.D. Rafal. Neural systems control of spatial orienting. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 298(1089):187, 1982.
[21] W.E. Hick. On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1):11–46, 1952.
2011
Sequence learning with hidden units in spiking neural networks
Johanni Brea, Walter Senn and Jean-Pascal Pfister
Department of Physiology, University of Bern, Bühlplatz 5, CH-3012 Bern, Switzerland
{brea, senn, pfister}@pyl.unibe.ch
Abstract
We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storage capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited.
1 Introduction
Learning to produce temporal sequences is a general problem that the brain needs to solve. Movements, songs or speech all require the generation of specific spatio-temporal patterns of neural activity that have to be learned. Early attempts to model sequence learning used a simple asymmetric Hebbian learning rule [10, 20, 6] and succeeded in storing sequences of random patterns, but performed poorly as soon as there were temporal correlations between the patterns [3]. Later work on pattern storage or sequence learning recognized the need for matching the storage rule with the recall dynamics [2, 18, 12] and derived the optimal storage rule for a given recall dynamics [2, 18] or an optimal recall dynamics for a given storage rule [12], but did not consider hidden neurons and therefore restricted the class of patterns that can be learned. Other studies [14] included a reservoir of hidden neurons but assumed the weights towards the hidden neurons to be fixed.
Finally, Boltzmann machines [1], which learn to produce a given distribution of patterns with visible and hidden neurons, have been applied to sequence learning [9, 22, 21]; they are trained with Contrastive Divergence [8] and use either an approximation that neglects the influence of the future or a nonlocal and non-causal learning rule. Here we start by defining a stochastic neuronal dynamics that can be arbitrarily complicated (e.g. with non-Markovian dependencies). This stochastic dynamics defines the overall probability distribution, which is parametrized by the synaptic weights. The goal of learning is to adapt the synaptic weights such that the model distribution approximates the target distribution of temporal sequences as well as possible. This can be seen as an extension of the maximum likelihood approach of Barber [2] in which we add stochastic hidden neurons with plastic weights. In order to learn the weights, we implement a variant of the Expectation-Maximization (EM) algorithm [5] where we use importance sampling in the expectation step in a way that makes the sampling procedure easy.
Figure 1: Graphical representation of the conditional dependencies of the joint distribution over visible and hidden sequences. A Graphical model used for the derivation of the learning rule in section 2 and the example in section 4. B Markovian model used in the example with binary neurons in section 3.
The resulting learning rule is local (but modulated by a global factor), causal and biologically relevant in the sense that it shares important features with Spike-Timing Dependent Plasticity (STDP). We also derive an online version of the learning rule and show numerically that it performs almost as well as the exact batch learning rule.
2 Learning a distribution of sequences
Let us consider temporal sequences v = {v_{t,i} | t = 0...T, i = 1...
N_v} of N_v visible neurons over the interval [0, T]. We will use the notation v_t = {v_{t,i} | i = 1...N_v} and v_{t1:t2} = {v_{t,i} | t = t1...t2, i = 1...N_v} to denote parts of the sequence. Note that v = v_{0:T} denotes the whole sequence. The visible sequences v are drawn i.i.d. from a target distribution P*(v) that must be learned by a model consisting of N_v visible neurons and N_h hidden neurons. The model distribution over the visible sequences is denoted by P_θ(v) = Σ_h P_θ(v, h), where θ denotes the model parameters, h = {h_{t,i} | t = 0...T, i = 1...N_h} the hidden temporal sequence, and P_θ(v, h) the joint distribution over the visible and the hidden sequences. The natural way to quantify the mismatch between the target distribution P*(v) and the model distribution P_θ(v) is the Kullback-Leibler divergence:
\[ D_{KL}(P^*(v) \,\|\, P_\theta(v)) = \sum_v P^*(v) \log\frac{P^*(v)}{P_\theta(v)}. \tag{1} \]
If the joint model distribution P_θ(v, h) is differentiable with respect to the model parameters θ, then the sequence learning problem can be phrased as gradient descent on the KL divergence in Eq. (1):
\[ \Delta\theta = \eta \left\langle \frac{\partial \log P_\theta(v, h)}{\partial \theta} \right\rangle_{P_\theta(h|v)\, P^*(v)}, \tag{2} \]
where η is the learning rate and we used the fact that ∂/∂θ log P_θ(v) = (1/P_θ(v)) ∂/∂θ Σ_h P_θ(v, h) = Σ_h P_θ(h|v) ∂/∂θ log P_θ(v, h). Eq. (2) can be seen as a variant of the EM algorithm [5, 16, 3] where the expectation ⟨·⟩_{P_θ(h|v)P*(v)} corresponds to the E step and the gradient of log P_θ(v, h) is related to the M step (footnote 1). Instead of calculating the true expectation in Eq. (2) analytically, it is possible to approximate it by sampling the visible sequences v from the target distribution P*(v) and the hidden sequences from the posterior distribution P_θ(h|v) given the visible ones. Note that the posterior distribution P_θ(h|v) could be hard to sample from.
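Before turning to how P_θ(h|v) is handled, note that for small sequence spaces the objective of Eq. (1) can be evaluated exactly by enumeration. A toy sketch, with a hypothetical target distribution over length-2 sequences of a single binary visible neuron (the distributions and sequence length are made up for illustration):

```python
import math
from itertools import product

def kl(p, q):
    """D_KL(P* || P_theta) of Eq. (1), summed over all visible sequences v.
    Terms with P*(v) = 0 contribute nothing; q must give mass to every v
    that p does."""
    return sum(pv * math.log(pv / q[v]) for v, pv in p.items() if pv > 0)

# Hypothetical distributions over sequences v = (v_0, v_1), one binary neuron:
seqs = list(product([-1, 1], repeat=2))
p_star = dict(zip(seqs, [0.4, 0.1, 0.1, 0.4]))   # target P*(v), correlated in time
p_uniform = {s: 0.25 for s in seqs}              # untrained model P_theta(v)

gap = kl(p_star, p_uniform)   # positive: the model has not yet learned
```

Minimizing this gap with respect to θ is precisely the gradient descent of Eq. (2).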
Indeed, at time t the posterior distribution over h_t depends not only on the past visible activity but also on the future visible activity, since it is conditioned on the whole visible activity v_{0:T} from time step 0 to T. This is a true challenge for online algorithms. In the case of Hidden Markov Model training, the forward-backward algorithm [4, 19] combines information from the past (by forward filtering) and from the future (by backward smoothing) to calculate P_θ(h|v). (Footnote 1: Strictly speaking, the M step of the EM algorithm directly calculates the solution θ_new for which ∂/∂θ log P_θ(v, h) = 0, whereas in Eq. (2) only one step is taken in the direction of the gradient.) If the statistical model does not have the Markovian property, the problem of calculating P_θ(h|v) (or sampling from it) becomes even harder. Here, we propose an alternative solution that does not require sampling from P_θ(h|v) and does not require the Markovian assumption (see [11, 17] for other approaches to sampling P_θ(h|v)). We exploit the fact that in all neuronal network models of interest, neuronal firing at any time point is conditionally independent given the past activity of the network. Using the chain rule this means that we can write the joint distribution P_θ(v, h) (see Fig. 1A) as
\[ P_\theta(v,h) = \underbrace{P_\theta(v_0) \prod_{t=1}^{T} \prod_{i=1}^{N_v} P_\theta(v_{t,i} \mid v_{0:t-1}, h_{0:t-1})}_{R_\theta(v|h)} \;\; \underbrace{P_\theta(h_0) \prod_{t=1}^{T} \prod_{i=1}^{N_h} P_\theta(h_{t,i} \mid v_{0:t-1}, h_{0:t-1})}_{Q_\theta(h|v)}, \tag{3} \]
where R_θ(v|h) is easy to calculate (see below) and Q_θ(h|v) is easy to sample from. The sampling can be accomplished by clamping the visible neurons to a target sequence v and letting the hidden dynamics run, i.e. at time t, h_t is sampled from P_θ(h_t | v_{0:t−1}, h_{0:t−1}).² From Eq. (3), the posterior distribution P_θ(h|v) can be written as
\[ P_\theta(h|v) = \frac{R_\theta(v|h)\, Q_\theta(h|v)}{P_\theta(v)}, \tag{4} \]
where the marginal distribution over the visible sequences v can also be expressed as P_θ(v) = ⟨R_θ(v|h)⟩_{Q_θ(h|v)}. As a consequence, by using Eq. (4), the learning rule in Eq.
(2) can be rewritten as

Δθ = η Σ_{v,h} P*(v) P_θ(h|v) ∂ log P_θ(v, h)/∂θ
   = η Σ_{v,h} P*(v) Q_θ(h|v) (R_θ(v|h) / P_θ(v)) ∂ log P_θ(v, h)/∂θ
   = η ⟨ ( R_θ(v|h) / ⟨R_θ(v|h′)⟩_{Q_θ(h′|v)} ) ∂ log P_θ(v, h)/∂θ ⟩_{Q_θ(h|v) P*(v)} .   (5)

Instead of calculating the true expectation, Eq. (5) can be evaluated using N samples (see Algorithm 1), where the factor γ_θ(v, h) := R_θ(v|h) / ⟨R_θ(v|h′)⟩_{Q_θ(h′|v)} acts as an importance weight [15]. Note that in the absence of hidden neurons this factor γ_θ(v, h) is equal to one and the maximum likelihood learning rule [2, 18] is recovered.

(Footnote 2: For other conditional dependencies it might be reasonable to split P_θ(h|v) differently. For example, in models with the structure of Hidden Markov Models one could make use of the fact that P_θ(h|v) = ∏_{t=0}^{T−1} P_θ(h_t | v_{0:t}, h_{t+1}) = ∏_{t=0}^{T−1} [P_θ(h_{t+1}|h_t) / P_θ(h_{t+1}|v_{0:t})] P_θ(h_t|v_{0:t}), take the product of filtering distributions Q_θ(h|v) = ∏_{t=0}^{T−1} P_θ(h_t|v_{0:t}) to sample from, and use the importance weights R_θ(v, h) = ∏_{t=0}^{T−1} P_θ(h_{t+1}|h_t) / P_θ(h_{t+1}|v_{0:t}). Following the reasoning in the main text, one finds an alternative to the forward-backward algorithm [4, 19] that might be interesting to investigate further.)

Algorithm 1 Sequence learning (batch mode)
  Set an initial θ
  while θ not converged do
    v ∼ P*(v)
    α(v) = 0, P_θ(v) = 0
    for i = 1 . . . N do
      h ∼ Q_θ(h|v)
      α(v) ← α(v) + R_θ(v|h) ∂ log P_θ(v, h)/∂θ
      P_θ(v) ← P_θ(v) + N^{−1} R_θ(v|h)
    end for
    θ ← θ + η α(v) / P_θ(v)
  end while
  return θ

[Figure 2: panels A–F and H–J show unit number vs. time step; panel G shows performance vs. learning step.]

Figure 2: Learning a non-Markovian sequence of temporally correlated and linearly dependent states with different learning rules. A The target distribution contained only this training pattern for 30 visible neurons and 30 time steps.
B–F, H–J Overlay of 20 recalls after learning with 15,000 training pattern presentations: B with only visible neurons and a simple asymmetric Hebb rule (see main text); C only visible neurons and learning rule Eq. (5); D static weights towards 30 hidden neurons (Reservoir Computing); E learning rule Eq. (5); F online approximation Eq. (14). G Learning curves for the training pattern in A for only visible neurons (black line), static weights towards hidden neurons (blue line), the online learning approximation (purple line), and the exact learning rule (red line). Performance was measured as one minus the average Hamming distance per neuron per time step (see main text). H A training pattern that exhibits a gap of 5 time steps. I Recall with a network of 30 visible and 10 hidden neurons without learning the weights towards hidden neurons. J Recall after training the same network with learning rule Eq. (5).

3 Binary neurons

In order to illustrate the learning rule given by Eq. (5), let us consider sequences of binary patterns. Let x denote the joint activity of the visible and hidden neurons, i.e. x = (v, h). Since the individual neurons are binary, x_{t,i} ∈ {−1, 1}, their distribution is given by

P_θ(x_{t,i} | x_{0:t−1}) = (ρ_{t,i} δt)^{(1+x_{t,i})/2} (1 − ρ_{t,i} δt)^{(1−x_{t,i})/2} ,

where the firing rate ρ_{t,i} of neuron i at time t is given by a monotonically increasing (and non-linear) function g of its membrane potential u_{t,i}, i.e. ρ_{t,i} = g(u_{t,i}) with

u_{t,i} = Σ_j w_{ij} x_{t−1,j} .   (6)

Note that these assumptions lead to Markovian neuronal dynamics, i.e. P_θ(x_{t,i} | x_{0:t−1}) = P_θ(x_{t,i} | x_{t−1}) (see Fig. 1B). Further calculations are slightly simplified if we assume that the non-linear function g is constrained by the differential equation dg(u)/du = β g(u)(1 − g(u) δt). Note that in the limit δt → 0, this function is an exponential, i.e.
g(u) = g_0 exp(βu), and for finite δt it is sigmoidal and takes the form

g(u) = δt^{−1} [ 1 + ( (g_0 δt)^{−1} − 1 ) exp(−βu) ]^{−1} ,

where we constrained the solution such that g(0) = g_0 in order to be consistent with the case δt → 0. For the distributions over the initial conditions, P_θ(v_0) and P_θ(h_0), we choose delta distributions such that v_0 is equal to the first state of the training sequence and h_0 is an arbitrary but fixed vector of binary values. If we assume that the weights w_{ij} are the only adaptable parameters in this model, we have

∂ log P_w(x_{t,i} | x_{0:t−1}) / ∂w_{ij} = (1/2) [ (1 + x_{t,i}) g′(u_{t,i}) / g(u_{t,i}) − (1 − x_{t,i}) g′(u_{t,i}) δt / (1 − g(u_{t,i}) δt) ] ∂u_{t,i} / ∂w_{ij} .   (7)

With the above assumption on g(u) and Eqs. (3) and (6) we find

∂ log P_w(x) / ∂w_{ij} = (β/2) Σ_{t=1}^{T} ( x_{t,i} − ⟨x_{t,i}⟩_{P_θ(x_{t,i}|x_{t−1})} ) x_{t−1,j} ,   (8)

where ⟨x_{t,i}⟩_{P_θ(x_{t,i}|x_{t−1})} = g(u_{t,i}) δt − (1 − g(u_{t,i}) δt) and the indices i and j run over all visible and hidden neurons. The factor R_w(v|h) can be expressed as

R_w(v|h) = exp( (1/2) Σ_{t=0}^{T} Σ_{i=1}^{N_v} [ (1 + v_{t,i}) log(ρ_{t,i} δt) + (1 − v_{t,i}) log(1 − ρ_{t,i} δt) ] ) .   (9)

[Figure 3: panel A shows performance vs. number of hidden units; panel B shows performance vs. sequence length.]

Figure 3: Adding trainable hidden neurons leads to much better recall performance than having static hidden neurons or no hidden neurons at all. A Comparison of the performance after 20000 learning cycles between static (blue curve) and dynamic (red curve) weights towards hidden neurons, for a network with 30 visible and different numbers of hidden neurons, in a training task with an uncorrelated random pattern of length 60 time steps. For B we generated random, uncorrelated sequences of different length and compared the performance after 20000 learning cycles for only visible neurons (black curve), static weights towards hidden neurons (blue curve) and dynamic weights towards hidden neurons (red curve).

Let us now consider a simple case (Fig.
2) where the distribution over sequences is a delta distribution P*(v) = δ(v − v*) around a single pattern v* (Fig. 2A), which is made of a set of temporally correlated and linearly dependent states {v*_t}_{t=0}^{T}, i.e. a non-Markovian pattern, thus making it a difficult pattern to learn with a simple asymmetric Hebb rule Δw_{ij} ∝ Σ_{t=0}^{T} v*_{t+1,i} v*_{t,j} (Fig. 2B) or with only visible neurons (Fig. 2C), which are both Markovian learning rules. The performance was measured by one minus the Hamming distance per visible neuron and time step, 1 − (T N_v)^{−1} Σ_{t,i} |v_{t,i} − v*_{t,i}| / 2, between target pattern and recall pattern, averaged over 100 runs. Adding hidden neurons without learning the weights towards hidden neurons is similar to the idea used in the framework of Reservoir Computing (for a review see [13]): the visible states feed a fixed reservoir of neurons that returns a non-linear transformation of the input. Only the readout from hidden to visible neurons and, in our case, the recurrent connections in the visible layer are trained. To assure a sensible distribution of weights towards hidden units, we used the weights that were obtained after learning with Eq. (5) and reshuffled them. Obviously, without training the reservoir the performance is always worse compared to a system with an equal number of hidden neurons but dynamic weights (Fig. 2E and 2F). With only a few hidden neurons our rule is also capable of learning patterns where the visible neurons are silent during a few time steps. The training pattern in Fig. 2H exhibits a gap of 5 time steps. After learning the weights towards 10 hidden neurons with learning rule Eq. (5), recall performance is nearly perfect (see Fig. 2J). With only visible neurons (not shown in Fig. 2) or static weights towards hidden neurons, the time gap was not learned (see Fig. 2I).

[Figure 4: Δw (arbitrary units) vs. t_post − t_pre (ms).]

Figure 4: The learning rule Eq.
(11) is compatible with Spike-Timing Dependent Plasticity (STDP): the weight gets potentiated if a presynaptic spike is followed by a postsynaptic spike and depressed otherwise. The time course of the postsynaptic potential and the refractory kernel is given in the text.

In Fig. 3 we again used delta target distributions P*(v) = δ(v − v*), with random, uncorrelated patterns v* of different length. Each model was trained with 20000 pattern presentations. For a pattern of length 2N_v = 60, only N_v/2 = 15 trainable hidden neurons are sufficient to reach perfect recall (see Fig. 3A). This is in clear contrast to the case of static hidden weights. Again, the static weights were obtained by reshuffling those obtained after learning with Eq. (5). Fig. 3B compares the capacity of our learning rule with N_h = N_v = 30 hidden neurons to the case of no hidden neurons or static weights towards hidden neurons. Without learning the weights towards hidden neurons, the performance drops to almost chance level for sequences of 45 or more time steps, whereas with our learning rule this decrease in performance occurs only at sequences of 100 or more time steps.

4 Limit to Continuous Time

Starting from the neurons in the last section, we show that in the limit to continuous time we can implement the sequence learning task with stochastic spiking neurons [7]. First note that the state of a neuron at time t in the model described in the previous section is fully defined by u_{t,i} := Σ_j w_{ij} x_{t−1,j} (see Eq. (6)) and its spiking activity x_{t,i}. The weighted sum Σ_j w_{ij} x_{t−1,j} is the response of neuron i to the spikes of its presynaptic neurons and its own spikes. The terms in this sum depend on the previous time step only. In a more realistic model the postsynaptic neuron feels the influence of presynaptic spikes through a perturbation of the membrane potential on the order of a few milliseconds, which in the limit to continuous time clearly cannot be modeled by a one-time-step response.
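Before moving to continuous time, the discrete-time pieces assembled so far — the binary dynamics of Section 3, the gradient of Eq. (8), the weight R_w of Eq. (9), and the importance-weighted update of Algorithm 1 — can be sketched end-to-end. The code below is an illustrative toy, not the authors' implementation; all sizes, constants, and helper names (`rollout`, `algorithm1_step`) are assumptions, and the max-subtraction when exponentiating log R is a standard numerical-stability trick added here:

```python
import numpy as np

rng = np.random.default_rng(0)
T, Nv, Nh = 12, 5, 3                       # toy sizes
N_TOT = Nv + Nh
beta, dt, g0, eta, N = 1.0, 0.05, 0.5, 0.05, 40
W = 0.1 * rng.normal(size=(N_TOT, N_TOT))

def g(u):
    # sigmoidal gain with g(0) = g0, as in Section 3
    return (1.0 / dt) / (1.0 + (1.0 / (g0 * dt) - 1.0) * np.exp(-beta * u))

def rollout(W, v):
    """Clamp the visible neurons to v and run the hidden dynamics forward:
    a sample from Q_W(h|v).  Accumulate log R_W(v|h) (Eq. (9)) and the
    gradient of log P_W(x) (Eq. (8)) along the way, with x in {-1, +1}."""
    x = np.empty((T + 1, N_TOT))
    x[0, :Nv] = v[0]
    x[0, Nv:] = 1.0                         # fixed initial hidden state
    log_R, grad = 0.0, np.zeros_like(W)
    for t in range(1, T + 1):
        p = np.clip(g(W @ x[t - 1]) * dt, 1e-12, 1 - 1e-12)  # P(x=+1)
        x[t, :Nv] = v[t]                    # visibles clamped
        x[t, Nv:] = np.where(rng.random(Nh) < p[Nv:], 1.0, -1.0)
        vis = x[t, :Nv]
        log_R += np.sum((1 + vis) / 2 * np.log(p[:Nv])
                        + (1 - vis) / 2 * np.log(1 - p[:Nv]))
        mean_x = p - (1 - p)                # <x_{t,i}> of Eq. (8)
        grad += (beta / 2) * np.outer(x[t] - mean_x, x[t - 1])
    return log_R, grad

def algorithm1_step(W, v):
    """One iteration of Algorithm 1: self-normalized importance weighting."""
    samples = [rollout(W, v) for _ in range(N)]
    log_Rs = np.array([s[0] for s in samples])
    weights = np.exp(log_Rs - log_Rs.max())  # stabilized R values
    gamma = weights / weights.mean()         # gamma = R / <R>_Q
    total = np.zeros_like(W)
    for w_imp, (_, grd) in zip(gamma, samples):
        total += w_imp * grd
    return W + eta * total / N

v = np.where(rng.random((T + 1, Nv)) < 0.5, 1.0, -1.0)  # a toy target pattern
W_new = algorithm1_step(W, v)
assert W_new.shape == W.shape and np.all(np.isfinite(W_new))
```

With no hidden neurons every gamma is one and the step reduces to the plain maximum-likelihood rule, matching the remark after Eq. (5).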
For a more realistic model we replace u_{t,i} in Eq. (6) by

u_{t,i} = Σ_{s=1}^{∞} κ_s x_{t−s,i} + Σ_{j≠i} w_{ij} Σ_{s=1}^{∞} ε_s x_{t−s,j} =: x^κ_{t,i} + Σ_{j≠i} w_{ij} x^ε_{t,j} ,   (10)

where x_{t−s,i} ∈ {0, 1}. The kernel ε models the time course of the response to a presynaptic spike and κ the refractoriness. Our model holds for any choices of ε and κ, including for example a hard refractory period where the neuron is forced not to spike. In order to take the limit δt → 0 in Eq. (9), we note that we can scale R_w(v|h) without changing the learning rule Eq. (5), since only the ratio R_θ(v|h) / ⟨R_θ(v|h′)⟩_{Q_θ(h′|v)} enters there. We use the scaling R_w(v|h) → R̃_w(v|h) := (g_0 δt)^{−S_v} R_w(v|h), where S_v denotes the total number of spikes in the visible sequence v, i.e. S_v = Σ_{t=0}^{T} Σ_{i=1}^{N_v} v_{t,i}. Note that for (0, 1)-units the expectation in Eq. (8) becomes ⟨x_{t,i}⟩_{P_θ(x_{t,i}|x_{t−1})} = g(u_{t,i}) δt = ρ_{t,i} δt. Now we take the limit δt → 0 in Eqs. (8) and (9) and find

∂ log P_w(x) / ∂w_{ij} = ∫_0^T dt β ( x_i(t) − ρ_i(t) ) x^ε_j(t)   (11)

R̃_w(v|h) = exp( ∫_0^T dt Σ_{i=1}^{N_v} [ β v_i(t) u_i(t) − ρ_i(t) ] ) ,   (12)

where the training pattern runs from time 0 to T, x_i(t) = Σ_f δ(t − t_i^{(f)}) is the sum of delta spikes of neuron i at times t_i^{(f)}, and x^ε_j(t) = ∫ ds ε(s) x_j(t − s) (and similarly x^κ_i(t)) is the convolution of the presynaptic spike train with the response kernel ε(t). With neuron i's response to past spiking activity, u_i(t) = x^κ_i(t) + Σ_{j≠i} w_{ij} x^ε_j(t), and the escape rate function ρ_i(t) = g_0 exp(β u_i(t)), we recover the defining equations of a simplified stochastic spike response model [7]. In Fig. 4 we display the weight change after forcing two neurons to fire with a fixed time lag. For the figure we used the kernels ε_s ∝ exp(−s/τ_m) − exp(−s/τ_s) and κ_s ∝ −exp(−s/τ_m) with τ_m = 10 ms and τ_s = 2 ms. Our learning rule is consistent with STDP in the sense that a presynaptic spike followed by a postsynaptic spike leads to potentiation, and to depression otherwise. Note that this result was also found in [18].
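The STDP-like shape of Fig. 4 follows from the spike-driven part of Eq. (11): at a postsynaptic spike, the Hebbian term (x_i − ρ_i) x^ε_j picks up the presynaptic trace ε evaluated at the lag, which is zero for negative lags (post before pre), leaving only the rate-driven −ρ_i depression. A minimal sketch of just that term, using the kernel constants from the figure (the helper names are hypothetical):

```python
import numpy as np

tau_m, tau_s = 10.0, 2.0   # ms, as used for Fig. 4

def eps_kernel(s):
    """Postsynaptic-potential kernel eps(s) ~ exp(-s/tau_m) - exp(-s/tau_s),
    causal (zero for s <= 0)."""
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

def hebbian_term(dt_post_minus_pre):
    """Spike-driven part of Eq. (11) for a single pre/post spike pair:
    the presynaptic trace evaluated at the postsynaptic spike time."""
    return eps_kernel(dt_post_minus_pre)

assert hebbian_term(10.0) > 0.0     # pre before post -> potentiation term
assert hebbian_term(-10.0) == 0.0   # post before pre -> no potentiation;
                                    # the -rho_i term then yields depression
```

This is only the lag-dependent piece of the rule; reproducing the full curve of Fig. 4 would require simulating the escape-rate dynamics as well.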
5 Approximate online version

Without hidden neurons, the learning rule found by using Eq. (11) is straightforward to implement in an online way, where the parameters are updated at every moment in time according to dw_{ij}/dt ∝ (x_i(t) − ρ_i(t)) x^ε_j(t) instead of waiting with the update until a training batch is finished. Finding an online version of the learning algorithm for networks with hidden neurons turns out to be a challenge, since we need to know the whole sequences v and h in order to evaluate the importance factor R_θ(v|h) / ⟨R_θ(v|h′)⟩_{Q_θ(h′|v)}. Here we propose to use in each time step an approximation of the importance factor, based on the network dynamics during the preceding period of typical sequence length, and to multiply it by the low-pass filtered change of parameters. We write this section with x_i(t) ∈ {0, 1}, but similar expressions are easily found for x_i(t) ∈ {−1, 1}.

Algorithm 2 Sequence learning (online mode)
  Set an initial w_{ij}, e_{ij}, a, r̄, t
  while w_{ij} not converged do
    if t mod NT == 0 then
      v ∼ P*(v)
    end if
    s = t mod T
    if s < τ then
      h(s) ∼ P(h(s))
    else
      h(s) ∼ P_w(h(s) | past spiking activity)
    end if
    x(s) = (v(s), h(s))
    e_{ij} ← (1 − δt/T) e_{ij} + β (x_i(s) − ρ_i(s)) x^ε_j(s)
    a ← (1 − δt/T) a + Σ_{i=1}^{N_v} [ β v_i(s) u_i(s) − ρ_i(s) ]
    r̄ ← (1 − δt/(NT)) r̄ + exp(a)
    w_{ij} ← w_{ij} + η (exp(a)/r̄) e_{ij}
    t ← t + δt
  end while
  return w_{ij}

In Eqs. (13a) and (13b) we summarize how to use low-pass filters to approximate the integrals in Eqs. (11) and (12). The time constant of the low-pass filter is chosen to match the sequence length T. To find an online estimate of ⟨R_θ(v|h′)⟩_{Q_θ(h′|v)}, we assume that a training pattern v ∼ P*(v) is presented a few times in a row, and that after time NT, with N ∈ ℕ, N ≫ 1, a new training pattern is picked from the training distribution. Under this assumption we can replace the average over hidden sequences by a low-pass filter of r with time constant NT, see Eq. (13c). At the beginning of each pattern presentation - i.e.
during the time interval [0, τ), with τ on the order of the kernel time constant τ_m - the hidden activity h(s) is drawn from a given distribution P(h(s)).

de_{ij}(t)/dt = −(1/T) e_{ij}(t) + β (x_i(t) − ρ_i(t)) x^ε_j(t) ,  e_{ij}(T) ≈ ∂ log P_w(x)/∂w_{ij}   (13a)

da(t)/dt = −(1/T) a(t) + Σ_{i=1}^{N_v} [ β v_i(t) u_i(t) − ρ_i(t) ] ,  exp(a(T)) ≈ R_w(v|h)   (13b)

NT dr̄(t)/dt = −r̄(t) + r(t) ,  r(t) := exp(a(t)) ,  r̄(NT) ≈ ⟨R_θ(v|h′)⟩_{Q_θ(h′|v)}   (13c)

Finally, we learn the model parameters in each time step according to

dw_{ij}(t)/dt = η ( r(t)/r̄(t) ) e_{ij}(t) .   (14)

This online algorithm is certainly a rough approximation of the batch algorithm. Nevertheless, when applied to the challenging example (Fig. 2A) of Section 3, the performance of the online rule is close to that of the batch rule (Fig. 2F, G).

6 Discussion

Learning long and temporally correlated sequences with neural networks is a difficult task. In this paper we suggested a statistical model with hidden neurons and derived a learning rule that leads to optimal recall of the learned sequences given the neuronal dynamics. The learning rule is derived by minimizing the Kullback-Leibler divergence from the training distribution to the model distribution with a variant of the EM algorithm, where we use importance sampling to draw hidden sequences given the visible training sequence. By choosing an appropriate distribution in the importance sampling step, we are able to circumvent the inference that usually makes the training of non-Markovian models hard. The resulting learning algorithm consists of a local term modulated by a global factor. We showed that it is ready to be implemented with biologically realistic neurons and that an approximate online version exists. Our approach follows the ideas outlined in [2], where sequence learning was considered with visible neurons only. Here we extended this model by adding stochastic hidden neurons that help to perform well with sequences of linearly dependent states - including non-Markovian sequences - or with long sequences.
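The running averages in Eqs. (13a)–(13c) are all first-order low-pass filters; a minimal Euler discretization shows the behavior the online estimate of ⟨R⟩ in Eq. (13c) relies on (constants and the `lowpass` name are illustrative, not from the paper):

```python
import numpy as np

def lowpass(r_values, tau, dt):
    """Euler discretization of tau * d(rbar)/dt = -rbar + r(t),
    the filter form used for the running estimates in Eq. (13)."""
    rbar = 0.0
    for r in r_values:
        rbar += (dt / tau) * (r - rbar)
    return rbar

# With a (roughly) constant input r, the filter settles at that value,
# so the importance factor r / rbar in Eq. (14) approaches one.
rbar = lowpass(np.full(50000, 3.5), tau=50.0, dt=0.1)
assert abs(rbar - 3.5) < 1e-6
```

When r(t) fluctuates instead, r̄ tracks its average over the last ~NT, which is exactly the approximation of ⟨R_θ(v|h′)⟩ described in the text.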
As in [18], we look at the limit of continuous time and find that the learning rule is consistent with Spike-Timing Dependent Plasticity. In contrast to Reservoir Computing [13], we train the weights towards hidden neurons, which clearly helps to improve performance. Our learning rule does not need a "wake" and a "sleep" phase as known from Boltzmann machines [1, 8]. Viewed in a different light, our learning algorithm has a nice interpretation: as in reinforcement learning, the hidden neurons explore different sequences, where each trial leads to a global reward signal that modulates the weight change. However, in contrast to common reinforcement learning, the reward is not provided by an external teacher but depends solely on the internal dynamics, and the visible neurons do not explore but are clamped to the training sequence. To make our model even more biologically relevant, future work should aim for a biological implementation of the global importance factor, which depends on the spike timing and the membrane potential of all the visible neurons (see Eq. (9)). It would also be interesting to study online approximations of the learning algorithm in more detail, or its application to models with the Hidden Markov structure.

Acknowledgments

The authors thank Robert Urbanczik for helpful discussions. This work was supported by the Swiss National Science Foundation (SNF), grant 31-133094, and a grant from the Swiss SystemsX.ch initiative (Neurochoice, evaluated by the SNF).

References

[1] D. Ackley and G. E. Hinton. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985. [2] D. Barber. Learning in spiking neural assemblies. Advances in Neural Information Processing Systems, 15, 2003. [3] D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2011. In press. [4] L. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains.
The Annals of Mathematical Statistics, 41(1):164–171, 1970. [5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1–38, 1977. [6] A. Düring, A. Coolen, and D. Sherrington. Phase diagram and storage capacity of sequence processing neural networks. Journal of Physics A: Mathematical and General, 31:8607, 1998. [7] W. Gerstner and W. M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002. [8] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002. [9] G. E. Hinton and A. Brown. Spiking Boltzmann machines. Advances in Neural Information Processing Systems, 12, 2000. [10] J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America, 79(8):2554, 1982. [11] P. Latham and J. W. Pillow. Neural characterization in partially observed populations of spiking neurons. Advances in Neural Information Processing Systems, 20:1161–1168, 2008. [12] M. Lengyel, J. Kwag, O. Paulsen, and P. Dayan. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience, 8(12):1677–1683, 2005. [13] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009. [14] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002. [15] D. J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press, 2002. [16] G. McLachlan and T. Krishnan. The EM Algorithm and Extensions. John Wiley and Sons, 1997. [17] Y. Mishchenko and L. Paninski.
Efficient methods for sampling spike trains in networks of coupled neurons. The Annals of Applied Statistics, 5(3):1893–1919, 2011. [18] J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner. Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6):1318–1348, 2006. [19] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989. [20] H. Sompolinsky and I. Kanter. Temporal association in asymmetric neural networks. Physical Review Letters, 57(22):2861–2864, 1986. [21] I. Sutskever, G. E. Hinton, and G. Taylor. The recurrent temporal restricted Boltzmann machine. Advances in Neural Information Processing Systems, 21:1601–1608, 2009. [22] G. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. Advances in Neural Information Processing Systems, 19:1345–1352, 2007.
2011
4,190
On the Analysis of Multi-Channel Neural Spike Data Bo Chen, David E. Carlson and Lawrence Carin Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 {bc69, dec18, lcarin}@duke.edu Abstract Nonparametric Bayesian methods are developed for analysis of multi-channel spike-train data, with the feature learning and spike sorting performed jointly. The feature learning and sorting are performed simultaneously across all channels. Dictionary learning is implemented via the beta-Bernoulli process, with spike sorting performed via the dynamic hierarchical Dirichlet process (dHDP), with these two models coupled. The dHDP is augmented to eliminate refractoryperiod violations, it allows the “appearance” and “disappearance” of neurons over time, and it models smooth variation in the spike statistics. 1 Introduction The analysis of action potentials (“spikes”) from neural-recording devices is a problem of longstanding interest (see [21, 1, 16, 22, 8, 4, 6] and the references therein). In such research one is typically interested in clustering (sorting) the spikes, with the goal of linking a given cluster to a particular neuron. Such technology is of interest for brain-machine interfaces and for gaining insight into the properties of neural circuits [14]. In such research one typically (i) filters the raw sensor readings, (ii) performs thresholding to “detect” the spikes, (iii) maps each detected spike to a feature vector, and (iv) then clusters the feature vectors [12]. Principal component analysis (PCA) is a popular choice [12] for feature mapping. After performing such sorting, one typically must (v) search for refractory-time violations [5], which occur when two or more spikes that are sufficiently proximate are improperly associated with the same cluster/neuron (which is impossible due to the refractory time delay required for the same neuron to re-emit a spike). 
Recent research has combined (iii) and (iv) within a single model [6], and methods have been developed recently to address (v) while performing (iv) [5]. Many of the early methods for spike sorting were based on classical clustering techniques [12] (e.g., K-means and GMMs, with a fixed number of mixtures), but recently Bayesian methods have been developed to account for more modeling sophistication. For example, in [5] the authors employed a modification to the Chinese restaurant formulation of the Dirichlet process (DP) [3] to automatically infer the number of clusters (neurons) present, allow statistical drift in the feature statistics, permit the “appearance”/“disappearance” of neurons with time, and automatically account for refractorytime requirements within the clustering (not as a post-clustering step). However, [5] assumed that the spike features were provided via PCA in the first two or three principal components (PCs). In [6] feature learning and spike sorting were performed jointly via a mixture of factor analyzers (MFA) formulation. However, in [6] model selection was performed (for the number of features and number of neurons) and a maximum likelihood (ML) “point” estimate was constituted for the model parameters; since a fixed number of clusters are inferred in [6], the model does not directly allow for the “appearance”/“disappearance” of neurons, or for any temporal dependence to the spike statistics. There has been an increasing interest in developing neural devices with C > 1 recording channels, each of which produces a separate electrical recording of neural activity. Recent research shows increased system performance with large C [18]. 
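The classical pipeline that the paper improves on — steps (iii) and (iv): PCA feature mapping followed by clustering — can be sketched in a few lines. The example below is a hedged illustration with synthetic waveforms (all data and the `pca_features` helper are invented here), not the authors' pipeline or dataset:

```python
import numpy as np

rng = np.random.default_rng(7)

def pca_features(spikes, n_pc=2):
    """Steps (iii)-(iv) baseline: project detected spike waveforms onto
    their first n_pc principal components via an SVD."""
    X = spikes - spikes.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_pc].T

# Two synthetic "neurons": noisy copies of two template waveforms.
t = np.linspace(0, 1, 30)
templates = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2),
                      -np.exp(-((t - 0.5) / 0.1) ** 2)])
labels = rng.integers(0, 2, size=200)
spikes = templates[labels] + 0.05 * rng.normal(size=(200, 30))

feats = pca_features(spikes)
assert feats.shape == (200, 2)
# on this easy synthetic case the two clusters separate along PC-1
m0, m1 = feats[labels == 0, 0].mean(), feats[labels == 1, 0].mean()
assert abs(m0 - m1) > feats[:, 0].std()
```

The point made in the paper is precisely that such offline PCA features are fixed a priori; the proposed model instead learns the feature space jointly with the sorting.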
[Figure 1: four scatter plots in the first two principal components (PC-1 vs. PC-2): (a) ground truth (known vs. unknown neuron); (b) K-means; (c) GMM; (d) proposed method.]

Figure 1: Comparison of spike sorting on real data. (a) Ground truth; (b) K-means clustering on the first 2 principal components; (c) GMM clustering with the first 2 principal components; (d) proposed method. Arrows mark example spikes that K-means and the GMM miss, and that the proposed method properly sorts.

Almost all of the above research on spike sorting has been performed on a single channel, or, when multiple channels are present, each is typically analyzed in isolation. In [5], C = 4 channels were considered, but it was assumed that a spike occurred at the same time (or nearly the same time) across all channels, and the features from the four channels were concatenated, effectively reducing this again to a single-channel analysis. When C ≫ 1, the assumption that a given neuron is observed simultaneously on all channels is typically inappropriate, and in fact diversity of neuron sensing across the device is desired, to enhance functionality [18]. This paper addresses the multi-channel neural-recording problem under conditions for which concatenation may be inappropriate; the proposed model generalizes the DP formulation of [5] with a hierarchical DP (HDP) formulation [20]. In this formulation statistical strength is shared across the channels, without assuming that a given neuron is simultaneously viewed across all channels. Further, the model generalizes the HDP via a dynamic HDP (dHDP) [17], to allow the "appearance"/"disappearance" of neurons while also allowing smooth changes in the statistics of the neurons. Further, we explicitly account for refractory times, as in [5].
We also perform joint feature learning and clustering, using a mixture of factor analyzers construction as in [6], but we do so in a fully Bayesian, multi-channel setting (additionally, [6] did not account for time-varying statistics). The learned factor loadings are found to be similar to wavelets, but they are matched to the properties of neuron spikes; this is in contrast to previous feature extraction on spikes [11] based on orthogonal wavelets, which are not necessarily matched to neuron properties. To give a preview of the results, providing a sense of the importance of feature learning (relative to mapping data into PCA features learned offline), in Figure 1 we show a comparison of clustering results on the first channel of the d533101 data from hc-1 [7]. For all cases in Figure 1 the data are depicted in the first two PCs for visualization, but the proposed method in (d) learns the number of features and their composition while simultaneously performing clustering. The results in (b) and (c) correspond respectively to widely employed K-means and GMM analyses, based on using two PCs (in these cases the analyses are performed in PCA space, as have been many more-advanced approaches [5]). From Figures 1(b) and (c), we observe that both K-means and GMM work well, but due to the constrained feature space they incorrectly classify some spikes (marked by arrows). However, the proposed model, shown in Figure 1(d), which incorporates dictionary learning with spike sorting, infers an appropriate feature space (not shown) and more effectively clusters the neurons. The details of this model, including the multi-channel extension, are discussed below.

2 Model Construction

2.1 Dictionary learning

We initially assume that spike detection has been performed on all channels. Spike n ∈ {1, . . . , N_c} on channel c ∈ {1, . . .
, C} is a vector x_n^{(c)} ∈ ℝ^D, defined by D time samples for each spike, centered at the peak of the detected signal; there are N_c spikes on channel c. Data from spike n on channel c, x_n^{(c)}, are represented in terms of a dictionary D ∈ ℝ^{D×K}, where K is an upper bound on the number of needed dictionary elements (columns of D), and the model infers the subset of dictionary elements needed to represent the data. Each x_n^{(c)} is represented as

x_n^{(c)} = D Λ^{(c)} s_n^{(c)} + ε_n^{(c)} ,   (1)

where Λ^{(c)} = diag(λ_1^{(c)} b_1, λ_2^{(c)} b_2, . . . , λ_K^{(c)} b_K) is a diagonal matrix, with b = (b_1, . . . , b_K)^T ∈ {0, 1}^K. Defining d_k as the kth column of D, and letting I_D represent the D × D identity matrix, the priors on the model parameters are

d_k ∼ N(0, (1/D) I_D) ,  λ_k^{(c)} ∼ TN⁺(0, γ_c^{−1}) ,  ε_n^{(c)} ∼ N(0, Σ_c^{−1}) ,   (2)

where Σ_c = diag(η_1^{(c)}, . . . , η_D^{(c)}), and TN⁺(·) represents the truncated (positive) normal distribution. Gamma priors (detailed when presenting results) are placed on γ_c and on each of the elements of (η_1^{(c)}, . . . , η_D^{(c)}). For the binary vector b we impose the prior b_k ∼ Bernoulli(π_k), with π_k ∼ Beta(a/K, b(K−1)/K), implying that the number of non-zero components of b is drawn Binomial(K, a/(a + b(K−1))); this corresponds to Poisson(a/b) in the limit K → ∞. Parameters a and b are set to favor a sparse b. This model imposes that each x_n^{(c)} is drawn from a linear subspace, defined by the columns of D with corresponding non-zero components in b; the same linear subspace is shared across all channels c ∈ {1, . . . , C}. However, the strength with which a column of D contributes toward x_n^{(c)} depends on the channel c, as defined by Λ^{(c)}. Concerning Λ^{(c)}, rather than explicitly imposing a sparse diagonal via b, we may also draw λ_k^{(c)} ∼ TN⁺(0, γ_{ck}^{−1}), with shrinkage priors employed on the γ_{ck} (i.e., with the γ_{ck} drawn from a gamma prior that favors large γ_{ck}, which encourages many of the diagonal elements of Λ^{(c)} to be small, but typically not exactly zero).
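A forward (generative) draw from the dictionary model of Eqs. (1)–(2) can be sketched as follows. This is a simplified illustration with invented hyperparameters, an isotropic noise term in place of the per-dimension precisions, and hypothetical names; it is not the authors' sampler:

```python
import numpy as np

rng = np.random.default_rng(3)
D_dim, K, C = 40, 12, 4   # samples per spike, dictionary size, channels

Dmat = rng.normal(scale=1.0 / np.sqrt(D_dim), size=(D_dim, K))  # d_k ~ N(0, I/D)
pi = rng.beta(1.0 / K, (K - 1.0) / K, size=K)   # a = b = 1, favoring sparsity
b = (rng.random(K) < pi).astype(float)          # shared binary mask
lam = np.abs(rng.normal(size=(C, K)))           # truncated-normal-style loadings

def draw_spike(c, s, noise_std=0.05):
    """Generate x_n^(c) = D Lambda^(c) s_n^(c) + eps per Eq. (1)."""
    return Dmat @ (lam[c] * b * s) + noise_std * rng.normal(size=D_dim)

x = draw_spike(0, rng.normal(size=K))
assert x.shape == (D_dim,)
# the shared mask b switches dictionary columns off for every channel at once
assert np.all((lam * b)[:, b == 0] == 0.0)
```

The key structural point mirrored here is that b is shared across channels (a common subspace), while the loadings lam differ per channel, exactly as described after Eq. (2).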
In tests, the model performed similarly when shrinkage priors were used on Λ^{(c)} relative to explicit imposition of sparseness via b; all results below are based on the latter construction.

2.2 Multi-channel dynamic hierarchical Dirichlet process

We sort the spikes on the channels by clustering the {s_n^{(c)}}, and in this sense feature design (learning {D Λ^{(c)}}) and sorting are performed simultaneously. We first discuss how this may be performed via a hierarchical Dirichlet process (HDP) construction [20], and then extend this via a dynamic HDP (dHDP) [17] considering multiple channels. In an HDP construction, the {s_n^{(c)}} are modeled as being drawn

s_n^{(c)} ∼ f(θ_n^{(c)}) ,  θ_n^{(c)} ∼ G^{(c)} ,  G^{(c)} ∼ DP(α_c G) ,  G ∼ DP(α_0 G_0) ,   (3)

where a draw from, for example, DP(α_0 G_0) may be constructed [19] as G = Σ_{i=1}^{∞} π_i δ_{θ*_i}, with π_i = V_i ∏_{h<i}(1 − V_h), V_i ∼ Beta(1, α_0), θ*_i ∼ G_0, and δ_{θ*_i} a unit point measure situated at θ*_i. Each of the G^{(c)} is therefore of the form G^{(c)} = Σ_{i=1}^{∞} π_i^{(c)} δ_{θ*_i}, with Σ_{i=1}^{∞} π_i^{(c)} = 1 and with the {θ*_i} shared across all G^{(c)}, but with channel-dependent (c-dependent) probability of using elements of {θ*_i}. Gamma hyperpriors are employed for {α_c} and α_0. In the context of the model developed in Section 2.1, the density function f(·) corresponds to a Gaussian, and the parameters θ*_i = (μ*_i, Γ*_i) correspond to means and precision matrices, with G_0 a normal-Wishart distribution. The proposed model may be viewed as a mixture of factor analyzers (MFA) [6] applied to each channel, with the addition of sharing of statistical strength across the C channels via the HDP. Sharing is manifested in two forms: (i) via the shared linear subspace defined by the columns of D, and (ii) via hierarchical clustering of the relative weightings {s_n^{(c)}} via the HDP. In tests, the use of channel-dependent Λ^{(c)} was found critical to modeling success, compared to employing a single Λ shared across all channels.
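The stick-breaking construction of G used above is easy to reproduce numerically; the truncated sketch below (a standard illustration, not the paper's inference code) shows that the weights π_i form an almost-complete probability vector once many sticks are broken:

```python
import numpy as np

rng = np.random.default_rng(4)

def stick_breaking(alpha0, n_sticks):
    """Truncated stick-breaking weights pi_i = V_i * prod_{h<i}(1 - V_h),
    V_i ~ Beta(1, alpha0), as in the construction of G ~ DP(alpha0 G0)."""
    V = rng.beta(1.0, alpha0, size=n_sticks)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - V[:-1])])
    return V * remaining

pi = stick_breaking(alpha0=2.0, n_sticks=500)
assert np.all(pi >= 0.0) and pi.sum() <= 1.0 + 1e-12
assert pi.sum() > 0.99   # with many sticks the truncation error is tiny
```

Larger alpha0 spreads mass over more atoms (more clusters a priori); smaller alpha0 concentrates it on a few, which is the usual DP clustering behavior exploited for spike sorting.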
The above HDP construction assumes that G^{(c)} = Σ_{i=1}^{∞} π_i^{(c)} δ_{θ*_i} is time-independent, implying that the probability π_i^{(c)} that x_n^{(c)} is drawn from f(θ*_i) is time-invariant. There are two ways this assumption may be violated. First, neuron refractory time implies a minimum delay between consecutive firings of the same neuron; this effect is addressed in a relatively straightforward manner discussed in Section 2.3. The second issue corresponds to the "appearance" or "disappearance" of neurons [5]; the former would be characterized by an increase in the value of a component of π_i^{(c)}, while the latter would be characterized by one of the components of π_i^{(c)} going to zero (or near zero). It is desirable to augment the model to address these objectives. We achieve this by application of the dHDP construction developed in [17]. As in [5], we divide the time axis into contiguous, non-overlapping temporal blocks, where block j corresponds to spikes observed between times τ_{j−1} and τ_j; we consider J such blocks, indexed j = 1, . . . , J. The spikes on channel c within block j are denoted {x_{jn}^{(c)}}_{n=1,...,N_{cj}}, where N_{cj} represents the number of spikes within block j on channel c. In the dHDP construction we have

s_{jn}^{(c)} ∼ f(θ_{jn}^{(c)}) ,  θ_{jn}^{(c)} ∼ w_j^{(c)} G_j^{(c)} + (1 − w_j^{(c)}) G_{j−1}^{(c)}   (4)

G_j^{(c)} ∼ DP(α_{jc} G) ,  G ∼ DP(α_0 G_0) ,  w_j^{(c)} ∼ Beta(c, d) ,   (5)

where w_1^{(c)} = 1 for all c. The expression w_j^{(c)} controls the probability that θ_{jn}^{(c)} is drawn from G_j^{(c)}, while with probability 1 − w_j^{(c)} the parameter θ_{jn}^{(c)} is drawn from G_{j−1}^{(c)}.
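The block-to-block mixture of Eq. (4) can be illustrated by collapsing each G to its weight vector over the shared atoms — a simplification, since in the model the G_j are themselves DP draws, but it shows how the innovation weight w_j controls the speed of change (all names and numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def dhdp_block_weights(pi_blocks, w):
    """Effective mixing weights over shared atoms for each block j:
    pi_eff_j = w_j * pi_j + (1 - w_j) * pi_eff_{j-1}, with w_1 = 1,
    mirroring the mixture w_j G_j + (1 - w_j) G_{j-1} of Eq. (4)."""
    eff = [pi_blocks[0]]                 # w_1 = 1: block 1 uses G_1 only
    for pi_j, w_j in zip(pi_blocks[1:], w[1:]):
        eff.append(w_j * pi_j + (1.0 - w_j) * eff[-1])
    return np.array(eff)

pi_blocks = rng.dirichlet(np.ones(5), size=3)   # per-block weights, 5 atoms
w = np.array([1.0, 0.1, 0.9])                   # small w -> slow change
eff = dhdp_block_weights(pi_blocks, w)
assert np.allclose(eff.sum(axis=1), 1.0)        # convex combinations
# with w_2 = 0.1, block 2 stays close to block 1
assert np.abs(eff[1] - eff[0]).max() <= 0.1 * np.abs(pi_blocks[1] - eff[0]).max() + 1e-12
```

A w_j near one lets a new neuron's weight jump up (appearance) or an old one drop to near zero (disappearance); a small w_j keeps the statistics nearly frozen, matching the interpretation given in the text.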
The cumulative mixture model w(c)_j G(c)_j + (1 − w(c)_j) G(c)_{j−1} supports arbitrary levels of variation from block to block in the spike-train analysis: if w(c)_j is small, the probability of observing a particular type of neuron does not change significantly from block j − 1 to j, while if w(c)_j ≈ 1 the mixture probabilities can change quickly (e.g., due to the “appearance”/“disappearance” of a neuron); for w(c)_j between these extremes, the probability of observing a particular neuron changes slowly/smoothly across consecutive blocks. The model therefore allows a significant degree of flexibility and adaptivity to changes in neuron statistics.

2.3 Accounting for Refractory Time and Drift

To demonstrate how one may explicitly account for refractory-time conditions within the model, assume the time difference between spikes x(c)_{jν} and x(c)_{jν′} is less than the refractory time, while all other spikes have temporal separations greater than the refractory time; we consider two spikes of this type for notational convenience, but the basic formulation below may be readily extended to more than two such spikes. We wish to impose that x(c)_{jν} and x(c)_{jν′} should not be associated with the same cluster/neuron, but otherwise the model is unchanged. Hence, for n ≠ ν′, θ(c)_{jn} ∼ Ĝ(c)_j = w(c)_j G(c)_j + (1 − w(c)_j) G(c)_{j−1} as in (4). Assuming Ĝ(c)_j = Σ_{i=1}^∞ π̂(c)_{ji} δ_{θ*_i}, we have the new conditional generative construction

θ(c)_{jν′} | θ(c)_{jν} ∼ Σ_{i=1}^∞ [ π̂(c)_{ji} (1 − I(θ(c)_{jν} = θ*_i)) / Σ_{l=1}^∞ π̂(c)_{jl} (1 − I(θ(c)_{jν} = θ*_l)) ] δ_{θ*_i}   (6)

where I(·) is the indicator function (equal to one if the argument is true, and zero otherwise). This construction imposes that θ(c)_{jν′} ≠ θ(c)_{jν}, but otherwise preserves that the elements of {θ*_i} are drawn with relative probabilities consistent with Ĝ(c)_j.
Note that the time associated with a given spike is assumed known after detection (i.e., it is a covariate), and therefore it is known a priori for which spikes the above adjustments must be made to the model. The representation in (6) constitutes a proper generative construction for {θ(c)_{jn}} in the presence of spikes that co-occur within the refractory time, but it complicates inference. Specifically, recall that G(c)_j = Σ_{i=1}^∞ π(c)_{ji} δ_{θ*_i}, with π(c)_{ji} = U(c)_{ji} Π_{h<i}(1 − U(c)_{jh}) and U(c)_{ji} ∼ Beta(1, α_{jc}). In the original construction, (4) and (5), in which refractory-time violations are not accounted for, the Gibbs update equations for {U(c)_{ji}} are analytic, due to model conjugacy. However, conjugacy for {U(c)_{ji}} is lost with (6), and therefore a Metropolis-Hastings (MH) step is required to draw these random variables within a Markov chain Monte Carlo (MCMC) analysis. This added complexity is often unnecessary, since the number of refractory-time events is typically very small relative to the total number of spikes that must be sorted. Hence, we have successfully implemented the following approximation to the above construction. While θ(c)_{jν′} is drawn as in (6), assigning θ(c)_{jν′} to one of the members of {θ*_i} while avoiding a refractory-time violation, the update equations for {U(c)_{ji}} are executed as they would be in (4) and (5), without an MH step. In other words, a construction like (6) is used to assign elements of {θ*_i} to spikes, but after this step the update equations for {U(c)_{ji}} are implemented as in the original (conjugate) model. This is essentially the same approach employed in [5], but now in terms of a “stick-breaking” rather than a CRP construction of the DP (here a dHDP), and as in [5] we have found this to yield encouraging results (e.g., no refractory-time violations, and sorting in good agreement with “truth” when available).
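As an illustration, the conditional draw in (6) amounts to zeroing out the atom used by the refractory-coupled spike and renormalizing the remaining mass. A minimal sketch (our own, with a hypothetical finite truncation of π̂(c)_j):

```python
import numpy as np

def draw_atom_excluding(pi_hat, forbidden, rng):
    """Draw an atom index from truncated weights pi_hat, with the atom used by the
    refractory-coupled spike removed and the rest renormalized, as in (6)."""
    w = np.asarray(pi_hat, dtype=float).copy()
    w[forbidden] = 0.0            # impose theta_{j,nu'} != theta_{j,nu}
    w /= w.sum()                  # renormalize the remaining mass
    return rng.choice(len(w), p=w)

rng = np.random.default_rng(1)
pi_hat = [0.5, 0.3, 0.2]
idx = draw_atom_excluding(pi_hat, forbidden=0, rng=rng)  # idx is never 0
```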
Finally, in [5] the authors considered a “drift” in the atoms associated with the DP, which here would correspond to a drift in the atoms associated with our dHDP. In this construction, rather than drawing θ*_i ∼ G_0 once as in (5), one may draw θ*_i ∼ G_0 for the first block of time, and then employ a simple Gaussian autoregressive model to allow the {θ*_i} to drift a small amount between consecutive blocks. Specifically, if {θ*_{ji}} represents the atoms for block j, then θ*_{j+1,i} ∼ N(θ*_{ji}, β_0^{−1}), where it is imposed that β_0 is large. We examined this within the context of the model proposed here, and for the data considered in Section 4 this added modeling complexity did not change the results significantly; we therefore did not include it when presenting results. This observed unimportance of imposing drift in {θ*_{ji}} is likely due to the fact that we draw s(c)_{jn} ∼ f(θ(c)_{jn}) with a Gaussian f(·), and therefore even if the {θ*_{ji}} do not change across data blocks, the model allows drift via variations in the draws from the Gaussian (affecting the inferred variance thereof).

3 Inference and Computations

For online sorting of spikes, a Chinese restaurant process (CRP) formulation like that in [5] is desirable. The proposed model may be implemented as a generalization of the CRP, as the general form of the model in Section 2.2 is independent of the specific way inference is performed. In a CRP construction, the Chinese restaurant franchise (CRF) model [20] is invoked, and the model in Section 2.2 yields a dynamic CRF (dCRF), where each franchise is associated with a particular channel. The hierarchical form of the dCRF, including the dictionary-learning component of Section 2.1, is fully conjugate, and may therefore be implemented via a Gibbs sampler. As hinted by the construction in (6), we here employ a stick-breaking construction of the model, analogous to the form of inference employed in [17].
We employ a retrospective stick-breaking construction [15] for G(c)_j and G [10], such that the number of terms used to construct G and G(c)_j is unbounded and adapts to the data. Using this construction the model is able to adapt to the number of neurons present, adding and deleting clusters as needed. In this sense the stick-breaking construction may also be considered for online implementations. Further, in this model the Gibbs sampling of parameters follows an online-style inference, since the data blocks arrive sequentially and the parameters for each block depend only on the previous block or a new component. Therefore, while online implementation is not our principal focus here, it may be executed with the proposed model. We also implemented a CRF version, for which there is no truncation; both inference methods (stick-breaking and CRF implementations) gave very similar results. Although this paper is not principally focused on online implementations, in that context one may also consider online and evolving learning of the dictionary D [13]. There is recent research on online dictionary learning that may be adapted here, using recent extensions via Bayesian formalisms [9]; this would, for example, allow the linear subspace in which the spike shapes reside to adapt/change with each data block.

4 Example Results

For these experiments we used a truncation level of K = 60 dictionary elements. In dictionary learning, the hyperparameters in the gamma priors of γ_c and η(c)_p were set as a_{γc} = 10^{−6}, b_{γc} = 10^{−6}, a_{η(c)_p} = 0.1, and b_{η(c)_p} = 10^{−5}. In the HDP, we set Ga(1, 1) priors for α_0 and α_c; in the dHDP, we set Ga(1, 1) priors for α_0 and α_{jc}. Meanwhile, in order to encourage the groups to be shared, we set the prior Π_{c=1}^C Π_{j=1}^{J−1} Beta(w(c)_j; a_w, b_w) with a_w = 0.1 and b_w = 1. These parameters have not been optimized, and many analogous settings yield similar results.
We used 5000 burn-in samples and 5000 collection samples in the Gibbs sampler, and when presenting example clusterings below we chose the collection sample with the maximum likelihood. For K-means and the GMM, we set the number of clusters to 3 for the simulated data and to 2 for the real data (see below).

Table 1: Summary of results on simulated data.
Methods              Channel 1  Channel 2  Channel 3  Average
K-means              96.00%     96.02%     95.77%     95.93%
GMM                  84.33%     94.25%     91.75%     90.11%
K-means with 2 PCs   96.8%      96.9%      96.50%     96.81%
GMM with 2 PCs       96.83%     96.98%     96.92%     96.91%
DP-DL                97.00%     96.92%     97.08%     97.00%
HDP-DL               97.39%     97.08%     97.08%     97.18%

4.1 Simulated Data

In neural spike trains it is very difficult to obtain ground-truth information, so for testing and verification we initially consider simulated data with known ground truth. To generate data we draw from the model x(c)_n ∼ N(D diag(λ(c)) s(c)_n, 0.01 I_D). We define D ∈ R^{D×K} and λ(c) ∈ R^K, which constructs our data from K = 2 primary dictionary elements of length D = 40 in C = 3 channels. These dictionary elements are drawn randomly. We vary λ(c) from channel to channel, and for each spike we generate the feature strength according to p(s(c)_n) = Σ_{i=1}^3 π_i N(s(c)_n | μ(c)_i, 0.5 I_K) with π = [1/3, 1/3, 1/3], which means that there are three neurons across all the channels. We define μ(c)_i ∈ R^K as the mean in the feature space for each neuron and shift the neuron mean from channel to channel. For the results, we associate each cluster with a neuron and determine the percentage of spikes assigned to their correct cluster. The results are shown in Table 1. The combined Dirichlet process and dictionary learning (DP-DL) gives similar results to the GMM with 2 principal components (PCs). Because the DP-DL learns the appropriate number of clusters (three) and dictionary elements (two), these models are expected to perform similarly, except that the DP-DL does not require knowledge of the number of dictionary elements and clusters a priori.
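The simulated-data generator just described can be sketched as follows. The constants (K = 2, D = 40, C = 3, three equally weighted neurons, noise variance 0.01, feature variance 0.5) come from the text; the scales of the neuron means and of λ(c) are our own hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(2)
D_len, K, C, N = 40, 2, 3, 300             # spike length, dictionary size, channels, spikes/channel
Dict = rng.standard_normal((D_len, K))      # randomly drawn dictionary elements
mu = 3.0 * rng.standard_normal((3, K))      # per-neuron means mu_i^(c) (hypothetical scale)

data = {}
for c in range(C):
    lam = rng.uniform(0.5, 1.5, size=K)                            # channel-dependent lambda^(c)
    labels = rng.integers(0, 3, size=N)                            # pi = [1/3, 1/3, 1/3]
    s = mu[labels] + np.sqrt(0.5) * rng.standard_normal((N, K))    # feature strengths s_n^(c)
    x = s * lam @ Dict.T + 0.1 * rng.standard_normal((N, D_len))   # N(D diag(lam) s, 0.01 I_D)
    data[c] = (x, labels)
```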
The HDP-DL is allowed to share global clusters and dictionary elements between channels, which further improves results. In Figure 2, the sample posteriors peak at the true values of 3 used “global” clusters (at the top layer of the HDP) and 2 used dictionary elements. Additionally, the HDP shares cluster information between channels, which helps the clustering accuracy. In fact, spikes occurring at the same time will typically be drawn from the same global cluster despite having independent local clusters, as seen in the global clusters from each channel in Figure 2(b). Thus, we can determine a global spike at each time point as well as on each channel.

Figure 2: Posterior information from HDP-DL on simulated data. (a) Approximate posterior distribution of the number of used dictionary elements (i.e., ||b||_0); (b) example collection sample of the global cluster usage (each local cluster is mapped to its corresponding global index); (c) approximate posterior distribution of the number of global clusters used.

Table 2: Results from testing on d533101 data [7]. KFM represents the Kalman filter mixture method [2].
Methods              Channel 1  Channel 2  Channel 3  Channel 4  Average
K-means              86.67%     88.04%     89.20%     88.4%      88.08%
GMM                  87.43%     90.06%     86.75%     85.43%     87.42%
K-means with 2 PCs   87.47%     88.16%     89.40%     88.72%     88.44%
GMM with 2 PCs       89.00%     89.04%     87.43%     90.7%      89.04%
KFM with 2 PCs       91.00%     89.2%      86.35%     86.87%     88.36%
DP with 2 PCs        89.04%     89.00%     87.43%     86.79%     88.07%
HDP with 2 PCs       90.36%     90.00%     90.00%     87.79%     89.54%
DP-DL                92.29%     92.38%     89.52%     92.45%     91.89%
HDP-DL               93.38%     93.18%     93.05%     92.61%     93.05%

4.2 Real Data with Partial Ground Truth

We use the publicly available dataset hc-1. These data consist of both extracellular recordings and an intracellular recording from a nearby neuron in the hippocampus of an anesthetized rat [7]. Intracellular recordings give clean signals for the spike train of a specific neuron, giving accurate spike times for that neuron. Thus, if we detect a spike in a nearby extracellular recording within a short time period (< 0.5 ms) of an intracellular spike, we assume that the spike detected in the extracellular recording corresponds to the known neuron. This provides partial ground truth and allows us to compare methods against the known information. For the accuracy analysis, we determine one cluster that corresponds to the known neuron; a spike is then considered correctly sorted if it is a known spike assigned to the known cluster, or an unknown spike assigned to an unknown cluster. In order to give a fair comparison of methods, we first considered the widely used d533101 data with the same preprocessing as [2]. These data consist of 4-channel extracellular recordings and a 1-channel intracellular recording. We used 2491 detected spikes, 786 of which came from the known neuron. The results, shown in Table 2, demonstrate that learning the feature space instead of using the top 2 PCA components increases sorting accuracy.
This phenomenon can be seen in Figure 1, where it is impossible to accurately resolve the clusters in the space spanned by the top 2 principal components, through either K-means or the GMM. Thus, by jointly learning a suitable feature space and the clustering, we are able to separate the unknown and known neuron clusters more accurately. In the HDP model the advantage is clear in the global accuracy: we achieve 89.54% when using 2 PCs and 93.05% when using dictionary learning. In addition to learning the appropriate feature space, HDP-DL and DP-DL can infer the appropriate number of clusters, allowing the data to define the number of neurons. The posterior distributions of the number of global clusters and the number of factors (dictionary elements) used are shown in Figures 3(a) and 3(b), along with the most-used elements of the learned dictionary in Figure 3(c). The dictionary elements show shapes similar to both the neuron spikes in Figure 3(d) and wavelets. The spiky nature of the learned dictionary can give factors similar to those used in the discrete-wavelet-transform clustering of [11], which chose the Daubechies wavelet for its spiky nature (but here, rather than a priori selecting an orthogonal wavelet basis, we learn a dictionary that is typically not orthogonal, but is wavelet-like). Next we used the d561102 data from hc-1, which consist of 4 extracellular recordings and 1 intracellular recording. For spike detection we high-pass filtered the data above 300 Hz and detected spikes when the voltage level passed a positive or negative threshold, as in [2]. We chose these data because the known neuron displays dynamic properties, showing periods of activity and inactivity. The intracellular recording in Figure 4(a) shows that the known neuron is active for only a brief section of the recorded signal, and is then inactive for the rest of the signal. This nonstationarity passes along to the extracellular spike train and the detected spikes.
We used the first 930 detected spikes, which included 202 spikes from the known cluster. In order to model the dynamic properties, we binned the data into 31 subgroups of 30 spikes to use with our multichannel dynamic HDP. The results are shown in Table 3.

1 Available from http://crcns.org/data-sets/hc/hc-1

Figure 3: Results from HDP-DL on d533101 data. (a) Approximate posterior probability of the number of global clusters (across all channels); (b) approximate posterior distribution of the number of dictionary elements; (c) the six most-used dictionary elements; (d) examples of typical spikes from the data.

Table 3: Results for d561102 data [7].
Methods              Channel 1  Channel 2  Channel 3  Channel 4  Average
K-means              61.82%     78.77%     83.59%     89.39%     78.39%
GMM                  73.85%     78.66%     74.18%     76.59%     75.82%
K-means with 2 PCs   61.82%     78.77%     84.79%     89.39%     78.69%
GMM with 2 PCs       75.82%     78.77%     75.71%     88.73%     79.76%
DP-DL                68.49%     81.73%     84.57%     88.73%     80.88%
HDP-DL               74.40%     82.49%     85.34%     88.40%     82.66%
MdHDP-DL             76.04%     84.79%     87.53%     90.48%     84.71%

The model adapts to the nonstationary spike dynamics by learning parameters that capture the dynamic properties at block 11 (w(c)_11 ≈ 1, indicating that the dHDP has detected a change in the characteristics of the spikes), where the known neuron goes inactive. Thus, the model is more likely to draw new local clusters at this point, reflecting the nonstationary data. Additionally, in Figure 4(c) the global cluster usage shows a dramatic change at time block 11, where a cluster in the model goes inactive at the same time the known neuron becomes inactive. Because the dynamic model can capture these dynamic properties, the results improve when using this model.
Additionally, we obtain a global accuracy (across all channels) of 82.66% using the HDP-DL and a global accuracy of 84.71% using the multichannel dynamic HDP-DL (MdHDP-DL). We also tried the KFM on these data, but were unable to obtain satisfactory results with it. We additionally calculated true-positive and false-positive counts to evaluate each method; due to limited space, those results appear in the Supplementary Material.

Figure 4: Results of the multichannel dHDP on d561102. (a) First 40 seconds of the intracellular recording of d561102; (b) local cluster usage by each spike in the d561102 data in channel 4; (c) global cluster usage at different time blocks for the d561102 data; (d) sharing weight w(c)_j at each time block in the fourth channel; the spike at block 11 (the probability of introducing a new component) occurs when the known neuron goes inactive.

5 Conclusions

We have presented a new method for performing multi-channel spike sorting, in which the underlying features (dictionary elements) and the sorting are learned jointly, while also allowing time-evolving variation in the spike statistics. The model adaptively learns dictionary elements of a wavelet-like nature (but not orthogonal), with characteristics like the shapes of the spikes. Encouraging results have been presented on simulated and real data sets.

Acknowledgements

The authors would like to thank A. Calabrese for providing the KFM code and the processed d533101 data. The research reported here was supported under the DARPA HIST program.

References

[1] A. Bar-Hillel, A. Spiro, and E. Stark.
Spike sorting: Bayesian clustering of non-stationary data. J. Neuroscience Methods, 2006.
[2] A. Calabrese and L. Paninski. Kalman filter mixture model for spike sorting of non-stationary data. J. Neuroscience Methods, 2010.
[3] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1973.
[4] Y. Gao, M. J. Black, E. Bienenstock, S. Shoham, and J. P. Donoghue. Probabilistic inference of arm motion from neural activity in motor cortex. Proc. Advances in NIPS, 2002.
[5] J. Gasthaus, F. Wood, D. Gorur, and Y. W. Teh. Dependent Dirichlet process spike sorting. In Advances in Neural Information Processing Systems, 2009.
[6] D. Gorur, C. Rasmussen, A. Tolias, F. Sinz, and N. Logothetis. Modelling spikes with mixtures of factor analysers. Pattern Recognition, 2004.
[7] D. A. Henze, Z. Borhegyi, J. Csicsvari, A. Mamiya, K. D. Harris, and G. Buzsaki. Intracellular features predicted by extracellular recordings in the hippocampus in vivo. J. Neurophysiology, 2010.
[8] J. A. Herbst, S. Gammeter, D. Ferrero, and R. H. R. Hahnloser. Spike sorting with hidden Markov models. J. Neuroscience Methods, 2008.
[9] M. D. Hoffman, D. M. Blei, and F. Bach. Online learning for latent Dirichlet allocation. Proc. NIPS, 2010.
[10] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. J. Am. Stat. Assoc., 2001.
[11] J. C. Letelier and P. P. Weber. Spike sorting based on discrete wavelet transform coefficients. J. Neuroscience Methods, 2000.
[12] M. S. Lewicki. A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems, 1998.
[13] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. J. Machine Learning Research, 2010.
[14] M. A. Nicolelis. Brain-machine interfaces to restore motor function and probe neural circuits. Nature Reviews Neuroscience, 2003.
[15] O. Papaspiliopoulos and G. O. Roberts.
Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 2008.
[16] C. Pouzat, M. Delescluse, P. Viot, and J. Diebolt. Improved spike-sorting by modeling firing statistics and burst-dependent spike amplitude attenuation: a Markov chain Monte Carlo approach. J. Neurophysiology, 2004.
[17] L. Ren, D. B. Dunson, and L. Carin. The dynamic hierarchical Dirichlet process. International Conference on Machine Learning, 2008.
[18] G. Santhanam, S. I. Ryu, B. M. Yu, A. Afshar, and K. V. Shenoy. A high-performance brain-computer interface. Nature, 2006.
[19] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[20] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. J. Am. Stat. Assoc., 2005.
[21] F. Wood, S. Roth, and M. J. Black. Modeling neural population spiking activity with Gibbs distributions. Proc. Advances in Neural Information Processing Systems, 2005.
[22] W. Wu, M. J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J. P. Donoghue. Neural decoding of cursor motion using a Kalman filter. Proc. Advances in NIPS, 2003.
2011
Convergent Fitted Value Iteration with Linear Function Approximation

Daniel J. Lizotte
David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1 Canada
dlizotte@uwaterloo.ca

Abstract

Fitted value iteration (FVI) with ordinary least squares regression is known to diverge. We present a new method, “Expansion-Constrained Ordinary Least Squares” (ECOLS), that produces a linear approximation but also guarantees convergence when used with FVI. To ensure convergence, we constrain the least squares regression operator to be a non-expansion in the ∞-norm. We show that the space of function approximators that satisfy this constraint is richer than the space of “averagers,” we prove a minimax property of the ECOLS residual error, and we give an efficient algorithm for computing the coefficients of ECOLS based on constraint generation. We illustrate the algorithmic convergence of FVI with ECOLS in a suite of experiments, and discuss its properties.

1 Introduction

Fitted value iteration (FVI), in both the model-based [4] and model-free [5, 15, 16, 17] settings, has become a method of choice for various applied batch reinforcement learning problems. However, it is known that, depending on the function approximation scheme used, fitted value iteration can and does diverge in some settings. This is particularly problematic, and easy to illustrate, when using linear regression as the function approximator. The problem of divergence in FVI has been clearly illustrated in several settings [2, 4, 8, 22]. Gordon [8] proved that the class of averagers, a very smooth class of function approximators, can safely be used with FVI. Further interest in batch RL methods then led to work that uses non-parametric function approximators with FVI to avoid divergence [5, 15, 16, 17].
This has left a gap in the “middle ground” of function approximator choices that guarantee convergence: we would like a function approximator that is more flexible than the averagers but more easily interpreted than the non-parametric approximators. In many scientific applications, linear regression is a natural choice because of its simplicity and interpretability when used with a small set of scientifically meaningful state features. For example, in a medical setting, one may want to base a value function on patient features that are hypothesized to impact a long-term clinical outcome [19]. This enables scientists to interpret the parameters of an optimal learned value function as evidence for or against the importance of these features. Thus, for this work, we restrict our attention to linear function approximation, and ensure algorithmic convergence to a fixed point regardless of the generative model of the data. This is in contrast to previous work that explores how properties of the underlying MDP and properties of the function approximation space jointly influence convergence of the algorithm [1, 14, 6]. Our aim is to develop a variant of linear regression that, when used in a fitted value iteration algorithm, guarantees convergence of the algorithm to a fixed point. The contributions of this paper are three-fold: 1) We develop and describe the “Expansion-Constrained Ordinary Least Squares” (ECOLS) approximator. Our approach is to constrain the regression operator to be a non-expansion in the ∞-norm. We show that the space of function approximators that satisfy this property is richer than the space of averagers [8], and we prove a minimax property on the residual error of the approximator. 2) We give an efficient algorithm for computing the coefficients of ECOLS based on quadratic programming with constraint generation.
3) We verify the algorithmic convergence of fitted value iteration with ECOLS in a suite of experiments and discuss its performance. Finally, we discuss future directions of research and comment on the general problem of learning an interpretable value function and policy from fitted value iteration.

2 Background

Consider a finite MDP with states S = {1, ..., n}, actions A = {1, ..., |A|}, state transition matrices P^(a) ∈ R^{n×n} for each action, a deterministic1 reward vector r ∈ R^n, and a discount factor γ < 1. Let M_{i,:} (M_{:,i}) denote the ith row (column) of a matrix M. The “Bellman optimality” operator or “dynamic programming” operator T is given by

(Tv)_i = r_i + max_a [ γ P^(a)_{i,:} v ].   (1)

The fixed point of T is the optimal value function v*, which satisfies the Bellman equation Tv* = v* [3]. From v* we can recover a policy π*_i = argmax_a γ P^(a)_{i,:} v* that has v* as its value function. An analogous operator K can be defined for the state-action value function Q ∈ R^{n×|A|}:

(KQ)_{i,j} = r_i + γ P^(j)_{i,:} max_a Q_{:,a}.   (2)

The fixed point of K is the optimal state-action value function Q*, which satisfies KQ* = Q*. The value iteration algorithm proceeds by starting with an initial v or Q and applying T or K repeatedly until convergence, which is guaranteed because both T and K are contraction mappings in the infinity norm [8], as we discuss further below. The above operators assume knowledge of the transition model P^(a) and rewards r. However, K in particular is easily adapted to the case of a batch of n tuples of the form (s_i, a_i, r_i, s'_i) obtained by interaction with the system [5, 15, 16, 17]. In this case, Q is only evaluated at states in our data set, and in MDPs with continuous state the number of tuples n is analogous, from a computational point of view, to the size of the state space. Fitted value iteration [5, 15, 16, 17] (FVI) interleaves either T or K above with a function approximation operator M.
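The Bellman optimality operator T in (1) is straightforward to implement for a small finite MDP. The following sketch (our own toy example, not from the paper) iterates T to its fixed point v*:

```python
import numpy as np

def bellman_T(v, r, P_list, gamma):
    """(Tv)_i = r_i + max_a gamma * P^(a)[i, :] @ v  -- equation (1)."""
    return r + gamma * np.max(np.stack([P @ v for P in P_list]), axis=0)

# Toy 2-state, 2-action MDP with made-up transition matrices.
r = np.array([1.0, 0.0])
P_list = [np.array([[0.9, 0.1], [0.2, 0.8]]),
          np.array([[0.5, 0.5], [0.7, 0.3]])]
v = np.zeros(2)
for _ in range(500):           # T is a gamma-contraction, so iteration converges
    v = bellman_T(v, r, P_list, gamma=0.9)
# v now satisfies the Bellman equation Tv = v to numerical precision.
```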
For example, in the model-based case the composed operator (M ◦ T) is applied repeatedly to an initial guess v_0. FVI has become increasingly popular, especially in the field of “batch-mode reinforcement learning” [13, 7], where a policy is learned from a fixed batch of data that was collected by a prior agent. This has particular significance in scientific and medical applications, where ethics concerns prevent the use of current RL methods to interact directly with a trial subject. In these settings, data gathered from controlled trials can still be used to learn good policies [11, 19]. Convergence of FVI depends on properties of M, particularly on whether M is a non-expansion in the ∞-norm, as we discuss below. The main advantage of fitted value iteration is that the cost of computing (M ◦ T) can be much lower than n in cases where the approximator M only requires computation of the elements (Tv)_i for a small subset of the state space. If M generalizes well, this enables learning in large finite or continuous state spaces. Another advantage is that M can be chosen to represent the value function in a meaningful way, i.e., in a way that meaningfully relates state variables to expected performance. For example, if M were linear regression and a particular state feature had a positive coefficient in the learned value function, we would know that larger values of that state feature are preferable. Linear models are important because of their ease of interpretation, but unfortunately, ordinary least squares (OLS) function approximation can cause the successive iterations of FVI to fail to converge. We now examine properties of the approximation operator M that control the algorithmic convergence of FVI.

3 Non-Expansions and Operator Norms

We say M is a linear operator if My + My′ = M(y + y′) ∀y, y′ ∈ R^p and M0 = 0. Any linear operator can be represented by a p × p matrix of real numbers.
1 A noisy reward signal does not alter the analyses that follow, nor does dependence of the reward on action.

By definition, an operator M is a γ-contraction in the q-norm if ∃γ ≤ 1 s.t.

||My − My′||_q ≤ γ ||y − y′||_q ∀y, y′ ∈ R^p.   (3)

If the condition holds only for γ = 1, then M is called a non-expansion in the q-norm. It is well-known [3, 5, 21] that the operators T and K are γ-contractions in the ∞-norm. The operator norm of M induced by the q-norm can be defined in several ways, including

||M||_op(q) = sup_{y ∈ R^p, y ≠ 0} ||My||_q / ||y||_q.   (4)

Lemma 1. A linear operator M is a γ-contraction in the q-norm if and only if ||M||_op(q) ≤ γ.

Proof. If M is linear and is a γ-contraction, we have

||M(y − y′)||_q ≤ γ ||y − y′||_q ∀y, y′ ∈ R^p.   (5)

By choosing y′ = 0, it follows that M satisfies

||Mz||_q ≤ γ ||z||_q ∀z ∈ R^p.   (6)

Using the definition of ||·||_op(q), we have that the following conditions are equivalent:

||Mz||_q ≤ γ ||z||_q ∀z ∈ R^p   (7)
||Mz||_q / ||z||_q ≤ γ ∀z ∈ R^p, z ≠ 0   (8)
sup_{z ∈ R^p, z ≠ 0} ||Mz||_q / ||z||_q ≤ γ   (9)
||M||_op(q) ≤ γ.   (10)

Conversely, any M that satisfies (10) satisfies (5), because we can always write y − y′ = z.

Lemma 1 implies that a linear operator M is a non-expansion in the ∞-norm only if

||M||_op(∞) ≤ 1   (11)

which is equivalent [18] to:

max_i Σ_j |m_ij| ≤ 1.   (12)

Corollary 1. The set of all linear operators that satisfy (12) is exactly the set of linear operators that are non-expansions in the ∞-norm.

One subset of operators on R^p that are guaranteed to be non-expansions in the ∞-norm are the averagers2, as defined by Gordon [8].

Corollary 2. The set of all linear operators that satisfy (12) is larger than the set of averagers.

Proof. For M to be an averager, it must satisfy

m_ij ≥ 0 ∀i, j   (13)
max_i Σ_j m_ij ≤ 1.   (14)

These constraints are stricter than (12), because they impose an additional non-negativity constraint on the elements of M.

We have shown that restricting M to be a non-expansion is equivalent to imposing the constraint ||M||_op(∞) ≤ 1.
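Condition (12) is easy to check numerically: a linear operator is an ∞-norm non-expansion iff its maximum absolute row sum is at most 1. A small sketch (ours) that also illustrates Corollary 2, i.e. that non-expansions need not be averagers:

```python
import numpy as np

def op_inf_norm(M):
    """||M||_op(inf) = max_i sum_j |m_ij|, the maximum absolute row sum, condition (12)."""
    return np.abs(M).sum(axis=1).max()

def is_inf_nonexpansion(M):
    return op_inf_norm(M) <= 1.0

# This matrix has negative entries, so it is not an averager
# (averagers require m_ij >= 0 by (13)), yet it satisfies (12).
M = np.array([[0.6, -0.4],
              [-0.3, 0.5]])
assert is_inf_nonexpansion(M) and (M < 0).any()
```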
It is well-known [8] that if such an M is used as a function approximator in fitted value iteration, the algorithm is guaranteed to converge from any starting point, because the composition M ◦ T is a γ-contraction in the ∞-norm.

2 The original definition of an averager was an operator of the form y ↦ Ay + b for a constant vector b. For this work we assume b = 0.

4 Expansion-Constrained Ordinary Least Squares

We now describe our Expansion-Constrained Ordinary Least Squares function approximation method, and show how we enforce that it is a non-expansion in the ∞-norm. Suppose X is an n × p design matrix with n > p and rank(X) = p, and suppose y is a vector of regression targets. The usual OLS estimate β̂ for the model y ≈ Xβ is given by

β̂ = argmin_β ||Xβ − y||_2   (15)
  = (X^T X)^{−1} X^T y.   (16)

The predictions made by the model at the points in X, i.e., the estimates of y, are given by

ŷ = Xβ̂ = X(X^T X)^{−1} X^T y = Hy   (17)

where H is called the “hat” matrix because it “puts the hat” on y. The ith element of ŷ is a linear combination of the elements of y, with weights given by the ith row of H. These weights sum to one, and may be positive or negative. Note that H is a projection of y onto the column space of X, and has 1 as an eigenvalue with multiplicity rank(X) and 0 as an eigenvalue with multiplicity n − rank(X). It is known [18] that for a linear operator M, ||M||_op(2) is given by the largest singular value of M. It follows that ||H||_op(2) ≤ 1 and, by Lemma 1, H is a non-expansion in the 2-norm. However, depending on the data X, we may not have ||H||_op(∞) ≤ 1, in which case H will not be a non-expansion in the ∞-norm. The ∞-norm expansion property of H is problematic when using linear function approximation for fitted value iteration, as described earlier. If one wants to use linear regression safely within a value-iteration algorithm, it is natural to constrain the least-squares problem so that the resulting hat matrix is an ∞-norm non-expansion.
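The contrast between the two operator norms of the hat matrix can be verified numerically. In the sketch below (our own, with a random design matrix), ||H||_op(2) equals 1 because H is a projection, while ||H||_op(∞) is at least 1 and for typical designs strictly exceeds it:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 3
X = rng.standard_normal((n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T      # OLS hat matrix, equation (17)

spectral = np.linalg.svd(H, compute_uv=False).max()  # ||H||_op(2): 1, since H is a projection
inf_norm = np.abs(H).sum(axis=1).max()               # ||H||_op(inf): max absolute row sum
# inf_norm >= 1 always (any induced norm is at least the spectral radius, and H has
# eigenvalue 1); for typical X it is strictly greater, so OLS can expand in the
# inf-norm even though it never expands in the 2-norm.
```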
Consider the following optimization problem: ¯W = argmin_W ||XWXTy − y||2 (18) s.t. ||XWXT||op(∞) ≤ 1, W ∈ Rp×p, W = WT. The symmetric matrix W is of size p × p, so we have a quadratic objective with a convex norm constraint on XWXT, resulting in a hat matrix ¯H = X ¯WXT. If the problem were unconstrained, we would have ¯W = (XTX)−1, ¯H = H and ¯β = ¯WXTy = ˆβ, the original OLS parameter estimate. The matrix ¯H is a non-expansion by construction. However, unlike the OLS hat matrix H = X(XTX)−1XT, the matrix ¯H depends on the targets y. That is, given a different set of regression targets, we would compute a different ¯H. We should therefore more properly write this non-linear operator as ¯Hy. Because of the non-linearity, the operator ¯Hy resulting from the minimization in (18) can in fact be an expansion in the ∞-norm despite the constraints. We now show how we might remove the dependence on y from (18) so that the resulting operator is a linear non-expansion in the ∞-norm. Consider the following optimization problem: ˇW = argmin_W max_z ||XWXTz − z||2 (19) s.t. ||XWXT||op(∞) ≤ 1, ||z||2 = c, W ∈ Rp×p, W = WT, z ∈ Rn. Intuitively, the resulting ˇW is a linear operator of the form X ˇWXT that minimizes the squared error between its approximation ˇz and the worst-case (bounded) targets z.3 The resulting ˇW does not depend on the regression targets y, so the corresponding ˇH is a linear operator. The constraint ||XWXT||op(∞) ≤ 1 is effectively a regularizer on the coefficients of the hat matrix which will tend to shrink the fitted values X ˇWXTy toward zero. Minimization (19) gives us a linear operator, but, as we now show, ˇW is not unique—there are in fact an uncountable number of ˇW that minimize (19). 3 The c is a mathematical convenience; if ||z||2 were unbounded then the max would be unbounded and the problem ill-posed. Theorem 1. Suppose W′ is feasible for (19) and is positive semi-definite.
Then W′ satisfies max_{z, ||z||2=c} ||XW′XTz − z||2 = min_W max_{z, ||z||2=c} ||XWXTz − z||2 (20) for all c. Proof. We begin by re-formulating (19), which contains a non-concave maximization, as a convex minimization problem with convex constraints. Lemma 2. Let X, W, c, and H be defined as above. Then max_{z, ||z||2=c} ||XWXTz − z||2 = c||XWXT − I||op(2). Proof. max_{z∈Rn, ||z||2=c} ||XWXTz − Iz||2 = max_{z∈Rn, ||z||2≤1} ||(XWXT − I)cz||2 = c max_{z∈Rn, z≠0} ||(XWXT − I)z||2 / ||z||2 = c||XWXT − I||op(2). Using Lemma 2, we can rewrite (19) as ˇW = argmin_W ||XWXT − I||op(2) (21) s.t. ||XWXT||op(∞) ≤ 1, W ∈ Rp×p, W = WT, which is independent of z and independent of the positive constant c. This objective is convex in W, as are the constraints. We now prove a lower bound on (21) and prove that W′ meets the lower bound. Lemma 3. For all n × p design matrices X s.t. n > p and all symmetric W, ||XWXT − I||op(2) ≥ 1. Proof. Recall that ||XWXT − I||op(2) is given by the largest singular value of XWXT − I. By symmetry of W, write XWXT = UDUT where D is a diagonal matrix whose diagonal entries dii are the eigenvalues of XWXT and U is an orthonormal matrix. We therefore have XWXT − I = UDUT − I = UDUT − UIUT = U(D − I)UT. (22) Therefore ||XWXT − I||op(2) = max_i |dii − 1|. Furthermore we know that rank(XWXT) ≤ p and that therefore at least n − p of the dii are zero. Therefore max_i |dii − 1| ≥ 1, implying ||XWXT − I||op(2) ≥ 1. Lemma 4. For any symmetric positive semi-definite matrix W′ that satisfies the constraints in (19) and any n × p design matrix X s.t. n > p, we have ||XW′XT − I||op(2) = 1. Proof. Let H′ = XW′XT and write H′ − I = U′(D′ − I)U′T where U′ is orthogonal and D′ is a diagonal matrix whose diagonal entries d′ii are the eigenvalues of H′. We know H′ is positive semi-definite because W′ is assumed to be positive semi-definite; therefore d′ii ≥ 0. From the constraints in (19), we have ||H′||op(∞) ≤ 1, and by symmetry of H′ we have ||H′||op(∞) = ||H′||op(1).
It is known [18] that for any M, ||M||op(2) ≤ sqrt(||M||op(∞) ||M||op(1)), which gives ||H′||op(2) ≤ 1 and therefore |d′ii| ≤ 1 for all i ∈ 1..n. Combining these results gives 0 ≤ d′ii ≤ 1 ∀i. Recall that ||XW′XT − I||op(2) = max_i |d′ii − 1|. Because rank(XW′XT) ≤ p, we know that there exists an i such that d′ii = 0, and because we have shown that 0 ≤ d′ii ≤ 1, it follows that max_i |d′ii − 1| = 1, and therefore ||XW′XT − I||op(2) = 1. Lemma 4 shows that the objective value at any feasible, symmetric positive semi-definite W′ matches the lower bound proved in Lemma 3, and that therefore any such W′ satisfies the theorem statement. Theorem 1 shows that the optimum of (19) is not unique. We therefore solve the following optimization problem, which has a unique solution, shows good empirical performance, and yet still provides the minimax property guaranteed by Theorem 1 when the optimal matrix is positive semi-definite.4 ˜W = argmin_W max_z ||XWXTz − Hz||2 (23) s.t. ||XWXT||op(∞) ≤ 1, ||z||2 = c, W ∈ Rp×p, W = WT, z ∈ Rn. Intuitively, this objective searches for a ˜W such that linear approximation using X ˜WXT is as close as possible to the OLS approximation, for the worst-case regression targets, according to the 2-norm. 5 Computational Formulation By an argument identical to that of Lemma 2, we can re-formulate (23) as a convex optimization problem with convex constraints, giving ˜W = argmin_W ||XWXT − H||op(2) (24) s.t. ||XWXT||op(∞) ≤ 1, W ∈ Rp×p, W = WT. Though convex, objective (24) has no simple closed form, and we found that standard solvers have difficulty for larger problems [9]. However, ||XWXT − H||op(2) is upper bounded by the Frobenius norm ||M||F = (Σ_{i,j} m²ij)^{1/2}. Therefore, we minimize the quadratic objective ||XWXT − H||F subject to the same convex constraints, which is easier to solve than (24). Note that Theorem 1 applies to the solution of this modified objective when the resulting ˜W is positive semi-definite.
Expanding ||XWXT − H||²F gives ||XWXT − H||²F = Tr(XWXTXWXT − 2XWXTH + HTH) = Tr(XWXTXWXT) − 2 Tr(XWXT) + rank(X), since HX = X and Tr(HTH) = Tr(H) = rank(X). For a p × n matrix M, let M(:) be the length p · n vector consisting of the stacked columns of M. After some algebraic manipulations, we can re-write the objective (up to the additive constant) as W(:)TΞW(:) − 2ζTW(:), where Ξ = Σ_{i=1}^n Σ_{j=1}^n ξ(ij)ξ(ij)T with ξ(ij) = (XTi,:Xj,:)(:), and ζ = (XTX)(:). This objective can then be fed into any standard QP solver. The constraint ||XWXT||op(∞) ≤ 1 can be expressed as the set of constraints Σ_{j=1}^n |Xi,:WXTj,:| ≤ 1, i = 1..n, or as a set of n·2^n linear constraints Σ_{j=1}^n kj Xi,:WXTj,: ≤ 1, i = 1..n, k ∈ {+1, −1}^n. Each of these linear constraints involves a vector k with entries {+1, −1} multiplied by a row of XWXT. If the entries in k match the signs of the row of XWXT, then their inner product is equal to the sum of the absolute values of the row, which must be constrained. If they do not match, the result is smaller. By constraining all n·2^n patterns of signs, we constrain the sum of the absolute values of the entries in each row. Explicitly enforcing all of these constraints is intractable, so we employ a constraint-generation approach [20]. We solve a sequence of quadratic programs, adding the most violated linear constraint after each step. The most violated constraint is given by the row i∗ = argmax_{i∈1..n} Σ_{j=1}^n |Xi,:WXTj,:| and the sign vector k∗ = sign(Xi∗,:WXT). The resulting constraint on W(:) can be written as k∗L W(:) ≤ 1 where Lj,: = ξ(i∗j)T, j = 1..n. This formulation allows us to use a general QP solver to compute ˜W. Note that batch fitted value iteration performs many regressions where the targets y change from iteration to iteration, but the design matrix X is fixed. Therefore we only need to solve the ECOLS optimization problem once for any given application of FVI, meaning the additional computational cost of ECOLS over OLS is not a major drawback.
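The constraint-generation step (find the row of XWXT with the largest absolute sum, then take its sign pattern) can be sketched as follows; the surrounding QP solve is omitted, and W here is an arbitrary candidate iterate rather than an actual ECOLS solution:

```python
import numpy as np

def most_violated_constraint(X, W):
    """Return the row index i* maximizing the absolute row sum of X W X^T,
    together with the sign pattern k* of that row (the constraint-generation step)."""
    Hhat = X @ W @ X.T
    row_sums = np.abs(Hhat).sum(axis=1)
    i_star = int(np.argmax(row_sums))
    k_star = np.sign(Hhat[i_star])
    k_star[k_star == 0] = 1.0      # arbitrary sign choice for exact zeros
    return i_star, k_star

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 2))
W = np.eye(2)                      # illustrative candidate iterate
i, k = most_violated_constraint(X, W)
H = X @ W @ X.T
# With k matching the row's signs, the linear constraint value equals the
# absolute row sum, which is exactly the quantity the inf-norm bound limits:
assert np.isclose(k @ H[i], np.abs(H[i]).sum())
```

Each call identifies one linear cut to add to the quadratic program; iterating until no row sum exceeds 1 enforces the full ∥XWXT∥op(∞) ≤ 1 constraint without enumerating all n·2^n sign patterns.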
6 Experimental results In order to illustrate the behavior of ECOLS in different settings, we present four different empirical evaluations: one regression problem and three RL problems. In each of the RL settings, ECOLS with FVI converges, and the learned value function defines a good greedy policy. 4 One could in principle include a semi-definite constraint in the problem formulation, at an increased computational cost. (The problem is not a standard semi-definite program because the objective is not linear in the elements of W.) We have not imposed this constraint in our experiments and we have always found that the resulting ˜W is positive semi-definite. We conjecture that ˜W is always positive semi-definite.

[Figure 1 plot: the data together with the fitted regression curves for OLS, ECOLS with Fro. norm, ECOLS with op(2)-norm, and ECOLS Avg. with Fro. norm, over x ∈ [−2, 4].]

Function   β∗      ˆβ      ˜βF     ˜βop(2)   ˜βavg
1           1      0.95    0.16    0.77     -2.21
x          -3     -2.92   -1.80   -2.02     -0.97
x2         -3     -3.00   -1.71   -1.88     -1.09
x3          1      1.00    0.58    0.64      0.37
rms         6.69   6.68   13.60   13.44     16.52

Figure 1: Example of OLS, ECOLS with ||XWXT − H||F , and ECOLS with ||XWXT − H||op(2).

Regression The first is a simple regression setting, where we examine the behavior of ECOLS compared to OLS. To give a simple, pictorial rendition of the difference between OLS, ECOLS using the Frobenius norm, ECOLS using the op(2)-norm, and an averager, we generated a dataset of n = 25 tuples (x, y) as follows: x ∼ U(−2, 4), y = 1 − 3x − 3x² + x³ + ε, ε ∼ N(0, 4). The design matrix X had rows Xi,: = [1, xi, x²i, x³i]. The ECOLS regression optimizing the Frobenius norm using CPLEX [12] took 0.36 seconds, whereas optimizing the op(2)-norm using the cvx package [10] took 8.97 seconds on a 2 GHz Intel Core 2 Duo. Figure 1 shows the regression curves produced by OLS and the two versions of ECOLS, along with the learned coefficients and root mean squared error of the predictions on the data.
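The synthetic dataset above is easy to regenerate. A sketch of the data generation and the plain OLS fit (we read ε ∼ N(0, 4) as variance 4, i.e. standard deviation 2, and use a larger n than the paper's 25 so the OLS estimate lands close to β∗; both choices are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000                                   # larger than the paper's n = 25
x = rng.uniform(-2, 4, size=n)
y = 1 - 3*x - 3*x**2 + x**3 + rng.normal(0.0, 2.0, size=n)  # noise std 2 (variance 4)

# Cubic polynomial design matrix, rows [1, x, x^2, x^3] as in the paper.
X = np.column_stack([np.ones(n), x, x**2, x**3])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # close to the true coefficients [1, -3, -3, 1]
```

With the small n = 25 used in the paper, the OLS estimate naturally sits farther from β∗, as the table of coefficients shows.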
Neither of the ECOLS curves fit the data as well as OLS, as one would expect. Generally, their curves are smoother than the OLS fit, and predictions are on the whole shrunk toward zero. We also ran ECOLS with an additional positivity constraint on X ˜WXT, effectively forcing the result to be an averager as described in Sect. 3. The result is smoother than either of the ECOLS regressors, with a higher RMS prediction error. Note the small difference between ECOLS using the Frobenius norm (dark black line) and using the op(2)-norm (dashed line). This is encouraging, as we have found that on larger datasets optimizing the op(2)-norm is much slower and less reliable. Two-state example Our second example is a classic on-policy fitted value iteration problem that is known to diverge using OLS. It is perhaps the simplest example of FVI diverging, due to Tsitsiklis and Van Roy [22]. This is a deterministic on-policy example, or equivalently for our purposes, a problem with |A| = 1. There are three states {1, 2, 3} with features X = (1, 2, 0)T, one action with P1,2 = 1, P2,2 = 1 − ε, P2,3 = ε, P3,3 = 1 and Pi,j = 0 elsewhere. The reward is R = [0, 0, 0]T and the value function is v∗ = [0, 0, 0]T. For γ > 5/(6 − 4ε), FVI with OLS diverges for any starting point other than v∗. FVI with ECOLS always converges to v∗. If we change the reward to R = [1, 1, 0]T and set γ = 0.95, ε = 0.1, we have v∗ = [7.55, 6.90, 0]. FVI with OLS of course still diverges, whereas FVI with ECOLS converges to ˜v = [4.41, 8.82, 0]. In this case, the approximation space is poor, and no linear method based on the features in X can hope to perform well. Nonetheless, ECOLS converges to a ˜v of at least the appropriate magnitude. Grid world Our third example is an off-policy value iteration problem which is known to diverge with OLS, due to Boyan and Moore [4].
In this example, there are effectively 441 discrete states, laid out in a 21 × 21 grid, and assigned an (x, y) feature in [0, 1]² according to their position in the grid. There are four actions which deterministically move the agent up, down, left, or right by a distance of 0.05 in the feature space, and the reward is -0.5 everywhere except the corner state (1, 1), where it is 0. The discount γ is set to 1.0 so the optimal value function is v∗(x, y) = −20 + 10x + 10y. Boyan and Moore define “lucky” convergence of FVI as the case where the policy induced by the learned value function is optimal, even if the learned value function itself does not accurately represent v∗. They found that with OLS and a design matrix Xi,: = [1, xi, yi], they achieve lucky convergence. We replicated their result using FVI on 255 randomly sampled states plus the goal state, and found that OLS converged5 to ˆβ = [−515.89, 9.99, 9.99] after 10455 iterations. This value function induces a policy that attempts to increase x and y, which is optimal. ECOLS on the other hand converged to ˜β = [−1.09, 0.030, 0.07] after 31 iterations, which also induces an optimal policy. In terms of learning correct value function coefficients, the OLS estimate gets 2 of the 3 almost exactly correct. In terms of estimating the value of states, OLS achieves an RMSE over all states of 10413.73, whereas ECOLS achieves an RMSE of 208.41. In the same work, Boyan and Moore apply OLS with quadratic features Xi,: = [1, x, y, x², y², xy], and find that FVI diverges. We found that ECOLS converges, with coefficients [−0.80, −2.67, −2.78, 2.73, 2.91, 0.06]. This is not “lucky”, as the induced policy is only optimal for states in the upper-right half of the state space. Left-or-right world Our fourth and last example is an off-policy value iteration problem with stochastic dynamics where OLS causes non-divergent but non-convergent behavior.
To investigate properties of their tree-based Fitted Q-Iteration (FQI) methods, Ernst, Geurts, and Wehenkel define the “left-or-right” problem [5], an MDP with S = [0, 10] and stochastic dynamics given by st+1 = st + a + ε, where ε ∼ N(0, 1). Rewards are 0 for s ∈ [0, 10], 100 for s > 10, and 50 for s < 0. All states outside [0, 10] are terminal. The discount factor γ is 0.75. In their formulation they use A ∈ {−2, 2}, which gives an optimal policy that is approximately π∗(s) = {2 if s > 2.5, −2 otherwise}. We examine a simpler scenario by choosing A ∈ {−4, 4}, so that π∗(s) = 4, i.e., it is optimal to always go right. Based on prior data [5], the optimal Q functions for this type of problem appear to be smooth and non-linear, possibly with inflection points. Thus we use polynomial features6 Xi,: = [1, x, x², x³] where x = s/5 − 1. As is common in FQI, we fit separate regressions to learn Q(·, 4) and Q(·, −4) at each iteration. We used 300 episodes worth of data generated by the uniform random policy for learning. In this setting, OLS does not diverge, but neither does it converge: the parameter vector of each Q function moves chaotically within some bounded region of R4. The optimal policy induced by the Q-functions is determined solely by the zeroes of Q(·, 4) − Q(·, −4), and in our experiments this function had at most one zero. Over 500 iterations of FQI with OLS, the cutpoint ranged from −7.77 to 14.04, resulting in policies ranging from “always go right” to “always go left.” FQI with ECOLS converged to a near-optimal policy ˜π(s) = {4 if s > 1.81, −4 otherwise}. We determined by Monte Carlo rollouts that, averaged over a uniform initial state, the value of ˜π is 59.59, whereas the value of the optimal policy π∗ is 60.70. While the performance of the learned policy is very good, the estimate of the average value using the learned Qs, 28.75, is lower due to the shrinkage induced by ECOLS in the predicted state-action values.
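The divergence threshold γ > 5/(6 − 4ε) quoted in the two-state example above can be verified directly: with features X = (1, 2, 0)T and R = 0, the OLS fitted-value-iteration update reduces to multiplying the single coefficient β by γ(6 − 4ε)/5 at every step. A sketch of that recursion:

```python
import numpy as np

def ols_fvi_beta(gamma, eps=0.1, iters=200):
    """|beta| after running OLS fitted value iteration on the three-state chain
    with features X = (1, 2, 0)^T and zero reward, starting from beta = 1."""
    X = np.array([[1.0], [2.0], [0.0]])
    P = np.array([[0.0, 1.0, 0.0],
                  [0.0, 1.0 - eps, eps],
                  [0.0, 0.0, 1.0]])
    proj = np.linalg.inv(X.T @ X) @ X.T      # OLS projection onto the feature space
    beta = np.array([1.0])
    for _ in range(iters):
        beta = proj @ (gamma * (P @ (X @ beta)))   # fitted value iteration step
    return abs(beta[0])

# Each step multiplies beta by gamma*(6 - 4*eps)/5, so with eps = 0.1 the
# threshold is 5/5.6, about 0.893:
print(ols_fvi_beta(0.95))   # above the threshold: |beta| blows up
print(ols_fvi_beta(0.85))   # below the threshold: |beta| decays toward v* = 0
```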
7 Concluding Remarks Divergence of FVI with OLS has been a long-standing problem in the RL literature. In this paper, we introduced ECOLS, which provides guaranteed convergence of FVI. We proved theoretical properties showing that, in the minimax sense, ECOLS is optimal among the linear approximations that guarantee such convergence. Our test problems confirm the convergence properties of ECOLS and also illustrate some of its properties. In particular, the empirical results illustrate the regularization effect of the op(∞)-norm constraint, which tends to “shrink” predicted values toward zero. This is a further contribution of our paper: our theoretical and empirical results indicate that this shrinkage is a necessary cost of guaranteeing convergence of FVI using linear models with a fixed set of features. This has important implications for the deployment of FVI with ECOLS. In some applications where accurate estimates of policy performance are required, this shrinkage may be problematic; addressing this problem is an interesting avenue for future research. In other applications, where the goal is to identify a good, intuitively represented value function and policy, ECOLS is a useful new tool. Acknowledgements We acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the National Institutes of Health (NIH) grants R01 MH080015 and P50 DA10075. 5 Convergence criterion was ||βiter+1 − βiter|| ≤ 10−5. All starts were from β = 0. 6 The re-scaling of s is for numerical stability. References [1] A. Antos, R. Munos, and Cs. Szepesvári. Fitted Q-iteration in continuous action-space MDPs. In Advances in Neural Information Processing Systems 20, pages 9–16. MIT Press, 2008. [2] L. Baird. Residual Algorithms: Reinforcement Learning with Function Approximation. In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning, pages 30–37. Morgan Kaufmann, 1995. [3] D. Bertsekas.
Dynamic Programming and Optimal Control. Athena Scientific, 2007. [4] J. Boyan and A. W. Moore. Generalization in reinforcement learning: Safely approximating the value function. In Advances in Neural Information Processing Systems, pages 369–376, 1995. [5] D. Ernst, P. Geurts, and L. Wehenkel. Tree-Based Batch Mode Reinforcement Learning. Journal of Machine Learning Research, 6:503–556, 2005. [6] A. M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In American Control Conference, pages 725–730, 2009. [7] R. Fonteneau. Contributions to Batch Mode Reinforcement Learning. PhD thesis, University of Liege, 2011. [8] G. J. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999. [9] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, Apr. 2011. [10] M. C. Grant. Disciplined convex programming and the cvx modeling framework. Information Systems Journal, 2006. [11] A. Guez, R. D. Vincent, M. Avoli, and J. Pineau. Adaptive treatment of epilepsy via batch-mode reinforcement learning. In D. Fox and C. P. Gomes, editors, Innovative Applications of Artificial Intelligence, pages 1671–1678, 2008. [12] IBM. IBM ILOG CPLEX Optimization Studio V12.2, 2011. [13] S. Kalyanakrishnan and P. Stone. Batch reinforcement learning in a complex domain. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2007. [14] R. Munos and Cs. Szepesvári. Finite time bounds for fitted value iteration. Journal of Machine Learning Research, 9:815–857, 2008. [15] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2):161–178, 2002. [16] M. Riedmiller. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In ECML 2005, pages 317–328. Springer, 2005. [17] J.
Rust. Using randomization to break the curse of dimensionality. Econometrica, 65(3):487–516, 1997. [18] G. A. F. Seber. A Matrix Handbook for Statisticians. Wiley, 2007. [19] S. M. Shortreed, E. Laber, D. J. Lizotte, T. S. Stroup, J. Pineau, and S. A. Murphy. Informing sequential clinical decision-making through reinforcement learning: an empirical study. Machine Learning, 2010. [20] S. Siddiqi, B. Boots, and G. Gordon. A Constraint Generation Approach to Learning Stable Linear Dynamical Systems. In Advances in Neural Information Processing Systems 20, pages 1329–1336. MIT Press, 2008. [21] Cs. Szepesvári. Algorithms for Reinforcement Learning. Morgan and Claypool, 2010. [22] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
2011
Learning in Hilbert vs. Banach Spaces: A Measure Embedding Viewpoint Bharath K. Sriperumbudur Gatsby Unit University College London bharath@gatsby.ucl.ac.uk Kenji Fukumizu The Institute of Statistical Mathematics, Tokyo fukumizu@ism.ac.jp Gert R. G. Lanckriet Dept. of ECE UC San Diego gert@ece.ucsd.edu Abstract The goal of this paper is to investigate the advantages and disadvantages of learning in Banach spaces over Hilbert spaces. While many works have been carried out in generalizing Hilbert methods to Banach spaces, in this paper, we consider the simple problem of learning a Parzen window classifier in a reproducing kernel Banach space (RKBS)—which is closely related to the notion of embedding probability measures into an RKBS—in order to carefully understand its pros and cons over the Hilbert space classifier. We show that while this generalization yields richer distance measures on probabilities compared to its Hilbert space counterpart, it suffers from a serious computational drawback limiting its practical applicability, which therefore demonstrates the need for developing efficient learning algorithms in Banach spaces. 1 Introduction Kernel methods have been popular in machine learning and pattern analysis for their superior performance on a wide spectrum of learning tasks. They are broadly established as an easy way to construct nonlinear algorithms from linear ones, by embedding data points into reproducing kernel Hilbert spaces (RKHSs) [1, 14, 15]. Over the last few years, generalization of these techniques to Banach spaces has gained interest. This is because any two Hilbert spaces over a common scalar field with the same dimension are isometrically isomorphic, while Banach spaces provide more variety in geometric structures and norms that are potentially useful for learning and approximation. To sample the literature, classification in Banach spaces, and more generally in metric spaces, was studied in [3, 22, 11, 5].
Minimizing a loss function subject to a regularization condition on a norm in a Banach space was studied by [3, 13, 24, 21], and online learning in Banach spaces was considered in [17]. While all these works have focused on theoretical generalizations of Hilbert space methods to Banach spaces, the practical viability and inherent computational issues associated with the Banach space methods have so far not been highlighted. The goal of this paper is to study the advantages/disadvantages of learning in Banach spaces in comparison to Hilbert space methods, in particular, from the point of view of embedding probability measures into these spaces. The concept of embedding probability measures into an RKHS [4, 6, 9, 16] provides a powerful and straightforward method to deal with high-order statistics of random variables. An immediate application of this notion is to problems of comparing distributions based on finite samples: examples include tests of homogeneity [9], independence [10], and conditional independence [7]. Formally, suppose we are given the set P(X) of all Borel probability measures defined on the topological space X, and the RKHS (H, k) of functions on X with k as its reproducing kernel (r.k.). If k is measurable and bounded, then we can embed P in H as P ↦ ∫X k(·, x) dP(x). (1) Given the embedding in (1), the RKHS distance between the embeddings of P and Q defines a pseudo-metric between P and Q as γk(P, Q) := ∥∫X k(·, x) dP(x) − ∫X k(·, x) dQ(x)∥H. (2) It is clear that when the embedding in (1) is injective, then P and Q can be distinguished based on their embeddings ∫X k(·, x) dP(x) and ∫X k(·, x) dQ(x). [18] related RKHS embeddings to the problem of binary classification by showing that γk(P, Q) is the negative of the optimal risk associated with the Parzen window classifier in H.
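For a concrete kernel, γk admits a simple empirical estimator via the expansion ∥µP − µQ∥²H = E k(x, x′) + E k(y, y′) − 2E k(x, y). A sketch with a Gaussian kernel (the kernel choice, bandwidth, and data are illustrative, and this is the biased, all-pairs form of the estimator):

```python
import numpy as np

def gamma_k_biased(Xs, Ys, sigma=1.0):
    """Biased empirical estimate of gamma_k with a Gaussian kernel,
    via ||mu_P - mu_Q||_H^2 = E k(x,x') + E k(y,y') - 2 E k(x,y)."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    m2 = gram(Xs, Xs).mean() + gram(Ys, Ys).mean() - 2 * gram(Xs, Ys).mean()
    return np.sqrt(max(m2, 0.0))   # clip tiny negatives from finite samples

rng = np.random.default_rng(4)
same = gamma_k_biased(rng.normal(0, 1, (200, 1)), rng.normal(0, 1, (200, 1)))
diff = gamma_k_biased(rng.normal(0, 1, (200, 1)), rng.normal(2, 1, (200, 1)))
print(same, diff)   # the estimate is larger when P and Q differ
```

This is exactly the kind of two-sample statistic the homogeneity tests cited above are built on; the RKBS generalization developed below loses this closed-form expansion in general.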
Extending this classifier to a Banach space and studying the highlights/issues associated with this generalization will throw light on the corresponding issues for more complex Banach space learning algorithms. With this motivation, in this paper, we consider the generalization of the notion of RKHS embedding of probability measures to Banach spaces—in particular reproducing kernel Banach spaces (RKBSs) [24]—and then compare the properties of the RKBS embedding to its RKHS counterpart. To derive RKHS based learning algorithms, it is essential to appeal to the Riesz representation theorem (as an RKHS is defined by the continuity of evaluation functionals), which establishes the existence of a reproducing kernel. This theorem hinges on the fact that a notion of inner product can be defined on Hilbert spaces. In this paper, as in [24], we deal with RKBSs that are uniformly Fréchet differentiable and uniformly convex (called s.i.p. RKBSs), as many Hilbert space arguments—most importantly the Riesz representation theorem—can be carried over to such spaces through the notion of semi-inner-product (s.i.p.) [12], which is a more general structure than an inner product. Based on Zhang et al. [24], who recently developed RKBS counterparts of RKHS based algorithms like regularization networks, support vector machines, kernel principal component analysis, etc., we provide a review of s.i.p. RKBSs in Section 3. We present our main contributions in Sections 4 and 5. In Section 4, first, we derive an RKBS embedding of P into B′ as P ↦ ∫X K(·, x) dP(x), (3) where B is an s.i.p. RKBS with K as its reproducing kernel (r.k.) and B′ is the topological dual of B. Note that (3) is similar to (1), but more general than (1), as K in (3) need not be positive definite (pd), in fact, not even symmetric (see Section 3; also see Examples 2 and 3).
Based on (3), we define γK(P, Q) := ∥∫X K(·, x) dP(x) − ∫X K(·, x) dQ(x)∥B′, a pseudo-metric on P(X), which we show to be the negative of the optimal risk associated with the Parzen window classifier in B′. Second, we characterize the injectivity of (3) in Section 4.1, wherein we show that the characterizations obtained for the injectivity of (3) are similar to those obtained for (1) and coincide with the latter when B is an RKHS. Third, in Section 4.2, we consider the empirical estimation of γK(P, Q) based on finite random samples drawn i.i.d. from P and Q and study its consistency and rate of convergence. This is useful in applications like two-sample tests (and also in binary classification, as it relates to the consistency of the Parzen window classifier) where different P and Q are to be distinguished based on the finite samples drawn from them, and it is important that the estimator is consistent for the test to be meaningful. We show that the consistency and the rate of convergence of the estimator depend on the Rademacher type of B′. This result coincides with the one obtained for γk when B is an RKHS. The above mentioned results, while similar to results obtained for RKHS embeddings, are significantly more general, as they apply to RKBSs, which subsume RKHSs. We can therefore expect to obtain “richer” metrics γK than when being restricted to RKHSs (see Examples 1–3). On the other hand, one disadvantage of the RKBS framework is that γK(P, Q) cannot be computed in closed form, unlike γk (see Section 4.3). Though this could seriously limit the practical impact of the RKBS embeddings, in Section 5, we show that closed-form expressions for γK and its empirical estimator can be obtained for some non-trivial Banach spaces (see Examples 1–3).
However, the critical drawback of the RKBS framework is that the computation of γK and its empirical estimator is significantly more involved and expensive than in the RKHS framework, which means a simple kernel algorithm like a Parzen window classifier, when generalized to Banach spaces, suffers from a serious computational drawback, thereby limiting its practical impact. Given the advantages of learning in Banach space over Hilbert space, this work therefore demonstrates the need for the development of efficient algorithms in Banach spaces in order to make the problem of learning in Banach spaces worthwhile compared to its Hilbert space counterpart. The proofs of the results in Sections 4 and 5 are provided in the supplementary material. 2 Notation We introduce some notation that is used throughout the paper. For a topological space X, C(X) (resp. Cb(X)) denotes the space of all continuous (resp. bounded continuous) functions on X. For a locally compact Hausdorff space X, f ∈ C(X) is said to vanish at infinity if for every ǫ > 0 the set {x : |f(x)| ≥ ǫ} is compact. The class of all continuous f on X which vanish at infinity is denoted as C0(X). For a Borel measure µ on X, Lp(X, µ) denotes the Banach space of p-power (p ≥ 1) µ-integrable functions. For a function f defined on Rd, ˆf and f∨ denote the Fourier and inverse Fourier transforms of f. Since ˆf and f∨ on Rd can be defined in the L1, L2 or more generally in distributional senses, they should be treated in the appropriate sense depending on the context. In the L1 sense, the Fourier and inverse Fourier transforms of f ∈ L1(Rd) are defined as ˆf(y) = (2π)^{−d/2} ∫Rd f(x) e^{−i⟨y,x⟩} dx and f∨(y) = (2π)^{−d/2} ∫Rd f(x) e^{i⟨y,x⟩} dx, where i denotes the imaginary unit √−1. φP := ∫Rd e^{i⟨·,x⟩} dP(x) denotes the characteristic function of P. 3 Preliminaries: Reproducing Kernel Banach Spaces In this section, we briefly review the theory of RKBSs, which was recently studied by [24] in the context of learning in Banach spaces.
Let X be a prescribed input space. Definition 1 (Reproducing kernel Banach space). An RKBS B on X is a reflexive Banach space of functions on X such that its topological dual B′ is isometric to a Banach space of functions on X and the point evaluations are continuous linear functionals on both B and B′. Note that if B is a Hilbert space, then the above definition of RKBS coincides with that of an RKHS. Let (·, ·)B be a bilinear form on B × B′ wherein (f, g∗)B := g∗(f), f ∈ B, g∗ ∈ B′. Theorem 2 in [24] shows that if B is an RKBS on X, then there exists a unique function K : X × X → C, called the reproducing kernel (r.k.) of B, such that the following hold: (a1) K(x, ·) ∈ B, K(·, x) ∈ B′, x ∈ X; (a2) f(x) = (f, K(·, x))B, f∗(x) = (K(x, ·), f∗)B, f ∈ B, f∗ ∈ B′, x ∈ X. Note that K satisfies K(x, y) = (K(x, ·), K(·, y))B and therefore K(·, x) and K(x, ·) are reproducing kernels for B and B′ respectively. When B is an RKHS, K is indeed the r.k. in the usual sense. Though an RKBS has exactly one r.k., different RKBSs may have the same r.k. (see Example 1), unlike RKHSs, where no two RKHSs can have the same r.k. (by the Moore-Aronszajn theorem [4]). Due to the lack of an inner product in B (unlike in an RKHS), it can be shown that the r.k. of a general RKBS can be an arbitrary function on X × X for a finite set X [24]. In order to have a substitute for inner products in the Banach space setting, [24] considered RKBSs B that are uniformly Fréchet differentiable and uniformly convex (referred to as s.i.p. RKBSs), as this allows Hilbert space arguments to be carried over to B—most importantly, an analogue to the Riesz representation theorem holds (see Theorem 3)—through the notion of semi-inner-product (s.i.p.) introduced by [12]. In the following, we first present results related to general s.i.p. spaces and then consider s.i.p. RKBSs. Definition 2 (S.i.p. space).
A Banach space B is said to be uniformly Fréchet differentiable if for all f, g ∈ B, lim_{t∈R, t→0} (∥f + tg∥B − ∥f∥B)/t exists and the limit is approached uniformly for f, g in the unit sphere of B. B is said to be uniformly convex if for all ǫ > 0, there exists a δ > 0 such that ∥f + g∥B ≤ 2 − δ for all f, g ∈ B with ∥f∥B = ∥g∥B = 1 and ∥f − g∥B ≥ ǫ. B is called an s.i.p. space if it is both uniformly Fréchet differentiable and uniformly convex. Note that uniform Fréchet differentiability and uniform convexity are properties of the norm associated with B. [8, Theorem 3] has shown that if B is an s.i.p. space, then there exists a unique function [·, ·]B : B × B → C, called the semi-inner-product, such that for all f, g, h ∈ B and λ ∈ C: (a3) [f + g, h]B = [f, h]B + [g, h]B; (a4) [λf, g]B = λ[f, g]B, [f, λg]B = λ̄[f, g]B; (a5) [f, f]B =: ∥f∥²B > 0 for f ≠ 0; (a6) (Cauchy-Schwartz) |[f, g]B|² ≤ ∥f∥²B ∥g∥²B; and lim_{t∈R, t→0} (∥f + tg∥B − ∥f∥B)/t = Re([g, f]B)/∥f∥B, f, g ∈ B, f ≠ 0, where Re(α) and ᾱ represent the real part and complex conjugate of a complex number α. Note that an s.i.p. in general does not satisfy conjugate symmetry ([f, g]B equal to the conjugate of [g, f]B for all f, g ∈ B) and therefore is not linear in the second argument, unless B is a Hilbert space, in which case the s.i.p. coincides with the inner product. Suppose B is an s.i.p. space. Then for each h ∈ B, f ↦ [f, h]B defines a continuous linear functional on B, which can be identified with a unique element h∗ ∈ B′, called the dual function of h. By this definition of h∗, we have h∗(f) = (f, h∗)B = [f, h]B, f, h ∈ B. Using the structure of the s.i.p., [8, Theorem 6] provided the following analogue in B to the Riesz representation theorem of Hilbert spaces. Theorem 3 ([8]). Suppose B is an s.i.p. space. Then (a7) (Riesz representation theorem) For each g ∈ B′, there exists a unique h ∈ B such that g = h∗, i.e., g(f) = [f, h]B, f ∈ B, and ∥g∥B′ = ∥h∥B. (a8) B′ is an s.i.p. space with respect to the s.i.p.
defined by [h∗, f∗]B′ := [f, h]B for f, h ∈ B, and ∥h∗∥B′ := ([h∗, h∗]B′)^{1/2}. For more details on s.i.p. spaces, we refer the reader to [8]. A concrete example of an s.i.p. space, which will prove to be useful in Section 5, is as follows. Let (X, A, µ) be a measure space and B := Lp(X, µ) for some p ∈ (1, +∞). It is an s.i.p. space with dual B′ := Lq(X, µ), where q = p/(p − 1). For each f ∈ B, its dual element in B′ is f∗ = f|f|^{p−2} / ∥f∥^{p−2}_{Lp(X,µ)}. Consequently, the semi-inner-product on B is

[f, g]B = g∗(f) = (∫X f g |g|^{p−2} dµ) / ∥g∥^{p−2}_{Lp(X,µ)}. (4)

Having introduced s.i.p. spaces, we now discuss the s.i.p. RKBS studied by [24]. Using the Riesz representation for s.i.p. spaces (see (a7)), Theorem 9 in [24] shows that if B is an s.i.p. RKBS, then there exists a unique r.k. K : X × X → C and an s.i.p. kernel G : X × X → C such that: (a9) G(x, ·) ∈ B for all x ∈ X, and K(·, x) = (G(x, ·))∗ for x ∈ X, (a10) f(x) = [f, G(x, ·)]B and f∗(x) = [K(x, ·), f]B for all f ∈ B, x ∈ X. It is clear that G(x, y) = [G(x, ·), G(y, ·)]B for x, y ∈ X. Since the s.i.p. in general does not satisfy conjugate symmetry, G need not be Hermitian nor pd [24, Section 4.3]. The r.k. K and the s.i.p. kernel G coincide when span{G(x, ·) : x ∈ X} is dense in B, which is the case when B is an RKHS [24, Theorems 2, 10 and 11]. This means that when B is an RKHS, the conditions (a9) and (a10) reduce to the well-known reproducing properties of an RKHS, with the s.i.p. reducing to an inner product.

4 RKBS Embedding of Probability Measures

In this section, we present our main contributions of deriving and analyzing the RKBS embedding of probability measures, which generalizes the theory of RKHS embeddings. First, we would like to remind the reader that the RKHS embedding in (1) can be derived by choosing F = {f : ∥f∥H ≤ 1} in

γF(P, Q) = sup_{f∈F} |∫X f dP − ∫X f dQ|.

See [19, 20] for details. Similar to the RKHS case, in Theorem 4 we show that the RKBS embeddings can be obtained by choosing F = {f : ∥f∥B ≤ 1} in γF(P, Q).
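The Lp semi-inner-product in (4) is easy to check numerically. The following sketch is our own illustration, not from the paper: it approximates (4) on B = Lp([0, 1]) with the Lebesgue measure by a Riemann sum on a uniform grid, and verifies properties (a5) and (a6); the function names and the discretization are ours.

```python
import numpy as np

# Illustrative only: the semi-inner-product (4) on B = L^p([0, 1]),
# approximated on a uniform grid.  All names are our own.

def lp_norm(f, p, dx):
    """||f||_{L^p}, approximated by a Riemann sum."""
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

def sip(f, g, p, dx):
    """[f, g]_B = (int f g |g|^{p-2} dmu) / ||g||^{p-2}, as in (4)."""
    num = np.sum(f * g * np.abs(g) ** (p - 2)) * dx
    return num / lp_norm(g, p, dx) ** (p - 2)

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
p = 1.5
f = 2.0 + np.sin(2 * np.pi * x)   # kept strictly positive to avoid 0**(p-2)
g = 1.0 + x

# (a5): [f, f]_B equals ||f||_B^2
assert np.isclose(sip(f, f, p, dx), lp_norm(f, p, dx) ** 2)
# (a6): Cauchy-Schwarz |[f, g]_B|^2 <= ||f||_B^2 ||g||_B^2
assert sip(f, g, p, dx) ** 2 <= lp_norm(f, p, dx) ** 2 * lp_norm(g, p, dx) ** 2
# Unlike an inner product, the s.i.p. is in general asymmetric for p != 2:
print(sip(f, g, p, dx), sip(g, f, p, dx))
```

For p = 2 the normalization in (4) disappears and `sip` reduces to the usual L² inner product, matching the remark that the s.i.p. coincides with the inner product in a Hilbert space.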
Interestingly, though B does not have an inner product, it can be seen that the structure of the semi-inner-product is sufficient to generate an embedding similar to (1). Theorem 4. Let B be an s.i.p. RKBS defined on a measurable space X with G as the s.i.p. kernel and K as the reproducing kernel, with both G and K being measurable. Let F = {f : ∥f∥B ≤ 1} and let G be bounded. Then

γK(P, Q) := γF(P, Q) = ∥∫X K(·, x) dP(x) − ∫X K(·, x) dQ(x)∥B′. (5)

Based on Theorem 4, it is clear that P can be seen as being embedded into B′ as P ↦ ∫X K(·, x) dP(x), and γK(P, Q) is the distance between the embeddings of P and Q. Therefore, we arrive at an embedding which looks similar to (1) and coincides with (1) when B is an RKHS. Given these embeddings, two questions need to be answered for them to be practically useful: (⋆) When is the embedding injective? and (⋆⋆) Can γK(P, Q) in (5) be estimated consistently and computed efficiently from finite random samples drawn i.i.d. from P and Q? The significance of (⋆) is that if (3) is injective, then the embedding can be used to distinguish between different P and Q, which, if the answer to (⋆⋆) is affirmative, can in turn be used in applications like two-sample tests that differentiate between P and Q based on samples drawn i.i.d. from them. These questions are answered in the following sections. Before that, we show how these questions are important in binary classification. Following [18], it can be shown that γK is the negative of the optimal risk associated with a Parzen window classifier in B′ that separates the class-conditional distributions P and Q (refer to the supplementary material for details). This means that if (3) is not injective, then the maximum risk is attained for P ≠ Q, i.e., distinct distributions are not classifiable. Therefore, the injectivity of (3) is of primary importance in applications.
In addition, the question in (⋆⋆) is critical as well, as it relates to the consistency of the Parzen window classifier.

4.1 When is (3) injective?

The following result provides various characterizations for the injectivity of (3), which are similar (but more general) to those obtained for the injectivity of (1) and coincide with the latter when B is an RKHS. Theorem 5 (Injectivity of γK). Suppose B is an s.i.p. RKBS defined on a topological space X with K and G as its r.k. and s.i.p. kernel respectively. Then the following hold: (a) Let X be a Polish space that is also locally compact Hausdorff. Suppose G is bounded and K(x, ·) ∈ C0(X) for all x ∈ X. Then (3) is injective if B is dense in C0(X). (b) Suppose the conditions in (a) hold. Then (3) is injective if B is dense in Lp(X, µ) for any Borel probability measure µ on X and some p ∈ [1, ∞). Since it is not easy to check for the denseness of B in C0(X) or Lp(X, µ), in Theorem 6 we present an easily checkable characterization of the injectivity of (3) when K is bounded, continuous, and translation invariant on Rd. Note that Theorem 6 generalizes the characterization (see [19, 20]) of the injectivity of the RKHS embedding in (1). Theorem 6 (Injectivity of γK for translation invariant K). Let X = Rd. Suppose K(x, y) = ψ(x − y), where ψ : Rd → R is of the form ψ(x) = ∫Rd e^{i⟨x,ω⟩} dΛ(ω) and Λ is a finite complex-valued Borel measure on Rd. Then (3) is injective if supp(Λ) = Rd. In addition, if K is symmetric, then the converse holds. Remark 7. If ψ in Theorem 6 is a real-valued pd function, then by Bochner’s theorem, Λ has to be real, nonnegative, and symmetric, i.e., Λ(dω) = Λ(−dω). Since ψ need not be a pd function for K to be a real, symmetric r.k. of B, Λ need not be nonnegative. More generally, if ψ is a real-valued function on Rd, then Λ is conjugate symmetric, i.e., Λ(dω) = Λ̄(−dω). An example of a translation invariant, real and symmetric (but not pd) r.k.
that satisfies the conditions of Theorem 6 can be obtained with ψ(x) = (4x⁶ + 9x⁴ − 18x² + 15) exp(−x²). See Example 3 for more details.

4.2 Consistency Analysis

Consider a two-sample test, wherein given two sets of random samples, {Xj}_{j=1}^m and {Yj}_{j=1}^n, drawn i.i.d. from distributions P and Q respectively, it is required to test whether P = Q or not. Given a metric γK on P(X), the problem can equivalently be posed as testing whether γK(P, Q) = 0 or not, based on {Xj}_{j=1}^m and {Yj}_{j=1}^n, in which case γK(P, Q) is estimated from these random samples. For the test to be meaningful, it is important that this estimate of γK is consistent. [9] showed that γK(Pm, Qn) is a consistent estimator of γK(P, Q) when B is an RKHS, where Pm := (1/m) Σ_{j=1}^m δ_{Xj}, Qn := (1/n) Σ_{j=1}^n δ_{Yj}, and δx represents the Dirac measure at x ∈ X. Theorem 9 generalizes the consistency result in [9] by showing that γK(Pm, Qn) is a consistent estimator of γK(P, Q) and the rate of convergence is O(m^{(1−t)/t} + n^{(1−t)/t}) if B′ is of type t, 1 < t ≤ 2. Before we present the result, we define the type of a Banach space B [2, p. 303]. Definition 8 (Rademacher type of B). Let 1 ≤ t ≤ 2. A Banach space B is said to be of t-Rademacher type (or, more briefly, of type t) if there exists a constant C∗ such that for any N ≥ 1 and any {fj}_{j=1}^N ⊂ B: (E∥Σ_{j=1}^N ϱj fj∥B^t)^{1/t} ≤ C∗ (Σ_{j=1}^N ∥fj∥B^t)^{1/t}, where {ϱj}_{j=1}^N are i.i.d. Rademacher (symmetric ±1-valued) random variables. Clearly, every Banach space is of type 1. Since having type t′ for t′ > t implies having type t, let us define t∗(B) := sup{t : B has type t}. Theorem 9 (Consistency of γK(Pm, Qn)). Let B be an s.i.p. RKBS. Assume ν := sup{√G(x, x) : x ∈ X} < ∞. Fix δ ∈ (0, 1). Then with probability 1 − δ over the choice of samples {Xj}_{j=1}^m drawn i.i.d. from P and {Yj}_{j=1}^n drawn i.i.d. from Q, we have

|γK(Pm, Qn) − γK(P, Q)| ≤ 2C∗ν (m^{(1−t)/t} + n^{(1−t)/t}) + √(18ν² log(4/δ)) (m^{−1/2} + n^{−1/2}),

where t = t∗(B′) and C∗ is some universal constant.
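To get a feel for the bound in Theorem 9, the following sketch (our own illustration; the universal constant C∗ is left unspecified by the theorem, so it is set to 1 here purely for illustration) evaluates the right-hand side for a bounded kernel with ν = 1:

```python
import math

# Illustration only: the right-hand side of the deviation bound in
# Theorem 9.  C* is an unspecified universal constant; we set it to 1.
def deviation_bound(m, n, nu=1.0, delta=0.05, t=2.0, c_star=1.0):
    rate = 2 * c_star * nu * (m ** ((1 - t) / t) + n ** ((1 - t) / t))
    conf = math.sqrt(18 * nu**2 * math.log(4 / delta)) * (m**-0.5 + n**-0.5)
    return rate + conf

# With t = t*(B') = 2 (e.g. when B is an RKHS) both terms decay as m^{-1/2}:
for m in (100, 10_000, 1_000_000):
    print(m, deviation_bound(m, m))
```

For t < 2 the first term dominates, which recovers the slower O(m^{(1−t)/t}) rate mentioned above.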
It is clear from Theorem 9 that if t∗(B′) ∈ (1, 2], then γK(Pm, Qn) is a consistent estimator of γK(P, Q). In addition, the best rate is obtained if t∗(B′) = 2, which is the case if B is an RKHS. In Section 5, we will provide examples of s.i.p. RKBSs that satisfy t∗(B′) = 2.

4.3 Computation of γK(P, Q)

We now consider the problem of computing γK(P, Q) and γK(Pm, Qn). Define λ∗P := ∫X K(·, x) dP(x). Consider

γK²(P, Q) = ∥λ∗P − λ∗Q∥B′²
(a5) = [λ∗P − λ∗Q, λ∗P − λ∗Q]B′
(a3) = [λ∗P, λ∗P − λ∗Q]B′ − [λ∗Q, λ∗P − λ∗Q]B′
= [∫X K(·, x) dP(x), λ∗P − λ∗Q]B′ − [∫X K(·, x) dQ(x), λ∗P − λ∗Q]B′
(∗) = ∫X [K(·, x), λ∗P − λ∗Q]B′ dP(x) − ∫X [K(·, x), λ∗P − λ∗Q]B′ dQ(x)
= ∫X [K(·, x), ∫X K(·, y) d(P − Q)(y)]B′ d(P − Q)(x), (6)

where (∗) is proved in the supplementary material. (6) is not reducible, as the s.i.p. is not linear in the second argument unless B is a Hilbert space. This means γK(P, Q) is not representable in terms of the kernel function K(x, y), unlike in the case of B being an RKHS, in which case the s.i.p. in (6) reduces to an inner product, providing

γK²(P, Q) = ∫∫X K(x, y) d(P − Q)(x) d(P − Q)(y).

Since this issue holds for any P, Q ∈ P(X), it also holds for Pm and Qn, which means γK(Pm, Qn) cannot be computed in a closed form in terms of the kernel K(x, y), unlike in the case of an RKHS, where γK(Pm, Qn) can be written as a simple V-statistic that depends only on K(x, y) computed at {Xj}_{j=1}^m and {Yj}_{j=1}^n. This is one of the main drawbacks of the RKBS approach: the s.i.p. structure does not allow closed form representations in terms of the kernel K (also see [24], where regularization algorithms derived in an RKBS are not solvable in closed form, unlike in an RKHS), and this could limit its practical viability. However, in the following section, we present non-trivial examples of s.i.p. RKBSs for which γK(P, Q) and γK(Pm, Qn) can be obtained in closed forms.
5 Concrete Examples of RKBS Embeddings

In this section, we present examples of RKBSs and then derive the corresponding γK(P, Q) and γK(Pm, Qn) in closed forms. To elaborate, we present three examples that cover the spectrum: Example 1 deals with an RKBS (in fact a family of RKBSs induced by the same r.k.) whose r.k. is pd, Example 2 with an RKBS whose r.k. is not symmetric and therefore not pd, and Example 3 with an RKBS whose r.k. is symmetric but not pd. These examples show that the Banach space embeddings result in richer metrics on P(X) than those obtained through RKHS embeddings. Example 1 (K is positive definite). Let µ be a finite nonnegative Borel measure on Rd. Then for any 1 < p < ∞ with q = p/(p − 1),

Bpd_p(Rd) := { fu(x) = ∫Rd u(t) e^{i⟨x,t⟩} dµ(t) : u ∈ Lp(Rd, µ), x ∈ Rd } (7)

is an RKBS with K(x, y) = G(x, y) = (µ(Rd))^{(p−2)/p} ∫Rd e^{−i⟨x−y,t⟩} dµ(t) as the r.k. and

γK(P, Q) = ∥∫Rd e^{i⟨x,·⟩} d(P − Q)(x)∥_{Lq(Rd,µ)} = ∥φP − φQ∥_{Lq(Rd,µ)}. (8)

First note that K is a translation invariant pd kernel on Rd, as it is the Fourier transform of a nonnegative finite Borel measure µ, which follows from Bochner’s theorem. Therefore, though the s.i.p. kernel and the r.k. of an RKBS need not be symmetric, the space in (7) is an interesting example of an RKBS which is induced by a pd kernel. In particular, it can be seen that many RKBSs (Bpd_p(Rd) for any 1 < p < ∞) have the same r.k. (ignoring the scaling factor, which can be made one for any p by choosing µ to be a probability measure). Second, note that Bpd_p is an RKHS when p = q = 2, and therefore (8) generalizes γk(P, Q) = ∥φP − φQ∥_{L2(Rd,µ)}. By Theorem 6, it is clear that γK in (8) is a metric on P(Rd) if and only if supp(µ) = Rd. Refer to the supplementary material for an interpretation of Bpd_p(Rd) as a generalization of a Sobolev space [23, Chapter 10]. Example 2 (K is not symmetric). Let µ be a finite nonnegative Borel measure such that its moment-generating function Mµ(x) := ∫Rd e^{⟨x,t⟩} dµ(t) exists.
Then for any 1 < p < ∞ with q = p/(p − 1),

Bns_p(Rd) := { fu(x) = ∫Rd u(t) e^{⟨x,t⟩} dµ(t) : u ∈ Lp(Rd, µ), x ∈ Rd }

is an RKBS with K(x, y) = G(x, y) = (Mµ(qx))^{(p−2)/p} Mµ(x(q − 1) + y) as the r.k. Suppose P and Q are such that MP and MQ exist. Then γK(P, Q) = ∥∫Rd e^{⟨x,·⟩} d(P − Q)(x)∥_{Lq(Rd,µ)} = ∥MP − MQ∥_{Lq(Rd,µ)}, which is the weighted Lq distance between the moment-generating functions of P and Q. It is easy to see that if supp(µ) = Rd, then γK(P, Q) = 0 ⇒ MP = MQ a.e. ⇒ P = Q, which means γK is a metric on P(Rd). Note that K is not symmetric (for q ≠ 2) and therefore is not pd. When p = q = 2, K(x, y) = Mµ(x + y) is pd and Bns_p(Rd) is an RKHS. Example 3 (K is symmetric but not positive definite). Let ψ(x) = A e^{−x²} (4x⁶ + 9x⁴ − 18x² + 15) with A := (1/243)(4π²/25)^{1/6}. Then

Bsnpd_{3/2}(R) := { fu(x) = ∫R (x − t)² e^{−3(x−t)²/2} u(t) dt : u ∈ L^{3/2}(R), x ∈ R }

is an RKBS with r.k. K(x, y) = G(x, y) = ψ(x − y). Clearly, ψ and therefore K are not pd (though symmetric on R), as ψ̂(x) = −(e^{−x²/4} / (34992√2)) (x⁶ − 39x⁴ + 216x² − 324) is not nonnegative at every x ∈ R. Refer to the supplementary material for the derivation of K and ψ̂. In addition, γK(P, Q) = ∥∫R θ(· − x) d(P − Q)(x)∥_{Lq(R)} = ∥(θ̂ (φP − φQ))^∨∥_{Lq(R)}, where θ(t) = t² e^{−3t²/2}. Since supp(θ̂) = R, we have γK(P, Q) = 0 ⇒ (θ̂ (φP − φQ))^∨ = 0 ⇒ θ̂ (φP − φQ) = 0 ⇒ φP = φQ a.e., which implies P = Q, and therefore γK is a metric on P(R). So far, we have presented different examples of RKBSs, wherein we have demonstrated the nature of the r.k., derived the Banach space embeddings in closed form, and studied the conditions under which they are injective. These examples also show that the RKBS embeddings result in richer distance measures on probabilities compared to those obtained by the RKHS embeddings, an advantage gained by moving from Hilbert to Banach spaces. Now, we consider the problem of computing γK(Pm, Qn) in closed form and its consistency.
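The embedding of Example 1 can be estimated directly from samples: γK(Pm, Qn) is an Lq(µ) distance between empirical characteristic functions. The sketch below is our own illustration, not from the paper: µ is taken to be a standard Gaussian on R (so supp(µ) = R and γK is a metric), P and Q are replaced by empirical measures, and the Lq(µ) integral is approximated by Monte Carlo over frequencies drawn from µ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration of Example 1: gamma_K(Pm, Qn) = ||phi_Pm - phi_Qn||_{L^q(mu)}
# with mu = N(0, 1) on R; the L^q(mu) integral is approximated by Monte
# Carlo over frequencies t drawn from mu.  q = 3 is one of the q > 2
# choices for which the discussion below applies.
def gamma_K(X, Y, q=3.0, n_freq=1000):
    t = rng.standard_normal(n_freq)                     # frequencies ~ mu
    phi_P = np.exp(1j * np.outer(t, X)).mean(axis=1)    # empirical char. fn of Pm
    phi_Q = np.exp(1j * np.outer(t, Y)).mean(axis=1)
    return np.mean(np.abs(phi_P - phi_Q) ** q) ** (1.0 / q)

X = rng.standard_normal(1000)          # samples from P = N(0, 1)
Y = rng.standard_normal(1000) + 1.0    # samples from Q = N(1, 1)
print(gamma_K(X, X))   # identical samples: exactly 0
print(gamma_K(X, Y))   # distinct distributions: clearly positive
```

This Monte Carlo approximation sidesteps the fact that (9) below has no closed form for general q; the closed-form cases are discussed next.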
In Section 4.3, we showed that γK(Pm, Qn) does not have a nice closed form expression, unlike in the case of B being an RKHS. However, in the following, we show that for K in Examples 1–3, γK(Pm, Qn) has a closed form expression for certain choices of q. Let us consider the estimation of γK(P, Q):

γK^q(Pm, Qn) = ∥∫X b(x, ·) d(Pm − Qn)(x)∥^q_{Lq(X,µ)} = ∫X |∫X b(x, t) d(Pm − Qn)(x)|^q dµ(t) = ∫X |(1/m) Σ_{j=1}^m b(Xj, t) − (1/n) Σ_{j=1}^n b(Yj, t)|^q dµ(t), (9)

where b(x, t) = e^{i⟨x,t⟩} in Example 1, b(x, t) = e^{⟨x,t⟩} in Example 2, and b(x, t) = θ(x − t) with q = 3 and µ the Lebesgue measure in Example 3. Since the duals of the RKBSs considered in Examples 1–3 are of type min(q, 2) for 1 ≤ q ≤ ∞ [2, p. 304], by Theorem 9, γK(Pm, Qn) estimates γK(P, Q) consistently at a convergence rate of O(m^{max(1−q,−1)/min(q,2)} + n^{max(1−q,−1)/min(q,2)}) for q ∈ (1, ∞), with the best rate of O(m^{−1/2} + n^{−1/2}) attainable when q ∈ [2, ∞). This means that for q ∈ (2, ∞), the same rate as attainable by the RKHS can be achieved. Now, the problem reduces to computing γK(Pm, Qn). Note that (9) cannot be computed in a closed form for all q (see the discussion in the supplementary material about approximating γK(Pm, Qn)). However, when q = 2, (9) can be computed very efficiently in closed form (in terms of K) as a V-statistic [9], given by

γK²(Pm, Qn) = Σ_{j,l=1}^m K(Xj, Xl)/m² + Σ_{j,l=1}^n K(Yj, Yl)/n² − 2 Σ_{j=1}^m Σ_{l=1}^n K(Xj, Yl)/(mn). (10)

More generally, it can be shown that if q = 2s, s ∈ N, then (9) reduces to

γK^q(Pm, Qn) = ∫X ⋯ ∫X A(x1, . . . , xq) Π_{j=1}^q d(Pm − Qn)(xj), where A(x1, . . . , xq) := ∫X Π_{j=1}^s b(x_{2j−1}, t) b̄(x_{2j}, t) dµ(t), (11)

for which closed form computation is possible for appropriate choices of b and µ. Refer to the supplementary material for the derivation of (11). For b and µ as in Example 1, we have A(x1, . . . , xq) = (µ(Rd))^{(2−p)/p} K(Σ_{j=1}^s x_{2j−1}, Σ_{j=1}^s x_{2j}), while for b and µ as in Example 2, we have A(x1, . . . , xq) = Mµ(Σ_{j=1}^q xj).
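For q = 2, the V-statistic (10) can be computed directly from the two samples. A sketch with a Gaussian kernel, i.e., the pd/RKHS special case; the code and names are our own illustration, not from the paper:

```python
import numpy as np

# The q = 2 closed form (10): gamma_K^2(Pm, Qn) as a V-statistic,
# illustrated with a Gaussian (pd) kernel on R.
def gamma2_vstat(X, Y, bw=1.0):
    def gram(A, B):
        return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * bw**2))
    m, n = len(X), len(Y)
    return (gram(X, X).sum() / m**2 + gram(Y, Y).sum() / n**2
            - 2 * gram(X, Y).sum() / (m * n))

rng = np.random.default_rng(1)
X = rng.standard_normal(500)
Y = rng.standard_normal(500) + 1.0
print(gamma2_vstat(X, X))   # 0 up to rounding
print(gamma2_vstat(X, Y))   # clearly positive
```

Computing (10) takes O(m²) kernel evaluations, the q = 2 instance of the O(m^q) complexity discussed in the text, which is why q = 2 is the cheapest of the closed-form cases.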
By appropriately choosing θ and µ in Example 3, we can obtain a closed form expression for A(x1, . . . , xq), which is proved in the supplementary material. Note that choosing s = 1 in (11) results in (10). (11) shows that γK^q(Pm, Qn) can be computed in a closed form in terms of A at a complexity of O(m^q), assuming m = n, which means the least complexity is obtained for q = 2. The above discussion shows that for appropriate choices of q, i.e., q ∈ (2, ∞), the RKBS embeddings in Examples 1–3 are useful in practice, as γK(Pm, Qn) is consistent and has a closed form expression. However, the drawback of the RKBS framework is that the computation of γK(Pm, Qn) is more involved than its RKHS counterpart.

6 Conclusion & Discussion

With a motivation to study the advantages and disadvantages of generalizing Hilbert space learning algorithms to Banach spaces, in this paper we generalized the notion of RKHS embedding of probability measures to Banach spaces, in particular RKBS that are uniformly Fréchet differentiable and uniformly convex; note that this is equivalent to generalizing an RKHS-based Parzen window classifier to RKBS. While we showed that most of the RKHS results, like injectivity of the embedding, consistency of the Parzen window classifier, etc., nicely generalize to RKBS, yielding richer distance measures on probabilities, the generalized notion is less attractive in practice compared to its RKHS counterpart because of the computational disadvantage associated with it. Since most of the existing literature on generalizing kernel methods to Banach spaces deals with more complex algorithms than the simple Parzen window classifier considered in this paper, we believe that most of these algorithms may have limited practical applicability, though they are theoretically appealing. This therefore raises an important open problem of developing computationally efficient Banach space based learning algorithms.
Acknowledgments

The authors thank the anonymous reviewers for their constructive comments that improved the presentation of the paper. Part of the work was done while B. K. S. was a Ph.D. student at UC San Diego. B. K. S. and G. R. G. L. acknowledge support from the National Science Foundation (grants DMS-MSPA 0625409 and IIS-1054960). K. F. was supported in part by JSPS KAKENHI (B) 22300098.

References

[1] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
[2] B. Beauzamy. Introduction to Banach Spaces and their Geometry. North-Holland, The Netherlands, 1985.
[3] K. Bennett and E. Bredensteiner. Duality and geometry in SVM classifiers. In Proc. 17th International Conference on Machine Learning, pages 57–64, 2000.
[4] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, London, UK, 2004.
[5] R. Der and D. Lee. Large-margin classification in Banach spaces. In JMLR Workshop and Conference Proceedings, volume 2, pages 91–98. AISTATS, 2007.
[6] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5:73–99, 2004.
[7] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 489–496, Cambridge, MA, 2008. MIT Press.
[8] J. R. Giles. Classes of semi-inner-product spaces. Trans. Amer. Math. Soc., 129:436–446, 1967.
[9] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, 2007.
[10] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 585–592. MIT Press, 2008.
[11] M. Hein, O. Bousquet, and B. Schölkopf. Maximal margin classification for metric spaces. J. Comput. System Sci., 71:333–359, 2005.
[12] G. Lumer. Semi-inner-product spaces. Trans. Amer. Math. Soc., 100:29–43, 1961.
[13] C. A. Micchelli and M. Pontil. A function representation for learning in Banach spaces. In Conference on Learning Theory, 2004.
[14] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[15] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, UK, 2004.
[16] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proc. 18th International Conference on Algorithmic Learning Theory, pages 13–31. Springer-Verlag, Berlin, Germany, 2007.
[17] K. Sridharan and A. Tewari. Convex games in Banach spaces. In Conference on Learning Theory, 2010.
[18] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, G. R. G. Lanckriet, and B. Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1750–1758. MIT Press, 2009.
[19] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. R. G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In R. Servedio and T. Zhang, editors, Proc. of the 21st Annual Conference on Learning Theory, pages 111–122, 2008.
[20] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
[21] H. Tong, D.-R. Chen, and F. Yang. Least square regression with ℓp-coefficient regularization. Neural Computation, 22:3221–3235, 2010.
[22] U. von Luxburg and O. Bousquet. Distance-based classification with Lipschitz functions. Journal of Machine Learning Research, 5:669–695, 2004.
[23] H. Wendland. Scattered Data Approximation. Cambridge University Press, Cambridge, UK, 2005.
[24] H. Zhang, Y. Xu, and J. Zhang. Reproducing kernel Banach spaces for machine learning. Journal of Machine Learning Research, 10:2741–2775, 2009.
2011
The contributions of this paper are: We formalise the learning problem of dynamic difficulty adjustment (in Section 2), propose a novel learning algorithm for this problem (in Section 4), and give a bound on the number of proposed difficulty settings that were not just right (in Section 5). The bound limits the number of mistakes the algorithm can make relative to the best static difficulty setting chosen in hindsight. For the bound to hold, no assumptions whatsoever need to be made on the behaviour of the player. Last but not least we empirically study the behaviour of the algorithm under various circumstances (in Section 6). In particular, we investigate the performance of the algorithm ‘against’ statistically distributed players by simulating the players as well as ‘against’ adversaries by asking humans to try to trick the algorithm in a simplified setting. Implementing our algorithm into a real game and testing it on real human players is left to future work. 2 Formalisation To be able to theoretically investigate dynamic difficulty adjustment, we view it as a game between a master and a player, played on a partially ordered set modelling the ‘more difficult than’-relation. The game is played in turns where each turn has the following elements: 1 1. the game master chooses a difficulty setting, 2. the player plays one ‘round’ of the game in this setting, and 3. the game master experiences whether the setting was ‘too difficult’, ‘just right’, or ‘too easy’ for the player. The master aims at making as few as possible mistakes, that is, at choosing a difficulty setting that is ‘just right’ as often as possible. In this paper, we aim at developing an algorithm for the master with theoretical guarantees on the number of mistakes in the worst case while not making any assumptions about the player. 
To simplify our analysis, we make the following, rather natural assumptions: • the set of difficulty settings is finite and • in every round, the (hidden) difficulty settings respect the partial order, that is, – no state that ‘is more difficult than’ a state which is ‘too difficult’ can be ‘just right’ or ‘too easy’ and – no state that ‘is more difficult than’ a state which is ‘just right’ can be ‘too easy’. Even with these natural assumptions, in the worst case, no algorithm for the master will be able to make even a single correct prediction. As we can not make any assumptions about the player, we will be interested in comparing our algorithm theoretically and empirically with the best statically chosen difficulty setting, as is commonly the case in online learning [3]. 3 Related Work As of today there exist a few commercial games with a well designed dynamic difficulty adjustment mechanism, but all of them employ heuristics and as such suffer from the typical disadvantages (being not transferable easily to other games, requiring extensive testing, etc). What we would like to have instead of heuristics is a universal mechanism for dynamic difficulty adjustment: An online algorithm that takes as an input (game-specific) ways to modify difficulty and the current player’s in-game history (actions, performance, reactions, ...) and produces as an output an appropriate difficulty modification. Both artificial intelligence researchers and the game developers community display an interest in the problem of automatic difficulty scaling. Different approaches can be seen in the work of R. Hunicke and V. Chapman [10], R. Herbich and T. Graepel [9], Danzi et al [7], and others. Since the perceived difficulty and the preferred difficulty are subjective parameters, the dynamic difficulty adjustment algorithm should be able to choose the “right” difficulty level in a comparatively short time for any particular player. 
Existing work in player modeling in computer games [14, 13, 5, 12] demonstrates the power of utilising the player models to create the games or in-game situations of high interest and satisfaction for the players. As can be seen from these examples the problem of dynamic difficulty adjustment in video games was attacked from different angles, but a unifying and theoretically sound approach is still missing. To the best of our knowledge this work contains the first theoretical formalization of dynamic difficulty adjustment as a learning problem. Under the assumptions described in Section 2, we can view the partially ordered set as a directed acyclic graph, at each round labelled by three colours (say, red, for ‘too difficult’ green for ‘just right’, and blue for ‘too easy’) such that • for every directed path in the graph between two equally labelled vertices, all vertices on that path have the same colour and • there is no directed path from a green vertex to a red vertex and none from a blue vertex to either a red or a green vertex. The colours are allowed to change in each round as long as they obey the above rules. The master, i.e., the learning algorithm, does not see the colours but must point at a green vertex as often as 2 possible. If he points at a red vertex, he receives the feedback −1; if he points at a blue vertex, he receives the feedback +1. This setting is related to learning directed cuts with membership queries. For learning directed cuts, i.e., monotone subsets, G¨artner and Garriga [8] provided algorithms and bounds for the case in which the labelling does not change over time. They then showed that the intersection between a monotone and an antimonotone subset in not learnable. This negative result is not applicable in our case, as the feedback we receive is more powerful. They furthermore showed that directed cuts are not learnable with traditional membership queries if the labelling is allowed to change over time. 
This negative result also does not apply to our case as the aim of the master is “only” to point at a green vertex as often as possible and as we are interested in a comparison with the best static vertex chosen in hindsight. If we ignore the structure inherent in the difficulty settings, we will be in a standard multi-armed bandit setting [2]: There are K arms, to which an unknown adversary assigns loss values on each iteration (0 to the ‘just right’ arms, 1 to all the others). The goal of the algorithm is to choose an arm on each iteration to minimize its overall loss. The difficulty of the learning problem comes from the fact that only the loss of the chosen arm is revealed to the algorithm. This setting was studied extensively in the last years, see [11, 6, 4, 1] and others. The standard performance measure is the so-called ‘regret’: The difference of the loss acquired by the learning algorithm and by the best static arm chosen in hindsight. The best known to-date algorithm that does not use any additional information is the Improved Bandit Strategy (called IMPROVEDPI in the following) [3]. The upper bound on its regret is of the order p KT ln(T), where T is the amount of iterations. IMPROVEDPI will be the second baseline after the best static in hindsight (BSIH) in our experiments. 4 Algorithm In this section we give an exponential update algorithm for predicting a vertex that corresponds to a ‘just right’ difficulty setting in a finite partially ordered set (K, ≻) of difficulty settings. The partial order is such that for i, j ∈K we write i ≻j if difficulty setting i is ‘more difficult than’ difficulty setting j. The learning rate of the algorithm is denoted by β. The response that the master algorithm can observe ot is +1 if the chosen difficulty setting was ‘too easy’, 0 if it was ‘just right’, and −1 if it was ‘too difficult’. 
The algorithm maintains a belief w of each vertex being ‘just right’ and updates this belief if the observed response implies that the setting was ‘too easy’ or ‘too difficult’. Algorithm 1 PARTIALLY-ORDERED-SET MASTER (POSM) for Difficulty Adjustment Require: parameter β ∈(0, 1), K difficulty Settings K, partial order ≻on K, and a sequence of observations o1, o2, . . . 1: ∀k ∈K : let w1(k) = 1 2: for each turn t = 1, 2, . . . do 3: ∀k ∈K : let At(k) = P x∈K:x⪰k wt(x) 4: ∀k ∈K : let Bt(k) = P x∈K:x⪯k wt(x) 5: PREDICT kt = argmaxk∈K min {Bt(k), At(k)} 6: OBSERVE ot ∈{−1, 0, +1} 7: if ot = +1 then 8: ∀k ∈K : let wt+1(k) = βwt(k) if k ⪯kt wt(x) otherwise 9: end if 10: if ot = −1 then 11: ∀k ∈K : let wt+1(k) = βwt(k) if k ⪰kt wt(x) otherwise 12: end if 13: end for The main idea of Algorithm 1 is that in each round we want to make sure we can update as much belief as possible. The significance of this will be clearer when looking at the theory in the next section. To ensure it, we compute for each setting k the belief ‘above’ k as well as ‘below’ k . 3 That is, At in line 3 of the algorithm collects the belief of all settings that are known to be ‘more difficult’ and Bt in line 4 of the algorithm collects the belief of all settings that are known to be ‘less difficult’ than k. If we observe that the proposed setting was ‘too easy’, that is, we should ‘increase the difficulty’, in line 8 we update the belief of the proposed setting as well as all settings easier than the proposed. If we observe that the proposed setting was ‘too difficult’, that is, we should ‘decrease the difficulty’, in line 11 we update the belief of the proposed setting as well as all settings more difficult than the proposed. The amount of belief that is updated for each mistake is thus equal to Bt(kt) or At(kt). 
To gain the most information independently of the observation, and thus to achieve the best performance, we choose the k that gives us the best worst-case update min{Bt(k), At(k)} in line 5 of the algorithm.

5 Theory

We will now show a bound on the number of inappropriate difficulty settings that are proposed, relative to the number of mistakes the best static difficulty setting makes. We denote the number of mistakes of POSM until time T by m, and the minimum number of times a statically chosen difficulty setting would have made a mistake until time T by M. We denote furthermore the total amount of belief on the partially ordered set by Wt = Σ_{k∈K} wt(k). The analysis of the algorithm relies on the notion of a path cover of K, i.e., a set of paths covering K. A path is a subset of K that is totally ordered. A set of paths covers K if the union of the paths is equal to K. Any path cover can be chosen, but the minimum path cover of K achieves the tightest bound. It can be found in time polynomial in |K|, and its size is equal to the size of the largest antichain in (K, ≻). We denote the chosen set of paths by C. With this terminology, we are now ready to state the main result of our paper:

Theorem 1. For the number of mistakes of POSM, it holds that:

m ≤ (ln |K| + M ln(1/β)) / ln( 2|C| / (2|C| − 1 + β) ).

For all c ∈ C we denote the amount of belief on each chain by W_t^c = Σ_{x∈c} wt(x), the belief 'above' k on c by A_t^c(k) = Σ_{x∈c : x⪰k} wt(x), and the belief 'below' k on c by B_t^c(k) = Σ_{x∈c : x⪯k} wt(x). Furthermore, we denote the 'heaviest' chain by ct = argmax_{c∈C} W_t^c. Unless stated otherwise, the following statements hold for all t.

Observation 1.1. To relate the amount of belief updated by POSM to the amount of belief on each chain, observe that

max_{k∈K} min{At(k), Bt(k)} = max_{c∈C} max_{k∈c} min{At(k), Bt(k)} ≥ max_{c∈C} max_{k∈c} min{A_t^c(k), B_t^c(k)} ≥ max_{k∈ct} min{A_t^{ct}(k), B_t^{ct}(k)}.

Observation 1.2.
As ct is the 'heaviest' among all chains and Σ_{c∈C} W_t^c ≥ Wt, it holds that W_t^{ct} ≥ Wt/|C|. We will next show that for every chain there is a difficulty setting for which it holds that, had we proposed that setting and made a mistake, we would be able to update at least half of the total weight of that chain.

Proposition 1.1. For all c ∈ C it holds that max_{k∈c} min{A_t^c(k), B_t^c(k)} ≥ W_t^c / 2.

Proof. We choose i = argmax_{k∈c} {B_t^c(k) | B_t^c(k) < W_t^c/2} and j = argmin_{k∈c} {B_t^c(k) | B_t^c(k) ≥ W_t^c/2}. This way, we obtain i, j ∈ c for which B_t^c(i) < W_t^c/2 ≤ B_t^c(j) and which are consecutive, that is, there is no k ∈ c with i ≺ k ≺ j. Such i, j exist and are unique, as wt(x) > 0 for all x ∈ K. We then have B_t^c(i) + A_t^c(j) = W_t^c and thus also A_t^c(j) > W_t^c/2. This immediately implies

W_t^c/2 ≤ min{A_t^c(j), B_t^c(j)} ≤ max_{k∈c} min{A_t^c(k), B_t^c(k)}.

Observation 1.3. We use the previous proposition to show that, in each iteration in which POSM proposes an inappropriate difficulty setting, we update at least a constant fraction of the total weight of the partially ordered set:

max_{k∈K} min{At(k), Bt(k)} ≥ max_{k∈ct} min{A_t^{ct}(k), B_t^{ct}(k)} ≥ W_t^{ct}/2 ≥ Wt/(2|C|).

Proof (of Theorem 1). From the previous observations it follows that at each mistake we update at least a fraction 1/(2|C|) of the total weight, while at most a fraction (2|C| − 1)/(2|C|) is not updated. This implies

W_{t+1} ≤ β · (1/(2|C|)) Wt + ((2|C| − 1)/(2|C|)) Wt ≤ ( β/(2|C|) + (2|C| − 1)/(2|C|) ) Wt.

Applying this bound recursively, we obtain for time T

W_T ≤ W_1 ( β/(2|C|) + (2|C| − 1)/(2|C|) )^m ≤ |K| ( β/(2|C|) + (2|C| − 1)/(2|C|) )^m.

As we only update the weight of a difficulty setting if the response implied that the algorithm made a mistake, β^M is a lower bound on the weight of one difficulty setting, and hence also W_T ≥ β^M. Solving

β^M ≤ |K| ( (2|C| − 1)/(2|C|) + β/(2|C|) )^m

for m proves the theorem. Note that this bound is similar to the bound for the full information setting [3], despite much weaker information being available in our case.
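For concreteness, the bound of Theorem 1 is easy to evaluate numerically. The helper below is ours, not from the paper; it simply plugs values into the theorem.

```python
import math

def posm_mistake_bound(num_settings, M, beta, num_chains):
    """Theorem 1: m <= (ln|K| + M ln(1/beta)) / ln(2|C| / (2|C| - 1 + beta)).

    num_settings : |K|, number of difficulty settings
    M            : mistakes of the best static setting in hindsight
    beta         : learning rate in (0, 1)
    num_chains   : |C|, size of the chosen path cover
    """
    num = math.log(num_settings) + M * math.log(1.0 / beta)
    den = math.log(2.0 * num_chains / (2.0 * num_chains - 1.0 + beta))
    return num / den
```

For example, on a single chain (|C| = 1) of 50 settings with β = 0.5 and a best static setting making M = 10 mistakes, the bound evaluates to roughly 38 mistakes for POSM; it degrades as the path cover |C| grows, reflecting the influence of the poset's width.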
The influence of |C| is the new ingredient that changes the behaviour of this bound for different partially ordered sets.

6 Experiments

We performed two sets of experiments: simulating a game against a stochastic environment, and using human players to provide our algorithm with a non-oblivious adversary. To evaluate the performance of our algorithm, we have chosen two baselines. The first one is the best static difficulty setting in hindsight: the difficulty that a player would pick if she knew her skill level in advance and had to choose the difficulty only once. The second one is the IMPROVEDPI algorithm [3]. In the following, we call the subset of the poset's vertices with the 'just right' labels the zero-zone (because the corresponding components of the loss vector are equal to zero). In both the stochastic and the adversarial scenario, we consider two different settings, called 'smooth' and 'non-smooth'. The settings' names describe the way the zero-zone changes with time. In the 'non-smooth' setting we place no restrictions on it apart from its size, while in the 'smooth' setting the border of the zero-zone is allowed to move by only one vertex at a time. These two settings represent two extreme situations: one player changing her skills gradually with time changes the zero-zone 'smoothly'; different players with different skills for each new challenge the game presents make the zero-zone 'jump'. In a more realistic scenario, the zero-zone would change 'smoothly' most of the time, but would occasionally jump.

Figure 1: Stochastic adversary, 'smooth' setting, on a single chain of 50 vertices. (a) Loss. (b) Regret.
Figure 2: Stochastic adversary, 'smooth' setting, on a grid of 7×7 vertices. (a) Loss. (b) Regret.

6.1 Stochastic Adversary

In the first set of experiments, the adversary is stochastic: on every iteration the zero-zone changes with a pre-defined probability. In the 'smooth' setting, only one of the border vertices of the zero-zone can change its label at a time. For the 'non-smooth' setting, we consider a truly evil case limiting the zero-zone to always contain exactly one vertex, and a case where the zero-zone may contain up to 20% of all vertices in the graph. Note that even relabeling a single vertex may break the consistency of the labeling with regard to the poset; the necessary repair procedure may result in more than one vertex being relabeled at a time. We consider two graphs that represent two different but typical game structures with regard to difficulty: a single chain and a 2-dimensional grid. A set of progressively more difficult challenges, such as can be found in a puzzle or a time-management game, can be directly mapped onto a chain whose length corresponds to the number of challenges. A 2- (or higher-) dimensional grid, on the other hand, is more like a skill-based game, where, depending on the choices players make, different game states become available to them. In our experiments, the chain contains 50 vertices, while the grid is built on 7 × 7 vertices. In all considered variations of the setting, the game lasts for 500 iterations and is repeated 10 times. The resulting mean and standard deviation values of loss and regret are shown in the following figures: the 'smooth' setting in Figures 1(a), 1(b) and 2(a), 2(b); the 'non-smooth' setting in Figures 3(a), 3(b) and 4(a), 4(b). (For brevity we omit the plots with the results of the other 'non-smooth' variations; they all show very similar behaviour.)
Note that in the 'smooth' setting POSM outperforms BSIH and, therefore, its regret is negative. Furthermore, in the considerably more difficult 'non-smooth' setting, all algorithms perform badly (as expected). Nevertheless, in the slightly easier case of a larger zero-zone, BSIH performs the best of the three, and the performance of POSM starts to improve. While BSIH is a baseline that cannot be implemented, as it requires foreseeing the future, POSM is a proper algorithm for dynamic difficulty adjustment. It is therefore surprising that POSM performs almost as well as BSIH, or even better.

Figure 3: Stochastic adversary, 'non-smooth' setting, exactly one vertex in the zero-zone, on a single chain of 50 vertices. (a) Loss. (b) Regret.

Figure 4: Stochastic adversary, 'non-smooth' setting, up to 20% of all vertices may be in the zero-zone, on a single chain of 50 vertices. (a) Loss. (b) Regret.

6.2 Evil Adversary

While the experiments in our stochastic environment show encouraging results, of real interest to us is the situation where the adversary is 'evil', non-stochastic, and furthermore non-oblivious. In dynamic difficulty adjustment, the algorithm has to deal with people, who learn and change in ways that are hard to predict. We limit our experiments to the case of a linear order on difficulty settings, in other words, the chain. Even though this is a simplified scenario, it is rather natural for games and demonstrates the power of our algorithm. To simulate this situation, we decided to use people as adversaries.
Just as players in dynamic difficulty adjustment are not supposed to be aware of the mechanics, our methods and goals were not disclosed to the test subjects. Instead, they were presented with a modified game of cups: on every iteration, the casino hides a coin under one of the cups; after that, the player may point at two of the cups. If the coin is under one of these two, the player wins it. Behind the scenes, the cups represented the vertices on the chain, and the players' choices set the lower and upper borders of the zero-zone. If the algorithm's prediction was wrong, one of the two cups was chosen at random and the coin was placed under it. If the prediction was correct, no coin was awarded. Unfortunately, using people in such experiments places severe limitations on the size of the game. In a setting as simplified as this, and without any extrinsic rewards, they can only handle short chains and short games before getting bored. In our case, we restricted the length of the chain to 8 and the length of each game to 15. It is possible to simulate a longer game by not resetting the weights of the algorithm after each game is over, but at the current stage of the work this was not done. Again, we created the 'smooth' and 'non-smooth' settings by placing or removing restrictions on how players were allowed to choose their cups. To each game, either IMPROVEDPI or POSM was assigned. The results for the 'smooth' setting are shown in Figures 5(a), 5(b), and 5(c); for the 'non-smooth' setting in Figures 6(a), 6(b), and 6(c). Note that, since this time different games were played by IMPROVEDPI and POSM, we have two separate plots for their respective loss values.
Figure 5: Evil adversary, 'smooth' setting, a single chain of 8 vertices. (a) Games vs IMPROVEDPI. (b) Games vs POSM. (c) Regret.

Figure 6: Evil adversary, 'non-smooth' setting, a single chain of 8 vertices. (a) Games vs IMPROVEDPI. (b) Games vs POSM. (c) Regret.

We can see that in the 'smooth' setting, the performance of POSM is again very close to that of BSIH. In the more difficult 'non-smooth' setting, the results are also encouraging. Note that the loss of BSIH appears to be worse in games played against POSM. A plausible interpretation is that players had to follow more difficult (less static) strategies to fool POSM and win their coins. Nevertheless, the regret of POSM is small even in this case.

7 Conclusions

In this paper we formalised dynamic difficulty adjustment as a prediction problem on partially ordered sets and proposed a novel online learning algorithm, POSM, for dynamic difficulty adjustment. Using this formalisation, we were able to prove a bound on the performance of POSM relative to the best static difficulty setting chosen in hindsight, BSIH. To validate our theoretical findings empirically, we performed a set of experiments comparing POSM and another state-of-the-art algorithm to BSIH in two settings: (a) simulating the player by a stochastic process, and (b) simulating the player by humans encouraged to play as adversarially as possible. These experiments showed that POSM very often performs almost as well as BSIH and, even more surprisingly, sometimes even better. As this is also better than the behaviour suggested by our mistake bound, there seems to be a gap between the theoretical and empirical performance of our algorithm. In future work we will, on the one hand, investigate this gap, aiming to provide better bounds by, perhaps, making stronger but still realistic assumptions.
On the other hand, we will implement POSM in a range of computer games as well as teaching systems to observe its behaviour in real application scenarios.

Acknowledgments

This work was supported in part by the German Science Foundation (DFG) in the Emmy Noether program under grant 'GA 1615/1-1'. The authors thank Michael Kamp for proofreading.

References

[1] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. 2008.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. Foundations of Computer Science, Annual IEEE Symposium on, 0:322, 1995.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66:321–352, 2007. doi:10.1007/s10994-006-5001-7.
[5] D. Charles and M. Black. Dynamic player modeling: A framework for player-centered digital games. In Proc. of the International Conference on Computer Games: Artificial Intelligence, Design and Education, pages 29–35, 2004.
[6] V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization against an adaptive adversary. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '06, pages 937–943, New York, NY, USA, 2006. ACM.
[7] G. Danzi, A. H. P. Santana, A. W. B. Furtado, A. R. Gouveia, A. Leitão, and G. L. Ramalho. Online adaptation of computer games agents: A reinforcement learning approach. II Workshop de Jogos e Entretenimento Digital, pages 105–112, 2003.
[8] T. Gärtner and G. C. Garriga. The cost of learning directed cuts. In Proceedings of the 18th European Conference on Machine Learning, 2007.
[9] R. Herbrich, T. Minka, and T. Graepel. TrueSkill™: A Bayesian skill rating system. In NIPS, pages 569–576, 2006.
[10] R. Hunicke and V. Chapman.
AI for dynamic difficulty adjustment in games. In Proceedings of the Challenges in Game AI Workshop, Nineteenth National Conference on Artificial Intelligence, 2004.
[11] H. McMahan and A. Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In J. Shawe-Taylor and Y. Singer, editors, Learning Theory, volume 3120 of Lecture Notes in Computer Science, pages 109–123. Springer Berlin / Heidelberg, 2004.
[12] O. Missura and T. Gärtner. Player modeling for intelligent difficulty adjustment. In Discovery Science, pages 197–211. Springer, 2009.
[13] J. Togelius, R. Nardi, and S. Lucas. Making racing fun through player modeling and track evolution. In SAB'06 Workshop on Adaptive Approaches for Optimizing Player Satisfaction in Computer and Physical Games, pages 61–70, 2006.
[14] G. Yannakakis and M. Maragoudakis. Player modeling impact on player's entertainment in computer games. Lecture Notes in Computer Science, 3538:74, 2005.
2011
Optimistic Optimization of a Deterministic Function without the Knowledge of its Smoothness

Rémi Munos
SequeL project, INRIA Lille – Nord Europe, France
remi.munos@inria.fr

Abstract

We consider a global optimization problem of a deterministic function f in a semi-metric space, given a finite budget of n evaluations. The function f is assumed to be locally smooth (around one of its global maxima) with respect to a semi-metric ℓ. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm, DOO, that requires the knowledge of ℓ. We report a finite-sample performance bound in terms of a measure of the quantity of near-optimal states. We then define a second algorithm, SOO, which does not require the knowledge of the semi-metric ℓ under which f is smooth, and whose performance is almost as good as DOO optimally fitted.

1 Introduction

We consider the problem of finding a good approximation of the maximum of a function f : X → R using a finite budget of evaluations of the function. More precisely, we want to design a sequential exploration strategy of the search space X, i.e., a sequence x1, x2, . . . , xn of states of X, where each xt may depend on previously observed values f(x1), . . . , f(xt−1), such that at round n (the computational budget), the algorithm A returns a state x(n) with the highest possible value. The performance of the algorithm is evaluated by the loss

rn = sup_{x∈X} f(x) − f(x(n)).     (1)

Here the performance criterion is the accuracy of the recommendation made after n evaluations of the function (which may be thought of as calls to a black-box model). This criterion is different from usual bandit settings, where the cumulative regret ( n sup_{x∈X} f(x) − Σ_{t=1}^{n} f(x(t)) ) measures how well the algorithm succeeds in selecting states with good values while exploring the search space.
The loss criterion (1) is closer to the simple regret defined in the bandit setting [BMS09, ABM10]. Since the literature on global optimization is huge, we only mention the works that are closely related to our contribution. The approach followed here can be seen as an optimistic sampling strategy where, at each round, we explore the region of the space where the function could be the largest, given the knowledge of previous evaluations. A large body of algorithmic work has been developed using branch-and-bound techniques [Neu90, Han92, Kea96, HT96, Pin96, Flo99, SS00], such as Lipschitz optimization, where the function is assumed to be globally Lipschitz. Our first contribution with respect to (w.r.t.) this literature is to considerably weaken the Lipschitz assumption usually made and to consider only a locally one-sided Lipschitz assumption around the maximum of f. In addition, we do not require the space to be a metric space, but only to be equipped with a semi-metric. The optimistic strategy has been intensively studied in the recent bandit literature, such as in the UCB algorithm [ACBF02] and its many extensions to tree search [KS06, CM07] (with application to computer Go [GWMT06]), planning [HM08, BM10, BMSB11], and Gaussian process optimization [SKKS10]. The case of a Lipschitz (or relaxed) assumption in metric spaces is considered in [Kle04, AOS07] and more recently in [KSU08, BMSS08, BMSS11]; for the case of an unknown Lipschitz constant, see [BSY11, Sli11] (where a bound on the Hessian or another related parameter is assumed). Compared to this literature, our contribution is the design and analysis of two algorithms: (1) A first algorithm, Deterministic Optimistic Optimization (DOO), that requires the knowledge of the semi-metric ℓ for which f is locally smooth around its maximum. A loss bound is provided (in terms of the near-optimality dimension of f under ℓ) in a more general setting than previously considered.
(2) A second algorithm, Simultaneous Optimistic Optimization (SOO), that does not require the knowledge of ℓ. We show that SOO performs almost as well as DOO optimally fitted.

2 Assumptions about the hierarchical partition and the function

Our optimization algorithms are implemented by resorting to a hierarchical partitioning of the space X, which is given to the algorithms. More precisely, we consider a set of partitions of X at all scales h ≥ 0: for any integer h, X is partitioned into a set of K^h cells Xh,i, where 0 ≤ i ≤ K^h − 1. This partitioning may be represented by a K-ary tree structure where each cell Xh,i corresponds to a node (h, i) of the tree (indexed by its depth h and index i), and each node (h, i) possesses K children nodes {(h + 1, i_k)}_{1≤k≤K}. In addition, the cells of the children {X_{h+1,i_k}, 1 ≤ k ≤ K} form a partition of the parent's cell Xh,i. The root of the tree corresponds to the whole domain X (cell X0,0). To each cell Xh,i is assigned a specific state xh,i ∈ Xh,i where f may be evaluated. We now state four assumptions: Assumption 1 is about the semi-metric ℓ, Assumption 2 is about the smoothness of the function w.r.t. ℓ, and Assumptions 3 and 4 are about the shape of the hierarchical partition w.r.t. ℓ.

Assumption 1 (Semi-metric). We assume that ℓ : X × X → R+ is such that for all x, y ∈ X, we have ℓ(x, y) = ℓ(y, x) and ℓ(x, y) = 0 if and only if x = y.

Note that we do not require ℓ to satisfy the triangle inequality (in which case ℓ would be a metric). An example of a metric space is the Euclidean space R^d with the metric ℓ(x, y) = ∥x − y∥ (Euclidean norm). Now consider R^d with ℓ(x, y) = ∥x − y∥^α, for some α > 0. When α ≤ 1, ℓ is also a metric, but whenever α > 1, ℓ no longer satisfies the triangle inequality and is thus a semi-metric only.

Assumption 2 (Local smoothness of f). There exists at least one global optimizer x∗ ∈ X of f (i.e., f(x∗) = sup_{x∈X} f(x)) and for all x ∈ X,

f(x∗) − f(x) ≤ ℓ(x, x∗).
(2)

This condition guarantees that f does not decrease too fast around (at least) one global optimum x∗ (this is a sort of locally one-sided Lipschitz assumption). Now we state the assumptions about the hierarchical partitions.

Assumption 3 (Bounded diameters). There exists a decreasing sequence δ(h) > 0 such that for any depth h ≥ 0 and any cell Xh,i of depth h, we have sup_{x∈Xh,i} ℓ(xh,i, x) ≤ δ(h).

Assumption 4 (Well-shaped cells). There exists ν > 0 such that for any depth h ≥ 0, any cell Xh,i contains an ℓ-ball of radius νδ(h) centered at xh,i.

3 When the semi-metric ℓ is known

In this section, we consider the setting where Assumptions 1-4 hold for a specific semi-metric ℓ, and the semi-metric ℓ is known to the algorithm.

3.1 The DOO Algorithm

The Deterministic Optimistic Optimization (DOO) algorithm described in Figure 1 explicitly uses the knowledge of ℓ (through the use of δ(h)). DOO builds a tree Tt incrementally for t = 1 . . . n, by

Initialization: T1 = {(0, 0)} (root node)
for t = 1 to n do
  Select the leaf (h, j) ∈ Lt with maximum b-value bh,j := f(xh,j) + δ(h).
  Expand this node: add to Tt the K children of (h, j)
end for
Return x(n) = arg max_{(h,i)∈Tn} f(xh,i)

Figure 1: Deterministic optimistic optimization (DOO) algorithm.

selecting at each round t a leaf of the current tree Tt to expand. Expanding a leaf means adding its K children to the current tree (this corresponds to splitting the cell Xh,j into K sub-cells). We start with the root node T1 = {(0, 0)}. We write Lt for the leaves of Tt (the set of nodes whose children are not in Tt), which are the nodes that can be expanded at round t. This algorithm is called optimistic because at each round it expands a cell that may contain the optimum of f, based on the information about (i) the previously observed evaluations of f, and (ii) the knowledge of the local smoothness property (2) of f (since ℓ is known).
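To make the expansion loop concrete, here is a minimal runnable sketch of DOO in one dimension. This is illustrative code, not the author's implementation; the names `doo` and `split`, the binary splitting, and the choice δ(h) = 2^{-h} in the usage below are our assumptions.

```python
def doo(f, delta, split, x0, cell0, n):
    """Deterministic Optimistic Optimization sketch.

    f     : function to maximize
    delta : delta(h), diameter bound of depth-h cells under the semi-metric
    split : split(cell) -> list of (child_point, child_cell) pairs
    x0    : evaluation point of the root cell cell0
    n     : number of node expansions
    """
    # Each leaf is stored as (b-value, depth, point, cell).
    leaves = [(f(x0) + delta(0), 0, x0, cell0)]
    best_x, best_f = x0, f(x0)
    for _ in range(n):
        # Expand the leaf with the largest b-value b = f(x) + delta(h).
        i = max(range(len(leaves)), key=lambda j: leaves[j][0])
        _, h, _, cell = leaves.pop(i)
        for x, child in split(cell):
            fx = f(x)
            if fx > best_f:
                best_x, best_f = x, fx
            leaves.append((fx + delta(h + 1), h + 1, x, child))
    return best_x
```

For instance, maximizing f(x) = 1 − |x − 0.3| on [0, 1] with cells split at their midpoints and δ(h) = 2^{-h} (a valid diameter bound here, since f is 1-Lipschitz), a budget of n = 100 expansions already localizes the maximizer 0.3 very precisely, in line with the exponential rate for d = 0.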
The algorithm computes the b-values bh,j := f(xh,j) + δ(h) of all nodes (h, j) of the current tree Tt and selects the leaf with the highest b-value to expand next. It returns the state x(n) with the highest evaluation.

3.2 Analysis of DOO

Note that Assumption 2 implies that the b-value of any cell containing x∗ upper-bounds f∗, i.e., for any cell Xh,i such that x∗ ∈ Xh,i,

bh,i = f(xh,i) + δ(h) ≥ f(xh,i) + ℓ(xh,i, x∗) ≥ f∗.

As a consequence, a node (h, i) such that f(xh,i) + δ(h) < f∗ will never be expanded (since at any time t, the b-value of such a node is dominated by the b-value of the leaf containing x∗). We deduce that DOO only expands nodes of the set I := ∪_{h≥0} Ih, where

Ih := {nodes (h, i) such that f(xh,i) + δ(h) ≥ f∗}.

In order to derive a loss bound, we now define a measure of the quantity of near-optimal states, called the near-optimality dimension. This measure is closely related to similar measures introduced in [KSU08, BMSS08]. For any ε > 0, let us write Xε := {x ∈ X : f(x) ≥ f∗ − ε} for the set of ε-optimal states.

Definition 1 (Near-optimality dimension). The near-optimality dimension is the smallest d ≥ 0 such that there exists C > 0 such that for any ε > 0, the maximal number of disjoint ℓ-balls of radius νε with centers in Xε is less than Cε^{−d}.

Note that d is not an intrinsic property of f: it characterizes both f and ℓ (since we use ℓ-balls in the packing of near-optimal states), and it also depends on ν. We now bound the number of nodes in Ih.

Lemma 1. We have |Ih| ≤ Cδ(h)^{−d}.

Proof. From Assumption 4, each cell (h, i) contains a ball of radius νδ(h) centered at xh,i; thus if |Ih| = |{xh,i ∈ X_{δ(h)}}| exceeded Cδ(h)^{−d}, there would exist more than Cδ(h)^{−d} disjoint ℓ-balls of radius νδ(h) with centers in X_{δ(h)}, which contradicts the definition of d.

We now provide our loss bound for DOO.

Theorem 1. Let us write h(n) for the smallest integer h such that C Σ_{l=0}^{h} δ(l)^{−d} ≥ n. Then the loss of DOO is bounded as rn ≤ δ(h(n)).

Proof.
Let (hmax, j) be the deepest node expanded by the algorithm up to round n. We know that DOO only expands nodes in the set I. Now, among all node-expansion strategies over the set of expandable nodes I, the uniform strategy is the one that minimizes the depth of the resulting tree. From the definition of h(n) and from Lemma 1, we have Σ_{l=0}^{h(n)−1} |Il| ≤ C Σ_{l=0}^{h(n)−1} δ(l)^{−d} < n; thus the maximum depth of the uniform strategy is at least h(n), and we deduce that hmax ≥ h(n). Now, since node (hmax, j) has been expanded, we have (hmax, j) ∈ I, thus

f(x(n)) ≥ f(x_{hmax,j}) ≥ f∗ − δ(hmax) ≥ f∗ − δ(h(n)).

Remark 1. This bound is in terms of the number of expanded nodes n. The actual number of function evaluations is Kn (since each expansion generates K children that need to be evaluated).

Now, let us make the bound more explicit when the diameter δ(h) of the cells decreases exponentially fast with depth (this case is rather general, as illustrated in the examples described next, as well as in the discussion in [BMSS11]).

Corollary 1. Assume that δ(h) = cγ^h for some constants c > 0 and γ < 1. If the near-optimality dimension of f is d > 0, then the loss decreases polynomially fast:

rn ≤ c^{(d+1)/d} (1 − γ^d)^{−1/d} C^{1/d} n^{−1/d}.

Now, if d = 0, then the loss decreases exponentially fast: rn ≤ cγ^{n/C − 1}.

Proof. From Theorem 1, whenever d > 0 we have n ≤ C Σ_{l=0}^{h(n)} δ(l)^{−d} = cC (γ^{−d(h(n)+1)} − 1)/(γ^{−d} − 1), thus γ^{−d h(n)} ≥ (n/(cC))(1 − γ^d), from which we deduce that rn ≤ δ(h(n)) ≤ cγ^{h(n)} ≤ c^{(d+1)/d} (1 − γ^d)^{−1/d} C^{1/d} n^{−1/d}. Now, if d = 0, then n ≤ C Σ_{l=0}^{h(n)} δ(l)^{−d} = C(h(n) + 1), and we deduce that the loss is bounded as rn ≤ δ(h(n)) = cγ^{h(n)} ≤ cγ^{n/C − 1}.

3.3 Examples

Example 1: Let X = [−1, 1]^D and let f be the function f(x) = 1 − ∥x∥_∞^α, for some α ≥ 1. Consider a K = 2^D-ary tree of partitions with (hyper)-squares. Expanding a node means splitting the corresponding square into 2^D squares of half length. Let xh,i be the center of Xh,i. Consider the following choice of the semi-metric: ℓ(x, y) = ∥x − y∥_∞^β, with β ≤ α.
We have δ(h) = 2^{−hβ} (recall that δ(h) is defined in terms of ℓ), and ν = 1. The optimum of f is x∗ = 0, and f satisfies the local smoothness property (2). Now let us compute its near-optimality dimension. For any ε > 0, Xε is the L∞-ball of radius ε^{1/α} centered at 0, which can be packed by (ε^{1/α}/ε^{1/β})^D L∞-balls of diameter ε (since an L∞-ball of diameter ε is an ℓ-ball of diameter ε^{1/β}). Thus the near-optimality dimension is d = D(1/β − 1/α) (with constant C = 1). From Corollary 1 we deduce that (i) when α > β, then d > 0 and in this case rn = O(n^{−αβ/(D(α−β))}); and (ii) when α = β, then d = 0 and the loss decreases exponentially fast: rn ≤ 2^{1−n}. It is interesting to compare this result to a uniform sampling strategy (i.e., the function is evaluated at a set of points on a uniform grid), which would incur a loss of order n^{−α/D}. We observe that DOO is better than uniform sampling whenever α < 2β and worse when α > 2β. This result provides some indication of how to choose the semi-metric ℓ (thus β), which is a key ingredient of the DOO algorithm (since δ(h) = 2^{−hβ} appears in the b-values): β should be as close as possible to the true (but unknown) α (which can be seen as a local smoothness order of f around its maximum), but never larger than α (otherwise f does not satisfy the local smoothness property (2)).

Example 2: The previous analysis generalizes to any function that is locally equivalent to ∥x − x∗∥^α, for some α > 0 (where ∥·∥ is any norm, e.g., Euclidean, L∞, or L1), around a global maximum x∗ (among a set of global optima assumed to be finite). That is, we assume that there exist constants c1 > 0, c2 > 0, η > 0, such that

f(x∗) − f(x) ≤ c1 ∥x − x∗∥^α, for all x ∈ X,
f(x∗) − f(x) ≥ c2 ∥x − x∗∥^α, for all ∥x − x∗∥ ≤ η.

Let X = [0, 1]^D. Again, consider a K = 2^D-ary tree of partitions with (hyper)-squares. Let ℓ(x, y) = c∥x − y∥^β with c1 ≤ c and β ≤ α (so that f satisfies (2)).
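To make the comparison with uniform sampling concrete, the exponents from Example 1 can be computed directly. The helper below is ours, not from the paper; it merely evaluates the formulas d = D(1/β − 1/α), the DOO rate n^{-1/d}, and the uniform-grid rate n^{-α/D}.

```python
def rate_exponents(D, alpha, beta):
    """Example 1 with beta <= alpha: returns (d, DOO exponent, uniform exponent),
    where the loss behaves as n^(-exponent), so larger exponents are better."""
    d = D * (1.0 / beta - 1.0 / alpha)         # near-optimality dimension
    doo = float('inf') if d == 0 else 1.0 / d  # DOO: r_n = O(n^(-1/d))
    uniform = alpha / D                        # uniform grid: O(n^(-alpha/D))
    return d, doo, uniform
```

For D = 1, α = 1.5, β = 1 (so α < 2β), DOO's exponent 3 beats the uniform grid's 1.5; for D = 1, α = 3, β = 1 (α > 2β), DOO's exponent 1.5 loses to the grid's 3, matching the α < 2β criterion above.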
For simplicity, we do not make all the constants explicit, using the O notation for convenience (the actual constants depend on the choice of the norm ∥·∥). We have δ(h) = O(2^{−hβ}). Now, let us compute the near-optimality dimension. For any ε > 0, Xε is included in a ball of radius (ε/c2)^{1/α} centered at x∗, which can be packed by O((ε^{1/α}/ε^{1/β})^D) ℓ-balls of diameter ε. Thus the near-optimality dimension is d = D(1/β − 1/α), and the results of the previous example apply (up to constants), i.e., for α > β, then d > 0 and rn = O(n^{−αβ/(D(α−β))}); and when α = β, then d = 0 and one obtains the exponential rate rn = O(2^{−α(n/C−1)}). We deduce that the behavior of the algorithm depends on our knowledge of the local smoothness (i.e., α and c1) of the function around its maximum. Indeed, if this smoothness information is available, then one should define the semi-metric ℓ (which impacts the algorithm through the definition of δ(h)) to match this smoothness (i.e., set β = α) and derive an exponential loss rate. Now, if this information is unknown, then one should underestimate the true smoothness (i.e., choose β ≤ α) and suffer a loss rn = O(n^{−αβ/(D(α−β))}), rather than overestimate it (β > α), since in the latter case (2) may no longer hold and there is a risk that the algorithm converges to a local optimum (thus suffering a constant loss).

3.4 Comparison with previous works

Optimistic planning: The deterministic planning problem described in [HM08] considers an optimistic approach for selecting the first action of a sequence x that maximizes the sum of discounted rewards. We can easily cast their problem in our setting by considering the space X of the set of infinite sequences of actions. The metric ℓ(x, y) is γ^{h(x,y)}/(1 − γ), where h(x, y) is the length of the common initial actions between the sequences x and y, and γ is the discount factor. It is easy to show that the function f(x), defined as the discounted sum of rewards along the sequence x of actions, is Lipschitz w.r.t.
ℓ and thus satisfies (2). Their algorithm is very close to DOO: it expands a node of the tree (a finite sequence of actions) with the highest upper bound on the possible value. Their regret analysis makes use of a quantity of near-optimal sequences, from which they define κ ∈ [1, K], which can be seen as the branching factor of the set of nodes I that can be expanded. This measure is related to our near-optimality dimension by κ = γ^{−d}. Corollary 1 directly implies that the loss bound is r_n = O(n^{−log(1/γ)/log κ}), which is the result reported in [HM08]. HOO and Zooming algorithms: The DOO algorithm can be seen as a deterministic version of the HOO algorithm of [BMSS11] and is also closely related to the Zooming algorithm of [KSU08]. Those works consider the case of noisy evaluations of the function (the X-armed bandit setting), where the function is assumed to be weakly Lipschitz (slightly stronger than our Assumption 2). The bounds reported in those works (for the case of exponentially decreasing diameters considered there and in our Corollary 1) are on the cumulative regret, R_n = O(n^{(d+1)/(d+2)}), which translates into the loss considered here as r_n = O(n^{−1/(d+2)}), where d is the near-optimality dimension (or the closely related zooming dimension). We conclude that a deterministic evaluation of the function makes it possible to obtain a much better polynomial rate O(n^{−1/d}) when d > 0, and even an exponential rate when d = 0 (Corollary 1). In the next section, we address the problem of an unknown semi-metric ℓ, which is the main contribution of the paper.

4 When the semi-metric ℓ is unknown

We now consider the setting where Assumptions 1-4 hold for some semi-metric ℓ, but the semi-metric ℓ is unknown. The hierarchical partitioning of the space is still given, but since ℓ is unknown, one cannot use the diameter δ(h) of the cells to design upper bounds, as in DOO. The question we wish to address is: if ℓ is unknown, is it possible to implement an optimistic algorithm with performance guarantees?
We provide a positive answer to this question and, in addition, we show that we can be almost as good as an algorithm that knows ℓ, for the best possible ℓ satisfying Assumptions 1-4.

The maximum depth function t ↦ hmax(t) is a parameter of the algorithm.
Initialization: T_1 = {(0, 0)} (root node). Set t = 1.
while True do
  Set vmax = −∞.
  for h = 0 to min(depth(T_t), hmax(t)) do
    Among all leaves (h, j) ∈ L_t of depth h, select (h, i) ∈ arg max_{(h,j)∈L_t} f(x_{h,j})
    if f(x_{h,i}) ≥ vmax then
      Expand this node: add to T_t the K children (h + 1, i_k), 1 ≤ k ≤ K.
      Set vmax = f(x_{h,i}). Set t = t + 1.
      if t = n then return x(n) = arg max_{(h,i)∈T_n} f(x_{h,i})
    end if
  end for
end while

Figure 2: Simultaneous Optimistic Optimization (SOO) algorithm.

4.1 The SOO algorithm

The idea is to expand at each round simultaneously all the leaves (h, j) for which there exists a semi-metric ℓ such that the corresponding upper bound f(x_{h,j}) + sup_{x∈X_{h,j}} ℓ(x_{h,j}, x) would be the highest. This is implemented by expanding at each round at most one leaf per depth, and a leaf is expanded only if it has the largest value among all leaves of the same or lower depth. The Simultaneous Optimistic Optimization (SOO) algorithm is described in Figure 2. The SOO algorithm takes as parameter a function t ↦ hmax(t) which limits the tree to a maximal depth of hmax(t) after t node expansions. Again, L_t refers to the set of leaves of T_t.

4.2 Analysis of SOO

All the previously defined quantities, such as the diameters δ(h), the sets I_h, and the near-optimality dimension d, depend on the unknown semi-metric ℓ (which is such that Assumptions 1-4 are satisfied). At time t, let us write h*_t for the depth of the deepest expanded node in the branch containing x* (an optimal branch). Let (h*_t + 1, i*) be an optimal node of depth h*_t + 1 (i.e., such that x* ∈ X_{h*_t+1, i*}). Since this node has not been expanded yet, any node (h*_t + 1, i) of depth h*_t + 1 that is expanded before (h*_t + 1, i*) is expanded, is δ(h*_t + 1)-optimal.
Indeed, f(x_{h*_t+1, i}) ≥ f(x_{h*_t+1, i*}) ≥ f* − δ(h*_t + 1). We deduce that once an optimal node of depth h is expanded, it takes at most |I_{h+1}| node expansions at depth h + 1 before the optimal node of depth h + 1 is expanded. From that simple observation, we deduce the following lemma.

Lemma 2. For any depth 0 ≤ h ≤ hmax(t), whenever t ≥ (|I_0| + |I_1| + · · · + |I_h|) hmax(t), we have h*_t ≥ h.

Proof. We prove it by induction. For h = 0, we have h*_t ≥ 0 trivially. Assume that the proposition is true for all 0 ≤ h ≤ h_0 with h_0 < hmax(t). Let us prove that it is also true for h_0 + 1. Let t ≥ (|I_0| + |I_1| + · · · + |I_{h_0+1}|) hmax(t). Since t ≥ (|I_0| + |I_1| + · · · + |I_{h_0}|) hmax(t), we know that h*_t ≥ h_0. So either h*_t ≥ h_0 + 1, in which case the proof is finished, or h*_t = h_0. In the latter case, consider the nodes of depth h_0 + 1 that are expanded. We have seen that as long as the optimal node of depth h_0 + 1 is not expanded, any node of depth h_0 + 1 that is expanded must be δ(h_0 + 1)-optimal, i.e., belongs to I_{h_0+1}. Since there are |I_{h_0+1}| of them, after |I_{h_0+1}| hmax(t) node expansions the optimal one must be expanded, thus h*_t ≥ h_0 + 1.

Theorem 2. Let us write h(n) for the smallest integer h such that

C hmax(n) Σ_{l=0}^{h} δ(l)^{−d} ≥ n.   (3)

Then the loss is bounded as

r_n ≤ δ(min(h(n), hmax(n) + 1)).   (4)

Proof. From Lemma 1 and the definition of h(n) we have hmax(n) Σ_{l=0}^{h(n)−1} |I_l| ≤ C hmax(n) Σ_{l=0}^{h(n)−1} δ(l)^{−d} < n, thus from Lemma 2, when h(n) − 1 ≤ hmax(n), we have h*_n ≥ h(n) − 1. Now, in the case h(n) − 1 > hmax(n), since the SOO algorithm does not expand nodes beyond depth hmax(n), we have h*_n = hmax(n). Thus in all cases h*_n ≥ min(h(n) − 1, hmax(n)). Let (h, j) be the deepest node in T_n that has been expanded by the algorithm up to round n. Thus h ≥ h*_n. Now, from the definition of the algorithm, we only expand a node when its value is larger than the value of all the leaves of equal or lower depths.
Thus, since the node (h, j) has been expanded, its value is at least as high as that of the optimal node (h*_n + 1, i*) of depth h*_n + 1 (which has not been expanded, by definition of h*_n). Thus f(x(n)) ≥ f(x_{h,j}) ≥ f(x_{h*_n+1, i*}) ≥ f* − δ(h*_n + 1) ≥ f* − δ(min(h(n), hmax(n) + 1)).

Remark 2. This result appears very surprising: although the semi-metric ℓ is not known, the performance is almost as good as that of DOO (see Theorem 1), which uses the knowledge of ℓ. The main difference is that the maximal depth hmax(n) appears both as a multiplicative factor in the definition of h(n) in (3) and as a threshold in the loss bound (4). Those two appearances of hmax(n) define a tradeoff between deep (large hmax) and broad (small hmax) exploration. We now illustrate the case of exponentially decreasing diameters.

Corollary 2. Assume that δ(h) = cγ^h for some c > 0 and γ < 1. Consider the two cases:
• Near-optimality dimension d > 0. Let the depth function be hmax(t) = t^ε, for some ε > 0 arbitrarily small. Then, for n large enough (as a function of ε), the loss of SOO is bounded as r_n ≤ c^{(d+1)/d} (C/(1 − γ^d))^{1/d} n^{−(1−ε)/d}.
• Near-optimality dimension d = 0. Let the depth function be hmax(t) = √t. Then the loss of SOO is bounded as r_n ≤ c γ^{√n·min(1/C, 1) − 1}.

Proof. From the definition of h(n) in (3), when d > 0 we have n ≤ C hmax(n) Σ_{l=0}^{h(n)} δ(l)^{−d} = cC hmax(n) (γ^{−d(h(n)+1)} − 1)/(γ^{−d} − 1), thus for the choice hmax(n) = n^ε we deduce γ^{−d h(n)} ≥ n^{1−ε}(1 − γ^d)/(cC). Thus h(n) is logarithmic in n, and for n large enough (as a function of ε) we have h(n) ≤ hmax(n) + 1, thus r_n ≤ δ(min(h(n), hmax(n) + 1)) = δ(h(n)) ≤ c γ^{h(n)} ≤ c^{(d+1)/d} (C/(1 − γ^d))^{1/d} n^{−(1−ε)/d}. Now, if d = 0, then n ≤ C hmax(n) Σ_{l=0}^{h(n)} δ(l)^{−d} = C hmax(n)(h(n) + 1), thus for the choice hmax(n) = √n we deduce that the loss decreases as r_n ≤ δ(min(h(n), hmax(n) + 1)) ≤ c γ^{√n·min(1/C, 1) − 1}.

Remark 3. The maximal depth function hmax(t) is still a parameter of the algorithm, which influences its behavior (deep versus broad exploration of the tree).
However, for a large class of problems (e.g., when d > 0), the choice of the order ε does not impact the asymptotic performance of the algorithm.

Remark 4. Since our algorithm does not depend on ℓ, our analysis actually holds for any semi-metric ℓ that satisfies Assumptions 1-4; thus Theorem 2 and Corollary 2 hold for the best possible choice of such an ℓ. In particular, we can think of problems for which there exists a semi-metric ℓ such that the corresponding near-optimality dimension d is 0. Instead of describing a general class of problems satisfying this property, we illustrate in the next subsection non-trivial optimization problems in X = R^D where there exists an ℓ such that d = 0.

4.3 Examples

Example 1: Consider the previous Example 1, where X = [−1, 1]^D and f is the function f(x) = 1 − ∥x∥^α_∞, where α ≥ 1 is unknown. We have seen that DOO with the metric ℓ(x, y) = ∥x − y∥^β_∞ provides a polynomial loss r_n = O(n^{−(1/D)·αβ/(α−β)}) whenever β < α, and an exponential loss r_n ≤ 2^{1−n} when β = α. However, here α is unknown. Now consider the SOO algorithm with the maximum depth function hmax(t) = √t. As mentioned before, SOO does not require ℓ, so we can apply the analysis with any ℓ that satisfies Assumptions 1-4. Let us consider ℓ(x, y) = ∥x − y∥^α_∞. Then δ(h) = 2^{−hα}, ν = 1, and the near-optimality dimension of f under ℓ is d = 0 (with C = 1). We deduce that the loss of SOO is r_n ≤ 2^{(1−√n)α}. Thus SOO achieves a stretched-exponential loss without requiring knowledge of α. Note that a uniform grid attains a loss of n^{−α/D}, which decreases only polynomially (and is subject to the curse of dimensionality). Thus, in this example SOO is always better than both uniform sampling and DOO, except when one knows α perfectly and uses DOO with β = α (in which case one obtains an exponential loss). The fact that SOO is not as good as an optimally-fitted DOO comes from the truncation of SOO at the maximal depth hmax(n) = √n (whereas an optimally-fitted DOO would explore the tree up to a depth linear in n).
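The behavior in this example can be reproduced with a minimal 1-D sketch of the SOO procedure of Figure 2. The ternary splitting and the test function below are our own illustrative choices; note that, as required, the true smoothness α is never given to the algorithm.

```python
import math

def soo(f, lo, hi, n_expansions, hmax=lambda t: int(math.sqrt(t))):
    """Simultaneous Optimistic Optimization (SOO) sketch on [lo, hi].

    Ternary splitting; cells are evaluated at their centers.  Each sweep
    over the depths expands at most one leaf per depth, and only if its
    value is at least the best value seen at shallower depths in the
    sweep.  No semi-metric ell (and no delta(h)) is needed.
    """
    leaves = {0: [(lo, hi, f((lo + hi) / 2.0))]}  # depth -> [(a, b, f(center))]
    best = ((lo + hi) / 2.0, leaves[0][0][2])
    t = 1
    while t < n_expansions:
        vmax = -math.inf
        for h in sorted(leaves):                  # sweep depths 0, 1, 2, ...
            if h > hmax(t) or not leaves[h]:
                continue
            j = max(range(len(leaves[h])), key=lambda i: leaves[h][i][2])
            a, b, v = leaves[h][j]
            if v < vmax:
                continue                          # a shallower leaf beats it
            vmax = v
            leaves[h].pop(j)                      # expand into 3 children
            w = (b - a) / 3.0
            for k in range(3):
                ca, cb = a + k * w, a + (k + 1) * w
                fv = f((ca + cb) / 2.0)
                if fv > best[1]:
                    best = ((ca + cb) / 2.0, fv)
                leaves.setdefault(h + 1, []).append((ca, cb, fv))
            t += 1
            if t >= n_expansions:
                break
    return best

# alpha is the true (unknown) smoothness around x*; SOO never sees it.
alpha, xstar = 2.0, 0.3
x, v = soo(lambda x: 1.0 - abs(x - xstar) ** alpha, -1.0, 1.0, 300)
```

With the default hmax(t) = √t, the recovered point converges to x* at the stretched-exponential rate discussed above.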
Example 2: The same conclusion holds for Example 2, where we consider a function f defined on [0, 1]^D that is locally equivalent to ∥x − x*∥^α for some unknown α > 0 (see the precise assumptions in Section 3.3). We have seen that DOO using ℓ(x, y) = c∥x − y∥^β with β < α has a loss r_n = O(n^{−(1/D)·αβ/(α−β)}), and that when α = β, we have d = 0 and the loss is r_n = O(2^{−α(n/C−1)}). Now, by using SOO (which does not require the knowledge of α) with hmax(t) = √t, we deduce the stretched-exponential loss r_n = O(2^{−α√n/C}) (by using ℓ(x, y) = ∥x − y∥^α in the analysis, which gives δ(h) = 2^{−hα} and d = 0).

4.4 Comparison with the DIRECT algorithm

The DIRECT (DIviding RECTangles) algorithm [JPS93, FK04, Gab01] is a Lipschitz optimization algorithm for the case where the Lipschitz constant L of f is unknown. It uses an optimistic splitting technique similar to ours where, at each round, it expands the set of nodes that have the highest upper bound (as defined in DOO) for at least some value of L. To the best of our knowledge, there is no finite-time analysis of this algorithm (only the consistency property lim_{n→∞} r_n = 0 is proven in [FK04]). Our approach generalizes DIRECT, and we are able to derive finite-time loss bounds in a much broader setting where the function is only locally smooth and the space is semi-metric. We are not aware of any other finite-time analysis of a global optimization algorithm that does not require knowledge of the smoothness of the function.

5 Conclusions

We presented two algorithms: DOO requires the knowledge of the semi-metric ℓ under which the function f is locally smooth (according to Assumption 2); SOO does not require this knowledge and performs almost as well as an optimally-fitted DOO (i.e., DOO with the best choice of ℓ satisfying Assumptions 1-4). We reported finite-time loss bounds using the near-optimality dimension d, which relates the local smoothness of f around its maximum to the quantity of near-optimal states, measured by the semi-metric ℓ.
We provided illustrative examples of the performance of SOO in Euclidean spaces where the local smoothness of f is unknown. Possible future research directions include (i) deriving problem-dependent lower bounds, (ii) characterizing classes of functions f for which there exists a semi-metric ℓ such that f is locally smooth w.r.t. ℓ and whose corresponding near-optimality dimension is d = 0 (in order to obtain a stretched-exponentially decreasing loss), and (iii) extending the SOO algorithm to stochastic X-armed bandits (optimization of a noisy function) when the smoothness of f is unknown.

Acknowledgements: French ANR EXPLO-RA (ANR-08-COSI-004) and the European project COMPLACS (FP7, grant agreement no. 231495).

References
[ABM10] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In Conference on Learning Theory, 2010.
[ACBF02] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning Journal, 47(2-3):235–256, 2002.
[AOS07] P. Auer, R. Ortner, and Cs. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. In 20th Conference on Learning Theory, pages 454–468, 2007.
[BM10] S. Bubeck and R. Munos. Open loop optimistic planning. In Conference on Learning Theory, 2010.
[BMS09] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory, pages 23–37, 2009.
[BMSB11] L. Busoniu, R. Munos, B. De Schutter, and R. Babuska. Optimistic planning for sparsely stochastic systems. In IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning, 2011.
[BMSS08] S. Bubeck, R. Munos, G. Stoltz, and Cs. Szepesvári. Online optimization of X-armed bandits. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 22, pages 201–208. MIT Press, 2008.
[BMSS11] S. Bubeck, R. Munos, G.
Stoltz, and Cs. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655–1695, 2011.
[BSY11] S. Bubeck, G. Stoltz, and J. Y. Yu. Lipschitz bandits without the Lipschitz constant. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory, 2011.
[CM07] P.-A. Coquelin and R. Munos. Bandit algorithms for tree search. In Uncertainty in Artificial Intelligence, 2007.
[FK04] D. E. Finkel and C. T. Kelley. Convergence analysis of the DIRECT algorithm. Technical report, North Carolina State University, 2004.
[Flo99] C. A. Floudas. Deterministic Global Optimization: Theory, Algorithms and Applications. Kluwer Academic Publishers, Dordrecht/Boston/London, 1999.
[Gab01] J. M. X. Gablonsky. Modifications of the DIRECT algorithm. PhD thesis, 2001.
[GWMT06] S. Gelly, Y. Wang, R. Munos, and O. Teytaud. Modification of UCT with patterns in Monte-Carlo Go. Technical report, INRIA RR-6062, 2006.
[Han92] E. R. Hansen. Global Optimization Using Interval Analysis. Marcel Dekker, New York, 1992.
[HM08] J.-F. Hren and R. Munos. Optimistic planning of deterministic systems. In Recent Advances in Reinforcement Learning (European Workshop on Reinforcement Learning), Springer LNAI 5323, pages 151–164, 2008.
[HT96] R. Horst and H. Tuy. Global Optimization: Deterministic Approaches. Springer, Berlin/Heidelberg/New York, 3rd edition, 1996.
[JPS93] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79(1):157–181, 1993.
[Kea96] R. B. Kearfott. Rigorous Global Search: Continuous Problems. Kluwer Academic Publishers, Dordrecht/Boston/London, 1996.
[Kle04] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In Advances in Neural Information Processing Systems, 2004.
[KS06] L. Kocsis and Cs. Szepesvári. Bandit based Monte-Carlo planning.
In Proceedings of the 15th European Conference on Machine Learning, pages 282–293, 2006.
[KSU08] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[Neu90] A. Neumaier. Interval Methods for Systems of Equations. Cambridge University Press, 1990.
[Pin96] J. D. Pintér. Global Optimization in Action (Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications). Kluwer Academic Publishers, 1996.
[SKKS10] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning, pages 1015–1022, 2010.
[Sli11] A. Slivkins. Multi-armed bandits on implicit metric spaces. In Advances in Neural Information Processing Systems, 2011.
[SS00] R. G. Strongin and Ya. D. Sergeyev. Global Optimization with Non-Convex Constraints: Sequential and Parallel Algorithms. Kluwer Academic Publishers, Dordrecht/Boston/London, 2000.
Robust Multi-Class Gaussian Process Classification Daniel Hernández-Lobato ICTEAM - Machine Learning Group Université catholique de Louvain Place Sainte Barbe, 2 Louvain-La-Neuve, 1348, Belgium danielhernandezlobato@gmail.com José Miguel Hernández-Lobato Department of Engineering University of Cambridge Trumpington Street, Cambridge CB2 1PZ, United Kingdom jmh233@eng.cam.ac.uk Pierre Dupont ICTEAM - Machine Learning Group Université catholique de Louvain Place Sainte Barbe, 2 Louvain-La-Neuve, 1348, Belgium pierre.dupont@uclouvain.be

Abstract

Multi-class Gaussian Process Classifiers (MGPCs) are often affected by overfitting problems when labeling errors occur far from the decision boundaries. To prevent this, we investigate a robust MGPC (RMGPC) which considers labeling errors independently of their distance to the decision boundaries. Expectation propagation is used for approximate inference. Experiments with several datasets in which noise is injected in the labels illustrate the benefits of RMGPC. This method performs better than other Gaussian process alternatives based on considering latent Gaussian noise or heavy-tailed processes. When no noise is injected in the labels, RMGPC still performs equal to or better than the other methods. Finally, we show how RMGPC can be used to successfully identify data instances which are difficult to classify correctly in practice.

1 Introduction

Multi-class Gaussian process classifiers (MGPCs) are a Bayesian approach to non-parametric multi-class classification with the advantage of producing probabilistic outputs that measure uncertainty in the predictions [1]. MGPCs assume that there are some latent functions (one per class) whose value at a certain location is related by some rule to the probability of observing a specific class there. The prior for each of these latent functions is specified to be a Gaussian process. The task of interest is to make inference about the latent functions using Bayes' theorem.
Nevertheless, exact Bayesian inference in MGPCs is typically intractable, and one has to rely on approximate methods. Approximate inference can be implemented using Markov chain Monte Carlo sampling, the Laplace approximation, or expectation propagation [2, 3, 4, 5]. A problem of MGPCs is that, typically, the assumed rule that relates the values of the latent functions to the different classes does not consider the possibility of observing errors in the labels of the data, or at most only considers the possibility of observing errors near the decision boundaries of the resulting classifier [1]. The consequence is that over-fitting can become a serious problem when errors far from these boundaries are observed in practice. A notable exception is found in the binary classification case when the labeling rule suggested in [6] is used. Such a rule considers the possibility of observing errors independently of their distance to the decision boundary [7, 8]. However, the generalization of this rule to the multi-class case is difficult. Existing generalizations are in practice simplified so that the probability of observing errors in the labels is zero [3]. Labeling errors in the context of MGPCs are often accounted for by considering that the latent functions of the MGPC are contaminated with additive Gaussian noise [1]. Nevertheless, this approach again has the disadvantage of considering only errors near the decision boundaries of the resulting classifier and is expected to lead to over-fitting problems when errors are actually observed far from the boundaries. Finally, some authors have replaced the underlying Gaussian processes of the MGPC with heavy-tailed processes [9]. These processes have marginal distributions with heavier tails than those of a Gaussian distribution and are in consequence expected to be more robust to labeling errors far from the decision boundaries.
In this paper we investigate a robust MGPC (RMGPC) that addresses labeling errors by introducing a set of binary latent variables, one for each data instance. These latent variables indicate whether the assumed labeling rule is satisfied for the associated instances or not. If the rule is not satisfied for a given instance, we consider that the corresponding label has been randomly selected with uniform probability among the possible classes. This is used as a back-up mechanism to explain data instances that are highly unlikely to stem from the assumed labeling rule. The resulting likelihood function depends only on the total number of errors, and not on the distances of these errors to the decision boundaries. Thus, RMGPC is expected to be fairly robust when the data contain noise in the labels. In this model, expectation propagation (EP) can be used to efficiently carry out approximate inference [10]. The cost of EP is O(l n³), where n is the number of training instances and l is the number of different classes. RMGPC is evaluated on four datasets extracted from the UCI repository [11] and from other sources [12]. These experiments show the beneficial properties of the proposed model in terms of prediction performance. When labeling noise is introduced in the data, RMGPC outperforms other MGPC approaches based on considering latent Gaussian noise or heavy-tailed processes. When there is no noise in the data, RMGPC performs better than or equivalently to these alternatives. Extra experiments also illustrate the utility of RMGPC for identifying data instances that are unlikely to stem from the assumed labeling rule. The organization of the rest of the manuscript is as follows: Section 2 introduces the RMGPC model. Section 3 describes how expectation propagation can be used for approximate Bayesian inference. Section 4 then evaluates and compares the predictive performance of RMGPC. Finally, Section 5 summarizes the conclusions of the investigation.
2 Robust Multi-Class Gaussian Process Classification

Consider n training instances in the form of a collection of feature vectors X = {x_1, . . . , x_n} with associated labels y = {y_1, . . . , y_n}, where y_i ∈ C = {1, . . . , l} and l is the number of classes. We follow [3] and assume that, in the noise-free scenario, the predictive rule for y_i given x_i is

y_i = arg max_k f_k(x_i),   (1)

where f_1, . . . , f_l are unknown latent functions that have to be estimated. The prediction rule given by (1) is unlikely to always hold in practice. For this reason, we introduce a set of binary latent variables z = {z_1, . . . , z_n}, one per data instance, to indicate whether (1) is satisfied (z_i = 0) or not (z_i = 1). In the latter case, the pair (x_i, y_i) is considered to be an outlier and, instead of assuming that y_i is generated by (1), we assume that x_i is assigned a random class sampled uniformly from C. This is equivalent to assuming that f_1, . . . , f_l have been contaminated with an infinite amount of noise and serves as a back-up mechanism to explain observations which are highly unlikely to originate from (1). The likelihood function for f = (f_1(x_1), . . . , f_1(x_n), f_2(x_1), . . . , f_2(x_n), . . . , f_l(x_1), . . . , f_l(x_n))^T given y, X and z is

P(y|X, z, f) = Π_{i=1}^n [ Π_{k≠y_i} Θ(f_{y_i}(x_i) − f_k(x_i)) ]^{1−z_i} (1/l)^{z_i},   (2)

where Θ(·) is the Heaviside step function. In (2), the contribution to the likelihood of each instance (x_i, y_i) is a mixture of two terms: a first term equal to Π_{k≠y_i} Θ(f_{y_i}(x_i) − f_k(x_i)) and a second term equal to 1/l. The mixing coefficient is the prior probability of z_i = 1. Note that only the first term actually depends on the accuracy of f. In particular, it takes value 1 when the corresponding instance is correctly classified using (1) and 0 otherwise. Thus, the likelihood function described in (2) considers only the total number of prediction errors made by f and not the distance of these errors to the decision boundary.
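As an illustration, the labeling mechanism defined by (1) and (2) can be simulated as follows. This is a sketch with illustrative latent values; ρ here is a fixed outlier probability, standing in for the prior on z introduced below.

```python
import random

random.seed(0)

def sample_labels(f, rho, rng=random):
    """Sample labels from the robust labeling model above.

    f   : list of l lists, f[k][i] = f_k(x_i) (latent function values)
    rho : probability that an instance is an outlier (z_i = 1)

    Non-outliers follow the argmax rule (1); outliers receive a
    uniformly random class with probability 1/l each, matching the
    likelihood (2).
    """
    l, n = len(f), len(f[0])
    y, z = [], []
    for i in range(n):
        zi = 1 if rng.random() < rho else 0
        if zi:
            yi = rng.randrange(l)                       # uniform class
        else:
            yi = max(range(l), key=lambda k: f[k][i])   # rule (1)
        y.append(yi)
        z.append(zi)
    return y, z

# Illustrative latent values standing in for draws of f_1, ..., f_l.
f = [[random.gauss(0, 1) for _ in range(500)] for _ in range(3)]
y, z = sample_labels(f, rho=0.1)
```

Roughly a fraction ρ of the labels ignore the latent functions entirely, which is exactly the kind of error, arbitrarily far from the decision boundaries, that the likelihood (2) is designed to absorb.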
The consequence is that (2) is expected to be robust when the observed data contain labeling errors far from the decision boundaries. We do not have any preference for a particular instance to be considered an outlier. Thus, z is set to follow a priori a factorizing multivariate Bernoulli distribution:

P(z|ρ) = Bern(z|ρ) = Π_{i=1}^n ρ^{z_i}(1 − ρ)^{1−z_i},   (3)

where ρ is the prior fraction of training instances expected to be outliers. The prior for ρ is set to be a conjugate beta distribution, namely

P(ρ) = Beta(ρ|a_0, b_0) = ρ^{a_0−1}(1 − ρ)^{b_0−1} / B(a_0, b_0),   (4)

where B(·, ·) is the beta function and a_0 and b_0 are free hyper-parameters. The values of a_0 and b_0 do not have a big impact on the final model, provided that they are consistent with the prior belief that most of the observed data are labeled using (1) (b_0 > a_0) and that they are small, so that (4) is not too constraining. We suggest a_0 = 1 and b_0 = 9. As in [3], the prior for f_1, . . . , f_l is set to be a product of Gaussian processes with means equal to 0 and covariance matrices K_1, . . . , K_l, as computed by l covariance functions c_1(·, ·), . . . , c_l(·, ·):

P(f) = Π_{k=1}^l N(f_k|0, K_k),   (5)

where N(·|µ, Σ) denotes a multivariate Gaussian density with mean vector µ and covariance matrix Σ, f is defined as in (2), and f_k = (f_k(x_1), f_k(x_2), . . . , f_k(x_n))^T for k = 1, . . . , l.

2.1 Inference, Prediction and Outlier Identification

Given the observed data X and y, we make inference about f, z and ρ using Bayes' theorem:

P(ρ, z, f|y, X) = P(y|X, z, f) P(z|ρ) P(ρ) P(f) / P(y|X),   (6)

where P(y|X) is the model evidence, a constant useful to perform model comparison under a Bayesian setting [13]. The posterior distribution and the likelihood function can be used to compute a predictive distribution for the label y⋆ ∈ C associated with a new observation x⋆:

P(y⋆|x⋆, y, X) = Σ_{z, z⋆} ∫ P(y⋆|x⋆, z⋆, f⋆) P(z⋆|ρ) P(f⋆|f) P(ρ, z, f|y, X) df df⋆ dρ,   (7)

where f⋆ = (f_1(x⋆), . . .
, f_l(x⋆))^T, P(y⋆|x⋆, z⋆, f⋆) = [Π_{k≠y⋆} Θ(f_{y⋆}(x⋆) − f_k(x⋆))]^{1−z⋆} (1/l)^{z⋆}, P(z⋆|ρ) = ρ^{z⋆}(1 − ρ)^{1−z⋆}, and P(f⋆|f) is a product of l conditional Gaussians with zero mean and covariance matrices given by the covariance functions corresponding to K_1, . . . , K_l. The posterior for z is

P(z|y, X) = ∫ P(ρ, z, f|y, X) df dρ.   (8)

This distribution is useful to compute the posterior probability that the i-th training instance is an outlier, i.e., P(z_i = 1|y, X). For this, we only have to marginalize (8) with respect to all the components of z except z_i. Unfortunately, the exact computation of (6), (7) and P(z_i = 1|y, X) is intractable for typical classification problems. Nevertheless, these expressions can be approximated using expectation propagation [10].

3 Expectation Propagation

The joint probability of f, z, ρ and y given X can be written as the product of l(n + 1) + 1 factors:

P(f, z, ρ, y|X) = P(y|X, z, f) P(z|ρ) P(ρ) P(f) = [Π_{i=1}^n Π_{k≠y_i} ψ_ik(f, z, ρ)] [Π_{i=1}^n ψ_i(f, z, ρ)] ψ_ρ(f, z, ρ) [Π_{k=1}^l ψ_k(f, z, ρ)],   (9)

where each factor has the following form:

ψ_ik(f, z, ρ) = Θ(f_{y_i}(x_i) − f_k(x_i))^{1−z_i} (l^{−1/(l−1)})^{z_i},
ψ_i(f, z, ρ) = ρ^{z_i}(1 − ρ)^{1−z_i},
ψ_ρ(f, z, ρ) = ρ^{a_0−1}(1 − ρ)^{b_0−1} / B(a_0, b_0),
ψ_k(f, z, ρ) = N(f_k|0, K_k).   (10)

Let Ψ be the set that contains all these exact factors. Expectation propagation (EP) approximates each ψ ∈ Ψ using a corresponding simpler factor ψ̃ such that

[Π_{i=1}^n Π_{k≠y_i} ψ_ik] [Π_{i=1}^n ψ_i] ψ_ρ [Π_{k=1}^l ψ_k] ≈ [Π_{i=1}^n Π_{k≠y_i} ψ̃_ik] [Π_{i=1}^n ψ̃_i] ψ̃_ρ [Π_{k=1}^l ψ̃_k].   (11)

In (11) the dependence of the exact and the approximate factors on f, z and ρ has been omitted to improve readability. The approximate factors ψ̃ are constrained to belong to the same family of exponential distributions, but they do not have to integrate to one. Once normalized with respect to f, z and ρ, (9) becomes the exact posterior distribution (6).
Similarly, the normalized product of the approximate factors becomes an approximation to the posterior distribution:

Q(f, z, ρ) = (1/Z) [Π_{i=1}^n Π_{k≠y_i} ψ̃_ik(f, z, ρ)] [Π_{i=1}^n ψ̃_i(f, z, ρ)] ψ̃_ρ(f, z, ρ) [Π_{k=1}^l ψ̃_k(f, z, ρ)],   (12)

where Z is a normalization constant that approximates P(y|X). Exponential distributions are closed under product and division operations. Therefore, Q has the same form as the approximate factors, and Z can be readily computed. In practice, the form of Q is selected first, and the approximate factors are then constrained to have the same form as Q. For each approximate factor ψ̃, define Q\ψ̃ ∝ Q/ψ̃ and consider the corresponding exact factor ψ. EP iteratively updates each ψ̃, one by one, so that the Kullback-Leibler (KL) divergence between ψQ\ψ̃ and ψ̃Q\ψ̃ is minimized. The EP algorithm involves the following steps:

1. Initialize all the approximate factors ψ̃ and the posterior approximation Q to be uniform.
2. Repeat until Q converges:
   (a) Select an approximate factor ψ̃ to refine and compute Q\ψ̃ ∝ Q/ψ̃.
   (b) Update the approximate factor ψ̃ so that KL(ψQ\ψ̃ || ψ̃Q\ψ̃) is minimized.
   (c) Update the posterior approximation Q to the normalized version of ψ̃Q\ψ̃.
3. Evaluate Z ≈ P(y|X) as the integral of the product of all the approximate factors.

The optimization problem in step 2-(b) is convex with a single global optimum. The solution to this problem is found by matching sufficient statistics between ψQ\ψ̃ and ψ̃Q\ψ̃. EP is not guaranteed to converge globally, but extensive empirical evidence shows that most of the time it converges to a fixed point [10]. Non-convergence can be prevented by damping the EP updates [14]. Damping is a standard procedure and consists in setting ψ̃ = [ψ̃_new]^ϵ [ψ̃_old]^{1−ϵ} in step 2-(b), where ψ̃_new is the updated factor and ψ̃_old is the factor before the update. Here ϵ ∈ [0, 1] is a parameter which controls the amount of damping. When ϵ = 1, the standard EP update operation is recovered.
When ϵ = 0, no update of the approximate factors occurs. In our experiments ϵ = 0.5 gives good results, and EP seems to always converge to a stationary solution. EP has shown good overall performance when compared to other methods in the task of classification with binary Gaussian processes [15, 16].

3.1 The Posterior Approximation

The posterior distribution (6) is approximated by a distribution Q in the exponential family:

Q(f, z, ρ) = Bern(z|p) Beta(ρ|a, b) Π_{k=1}^l N(f_k|µ_k, Σ_k),   (13)

where N(·|µ, Σ) is a multivariate Gaussian distribution with mean µ and covariance matrix Σ; Beta(·|a, b) is a beta distribution with parameters a and b; and Bern(·|p) is a multivariate Bernoulli distribution with parameter vector p. The parameters µ_k and Σ_k for k = 1, . . . , l, and p, a and b, are estimated by EP. Note that Q factorizes with respect to f_k for k = 1, . . . , l. This makes the cost of the EP algorithm linear in l, the total number of classes. More accurate approximations can be obtained at a cubic cost in l by considering correlations among the f_k. The choice of (13) also makes all the required computations tractable and provides good results in Section 4. The approximate factors must have the same functional form as Q, but they need not be normalized. However, the exact factors ψ_ik with i = 1, . . . , n and k ≠ y_i, corresponding to the likelihood (2), only depend on f_k(x_i), f_{y_i}(x_i) and z_i. Thus, the beta part of the corresponding approximate factors can be removed, and the multivariate Gaussian distributions simplify to univariate Gaussians. Specifically, the approximate factors ψ̃_ik with i = 1, . . . , n and k ≠ y_i are

ψ̃_ik(f, z, ρ) = s̃_ik exp{ −(1/2) [ (f_k(x_i) − µ̃_ik)² / ν̃_ik + (f_{y_i}(x_i) − µ̃_ik^{y_i})² / ν̃_ik^{y_i} ] } p̃_ik^{z_i} (1 − p̃_ik)^{1−z_i},   (14)

where s̃_ik, p̃_ik, µ̃_ik, ν̃_ik, µ̃_ik^{y_i} and ν̃_ik^{y_i} are free parameters to be estimated by EP. Similarly, the exact factors ψ_i, with i = 1, . . . , n, corresponding to the prior for the latent variables z, (3), only depend on ρ and z_i.
Thus, the Gaussian part of the corresponding approximate factors can be removed and the multivariate Bernoulli distribution simplifies to a univariate Bernoulli. The resulting factors are: ˜ψi(f, z, ρ) = ˜siρ˜ai−1(1 −ρ) ˜bi−1˜pzi i (1 −˜pi)1−zi , (15) for i = 1, . . . , n, where ˜si, ˜ai, ˜bi, ˜pi are free parameters to be estimated by EP. The exact factor ψρ corresponding to the prior for ρ, (4), need not be approximated, i.e., ˜ψρ = ψρ. The same applies to the exact factors ψk, for k = 1, . . . , l, corresponding to the priors for f1, . . . , fl, (5). We set ˜ψk = ψk for k = 1, . . . , l. All these factors ˜ψρ and ˜ψk, for k = 1, . . . , l, need not be refined by EP. 3.2 The EP Update Operations The approximate factors ˜ψik, for i = 1, . . . , n and k ̸= yi, corresponding to the likelihood, are refined in parallel, as in [17]. This notably simplifies the EP updates. In particular, for each ˜ψik we compute Q\ ˜ ψik as in step 2-(a) of EP. Given each Q\ ˜ ψik and the exact factor ψik, we update each ˜ψik. Then, Qnew is re-computed as the normalized product of all the approximate factors. Preliminary experiments indicate that parallel and sequential updates converge to the same solution. The remaining factors, i.e., ˜ψi, for i = 1, . . . , n, are updated sequentially, as in standard EP. Further details about all these EP updates are found in the supplementary material1. The cost of EP, assuming constant iterations until convergence, is O(ln3). This is the cost of inverting l matrices of size n×n. 3.3 Model Evidence, Prediction and Outlier Identification Once EP has converged, we can evaluate the approximation to the model evidence as the integral of the product of all the approximate terms. 
This gives the following result: log Z = B + " n X i=1 log Di # + 1 2 " l X k=1 Ck −log |Mk| # +   n X i=1  X k̸=yi log ˜sik  + log ˜si  , (16) where Di = ˜pi  Y k̸=yi ˜pik  + (1 −˜pi)  Y k̸=yi (1 −˜pik)  , Ck = µT kΣ−1 k µk − n X i=1 τ k i , τ k i = (P k̸=yi(˜µyi ik)2/˜νyi ik if k = yi , ˜µ2 ik/˜νik otherwise , B = log B(a, b) −log B(a0, b0) , (17) and Mk = ΛkKk + I, with Λk a diagonal matrix defined as Λk ii = P k̸=yi(˜νyi ik)−1, if yi = k, and Λk ii = ˜ν−1 ik otherwise. It is possible to compute the gradient of log Z with respect to θkj, i.e., the j-th 1The supplementary material is available online at http://arantxa.ii.uam.es/%7edhernan/RMGPC/. 5 hyper-parameter of the k-th covariance function used to compute Kk. Such gradient is useful to find the covariance functions ck(·, ·), with k = 1, . . . , l, that maximize the model evidence. Specifically, one can show that, if EP has converged, the gradient of the free parameters of the approximate factors with respect to θkj is zero [18]. Thus, the gradient of log Z with respect to θkj is ∂log Z ∂θkj = −1 2trace  M−1 k Λk ∂Kk ∂θkj  + 1 2(υk)T(M−1 k )T ∂Kk ∂θkj M−1 k υk , (18) where υk = (bk 1, bk 2, . . . , bk n)T with bk i = P k̸=yi ˜µyi ik/˜νyi ik, if k = yi, and bk i = ˜µik/˜νik otherwise. The predictive distribution (7) can be approximated when the exact posterior is replaced by Q: P(y⋆|x⋆, y, X) ≈ρ l + (1 −ρ) Z N (u|my⋆, vy⋆) Y k̸=y⋆ Φ u −mk √vk  du , (19) where Φ(·) is the cumulative probability function of a standard Gaussian distribution and ρ = a/(a + b) , mk = (k⋆ k)TK−1 k Mkυk , vk = κ⋆ k −(k⋆ k)T K−1 k −K−1 k ΣkK−1 k  k⋆ k , (20) for k = 1, . . . , l, with k⋆ k equal to the covariances between x⋆and X, and with κ⋆ k equal to the corresponding variance at x⋆, as computed by ck(·, ·). There is no closed form expression for the integral in (19). However, it can be easily approximated by a one-dimensional quadrature. 
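For instance (an illustrative sketch, not the authors' implementation), substituting u = m_{y⋆} + √(v_{y⋆}) t turns the integral in (19) into a one-dimensional Gaussian expectation that a simple quadrature handles well:

```python
import math

def norm_pdf(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def norm_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def predictive_prob(y, m, v, rho, grid=2001, span=8.0):
    """Approximate eq. (19) by trapezoidal quadrature after the change of
    variable u = m[y] + sqrt(v[y]) * t, with t on [-span, span]."""
    l = len(m)
    s = math.sqrt(v[y])
    h = 2.0 * span / (grid - 1)
    total = 0.0
    for i in range(grid):
        t = -span + i * h
        p = norm_pdf(t)
        u = m[y] + s * t
        for k in range(l):
            if k != y:
                p *= norm_cdf((u - m[k]) / math.sqrt(v[k]))
        total += p * (0.5 if i in (0, grid - 1) else 1.0)
    return rho / l + (1.0 - rho) * total * h
```

A quick sanity check: when all classes share the same mean and variance, the integral reduces exactly to 1/l, so the predictive probability is 1/l regardless of ρ.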
The posterior (8) of z can be similarly approximated by marginalizing Q with respect to ρ and f: P(z|y, X) ≈Bern(z|p) = n Y i=1  pzi i (1 −pi)1−zi , (21) where p = (p1, . . . , pn)T. Each parameter pi of Q, with 1 ≤i ≤n, approximates P(zi = 1|y, X), i.e., the posterior probability that the i-th training instance is an outlier. Thus, these parameters can be used to identify the data instances that are more likely to be outliers. The cost of evaluating (16) and (18) is respectively O(ln3) and O(n3). The cost of evaluating (19) is O(ln2) since K−1 k , with k = 1, . . . , l, needs to be computed only once. 4 Experiments The proposed Robust Multi-class Gaussian Process Classifier (RMGPC) is compared in several experiments with the Standard Multi-class Gaussian Process Classifier (SMGPC) suggested in [3]. SMGPC is a particular case of RMGPC which is obtained when b0 →∞. This forces the prior distribution for ρ, (4), to be a delta centered at the origin, indicating that it is not possible to observe outliers. SMGPC explains data instances for which (1) is not satisfied in practice by considering Gaussian noise in the estimation of the functions f1, . . . , fl, which is the typical approach found in the literature [1]. RMGPC is also compared in these experiments with the Heavy-Tailed Process Classifier (HTPC) described in [9]. In HTPC, the prior for each latent function fk, with k = 1, . . . , l, is a Gaussian Process that has been non-linearly transformed to have marginals that follow hyperbolic secant distributions with scale parameter bk. The hyperbolic secant distribution has heavier tails than the Gaussian distribution and is expected to perform better in the presence of outliers. 4.1 Classification of Noisy Data We carry out experiments on four datasets extracted from the UCI repository [11] and from other sources [12] to evaluate the predictive performance of RMGPC, SMGPC and HTPC when different fractions of outliers are present in the data2. 
These datasets are described in Table 1. All have multiple classes and a fairly small number n of instances. We have selected problems with small n because all the methods analyzed scale as O(n3). The data for each problem are randomly split 100 times into training and test sets containing respectively 2/3 and 1/3 of the data. Furthermore, the labels of η ∈{0%, 5%, 10%, 20%} of the training instances are selected uniformly at random from C. The data are normalized to have zero mean and unit standard deviation on the training set and 2The R source code of RMGPC is available at http://arantxa.ii.uam.es/%7edhernan/RMGPC/. 6 the average balanced class rate (BCR) of each method on the test set is reported for each value of η. The BCR of a method with prediction accuracy ak on those instances of class k (k = 1, . . . , l) is defined as 1/l Pl k=1 ak. BCR is preferred to prediction accuracy in datasets with unbalanced class distributions, which is the case for the datasets displayed in Table 1. Table 1: Characteristics of the datasets used in the experiments. Dataset # Instances # Attributes # Classes # Source New-thyroid 215 5 3 UCI Wine 178 13 3 UCI Glass 214 9 6 UCI SVMguide2 319 20 3 LIBSVM In our experiments, the different methods analyzed (RMGPC, SMGPC and HTPC) use the same covariance function for each latent function, i.e., ck(·, ·) = c(·, ·), for k = 1, . . . , l, where c(xi, xj) = exp  −1 2γ (xi −xj)T (xi −xj)  (22) is a standard Gaussian covariance function with length-scale parameter γ. Preliminary experiments on the datasets analyzed show no significant benefit from considering a different covariance function for each latent function. The diagonal of the covariance matrices Kk, for k = 1, . . . , l, of SMGPC are also added an extra term equal to ϑ2 k to account for latent Gaussian noise with variance ϑ2 k around fk [1]. These extra terms are used by SMGPC to explain those instances that are unlikely to stem from (1). 
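As a concrete reference (a sketch under our own naming, not the released R code), the covariance function of Eq. (22) and the BCR metric can be written as:

```python
import math

def gaussian_cov(xi, xj, gamma):
    """Eq. (22): squared-exponential covariance with length-scale gamma."""
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-d2 / (2.0 * gamma))

def balanced_class_rate(y_true, y_pred):
    """BCR: the per-class accuracies a_k, averaged over the l classes."""
    classes = sorted(set(y_true))
    rates = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        rates.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(rates) / len(rates)
```

Unlike plain accuracy, BCR weights every class equally, so a classifier that ignores a rare class is penalised even if its overall error rate is low.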
In both RMGPC and SMGPC the parameter γ is found by maximizing (16) using a standard gradient ascent procedure. The same method is used for tuning the parameters ϑk in SMGPC. In HTPC an approximation to the model evidence is maximized with respect to γ and the scale parameters bk, with k = 1, . . . , l, using also gradient ascent [9]. Table 2: Average BCR in % of each method for each problem, as a function of η. Dataset RMGPC SMGPC HTPC RMGPC SMGPC HTPC η = 0% η = 5% New-thyroid 94.2±4.5 93.9±4.4 90.0±5.5 ◁ 92.7±4.9 90.7±5.8 ◁89.7±6.1 ◁ Wine 98.0±1.6 98.0±1.6 97.3±2.0 ◁ 97.5±1.7 97.3±2.0 96.6±2.2 ◁ Glass 65.2±7.7 60.6±8.6 ◁59.5±8.0 ◁ 63.5±8.0 58.9±8.0 ◁57.9±7.5 ◁ SVMguide2 76.3±4.1 74.6±4.2 ◁72.8±4.1 ◁ 75.6±4.3 73.8±4.4 ◁71.9±4.5 ◁ η = 10% η = 20% New-thyroid 92.3±5.4 89.0±5.5 ◁88.3±6.6 ◁ 89.5±6.0 85.9±7.4 ◁85.7±7.7 ◁ Wine 97.0±2.2 96.4±2.6 95.6±4.6 ◁ 96.6±2.7 95.5±2.6 ◁95.1±3.0 ◁ Glass 63.9±7.9 58.0±7.4 ◁55.7±7.7 ◁ 59.7±8.3 55.5±7.3 ◁52.8±7.8 ◁ SVMguide2 74.9±4.4 72.8±4.7 ◁71.5±4.7 ◁ 72.8±5.1 71.4±5.0 ◁67.5±5.6 ◁ Table 2 displays for each problem the average BCR of each method for the different values of η considered. When the performance of a method is significantly different from the performance of RMGPC, as estimated by a Wilcoxon rank test (p-value < 1%), the corresponding BCR is marked with the symbol ◁. The table shows that, when there is no noise in the labels (i.e., η = 0%), RMGPC performs similarly to SMGPC in New-Thyroid and Wine, while it outperforms SMGPC in Glass and SVMguide2. As the level of noise increases, RMGPC is found to outperform SMGPC in all the problems investigated. HTPC typically performs worse than RMGPC and SMGPC independently of the value of η. This can be a consequence of HTPC using the Laplace approximation for approximate inference [9]. In particular, there is evidence indicating that the Laplace approximation performs worse than EP in the context of Gaussian process classifiers [15]. 
Extra experiments comparing RMGPC, SMGPC and HTPC under three different noise scenarios appear in the supplementary material. They further support the better performance of RMGPC in the presence of outliers in the data.

4.2 Outlier Identification

A second batch of experiments shows the utility of RMGPC for identifying data instances that are likely to be outliers. These experiments use the Glass dataset from the previous section. Recall that for this dataset RMGPC performs significantly better than SMGPC even for η = 0%, which suggests the presence of outliers. After normalizing the Glass dataset, we run RMGPC on the whole data and estimate the posterior probability that each instance is an outlier using (21). The hyper-parameters of RMGPC are estimated as described in the previous section. Figure 1 shows, for each instance (xi, yi) of the Glass dataset, with i = 1, . . . , n, the value of P(zi = 1|y, X). Note that most instances are considered to be outliers with very low posterior probability. Nevertheless, a small set of instances have very high posterior probabilities. These instances are unlikely to stem from (1) and are expected to be misclassified when placed in the test set. Consider the set of instances that are more likely to be outliers than normal instances (i.e., instances 3, 36, 127, 137, 152, 158 and 188), under the experimental protocol of the previous section. Table 3 displays the fraction of times that each of these instances is misclassified by RMGPC, SMGPC and HTPC when placed in the test set, along with the posterior probability that each instance is an outlier, as estimated by RMGPC. The table shows that all these instances are typically misclassified by every classifier investigated, which confirms the difficulty of obtaining accurate predictions for them in practice.
Figure 1: Posterior probability that each data instance from the Glass dataset is an outlier.

Table 3: Average test error in % of each method on each data instance that is more likely to be an outlier. The probability that the instance is an outlier, as estimated by RMGPC, is also displayed.

Glass Data Instance    3          36         127        137        152        158        188
RMGPC test error       100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0
SMGPC test error       100.0±0.0  92.0±5.5   100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0
HTPC test error        100.0±0.0  84.0±7.5   100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0
P(zi = 1|y, X)         0.69       0.96       0.82       0.51       0.86       0.83       1.00

5 Conclusions

We have introduced a Robust Multi-class Gaussian Process Classifier (RMGPC). RMGPC considers only the number of errors made, and not the distance of such errors to the decision boundaries of the classifier. This is achieved by introducing binary latent variables that indicate whether a given instance is considered to be an outlier (wrongly labeled instance) or not. RMGPC can also identify the training instances that are most likely to be outliers. Exact Bayesian inference in RMGPC is intractable for typical learning problems. Nevertheless, approximate inference can be carried out efficiently using expectation propagation (EP). When EP is used, the training cost of RMGPC is O(ln^3), where l is the number of classes and n is the number of training instances. Experiments on four multi-class classification problems show the benefits of RMGPC when labeling noise is injected into the data. In this case, RMGPC performs better than alternatives based on latent Gaussian noise or on heavy-tailed noise distributions. When there is no noise in the data, RMGPC performs better than or comparably to these alternatives.
Our experiments also confirm the utility of RMGPC to identify data instances that are difficult to classify accurately in practice. These instances are typically misclassified by different predictors when included in the test set. Acknowledgment All experiments were run on the Center for Intensive Computation and Mass Storage (Louvain). All authors acknowledge support from the Spanish MCyT (Project TIN2010-21575-C02-02). 8 References [1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2006. [2] Christopher K. I. Williams and David Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998. [3] Hyun-Chul Kim and Zoubin Ghahramani. Bayesian Gaussian process classification with the EM-EP algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):1948–1959, 2006. [4] R.M Neal. Regression and classification using Gaussian process priors. Bayesian Statistics, 6:475–501, 1999. [5] Matthias Seeger and Michael I. Jordan. Sparse Gaussian process classification with multiple classes. Technical report, University of California, Berkeley, 2004. [6] M. Opper and O. Winther. Gaussian process classification and SVM: Mean field results. In P. Bartlett, B.Schoelkopf, D. Schuurmans, and A. Smola, editors, Advances in large margin classifiers, pages 43–65. MIT Press, 2000. [7] Daniel Hern´andez-Lobato and Jos´e Miguel Hern´andez-Lobato. Bayes machines for binary classification. Pattern Recognition Letters, 29(10):1466–1473, 2008. [8] Hyun-Chul Kim and Zoubin Ghahramani. Outlier robust Gaussian process classification. In Structural, Syntactic, and Statistical Pattern Recognition, volume 5342 of Lecture Notes in Computer Science, pages 896–905. Springer Berlin / Heidelberg, 2008. [9] Fabian L. Wauthier and Michael I. Jordan. Heavy-Tailed Process Priors for Selective Shrinkage. 
In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 2406–2414. 2010. [10] Thomas Minka. A Family of Algorithms for approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001. [11] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007. [12] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines, 2001. [13] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, August 2006. [14] T. Minka and J. Lafferty. Expectation-propagation for the generative aspect model. In Adnan Darwiche and Nir Friedman, editors, Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, pages 352–359. Morgan Kaufmann, 2002. [15] Malte Kuss and Carl Edward Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005. [16] H Nickisch and CE Rasmussen. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078, 10 2008. [17] Marcel Van Gerven, Botond Cseke, Robert Oostenveld, and Tom Heskes. Bayesian source localization with the multivariate Laplace prior. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1901–1909, 2009. [18] Matthias Seeger. Expectation propagation for exponential families. Technical report, Department of EECS, University of California, Berkeley, 2006. 9
4,196
Practical Variational Inference for Neural Networks Alex Graves Department of Computer Science University of Toronto, Canada graves@cs.toronto.edu Abstract Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus. 1 Introduction In the eighteen years since variational inference was first proposed for neural networks [10] it has not seen widespread use. We believe this is largely due to the difficulty of deriving analytical solutions to the required integrals over the variational posteriors. Such solutions are complicated for even the simplest network architectures, such as radial basis networks [2] and single layer feedforward networks with linear outputs [10, 1, 14], and are generally unavailable for more complex systems. The approach taken here is to forget about analytical solutions and search instead for variational distributions whose expectation values (and derivatives thereof) can be efficiently approximated with numerical integration. While it may seem perverse to replace one intractable integral (over the true posterior) with another (over the variational posterior), the point is that the variational posterior is far easier to draw probable samples from, and correspondingly more amenable to numerical methods. 
The result is a stochastic method for variational inference with a diagonal Gaussian posterior that can be applied to any differentiable log-loss parametric model—which includes most neural networks1 Variational inference can be reformulated as the optimisation of a Minimum Description length (MDL; [21]) loss function; indeed it was in this form that variational inference was first considered for neural networks. One advantage of the MDL interpretation is that it leads to a clear separation between prediction accuracy and model complexity, which can help to both analyse and optimise the network. Another benefit is that recasting inference as optimisation makes it to easier to implement in existing, gradient-descent-based neural network software. 2 Neural Networks For the purposes of this paper a neural network is a parametric model that assigns a conditional probability Pr(D|w) to some dataset D, given a set w = {wi}W i=1 of real-valued parameters, or weights. The elements (x, y) of D, each consisting of an input x and a target y, are assumed to be 1An important exception are energy-based models such as restricted Boltzmann machines [24] whose logloss is intractable. 1 drawn independently from a joint distribution p(x, y)2. The network loss LN(w, D) is defined as the negative log probability of the data given the weights. LN(w, D) = −ln Pr(D|w) = − X (x,y)∈D ln Pr(y|x, w) (1) The logarithm could be taken to any base, but to avoid confusion we will use the natural logarithm ln throughout. We assume that the partial derivatives of LN(w, D) with respect to the network weights can be efficiently calculated (using, for example, backpropagation or backpropagation through time [22]). 3 Variational Inference Performing Bayesian inference on a neural network requires the posterior distribution of the network weights given the data. If the weights have a prior probability P(w|α) that depends on some parameters α, the posterior can be written Pr(w|D, α). 
Unfortunately, for most neural networks Pr(w|D, α) cannot be calculated analytically, or even efficiently sampled from. Variational inference addresses this problem by approximating Pr(w|D, α) with a more tractable distribution Q(w|β). The approximation is fitted by minimising the variational free energy F with respect to the parameters β, where F = −  ln Pr(D|w)P(w|α) Q(w|β)  w∼Q(β) (2) and for some function g of a random variable x with distribution p(x), ⟨g⟩x∼p denotes the expectation of g over p. A fully Bayesian approach would infer the prior parameters α from a hyperprior; however in this paper they are found by simply minimising F with respect to α as well as β. 4 Minimum Description Length F can be reinterpreted as a minimum description length loss function [12] by rearranging Eq. (2) and substituting in from Eq. (1) to get F = LN(w, D) w∼Q(β) + DKL(Q(β)||P(α)), (3) where DKL(Q(β)||P(α)) is the Kullback-Leibler divergence between Q(β) and P(α). Shannon’s source coding theorem [23] tells us that the first term on the right hand side of Eq. (3) is a lower bound on the expected amount of information (measured in nats, due to the use of natural logarithms) required to transmit the targets in D to a receiver who knows the inputs, using the outputs of a network whose weights are sampled from Q(β). Since this term decreases as the network’s prediction accuracy increases, we identify it as the error loss LE(β, D): LE(β, D) = LN(w, D) w∼Q(β) (4) Shannon’s bound can almost be achieved in practice using arithmetic coding [26]. The second term on the right hand side of Eq. (3) is the expected number of nats required by a receiver who knows P(α) to pick a sample from Q(β). Since this term measures the cost of ‘describing’ the network weights to the receiver, we identify it as the complexity loss LC(α, β): LC(α, β) = DKL(Q(β)||P(α)) (5) LC(α, β) can be realised with bits-back coding [25, 10]. 
Although originally conceived as a thought experiment, bits-back coding has been used for an actual compression scheme [5]. Putting the terms together F can be rephrased as an MDL loss function L(α, β, D) that measures the total number of nats required to transmit the training targets using the network, given α and β: L(α, β, D) = LE(β, D) + LC(α, β) (6) The network is then trained on D by minimising L(α, β, D) with respect to α and β, just like an ordinary neural network loss function. One advantage of using a transmission cost as a loss 2Unsupervised learning can be treated as a special case where x = ∅ 2 function is that we can immediately determine whether the network has compressed the targets past a reasonable benchmark (such as that given by an off-the-shelf compressor). If it has, we can be fairly certain that the network is learning underlying patterns in the data and not simply memorising the training set. We would therefore expect it to generalise well to new data. In practice we have found that as long as significant compression is taking place, decreasing L(α, β, D) on the training set does not increase LE(β, D) on the test set, and it is therefore unnecessary to sacrifice any training data for early stopping. Two transmission costs were ignored in the above discussion. One is the cost of transmitting the model with w unspecified (for example software that implements the network architecture, the training algorithm etc.). The other is the cost of transmitting the prior. If either of these are used to encode a significant amount of information about D, the MDL principle will break down and the generalisation guarantees that come with compression will be lost. The easiest way to prevent this is to keep both costs very small compared to D. In particular the prior should not contain too many parameters. 5 Choice of Distributions We now derive the form of LE(β, D) and LC(α, β) for various choices of Q(β) and P(α). 
We also derive the gradients of LE(β, D) and LC(α, β) with respect to β and the optimal values of α given β. All continuous distributions are implicitly assumed to be quantised at some very fine resolution, and we will limit ourselves to diagonal posteriors of the form Q(β) = QW i=1 qi(βi), meaning that LC(α, β) = PW i=1 DKL(qi(βi)||P(α)). 5.1 Delta Posterior Perhaps the simplest nontrivial distribution for Q(β) is a delta distribution that assigns probability 1 to a particular set of weights w and 0 to all other weights. In this case β = w, LE(β, D) = LN(w, D) and LC(α, β) = LC(α, w) = −logP(w|α) + C. where C is a constant that depends only on the discretisation of Q(β). Although C has no effect on the gradient used for training, it is usually large enough to ensure that the network cannot compress the data using the coding scheme described in the previous section3. If the prior is uniform, and all realisable weight values are equally likely then LC(α, β) is a constant and we recover ordinary maximum likelihood training. If the prior is a Laplace distribution then α = {µ, b}, P(w|α) = QW i=1 1 2b exp  −|wi−µ| b  and LC(α, w) = W ln 2b + 1 b W X i=1 |wi −µ| + C =⇒∂LC(α, w) ∂wi = sgn(wi −µ) b (7) If µ = 0 and b is fixed, this is equivalent to ordinary L1 regularisation. However we can instead determine the optimal prior parameters ˆα for w as follows: ˆµ = µ1/2(w) (the median weight value) and ˆb = 1 W PW i=1 |wi −ˆµ|. If the prior is Gaussian then α = {µ, σ2}, P(w|α) = QW i=1 1 √ 2πσ2 exp  −(wi−µ)2 2σ2  and LC(α, w) = W ln( √ 2πσ2) + 1 2σ2 W X i=1 (wi −µ)2 + C =⇒∂LC(α, w) ∂wi = wi −µ σ2 (8) With µ = 0 and σ2 fixed this is equivalent to L2 regularisation (also known as weight decay for neural networks). The optimal ˆα given w are ˆµ = 1 W PW i=1 wi and ˆσ2 = 1 W PW i=1 (wi −ˆµ)2 5.2 Gaussian Posterior A more interesting distribution for Q(β) is a diagonal Gaussian. 
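The closed-form prior updates for the delta posterior of Section 5.1 (median and mean absolute deviation for the Laplace prior; sample mean and biased variance for the Gaussian prior) can be sketched as follows. This is an illustration in our own notation, not the paper's code:

```python
def optimal_laplace_prior(w):
    """Delta posterior, Laplace prior: mu-hat is the median weight,
    b-hat the mean absolute deviation around it."""
    ws = sorted(w)
    n = len(ws)
    mu = ws[n // 2] if n % 2 else 0.5 * (ws[n // 2 - 1] + ws[n // 2])
    return mu, sum(abs(x - mu) for x in w) / n

def optimal_gaussian_prior(w):
    """Delta posterior, Gaussian prior: sample mean and (biased) variance."""
    n = len(w)
    mu = sum(w) / n
    return mu, sum((x - mu) ** 2 for x in w) / n
```

Each update is the maximum-likelihood fit of the prior to the current weights, which is what minimising L_C(α, w) over α amounts to here.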
In this case each weight requires a separate mean and variance, so β = {µ, σ2} with the mean vector µ and variance vector σ2 both 3The floating point resolution of the computer architecture used to train the network could in principle be used to upper-bound the discretisation constant, and hence the compression; but in practice the bound would be prohibitively high. 3 the same size as w. For a general network architecture we cannot compute either LE(β, D) or its derivatives exactly, so we resort to sampling. Applying Monte-Carlo integration to Eq. (4) gives LE(β, D) ≈1 S S X k=1 LN(wk, D) (9) with wk drawn independently from Q(β). A combination of the Gaussian characteristic function and integration by parts can be used to derive the following identities for the derivatives of multivariate Gaussian expectations [18]: ∇µ ⟨V (a)⟩a∼N = ⟨∇aV (a)⟩a∼N , ∇Σ ⟨V (a)⟩a∼N = 1 2 ⟨∇a∇aV (a)⟩a∼N (10) where N is a multivariate Gaussian with mean vector µ and covariance matrix Σ, and V is an arbitrary function of a. Differentiating Eq. (4) and applying these identities yields ∂LE(β, D) ∂µi = ∂LN(w, D) ∂wi  w∼Q(β) ≈1 S S X k=1 ∂LN(wk, D) ∂wi (11) ∂LE(β, D) ∂σ2 i = 1 2 ∂2LN(w, D) ∂w2 i  w∼Q(β) ≈1 2 *∂LN(w, D) ∂wi 2+ w∼Q(β) ≈1 2S S X k=1 ∂LN(wk, D) ∂wi 2 (12) where the first approximation in Eq. (12) comes from substituting the negative diagonal of the empirical Fisher information matrix for the diagonal of the Hessian. This approximation is exact if the conditional distribution Pr(D|w) matches the empirical distribution of D (i.e. if the network perfectly models the data); we would therefore expect it to improve as LE(β, D) decreases. For simple networks whose second derivatives can be calculated efficiently the approximation is unnecessary and the diagonal Hessian can be sampled instead. A simplification of the above distribution is to consider the variances of Q(β) fixed and optimise only the means. Then the sampling used to calculate the derivatives in Eq. 
(11) is equivalent to adding zero-mean, fixed-variance Gaussian noise to the network weights during training. In particular, if the prior P(α) is uniform and a single weight sample is taken for each element of D, then minimising L(α, β, D) is identical to minimising LN(w, D) with weight noise or synaptic noise [13]. Note that the quantisation of the uniform prior adds a large constant to LC(α, β), making it unfeasible to compress the data with our MDL coding scheme; in practice early stopping is required to prevent overfitting when training with weight noise. If the prior is Gaussian then α = {µ, σ2} and LC(α, β) = W X i=1 ln σ σi + 1 2σ2 h (µi −µ)2 + σ2 i −σ2i (13) =⇒∂LC(α, β) ∂µi = µi −µ σ2 , ∂LC(α, β) ∂σ2 i = 1 2  1 σ2 −1 σ2 i  (14) The optimal prior parameters ˆα given β are ˆµ = 1 W W X i=1 µi, ˆσ2 = 1 W W X i=1 h σ2 i + (µi −ˆµ)2i (15) If a Gaussian prior is used with the fixed variance ‘weight noise’ posterior described above, it is still possible to choose the optimal prior parameters for each β. This requires only a slight modification of standard weight-noise training, with the derivatives on the left of Eq. (14) added to the weight gradient and α optimised after every weight update. But because the prior is no longer uniform the network is able to compress the data, making it feasible to dispense with early stopping. The terms in the sum on the right hand side of Eq. (13) are the complexity costs of individual network weights. These costs give valuable insight into the internal structure of the network, since (with a limited budget of bits to spend) the network will assign more bits to more important weights. Importance can be used, for example, to prune away spurious weights [15] or determine which inputs are relevant [16]. 4 6 Optimisation If the derivatives of LE(β, D) are stochastic, we require an optimiser that can tolerate noisy gradient estimates. Steepest descent with momentum [19] and RPROP [20] both work well in practice. 
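To make Eqs. (13) and (15) concrete, here is a sketch (ours, for a diagonal Gaussian posterior and a single shared Gaussian prior) of the complexity loss and its optimal prior parameters:

```python
import math

def complexity_loss(mu_q, var_q, mu_p, var_p):
    """Eq. (13): KL divergence between the diagonal Gaussian posterior
    (per-weight means mu_q, variances var_q) and the Gaussian prior
    N(mu_p, var_p), summed over the W weights."""
    return sum(0.5 * math.log(var_p / vq)
               + ((mq - mu_p) ** 2 + vq - var_p) / (2.0 * var_p)
               for mq, vq in zip(mu_q, var_q))

def optimal_prior(mu_q, var_q):
    """Eq. (15): prior mean and variance minimising the complexity loss."""
    n = len(mu_q)
    mu = sum(mu_q) / n
    return mu, sum(v + (m - mu) ** 2 for m, v in zip(mu_q, var_q)) / n
```

Re-fitting the prior with `optimal_prior` after every update to β, as the text prescribes, can only decrease (never increase) the complexity loss.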
Although stochastic derivatives should in principle be estimated using the same weight samples for the entire dataset, it is in practice much more efficient to pick different weight samples for each (x, y) ∈D. If both the prior and posterior are Gaussian this yields ∂L(α, β, D) ∂µi ≈µi −µ σ2 + X (x,y)∈D 1 S S X k=1 ∂LN(wk, x, y) ∂wi (16) ∂L(α, β, D) ∂σ2 i ≈1 2  1 σ2 −1 σ2 i  + X (x,y)∈D 1 2S S X k=1 ∂LN(wk, x, y) ∂wi 2 (17) where LN(wk, x, y) = −ln Pr(y|x, w) and a separate set of S weight samples {wk}S k=1 is drawn from Q(β) for each (x, y). For large datasets it is usually sufficient to set S = 1; however performance can in some cases be substantially improved by using more samples, at the cost of longer training times. If the data is divided into B equally-sized batches such that D = {bj}B j=1, and an ‘online’ optimiser is used, with the parameters updated after each batch gradient calculation, the following online loss function (and corresponding derivatives) should be employed: L(α, β, bj) = 1 B LC(α, β) + LE(β, bj) (18) Note the 1/B factor for the complexity loss. This is because the weights (to which the complexity cost applies) are only transmitted once for the entire dataset, whereas the error cost must be transmitted separately for each batch. During training, the prior parameters α should be set to their optimal values after every update to β. For more complex priors where the optimal α cannot be found in closed form (such as mixture distributions), α and β can instead be optimised simultaneously with gradient descent [17, 10]. Ideally a trained network should be evaluated on some previously unseen input x′ using the expected distribution ⟨Pr(.|x′, w)⟩w∼Q(β). However the maximum a posteriori approximation Pr(.|x′, w∗), where w∗is the mode of Q(β), appears to work well in practice (at least for diagonal Gaussian posteriors). This is equivalent to removing weight noise during testing. 
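A sketch of the gradient estimators of Eqs. (16)-(17) (our illustration; `data_grad` is a hypothetical placeholder for backpropagation through the network on one example):

```python
import math
import random

def loss_gradients(data_grad, mu_q, var_q, mu_p, var_p, data, S=1, seed=0):
    """Eqs. (16)-(17): prior gradient terms plus Monte-Carlo data terms,
    drawing a fresh set of S weight samples from Q for every (x, y).
    data_grad(w, x, y) must return the per-example gradient dL_N/dw."""
    rng = random.Random(seed)
    g_mu = [(m - mu_p) / var_p for m in mu_q]            # prior term, eq. (16)
    g_var = [0.5 * (1.0 / var_p - 1.0 / v) for v in var_q]  # prior term, eq. (17)
    for x, y in data:
        for _ in range(S):
            w = [m + math.sqrt(v) * rng.gauss(0.0, 1.0)
                 for m, v in zip(mu_q, var_q)]
            g = data_grad(w, x, y)
            for i in range(len(mu_q)):
                g_mu[i] += g[i] / S                      # eq. (16) data term
                g_var[i] += 0.5 * g[i] * g[i] / S        # eq. (17) data term
    return g_mu, g_var
```

For the online loss of Eq. (18), the prior terms above would additionally be scaled by 1/B, since the weights are only transmitted once for the whole dataset.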
7 Pruning

Removing weights from a neural network (a process usually referred to as pruning) has been repeatedly proposed as a means of reducing complexity and thereby improving generalisation [15, 7]. This would seem redundant for variational inference, which automatically limits the network complexity. However pruning can reduce the computational cost and memory demands of the network. Furthermore we have found that if the network is retrained after pruning, the final performance can be improved. A possible explanation is that pruning reduces the noise in the gradient estimates (because the pruned weights are not sampled) without increasing network complexity.

Weights w that are more probable under Q(β) tend to give lower L^N(w, D), and pruning a weight is equivalent to fixing it to zero. These two facts suggest a pruning heuristic where a weight is removed if its probability density at zero is sufficiently high under Q(β). For a diagonal posterior we can define the relative probability of each w_i at zero as the density of q_i(β_i) at zero divided by the density of q_i(β_i) at its mode. We can then define a pruning heuristic by removing all weights whose relative probability at zero exceeds some threshold γ, with 0 ≤ γ ≤ 1. If q_i(β_i) is Gaussian this yields

exp(−µ_i²/(2σ_i²)) > γ  ⟹  |µ_i|/σ_i < λ    (19)

where we have used the reparameterisation λ = √(−2 ln γ), with λ ≥ 0. If λ = 0 no weights are pruned. As λ grows the amount of pruning increases, and the probability of the pruned weight vector under Q(β) (and therefore the likely network performance) decreases. A good rule of thumb for how high λ can safely be set is the point at which the pruned weights become less probable than an average weight sampled from q_i(β_i).

Figure 1: Two representations of a TIMIT utterance ("In wage negotiations the industry bargains as a unit with a single union."). Note the lower resolution and greater decorrelation of the MFC coefficients (top) compared to the spectrogram (bottom).
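The pruning rule of Eq. (19), and the 'safe' threshold suggested by the rule of thumb above, can be sketched as follows (all posterior parameters are illustrative):

```python
import math
import numpy as np

def prune_mask(mu, sigma, lam):
    """Eq. (19): prune weight i iff |mu_i| / sigma_i < lam, i.e. its relative
    posterior density at zero exceeds gamma = exp(-lam**2 / 2)."""
    return np.abs(mu) / sigma < lam

rng = np.random.default_rng(2)
mu = rng.normal(0.0, 0.1, size=1000)   # posterior means (illustrative)
sigma = np.full(1000, 0.075)           # posterior std devs (illustrative)

assert prune_mask(mu, sigma, 0.0).sum() == 0               # lam = 0 prunes nothing
fracs = [prune_mask(mu, sigma, l).mean() for l in (0.3, 0.83, 2.0)]
assert fracs[0] <= fracs[1] <= fracs[2]                    # pruning grows with lam

# 'Safe' threshold: a sample w ~ N(mu, s^2) has average relative density
# E[exp(-(w - mu)^2 / (2 s^2))] = 1/sqrt(2), so setting gamma = 1/sqrt(2)
# gives lam = sqrt(-2 ln(1/sqrt(2))) = sqrt(ln 2) ~ 0.83, matching Eq. (20).
w = rng.standard_normal(200_000)
assert abs(np.exp(-w ** 2 / 2).mean() - 1 / math.sqrt(2)) < 1e-2
lam_safe = math.sqrt(math.log(2))
assert abs(lam_safe - 0.83) < 5e-3
```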
For a Gaussian this is

λ = √(2 ln √2) ≈ 0.83    (20)

If the network is retrained after pruning, the cost of transmitting which weights have been removed should in principle be added to L_C(α, β) (since this information could be used to overfit the training data). However the extra cost does not depend on the network parameters, and can therefore be ignored for the purposes of optimisation. When a Gaussian prior is used its mean tends to be near zero. This implies that 'cheaper' weights, where q_i(β_i) ≈ P(α), have high relative probability at zero and are thus more likely to be pruned.

8 Experiments

We tested all the combinations of posterior and prior described in Section 5 on a hierarchical multidimensional recurrent neural network [9] trained to do phoneme recognition on the TIMIT speech corpus [4]. We also assessed the pruning heuristic from Section 7 by applying it with various thresholds to a trained network and observing the impact on performance and network size.

TIMIT is a popular phoneme recognition benchmark. The core training and test sets (which we used for our experiments) contain respectively 3696 and 192 phonetically transcribed utterances. We defined a validation set by randomly selecting 184 sequences from the training set. The reduced set of 39 phonemes [6] was used during both training and testing. The audio data was presented to the network in the form of spectrogram images. One such image is contrasted with the mel-frequency cepstrum representation used for most speech recognition systems in Fig. 1.

Hierarchical multidimensional recurrent neural networks containing Long Short-Term Memory [11] hidden layers and a CTC output layer [8] have proven effective for offline handwriting recognition [9]. The same architecture is employed here, with a spectrogram in place of a handwriting image, and phoneme labels in place of characters. Since the network scans through the spectrogram in all directions, both vertical and horizontal correlations can be captured.
The network topology was identical for all experiments. It was the same as that of the handwriting recognition network in [9] except that the dimensions of the three subsampling windows used to progressively decrease resolution were now 2×4, 2×4 and 1×4, and the CTC layer now contained 40 output units (one for each phoneme, plus an extra for 'blank'). This gave a total of 15 layers, 1306 units (not counting the inputs or bias), and 139,536 weights. All network parameters were trained with online steepest descent (weight updates after every sequence) using a learning rate of 10⁻⁴ and a momentum of 0.9. For the networks with stochastic derivatives (i.e. those with Gaussian posteriors) a single weight sample was drawn for each sequence. Prefix search CTC decoding [8] was used to transcribe the test set, with probability threshold 0.995. When parameters in the posterior or prior were fixed, the best value was found empirically. All networks were initialised with random weights (or random weight means if the posterior was Gaussian), chosen from a Gaussian

Figure 2: Error curves for four networks during training (adaptive weight noise, adaptive prior weight noise, weight noise, maximum likelihood). The green, blue and red curves correspond to the average per-sequence error loss L_E(β, D) on the training, test and validation sets respectively. Adaptive weight noise does not overfit, and normal weight noise overfits much more slowly than maximum likelihood. Adaptive weight noise led to longer training times and noisier error curves.

Table 1: Results for different priors and posteriors. All distribution parameters were learned by the network unless fixed values are specified. 'Error' is the phoneme error rate on the core test set (total edit distance between the network transcriptions and the target transcriptions, multiplied by 100). 'Epochs' is the number of passes through the training set after which the error was recorded.
'Ratio' is the compression ratio of the training set transcription targets relative to a uniform code over the 39 phoneme labels (≈ 5.3 bits per phoneme); this could only be calculated for the networks with Gaussian priors and posteriors.

Name                         Posterior          Prior                    Error  Epochs  Ratio
Adaptive L1                  Delta              Laplace                  49.0   7       -
Adaptive L2                  Delta              Gauss                    35.1   421     -
Adaptive mean L2             Delta              Gauss σ² = 0.1           28.0   53      -
L2                           Delta              Gauss µ = 0, σ² = 0.1    27.4   59      -
Maximum likelihood           Delta              Uniform                  27.1   44      -
L1                           Delta              Laplace µ = 0, b = 1/12  26.0   545     -
Adaptive mean L1             Delta              Laplace b = 1/12         25.4   765     -
Weight noise                 Gauss σ_i = 0.075  Uniform                  25.4   220     -
Adaptive prior weight noise  Gauss σ_i = 0.075  Gauss                    24.7   260     0.542
Adaptive weight noise        Gauss              Gauss                    23.8   384     0.286

with mean 0, standard deviation 0.1. For the adaptive Gaussian posterior, the standard deviations of the weights were initialised to 0.075 and then optimised during training; working with the standard deviations ensured that the variances (their squares) remained positive. The networks with Gaussian posteriors and priors did not require early stopping and were trained on all 3696 utterances in the training set; all other networks used the validation set for early stopping and hence were trained on 3512 utterances. These were also the only networks for which the transmission cost of the network weights could be measured (since it did not depend on the quantisation of the posterior or prior). The networks were evaluated on the test set using the parameters giving lowest L_E(β, D) on the training set (or validation set if present). All experiments were stopped after 100 training epochs with no improvement in either L(α, β, D), L_E(β, D) or the number of transcription errors on the training or validation set. The reason for such conservative stopping criteria was that the error curves of some of the networks were extremely noisy (see Fig. 2). Table 1 shows the results for the different posteriors and priors.
L2 regularisation was no better than unregularised maximum likelihood, while L1 gave a slight improvement; this is consistent with our previous experience of recurrent neural networks. The fully adaptive L1 and L2 networks performed very badly, apparently because the priors became excessively narrow (σ² ≈ 0.003 for L2 and b ≈ 0.002 for L1). L1 with fixed variance and adaptive mean was somewhat better than L1 with mean fixed at 0 (although the adaptive mean was very close to zero, settling around 0.0064). The networks with Gaussian posteriors outperformed those with delta posteriors, with the best score obtained using a fully adaptive posterior.

Table 2 shows the effect of pruning on the trained 'adaptive weight noise' network from Table 1. The pruned networks were retrained using the same optimisation as before, with the error recorded before and after retraining. As well as being highly effective at removing weights, pruning led to improved performance following retraining in some cases. Notice the slow increase in initial error up to λ = 0.5 and the sharp rise thereafter; this is consistent with the 'safe' threshold of λ ≈ 0.83

Table 2: Effect of network pruning. 'λ' is the threshold used for pruning. 'Weights' is the number of weights left after pruning and 'Percent' is the same figure expressed as a percentage of the original weights. 'Initial Error' is the test error immediately after pruning and 'Retrain Error' is the test error following 'Retrain Epochs' of subsequent retraining. 'Bits/weight' is the average bit cost (as defined in Eq. (13)) of the unpruned weights.
λ     Weights   Percent  Initial error  Retrain error  Retrain epochs  Bits/weight
0     139,536   100%     23.8           23.8           0               0.53
0.01  107,974   77.4%    23.8           24.0           972             0.72
0.05  63,079    45.2%    23.9           23.5           35              1.15
0.1   52,984    37.9%    23.9           23.3           351             1.40
0.2   43,182    30.9%    23.9           23.7           740             1.82
0.5   31,120    22.3%    24.0           23.3           125             2.21
1     22,806    16.3%    24.5           24.1           403             3.19
2     16,029    11.5%    28.0           24.5           335             3.55

Figure 3: Weight costs in a 2D LSTM recurrent connection (input gates, H forget gates, V forget gates, cells, output gates). Each dot corresponds to a weight; the lighter the colour the more bits the weight costs. The vertical axis shows the LSTM cell the weight comes from; the horizontal axis shows the LSTM unit the weight goes to. Note the low cost of the 'V forget gates' (these mediate vertical correlations between frequency bands in the spectrogram, which are apparently less important to transcription than horizontal correlations between timesteps); the high cost of the 'cells' (LSTM's main processing units); the bright horizontal and vertical bands (corresponding to units with 'important' outputs and inputs respectively); and the bright diagonal through the cells (corresponding to self connections).

mentioned in Section 7. The lowest final phoneme error rate of 23.3 would until recently have been the best recorded on TIMIT; however the application of deep belief networks has now improved the benchmark to 20.5 [3].

Acknowledgements

I would like to thank Geoffrey Hinton, Christian Osendorfer, Justin Bayer and Thomas Rückstieß for helpful discussions and suggestions. Alex Graves is a Junior Fellow of the Canadian Institute for Advanced Research.

Figure 4: The 'cell' weights from Fig. 3 pruned at different thresholds. Black dots are pruned weights, white dots are remaining weights. 'Cheaper' weights tend to be removed first as λ grows.

References

[1] D. Barber and C. M. Bishop. Ensemble learning in Bayesian neural networks, pages 215–237. Springer-Verlag, Berlin, 1998.

[2] D. Barber and B. Schottky.
Radial basis functions: A Bayesian treatment. In NIPS, 1997.

[3] G. E. Dahl, M. Ranzato, A.-r. Mohamed, and G. Hinton. Phone recognition with the mean-covariance restricted Boltzmann machine. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 469–477. 2010.

[4] DARPA-ISTO. The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus (TIMIT), speech disc cd1-1.1 edition, 1990.

[5] B. J. Frey. Graphical models for machine learning and digital communication. MIT Press, Cambridge, MA, USA, 1998.

[6] K.-F. Lee and H.-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989.

[7] C. L. Giles and C. W. Omlin. Pruning recurrent neural networks for improved generalization performance. IEEE Transactions on Neural Networks, 5:848–851, 1994.

[8] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning, ICML 2006, Pittsburgh, USA, 2006.

[9] A. Graves and J. Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In NIPS, pages 545–552, 2008.

[10] G. E. Hinton and D. van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In COLT, pages 5–13, 1993.

[11] S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.

[12] A. Honkela and H. Valpola. Variational learning and bits-back coding: An information-theoretic view to Bayesian learning. IEEE Transactions on Neural Networks, 15:800–810, 2004.

[13] K.-C. Jim, C. Giles, and B. Horne. An analysis of noise in recurrent neural networks: convergence and generalization. IEEE Transactions on Neural Networks, 7(6):1424–1438, Nov. 1996.

[14] N. D. Lawrence.
Variational Inference in Probabilistic Models. PhD thesis, University of Cambridge, 2000.

[15] Y. Le Cun, J. Denker, and S. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 598–605. Morgan Kaufmann, San Mateo, CA, 1990.

[16] D. J. C. MacKay. Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Neural Computation, 1995.

[17] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight sharing. Neural Computation, 4:173–193, 1992.

[18] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.

[19] D. Plaut, S. Nowlan, and G. E. Hinton. Experiments on learning by back propagation. Technical Report CMU-CS-86-126, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1986.

[20] M. Riedmiller and T. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In International Symposium on Neural Networks, 1993.

[21] J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.

[22] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors, pages 696–699. MIT Press, Cambridge, MA, USA, 1988.

[23] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27, 1948.

[24] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory, pages 194–281. MIT Press, Cambridge, MA, USA, 1986.

[25] C. S. Wallace. Classification by minimum-message-length inference. In Proceedings of the International Conference on Advances in Computing and Information, ICCI'90, pages 72–81, New York, NY, USA, 1990. Springer-Verlag New York, Inc.

[26] I. H. Witten, R. M. Neal, and J. G. Cleary. Arithmetic coding for data compression. Commun. ACM, 30:520–540, June 1987.
Penalty Decomposition Methods for Rank Minimization∗

Zhaosong Lu†  Yong Zhang‡

Abstract

In this paper we consider general rank minimization problems with the rank appearing in either the objective function or a constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems. The convergence results of the PD methods have been shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.

1 Introduction

In this paper we consider the following rank minimization problems:

min_X { f(X) : rank(X) ≤ r, X ∈ 𝒳 ∩ Ω },    (1)

min_X { f(X) + ν rank(X) : X ∈ 𝒳 ∩ Ω }    (2)

for some r, ν ≥ 0, where 𝒳 is a closed convex set, Ω is a closed unitarily invariant set in ℜ^{m×n}, and f : ℜ^{m×n} → ℜ is a continuously differentiable function (for the definition of a unitarily invariant set, see Section 2.1). In the literature, there are numerous application problems in the form of (1) or (2). For example, several well-known combinatorial optimization problems such as maximal cut (MAXCUT) and maximal stable set can be formulated as problem (1) (see, for example, [11, 1, 5]). More generally, nonconvex quadratic programming problems can also be cast into (2) (see, for example, [1]). Recently, some image recovery and machine learning problems have been formulated as (1) or (2) (see, for example, [27, 31]). In addition, the problem of finding the nearest low-rank correlation matrix is in the form of (1), which has important applications in finance (see, for example, [4, 29, 36, 38, 25, 30, 12]).
Several approaches have recently been developed for solving problems (1) and (2) or their special cases. In particular, for those arising in combinatorial optimization (e.g., MAXCUT), one novel method is to first solve the semidefinite programming (SDP) relaxation of (1) and then obtain an approximate solution of (1) by applying some heuristics to the solution of the SDP (see, for example, [11]). Despite the remarkable success on those problems, the performance of this method when extended to the more general problem (1) remains unclear. In addition, the nuclear norm relaxation approach has been proposed for problems (1) or (2). For example, Fazel et al. [10] considered a special case of problem (2) with f ≡ 0 and Ω = ℜ^{m×n}. In their approach, a convex relaxation is applied to (1) or (2) by replacing the rank of X by the nuclear norm of X, and numerous efficient methods can then be applied to solve the resulting convex problems. Recently, Recht et al. [27] showed that under some suitable conditions, such a convex relaxation is tight when 𝒳 is an affine manifold. The quality of such a relaxation, however, remains unknown when applied to the general problems (1) and (2). Additionally, for some application problems, the nuclear norm stays constant over the feasible region. For example, for the nearest low-rank correlation matrix problem (see Subsection 3.2), any feasible point is a symmetric positive semidefinite matrix with all diagonal entries equal to one. For those problems, the nuclear norm relaxation approach is obviously inappropriate. Finally, the nonlinear programming (NLP) reformulation approach has been applied to problem (1) (see, for example, [5]).

∗This work was supported in part by an NSERC Discovery Grant.
†Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada (email: zhaosong@sfu.ca).
‡Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada (email: yza30@sfu.ca).
In this approach, problem (1) is cast into an NLP problem by replacing the constraint rank(X) ≤ r by X = UV, where U ∈ ℜ^{m×r} and V ∈ ℜ^{r×n}, and then numerous optimization methods can be applied to solve the resulting NLP. It is not hard to observe that such an NLP has infinitely many local minima, and moreover it can be highly nonlinear, which might be challenging for all existing numerical optimization methods for NLP. Also, it is not clear whether this approach can be applied to problem (2).

In this paper we consider the general rank minimization problems (1) and (2). We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. The convergence of the PD methods has been shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.

The rest of this paper is organized as follows. In Subsection 1.1, we introduce the notation that is used throughout the paper. In Section 2, we first establish some technical results on a class of rank minimization problems and then use them to develop the penalty decomposition methods for solving problems (1) and (2). In Section 3, we conduct numerical experiments to test the performance of our penalty decomposition methods for solving matrix completion and nearest low-rank correlation matrix problems. Finally, we present some concluding remarks in Section 4.
1.1 Notation

In this paper, the symbol ℜ^n denotes the n-dimensional Euclidean space, and the set of all m × n matrices with real entries is denoted by ℜ^{m×n}. The space of n × n symmetric matrices is denoted by S^n. If X ∈ S^n is positive semidefinite, we write X ⪰ 0. The cone of positive semidefinite matrices is denoted by S^n_+. The Frobenius norm of a real matrix X is defined as ∥X∥_F := √(Tr(XX^T)), where Tr(·) denotes the trace of a matrix, and the nuclear norm of X, denoted by ∥X∥_*, is defined as the sum of all singular values of X. The rank of a matrix X is denoted by rank(X). We denote by I the identity matrix, whose dimension should be clear from the context. For a real symmetric matrix X, λ(X) denotes the vector of all eigenvalues of X arranged in nondecreasing order, and Λ(X) is the diagonal matrix whose ith diagonal entry is λ_i(X) for all i. Similarly, for any X ∈ ℜ^{m×n}, σ(X) denotes the q-dimensional vector consisting of all singular values of X arranged in nondecreasing order, where q = min(m, n), and Σ(X) is the m × n matrix whose ith diagonal entry is σ_i(X) for all i and all off-diagonal entries are 0; that is, Σ_ii(X) = σ_i(X) for 1 ≤ i ≤ q and Σ_ij(X) = 0 for all i ≠ j. We define the operator D : ℜ^q → ℜ^{m×n} as follows:

D_ij(x) = x_i if i = j, and D_ij(x) = 0 otherwise, for all x ∈ ℜ^q,

where q = min(m, n). For any real vector, ∥·∥_0, ∥·∥_1 and ∥·∥_2 denote the cardinality (i.e., the number of nonzero entries), the standard 1-norm and the Euclidean norm of the vector, respectively.

2 Penalty decomposition methods

In this section, we first establish some technical results on a class of rank minimization problems. Then we propose penalty decomposition (PD) methods for solving problems (1) and (2) by using these technical results.

2.1 Technical results on special rank minimization

In this subsection we first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems.
As a consequence, we establish a result that a class of rank minimization problems have closed form solutions, which will be used to develop penalty decomposition methods in Subsection 2.2. The proof of the result can be found in the longer version of the paper [19]. Before proceeding, we introduce some definitions that will be used subsequently.

Let U^n denote the set of all unitary matrices in ℜ^{n×n}. A norm ∥·∥ is a unitarily invariant norm on ℜ^{m×n} if ∥UXV∥ = ∥X∥ for all U ∈ U^m, V ∈ U^n, X ∈ ℜ^{m×n}. More generally, a function F : ℜ^{m×n} → ℜ is a unitarily invariant function if F(UXV) = F(X) for all U ∈ U^m, V ∈ U^n, X ∈ ℜ^{m×n}. A set 𝒳 ⊆ ℜ^{m×n} is a unitarily invariant set if {UXV : U ∈ U^m, V ∈ U^n, X ∈ 𝒳} = 𝒳. Similarly, a function F : S^n → ℜ is a unitary similarity invariant function if F(UXU^T) = F(X) for all U ∈ U^n, X ∈ S^n. A set 𝒳 ⊆ S^n is a unitary similarity invariant set if {UXU^T : U ∈ U^n, X ∈ 𝒳} = 𝒳.

The following result establishes that a class of matrix optimization problems over a subset of ℜ^{m×n} can be solved as lower dimensional vector optimization problems.

Proposition 2.1 Let ∥·∥ be a unitarily invariant norm on ℜ^{m×n}, and let F : ℜ^{m×n} → ℜ be a unitarily invariant function. Suppose that 𝒳 ⊆ ℜ^{m×n} is a unitarily invariant set. Let A ∈ ℜ^{m×n} be given, q = min(m, n), and let φ be a non-decreasing function on [0, ∞). Suppose that UΣ(A)V^T is the singular value decomposition of A. Then, X* = UD(x*)V^T is an optimal solution of the problem

min F(X) + φ(∥X − A∥)  s.t.  X ∈ 𝒳,    (3)

where x* ∈ ℜ^q is an optimal solution of the problem

min F(D(x)) + φ(∥D(x) − Σ(A)∥)  s.t.  D(x) ∈ 𝒳.    (4)

As some consequences of Proposition 2.1, we next state that a class of rank minimization problems on a subset of ℜ^{m×n} can be solved as lower dimensional vector minimization problems.

Corollary 2.2 Let ν ≥ 0 and A ∈ ℜ^{m×n} be given, and let q = min(m, n). Suppose that 𝒳 ⊆ ℜ^{m×n} is a unitarily invariant set, and UΣ(A)V^T is the singular value decomposition of A.
Then, X* = UD(x*)V^T is an optimal solution of the problem

min { ν rank(X) + (1/2)∥X − A∥²_F : X ∈ 𝒳 },    (5)

where x* ∈ ℜ^q is an optimal solution of the problem

min { ν∥x∥_0 + (1/2)∥x − σ(A)∥²_2 : D(x) ∈ 𝒳 }.    (6)

Corollary 2.3 Let r ≥ 0 and A ∈ ℜ^{m×n} be given, and let q = min(m, n). Suppose that 𝒳 ⊆ ℜ^{m×n} is a unitarily invariant set, and UΣ(A)V^T is the singular value decomposition of A. Then, X* = UD(x*)V^T is an optimal solution of the problem

min { ∥X − A∥_F : rank(X) ≤ r, X ∈ 𝒳 },    (7)

where x* ∈ ℜ^q is an optimal solution of the problem

min { ∥x − σ(A)∥_2 : ∥x∥_0 ≤ r, D(x) ∈ 𝒳 }.    (8)

Remark. When 𝒳 is simple enough, problems (5) and (7) have closed form solutions. In many applications, 𝒳 = {X ∈ ℜ^{m×n} : a ≤ σ_i(X) ≤ b for all i} for some 0 ≤ a < b ≤ ∞. For such 𝒳, one can see that D(x) ∈ 𝒳 if and only if a ≤ |x_i| ≤ b for all i. In this case, it is not hard to observe that problems (6) and (8) have closed form solutions (see [20]). It thus follows from Corollaries 2.2 and 2.3 that problems (5) and (7) also have closed form solutions.

The following results are heavily used in [6, 22, 34] for developing algorithms for solving the nuclear norm relaxation of matrix completion problems. They can be immediately obtained from Proposition 2.1.

Corollary 2.4 Let ν ≥ 0 and A ∈ ℜ^{m×n} be given, and let q = min(m, n). Suppose that UΣ(A)V^T is the singular value decomposition of A. Then, X* = UD(x*)V^T is an optimal solution of the problem

min ν∥X∥_* + (1/2)∥X − A∥²_F,

where x* ∈ ℜ^q is an optimal solution of the problem

min ν∥x∥_1 + (1/2)∥x − σ(A)∥²_2.

Corollary 2.5 Let r ≥ 0 and A ∈ ℜ^{m×n} be given, and let q = min(m, n). Suppose that UΣ(A)V^T is the singular value decomposition of A. Then, X* = UD(x*)V^T is an optimal solution of the problem

min { ∥X − A∥_F : ∥X∥_* ≤ r },

where x* ∈ ℜ^q is an optimal solution of the problem

min { ∥x − σ(A)∥_2 : ∥x∥_1 ≤ r }.

Clearly, the above results can be generalized to solve a class of matrix optimization problems over a subset of S^n. The details can be found in the longer version of the paper [19].
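When 𝒳 imposes no constraint (𝒳 = ℜ^{m×n}), Corollary 2.2 reduces (5) to elementwise hard thresholding of the singular values: keep σ_i(A) if and only if (1/2)σ_i(A)² > ν. A minimal numpy sketch under that assumption, with a brute-force check of the vector problem (6) on random data:

```python
import itertools
import numpy as np

def solve_rank_penalty(A, nu):
    """Closed-form solution of (5) when the constraint set is all of R^{m x n}:
    by Corollary 2.2, keep singular value sigma_i iff 0.5 * sigma_i^2 > nu."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.where(0.5 * s ** 2 > nu, s, 0.0)   # hard thresholding solves (6)
    return U @ np.diag(x) @ Vt

# Sanity check of the vector problem (6) against brute force over supports
# (for a fixed support, the optimal x agrees with a on the support):
rng = np.random.default_rng(4)
a = rng.uniform(0.1, 3.0, size=5)   # stands in for sigma(A)
nu = 1.0
obj = lambda x: nu * np.count_nonzero(x) + 0.5 * np.sum((x - a) ** 2)
best = min((a * np.array(m) for m in itertools.product([0, 1], repeat=5)),
           key=obj)
closed = np.where(0.5 * a ** 2 > nu, a, 0.0)
assert np.isclose(obj(closed), obj(best))
```

Corollary 2.3 admits the same treatment: with 𝒳 = ℜ^{m×n}, problem (8) is solved by keeping the r largest singular values, i.e. the truncated SVD.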
2.2 Penalty decomposition methods for solving (1) and (2)

In this subsection, we consider the rank minimization problems (1) and (2). In particular, we first propose a penalty decomposition (PD) method for solving problem (1), and then extend it to solve problem (2) at the end of this subsection. Throughout this subsection, we make the following assumption for problems (1) and (2).

Assumption 1 Problems (1) and (2) are feasible, and moreover, at least one feasible solution, denoted by X_feas, is known.

Clearly, problem (1) can be equivalently reformulated as

min_{X,Y} { f(X) : X − Y = 0, X ∈ 𝒳, Y ∈ 𝒴 },    (9)

where 𝒴 := {Y ∈ Ω | rank(Y) ≤ r}. Given a penalty parameter ϱ > 0, the associated quadratic penalty function for (9) is defined as

Q_ϱ(X, Y) := f(X) + (ϱ/2) ∥X − Y∥²_F.    (10)

We now propose a PD method for solving problem (9) (or, equivalently, (1)) in which each penalty subproblem is approximately solved by a block coordinate descent (BCD) method.

Penalty decomposition method for (9) (asymmetric matrices):

Let ϱ_0 > 0 and σ > 1 be given. Choose an arbitrary Y⁰_0 ∈ 𝒴 and a constant Υ ≥ max{ f(X_feas), min_{X∈𝒳} Q_{ϱ_0}(X, Y⁰_0) }. Set k = 0.

1) Set l = 0 and apply the BCD method to find an approximate solution (X^k, Y^k) ∈ 𝒳 × 𝒴 of the penalty subproblem

   min { Q_{ϱ_k}(X, Y) : X ∈ 𝒳, Y ∈ 𝒴 }    (11)

by performing steps 1a)-1c):

   1a) Solve X^k_{l+1} ∈ Argmin_{X∈𝒳} Q_{ϱ_k}(X, Y^k_l).
   1b) Solve Y^k_{l+1} ∈ Argmin_{Y∈𝒴} Q_{ϱ_k}(X^k_{l+1}, Y).
   1c) Set (X^k, Y^k) := (X^k_{l+1}, Y^k_{l+1}).

2) Set ϱ_{k+1} := σϱ_k.

3) If min_{X∈𝒳} Q_{ϱ_{k+1}}(X, Y^k) > Υ, set Y^{k+1}_0 := X_feas. Otherwise, set Y^{k+1}_0 := Y^k.

4) Set k ← k + 1 and go to step 1).

end

Remark. We observe that the sequence {Q_{ϱ_k}(X^k_l, Y^k_l)} is non-increasing for any fixed k. Thus, in practical implementation, it is reasonable to terminate the BCD method based on the relative progress of {Q_{ϱ_k}(X^k_l, Y^k_l)}. In particular, given an accuracy parameter ϵ_I > 0, one can terminate the BCD method if

|Q_{ϱ_k}(X^k_l, Y^k_l) − Q_{ϱ_k}(X^k_{l−1}, Y^k_{l−1})| / max(|Q_{ϱ_k}(X^k_l, Y^k_l)|, 1) ≤ ϵ_I.
(12)

Moreover, we can terminate the outer iterations of the above method once

max_{ij} |X^k_{ij} − Y^k_{ij}| ≤ ϵ_O    (13)

for some ϵ_O > 0. In addition, given that problem (11) is nonconvex, the BCD method may only converge to a stationary point. To enhance the quality of approximate solutions, one may execute the BCD method multiple times, starting from a suitable perturbation of the current approximate solution. In detail, at the kth outer iteration, let (X^k, Y^k) be a current approximate solution of (11) obtained by the BCD method, and let r_k = rank(Y^k). Assume that r_k > 1. Before starting the (k+1)th outer iteration, one can apply the BCD method again starting from Y^k_0 ∈ Argmin{ ∥Y − Y^k∥_F : rank(Y) ≤ r_k − 1 } (namely, a rank-one perturbation of Y^k) and obtain a new approximate solution (X̃^k, Ỹ^k) of (11). If Q_{ϱ_k}(X̃^k, Ỹ^k) is "sufficiently" smaller than Q_{ϱ_k}(X^k, Y^k), one can set (X^k, Y^k) := (X̃^k, Ỹ^k) and repeat the above process. Otherwise, one can terminate the kth outer iteration and start the next outer iteration. Furthermore, in view of Corollary 2.3, the subproblem in step 1b) can be reduced to a problem in the form of (8), which has a closed form solution when Ω is simple enough. Finally, the convergence results of this PD method have been shown in the longer version of the paper [19]. Under some suitable assumptions, we have established that any accumulation point of the sequence generated by our method when applied to problem (1) is a stationary point of a nonlinear reformulation of the problem.

Before ending this section, we extend the PD method proposed above to solve problem (2). Clearly, (2) can be equivalently reformulated as

min_{X,Y} { f(X) + ν rank(Y) : X − Y = 0, X ∈ 𝒳, Y ∈ Ω }.    (14)

Given a penalty parameter ϱ > 0, the associated quadratic penalty function for (14) is defined as

P_ϱ(X, Y) := f(X) + ν rank(Y) + (ϱ/2) ∥X − Y∥²_F.
(15)

Then we can easily adapt the PD method for solving (9) to solve (14) (or, equivalently, (2)) by setting the constant Υ ≥ max{ f(X_feas) + ν rank(X_feas), min_{X∈𝒳} P_{ϱ_0}(X, Y⁰_0) }. In addition, the set 𝒴 becomes Ω. In view of Corollary 2.2, the BCD subproblem in step 1b), when applied to minimize the penalty function (15), can be reduced to a problem in the form of (6), which has a closed form solution when Ω is simple enough. In addition, the practical termination criteria proposed for the previous PD method can be suitably applied to this method as well. Moreover, given that the problem arising in step 1) is nonconvex, the BCD method may converge to a stationary point. To enhance the quality of approximate solutions, one may apply a similar strategy as described for the previous PD method by executing the BCD method multiple times starting from a suitable perturbation of the current approximate solution. Finally, by a similar argument as in the proof of [19, Theorem 3.1], we can show that every accumulation point of the sequence {(X^k, Y^k)} is a feasible point of (14). Nevertheless, it is not clear whether a similar convergence result as in [19, Theorem 3.1(b)] can be established, due to the discontinuity and nonconvexity of the objective function of (2).

3 Numerical results

In this section, we conduct numerical experiments to test the performance of our penalty decomposition (PD) methods proposed in Section 2 by applying them to solve matrix completion and nearest low-rank correlation matrix problems. All computations below are performed on an Intel Xeon E5410 CPU (2.33GHz) and 8GB RAM running Red Hat Enterprise Linux (kernel 2.6.18). The codes of all the compared methods in this section are written in Matlab.

3.1 Matrix completion problem

In this subsection, we apply our PD method proposed in Section 2 to the matrix completion problem, which has numerous applications in control and systems theory, image recovery and data mining (see, for example, [33, 24, 9, 16]).
It can be formulated as

min_{X∈ℜ^{m×n}} rank(X) s.t. X_{ij} = M_{ij}, (i, j) ∈ Θ, (16)

where M ∈ ℜ^{m×n} and Θ is a subset of index pairs (i, j). Recently, numerous methods were proposed to solve the nuclear norm relaxation of (16) or variants of it (see, for example, [18, 6, 22, 8, 13, 14, 21, 23, 32, 17, 37, 35]). It is not hard to see that problem (16) is a special case of the general rank minimization problem (2) with f(X) ≡ 0, ν = 1, Ω = ℜ^{m×n}, and X = {X ∈ ℜ^{m×n} : X_{ij} = M_{ij}, (i, j) ∈ Θ}. Thus, the PD method proposed in Subsection 2.2 for problem (2) can be suitably applied to (16). The implementation details of the PD method can be found in [19]. Next we conduct numerical experiments to test the performance of our PD method for solving the matrix completion problem (16) on real data. In our experiment, we aim to test the performance of our PD method on a grayscale image inpainting problem [2]. This problem has been used in [22, 35] to test FPCA and LMaFit, respectively, and we use the same scenarios as generated in [22, 35]. In an image inpainting problem, the goal is to fill in the missing pixel values of an image at given pixel locations. The missing pixel positions can be either randomly distributed or not. As shown in [33, 24], this problem can be solved as a matrix completion problem if the image is of low rank. In our test, the original 512 × 512 grayscale image is shown in Figure 1(a). To obtain the data for problem (16), we first apply the singular value decomposition to the original image and truncate the resulting decomposition to get an image of rank 40, shown in Figure 1(e). Figures 1(b) and 1(c) are then constructed from Figures 1(a) and 1(e) by sampling half of their pixels uniformly at random, respectively. Figure 1(d) is generated by masking 6% of the pixels of Figure 1(e) in a nonrandom fashion.
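Problem (16) and the truncated-SVD test setup above can be prototyped in a few lines. The sketch below is a simplified fixed-rank variant of the penalty decomposition idea, not the actual method of [19]: with f ≡ 0, the Y-step is a truncated SVD (Eckart–Young) and the X-step simply re-imposes the observed entries; the rank r, iteration cap, and tolerance are illustrative choices.

```python
import numpy as np

def pd_complete(M, mask, r, iters=300, tol=1e-7):
    """Simplified fixed-rank sketch for matrix completion (16): alternate a
    Y-step (project onto {rank <= r} via truncated SVD) with an X-step
    (re-impose observed entries), stopping with the outer test
    max_ij |X_ij - Y_ij| <= tol.  The paper's PD method penalizes rank itself
    and drives a penalty parameter upward; fixing r is an illustrative shortcut."""
    X = np.where(mask, M, 0.0)
    Y = X
    for _ in range(iters):
        # Y-step: best rank-r approximation of X (Eckart-Young)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Y = (U[:, :r] * s[:r]) @ Vt[:r]
        # X-step: closest point to Y satisfying X_ij = M_ij on the mask
        X = np.where(mask, M, Y)
        if np.max(np.abs(X - Y)) <= tol:
            break
    return Y

rng = np.random.default_rng(0)
truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 target
mask = rng.random((20, 20)) < 0.6                                    # observe ~60% of entries
rec = pd_complete(truth, mask, r=3)
rel_err = np.linalg.norm(rec - truth) / np.linalg.norm(truth)
```

On such a well-posed random instance the alternation typically recovers the low-rank matrix to small relative error; the real PD method additionally handles the rank penalty and the perturbation restarts described above.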
We now apply our PD method to solve problem (16) with the data given in Figures 1(b), 1(c) and 1(d); the resulting recovered images are presented in Figures 1(f), 1(g) and 1(h), respectively. In addition, given an approximate recovery X* of M, we define the relative error as

rel_err := ∥X* − M∥_F / ∥M∥_F.

We observe that the relative errors of the three recovered images with respect to the original images are 6.72e-2, 6.43e-2 and 6.77e-2, respectively, all smaller than those reported in [22, 35].

3.2 Nearest low-rank correlation matrix problem

In this subsection, we apply our PD method proposed in Section 2 to find the nearest low-rank correlation matrix, which has important applications in finance (see, for example, [4, 29, 36, 38, 30]). It can be formulated as

min_{X∈S^n} (1/2)∥X − C∥²_F s.t. diag(X) = e, rank(X) ≤ r, X ⪰ 0 (17)

for some correlation matrix C ∈ S^n_+ and some integer r ∈ [1, n], where diag(X) denotes the vector consisting of the diagonal entries of X and e is the all-ones vector. Recently, a few methods have been proposed for solving problem (17) (see, for example, [28, 26, 3, 25, 12, 15]).

Figure 1: Image inpainting. (a) original image; (b) 50% masked original image; (c) 50% masked rank-40 image; (d) 6.34% masked rank-40 image; (e) rank-40 image; (f)–(h) images recovered by PD.

It is not hard to see that problem (17) is a special case of the general rank constraint problem (2) with f(X) = (1/2)∥X − C∥²_F, Ω = S^n_+, and X = {X ∈ S^n : diag(X) = e}. Thus, the PD method proposed in Subsection 2.2 for problem (2) can be suitably applied to (17). The implementation details of the PD method can be found in [19]. Next we conduct numerical experiments to test the performance of our method for solving (17) on three classes of benchmark testing problems.
These problems are widely used in the literature (see, for example, [3, 29, 25, 15]) and their corresponding data matrices C are defined as follows:

(P1) C_ij = 0.5 + 0.5 exp(−0.05|i − j|) for all i, j (see [3]).
(P2) C_ij = exp(−|i − j|) for all i, j (see [3]).
(P3) C_ij = LongCorr + (1 − LongCorr) exp(κ|i − j|) for all i, j, where LongCorr = 0.6 and κ = −0.1 (see [29]).

We first generate an instance of each of (P1)–(P3) by letting n = 500. Then we apply our PD method and the method named Major developed in [25] to solve problem (17) on the instances generated above. To compare their performance fairly, we choose the termination criterion for Major to be the one based on the relative error rather than the (default) absolute error. More specifically, it terminates once the relative error is less than 10^−5. The computational results of both methods on the instances generated above with r = 5, 10, . . . , 25 are presented in Table 1. The names of all problems are given in column one; they are labeled in the same manner as in [15]. For example, P1n500r5 corresponds to problem (P1) with n = 500 and r = 5. The results of both methods in terms of number of iterations, objective function value and CPU time are reported in columns two to seven of Table 1. We observe that the objective function values of the two methods are comparable, though the ones for Major are slightly better on some instances. In addition, for small r (say, r = 5), Major generally outperforms PD in terms of speed, but PD substantially outperforms Major as r gets larger (say, r = 15).

4 Concluding remarks

In this paper we proposed penalty decomposition (PD) methods for general rank minimization problems, in which each subproblem is solved by a block coordinate descent method.
In the longer version of the paper [20], we have shown that under some suitable assumptions any accumulation point of the sequence generated by our method when applied to the rank constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. The computational results on matrix completion and nearest low-rank correlation matrix problems demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. More computational results of the PD method can be found in the longer version of the paper [19].

Table 1: Comparison of Major and PD

Problem           Major                          PD
             Iter       Obj     Time      Iter       Obj     Time
P1n500r5      488    3107.0     22.9      2514    3107.2     80.7
P1n500r10     836     748.2     51.5      1220     748.2     48.4
P1n500r15    1690     270.2    137.0       804     270.2     37.3
P1n500r20    3106     123.4    329.1       581     123.4     31.5
P1n500r25    5444      65.5    722.0       480      65.5     29.4
P2n500r5     2126   24248.5     97.8      3465   24248.5    112.3
P2n500r10    3264   11749.5    199.6      1965   11749.5     76.6
P2n500r15    5061    7584.4    409.9      1492    7584.4     70.4
P2n500r20    4990    5503.2    532.0      1216    5503.2     67.2
P2n500r25    2995    4256.0    404.1      1022    4256.0     69.2
P3n500r5     2541    2869.3    116.4      2739    2869.4     90.4
P3n500r10    2357     981.8    144.2      1410     981.8     55.4
P3n500r15    2989     446.9    241.9       923     446.9     41.6
P3n500r20    4086     234.7    438.4       662     234.7     33.0
P3n500r25    5923     135.9    788.3       504     135.9     29.5

References

[1] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization, SIAM, Philadelphia, PA, USA, 2001.
[2] M. Bertalmío, G. Sapiro, V. Caselles and C. Ballester. Image inpainting. SIGGRAPH 2000, New Orleans, USA, 2000.
[3] D. Brigo. A note on correlation and rank reduction. Available at www.damianobrigo.it, 2002.
[4] D. Brigo and F. Mercurio. Interest Rate Models: Theory and Practice. Springer-Verlag, Berlin, 2001.
[5] S. Burer, R. D. C. Monteiro, and Y. Zhang. Maximum stable set formulations and heuristics based on continuous optimization. Math. Program., 94:137-166, 2002.
[6] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. Technical report, 2008.
[7] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 2009.
[8] W. Dai and O. Milenkovic. SET: an algorithm for consistent matrix completion. Technical report, Department of Electrical and Computer Engineering, University of Illinois, 2009.
[9] L. Eldén. Matrix methods in data mining and pattern recognition (fundamentals of algorithms). SIAM, Philadelphia, PA, USA, 2009.
[10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. P. Amer. Contr. Conf., 6:4734-4739, 2001.
[11] M. X. Goemans and D. P. Williamson. .878-approximation algorithms for MAX CUT and MAX 2SAT. Lect. Notes Comput. Sc., 422-431, 1994.
[12] I. Grubišić and R. Pietersz. Efficient rank reduction of correlation matrices. Linear Algebra Appl., 422:629-653, 2007.
[13] R. H. Keshavan and S. Oh. A gradient descent algorithm on the Grassman manifold for matrix completion. Technical report, Department of Electrical Engineering, Stanford University, 2009.
[14] K. Lee and Y. Bresler. Admira: Atomic decomposition for minimum rank approximation. Technical report, University of Illinois, Urbana-Champaign, 2009.
[15] Q. Li and H. Qi. A sequential semismooth Newton method for the nearest low-rank correlation matrix problem. Technical report, School of Mathematics, University of Southampton, UK, 2009.
[16] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. A., 31:1235-1256, 2009.
[17] Y. Liu, D. Sun, and K. C. Toh. An implementable proximal point algorithmic framework for nuclear norm minimization. Technical report, National University of Singapore, 2009.
[18] Z. Lu, R. D. C. Monteiro, and M. Yuan.
Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression. Accepted in Math. Program., 2008.
[19] Z. Lu and Y. Zhang. Penalty decomposition methods for rank minimization. Technical report, Department of Mathematics, Simon Fraser University, Canada, 2010.
[20] Z. Lu and Y. Zhang. Penalty decomposition methods for l0 minimization. Technical report, Department of Mathematics, Simon Fraser University, Canada, 2010.
[21] R. Mazumder, T. Hastie, and R. Tibshirani. Regularization methods for learning incomplete matrices. Technical report, Stanford University, 2009.
[22] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. To appear in Math. Program., 2008.
[23] R. Meka, P. Jain and I. S. Dhillon. Guaranteed rank minimization via singular value projection. Technical report, University of Texas at Austin, 2009.
[24] T. Morita and T. Kanade. A sequential factorization method for recovering shape and motion from image streams. IEEE T. Pattern Anal., 19:858-867, 1997.
[25] R. Pietersz and I. Grubišić. Rank reduction of correlation matrices by majorization. Quant. Financ., 4:649-662, 2004.
[26] F. Rapisarda, D. Brigo and F. Mercurio. Parametrizing correlations: a geometric interpretation. Banca IMI Working Paper, 2002 (www.fabiomercurio.it).
[27] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. To appear in SIAM Rev., 2007.
[28] R. Rebonato. On the simultaneous calibration of multifactor lognormal interest rate models to Black volatilities and to the correlation matrix. J. Comput. Financ., 2:5-27, 1999.
[29] R. Rebonato. Modern Pricing and Interest-Rate Derivatives. Princeton University Press, New Jersey, 2002.
[30] R. Rebonato. Interest-rate term-structure pricing models: a review. Proc. R. Soc. Lond. A, 460:667-728, 2004.
[31] J. D. M. Rennie and N. Srebro.
Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the International Conference on Machine Learning, 2005.
[32] K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Accepted in Pac. J. Optim., 2009.
[33] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vision, 9:137-154, 1992.
[34] E. van den Berg and M. P. Friedlander. Sparse optimization with least-squares constraints. Technical Report, University of British Columbia, Vancouver, 2010.
[35] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Technical report, Department of Computational and Applied Mathematics, Rice University, 2010.
[36] L. Wu. Fast at-the-money calibration of the LIBOR market model using Lagrangian multipliers. J. Comput. Financ., 6:39-77, 2003.
[37] J. Yang and X. Yuan. An inexact alternating direction method for trace norm regularized least squares problem. Technical report, Department of Mathematics, Nanjing University, China, 2010.
[38] Z. Zhang and L. Wu. Optimal low-rank approximation to a correlation matrix. Linear Algebra Appl., 364:161-187, 2003.
Accelerated Adaptive Markov Chain for Partition Function Computation∗

Stefano Ermon, Carla P. Gomes (Dept. of Computer Science, Cornell University, Ithaca NY 14853, U.S.A.)
Ashish Sabharwal (IBM Watson Research Ctr., Yorktown Heights NY 10598, U.S.A.)
Bart Selman (Dept. of Computer Science, Cornell University, Ithaca NY 14853, U.S.A.)

Abstract

We propose a novel Adaptive Markov Chain Monte Carlo algorithm to compute the partition function. In particular, we show how to accelerate a flat histogram sampling technique by significantly reducing the number of "null moves" in the chain, while maintaining asymptotic convergence properties. Our experiments show that our method converges quickly to highly accurate solutions on a range of benchmark instances, outperforming other state-of-the-art methods such as IJGP, TRW, and Gibbs sampling both in run-time and accuracy. We also show how obtaining a so-called density of states distribution allows for efficient weight learning in Markov Logic theories.

1 Introduction

We propose a novel and general method to approximate the partition function of intricate probability distributions defined over combinatorial spaces. Computing the partition function is a notoriously hard computational problem. Only a few tractable cases are known. In particular, if the corresponding graphical model has low treewidth, then the problem can be solved exactly using methods based on tree decompositions, such as the junction tree algorithm [1]. The partition function of planar graphs with binary variables and no external field can also be computed in polynomial time [2]. We will consider an adaptive MCMC sampling strategy, inspired by the Wang-Landau method [3], which is a so-called flat histogram sampling strategy from statistical physics.
Given a combinatorial space and an energy function (for instance, describing the negative log-likelihood of each configuration), a flat histogram method is a sampling strategy based on a Markov Chain that converges to a steady state where it spends approximately the same amount of time in states with a low density of configurations (which are usually low energy states) as in states with a high density. We propose two key improvements to the Wang-Landau method, namely energy saturation and a focused-random walk component, leading to a new and more efficient algorithm called FocusedFlatSAT. Energy saturation allows the chain to visit fewer energy levels, and the random walk style moves reduce the number of "null moves" in the Markov chain. Both improvements maintain the same global stationary distribution, while allowing us to go well beyond the domain of spin glasses where the Wang-Landau method has been traditionally applied. We demonstrate the effectiveness of our approach by a comparison with state-of-the-art methods that approximate or bound the partition function, such as Tree Reweighted Belief Propagation [4], IJGP-SampleSearch [5], and Gibbs sampling [6]. Our experiments show that our approach outperforms these methods in a variety of problem domains, both in terms of accuracy and run-time. The density of states serves as a rich description of the underlying probabilistic model. Once computed, it can be used to efficiently evaluate the partition function for all parameter settings without the need for further inference steps — a stark contrast with competing methods for partition function computation. For instance, in statistical physics applications, we can use it to evaluate the partition function Z(T) for all values of the temperature T.

∗ Supported by NSF Expeditions in Computing award for Computational Sustainability (grant 0832782).
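The "evaluate Z(T) for all T" reuse just described is easy to illustrate: once the density of states n(E) of a system is in hand, a whole temperature sweep is a plain sum per temperature. The DOS values below are made up for illustration.

```python
import math

# hypothetical density of states of a tiny system (illustrative numbers only)
dos = {0: 2, 1: 10, 2: 40, 3: 10, 4: 2}

def Z_of_T(T):
    """Evaluate Z(T) = sum_E n(E) * exp(-E / T) directly from the stored DOS:
    no further sampling or inference is needed for a new temperature T."""
    return sum(n * math.exp(-E / T) for E, n in dos.items())

# the entire Z(T) curve comes for free once n(E) is known
curve = [Z_of_T(T) for T in (0.5, 1.0, 2.0, 10.0)]
```

As T grows, every configuration contributes weight close to 1, so Z(T) approaches the total state count; as T shrinks, only the ground states (E = 0) survive.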
This level of abstraction can be a fundamental advantage for machine learning methods: in a learning problem we can parameterize Z(·) according to the model parameters that we want to learn from the training data. For example, in the case of a Markov Logic theory [7, 8] with weights w1, . . . , wK on its K first-order formulas, we can parameterize the partition function as Z(w1, . . . , wK). Upon defining an appropriate energy function and obtaining the corresponding density of states, we can then use efficient evaluations of the partition function to search for model parameters that best fit the training data, thus obtaining a promising new approach to learning in Markov Logic Networks and graphical models.

2 Probabilistic model and the partition function

We focus on intricate probability distributions defined over a set of configurations, i.e., assignments to a set of N discrete variables {x_1, . . . , x_N}, assumed here to be Boolean for simplicity. The probability distribution is specified through a set of combinatorial features or constraints over these variables. Such constraints can be either hard or soft, with the i-th soft constraint C_i being associated with a weight w_i. Let χ_i(x) = 1 if a configuration x violates C_i, and 0 otherwise. The probability P_w(x) of x is defined as 0 if x violates any hard constraint, and otherwise as

P_w(x) = (1/Z(w)) exp(− Σ_{C_i ∈ C_soft} w_i χ_i(x)), (1)

where C_soft is the set of soft constraints. The partition function, Z(w), is simply the normalization constant for this probability distribution, and is given by

Z(w) = Σ_{x ∈ X_hard} exp(− Σ_{C_i ∈ C_soft} w_i χ_i(x)), (2)

where X_hard ⊆ {0, 1}^N is the set of configurations satisfying all hard constraints. Note that as w_i → ∞, the soft constraint C_i effectively becomes a hard constraint. This factored representation is closely related to a graphical model where we use weighted Boolean formulas to specify clique potentials.
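For very small N, Equations (1)-(2) can be checked by direct enumeration. The sketch below evaluates Z(w) exactly on a hypothetical toy model (the constraints are illustrative, not an instance from the paper).

```python
import itertools
from math import exp

def partition_function(n_vars, hard, soft):
    """Brute-force Z(w) per Eq. (2): sum exp(-sum_i w_i * chi_i(x)) over all
    assignments x that satisfy every hard constraint.  `hard` holds predicates
    x -> bool (True = satisfied); `soft` holds (w_i, chi_i) pairs where
    chi_i(x) = 1 iff x violates C_i.  Cost is 2^n_vars: toy use only."""
    Z = 0.0
    for x in itertools.product((0, 1), repeat=n_vars):
        if all(h(x) for h in hard):
            Z += exp(-sum(w * chi(x) for w, chi in soft))
    return Z

# toy model: hard constraint (x0 OR x1); one soft constraint "x2 = x0" with weight 2
hard = [lambda x: x[0] or x[1]]
soft = [(2.0, lambda x: 1 if x[2] != x[0] else 0)]
Z = partition_function(3, hard, soft)
```

With no soft constraints the sum reduces to model counting of the hard theory, matching the observation in Section 5 that the uniform case is equivalent to #SAT.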
This is a natural framework for combining purely logical and probabilistic inference, used for example to define grounded Markov Logic Networks [8, 9]. The partition function is a very important quantity but computing it is a well-known computational challenge, which we propose to address by employing the “density of states” method to be discussed shortly. We will compare our approach against several state-of-the-art methods available for computing the partition function or obtaining bounds on it. Wainwright et al. [4], for example, proposed a variational method known as tree re-weighting (TRW) to obtain bounds on the partition function of graphical models. Unlike standard Belief Propagation schemes which are based on Bethe free energies [10], the TRW approach uses a tree-reweighted (TRW) free energy which consists of a linear combination of free energies defined on spanning trees of the model. Using convexity arguments it is then possible to obtain upper bounds on various quantities, such as the partition function. Based on iterated join-graph propagation, IJGP-SampleSearch [5] is a popular solver for the probability of evidence problem (i.e., partition function computation with a subset of “evidence” variables fixed) for general graphical models. This method is based on an importance sampling scheme which is augmented with systematic constraint-based backtracking search. An alternative approach is to use Gibbs sampling to estimate the partition function by estimating, using sample average, a sequence of multipliers that correspond to the ratios of the partition function evaluated at different weight levels [6]. Lastly, the partition function for planar graphs where all variables are binary and have only pairwise interactions (i.e., the zero external field case) can be calculated exactly in polynomial time [2]. 
Although we are interested in algorithms for the general (intractable) case, we used the software associated with this approach to obtain the ground truth for planar graphs and evaluate the accuracy of the estimates obtained by other methods.

3 Density of states

Our approach for computing the partition function is based on solving the density of states problem. Given a combinatorial space such as the one defined earlier and an energy function E : {0, 1}^N → R, the density of states (DOS) n is a function n : range(E) → N that maps each energy level to the number of configurations with that energy, i.e., n(k) = |{σ ∈ {0, 1}^N | E(σ) = k}|. In our context, we are interested in counting the configurations that satisfy certain properties, specified using an appropriate energy function. For instance, we might define the energy E(σ) of a configuration σ to be the number of hard constraints that are violated by σ, or the sum of the weights of the violated soft constraints. Once we are able to compute the full density of states, i.e., the number of configurations at each possible energy level, it is straightforward to evaluate the partition function Z(w) for any weight vector w by summing up terms of the form n(i) exp(−E(i)), where E(i) denotes the energy of every configuration in state i. This is the method we use in this work for estimating the partition function. More complex energy functions may be defined for other related tasks, such as weight learning: given some training data x ∈ X = {0, 1}^N, compute arg max_w P_w(x), where P_w(x) is given by Equation (1). Here we can define the energy E(σ) to be w · ℓ, where ℓ = (ℓ_1, . . . , ℓ_M) gives the number of constraints of weight w_i violated by σ. Our focus in the rest of the paper will thus be on computing the density of states efficiently.
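The DOS-to-partition-function step is easy to verify on a tiny model: once n(E) is tabulated (here by brute force rather than by sampling), Z(w) for any weight is just a sum over energy levels. The constraints below are illustrative.

```python
import itertools, math
from collections import Counter

# illustrative constraints; a constraint "fires" (returns True) when violated
constraints = [
    lambda x: x[0] == x[1],
    lambda x: x[1] == x[2],
    lambda x: x[0] == 1,
]

def energy(x):                        # E(x) = number of violated constraints
    return sum(1 for c in constraints if c(x))

# tabulate the density of states by enumeration (the quantity FocusedFlatSAT estimates)
dos = Counter(energy(x) for x in itertools.product((0, 1), repeat=3))

def Z_from_dos(w):
    """Z(w) as a cheap sum over energy levels, reusable for every weight w."""
    return sum(n * math.exp(-w * E) for E, n in dos.items())

def Z_direct(w):                      # reference: re-enumerate all configurations
    return sum(math.exp(-w * energy(x)) for x in itertools.product((0, 1), repeat=3))
```

The two evaluations agree for every w, but only `Z_direct` pays the exponential enumeration cost each time; `Z_from_dos` reuses the one-time DOS computation.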
3.1 The MCMCFlatSAT algorithm

MCMCFlatSAT [11] is an Adaptive Markov Chain Monte Carlo (adaptive MCMC) method for computing the density of states of combinatorial problems, inspired by the Wang-Landau algorithm [3] from statistical physics. Interestingly, this algorithm does not make any assumption about the form or semantics of the energy. At least in principle, the only thing it needs is a partitioning of the state space, where the "energy" just provides an index over the subsets that compose the partition. The algorithm is based on the flat histogram idea and works by trying to construct a reversible Markov Chain on the space {0, 1}^N of all configurations such that the steady-state probability of a configuration σ is inversely proportional to the density of states n(E(σ)). In this way, the stationary distribution is such that all energy levels are visited equally often (i.e., counting the visits to each energy level yields a flat visit histogram). Specifically, we define a Markov Chain with the following transition probability:

p_{σ→σ′} = (1/N) min{1, n(E(σ))/n(E(σ′))} if d_H(σ, σ′) = 1, and p_{σ→σ′} = 0 if d_H(σ, σ′) > 1, (3)

where d_H(σ, σ′) is the Hamming distance between σ and σ′. The probability of a self-loop p_{σ→σ} is given by the normalization constraint p_{σ→σ} + Σ_{σ′ : d_H(σ,σ′)=1} p_{σ→σ′} = 1. The detailed balance equation P(σ) p_{σ→σ′} = P(σ′) p_{σ′→σ} is satisfied by P(σ) ∝ 1/n(E(σ)). This means¹ that the Markov Chain will reach a stationary probability distribution P (regardless of the initial state) such that the probability of a configuration σ with energy E = E(σ) is inversely proportional to the number of configurations with energy E. This leads to an asymptotically flat histogram of the energies of the visited states, because P(E) = Σ_{σ : E(σ)=E} P(σ) ∝ n(E) · (1/n(E)) = 1 (i.e., independent of E).
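The detailed balance claim for the transition probability (3) can be checked exactly on a small hypercube, using the true density of states (available here by enumeration; the toy energy, the number of ones, is an illustrative choice).

```python
import itertools
from fractions import Fraction

n_vars = 4
energy = lambda s: sum(s)             # toy energy: number of ones in s
states = list(itertools.product((0, 1), repeat=n_vars))
n = {E: sum(1 for s in states if energy(s) == E) for E in range(n_vars + 1)}

def p(s, t):
    """Transition probability (3) for Hamming-neighbor configurations s, t."""
    return Fraction(1, n_vars) * min(Fraction(1), Fraction(n[energy(s)], n[energy(t)]))

# detailed balance with P(s) proportional to 1/n(E(s)), checked on every hypercube edge
for s in states:
    for i in range(n_vars):
        t = list(s); t[i] ^= 1; t = tuple(t)
        assert Fraction(1, n[energy(s)]) * p(s, t) == Fraction(1, n[energy(t)]) * p(t, s)
```

Exact rational arithmetic makes the check equality-based rather than tolerance-based; self-loop mass is fixed by normalization and cancels from detailed balance.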
Since the density of states is not known a priori (computing it is precisely the goal of the algorithm), it is not possible to construct directly a random walk with transition probability (3). However, it is possible to start with an initial guess g(·) for n(·) and keep updating this estimate g(·) in a systematic way, so as to produce a flat energy histogram and simultaneously make the estimate g(E) converge to the true value n(E) for every energy level E. The estimate is adjusted using a modification factor F, which controls the trade-off between the convergence rate of the algorithm and its accuracy (large initial values of F lead to fast convergence to a rather inaccurate solution). For completeness, we provide the pseudo-code as Algorithm 1; see [11] for details.

Algorithm 1 MCMCFlatSAT algorithm to compute the density of states
1: Start with a guess g(E) = 1 for all E = 1, . . . , m
2: Initialize H(E) = 0 for all E = 1, . . . , m
3: Start with a modification factor F = F0 = 1.5
4: repeat
5:   Randomly pick a configuration σ
6:   repeat
7:     Generate a new configuration σ′ (by flipping a variable)
8:     Let E = E(σ) and E′ = E(σ′) (saturated energies)
9:     Set σ = σ′ with probability min{1, g(E)/g(E′)} (move acceptance/rejection step)
10:    Let Ec = E(σ) be the current energy level
11:    Adjust the density g(Ec) = g(Ec) × F
12:    Update the visit histogram H(Ec) = H(Ec) + 1
13:  until H is flat (all the values are at least 90% of the maximum value)
14:  Reduce F: F ← √F
15:  Reset the visit histogram H
16: until F is close enough to 1
17: Normalize g so that Σ_E g(E) = 2^N
18: return g as an estimate of n

¹ The chain is finite, irreducible, and aperiodic, therefore ergodic.

4 FocusedFlatSAT: Efficient computation of density of states

We propose two crucial improvements to MCMCFlatSAT, namely energy saturation and the introduction of a focused-random walk component, leading to a new algorithm called FocusedFlatSAT.
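Before describing the improvements, the baseline Algorithm 1 can be sketched compactly in code. Two simplifications are made here: g is stored in log space to avoid overflow, and each modification-factor stage runs a fixed number of steps instead of waiting for the 90% flatness test, so this is a schematic of the method rather than the authors' implementation.

```python
import math, random

def wang_landau(n_vars, energy, n_levels, stages=20, steps_per_stage=30000, seed=1):
    """Sketch of Algorithm 1 (Wang-Landau / MCMCFlatSAT).  log g(E) is stored
    to avoid overflow; stages of fixed length replace the flatness test."""
    rng = random.Random(seed)
    log_g = [0.0] * n_levels
    log_f = math.log(1.5)                        # F0 = 1.5
    sigma = [rng.randint(0, 1) for _ in range(n_vars)]
    E = energy(sigma)
    for _ in range(stages):
        for _ in range(steps_per_stage):
            i = rng.randrange(n_vars)
            sigma[i] ^= 1                        # propose a single-bit flip
            E2 = energy(sigma)
            if math.log(rng.random() or 1e-300) < log_g[E] - log_g[E2]:
                E = E2                           # accept with min{1, g(E)/g(E')}
            else:
                sigma[i] ^= 1                    # reject: undo the flip
            log_g[E] += log_f                    # g(E) <- g(E) * F
        log_f /= 2.0                             # F <- sqrt(F), in log space
    # normalize so that sum_E g(E) = 2^n_vars
    m = max(log_g)
    log_total = m + math.log(sum(math.exp(v - m) for v in log_g))
    return [v + n_vars * math.log(2) - log_total for v in log_g]

# toy check: energy = number of ones, so the true DOS is n(E) = C(6, E)
log_g = wang_landau(6, sum, 7)
```

On this 6-variable toy problem the estimate should closely track the binomial coefficients, peaking at E = 3 where C(6, 3) = 20.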
As we will see in Table 1, FocusedFlatSAT provides the same accuracy as MCMCFlatSAT but is about 10 times faster on that benchmark. Moreover, our results for the Ising model (described below) in Figure 2 demonstrate that FocusedFlatSAT scales much better.

Energy saturation. The time needed for each iteration of MCMCFlatSAT to converge is significantly affected by the number of different non-empty energy levels (buckets). In many cases, the weights defining the probability distribution P_w(x) are all positive (i.e., there is an incentive to satisfy the constraints), and as an effect of the exponential discounting in Equation (1), configurations that violate a large number of constraints have a negligible contribution to the sum defining the partition function Z. We therefore define a new saturated energy function E′(σ) = min{E(σ), K}, where K is a user-defined parameter. For the positive weights case, the partition function Z′ associated with the saturated energy function is a guaranteed upper bound on the original Z, for any K. When all constraints are hard, Z′ = Z for any value K ≥ 1 because only the first energy bucket matters. In general, when soft constraints are present, the bound gets tighter as K increases, and we can obtain theoretical worst-case error bounds when K is chosen to be a percentile of the energy distribution (e.g., saturation at the median energy yields a 2x bound). In our experiments, we set K to be the average number of constraints violated by a random configuration, and we found that the error introduced by the saturation is negligible compared to other inherent approximations in density of states estimation. Intuitively, this is because the states where the probability is concentrated turn out to typically have a much lower energy than K, and thus an exponentially larger contribution to Z. Furthermore, energy saturation preserves the connectivity of the chain.

Focused Random Walk.
Both in the original Wang-Landau method and in MCMCFlatSAT, new configurations are generated by flipping a variable selected uniformly at random [3, 11]. Let us call this configuration-selection distribution the proposal distribution, and let T_{σ→σ′} denote the probability of generating σ′ from this distribution while in configuration σ. In the Wang-Landau algorithm, proposed configurations are then rejected with a probability that depends on the density of states of the respective energy levels. Move rejections obviously lengthen the mixing time of the underlying Markov Chain. We introduce here a novel proposal distribution that significantly reduces the number of move rejections, resulting in much faster convergence rates. It is inspired by local search SAT solvers [12] and is especially critical for the class of highly combinatorial energy functions we consider in this work.

Figure 1: Histograms depicting the number of proposed moves accepted and rejected from each energy level, broken down by move type (accepted or rejected; to a higher, equal, or lower energy level). Left: MCMCFlatSAT. Right: FocusedFlatSAT. See PDF for color version.

We note that if the acceptance probability is taken to be

min{1, [n(E(σ)) T_{σ′→σ}] / [n(E(σ′)) T_{σ→σ′}]},
For the instance under consideration, MCMCFlatSAT converged to a flat histogram after having visited each of the 385 energy levels (on x-axis) roughly 2.6M times. Each colored region shows the cumulative number of moves (on y-axis) accepted or rejected from each energy level (on x-axis) to another configuration with a higher, equal, or lower energy level, resp. This gives six possible move types, and the histogram shows how often is each taken at any energy level. Most importantly, notice that at low energy levels, a vast majority of the moves were proposed to a higher energy level and were rejected by the algorithm (shown as the dominating purple region). This is an indirect consequence of the fact that in such instances, in the low energy regime, the density of states increases drastically as the energy level is increases, i.e., g(E′) ≫g(E) when E′ > E. As a result, most of the proposed moves are to higher energy levels and are in turn rejected by the algorithm in the move acceptance/rejection step discussed above. In order to address this issue, we propose to modify the proposal distribution in a way that increases the chance of proposing moves to the same or lower energy levels, despite the fact that there are relatively few such moves. Inspired by local search SAT solvers, we enhance MCMCFlatSAT with a focused random walk component that gives preference to selecting variables to flip from violated constraints (if any), thereby introducing an indirect bias towards lower energy states. Specifically, if the given configuration σ is a satisfying assignment, pick a variable uniformly at random to be flipped (thus Tσ→σ′ = 1/N when the Hamming distance dH(σ, σ′) = 1, zero otherwise). If σ is not a solution, then with probability p a variable to be flipped is chosen uniformly at random from a randomly chosen violated constraint, and with probability 1 −p a variable is chosen uniformly at random. 
With this approach, when σ is not a solution and σ and σ′ differ only in the i-th variable,

T_{σ→σ′} = (1 − p) (1/N) + p · [Σ_{c∈C : i∈c} χ_c(σ) · (1/|c|)] / [Σ_{c∈C} χ_c(σ)],

where χ_c(σ) = 1 iff σ violates constraint c and |c| denotes the number of variables in constraint c. With this proposal distribution we ensure that for all 0 ≤ p < 1, whenever T_{σ→σ′} > 0 we also have T_{σ′→σ} > 0. Moreover, the connectivity of the Markov Chain is preserved (since we do not remove any edge from the original Markov Chain). We therefore have the following result:

Proposition 1 For all p ∈ [0, 1), the Markov Chain with proposal distribution T_{σ→σ′} defined above is irreducible and aperiodic. Therefore it has a unique stationary distribution, proportional to 1/n(E(σ)).

The right panel of Figure 1 shows the move acceptance/rejection histogram when FocusedFlatSAT is used, i.e., with the above proposal distribution. The same instance now needs under 1.2M visits per energy level for the method to converge. Moreover, the number of rejected moves (shown in purple and green) in low energy states is significantly smaller than the dominating purple region in the left panel. This allows the Markov Chain to move more freely in the space and to converge faster. Figure 2 shows a runtime comparison of FocusedFlatSAT against MCMCFlatSAT on n × n Ising models (details to be discussed in Section 5). As we see, incorporating energy saturation reduces the time to convergence (while achieving the same level of accuracy), and using focused random walk moves further decreases the convergence time, especially as n increases.

Figure 2: Runtime comparison on ferromagnetic Ising models on square lattices of size n × n (MCMCFlatSAT, MCMCFlatSAT+Saturation, FocusedFlatSAT).

Table 1: Comparison with model counters; only hard constraints. Runtime is in seconds.
Instance | n | m | Exact # Models | FocusedFlatSat: Models, Time | MCMC-FlatSat: Models, Time | SampleCount: Models, Time | SampleMiniSAT: Models, Time
2bitmax_6 | 252 | 766 | 2.10 × 10^29 | 1.91 × 10^29, 156 | 1.96 × 10^29, 1863 | ≥ 2.40 × 10^28, 29 | 2.08 × 10^29, 345
wff-3-3.5 | 150 | 525 | 1.40 × 10^14 | 1.43 × 10^14, 20 | 1.34 × 10^14, 393 | ≥ 1.60 × 10^13, 145 | 1.60 × 10^13, 240
wff-3.1.5 | 100 | 150 | 1.80 × 10^21 | 1.86 × 10^21, 1 | 1.83 × 10^21, 21 | ≥ 1.00 × 10^20, 240 | 1.58 × 10^21, 128
wff-4-5.0 | 100 | 500 | — | 9.31 × 10^16, 5 | 8.64 × 10^16, 189 | ≥ 8.00 × 10^15, 120 | 1.09 × 10^17, 191
ls8-norm | 301 | 1603 | 5.40 × 10^11 | 5.78 × 10^11, 231 | 5.93 × 10^11, 2693 | ≥ 3.10 × 10^10, 1140 | 2.22 × 10^11, 168
5 Experimental evaluation We compare FocusedFlatSAT against several state-of-the-art methods for computing an estimate of or bound on the partition function.2 An evaluation such as this is inherently challenging as the ground truth is very hard to obtain and computational bounds can be orders of magnitude off from the truth, making a comparison of estimates not very meaningful. We therefore propose to evaluate the methods on either small instances whose ground truth can be evaluated by "brute force," or larger instances whose ground truth (or bounds on it) can be computed analytically or through other tools such as efficient model counters. We also consider planar cases for which a specialized polynomial time exact algorithm is available. Efficient methods for handling instances of small treewidth are also well known; here we push the boundaries to instances of relatively higher treewidth. For partition function evaluation, we compare against the tree re-weighting (TRW) variational method for upper bounds, iterated join-graph propagation (IJGP), and Gibbs sampling; see Section 2 for a very brief discussion of these approaches. For weight learning, we compare against the Alchemy system. Unless otherwise specified, the energy function used is always the number of violated constraints, and we use a 50% ratio of random moves (p = 0.5).
The algorithm is run for 20 iterations, with an initial modification factor F0 = 1.5. The experiments were conducted on a 16-core 2.4 GHz Intel Xeon machine with 32 GB memory, running RedHat Linux. Hard constraints. First, consider models with only hard constraints, which define a uniform measure on the set of satisfying assignments. In this case, the problem of computing the partition function is equivalent to standard model counting. We compare the performance of FocusedFlatSAT with MCMC-FlatSat and with two state-of-the-art approximate model counters: SampleCount [13] and SampleMiniSATExact [14]. The instances used are taken from earlier work [11]. The results in Table 1 show that FocusedFlatSAT almost always obtains much more accurate solution counts, and is often significantly faster (about an order of magnitude faster than MCMC-FlatSat). Soft constraints. We consider Ising models defined on an n × n square lattice, where P(σ) = exp(−E(σ))/Z with Z = Σσ exp(−E(σ)) and E(σ) = Σ(i,j) wij I[σi ≠ σj]. Here I is the indicator function. This imposes a penalty wij if spins σi and σj are not aligned. We consider a ferromagnetic case where wij = w > 0 for all edges, and a frustrated case with a mixture of positive and negative interactions. The partition function for these planar models is computable with a specialized polynomial time algorithm, as long as there is no external magnetic field [2]. In Figure 3, we compare the true value of the partition function Z∗ with the estimate obtained using FocusedFlatSAT and with the upper 2Benchmark instances available online at http://www.cs.cornell.edu/∼ermonste Figure 3: Error in log10(Z). Left: 40 × 40 ferromagnetic grid. Right: 32 × 32 spin glass grid. Table 2: Log partition function for weighted formulas.
Instance | n | m | Weight | log10 Z(w) | FocusedFlatSat: log10 Z(w), Time | IJGP-SampleSearch: log10 Z(w), Time | Gibbs: log10 Z(w), Time
grid32x32 | 1024 | 3968 | 1 | 16.0920 | 16.0964, 628 | 14.4330, 600 | 15.4856, 651
grid32x32 | 1024 | 3968 | 1 | 16.0920 | 16.0964, 628 | 13.8980, 2000 | —
grid40x40 | 1600 | 6240 | 1 | 23.5434 | 23.4844, 1522 | 15.9386, 2000 | 22.3125, 1650
2bitmax6 | 252 | 766 | 5 | > 29.3222 | 30.4373, 360 | 12.0526, 600 | 25.1274, 732
2bitmax6 | 252 | 766 | 5 | > 29.3222 | 30.4373, 360 | 12.3802, 2000 | —
wff.100.150 | 100 | 150 | 5 | > 21.2553 | 21.3187, 5 | 21.3373, 200 | 21.3992, 40
wff.100.150 | 100 | 150 | 8 | > 21.2553 | 21.2551, 5 | 21.2694, 200 | 21.3107, 40
ls8-normalized | 301 | 1603 | 3 | > 11.7324 | 17.6655, 589 | 16.5458, 600 | 8.6825, 708
ls8-normalized | 301 | 1603 | 6 | > 11.7324 | 11.7974, 589 | -2.3987, 600 | -17.356, 770
ls8-normalized | 301 | 1603 | 6 | > 11.7324 | 11.7974, 589 | -1.7459, 1200 | —
ls8-normalized | 301 | 1603 | 6 | > 11.7324 | 11.7974, 589 | -1.8578, 2000 | —
ls8-simplified-2 | 172 | 673 | 6 | > 4.3083 | 4.3379, 100 | -1.8305, 1200 | 2.8516, 300
ls8-simplified-4 | 119 | 410 | 6 | > 2.2479 | 2.3399, 63 | 2.7037, 1200 | -6.7132, 174
ls8-simplified-5 | 83 | 231 | 6 | > 1.3424 | 1.3880, 40 | 1.3688, 600 | 1.3420, 51
bound given by TRW (which is generally much faster but inaccurate), for a range of w values. What is plotted is the accuracy, log Z − log Z∗. We see that the estimate provided by FocusedFlatSAT is very accurate throughout the range of w values. For the ferromagnetic model, the bounds obtained by TRW, on the other hand, are tight only when the weights are sufficiently high, when essentially only the two ground states of energy zero matter. On spin glasses, where computing ground states is itself an intractable problem, TRW is unsurprisingly inaccurate even in the high weights regime. The consistent accuracy of FocusedFlatSAT here is a strong indication that the method is accurately computing the density of most of the underlying states. This is because, as the weight w changes, the value of the partition function is dominated by the contributions of a different set of states.
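For intuition, the Ising energy and partition function defined above can be checked by brute-force enumeration on a tiny lattice (our own sketch; exact only for very small n, since it enumerates all 2^(n·n) spin configurations):

```python
import itertools
import math

def ising_log_partition(n, w):
    """Exact log Z for an n x n ferromagnetic Ising grid with
    E(sigma) = sum over lattice edges (i, j) of w * I[sigma_i != sigma_j],
    computed by enumerating all 2^(n*n) spin configurations."""
    edges = []
    for i in range(n):
        for j in range(n):
            v = i * n + j
            if i + 1 < n:
                edges.append((v, (i + 1) * n + j))   # vertical edge
            if j + 1 < n:
                edges.append((v, v + 1))             # horizontal edge
    Z = 0.0
    for sigma in itertools.product((0, 1), repeat=n * n):
        E = sum(w for a, b in edges if sigma[a] != sigma[b])
        Z += math.exp(-E)
    return math.log(Z)
```

As w grows, log Z approaches log 2, reflecting the two aligned ground states that dominate the ferromagnetic sum, which is exactly the regime in which the TRW bounds above become tight.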
Table 2 (top) shows a comparison with IJGP-SampleSearch and Gibbs sampling for the ferromagnetic case with w = 1. Here FocusedFlatSAT provides the most accurate estimates, even when other methods are given a longer running time. E.g., IJGP is two orders of magnitude off for the 32 × 32 grid.3 Results with other weights are similar but omitted due to limited space. FocusedFlatSAT also significantly outperforms IJGP and Gibbs sampling in accuracy on the circuit synthesis instance 2bitmax6. All methods perform well on randomly generated 3-SAT instances, but FocusedFlatSAT is much faster. As another test case, we use formulas from a previously used model counting benchmark involving n × n Latin Square completion [11], and add a weight w to each constraint. Since these instances have high treewidth, are non-planar, and are beyond the reach of direct enumeration, we don't have ground truth for this benchmark. However, we are able to provide a lower bound,4 which is given by the number of models of the original formula. Our results are reported in Table 2. Our lower bound indicates that the estimate given by FocusedFlatSAT is more accurate, even when other methods are given a longer running time. As the last 3 lines of the table show, IJGP and Gibbs sampling improve in performance as the problem is simplified more and more, by fixing the values of 2, 4, or 5 "cells" and simplifying the instance. Nonetheless, on the un-simplified ls8-normalized with weight 6, both IJGP and Gibbs sampling underestimate by over 12 orders of magnitude. 3On smaller instances with limited treewidth, IJGP-SampleSearch quickly provides good estimates. 4The upper bound provided by TRW is very loose on this benchmark (possibly because of the conversion to a pairwise field) and not reported. Table 3: Weight learning: likelihood of the training data x computed using learned weights.
Type | Training Data | Optimal Likelihood (O) | FocusedFlatSAT Accuracy (F/O) | Alchemy Accuracy (A/O)
ThreeChain(30) | x = data-30-1 | 4.09 × 10^-27 | 1.0 | 0.08
ThreeChain(30) | x = data-30-2 | 9.31 × 10^-10 | 1.0 | 0.93
FourChain(5) | x = dataFC-5-1 | 5.77 × 10^-6 | 1.0 | 0.61
FourChain(5) | x = dataFC-5-2 | 3.84 × 10^-3 | 1.0 | 0.000097
HChain(10) | x = dataH-10-1 | 1.19 × 10^-9 | 1.0 | 0.87
HChain(10) | x = dataH-10-2 | 2.62 × 10^-9 | 1.0 | 0.53
SocialNetwork(5) | x = data-SN-1 | 2.98 × 10^-8 | 1.0 | 0.69
SocialNetwork(5) | x = data-SN-2 | 2.44 × 10^-9 | 1.0 | 0.2
Weight learning. Suppose the set of soft constraints Csoft is composed of M disjoint sets of constraints {Si}, i = 1, . . . , M, where all the constraints c ∈ Si have the same weight wi that we wish to learn from data (for instance, these constraints can all be groundings of the same first order formula in Markov Logic [8]). Let us assume for simplicity that there are no hard constraints. The probability Pw(x) can be parameterized by a weight vector w = (w1, . . . , wM). The key observation is that the partition function can be written as Z(w) = Σℓ1 Σℓ2 · · · ΣℓM n(ℓ1, . . . , ℓM) exp(−w · ℓ), where n(ℓ1, . . . , ℓM) gives the number of configurations that violate ℓi constraints of type Si for i = 1, . . . , M. This function n(ℓ1, . . . , ℓM) is precisely the density of states required to compute Z(w) for all values of w, without additional inference steps. Given training data x ∈ {0, 1}N, the problem of weight learning is that of finding arg maxw Pw(x), where Pw(x) is given by Eqn. (1). Once we compute n(ℓ1, . . . , ℓM) using FocusedFlatSAT, we can efficiently evaluate Z(w), and therefore Pw(x), as a function of the parameters w = (w1, . . . , wM). Using this efficient evaluation as a black-box, we can solve the weight learning problem using a numerical optimization package with no additional inference steps required.5 We evaluate this learning method on relatively simple instances on which commonly used software such as Alchemy can be a few orders of magnitude off from the optimal likelihood of the training data.
Specifically, Table 3 compares the likelihood of the training data under the weights learned by FocusedFlatSAT and by Generative Weight Learning [7], as implemented in Alchemy, for four types of Markov Logic theories. The Optimal Likelihood value is obtained using a partition function computed either by direct enumeration or using analytic results for the synthetic instances. The instance ThreeChain(K) is a grounding of the following first order formulas: ∀x P(x) ⇒ Q(x), ∀x Q(x) ⇒ R(x), ∀x R(x) ⇒ P(x), while FourChain(K) is a similar chain of 4 implications. The instance HChain(K) is a grounding of ∀x P(x) ∧ Q(x) ⇒ R(x), ∀x R(x) ⇒ P(x), where x ∈ {a1, a2, . . . , aK}. The instance SocialNetwork(K) (from the Alchemy Tutorial) is a grounding of the following first order formulas, where x, y ∈ {a1, a2, . . . , aK}: ∀x ∀y Friend(x, y) ⇒ (Smokes(x) ⇔ Smokes(y)), ∀x Smokes(x) ⇒ Cancer(x). Table 3 shows the accuracy of FocusedFlatSAT and Alchemy for the weight learning task, as measured by the resulting likelihood of observing the data in the learned model, which we are trying to maximize. The accuracy is measured as the ratio of the likelihood in the learned model (F and A, resp.) to the optimal likelihood (O). In these instances, FocusedFlatSAT always matches the optimal likelihood up to two digits of precision, while Alchemy can underestimate it by several orders of magnitude, e.g., by over 4 orders in the case of FourChain(5).
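For a single constraint type (M = 1), the procedure just described can be sketched as follows; `density[l]` plays the role of n(ℓ), and the hypothetical `learn_weight` maximizes the log-likelihood by bisection on its derivative (a minimal sketch of the idea under our own names, not the authors' implementation):

```python
import math

def log_Z(density, w):
    """Z(w) = sum over violation counts l of n(l) * exp(-w * l), where
    density[l] = n(l) is the number of configurations violating exactly
    l soft constraints (single constraint type, M = 1)."""
    return math.log(sum(n * math.exp(-w * l) for l, n in density.items()))

def log_likelihood(density, w, l_data):
    """log P_w(x) = -w * l(x) - log Z(w) for training data x that
    violates l_data constraints."""
    return -w * l_data - log_Z(density, w)

def learn_weight(density, l_data, lo=-10.0, hi=10.0, iters=60):
    """Maximize the concave log-likelihood in w by bisection on its
    derivative: d/dw log P_w(x) = -l_data + E_w[l], which vanishes when
    the expected violation count under P_w matches the data's."""
    for _ in range(iters):
        w = (lo + hi) / 2.0
        Z = sum(n * math.exp(-w * l) for l, n in density.items())
        E_l = sum(l * n * math.exp(-w * l) for l, n in density.items()) / Z
        if E_l > l_data:
            lo = w   # a larger w pushes E_w[l] down
        else:
            hi = w
    return (lo + hi) / 2.0
```

Note that once `density` is computed, re-evaluating Z(w) for a new w is a cheap sum, which is exactly why no further inference is needed during the optimization.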
We show an application of this property to weight learning in Markov Logic Networks. 5Storing the full density function n(ℓ1, . . . , ℓM) of course requires space (and hence time) that is exponential in M. One must use a relatively coarse partitioning of the state space for scalability when M is large. References [1] Martin J. Wainwright and Michael I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Now Publishers Inc., Hanover, MA, USA, 2008. [2] N.N. Schraudolph and D. Kamenetsky. Efficient exact inference in planar Ising models. In Proc. of NIPS-08, 2008. [3] F. Wang and D.P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Physical Review Letters, 86(10):2050–2053, 2001. [4] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005. [5] Vibhav Gogate and Rina Dechter. SampleSearch: A scheme that searches for consistent samples. Journal of Machine Learning Research, 2:147–154, 2007. [6] Mark Jerrum and Alistair Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration, pages 482–520. PWS Publishing Co., Boston, MA, USA, 1997. [7] P. Domingos, S. Kok, H. Poon, M. Richardson, and P. Singla. Unifying logical and statistical AI. In Proc. of AAAI-06, pages 2–7, Boston, Massachusetts, 2006. AAAI Press. [8] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107–136, 2006. [9] H. Poon and P. Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In Proc. of AAAI-06, pages 458–463, 2006. [10] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, 2005. [11] S. Ermon, C. Gomes, and B. Selman. Computing the density of states of Boolean formulas. In Proc.
of CP-2010, 2010. [12] B. Selman, H.A. Kautz, and B. Cohen. Local search strategies for satisfiability testing. In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1996. [13] C.P. Gomes, J. Hoffmann, A. Sabharwal, and B. Selman. From sampling to model counting. In Proc. of IJCAI-07, 2007. [14] V. Gogate and R. Dechter. Approximate counting by sampling the backtrack-free search space. In Proc. of AAAI-07, pages 198–203, 2007.
On Strategy Stitching in Large Extensive Form Multiplayer Games Richard Gibson and Duane Szafron Department of Computing Science, University of Alberta Edmonton, Alberta, T6G 2E8, Canada {rggibson | dszafron}@ualberta.ca Abstract Computing a good strategy in a large extensive form game often demands an extraordinary amount of computer memory, necessitating the use of abstraction to reduce the game size. Typically, strategies from abstract games perform better in the real game as the granularity of abstraction is increased. This paper investigates two techniques for stitching a base strategy in a coarse abstraction of the full game tree, to expert strategies in fine abstractions of smaller subtrees. We provide a general framework for creating static experts, an approach that generalizes some previous strategy stitching efforts. In addition, we show that static experts can create strong agents for both 2-player and 3-player Leduc and Limit Texas Hold’em poker, and that a specific class of static experts can be preferred among a number of alternatives. Furthermore, we describe a poker agent that used static experts and won the 3-player events of the 2010 Annual Computer Poker Competition. 1 Introduction Many sequential decision-making problems are commonly modelled as an extensive form game. Extensive games are very versatile due to their ability to represent multiple agents, imperfect information, and stochastic events. For many real-world problems, however, the extensive form game representation is too large to be feasibly handled by current techniques. To address this limitation, strategies are often computed in abstract versions of the game that group similar states together into single abstract states. For very large games, these abstractions need to be quite coarse, leaving many different states indistinguishable. However, for smaller subtrees of the full game, strategies can be computed in much finer abstractions. 
Such "expert" strategies can then be pieced together, typically connecting to a "base strategy" computed in the full coarsely-abstracted game. A disadvantage of this approach is that we may make assumptions about the other agents' strategies. In addition, by computing the base strategy and the experts separately, we may lose "cohesion" among the different components. We investigate stitched strategies in extensive form games, focusing on the trade-offs between the sizes of the abstractions versus the assumptions made and the cohesion among the computed strategies. We define two strategy stitching techniques: (i) static experts that are computed in very fine abstractions with varying degrees of assumptions and little cohesion, and (ii) dynamic experts that are contained in abstractions with lower granularity, but make fewer assumptions and have perfect cohesion. This paper unifies previous strategy stitching efforts [1, 2, 11] under a more general static expert framework. We use poker as a testbed to demonstrate that, despite recent mixed results, static experts can create much stronger overall agents than the base strategy alone. Furthermore, we show that under a fixed memory limitation, a specific class of static experts is preferred to several alternatives. As a final validation of these results, we describe entries to the 2010 Annual Computer Poker Competition1 (ACPC) that used static experts to win the 3-player events. 2 Background An extensive form game [9] is a rooted directed tree, where nodes represent decision states, edges represent actions, and terminal nodes hold end-game utility values for players. For each player, the decision states are partitioned into information sets such that game states within an information set are indistinguishable to the player. Non-singleton information sets arise due to hidden information that is only available to a subset of the players, such as private cards in poker.
More formally: Definition 2.1 (Osborne and Rubinstein [9, p. 200]) A finite extensive game Γ with imperfect information has the following components: • A finite set N of players. • A finite set H of sequences, the possible histories of actions, such that the empty sequence is in H and every prefix of a sequence in H is also in H. Z ⊆ H are the terminal histories (those which are not a prefix of any other sequence). A(h) = {a | ha ∈ H} are the actions available after a nonterminal history h ∈ H. • A function P that assigns to each nonterminal history h ∈ H\Z a member of N ∪ {C}. P is the player function. P(h) is the player who takes an action after the history h. If P(h) = C, then chance determines the action taken after history h. Define Hi := {h ∈ H | P(h) = i}. • A function fC that associates with every history h for which P(h) = C a probability measure fC(·|h) on A(h) (fC(a|h) is the probability that a occurs given h), where each such probability measure is independent of every other such measure. • For each player i ∈ N a partition Ii of Hi with the property that A(h) = A(h′) whenever h and h′ are in the same member of the partition. For I ∈ Ii, we denote by A(I) the set A(h) and by P(I) the player P(h) for any h ∈ I. Ii is the information partition of player i; a set I ∈ Ii is an information set of player i. • For each player i ∈ N a utility function ui from the terminal histories Z to the real numbers R. If N = {1, 2} and u1 = −u2, it is a 2-player zero-sum extensive game. Define ∆u,i := maxz ui(z) − minz ui(z) to be the range of the utilities for player i. A strategy for player i, σi, is a function such that for each information set I ∈ Ii, σi(I) is a probability distribution over A(I). Let Σi be the set of all strategies for player i. For h ∈ I, we define σi(h) := σi(I). A strategy profile σ consists of a strategy σi for each player i ∈ N.
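As a toy illustration of these definitions, the expected utilities a strategy profile induces can be computed by recursing over histories (our own minimal encoding: histories as strings, a dict-based game description with terminal payoffs `u`, player function `P`, and chance probabilities `fc`; all names are assumptions for illustration, not from the paper):

```python
def expected_utility(h, game, profile):
    """Expected utility vector (one entry per player) of the subgame
    rooted at history h, given a strategy profile.  `game['u'][z]` holds
    terminal payoffs, `game['P'][h]` is the acting player ('C' = chance),
    `game['fc'][h]` is chance's action distribution, and `profile[p][h]`
    is player p's action distribution at h."""
    if h in game['u']:
        return game['u'][h]                  # terminal history z
    p = game['P'][h]
    dist = game['fc'][h] if p == 'C' else profile[p][h]
    total = None
    for a, prob in dist.items():             # weight subtrees by action prob
        sub = expected_utility(h + a, game, profile)
        if total is None:
            total = [prob * v for v in sub]
        else:
            total = [t + prob * v for t, v in zip(total, sub)]
    return total
```

In a real implementation the distribution would be looked up per information set rather than per history, so that it is constant across indistinguishable histories, as the definition requires.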
We let σ−i refer to all the strategies in σ except σi, and denote ui(σ) to be the expected utility for player i given that all players play according to σ. In a 2-player zero-sum game, a best response to a player 1 strategy σ1 is a player 2 strategy σ2BR = argmaxσ2 u2(σ1, σ2) (similarly for a player 2 strategy σ2). The best response value of σ1 is u2(σ1, σ2BR), which measures the exploitability of σ1. The exploitability of a strategy tells us how much that strategy loses to a worst-case opponent. Outside of 2-player zero-sum games, the worst-case scenario for player i would be for all other players to minimize player i's utility instead of maximizing their own. In large games, this value is difficult to compute since opponents cannot share private information. Thus, we only investigate exploitability for 2-player zero-sum games. Counterfactual regret minimization (CFR) [14] is an iterative procedure for computing strategy profiles in extensive form games. In 2-player zero-sum games, CFR produces an approximate Nash equilibrium profile. In addition, CFR strategies have also been found to compete very well in games with more than 2 players [1]. CFR's memory requirements are proportional to the number of information sets in the game times the number of actions available at an information set. The extensive form game representation of many real-world problems is too large to feasibly compute a strategy directly. A common approach in these games is to first create an abstract game by combining information sets into single abstract states or by disallowing certain actions: 1http://www.computerpokercompetition.org Figure 1: (a) An abstraction of an extensive game, where states connected by a bold curve are in the same information set and thin curves denote merged abstract information sets.
In the unabstracted game, player 1 cannot distinguish between whether chance generated b or c and player 2 cannot distinguish between a and b. In the abstract game, neither player can distinguish between any of chance's outcomes. (b) An example of a game Γ′ derived from the unabstracted game Γ in (a) for a dynamic expert strategy. Here, the abstraction from (a) is used as the base abstraction, and the null abstraction is employed on the subtree with G1,1 = ∅ and G2,1 = {al, bl, cl} (bold states). Definition 2.2 (Waugh et al. [12]) An abstraction for player i is a pair αi = ⟨αI i , αA i ⟩, where • αI i is a partition of Hi defining a set of abstract information sets coarser than Ii (i.e., every I ∈ Ii is a subset of some set in αI i ), and • αA i is a function on histories where αA i (h) ⊆ A(h) and αA i (h) = αA i (h′) for all histories h and h′ in the same abstract information set. We will call this the abstract action set. The null abstraction for player i is φi = ⟨Ii, A⟩. An abstraction α is a set of abstractions αi, one for each player. Finally, for any abstraction α, the abstract game, Γα, is the extensive game obtained from Γ by replacing Ii with αI i and A(h) with αA i (h) when P(h) = i, for all i ∈ N. Figure 1a shows an example of an abstracted extensive form game with no action abstraction. By reducing the number of information sets, computing strategies in an abstract game with an algorithm such as CFR requires less memory than computing strategies in the real game. Intuitively, if a strategy profile for the abstract game σ performs well in Γα, and if αI i is defined such that merged information sets are "strategically similar," then σ is also likely to perform well in Γ. Identifying strategically similar information sets can be delicate, though, and typically becomes a domain-specific task. Nevertheless, we often would like to have as much granularity in our abstraction as will fit in memory to allow computed strategies to be as diverse as necessary.
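For reference, the update CFR iterates at each information set is regret matching: actions are played in proportion to their positive cumulative counterfactual regret (a minimal sketch of that rule in our own code, not the authors' implementation):

```python
def regret_matching(cum_regret):
    """Turn cumulative regrets at one information set into the next
    strategy: actions with positive cumulative regret are played in
    proportion to that regret; if none is positive, play uniformly."""
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    if total > 0.0:
        return [r / total for r in positive]
    n = len(cum_regret)
    return [1.0 / n] * n
```

A full CFR implementation stores one such regret vector per information set, which is exactly why its memory cost scales with information sets times actions, as noted above.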
3 Strategy Stitching To achieve abstractions with finer granularity, a natural approach is to break the game up into subtrees, abstract each of the subtrees, and compute a strategy for each abstract subtree independently. We introduce a formalism for doing so that generalizes Waugh et al.'s strategy grafting [11] and two poker-specific methods described in Section 5. First, select a subset S ⊆ N of players. Secondly, for each i ∈ S, compute a base strategy σi for playing the full game. Next, divide the game into subtrees: Definition 3.1 (Waugh et al. [11]) Gi = {Gi,0, Gi,1, ..., Gi,p} is a grafting partition for player i if • Gi is a partition of Hi (possibly containing empty parts), • ∀I ∈ Ii, ∃j ∈ {0, 1, ..., p} such that I ⊆ Gi,j, and • ∀j ∈ {1, 2, ..., p}, h ∈ Gi,j, and h′ ∈ Hi, if h is a prefix of h′, then h′ ∈ Gi,j ∪ Gi,0. For each i ∈ S, choose a grafting partition Gi so that each partition has an equal number of parts p. Then, compute a strategy, or static expert, for each subtree using any strategy computation technique, such as CFR. Finally, since the subtrees are disjoint, create a static expert strategy by combining the static experts without any overlap to the base strategy in the undivided game: Figure 2: Two examples of a game Γj for a static expert derived from the unabstracted game Γ in Figure 1a. In both (a) and (b), G2,j = {al, bl, cl} (bold states). If player 1 takes action r, player 2 no longer controls his or her decisions. Player 2's actions are instead generated by the base strategy σ2, computed beforehand. In (a), we have S = {2}. On the other hand, in (b), S = N = {1, 2}, G1,j = ∅, and hence all of player 1's actions are seeded by the base strategy σ1. Definition 3.2 Let S ⊆ N be a nonempty subset of players. For each i ∈ S, let σi be a strategy for player i and Gi = {Gi,0, Gi,1, ..., Gi,p} be a grafting partition for player i.
For j ∈ {1, 2, ..., p}, define Γj to be an extensive game derived from the original game Γ where, for all i ∈ S and h ∈ Hi\Gi,j, we set P(h) = C and fC(a|h) = σi(h, a). That is, each player i ∈ S only controls actions for histories in Gi,j and is forced to play according to σi elsewhere. Let the static expert of {Gi,j | i ∈ S}, σj, be a strategy profile of the game Γj. Finally, define the static expert strategy for player i, σS i , as σS i (h, a) := σi(h, a) if h ∈ Gi,0, and σS i (h, a) := σj i (h, a) if h ∈ Gi,j. We call {σi | i ∈ S} the base or seeding strategies and {Gi | i ∈ S} the grafting profile for the static expert strategy σS i . Figure 2 shows two examples of a game Γj for a single static expert. This may be the only subtree for which a static expert is computed (p = 1), or there could be more subtrees contained in the grafting partition(s) (p > 1). Under a fixed memory limitation, we can employ finer abstractions for the subtrees Γj than we can in the full game Γ. This is because Γj removes some of the information sets belonging to players in S, freeing up memory for computing strategies on the subtrees. When |S| = 1, the static expert approach is identical to strategy grafting [11, Definition 8], with the exception that each static expert need not be an approximate Nash equilibrium. We relax the definition for static experts because Nash equilibria are difficult to compute in multiplayer games, and may not be the best solution concept outside of 2-player zero-sum games anyway. Choosing |S| > 1, however, is dangerous because we fix opponent probabilities and assume that our opponents are "static" at certain locations. For example, in Figure 2b, it may not be wise for player 2 to assume that player 1 must follow σ1. Doing so can dramatically skew player 2's beliefs about the action generated by chance and hurt the expert's performance against opponents that do not follow σ1.
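At play time, Definition 3.2 amounts to a simple routing rule: follow the base strategy on Gi,0 and the j-th expert on Gi,j. A minimal sketch (our own names; the hypothetical `part_of` encodes the grafting partition):

```python
def make_static_expert_strategy(base, experts, part_of):
    """Stitch a base strategy and per-subtree static experts into the
    combined strategy of Definition 3.2.  `base(h)` and `experts[j](h)`
    map a history to an action distribution; `part_of(h)` returns the
    index j of the grafting-partition part containing h (0 means the
    history is outside every expert subtree, so the base strategy plays)."""
    def sigma(h):
        j = part_of(h)
        return base(h) if j == 0 else experts[j](h)
    return sigma
```

Because the parts Gi,1, ..., Gi,p are disjoint and closed under taking suffixes, exactly one component is responsible for each history, so the stitched strategy is well defined.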
As we will see in Section 6, having more static experts with |S| > 1 can result in a more exploitable static expert strategy. On the other hand, by removing information sets for multiple players, the static expert approach creates smaller subtrees than strategy grafting does. As a result, we can employ even finer abstractions within the subtrees. Section 6 shows that despite the risks, the abstraction gains often lead to static experts with S = N being preferred. Regardless of the choice of S, the base strategy lacks "cohesion" with the static experts, since its computation is based on its own play at the subtrees rather than the experts' play. Though the experts are identically seeded, the base strategy may want to play towards the expert subtrees more often to increase utility. This observation motivates our introduction of dynamic experts that are computed concurrently with a base. The full extensive game is divided into subtrees and each subtree is supplied its own abstraction: Definition 3.3 Let α0, α1, ..., αp be abstractions for the game Γ and for each i ∈ N, let Gi = {Gi,0, Gi,1, ..., Gi,p} be a grafting partition for player i satisfying I ∩ Gi,j ∈ {∅, I} for all j ∈ {0, ..., p} and I ∈ αj,I i . Thus, each abstract information set is contained entirely in some part of the grafting partition. Let Γ′ be the abstract game obtained from Γ by replacing Ii with ∪j=0..p {I ∈ αj,I i | I ⊆ Gi,j} and A(h) with αj,A i (h) when P(h) = i and h ∈ Gi,j, for all i ∈ N. Let the dynamic expert strategy for player i, σ′ i, be a strategy for player i of the game Γ′. Finally, define the dynamic expert of Gi,j, σj i , to be σ′ i restricted to the histories in Gi,j, σ′ i|Gi,j. The abstraction α0 is denoted as the base abstraction and the dynamic expert σ0 i is denoted as the base strategy. Figure 1b contains an abstract game tree Γ′ for a dynamic expert strategy.
We can view a dynamic expert strategy as a strategy computed in an abstraction with differing granularity dependent on the history of actions taken. Note that our definition is somewhat redundant to the definition of abstraction, as we are simply defining a new abstraction for Γ based on the abstractions α0, α1, ..., αp. Nonetheless, we supply Definition 3.3 to provide the terms in bold that we will use throughout. Under memory constraints, a dynamic expert strategy typically sacrifices abstraction granularity in the base strategy to achieve finer granularity in the experts. We hope doing so achieves better performance at parts of the game that we believe may be more important. For instance, importance could depend on the predicted relative frequencies of reaching different subtrees. The base strategy's abstraction is reduced to guarantee perfect cohesion between the base and the experts; the base strategy knows about the experts and can calculate its probabilities "dynamically" during strategy computation based on the feedback from the experts. In Section 6, we contrast static and dynamic experts to compare this trade-off between abstraction size and strategy cohesion. 4 Texas and Leduc Hold'em A hand of Texas Hold'em poker (or simply Hold'em) begins with each player being dealt two private cards, and two players posting mandatory bets or blinds. There are four betting rounds, the pre-flop, flop, turn, and river, where five community cards are successively revealed. Of the players that did not fold, the player with the highest ranked poker hand wins all of the bets. Full rules can be found on-line.2 We focus on the Limit Hold'em variant that fixes the bet sizes and the number of bets allowed per round. We denote the players' actions as f (fold), c (check or call), and r (bet or raise).
Leduc Hold'em [10] (or simply Leduc) is a smaller version of Hold'em, played with a six card deck consisting of two Jacks, two Queens, and two Kings, with only two betting rounds, pre-flop and flop. Rather than using blinds, antes are posted by all players at the beginning of a hand. Only one private card is dealt to each player and one community card is dealt on the flop. While Leduc is small enough to bypass abstraction, Hold'em is a massive game in terms of the number of information sets; 2-player Limit Hold'em has approximately 3 × 10^14 information sets, and 3-player has roughly 5 × 10^17. Applying CFR to these enormous state spaces necessitates abstraction. A common abstraction technique in poker is to group many different card dealings into single abstract states or buckets. This is commonly done by ordering all possible poker hands for a specific betting round according to some metric, such as expected hand strength (E[HS]) or expected hand strength squared (E[HS2]), and then grouping hands with similar metric values into the same bucket [7]. Percentile bucketing with N buckets and M hands puts the top M/N hands into 1 bucket, the next best M/N into a second bucket, etc., so that the buckets are approximately equal in size. More advanced bucketing schemes that use multiple metrics and clustering techniques are possible, but our experiments use simple percentile bucketing with no action abstraction. 5 Related Work Our general framework for applying static experts to any extensive form game captures some previous poker-specific strategy stitching approaches. First, the PsOpti family of agents [2], which play 2-player Limit Hold'em, contain a base strategy called the "pre-flop model" and 7 static experts with S = N, or "post-flop models." Due to resource and technology limitations, the abstractions used to build the pre-flop and post-flop models were quite coarse, making the family no match for today's top agents. 2http://en.wikipedia.org/wiki/Texas_hold_'em
Secondly, Abou Risk and Szafron [1] attach 6 static experts with S = N (which they call “heads-up experts”) to a base strategy for playing 3-player Limit Hold’em. Each expert focuses on a subtree immediately following a fold action, allowing much finer abstractions for these 2-player scenarios. However, their results were mixed, as the stitched strategy was not always better than the base strategy alone. Nonetheless, our positive results for static experts with S = N in Section 6 provide evidence that the PsOpti approach and heads-up experts are indeed credible. In addition, Gilpin and Sandholm [5] create a poker agent for 2-player Limit Hold’em that uses a 2-phase strategy different from the approaches discussed thus far. The first phase is used to play the pre-flop and flop rounds, and is computed similarly to the PsOpti pre-flop model. For the turn and river rounds, a second-phase strategy is computed on-line. One drawback of this approach is that the on-line computations must be quick enough to play in real time. Despite fixing the flop cards, this constraint forced the authors to still employ a very coarse abstraction during the second phase. Furthermore, there have been a few other related approaches to creating poker agents. While 2-player poker is well studied, Ganzfried and Sandholm [3, 4] developed algorithms for computing Nash equilibria in multiplayer games and applied them to a small 3-player jam/fold poker game. Additionally, Gilpin et al. [6] use an automated abstraction-building tool to dynamically bucket hands in 2-player Limit Hold’em. Here, we are not concerned with equilibrium properties or the abstraction-building process itself. In fact, strategy stitching is orthogonal to both strategy computation and abstraction improvements, and could be used in conjunction with more sophisticated techniques.

6 Empirical Evaluation

In this section, we create several stitched strategies in both Leduc and Hold’em using the chance-sampled variant of CFR [14].
CFR is state of the art in terms of memory efficiency for strategy computation, allowing us to employ abstractions with higher granularity than otherwise possible. Results may differ with other techniques for computing strategies and building abstractions. While CFR requires a number of iterations quadratic in the number of information sets to converge [14, Theorem 4], we restrict our resources only in terms of memory. Even though Leduc is small enough to not necessitate strategy stitching, the Leduc experiments were conducted to evaluate our hypothesis that static experts with S = N can improve play. We ran many experiments; for brevity, only a representative sample of the results is summarized. To be consistent with post-flop models [2] and heads-up experts [1], our grafting profiles are defined only in terms of the players’ actions. For each history h ∈ H, define b := b(h) to be the subsequence of h obtained by removing all actions generated by chance. We refer to a b-expert for player i as an expert constructed for the subtree Gi(b) := {h ∈ Hi | b is a prefix of b(h)} containing all histories where the players initially follow b. For example, the experts for the games in Figures 1b, 2a, and 2b are l-experts because the game is split after player 1 takes action l.

Leduc. Our Leduc experiments use three different base abstractions, one of which is simply the null abstraction. The second and third abstractions are the “JQ-K” and “J-QK” abstractions that, on the pre-flop, cannot distinguish whether the private card is a Jack or a Queen, or whether the private card is a Queen or a King, respectively. In addition, these two abstractions can only distinguish whether the flop card pairs with the private card, rather than knowing the identity of the flop card. Because Leduc is such a small game, we do not consider a fixed memory restriction and instead just compare the techniques within the same base abstraction.
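The b(h) notation above lends itself to a small routing sketch: strip the chance actions from a history and hand play to the first expert whose defining sequence is a prefix of the result. The representation of histories as (actor, action) pairs and the function names are our own illustrative assumptions; only the expert names (r, cr, ...) follow the text.

```python
# A minimal sketch of routing histories to b-experts, assuming histories are
# sequences of (actor, action) pairs where actor "c" denotes chance.
# This is an illustration of the definitions in the text, not the paper's code.

def betting_sequence(history):
    """b(h): the subsequence of h with all chance actions removed."""
    return "".join(action for actor, action in history if actor != "c")

def find_expert(history, expert_prefixes):
    """Return the first expert whose sequence is a prefix of b(h), if any."""
    b = betting_sequence(history)
    for prefix in expert_prefixes:
        if b.startswith(prefix):
            return prefix
    return None  # no expert applies: the base strategy handles this history

# 2-player Leduc experts from the text: play moves to an expert at the first raise.
experts = ["r", "cr", "ccr", "cccr"]
h = [("c", "J"), ("c", "Q"), ("1", "c"), ("2", "r"), ("1", "c")]
# b(h) = "crc", so the cr-expert takes over for the remainder of the hand.
```

Because the expert sequences here are exactly the sequences ending in the first raise, every history reaches at most one expert, matching the disjoint-subtree requirement of grafting partitions.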
For both 2-player and 3-player, for each of the three base abstractions, and for each player i, we build a base strategy, a dynamic expert strategy, an S = {i} static expert strategy, and two S = N static expert strategies. Recall that choosing S = {i} means that during computation of each static expert, we only fix player i’s action probabilities outside of the expert subtree, whereas S = N means that we fix all players outside of the subtree. For 2-player Leduc, we use r, cr, ccr, and cccr-experts for both players. Thus, the base strategy plays until the first raise occurs, at which point an expert takes over for the remainder of the hand. As an exception, only one of our two S = N static expert strategies, named “All,” uses all four experts; the other, named “Pre-flop,” uses just the r and cr-experts. For 3-player Leduc, we use r, cr, ccr, cccr, ccccr, and cccccr-experts, except the “Pre-flop” static strategies use just the three experts r, cr, and ccr. The null abstraction is employed on every expert subtree. Each run of CFR is stopped after 100 million iterations, which for 2-player yields strategies within a milli-ante of equilibrium in the abstract game.

Table 1: The size, earnings, and exploitability of the 2-player (2p) Leduc strategies in the JQ-K base abstraction, and the size and earnings of the 3-player (3p) strategies in the J-QK base abstraction. The sizes are measured in terms of the maximum number of information sets present within a single CFR computation. Earnings, as described in the text, and exploitability are in milli-antes per hand.

Strategy (2p)         Size   Earns.   Exploit.    Strategy (3p)         Size   Earns.
Base                   132   24.73    496.31      Base                  1890   -68.46
Dynamic                444   45.75    159.84      Dynamic               6903   113.04
Static.S={i}           226   28.87    167.61      Static.S={i}          3017    96.14
Static.S=N.All         186   29.20    432.74      Static.S=N.All        2145   117.01
Static.S=N.Pre-flop    186   37.77    214.44      Static.S=N.Pre-flop   2145   119.73
Each strategy is evaluated against all combinations and orderings of opponent strategies where all strategies use different base abstractions, and the scores are averaged together. For example, for each of our 2-player strategy profiles σ in the JQ-K base abstraction, we compute 1/2 (u1(σ1, σ′2) + u2(σ′1, σ2)), averaged over all profiles σ′ that use either the null or J-QK base abstraction. Leduc is a small enough game that the utilities can be computed exactly. A selection of these scores, along with 2-player exploitability values, is reported in Table 1. Firstly, by increasing abstraction granularity, all of the JQ-K strategies employing experts earn more than the base strategy alone. Secondly, Dynamic and Static.S=N earn more overall than Static.S={i}, despite the 2-player Static.S=N being more exploitable due to the opponent action assumptions. In fact, despite requiring much less memory to compute, Static.S=N surprisingly earns more than Dynamic in 3-player Leduc. Finally, we see that using only two pre-flop static experts, as opposed to all four, reduces the number of dangerous assumptions and provides a stronger and less exploitable strategy. However, as expected, Dynamic and Static.S={i} are less exploitable.

Hold’em. Our Hold’em experiments enforce a fixed memory restriction per run of CFR, which we artificially set to 24 million information sets for 2-player and 162 million information sets for 3-player. We compute stitched strategies of each type using as many percentile E[HS^2] buckets as possible within the restriction. Our 2-player abstractions distribute buckets as close to uniformly as possible across the betting rounds while remembering buckets from previous rounds (known as “perfect recall”). Our 3-player abstractions are similar, except they use 169 pre-flop buckets that are forgotten on later rounds (known as “imperfect recall”; see [1] and [13] for more regarding CFR and imperfect recall).
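The cross-play averaging scheme for Leduc described above can be made concrete with a toy sketch. Here `exact_utility` stands in for Leduc's exact game evaluation, which we do not implement, and the utility numbers are invented purely for illustration.

```python
# Hedged sketch of the Leduc cross-play scoring: a profile sigma is scored
# against each opponent profile sigma' by averaging the utility of playing
# each seat, i.e. 1/2 (u1(sigma_1, sigma'_2) + u2(sigma'_1, sigma_2)),
# then averaging over opponents. Utilities here are fabricated examples.

def score(sigma, opponents, exact_utility):
    """Average seat-symmetric utility of sigma against each opponent profile."""
    total = 0.0
    for sigma_p in opponents:
        total += 0.5 * (exact_utility(1, sigma, sigma_p)
                        + exact_utility(2, sigma_p, sigma))
    return total / len(opponents)

# Illustrative utility table (milli-antes per hand), keyed by (seat, p1, p2):
table = {(1, "JQ-K", "null"): 30.0, (2, "null", "JQ-K"): 20.0,
         (1, "JQ-K", "J-QK"): 50.0, (2, "J-QK", "JQ-K"): 40.0}
u = lambda seat, p1, p2: table[(seat, p1, p2)]
avg = score("JQ-K", ["null", "J-QK"], u)  # ((30+20)/2 + (50+40)/2) / 2 = 35.0
```

Averaging over both seatings removes any positional advantage from the comparison, so the score reflects only the strength of the abstraction and stitching choices.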
For 2-player, our dynamic strategy has just an r-expert, our S = {i} static strategy uses r, cr, ccr, and cccr-experts, and our S = N static strategy employs r and cr-experts. These choices were based on preliminary experiments to make the most effective use of the limited memory available for each stitching approach. Following Abou Risk and Szafron [1], our 3-player stitched strategies all have f, rf, rrf, and rcf-experts, as these appear to be the most commonly reached 2-player scenarios [1, Table 4]. Our abstractions vary dramatically in terms of the number of buckets. For example, in 3-player, our dynamic strategy’s base abstraction has just 8 river buckets with 7290 river buckets for each expert, whereas our static strategies have 16 river buckets in the base abstraction with up to 194,481 river buckets for the S = N static rcf-expert abstraction. For reference, all of the 2-player base and expert strategies are built from 720 million iterations of CFR, while we run CFR for 100 million and 5 billion iterations for the 3-player base and expert strategies respectively. We evaluate our 2-player strategies by playing 500,000 duplicate hands (players play both sides of the dealt cards) of poker between each pair of strategies. In addition to our base and stitched strategies, we also included a base strategy called “Base.797M” in an abstraction with over 797 million information sets that we expected to beat all of the strategies we were evaluating. Furthermore, using a specialized best response tool [8], we computed the exploitability of our 2-player strategies. For 3-player, we play 500,000 triplicate hands (each set of dealt cards played 6 times, once for each of the player seatings) between each combination of 3 strategies.
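Duplicate-hand evaluation, as used above, is a variance-reduction device: the same dealt cards are played with seats swapped and the results averaged, so much of the card luck cancels. A toy sketch, with a deliberately simplified model of a hand in which a strategy's result is its skill edge plus its card luck (our own modeling assumption, not the paper's), shows why:

```python
# Sketch of duplicate evaluation. play-outs are modeled as skill + card luck;
# playing both seatings of the same cards cancels the shared luck term.
import random

def duplicate_match(skill_a, skill_b, num_hands, seed=0):
    """Average per-hand result for A over duplicate hands against B."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_hands):
        luck = rng.gauss(0, 100)  # card luck, large compared to skill edges
        # A plays the lucky seat, then B plays the same lucky seat:
        result_a = (skill_a + luck) - skill_b
        result_b = skill_a - (skill_b + luck)
        total += 0.5 * (result_a + result_b)  # luck cancels exactly here
    return total / num_hands

edge = duplicate_match(skill_a=5.0, skill_b=3.0, num_hands=1000)
# In this simplified model the luck term cancels exactly, leaving A's
# true edge of 2.0 per hand regardless of the random seed.
```

In real poker the cancellation is only partial, since strategies react to their cards, but the variance reduction is what makes differences of a few milli-big-blinds per hand statistically detectable over 500,000 hands.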
We also included two other strategies: “ACPC-09,” the 2009 ACPC 3-player event winner that did not use experts (Abou Risk and Szafron [1] call it “IR16”), and “ACPC-10,” a static expert strategy that won a 3-player event at the 2010 ACPC and is outlined at the end of this section. The results are provided in Table 2.

Table 2: Earnings and 95% confidence intervals over 500,000 duplicate hands of 2-player Hold’em per pairing, and over 500,000 triplicate hands of 3-player Hold’em per combination. The exploitability of the 2-player strategies is also provided. All values are in milli-big-blinds per hand.

Strategy (2p)   Earnings         Exploitability    Strategy (3p)   Earnings
Base            −10.47 ± 1.99    310.04            Base            −6.09 ± 0.71
Dynamic          −4.43 ± 1.98    307.76            Dynamic         −4.91 ± 0.75
Static.S={i}    −13.13 ± 2.00    301.00            Static.S={i}    −5.20 ± 0.70
Static.S=N       −4.57 ± 1.95    288.82            Static.S=N       3.06 ± 0.70
Base.797M        32.59 ± 2.14    135.43            ACPC-09        −14.15 ± 0.89
                                                   ACPC-10         27.29 ± 0.86

Firstly, in 2-player, we see that Static.S=N and Dynamic outperform Static.S={i} considerably, agreeing with the previous Leduc results. In fact, Static.S={i} fails to even improve upon the base strategy. For 3-player, Static.S=N is noticeably ahead of both Dynamic and Static.S={i}, as it is the only strategy, aside from ACPC-10, to win money. By forcing one player to fold, the static experts with S = N essentially reduce the size of the game tree from a 3-player to a 2-player game, allowing many more buckets to be used. This result indicates that, at least for poker, the gains in abstraction bucketing outweigh the risks of forced-action assumptions and the lack of cohesion between the base strategy and the experts. Furthermore, Static.S=N is slightly less exploitable in 2-player than the base strategy and the other two stitched strategies. While there are one and two opponent static actions assumed by the r and cr-experts respectively, trading these few assumptions for an increase in abstraction granularity is beneficial.
In summary, static experts with S = N are preferred to both dynamic experts and static experts with S = {i} in the experiments we ran. An additional validation of the quality of the static expert approach was provided by the 2010 ACPC. The winning entries in both 3-player events employed static experts with S = N. The base strategy, computed from 70 million iterations of CFR, used 169, 900, 100, and 25 buckets on each of the respective rounds. Four experts were used, f, rf, rrf, and rcf, computed from 10 billion iterations of CFR, each containing 169, 60,000, 180,000, and 26,160 buckets on the respective rounds. In addition, clustering techniques on strength distributions were used instead of percentile bucketing. Two strategies were created, where one was trained to play slightly more aggressively for the total bankroll event. Each version finished in first place in its respective competition.

7 Conclusions

We discussed two strategy stitching techniques for extensive games, including static experts that generalize strategy grafting and some previous techniques used in poker. Despite the accompanying potential dangers and lack of cohesion, we have shown that static experts with S = N outperform the dynamic experts and S = {i} static experts that we considered, especially when memory limitations are present. However, additional static experts with several forced actions can lead to a more exploitable strategy. Static experts with S = N are currently our preferred method for creating multiplayer poker strategies and would be our first option for playing other large extensive games. Future work includes finding a way to create more cohesion between the base strategy and the static experts. One possibility is to rebuild the base strategy after the experts have been created so that the base strategy’s play is more unified with the experts. In addition, we have yet to experiment with 3-player “hybrid” static experts where |S| = 2.
Finally, there are many ways to combine the stitching techniques described in this paper. One possibility is to use a dynamic expert strategy as the base strategy of a static expert strategy. In addition, static experts could themselves be dynamic expert strategies for the appropriate subtrees. Such combinations may produce even stronger strategies than those produced in this paper.

Acknowledgments

We would like to thank Westgrid and Compute Canada for their computing resources that were used during this work. We would also like to thank the members of the Computer Poker Research Group at the University of Alberta for their helpful pointers throughout this project. This research was funded by NSERC and Alberta Ingenuity, now part of Alberta Innovates - Technology Futures.

References

[1] N. Abou Risk and D. Szafron. Using counterfactual regret minimization to create competitive multiplayer poker agents. In AAMAS, pages 159–166, 2010.
[2] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In IJCAI, pages 661–668, 2003.
[3] S. Ganzfried and T. Sandholm. Computing an approximate jam/fold equilibrium for 3-agent no-limit Texas Hold’em tournaments. In AAMAS, 2008.
[4] S. Ganzfried and T. Sandholm. Computing equilibria in multiplayer stochastic games of imperfect information. In IJCAI, 2009.
[5] A. Gilpin and T. Sandholm. Better automated abstraction techniques for imperfect information games, with application to Texas Hold’em poker. In AAMAS, 2007.
[6] A. Gilpin, T. Sandholm, and T. B. Sørensen. Potential-aware automated abstraction of sequential games, and holistic equilibrium analysis of Texas Hold’em poker. In AAAI, 2007.
[7] M. Johanson. Robust strategies and counter-strategies: Building a champion level computer poker player. Master’s thesis, University of Alberta, 2007.
[8] M. Johanson, K. Waugh, M. Bowling, and M. Zinkevich.
Accelerating best response calculation in large extensive games. In IJCAI, 2011. To appear.
[9] M. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[10] F. Southey, M. Bowling, B. Larson, C. Piccione, N. Burch, D. Billings, and C. Rayner. Bayes’ bluff: Opponent modelling in poker. In UAI, pages 550–558, 2005.
[11] K. Waugh, M. Bowling, and N. Bard. Strategy grafting in extensive games. In NIPS-22, pages 2026–2034, 2009.
[12] K. Waugh, D. Schnizlein, M. Bowling, and D. Szafron. Abstraction pathologies in extensive games. In SARA, pages 781–788, 2009.
[13] K. Waugh, M. Zinkevich, M. Johanson, M. Kan, D. Schnizlein, and M. Bowling. A practical use of imperfect recall. In SARA, pages 175–182, 2009.
[14] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione. Regret minimization in games with incomplete information. In NIPS-20, pages 905–912, 2008.