Annealing Between Distributions by Averaging Moments

Roger Grosse
Comp. Sci. & AI Lab, MIT
Cambridge, MA 02139

Chris J. Maddison
Dept. of Computer Science, University of Toronto
Toronto, ON M5S 3G4

Ruslan Salakhutdinov
Depts. of Statistics and Comp. Sci., University of Toronto
Toronto, ON M5S 3G4, Canada

Abstract

Many powerful Monte Carlo techniques for estimating partition functions, such as annealed importance sampling (AIS), are based on sampling from a sequence of intermediate distributions which interpolate between a tractable initial distribution and the intractable target distribution. The near-universal practice is to use geometric averages of the initial and target distributions, but alternative paths can perform substantially better. We present a novel sequence of intermediate distributions for exponential families defined by averaging the moments of the initial and target distributions. We analyze the asymptotic performance of both the geometric and moment averages paths and derive an asymptotically optimal piecewise linear schedule. AIS with moment averaging performs well empirically at estimating partition functions of restricted Boltzmann machines (RBMs), which form the building blocks of many deep learning models.

1 Introduction

Many generative models are defined in terms of an unnormalized probability distribution, and computing the probability of a state requires computing the (usually intractable) partition function. This is problematic for model selection, since one often wishes to compute the probability assigned to held-out test data. While partition function estimation is intractable in general, there has been extensive research on variational [1, 2, 3] and sampling-based [4, 5, 6] approximations. In the context of model comparison, annealed importance sampling (AIS) [4] is especially widely used because, given enough computing time, it can provide high-accuracy estimates.
AIS has enabled precise quantitative comparisons of powerful generative models in image statistics [7, 8] and deep learning [9, 10, 11]. Unfortunately, applying AIS in practice can be computationally expensive and require laborious hand-tuning of annealing schedules. Because of this, many generative models still have not been quantitatively compared in terms of held-out likelihood. AIS requires defining a sequence of intermediate distributions which interpolate between a tractable initial distribution and the intractable target distribution. Typically, one uses geometric averages of the initial and target distributions. Tantalizingly, [12] derived the optimal paths for some toy models in the context of path sampling and showed that they vastly outperformed geometric averages. However, as choosing an optimal path is generally intractable, geometric averages still predominate. In this paper, we present a theoretical framework for evaluating alternative paths. We propose a novel path defined by averaging moments of the initial and target distributions. We show that geometric averages and moment averages optimize different variational objectives, derive an asymptotically optimal piecewise linear schedule, and analyze the asymptotic performance of both paths. Our proposed path often outperforms geometric averages at estimating partition functions of restricted Boltzmann machines (RBMs).

2 Estimating Partition Functions

Suppose we have a probability distribution p_b(x) = f_b(x)/Z_b defined on a space X, where f_b(x) can be computed efficiently for a given x ∈ X, and we are interested in estimating the partition function Z_b. Annealed importance sampling (AIS) is an algorithm which estimates Z_b by gradually changing, or "annealing," a distribution. In particular, one must specify a sequence of K + 1 intermediate distributions p_k(x) = f_k(x)/Z_k for k = 0, ..., K, where p_a(x) = p_0(x) is a tractable initial distribution, and p_b(x) = p_K(x) is the intractable target distribution.
For simplicity, assume all distributions are strictly positive on X. For each p_k, one must also specify an MCMC transition operator T_k (e.g. Gibbs sampling) which leaves p_k invariant. AIS alternates between MCMC transitions and importance sampling updates, as shown in Alg. 1.

Algorithm 1 Annealed Importance Sampling
  for i = 1 to M do
    x_0 ← sample from p_0(x)
    w^(i) ← Z_a
    for k = 1 to K do
      w^(i) ← w^(i) f_k(x_{k−1}) / f_{k−1}(x_{k−1})
      x_k ← sample from T_k(x | x_{k−1})
    end for
  end for
  return Ẑ_b = Σ_{i=1}^M w^(i) / M

The output of AIS is an unbiased estimate Ẑ_b of Z_b. Remarkably, unbiasedness holds even in the context of non-equilibrium samples along the chain [4, 13]. However, unless the intermediate distributions and transition operators are carefully chosen, Ẑ_b may have high variance and be far from Z_b with high probability.

The mathematical formulation of AIS leaves much flexibility for choosing intermediate distributions. However, one typically defines a path γ : [0, 1] → P through some family P of distributions. The intermediate distributions p_k are chosen to be points along this path corresponding to a schedule 0 = β_0 < β_1 < ... < β_K = 1. One typically uses the geometric path γ_GA, defined in terms of geometric averages of p_a and p_b:

    p_β(x) = f_β(x)/Z(β) = f_a(x)^{1−β} f_b(x)^β / Z(β).    (1)

Commonly, f_a is the uniform distribution, and (1) reduces to p_β(x) = f_b(x)^β / Z(β). This motivates the term "annealing," and β resembles an inverse temperature parameter. As in simulated annealing, the "hotter" distributions often allow faster mixing between modes which are isolated in p_b.

AIS is closely related to a broader family of techniques for posterior inference and partition function estimation, all based on the following identity from statistical physics:

    log Z_b − log Z_a = ∫_0^1 E_{x∼p_β}[ d/dβ log f_β(x) ] dβ.    (2)

Thermodynamic integration [14] estimates (2) using numerical quadrature, and path sampling [12] does so with Monte Carlo integration. The weight update in AIS can be seen as a finite difference approximation.
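To make Algorithm 1 concrete, here is a minimal sketch (an illustration, not the authors' code) for two unit-variance 1-D Gaussians. Along the geometric path between unit-variance Gaussians, the intermediates are themselves Gaussian, so the "perfect transitions" assumption can be realized by exact sampling; all names below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

mu_a, mu_b = 0.0, 4.0          # means of initial/target Gaussians (unit variance)
Z_a = np.sqrt(2 * np.pi)       # known partition function of f_a(x) = exp(-(x - mu_a)^2 / 2)

def log_f(x, beta):
    """Unnormalized log density of the geometric average f_a^{1-beta} f_b^beta."""
    return -(1 - beta) * (x - mu_a) ** 2 / 2 - beta * (x - mu_b) ** 2 / 2

def ais(K=100, M=5000):
    betas = np.linspace(0.0, 1.0, K + 1)
    x = rng.normal(mu_a, 1.0, size=M)          # exact samples from p_0
    log_w = np.full(M, np.log(Z_a))            # w^(i) <- Z_a, in log space
    for k in range(1, K + 1):
        # importance-sampling update: w^(i) *= f_k(x_{k-1}) / f_{k-1}(x_{k-1})
        log_w += log_f(x, betas[k]) - log_f(x, betas[k - 1])
        # "perfect transition": the geometric intermediate between unit-variance
        # Gaussians is N((1-beta) mu_a + beta mu_b, 1), so we can sample it exactly
        x = rng.normal((1 - betas[k]) * mu_a + betas[k] * mu_b, 1.0, size=M)
    return np.exp(log_w).mean()                # unbiased estimate of Z_b

print(ais())   # close to Z_b = sqrt(2*pi) ≈ 2.5066
```

With real models the exact resampling step is replaced by an MCMC transition that leaves p_{β_k} invariant, which is where the variance issues discussed below arise.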
Tempered transitions [15] is a Metropolis–Hastings proposal operator which heats up and cools down the distribution, and computes an acceptance ratio by approximating (2). The choices of a path and a schedule are central to all of these methods. Most work on adapting paths has focused on tuning schedules along a geometric path [15, 16, 17]. [15] showed that the geometric schedule was optimal for annealing the scale parameter of a Gaussian, and [16] extended this result more broadly. The aim of this paper is to propose, analyze, and evaluate a novel alternative to γ_GA based on averaging moments of the initial and target distributions.

3 Analyzing AIS Paths

When analyzing AIS, it is common to assume perfect transitions, i.e. that each transition operator T_k returns an independent and exact sample from the distribution p_k [4]. This corresponds to the (somewhat idealized) situation where the Markov chains mix quickly. As Neal [4] pointed out, assuming perfect transitions, the Central Limit Theorem shows that the samples w^(i) are approximately log-normally distributed. In this case, the variances var(w^(i)) and var(log w^(i)) are both monotonically related to E[log w^(i)]. Therefore, our analysis focuses on E[log w^(i)]. Assuming perfect transitions, the expected log weights are given by:

    E[log w^(i)] = log Z_a + Σ_{k=0}^{K−1} E_{p_k}[log f_{k+1}(x) − log f_k(x)]
                 = log Z_b − Σ_{k=0}^{K−1} D_KL(p_k ∥ p_{k+1}).    (3)

In other words, each log w^(i) can be seen as a biased estimator of log Z_b, where the bias δ = log Z_b − E[log w^(i)] is given by the sum of KL divergences Σ_{k=0}^{K−1} D_KL(p_k ∥ p_{k+1}). Suppose P is a family of probability distributions parameterized by θ ∈ Θ, and the K + 1 distributions p_0, ..., p_K are chosen to be linearly spaced along a path γ : [0, 1] → P. Let θ(β) represent the parameters of the distribution γ(β). As K is increased, the bias δ decays like 1/K, and the asymptotic behavior is determined by a functional F(γ).

Theorem 1.
Suppose K + 1 distributions p_k are linearly spaced along a path γ. Assuming perfect transitions, if θ(β) and the Fisher information matrix G_θ(β) = cov_{x∼p_θ}(∇_θ log p_θ(x)) are continuous and piecewise smooth, then as K → ∞ the bias δ behaves as follows:

    Kδ = K Σ_{k=0}^{K−1} D_KL(p_k ∥ p_{k+1}) → F(γ) ≡ (1/2) ∫_0^1 θ'(β)^T G_θ(β) θ'(β) dβ,    (4)

where θ'(β) represents the derivative of θ with respect to β. [See supplementary material for proof.]

This result reveals a relationship with path sampling, as [12] showed that the variance of the path sampling estimator is proportional to the same functional. One useful result from their analysis is a derivation of the optimal schedule along a given path. In particular, the value of F(γ) using the optimal schedule is given by ℓ(γ)²/2, where ℓ is the Riemannian path length defined by

    ℓ(γ) = ∫_0^1 sqrt( θ'(β)^T G_θ(β) θ'(β) ) dβ.    (5)

Intuitively, the optimal schedule allocates more distributions to regions where p_β changes quickly. While [12] derived the optimal paths and schedules for some simple examples, they observed that this is intractable in most cases and recommended using geometric paths in practice.

The above analysis assumes perfect transitions, which can be unrealistic in practice because many distributions have separated modes between which mixing is difficult. As Neal [4] observed, in such cases, AIS can be viewed as having two sources of variance: that caused by variability within a mode, and that caused by misallocation of samples between modes. The former source of variance is well modeled by the perfect transitions analysis and can be made small by adding more intermediate distributions. The latter, however, can persist even with large numbers of intermediate distributions. While our theoretical analysis assumes perfect transitions, our proposed method often gave substantial improvement empirically in situations with poor mixing.
4 Moment Averaging

As discussed in Section 2, the typical choice of intermediate distributions for AIS is the geometric averages path γ_GA given by (1). In this section, we propose an alternative path for exponential family models. An exponential family model is defined as

    p(x) = (1/Z(η)) h(x) exp(η^T g(x)),    (6)

where η are the natural parameters and g are the sufficient statistics. Exponential families include a wide variety of statistical models, including Markov random fields. In exponential families, geometric averages correspond to averaging the natural parameters:

    η(β) = (1 − β) η(0) + β η(1).    (7)

Exponential families can also be parameterized in terms of their moments s = E[g(x)]. For any minimal exponential family (i.e. one whose sufficient statistics are linearly independent), there is a one-to-one mapping between moments and natural parameters [18, p. 64]. We propose an alternative to γ_GA called the moment averages path, denoted γ_MA, and defined by averaging the moments of the initial and target distributions:

    s(β) = (1 − β) s(0) + β s(1).    (8)

This path exists for any exponential family model, since the set of realizable moments is convex [18]. It is unique, since g is unique up to affine transformation.

As an illustrative example, consider a multivariate Gaussian distribution parameterized by the mean μ and covariance Σ. The moments are E[x] = μ and −(1/2) E[xx^T] = −(1/2)(Σ + μμ^T). By plugging these into (8), we find that γ_MA is given by:

    μ(β) = (1 − β) μ(0) + β μ(1)    (9)
    Σ(β) = (1 − β) Σ(0) + β Σ(1) + β(1 − β)(μ(1) − μ(0))(μ(1) − μ(0))^T.    (10)

In other words, the means are linearly interpolated, and the covariances are linearly interpolated and stretched in the direction connecting the two means. Intuitively, this stretching is a useful property, because it increases the overlap between successive distributions with different means. A comparison of the two paths is shown in Figure 1.
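As a sanity check on eqs. (9)-(10), the sketch below (illustrative code of our own; the Gaussians match those in Figure 1 and Section 5.1) averages the raw moments E[x] and E[xx^T] directly and confirms that converting back to (μ, Σ) reproduces the closed form, including the stretch along μ(1) − μ(0):

```python
import numpy as np

def ma_gaussian(mu0, Sigma0, mu1, Sigma1, beta):
    """Moment-averages path for Gaussians: average the moments E[x] and
    E[x x^T], then convert back to (mu, Sigma), per eqs. (9)-(10)."""
    m = (1 - beta) * mu0 + beta * mu1                       # averaged first moment
    S = (1 - beta) * (Sigma0 + np.outer(mu0, mu0)) \
        + beta * (Sigma1 + np.outer(mu1, mu1))              # averaged second moment
    return m, S - np.outer(m, m)                            # covariance = E[xx^T] - mm^T

mu0, mu1 = np.array([-10., 0.]), np.array([10., 0.])
Sig0 = np.array([[1., -0.85], [-0.85, 1.]])
Sig1 = np.array([[1.,  0.85], [ 0.85, 1.]])
beta = 0.5
m, Sig = ma_gaussian(mu0, Sig0, mu1, Sig1, beta)

# Closed form (10): linear interpolation plus a stretch along mu1 - mu0.
d = mu1 - mu0
Sig_closed = (1 - beta) * Sig0 + beta * Sig1 + beta * (1 - beta) * np.outer(d, d)
print(np.allclose(Sig, Sig_closed))   # True
```

At β = 0.5 the variance along the first axis is 101, which is the "broadening" between the two means that Figure 1 depicts.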
Figure 1: Comparison of γ_GA and γ_MA for multivariate Gaussians: intermediate distribution for β = 0.5, and μ(β) for β evenly spaced from 0 to 1.

Next consider the example of a restricted Boltzmann machine (RBM), a widely used model in deep learning. A binary RBM is a Markov random field over binary vectors v (the visible units) and h (the hidden units), which has the distribution

    p(v, h) ∝ exp(a^T v + b^T h + v^T W h).    (11)

The parameters of the model are the visible biases a, the hidden biases b, and the weights W. Since these parameters are also the natural parameters in the exponential family representation, γ_GA reduces to linearly averaging the biases and the weights. The sufficient statistics of the model are the visible activations v, the hidden activations h, and the products vh^T. Therefore, γ_MA is defined by:

    E[v]_β = (1 − β) E[v]_0 + β E[v]_1    (12)
    E[h]_β = (1 − β) E[h]_0 + β E[h]_1    (13)
    E[vh^T]_β = (1 − β) E[vh^T]_0 + β E[vh^T]_1    (14)

For many models of interest, including RBMs, it is infeasible to determine γ_MA exactly, as it requires solving two often intractable problems: (1) computing the moments of p_b, and (2) solving for model parameters which match the averaged moments s(β). However, much work has been devoted to practical approximations [19, 20], some of which we use in our experiments with intractable models. Since it would be infeasible to moment match every β_k even approximately, we introduce the moment averages spline (MAS) path, denoted γ_MAS. We choose a set of R values β_1, ..., β_R called knots, and solve for the natural parameters η(β_j) to match the moments s(β_j) for each knot. We then interpolate between the knots using geometric averages. The analysis of Section 4.2 shows that, under the assumption of perfect transitions, using γ_MAS in place of γ_MA does not affect the cost functional F defined in Theorem 1.
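For intuition, the moment computations in (12)-(14) can be carried out exactly on a toy RBM small enough to enumerate. The sketch below is our own illustration (not the paper's conjugate-gradient moment matcher); it computes exact moments of two tiny RBMs and forms the averaged target moments, which is the quantity the intractable matching step must then fit:

```python
import numpy as np
from itertools import product

def rbm_moments(a, b, W):
    """Exact moments E[v], E[h], E[v h^T] of a tiny binary RBM by enumeration."""
    nv, nh = len(a), len(b)
    Z, Ev, Eh, Evh = 0.0, np.zeros(nv), np.zeros(nh), np.zeros((nv, nh))
    for v in product([0, 1], repeat=nv):
        for h in product([0, 1], repeat=nh):
            v, h = np.array(v, float), np.array(h, float)
            p = np.exp(a @ v + b @ h + v @ W @ h)   # unnormalized probability, eq. (11)
            Z += p
            Ev += p * v; Eh += p * h; Evh += p * np.outer(v, h)
    return Ev / Z, Eh / Z, Evh / Z

rng = np.random.default_rng(0)
nv, nh = 4, 3
a0, b0, W0 = np.zeros(nv), np.zeros(nh), np.zeros((nv, nh))   # uniform initial RBM
a1, b1, W1 = rng.normal(size=nv), rng.normal(size=nh), rng.normal(size=(nv, nh))

beta = 0.5
# Geometric path: just average the natural parameters (biases and weights).
ga_params = ((1 - beta) * a0 + beta * a1,
             (1 - beta) * b0 + beta * b1,
             (1 - beta) * W0 + beta * W1)
# Moment path: average the moments per (12)-(14); at full scale, fitting an RBM
# to these target moments is the intractable step the paper approximates.
m0, m1 = rbm_moments(a0, b0, W0), rbm_moments(a1, b1, W1)
target = [(1 - beta) * x0 + beta * x1 for x0, x1 in zip(m0, m1)]
```

Note that for the uniform RBM all moments are 0.5 (visible/hidden activations) and 0.25 (pairwise products), as expected.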
4.1 Variational Interpretation

By interpreting γ_GA and γ_MA as optimizing different variational objectives, we gain additional insight into their behavior. For geometric averages, the intermediate distribution γ_GA(β) minimizes a weighted sum of KL divergences to the initial and target distributions:

    p_β^(GA) = argmin_q (1 − β) D_KL(q ∥ p_a) + β D_KL(q ∥ p_b).    (15)

On the other hand, γ_MA minimizes the weighted sum of KL divergences in the reverse direction:

    p_β^(MA) = argmin_q (1 − β) D_KL(p_a ∥ q) + β D_KL(p_b ∥ q).    (16)

See the supplementary material for the derivations. The objective function (15) is minimized by a distribution which puts significant mass only in the "intersection" of p_a and p_b, i.e. those regions which are likely under both distributions. By contrast, (16) encourages the distribution to be spread out in order to capture all high probability regions of both p_a and p_b. This interpretation helps explain why the intermediate distributions in the Gaussian example of Figure 1 take the shape that they do. In our experiments, we found that γ_MA often gave more accurate results than γ_GA because the intermediate distributions captured regions of the target distribution which were missed by γ_GA.

4.2 Asymptotics under Perfect Transitions

In general, we found that γ_GA and γ_MA can look very different. Intriguingly, both paths always result in the same value of the cost functional F(γ) of Theorem 1 for any exponential family model. Furthermore, nothing is lost by using the spline approximation γ_MAS in place of γ_MA:

Theorem 2. For any exponential family model with natural parameters η and moments s, all three paths share the same value of the cost functional:

    F(γ_GA) = F(γ_MA) = F(γ_MAS) = (1/2)(η(1) − η(0))^T (s(1) − s(0)).    (17)

Proof. The two parameterizations of exponential families satisfy the relationship G_η η'(β) = s'(β) [21, sec. 3.3]. Therefore, F(γ) can be rewritten as (1/2) ∫_0^1 η'(β)^T s'(β) dβ.
Because γ_GA and γ_MA linearly interpolate the natural parameters and moments respectively,

    F(γ_GA) = (1/2)(η(1) − η(0))^T ∫_0^1 s'(β) dβ = (1/2)(η(1) − η(0))^T (s(1) − s(0))    (18)
    F(γ_MA) = (1/2)(s(1) − s(0))^T ∫_0^1 η'(β) dβ = (1/2)(s(1) − s(0))^T (η(1) − η(0)).    (19)

Finally, to show that F(γ_MAS) = F(γ_MA), observe that γ_MAS uses the geometric path between each pair of knots γ(β_j) and γ(β_{j+1}), while γ_MA uses the moments path. The above analysis shows the costs must be equal for each segment, and therefore equal for the entire path.

This analysis shows that all three paths result in the same expected log weights asymptotically, assuming perfect transitions. There are several caveats, however. First, we have noticed experimentally that γ_MA often yields substantially more accurate estimates of Z_b than γ_GA even when the average log weights are comparable. Second, the two paths can have very different mixing properties, which can strongly affect the results. Third, Theorem 2 assumes linear schedules, and in principle there is room for improvement if one is allowed to tune the schedule. For instance, consider annealing between two Gaussians p_a = N(μ_a, σ) and p_b = N(μ_b, σ). The optimal schedule for γ_GA is a linear schedule with cost F(γ_GA) = O(d²), where d = |μ_b − μ_a|/σ. Using a linear schedule, the moment path also has cost O(d²), consistent with Theorem 2. However, most of the cost of the path results from instability near the endpoints, where the variance changes suddenly. Using an optimal schedule, which allocates more distributions near the endpoints, the cost functional falls to O((log d)²), which is within a constant factor of the optimal path derived by [12]. (See the supplementary material for the derivations.) In other words, while F(γ_GA) = F(γ_MA), they achieve this value for different reasons: γ_GA follows an optimal schedule along a bad path, while γ_MA follows a bad schedule along a near-optimal path.
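Theorem 2 is easy to verify numerically. The sketch below is our own 1-D illustration with means ∓10 and σ = 1 (so d = 20), under the standard 1-D Gaussian parameterization η = (μ/σ², −1/(2σ²)) with sufficient statistics (x, x²) and moments s = (E[x], E[x²]); it discretizes F(γ) = (1/2) ∫ η'(β)·s'(β) dβ along both paths and recovers d²/2 = 200 for each:

```python
import numpy as np

def nat(mu, var): return np.array([mu / var, -0.5 / var])   # natural parameters
def mom(mu, var): return np.array([mu, var + mu ** 2])      # moments E[x], E[x^2]

def mom_from_nat(eta):
    var = -0.5 / eta[1]
    return mom(eta[0] * var, var)

def nat_from_mom(s):
    return nat(s[0], s[1] - s[0] ** 2)

eta0, eta1 = nat(-10., 1.), nat(10., 1.)
s0, s1 = mom(-10., 1.), mom(10., 1.)

def cost(path_nat, path_mom, n=2000):
    """Discretize F(gamma) = 1/2 * integral of etadot . sdot dbeta."""
    betas = np.linspace(0., 1., n + 1)
    etas = np.array([path_nat(b) for b in betas])
    ss = np.array([path_mom(b) for b in betas])
    return 0.5 * n * (np.diff(etas, axis=0) * np.diff(ss, axis=0)).sum()

F_GA = cost(lambda b: (1 - b) * eta0 + b * eta1,                 # eta linear
            lambda b: mom_from_nat((1 - b) * eta0 + b * eta1))
F_MA = cost(lambda b: nat_from_mom((1 - b) * s0 + b * s1),       # s linear
            lambda b: (1 - b) * s0 + b * s1)
F_thm = 0.5 * (eta1 - eta0) @ (s1 - s0)
print(F_GA, F_MA, F_thm)
```

The equality holds even though the two paths pass through very different intermediates: along γ_MA the variance at β = 0.5 is 101, while along γ_GA it stays at 1.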
We speculate that, combined with the procedure of Section 4.3 for choosing a schedule, moment averages may result in large reductions in the cost functional for some models.

4.3 Optimal Binned Schedules

In general, it is hard to choose a good schedule for a given path. However, consider the set of binned schedules, where the path is divided into segments, some number K_j of intermediate distributions are allocated to each segment, and the distributions are spaced linearly within each segment. Under the assumption of perfect transitions, there is a simple formula for an asymptotically optimal binned schedule which requires only the parameters obtained through moment averaging:

Theorem 3. Let γ be any path for an exponential family model defined by a set of knots β_j, each with natural parameters η_j and moments s_j, connected by segments of either γ_GA or γ_MA paths. Then, under the assumption of perfect transitions, an asymptotically optimal allocation of intermediate distributions to segments is given by:

    K_j ∝ sqrt( (η_{j+1} − η_j)^T (s_{j+1} − s_j) ).    (20)

Proof. By Theorem 2, the cost functional for segment j is F_j = (1/2)(η_{j+1} − η_j)^T (s_{j+1} − s_j). Hence, with K_j distributions allocated to it, it contributes F_j/K_j to the total cost. The values of K_j which minimize Σ_j F_j/K_j subject to Σ_j K_j = K and K_j ≥ 0 are given by K_j ∝ sqrt(F_j).

Figure 2: Estimates of log Z_b for a normalized Gaussian as K, the number of intermediate distributions, is varied. True value: log Z_b = 0. Error bars show bootstrap 95% confidence intervals. (Best viewed in color.)

5 Experimental Results

In order to compare our proposed path with geometric averages, we ran AIS using each path to estimate partition functions of several probability distributions. For all of our experiments, we report two sets of results. First, we show the estimates of log Z as a function of K, the number of intermediate distributions, in order to visualize the amount of computation necessary to obtain reasonable accuracy.
Second, as recommended by [4], we report the effective sample size (ESS) of the weights for a large K. This statistic roughly measures how many independent samples one obtains using AIS.¹ All results are based on 5,000 independent AIS runs, so the maximum possible ESS is 5,000.

5.1 Annealing Between Two Distant Gaussians

In our first experiment, the initial and target distributions were the two Gaussians shown in Fig. 1, whose parameters are

    N( [−10, 0]^T, [[1, −0.85], [−0.85, 1]] )  and  N( [10, 0]^T, [[1, 0.85], [0.85, 1]] ).

As both distributions are normalized, Z_a = Z_b = 1. We compared γ_GA and γ_MA both under perfect transitions and using the Gibbs transition operator. We also compared linear schedules with the optimal binned schedules of Section 4.3, using 10 segments evenly spaced from 0 to 1. Figure 2 shows the estimates of log Z_b for K ranging from 10 to 1,000. Observe that with 1,000 intermediate distributions, all paths yielded accurate estimates of log Z_b. However, γ_MA needed fewer intermediate distributions to achieve accurate estimates. For example, with K = 25, γ_MA resulted in an estimate within one nat of log Z_b, while the estimate based on γ_GA was off by 27 nats. This result may seem surprising in light of Theorem 2, which implies that F(γ_GA) = F(γ_MA) for linear schedules. In fact, the average log weights for γ_GA and γ_MA were similar for all values of K, as the theorem would suggest; e.g., with K = 25, the average was −27.15 for γ_MA and −28.04 for γ_GA. However, because the γ_MA intermediate distributions were broader, enough samples landed in high probability regions to yield reasonable estimates of log Z_b.

5.2 Partition Function Estimation for RBMs

Our next set of experiments focused on restricted Boltzmann machines (RBMs), a building block of many deep learning models (see Section 4). We considered RBMs trained with three different methods: contrastive divergence (CD) [19] with one step (CD1), CD with 25 steps (CD25), and persistent contrastive divergence (PCD) [20].
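The ESS statistic reported throughout this section can be computed directly from the importance weights. The sketch below is an illustration following the paper's footnote definition, ESS = M/(1 + s²(w*)), where we normalize the weights by their mean (one common convention; the paper does not spell out the normalization):

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS = M / (1 + sample variance of the normalized weights), as in [4]."""
    w = np.exp(log_w - log_w.max())     # subtract max for numerical stability
    w_star = w / w.mean()               # normalize so that mean(w*) = 1
    return len(w) / (1 + w_star.var(ddof=1))

# Equal weights give the maximum possible ESS (= M); uneven weights shrink it.
print(effective_sample_size(np.zeros(5000)))   # 5000.0
```

Working in log space matters here: AIS log weights for large K can differ by tens of nats, and exponentiating them directly would overflow or underflow.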
All of the RBMs were trained on the MNIST handwritten digits dataset [22], which has long served as a benchmark for deep learning algorithms. We experimented both with small, tractable RBMs and with full-size, intractable RBMs. Since it is hard to compute γ_MA exactly for RBMs, we used the moments spline path γ_MAS of Section 4 with the 9 knot locations 0.1, 0.2, ..., 0.9. We considered the two initial distributions discussed by [9]: (1) the uniform distribution, equivalent to an RBM where all the weights and biases are set to 0, and (2) the base rate RBM, where the weights and hidden biases are set to 0, and the visible biases are set to match the average pixel values over the MNIST training set.

¹ The ESS is defined as M/(1 + s²(w*^(i))), where s²(w*^(i)) is the sample variance of the normalized weights [4]. In general, one should regard ESS estimates cautiously, as they can give misleading results in cases where an algorithm completely misses an important mode of the distribution. However, as we report the ESS in cases where the estimated partition functions are close to the true value (when known) or agree closely with each other, we believe the statistic is meaningful in our comparisons.

Figure 3: Estimates of log Z_b for the tractable PCD(20) RBM as K, the number of intermediate distributions, is varied. Error bars indicate bootstrap 95% confidence intervals. (Best viewed in color.)

                                       CD1(20)                    PCD(20)
  p_a(v)    path & schedule       log Z_b  log Ẑ_b  ESS      log Z_b  log Ẑ_b  ESS
  uniform   GA linear             279.59   279.60   248      178.06   177.99   204
  uniform   GA optimal binned              279.51   124               177.92   142
  uniform   MAS linear                     279.59   2686              178.09   289
  uniform   MAS optimal binned             279.60   2619              178.08   934

Table 1: Comparing estimates of log Z_b and effective sample size (ESS) for tractable RBMs. Results are shown for K = 100,000 intermediate distributions, with 5,000 chains and Gibbs transitions. Bolded values indicate ESS estimates that are not significantly different from the largest value (bootstrap hypothesis test with 1,000 samples at α = 0.05 significance level). The maximum possible ESS is 5,000.

Figure 4: Visible activations for samples from the PCD(500) RBM. (left) base rate RBM, β = 0; (top) geometric path; (bottom) MAS path; (right) target RBM, β = 1.

Small, Tractable RBMs: To better understand the behavior of γ_GA and γ_MAS, we first evaluated the paths on RBMs with only 20 hidden units. In this setting, it is feasible to exactly compute the partition function and moments and to generate exact samples by exhaustively summing over all 2^20 hidden configurations. The moments of the target RBMs were computed exactly, and moment matching was performed with conjugate gradient using the exact gradients. The results are shown in Figure 3 and Table 1. Under perfect transitions, γ_GA and γ_MAS were both able to accurately estimate log Z_b using as few as 100 intermediate distributions. However, using the Gibbs transition operator, γ_MAS gave accurate estimates using fewer intermediate distributions and achieved a higher ESS at K = 100,000. To check that the improved performance didn't rely on accurate moments of p_b, we repeated the experiment with highly biased moments.² The differences in log Ẑ_b and ESS compared to the exact moments condition were not statistically significant.

Full-size, Intractable RBMs: For intractable RBMs, moment averaging required approximately solving two intractable problems: moment estimation for the target RBM, and moment matching. We estimated the moments from 1,000 independent Gibbs chains, using 10,000 Gibbs steps with 1,000 steps of burn-in. The moment averaged RBMs were trained using PCD. (We used 50,000 updates with a fixed learning rate of 0.01 and no momentum.) In addition, we ran a cheap, inaccurate moment matching scheme (denoted MAS cheap) where visible moments were estimated from the empirical MNIST base rate and the hidden moments from the conditional distributions of the hidden units given the MNIST digits.
Intermediate RBMs were fit using 1,000 PCD updates and 100 particles, for a total computational cost far smaller than that of AIS itself.

² In particular, we computed the biased moments from the conditional distributions of the hidden units given the MNIST training examples, where each example of digit class i was counted i + 1 times.

Figure 5: Estimates of log Z_b for intractable RBMs. Error bars indicate bootstrap 95% confidence intervals. (Best viewed in color.)

                                    CD1(500)          PCD(500)          CD25(500)
  p_a(v)     path                   log Ẑ_b  ESS     log Ẑ_b  ESS     log Ẑ_b  ESS
  uniform    GA linear              341.53   4       417.91   169     451.34   13
  uniform    MAS linear             359.09   3076    418.27   620     449.22   12
  uniform    MAS cheap linear       359.09   3773    418.33   5       450.90   30
  base rate  GA linear              359.10   4924    418.20   159     451.27   2888
  base rate  MAS linear             359.07   2203    418.26   1460    451.31   304
  base rate  MAS cheap linear       359.09   2465    418.25   359     451.14   244

Table 2: Comparing estimates of log Z_b and effective sample size (ESS) for intractable RBMs. Results are shown for K = 100,000 intermediate distributions, with 5,000 chains and Gibbs transitions. Bolded values indicate ESS estimates that are not significantly different from the largest value (bootstrap hypothesis test with 1,000 samples at α = 0.05 significance level). The maximum possible ESS is 5,000.

Results of both methods are shown in Figure 5 and Table 2. Overall, the MAS results compare favorably with those of GA on both of our metrics. Performance was comparable under MAS cheap, suggesting that γ_MAS can be approximated cheaply and effectively. As with the tractable RBMs, we found that optimal binned schedules made little difference in performance, so we focus here on linear schedules. The most serious failure was γ_GA for CD1(500) with uniform initialization, which underestimated our best estimates of the log partition function (and hence overestimated held-out likelihood) by nearly 20 nats.
The geometric path from uniform to PCD(500) and the moments path from uniform to CD1(500) also resulted in underestimates, though less drastic. The rest of the paths agreed closely with each other on their partition function estimates, although some methods achieved substantially higher ESS values on some RBMs. One conclusion is that it is worth exploring multiple initializations and paths for a given RBM in order to ensure accurate results.

Figure 4 compares samples along γ_GA and γ_MAS for the PCD(500) RBM using the base rate initialization. For a wide range of β values, the γ_GA RBMs assigned most of their probability mass to blank images. As discussed in Section 4.1, γ_GA prefers configurations which are probable under both the initial and target distributions. In this case, the hidden activations were closer to uniform conditioned on a blank image than on a digit, so γ_GA preferred blank images. By contrast, γ_MAS yielded diverse, blurry digits which gradually coalesced into crisper ones.

6 Conclusion

We presented a theoretical analysis of the performance of AIS paths and proposed a novel path for exponential families based on averaging moments. We gave a variational interpretation of this path and derived an asymptotically optimal piecewise linear schedule. Moment averages performed well empirically at estimating partition functions of RBMs. We hope moment averaging can also improve other path-based sampling algorithms which typically use geometric averages, such as path sampling [12], parallel tempering [23], and tempered transitions [15].

Acknowledgments

This research was supported by NSERC and Quanta Computer. We thank Geoffrey Hinton for helpful discussions. We also thank the anonymous reviewers for thorough and helpful feedback.

References

[1] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Inf. Theory, 51(7):2282–2312, 2005.
[2] Martin J.
Wainwright, Tommi Jaakkola, and Alan S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[3] Amir Globerson and Tommi Jaakkola. Approximate inference using conditional entropy decompositions. In 11th International Workshop on AI and Statistics (AISTATS 2007), 2007.
[4] Radford Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[5] John Skilling. Nested sampling for general Bayesian computation. Bayesian Analysis, 1(4):833–859, 2006.
[6] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436, 2006.
[7] Jascha Sohl-Dickstein and Benjamin J. Culpepper. Hamiltonian annealed importance sampling for partition function estimation. Technical report, Redwood Center, UC Berkeley, 2012.
[8] Lucas Theis, Sebastian Gerwinn, Fabian Sinz, and Matthias Bethge. In all likelihood, deep belief is not enough. Journal of Machine Learning Research, 12:3071–3096, 2011.
[9] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Int'l Conf. on Machine Learning, 2008.
[10] Guillaume Desjardins, Aaron Courville, and Yoshua Bengio. On tracking the partition function. In NIPS 24. MIT Press, 2011.
[11] Graham Taylor and Geoffrey Hinton. Products of hidden Markov models: It takes N > 1 to tango. In Uncertainty in Artificial Intelligence, 2009.
[12] Andrew Gelman and Xiao-Li Meng. Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Statistical Science, 13(2):163–186, 1998.
[13] Christopher Jarzynski. Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach. Physical Review E, 56:5018–5035, 1997.
[14] Daan Frenkel and Berend Smit. Understanding Molecular Simulation: From Algorithms to Applications. Academic Press, 2nd edition, 2002.
[15] Radford Neal. Sampling from multimodal distributions using tempered transitions. Statistics and Computing, 6:353–366, 1996.
[16] Gundula Behrens, Nial Friel, and Merrilee Hurn. Tuning tempered transitions. Statistics and Computing, 22:65–78, 2012.
[17] Ben Calderhead and Mark Girolami. Estimating Bayes factors via thermodynamic integration and population MCMC. Computational Statistics and Data Analysis, 53(12):4028–4045, 2009.
[18] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[19] Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[20] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Intl. Conf. on Machine Learning, 2008.
[21] Shun-ichi Amari and Hiroshi Nagaoka. Methods of Information Geometry. Oxford University Press, 2000.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[23] Y. Iba. Extended ensemble Monte Carlo. International Journal of Modern Physics C, 12(5):623–656, 2001.
Blind Calibration in Compressed Sensing using Message Passing Algorithms Christophe Schülke Univ Paris Diderot, Sorbonne Paris Cité, ESPCI and CNRS UMR 7083 Paris 75005, France Francesco Caltagirone Institut de Physique Théorique CEA Saclay and CNRS URA 2306 91191 Gif-sur-Yvette, France Florent Krzakala ENS and CNRS UMR 8550, ESPCI and CNRS UMR 7083 Paris 75005, France Lenka Zdeborová Institut de Physique Théorique CEA Saclay and CNRS URA 2306 91191 Gif-sur-Yvette, France Abstract Compressed sensing (CS) is a concept that allows compressible signals to be acquired with a small number of measurements. As such it is very attractive for hardware implementations, and correct calibration of the hardware is therefore a central issue. In this paper we study so-called blind calibration, i.e. the case where the training signals that are available to perform the calibration are sparse but unknown. We extend the approximate message passing (AMP) algorithm used in CS to the case of blind calibration. In the calibration-AMP, both the gains on the sensors and the elements of the signals are treated as unknowns. Unlike previously suggested blind calibration algorithms based on convex relaxations, our algorithm is also applicable to settings in which the sensors distort the measurements in ways other than multiplication by a gain. We study numerically the phase diagram of the blind calibration problem, and show that even in cases where convex relaxation is possible, our algorithm requires a smaller number of measurements and/or signals in order to perform well. 1 Introduction The problem of acquiring an N-dimensional signal x through M linear measurements, y = Fx, arises in many contexts. The Compressed Sensing (CS) approach [1, 2] exploits the fact that, in many cases of interest, the signal is K-sparse (in an appropriate known basis), meaning that only K = ρN of its N components are non-zero.
Compressed sensing theory shows that a K-sparse N-dimensional signal can be reconstructed from far fewer than N linear measurements [1, 2], thus saving acquisition time and cost, or increasing the resolution. In the most common setting, the linear M × N map F is considered to be known. The concept of compressed sensing is therefore very attractive for hardware implementations. However, one of the main issues when building hardware is calibration. Usually the sensors introduce a distortion (or decalibration) to the measurements in the form of unknown gains; calibration consists in determining the transfer function between the measurements and the readings from the sensor. In some applications, dealing with distributed sensors or radars for instance, the location or intrinsic parameters of the sensors are not exactly known [3, 4]. Similar distortions arise in applications with microphone arrays [5]. The need for calibration has been emphasized in a number of other works, see e.g. [6, 7, 8]. One common way of dealing with calibration (apart from ignoring it or treating it as measurement noise) is supervised calibration, where some known training signals xl, l = 1, . . . , P, and the corresponding observations yl are used to estimate the distortion parameters. Given a sparse signal recovery problem, if we are not able to estimate the distortion parameters beforehand via supervised calibration, we need to estimate the unknown signals and the unknown distortion parameters simultaneously: this is known as blind (unsupervised) calibration. If blind calibration is computationally feasible, it may even be simpler to carry out in practice than supervised calibration. The main contribution of this paper is a computationally efficient message passing algorithm for blind calibration. 1.1 Setting We state the problem of blind calibration in the following way.
First we introduce an unknown distortion parameter (we will also use equivalently the terms decalibration parameter or gain) dµ for each of the sensors, µ = 1, . . . , M. Note that dµ can also represent a vector of several parameters. We consider that the signal is linearly projected by a known M × N measurement matrix F and only then distorted according to some known transfer function h. This transfer function can be probabilistic (noisy), non-linear, etc. Each sensor µ then provides the following distorted and noisy reading (measure): $y_\mu = h(z_\mu, d_\mu, w_\mu)$, where $z_\mu = \sum_{i=1}^{N} F_{\mu i} x_i$. As often in CS, we focus on the case where the measurement matrix F is iid Gaussian with zero mean. For the measurement noise wµ, one usually considers iid Gaussian noise with variance ∆, added to zµ. In order to perform blind calibration, we need to measure several statistically diverse signals. Given a set of N-dimensional K-sparse signals xl with l = 1, . . . , P, for each of the signals we consider M sensor readings
$$y_{\mu l} = h(z_{\mu l}, d_\mu, w_{\mu l}), \quad \text{where} \quad z_{\mu l} = \sum_{i=1}^{N} F_{\mu i} x_{il}, \qquad (1)$$
where dµ are the signal-independent distortion parameters, wµl is a signal-dependent measurement noise, and h is an arbitrary known function of these variables with standard regularity requirements. To illustrate a situation in which one has sample-dependent noise wµl and sample-independent distortion dµ, consider for instance sound sensors placed in space at positions dµ that are not exactly known. The positions, however, do not change when different sounds are recorded. The noise wµl is then the ambient noise that is different during every recording. The final inference problem is hence as follows: given the M × P measurements yµl and a perfect knowledge of the matrix F, we want to infer both the P different signals {x1, . . . , xP} and the M distortion parameters dµ, µ = 1, . . . , M.
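To make the measurement model (1) concrete, here is a small data-generation sketch in Python (illustrative, not part of the paper's MATLAB code). It uses the product transfer function h(z, d, w) = (z + w)/d introduced below as eq. (2) and the uniform gain distribution used later in the experiments; all default parameter values and the function name are assumptions chosen for illustration.

```python
import numpy as np

def generate_instance(N=200, M=100, P=2, rho=0.2, sigma2=0.01, Delta=0.0, seed=0):
    """Generate a blind-calibration instance following eq. (1) with the
    product transfer function y = (z + w) / d of eq. (2)."""
    rng = np.random.default_rng(seed)
    # iid Gaussian measurement matrix with zero mean
    F = rng.normal(0.0, 1.0 / np.sqrt(N), size=(M, N))
    # P Gauss-Bernoulli signals: each entry is non-zero with probability rho
    x = rng.normal(size=(N, P)) * (rng.random((N, P)) < rho)
    # Gains: uniform around d = 1 with variance sigma2 (width 2*sqrt(3)*sigma)
    half_width = np.sqrt(3.0 * sigma2)
    d = rng.uniform(1.0 - half_width, 1.0 + half_width, size=M)
    z = F @ x                                  # z_{mu l} = sum_i F_{mu i} x_{i l}
    w = rng.normal(0.0, np.sqrt(Delta), size=(M, P)) if Delta > 0 else np.zeros((M, P))
    y = (z + w) / d[:, None]                   # distorted sensor readings
    return F, x, d, y
```

The inference task is then to recover both `x` and `d` from `F` and `y` alone.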
In this work we place ourselves in the Bayesian setting where we assume the distributions of the signal elements, PX, and of the distortion coefficients, PD, to be known. 1.2 Relation to previous work As far as we know, the problem of blind calibration was first studied in the context of compressed sensing in [9], where the distortions were considered as multiplicative, i.e. the transfer function was
$$h(z_{\mu l}, d_\mu, w_{\mu l}) = \frac{1}{d_\mu}\,(z_{\mu l} + w_{\mu l}). \qquad (2)$$
A subsequent work [10] considers a more general case where the distortion parameters are dµ = (gµ, θµ) and the transfer function is $h(z_{\mu l}, d_\mu, w_{\mu l}) = e^{i\theta_\mu}(z_{\mu l} + w_{\mu l})/g_\mu$. Both [9] and [10] applied convex optimization based algorithms to the blind calibration problem, and their approach seems limited to the above special cases of transfer functions. Our approach is able to deal with a general transfer function h; moreover, for the product transfer function (2) it outperforms the algorithm of [9]. The most commonly used algorithm for signal reconstruction in CS is the ℓ1 minimization of [1]. In CS without noise and for measurement matrices with iid Gaussian elements, ℓ1 minimization leads to exact reconstruction in the limit of large signal dimension as long as the measurement rate α = M/N exceeds αDT, the well-known phase transition of Donoho and Tanner [11]. The blind calibration algorithms of [9, 10] also directly use ℓ1 minimization for reconstruction. In the last couple of years, the theory of CS has witnessed large progress thanks to the development of message passing algorithms based on standard loopy Belief Propagation (BP) and their analysis [12, 13, 14, 15, 16]. In the context of compressed sensing, canonical loopy BP is difficult to implement because its messages would be probability distributions over a continuous support.
At the same time, in problems such as compressed sensing, a Gaussian or quadratic approximation of BP still contains the information necessary for a successful reconstruction of the signal. Such approximations of loopy BP originated in works on CDMA multiuser detection [17, 18]. In compressed sensing the Gaussian approximation of BP is known as approximate message passing (AMP) [12, 13], and it was used to prove that with properly designed measurement matrices F the signal can be reconstructed as long as the number of measurements is larger than the number of non-zero components in the signal, thus closing the gap between the Donoho-Tanner transition and the information-theoretic lower bound [15, 16]. Even without a particular design of the measurement matrix, the AMP algorithm outperforms ℓ1 minimization for a large class of signals. Importantly for the present work, [14] generalized the AMP algorithm to deal with a wider range of input and output functions. For some of those, generalizations of the ℓ1-minimization based approach are no longer convex, and hence no longer have the advantage of provable computational tractability. The following two works have considered blind-calibration-related problems with the use of AMP-like algorithms. In [19] the authors use AMP combined with expectation maximization to calibrate gains that act on the signal components rather than on the measurement components as we consider here. In [20] the authors study the case where every element of the measurement matrix F has to be calibrated, in contrast to the row-constant gains considered in this paper. The setting of [20] is much closer to the dictionary learning problem and is much more demanding, both computationally and in terms of the number of different signals necessary for successful calibration.
1.3 Contributions In this work we extend the generalized approximate message passing (GAMP) algorithm of [14] to the problem of blind calibration with a general transfer function h, eq. (1). We denote it the calibration-AMP or Cal-AMP algorithm. Cal-AMP uses P > 1 unknown sparse signals to learn both the different signals xl, l = 1, . . . , P, and the distortion parameters dµ, µ = 1, . . . , M, of the sensors. We hence overcome the limitation of the blind calibration algorithms presented in [9, 10] to the class of settings for which calibration can be written as a convex optimization problem. In the second part of this paper we analyze the performance of Cal-AMP for the product transfer function (2) used in [9] and demonstrate its scalability and better performance with respect to their ℓ1-based calibration approach. In the numerical study we observe a sharp phase transition generalizing the phase transition seen for AMP in compressed sensing [21]. Note that for the blind calibration problem to be solvable, the amount of information contained in the sensor readings, PM, must be at least as large as the number of distortion parameters, M, plus the number of non-zero components of all the signals, KP. Defining ρ = K/N and α = M/N, this leads to αP ≥ ρP + α. If we fix the number of signals P, we have a well-defined line in the (ρ, α)-plane given by
$$\alpha \;\ge\; \frac{P}{P-1}\,\rho \;\equiv\; \alpha_{\min}, \qquad (3)$$
below which exact calibration cannot be possible. We will compare the empirically observed phase transition for blind calibration to this theoretical bound, as well as to the phase transition that would be observed in pure CS, i.e. if we knew the distortion parameters. 2 The Calibration-AMP algorithm The Cal-AMP algorithm is based on a Bayesian probabilistic formulation of the reconstruction problem.
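Before turning to the algorithm itself, note that the counting bound (3) above is simple enough to evaluate numerically; the following one-line sketch (an illustrative addition, not from the paper) makes the P = 1 degenerate case and the large-P limit explicit.

```python
def alpha_min(rho, P):
    """Lower counting bound of eq. (3): alpha >= P/(P-1) * rho for P > 1."""
    if P <= 1:
        # With a single signal, alpha*P >= rho*P + alpha can never hold for rho > 0
        return float("inf")
    return P / (P - 1) * rho

# As P grows, alpha_min approaches the pure-CS information bound alpha >= rho.
```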
Denoting PX(xil) the assumed empirical distribution of the components of the signal, PW(wµl) the assumed probability distribution of the components of the noise, and PD(dµ) the assumed empirical distribution of the distortion parameters, the Bayes formula yields
$$P(x, d \mid F, y) = \frac{1}{Z} \prod_{i,l=1}^{N,P} P_X(x_{il}) \prod_{\mu=1}^{M} P_D(d_\mu) \prod_{l,\mu=1}^{P,M} \int \mathrm{d}w_{\mu l}\, P_W(w_{\mu l})\, \delta\big[y_{\mu l} - h(z_{\mu l}, d_\mu, w_{\mu l})\big], \qquad (4)$$
where Z is a normalization constant and $z_{\mu l} = \sum_i F_{\mu i} x_{il}$. We denote the marginals of the signal components $\nu^x_{il}(x_{il}) = \int \prod_\mu \mathrm{d}d_\mu \prod_{jn \neq il} \mathrm{d}x_{jn}\, P(x, d \mid F, y)$ and those of the distortion parameters $\nu^d_\mu(d_\mu) = \int \prod_{\gamma \neq \mu} \mathrm{d}d_\gamma \prod_{il} \mathrm{d}x_{il}\, P(x, d \mid F, y)$. The estimators $x^*_{il}$ that minimize the expected mean-squared error (MSE) of the signals and the estimators $d^*_\mu$ of the distortion parameters are the averages w.r.t. the marginal distributions, namely $x^*_{il} = \int \mathrm{d}x_{il}\, x_{il}\, \nu^x_{il}(x_{il})$ and $d^*_\mu = \int \mathrm{d}d_\mu\, d_\mu\, \nu^d_\mu(d_\mu)$. An exact computation of these estimates is not tractable in any known way, so we use instead a belief-propagation based approximation that has proven to be fast and efficient in the CS problem [12, 13, 14]. We recall that GAMP, which leads to a considerably simpler inference problem, is recovered if we set PD(dµ) = δ(dµ − 1), and that the usual AMP is recovered by additionally setting h(z, d, w) = z + w. Figure 1: Graphical model representing the blind calibration problem. Here the dimensionality of the signal is N = 8, the number of sensors is M = 3, and the number of signals used for calibration is P = 2. The variable nodes xil and dµ are depicted as circles, the factor nodes as squares. Given the factor graph representation of the calibration problem in Fig. 1, the canonical belief propagation equations for the probability measure (4) are written in terms of NPM pairs of messages m̃µl→il(xil) and mil→µl(xil), representing probability distributions on the signal component xil, and PM pairs of messages nµ→µl(dµ) and ñµl→µ(dµ), representing probability distributions on the distortion parameter dµ.
Following the lines of [12, 13, 14, 15], with the use of the central limit theorem, a Gaussian approximation, and neglecting terms that go to zero as N → ∞, the BP equations can be closed using only the means and variances of the messages mil→µl and nµ→µl:
$$a_{il\to\mu l} = \int \mathrm{d}x_{il}\, m_{il\to\mu l}(x_{il})\, x_{il}, \qquad v_{il\to\mu l} = \int \mathrm{d}x_{il}\, m_{il\to\mu l}(x_{il})\, x_{il}^2 - a_{il\to\mu l}^2, \qquad (5)$$
$$k_{\mu\to\mu l} = \int \mathrm{d}d_\mu\, n_{\mu\to\mu l}(d_\mu)\, d_\mu, \qquad l_{\mu\to\mu l} = \int \mathrm{d}d_\mu\, n_{\mu\to\mu l}(d_\mu)\, d_\mu^2 - k_{\mu\to\mu l}^2. \qquad (6)$$
Moreover, again neglecting only terms that go to zero as N → ∞, we can write closed equations on quantities associated with the variable and factor nodes, instead of messages running between variable and factor nodes. For this we introduce $\omega_{\mu l} = \sum_i F_{\mu i}\, a_{il\to\mu l}$ and $V_{\mu l} = \sum_i F_{\mu i}^2\, v_{il\to\mu l}$. The derivation of the Cal-AMP algorithm is similar to those in [12, 13, 14, 15]. The resulting algorithm is, in the leading order, equivalent to belief propagation on the factor graph of Fig. 1. To summarize the resulting algorithm we define
$$\tilde G(y, d, \omega, V) = \int \mathrm{d}z\, \mathrm{d}w\, P_W(w)\, \delta[h(z, d, w) - y]\, e^{-\frac{(z-\omega)^2}{2V}}, \quad \text{and} \qquad (7)$$
$$G(y_{\mu\cdot}, \omega_{\mu\cdot}, V_{\mu\cdot}, \theta) = \ln\left[\int \mathrm{d}d\, P_D(d) \prod_{n=1}^{P} \tilde G(y_{\mu n}, d, \omega_{\mu n}, V_{\mu n})\, e^{\theta d}\right], \qquad (8)$$
where µ· indicates a dependence on all the variables labeled µn with n = 1, . . . , P, and δ(·) is the Dirac delta function. Similarly to Rangan [14], we define P output functions as
$$g^l_{\mathrm{out}}(y_{\mu\cdot}, \omega_{\mu\cdot}, V_{\mu\cdot}) = \frac{\partial}{\partial \omega_{\mu l}} G(y_{\mu\cdot}, \omega_{\mu\cdot}, V_{\mu\cdot}, \theta = 0). \qquad (9)$$
Note that each of the output functions depends on all the P different signals. We also define the following input functions
$$f^x_a(\Sigma^2, R) = [x]_X, \qquad f^x_c(\Sigma^2, R) = [x^2]_X - [x]_X^2, \qquad (10)$$
where [. . .]X indicates expectation w.r.t. the measure
$$\mathcal{M}_X(x, \Sigma^2, R) = \frac{1}{Z(\Sigma^2, R)}\, P_X(x)\, e^{-\frac{(x-R)^2}{2\Sigma^2}}. \qquad (11)$$
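For the Gauss-Bernoulli signals used later in the experiments, P_X(x) = (1 − ρ)δ(x) + ρN(x; 0, 1), the input functions (10) have a closed form: multiplying this prior with the Gaussian factor of (11) yields a two-component posterior (a spike at zero and a Gaussian), whose mean and variance can be written out directly. The sketch below is an illustrative Python derivation under that prior assumption, not the paper's implementation.

```python
import numpy as np

def input_functions(Sigma2, R, rho=0.2):
    """Mean f_a^x and variance f_c^x of the measure M_X in eq. (11) for the
    Gauss-Bernoulli prior P_X(x) = (1-rho)*delta(x) + rho*N(x; 0, 1)."""
    # Evidence of the two mixture components under the Gaussian factor
    w0 = (1 - rho) * np.exp(-R**2 / (2 * Sigma2)) / np.sqrt(2 * np.pi * Sigma2)
    w1 = rho * np.exp(-R**2 / (2 * (Sigma2 + 1))) / np.sqrt(2 * np.pi * (Sigma2 + 1))
    pi = w1 / (w0 + w1)              # posterior weight of the non-zero component
    m = R / (Sigma2 + 1)             # posterior mean of that component
    s2 = Sigma2 / (Sigma2 + 1)       # posterior variance of that component
    f_a = pi * m
    f_c = pi * (s2 + m**2) - f_a**2
    return f_a, f_c
```

For ρ = 1 this reduces to the usual Gaussian-prior posterior mean R/(Σ² + 1), and for R = 0 the estimate is zero by symmetry.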
Given the above definitions, the iterative calibration-AMP algorithm reads as follows:
$$V^{t+1}_{\mu l} = \sum_i F^2_{\mu i}\, v^t_{il}, \qquad \omega^{t+1}_{\mu l} = \sum_i F_{\mu i}\, a^t_{il} - V^{t+1}_{\mu l}\, e^{t+1}_{\mu l}, \qquad (12)$$
$$e^{t+1}_{\mu l} = g^l_{\mathrm{out}}(y_{\mu\cdot}, \omega^t_{\mu\cdot}, V^t_{\mu\cdot}), \qquad h^{t+1}_{\mu l} = -\frac{\partial}{\partial \omega_{\mu l}}\, g^l_{\mathrm{out}}(y_{\mu\cdot}, \omega^t_{\mu\cdot}, V^{t+1}_{\mu\cdot}), \qquad (13)$$
$$(\Sigma^{t+1}_{il})^2 = \left[\sum_\mu F^2_{\mu i}\, h^{t+1}_{\mu l}\right]^{-1}, \qquad R^{t+1}_{il} = a_{il} + \left[\sum_\mu F_{\mu i}\, e^{t+1}_{\mu l}\right] (\Sigma^{t+1}_{il})^2, \qquad (14)$$
$$a^{t+1}_{il} = f^x_a\big((\Sigma^{t+1}_{il})^2, R^{t+1}_{il}\big), \qquad v^{t+1}_{il} = f^x_c\big((\Sigma^{t+1}_{il})^2, R^{t+1}_{il}\big). \qquad (15)$$
We initialize $\omega^{t=0}_{\mu l} = y_{\mu l}$, and $a^{t=0}_{il}$ and $v^{t=0}_{il}$ as the mean and variance of the assumed distribution PX(·), and iterate these equations until convergence. At every time step the quantity ail is the estimate of the signal element xil, and vil is the approximate error of this estimate. The estimate of the distortion parameter dµ and its error can be computed as
$$k^{t+1}_\mu = \left.\frac{\partial}{\partial \theta}\, G(y_{\mu\cdot}, \omega^{t+1}_{\mu\cdot}, V^{t+1}_{\mu\cdot}, \theta)\right|_{\theta=0} \quad \text{and} \quad l^{t+1}_\mu = \left.\frac{\partial^2}{\partial \theta^2}\, G(y_{\mu\cdot}, \omega^{t+1}_{\mu\cdot}, V^{t+1}_{\mu\cdot}, \theta)\right|_{\theta=0}. \qquad (16)$$
By setting PD(dµ) = δ(dµ − d^true_µ) and simplifying eq. (8), readers familiar with the work of Rangan [14] will recognize the GAMP algorithm in eqs. (12)-(15). Note that for a general transfer function h the generating function G (8) has to be evaluated numerically. The overall complexity of the Cal-AMP algorithm scales as O(MNP) and hence shares the scalability advantages of AMP [12]. 2.1 Cal-AMP for the product transfer function In the numerical section of this paper we focus on the specific transfer function h(zµl, dµ, wµl) defined in eq. (2). We consider the measurement noise wµl to be Gaussian with zero mean and variance ∆. This transfer function was considered in the work of [9], and we will hence be able to compare the performance of Cal-AMP directly to the convex optimization approach investigated in [9]. For the product transfer function eq.
(2), most integrals that require numerical computation in the general case can be expressed analytically, and equations (13) can be replaced by
$$e^{t+1}_{\mu l} = \frac{k^t_\mu y_{\mu l} - \omega^t_{\mu l}}{V^t_{\mu l} + \Delta}, \qquad h^{t+1}_{\mu l} = \frac{1}{V^{t+1}_{\mu l} + \Delta} - \frac{l^t_\mu\, y^2_{\mu l}}{(V^{t+1}_{\mu l} + \Delta)^2}, \qquad (17)$$
$$(C^{t+1}_\mu)^2 = \left[\sum_n \frac{y^2_{\mu n}}{V^{t+1}_{\mu n} + \Delta}\right]^{-1}, \qquad T^{t+1}_\mu = (C^{t+1}_\mu)^2 \sum_n \frac{y_{\mu n}\, \omega^{t+1}_{\mu n}}{V^{t+1}_{\mu n} + \Delta}, \qquad (18)$$
$$k^{t+1}_\mu = f^d_a\big((C^{t+1}_\mu)^2, T^{t+1}_\mu\big), \qquad l^{t+1}_\mu = f^d_c\big((C^{t+1}_\mu)^2, T^{t+1}_\mu\big), \qquad (19)$$
where we have introduced the functions f^d_a and f^d_c similarly to those in eq. (10), except that the expectation is taken w.r.t. the measure
$$\mathcal{M}_D(d, C^2, T) = \frac{1}{Z(C^2, T)}\, P_D(d)\, |d|^P\, e^{-\frac{(d-T)^2}{2C^2}}. \qquad (20)$$
3 Experimental results Our simulations were performed using a MATLAB implementation of the Cal-AMP algorithm presented in the previous section, which is available online [22]. We focused on the noiseless case ∆ = 0, for which exact reconstruction is conceivable. We tested the algorithm on randomly generated Gauss-Bernoulli signals with a density ρ of non-zero elements, normally distributed around zero with unit variance. For the present experiments the algorithm uses this information via a matching distribution PX(xil). The situation where PX mismatches the true signal distribution was discussed for AMP for compressed sensing in [21]. The distortion parameters dµ were generated from a uniform distribution centered at d = 1 with variance σ² and width 2√3 σ. This ensures that, as σ² → 0, the results of standard compressed sensing are recovered, while the distortions grow with σ². For numerical stability, the parameter σ² used in the update functions of Cal-AMP was taken slightly larger than the variance used to create the actual distortion parameters. For the same reason, we also added a small noise ∆ = 10⁻¹⁷ and used damping in the iterations in order to avoid oscillatory behavior.
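For reference, setting PD(dµ) = δ(dµ − 1) reduces the updates (12)-(19) to GAMP for pure compressed sensing with known gains. The following self-contained Python sketch of that special case (AWGN output h(z, d, w) = z + w, Gauss-Bernoulli prior with matching ρ) illustrates the structure of the iteration; it is not the paper's MATLAB code, and the sizes, damping value, and regularization ∆ are illustrative assumptions.

```python
import numpy as np

def denoise(Sigma2, R, rho):
    # Input functions (10) for the prior P_X = (1-rho)*delta(x) + rho*N(x; 0, 1)
    w0 = (1 - rho) * np.exp(-R**2 / (2 * Sigma2)) / np.sqrt(2 * np.pi * Sigma2)
    w1 = rho * np.exp(-R**2 / (2 * (Sigma2 + 1))) / np.sqrt(2 * np.pi * (Sigma2 + 1))
    pi = w1 / (w0 + w1)                   # posterior weight of the non-zero component
    m, s2 = R / (Sigma2 + 1), Sigma2 / (Sigma2 + 1)
    f_a = pi * m
    return f_a, pi * (s2 + m**2) - f_a**2

def gamp_awgn(F, y, rho, Delta=1e-8, iters=100, damp=0.7):
    """GAMP special case of eqs. (12)-(15): known gains, AWGN output."""
    M, N = F.shape
    F2 = F**2
    a, v = np.zeros(N), np.full(N, rho)   # mean and variance of the prior
    e = np.zeros(M)
    for _ in range(iters):
        V = F2 @ v
        omega = F @ a - V * e             # Onsager-corrected output mean, eq. (12)
        e = (y - omega) / (V + Delta)     # output function for h(z,d,w) = z + w
        h = 1.0 / (V + Delta)
        Sigma2 = 1.0 / (F2.T @ h)         # eq. (14)
        R = a + Sigma2 * (F.T @ e)
        a_new, v_new = denoise(Sigma2, R, rho)
        a = damp * a_new + (1 - damp) * a          # damping for stability
        v = np.clip(damp * v_new + (1 - damp) * v, 1e-12, None)
    return a
```

Full Cal-AMP additionally carries the gain estimates kµ, lµ of eqs. (18)-(19) through each iteration.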
In this noiseless case we iterate the Cal-AMP equations until the quantity
$$\mathrm{crit} = \frac{1}{MP} \sum_{\mu l} \Big(k_\mu y_{\mu l} - \sum_i F_{\mu i}\, a_{il}\Big)^2$$
becomes smaller than the numerical precision of the implementation, around 10⁻¹⁶, or until it has not decreased for 100 iterations. Success or failure of the reconstruction is usually determined by looking at the mean squared error (MSE) between the true signal x⁰_l and the reconstructed one a_l. In the noiseless setting the product transfer function h leads to a scaling invariance, and therefore a better measure of success is the cross-correlation between true and recovered signal (used in [10]) or a corrected version of the MSE, defined by
$$\mathrm{MSE}_{\mathrm{corr}} = \frac{1}{NP} \sum_{il} \big(x^0_{il} - \hat s\, a_{il}\big)^2, \quad \text{where} \quad \hat s = \frac{1}{M} \sum_\mu \frac{d^0_\mu}{k_\mu} \qquad (21)$$
is an estimate of the scaling factor s. Slight deviations between empirical and theoretical means due to the finite size of M and N lead to important differences between MSE and MSEcorr, and only the latter truly goes to zero for finite N and M. Figure 2: Phase diagrams for different numbers P of calibrating signals: the measurement rate α = M/N is plotted against the density of the signal ρ = K/N. The plotted value is the decimal logarithm of MSEcorr (21) achieved for one random instance. Black indicates failure of the reconstruction, while white represents perfect reconstruction (i.e. an MSE of the order of the numerical precision). In this figure the distortion variance is σ² = 0.01 and N = 1000. While for P = 1 reconstruction is never possible, for P > 1 there is a phase transition very close to the lower bound defined by αmin in equation (3) or to the phase transition line αCS of the pure compressed sensing problem. Note, however, that in the large N limit we expect the calibration phase transition to be strictly larger than both αmin and αCS.
Note also that while this diagram is usually plotted only for α ≤ 1 in compressed sensing, the region α > 1 displays pertinent information in blind calibration. Fig. 2 shows the empirical phase diagrams in the α-ρ plane obtained with the Cal-AMP algorithm for different numbers of signals P. For P = 1 the reconstruction is never exact; effectively this case corresponds to reconstruction without any attempt to calibrate. For any P > 1 there is a sharp phase transition, with a jump in MSEcorr of ten orders of magnitude. As P increases, the phase of exact reconstruction grows and tends to the one observed in Bayesian compressed sensing [15]. Remarkably, for small values of the density ρ, the position of the Cal-AMP phase transition is very close to the CS one already for P = 2, and Cal-AMP performs almost as well as in the total absence of distortion. Figure 3: Left: Cal-AMP phase transition as the system size N grows. The curves are obtained by averaging log10(MSEcorr) over 100 samples, reflecting the probability of correct reconstruction in the region close to the phase transition, where it is not guaranteed. Parameters are ρ = 0.2, P = 2, σ² = 0.0251. For higher values of N, the phase transition becomes sharper. Right: Mean number of iterations necessary for reconstruction, when the true signal is successfully recovered. Far from the phase transition, increasing N does not visibly increase the number of iterations at these system sizes, showing that our algorithm works in linear time. The number of iterations needed increases drastically as one approaches the phase transition.
Figure 4: Left: Position of the phase transition in α for different distortion variances σ². The left vertical line marks the position of the CS phase transition, the right one the counting bound of eq. (3). With growing distortion, larger measurement rates become necessary for perfect calibration and reconstruction. Intermediate values of MSEcorr are obtained in a region where perfect calibration is not possible, but distortions are small enough for the uncalibrated AMP to make only small mistakes. The parameters here are P = 2 and ρ = 0.2. Right: Phase diagram as the variance of the distortions σ² and the number of signals P vary, for ρ = 0.5, α = 0.75 and N = 1000. Fig. 3 shows the behavior near the phase transition, giving insight into the influence of the system size and the number of iterations needed for precise calibration and reconstruction. In Fig. 4, the left panel shows the jump in the MSE on a single instance as the measurement rate α decreases, and the right panel shows the phase diagram in the σ²-P plane. In [9, 10], a calibration algorithm using ℓ1 minimization was proposed. While in that case no assumption on the distributions of the signals and the gains is needed, in most practical cases it is expected to perform worse than Cal-AMP when these distributions are known or reasonably approximated. We implemented the algorithm of [9] in MATLAB using the CVX package [23]. Due to longer running times, experiments were made using a smaller system size, N = 100. We also recall at this point that whereas the Cal-AMP algorithm works for a generic transfer function (1), the ℓ1-minimization based calibration is restricted to the transfer functions considered by [9, 10]. Fig. 5 shows a comparison of the performances of the two algorithms in the α-ρ phase diagrams.
The Cal-AMP algorithm clearly outperforms ℓ1 minimization, in the sense that the region in which calibration is possible is much larger. Figure 5: Comparison of the empirical phase diagrams obtained with the Cal-AMP algorithm proposed here (top) and the ℓ1-minimization calibration algorithm of [9] (bottom), averaged over several random samples; black indicates failure, white indicates success. The area where reconstruction is possible is consistently much larger for Cal-AMP than for ℓ1-minimization-based calibration. The plotted lines are the phase transitions for CS without unknown distortions with the AMP algorithm (αCS, in red, from [21]) and with ℓ1 minimization (the Donoho-Tanner transition αDT, in blue, from [11]). The line αmin is the lower counting bound of eq. (3). The advantage of Cal-AMP over ℓ1-minimization calibration is clear. Note that in both cases the region close to the transition is blurred due to finite system size, hence a region of grey pixels (the effect is more pronounced for the ℓ1 algorithm). 4 Conclusion We have presented the Cal-AMP algorithm for blind calibration in compressed sensing, a problem where the outputs of the measurements are distorted by unknown gains on the sensors, eq. (1). The Cal-AMP algorithm jointly infers the sparse signals and the distortion parameters of each sensor even with a very small number of signals, and is computationally as efficient as the GAMP algorithm for compressed sensing [14]. Another advantage w.r.t.
previous works is that the Cal-AMP algorithm works for a generic transfer function between the measurements and the readings from the sensor, not only those that permit a convex formulation of the inference problem as in [9, 10]. In the numerical analysis we focused on the case of the product transfer function (2) studied in [9]. Our results show that, for the chosen parameters, calibration is possible with a very small number of different sparse signals P (i.e. P = 2 or P = 3), even very close to the absolute minimum number of measurements required by the counting bound (3). Comparison with the ℓ1-minimizing calibration algorithm clearly shows lower requirements on the measurement rate α and on the number of signals P for Cal-AMP. The Cal-AMP algorithm for blind calibration is scalable and simple to implement. Its efficiency shows that supervised training signals are unnecessary for calibration. We expect Cal-AMP to become useful in practical compressed sensing implementations. An asymptotic analysis of AMP can be carried out using the state evolution approach [12]. In the case of Cal-AMP, however, the analysis of the resulting state evolution equations is more difficult and has hence been postponed to future work. Future work also includes the study of robustness to mismatch between the assumed and true distributions of signal elements and distortion parameters, as well as expectation-maximization based learning of the various parameters. Finally, the use of spatially coupled measurement matrices [15, 16] could further improve the performance of the algorithm and make the phase transition asymptotically coincide with the information-theoretic counting bound (3). References [1] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inform. Theory, 51:4203, 2005. [2] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52:1289, 2006. [3] B. C. Ng and C. M. S. See. Sensor-array calibration using a maximum-likelihood approach.
IEEE Transactions on Antennas and Propagation, 44(6):827–835, 1996. [4] Z. Yang, C. Zhang, and L. Xie. Robustly stable signal recovery in compressed sensing with structured matrix perturbation. IEEE Transactions on Signal Processing, 60(9):4658–4671, 2012. [5] R. Mignot, L. Daudet, and F. Ollivier. Compressed sensing for acoustic response reconstruction: Interpolation of the early part. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 225–228, 2011. [6] T. Ragheb, J. N. Laska, H. Nejati, S. Kirolos, R. G. Baraniuk, and Y. Massoud. A prototype hardware for random demodulation based compressive analog-to-digital conversion. In 51st Midwest Symposium on Circuits and Systems (MWSCAS), pages 37–40. IEEE, 2008. [7] J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk. Beyond Nyquist: Efficient sampling of sparse bandlimited signals. IEEE Trans. Inform. Theory, 56(1):520–544, 2010. [8] P. J. Pankiewicz, T. Arildsen, and T. Larsen. Model-based calibration of filter imperfections in the random demodulator for compressive sensing. arXiv:1303.6135, 2013. [9] R. Gribonval, G. Chardon, and L. Daudet. Blind calibration for compressed sensing by convex optimization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2713–2716, 2012. [10] C. Bilen, G. Puy, R. Gribonval, and L. Daudet. Blind sensor calibration in sparse recovery using convex optimization. In 10th Int. Conf. on Sampling Theory and Applications, 2013. [11] D. L. Donoho and J. Tanner. Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci., 102(27):9446–9451, 2005. [12] D. L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci., 106(45):18914–18919, 2009. [13] D. L. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing: I. Motivation and construction.
In IEEE Information Theory Workshop (ITW), pages 1–5, 2010. [14] S. Rangan. Generalized approximate message passing for estimation with random linear mixing. In Proc. of the IEEE Int. Symp. on Inform. Theory (ISIT), pages 2168–2172, 2011. [15] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová. Statistical physics-based reconstruction in compressed sensing. Phys. Rev. X, 2:021005, 2012. [16] D. L. Donoho, A. Javanmard, and A. Montanari. Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing. In Proc. of the IEEE Int. Symposium on Information Theory (ISIT), pages 1231–1235, 2012. [17] J. Boutros and G. Caire. Iterative multiuser joint decoding: Unified framework and asymptotic analysis. IEEE Trans. Inform. Theory, 48(7):1772–1793, 2002. [18] Y. Kabashima. A CDMA multiuser detection algorithm on the basis of belief propagation. J. Phys. A: Math. and Gen., 36(43):11111, 2003. [19] U. S. Kamilov, A. Bourquard, E. Bostan, and M. Unser. Autocalibrated signal reconstruction from linear measurements using adaptive GAMP. Online preprint, 2013. [20] F. Krzakala, M. Mézard, and L. Zdeborová. Phase diagram and approximate message passing for blind calibration and dictionary learning. ISIT 2013, arXiv:1301.5898, 2013. [21] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová. Probabilistic reconstruction in compressed sensing: Algorithms, phase diagrams, and threshold achieving matrices. J. Stat. Mech., P08009, 2012. [22] http://aspics.krzakala.org/. [23] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx, 2012.
Active Learning for Probabilistic Hypotheses Using the Maximum Gibbs Error Criterion Nguyen Viet Cuong Wee Sun Lee Nan Ye Department of Computer Science National University of Singapore {nvcuong,leews,yenan}@comp.nus.edu.sg Kian Ming A. Chai Hai Leong Chieu DSO National Laboratories, Singapore {ckianmin,chaileon}@dso.org.sg Abstract We introduce a new objective function for pool-based Bayesian active learning with probabilistic hypotheses. This objective function, called the policy Gibbs error, is the expected error rate of a random classifier drawn from the prior distribution on the examples adaptively selected by the active learning policy. Exact maximization of the policy Gibbs error is hard, so we propose a greedy strategy that maximizes the Gibbs error at each iteration, where the Gibbs error on an instance is the expected error of a random classifier selected from the posterior label distribution on that instance. We apply this maximum Gibbs error criterion to three active learning scenarios: non-adaptive, adaptive, and batch active learning. In each scenario, we prove that the criterion achieves near-maximal policy Gibbs error when constrained to a fixed budget. For practical implementations, we provide approximations to the maximum Gibbs error criterion for Bayesian conditional random fields and transductive Naive Bayes. Our experimental results on a named entity recognition task and a text classification task show that the maximum Gibbs error criterion is an effective active learning criterion for noisy models. 1 Introduction In pool-based active learning [1], we select training data from a finite set (called a pool) of unlabeled examples and aim to obtain good performance on the set by asking for as few labels as possible. If a large enough pool is sampled from the true distribution, good performance of a classifier on the pool implies good generalization performance of the classifier. 
Previous theoretical works on Bayesian active learning mainly deal with the noiseless case, which assumes a prior distribution on a collection of deterministic mappings from observations to labels [2, 3]. A fixed deterministic mapping is then drawn from the prior and used to label the examples. In this paper, probabilistic hypotheses, rather than deterministic ones, are used to label the examples. We formulate the objective as a maximum coverage objective with a fixed budget: with a budget of k queries, we aim to select k examples such that the policy Gibbs error is maximal. The policy Gibbs error of a policy is the expected error rate of a Gibbs classifier (a Gibbs classifier samples a hypothesis from the prior for labeling) on the set adaptively selected by the policy. The policy Gibbs error is a lower bound of the policy entropy, a generalization of the Shannon entropy to general (both adaptive and non-adaptive) policies. [Figure 1: An example of a non-adaptive policy tree (left) and an adaptive policy tree (right).] For non-adaptive policies, the policy Gibbs error reduces to the Gibbs error for sets, which is a special case of a measure of uncertainty called the Tsallis entropy [4]. By maximizing the policy Gibbs error, we hope to maximize the policy entropy, whose maximality implies the minimality of the posterior label entropy of the remaining unlabeled examples in the pool. Moreover, by maximizing the policy Gibbs error, we also aim to obtain a small expected error of a posterior Gibbs classifier (which samples a hypothesis from the posterior instead of the prior for labeling). A small expected error of the posterior Gibbs classifier is desirable, as it upper-bounds the Bayes error while being at most twice the Bayes error.
Maximizing the policy Gibbs error is hard, so we propose a greedy criterion, the maximum Gibbs error criterion (maxGEC), to approximately solve it. With this criterion, the next query is made on the candidate (which may be one or several examples) that has maximum Gibbs error, i.e., the probability that a randomly sampled labeling does not match the actual labeling. We investigate this criterion in three settings: the non-adaptive setting, the adaptive setting, and the batch setting (also called the batch mode setting) [5]. In the non-adaptive setting, no example is labeled until all examples in the set have been selected. In the adaptive setting, each example is labeled as soon as it is selected, and the new information is used to select the next example. In the batch setting, we select a batch of examples, query their labels, and proceed to select the next batch taking the labels into account. In all these settings, we prove that maxGEC is near-optimal compared to the best policy, i.e., the policy with maximal policy Gibbs error in the setting. We examine how to compute the maxGEC criterion, particularly for large structured probabilistic models such as conditional random fields [6]. When inference in the conditional random field can be done efficiently, we show how to compute an approximation to the Gibbs error by sampling and efficient inference. We also provide an approximation for maxGEC in the non-adaptive and batch settings with the Bayesian transductive Naive Bayes model. Finally, we conduct pool-based active learning experiments using maxGEC on a named entity recognition task with conditional random fields and on a text classification task with Bayesian transductive Naive Bayes. The results show good performance of maxGEC in terms of the area under the curve (AUC). 2 Preliminaries Let X be a set of examples, Y be a fixed finite set of labels, and H be a set of probabilistic hypotheses. We assume H is finite, but our results extend readily to general H.
For any probabilistic hypothesis h ∈ H, its application to an example x ∈ X is a categorical random variable with support Y, and we write P[h(x) = y | h] for the probability that h(x) has value y ∈ Y. We extend the notation to any sequence S of examples from X and write P[h(S) = y | h] for the probability that h(S) has a labeling y ∈ Y^{|S|}, where Y^{|S|} is the set of all labelings of S. We operate within the Bayesian setting and assume a prior probability p0[h] on H. We use pD[h] to denote the posterior p0[h | D] after observing a set D of labeled examples from X × Y. A pool-based active learning algorithm is a policy for choosing training examples from a pool $X \subseteq \mathcal{X}$. At the beginning, a fixed labeling y∗ of X is given by a hypothesis h drawn from the prior p0[h] and is hidden from the learner. Equivalently, y∗ can be drawn from the prior label distribution p0[y∗; X]. For any distribution p[h], we use p[y; S] to denote the probability that the examples in S are assigned the labeling y by a hypothesis drawn randomly from p[h]. Formally, $p[y; S] \stackrel{\mathrm{def}}{=} \sum_{h \in \mathcal{H}} p[h]\, P[h(S) = y \mid h]$. When S is a singleton {x}, we write p[y; x] for p[{y}; {x}]. During the learning process, each time the learner selects an unlabeled example, its label is revealed to the learner. A policy for choosing training examples is a mapping from a set of labeled examples to an unlabeled example to be queried. It can be represented by a policy tree, where a node represents the next example to be queried, and each edge from the node corresponds to a possible label. We use policy and policy tree as synonyms. Figure 1 illustrates the top three levels of two policy trees: in the non-adaptive setting, the policy ignores the labels of the previously selected examples, so all examples at the same depth of the policy tree are the same; in the adaptive setting, the policy takes into account the observed labels when choosing the next example. A full policy tree for a pool X is a policy tree of height |X|.
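The induced label distribution $p[y; S] = \sum_h p[h]\, P[h(S)=y \mid h]$ can be computed directly when the hypothesis class is small. Below is a minimal sketch with made-up numbers, assuming (purely for illustration) that each hypothesis labels examples independently so that $P[h(S)=y \mid h]$ factorizes; the paper only requires the joint probability.

```python
import itertools
import numpy as np

# Toy setup: three probabilistic hypotheses over binary labels.
prior = np.array([0.5, 0.3, 0.2])            # p0[h]
p1 = np.array([[0.9, 0.8, 0.1],              # P[h(x)=1 | h] for 3 examples
               [0.2, 0.5, 0.7],
               [0.6, 0.1, 0.4]])

def label_dist(S):
    """p[y; S] = sum_h p0[h] * P[h(S)=y | h], over all labelings y of S."""
    dist = {}
    for y in itertools.product([0, 1], repeat=len(S)):
        dist[y] = sum(prior[h] * np.prod([p1[h, x] if b else 1 - p1[h, x]
                                          for x, b in zip(S, y)])
                      for h in range(len(prior)))
    return dist

dist = label_dist([0, 1])
assert abs(sum(dist.values()) - 1.0) < 1e-9  # p[.; S] is a distribution
```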
A partial policy tree is a subtree of a full policy tree with the same root. The class of policies of height k will be denoted by Πk. Our query criterion gives a method to build a full policy tree one level at a time. The main building block is the probability distribution $p_0^\pi[\cdot]$ over all possible paths from the root to the leaves of any (full or partial) policy tree π. This distribution over paths is induced by the uncertainty in the fixed labeling y∗ of X: since y∗ is drawn randomly from p0[y∗; X], the path ρ followed from the root to a leaf of the policy tree during the execution of π is also a random variable. If xρ (resp. yρ) is the sequence of examples (resp. labels) along path ρ, then the probability of ρ is $p_0^\pi[\rho] \stackrel{\mathrm{def}}{=} p_0[y_\rho; x_\rho]$. 3 Maximum Gibbs Error Criterion for Active Learning A commonly used objective for active learning in the non-adaptive setting is to choose k training examples such that their Shannon entropy is maximal, as this reduces uncertainty in the later stage. We first give a generalization of the concept of Shannon entropy to general (both adaptive and non-adaptive) policies. Formally, the policy entropy of a policy π is $H(\pi) \stackrel{\mathrm{def}}{=} \mathbb{E}_{\rho \sim p_0^\pi}[-\ln p_0^\pi[\rho]]$. By this definition, the policy entropy is the Shannon entropy of the paths in the policy, and it reduces to the Shannon entropy on a set of examples when the policy is non-adaptive. The following result states formally that maximizing the policy entropy minimizes the uncertainty about the labels of the remaining unlabeled examples in the pool. Suppose a path ρ has been observed; the labels of the remaining examples in X \ xρ follow the distribution pρ[ · ; X \ xρ], where pρ is the posterior obtained after observing (xρ, yρ). The entropy of this distribution will be denoted by G(ρ) and called the posterior label entropy of the remaining examples given ρ.
Formally, $G(\rho) = -\sum_{y} p_\rho[y; X \setminus x_\rho] \ln p_\rho[y; X \setminus x_\rho]$, where the summation is over all possible labelings y of X \ xρ. The posterior label entropy of a policy π is defined as $G(\pi) = \mathbb{E}_{\rho \sim p_0^\pi}\, G(\rho)$. Theorem 1. For any k ≥ 1, if a policy π in Πk maximizes H(π), then π minimizes the posterior label entropy G(π). Proof. It can easily be verified that H(π) + G(π) equals the Shannon entropy of the label distribution p0[ · ; X], which is a constant (the detailed proof is in the supplementary material). Thus, the theorem follows. The usual maximum Shannon entropy criterion, which selects the next example x maximizing $\mathbb{E}_{y \sim p_D[y; x]}[-\ln p_D[y; x]]$, where D is the set of previously observed labeled examples, can be thought of as a greedy heuristic for building a policy π maximizing H(π). However, it is still unknown whether this greedy criterion has any theoretical guarantee, except in the non-adaptive case. In this paper, we introduce a new objective for active learning: the policy Gibbs error. This new objective is a lower bound of the policy entropy, and there are near-optimal greedy algorithms to optimize it. Intuitively, the policy Gibbs error of a policy π is the expected probability that a Gibbs classifier makes an error on the set adaptively selected by π. Formally, we define the policy Gibbs error of a policy π as $V(\pi) \stackrel{\mathrm{def}}{=} \mathbb{E}_{\rho \sim p_0^\pi}[\,1 - p_0^\pi[\rho]\,]$. (1) In this equation, $1 - p_0^\pi[\rho]$ is the probability that a Gibbs classifier makes an error on the selected set along the path ρ. Theorem 2 below, which follows directly from the inequality $x \geq 1 + \ln x$, states that the policy Gibbs error is a lower bound of the policy entropy. Theorem 2. For any (full or partial) policy π, we have V(π) ≤ H(π). Given a budget of k queries, our proposed objective is to find $\pi^* = \arg\max_{\pi \in \Pi_k} V(\pi)$, the height-k policy with maximum policy Gibbs error. By maximizing V(π), we hope to maximize the policy entropy H(π), and thus minimize the uncertainty about the remaining examples.
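Theorem 2 is easy to check numerically: for each term, $1 - p \leq -\ln p$, so summing against the path distribution gives $V(\pi) \leq H(\pi)$. In the sketch below, random distributions stand in for the path probabilities $p_0^\pi[\rho]$ of actual policies:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.dirichlet(np.ones(6))    # a random path distribution p0^pi[rho]
    V = np.sum(p * (1 - p))          # policy Gibbs error, Equation (1)
    H = np.sum(-p * np.log(p))       # policy entropy
    assert V <= H + 1e-12            # Theorem 2, from x >= 1 + ln x
print("Theorem 2 holds on all sampled distributions")
```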
Furthermore, we also hope to obtain a small expected error of a posterior Gibbs classifier, which upper-bounds the Bayes error while being at most twice the Bayes error. Using this objective, we propose greedy algorithms for active learning that are provably near-optimal for probabilistic hypotheses. We consider the non-adaptive, adaptive, and batch settings. 3.1 The Non-adaptive Setting In the non-adaptive setting, the policy π ignores the observed labels: it never updates the posterior. This is equivalent to selecting a set of examples before any labeling is done. In this setting, the examples selected along all paths of π are the same. Let xπ be the set of examples selected by π. The Gibbs error of a non-adaptive policy π is simply $V(\pi) = \mathbb{E}_{y \sim p_0[\cdot;\, x_\pi]}[1 - p_0[y; x_\pi]]$. Thus, the optimal non-adaptive policy selects a set S of examples maximizing its Gibbs error, defined by $\epsilon^{p_0}_g(S) \stackrel{\mathrm{def}}{=} 1 - \sum_y p_0[y; S]^2$. In general, the Gibbs error of a distribution P is $1 - \sum_i P[i]^2$, where the summation is over the elements in the support of P. The Gibbs error is a special case of the Tsallis entropy used in nonextensive statistical mechanics [4] and is known to be monotone submodular [7]. From the properties of monotone submodular functions [8], the greedy non-adaptive policy that selects the next example $x_{i+1} = \arg\max_x \{\epsilon^{p_0}_g(S_i \cup \{x\})\} = \arg\max_x \{1 - \sum_y p_0[y; S_i \cup \{x\}]^2\}$, (2) where Si is the set of previously selected examples, is near-optimal compared to the best non-adaptive policy. This is stated below. Theorem 3. Given a budget of k ≥ 1 queries, let πn be the non-adaptive policy in Πk selecting examples using Equation (2), and let π∗n be the non-adaptive policy in Πk with the maximum policy Gibbs error. Then, V(πn) > (1 − 1/e) V(π∗n). 3.2 The Adaptive Setting In the adaptive setting, a policy takes into account the observed labels when choosing the next example. This is done via the posterior update after observing the label of a selected example.
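The greedy rule of Equation (2) can be sketched directly for a small hypothesis class. The setup below uses made-up random numbers and assumes (for illustration only) that hypotheses label examples independently; the monotonicity of the Gibbs error noted above means the selected set's score can only grow as examples are added.

```python
import itertools
import numpy as np

# Toy prior over 4 hypotheses labeling 5 examples independently given h.
rng = np.random.default_rng(1)
prior = rng.dirichlet(np.ones(4))
p1 = rng.uniform(0.05, 0.95, size=(4, 5))    # P[h(x)=1 | h]

def gibbs_error(S):
    """eps_g^{p0}(S) = 1 - sum_y p0[y; S]^2 over labelings y of S."""
    tot = 0.0
    for y in itertools.product([0, 1], repeat=len(S)):
        py = sum(prior[h] * np.prod([p1[h, x] if b else 1 - p1[h, x]
                                     for x, b in zip(S, y)])
                 for h in range(4))
        tot += py ** 2
    return 1.0 - tot

# Greedy rule of Equation (2): repeatedly add the example that maximizes
# the Gibbs error of the selected set.
S, pool = [], list(range(5))
for _ in range(3):
    best = max(pool, key=lambda x: gibbs_error(S + [x]))
    S.append(best)
    pool.remove(best)
print(S)
```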
The adaptive setting is the most common setting for active learning. We now describe a greedy adaptive algorithm for this setting that is near-optimal. Assume that the current posterior obtained after observing the labeled examples D is pD. Our greedy algorithm selects the next example x that maximizes $\epsilon^{p_D}_g(x)$: $x^* = \arg\max_x \epsilon^{p_D}_g(x) = \arg\max_x \{1 - \sum_{y \in \mathcal{Y}} p_D[y; x]^2\}$. (3) From the definition of $\epsilon^{p_D}_g$ in Section 3.1, $\epsilon^{p_D}_g(x)$ is in fact the Gibbs error of a 1-step policy with respect to the prior pD. Thus, we call this greedy criterion the adaptive maximum Gibbs error criterion (maxGEC). Note that in binary classification, where |Y| = 2, maxGEC selects the same example as the maximum Shannon entropy and least confidence criteria; the criteria differ in the multi-class case. Theorem 4 below states that maxGEC is near-optimal compared to the best adaptive policy with respect to the objective in Equation (1). Theorem 4. Given a budget of k ≥ 1 queries, let πmaxGEC be the adaptive policy in Πk selecting examples using maxGEC, and let π∗ be the adaptive policy in Πk with the maximum policy Gibbs error. Then, V(πmaxGEC) > (1 − 1/e) V(π∗). The proof of this theorem is in the supplementary material. The main idea is to reduce probabilistic hypotheses to deterministic ones by expanding the hypothesis space. For deterministic hypotheses, we show that maxGEC is equivalent to maximizing the version space reduction objective, which is known to be adaptive monotone submodular [2]. Thus, we can apply a known result for optimizing adaptive monotone submodular functions [2] to obtain Theorem 4. Algorithm 1: Batch maxGEC for Bayesian Batch Active Learning. Input: Unlabeled pool X, prior p0, number of iterations k, and batch size s.
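For a single example, Equation (3) only needs the posterior label marginals. A minimal sketch with made-up marginals, showing that maxGEC queries the most uncertain candidate (in the binary case this coincides with maximum entropy and least confidence, as noted above):

```python
import numpy as np

def gibbs_error_single(p_y):
    """Equation (3): 1 - sum_y pD[y; x]^2 for one example x."""
    p_y = np.asarray(p_y, dtype=float)
    return 1.0 - np.sum(p_y ** 2)

# Posterior label marginals pD[y; x] for four candidates (made-up numbers).
candidates = {
    "x1": [0.95, 0.05],
    "x2": [0.55, 0.45],   # the most uncertain example
    "x3": [0.70, 0.30],
    "x4": [0.99, 0.01],
}
pick = max(candidates, key=lambda x: gibbs_error_single(candidates[x]))
assert pick == "x2"       # maxGEC queries the most uncertain example
```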
for i = 0 to k − 1 do
    S ← ∅
    for j = 0 to s − 1 do
        x∗ ← arg max_x ε_g^{p_i}(S ∪ {x});  S ← S ∪ {x∗};  X ← X \ {x∗}
    end for
    y_S ← Query-labels(S);  p_{i+1} ← Posterior-update(p_i, S, y_S)
end for
3.3 The Batch Setting In the batch setting [5], we query the labels of s (instead of 1) examples at a time, and we do this for a given number of iterations k. After each iteration, we query the labeling of the selected batch and update the posterior based on this labeling. The new posterior can then be used to select the next batch of examples. A non-adaptive policy can be seen as a batch policy that selects only one batch. Algorithm 1 describes a greedy algorithm for this setting, which we call the batch maxGEC algorithm. At iteration i of the algorithm, with posterior pi, the batch S is first initialized to be empty; then s examples are greedily chosen one at a time using the criterion $x^* = \arg\max_x \epsilon^{p_i}_g(S \cup \{x\})$. (4) This is equivalent to running the non-adaptive greedy algorithm of Section 3.1 to select each batch. Query-labels(S) returns the true labeling yS of S, and Posterior-update(pi, S, yS) returns the new posterior obtained from the prior pi after observing yS. The following theorem states that batch maxGEC is near-optimal compared to the best batch policy with respect to the objective in Equation (1). The proof of this theorem is in the supplementary material; it also makes use of the reduction to deterministic hypotheses and the adaptive submodularity of version space reduction. Theorem 5. Given a budget of k batches of size s, let $\pi_b^{\mathrm{maxGEC}}$ be the batch policy selecting k batches using batch maxGEC, and let $\pi_b^*$ be the batch policy selecting k batches with maximum policy Gibbs error. Then, $V(\pi_b^{\mathrm{maxGEC}}) > (1 - e^{-(e-1)/e})\, V(\pi_b^*)$.
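Algorithm 1 can be sketched end to end on a toy hypothesis class. Everything below (the number of hypotheses, pool size, and the assumption that hypotheses label examples independently) is an illustrative stand-in, not the paper's experimental setup; the posterior update is exact Bayes over the finite hypothesis set.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
H, N = 5, 8                                  # number of hypotheses, pool size
p1 = rng.uniform(0.05, 0.95, size=(H, N))    # P[h(x)=1 | h], made-up values
posterior = np.full(H, 1.0 / H)              # current posterior, starts at p0
truth = rng.integers(0, 2, size=N)           # hidden labeling y*

def gibbs_error(S, post):
    """eps_g(S) = 1 - sum over labelings y of S of p[y; S]^2."""
    tot = 0.0
    for y in itertools.product([0, 1], repeat=len(S)):
        py = sum(post[h] * np.prod([p1[h, x] if b else 1 - p1[h, x]
                                    for x, b in zip(S, y)])
                 for h in range(H))
        tot += py ** 2
    return 1.0 - tot

pool, k, s = list(range(N)), 2, 3            # k batches of size s
for _ in range(k):
    S = []
    for _ in range(s):                       # greedy batch construction, Eq. (4)
        x = max(pool, key=lambda x: gibbs_error(S + [x], posterior))
        S.append(x)
        pool.remove(x)
    for x in S:                              # Query-labels + Posterior-update
        lik = np.where(truth[x] == 1, p1[:, x], 1 - p1[:, x])
        posterior = posterior * lik
        posterior /= posterior.sum()

assert abs(posterior.sum() - 1.0) < 1e-9
```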
This theorem has a different bounding constant than those in Theorems 3 and 4 because it uses two levels of approximation to compute the batch policy: at each iteration, it approximates the optimal batch by greedily choosing one example at a time using Equation (4) (the first approximation); it then uses these greedily chosen batches to approximate the optimal batch policy (the second approximation). In contrast, the fully adaptive case has batch size 1 and only needs the second approximation, while the non-adaptive case chooses a single batch and only needs the first approximation. In the non-adaptive and batch settings, our algorithms need to sum over all labelings of the previously selected examples in a batch to choose the next example. This summation is usually expensive, and it restricts the algorithms to small batches. However, we note that small batches may be preferred in some real problems. For example, if there is a small number of annotators and labeling one example takes a long time, we may want to select a batch size that matches the number of annotators. In this case, the annotators can label the examples concurrently, and we can make use of the labels as soon as they are available. It would take longer to label a larger batch, and we could not use the labels until all the examples in the batch were labeled. 4 Computing maxGEC We now discuss how to compute maxGEC and batch maxGEC for some probabilistic models. Computing these values is often difficult, and we discuss some sampling methods for this task. 4.1 MaxGEC for Bayesian Conditional Exponential Models A conditional exponential model defines the conditional probability Pλ[⃗y | ⃗x] of a structured label ⃗y given a structured input ⃗x as $P_\lambda[\vec{y} \mid \vec{x}] = \exp\left(\sum_{i=1}^m \lambda_i F_i(\vec{y}, \vec{x})\right) / Z_\lambda(\vec{x})$, where λ = Algorithm 2: Approximation for Equation (4). Input: Selected unlabeled examples S, current unlabeled example x, current posterior $p^c_D$. Sample M label vectors $(y^i)_{i=0}^{M-1}$ of $(X \setminus T) \cup \mathcal{T}$ from $p^c_D$ using Gibbs sampling, and set r ← 0.
for i = 0 to M − 1 do
    for y ∈ Y do
        $\hat{p}^c_D[h(S) = y^i_S \wedge h(x) = y] \leftarrow M^{-1} \left|\{\, y^j \mid y^j_S = y^i_S \wedge y^j_{\{x\}} = y \,\}\right|$
        $r \leftarrow r + \left(\hat{p}^c_D[h(S) = y^i_S \wedge h(x) = y]\right)^2$
    end for
end for
return 1 − r
(λ1, λ2, . . . , λm) is the parameter vector, Fi(⃗y, ⃗x) is the total score of the i-th feature, and $Z_\lambda(\vec{x}) = \sum_{\vec{y}} \exp\left(\sum_{i=1}^m \lambda_i F_i(\vec{y}, \vec{x})\right)$ is the partition function. A well-known conditional exponential model is the linear-chain conditional random field (CRF) [6], in which ⃗x and ⃗y both have sequence structures. That is, $\vec{x} = (x_1, x_2, \ldots, x_{|\vec{x}|}) \in \mathcal{X}^{|\vec{x}|}$ and $\vec{y} = (y_1, y_2, \ldots, y_{|\vec{x}|}) \in \mathcal{Y}^{|\vec{x}|}$. In this model, $F_i(\vec{y}, \vec{x}) = \sum_{j=1}^{|\vec{x}|} f_i(y_j, y_{j-1}, \vec{x})$, where $f_i(y_j, y_{j-1}, \vec{x})$ is the score of the i-th feature at position j. In the Bayesian setting, we assume a prior $p_0[\lambda] = \prod_{i=1}^m p_0[\lambda_i]$ on λ, where $p_0[\lambda_i] = \mathcal{N}(\lambda_i \mid 0, \sigma^2)$ for a known σ. After observing the labeled examples $D = \{(\vec{x}^j, \vec{y}^j)\}_{j=1}^t$, we obtain the posterior $p_D[\lambda] = p_0[\lambda \mid D] \propto \prod_{j=1}^t \frac{1}{Z_\lambda(\vec{x}^j)} \exp\left(\sum_{i=1}^m \lambda_i F_i(\vec{y}^j, \vec{x}^j)\right) \exp\left(-\frac{1}{2}\sum_{i=1}^m \left(\frac{\lambda_i}{\sigma}\right)^2\right)$. For active learning, we need to estimate the Gibbs error in Equation (3) from the posterior pD. For each ⃗x, we can approximate the Gibbs error $\epsilon^{p_D}_g(\vec{x}) = 1 - \sum_{\vec{y}} p_D[\vec{y}; \vec{x}]^2$ by sampling N hypotheses λ1, λ2, . . . , λN from the posterior pD. In this case, $\epsilon^{p_D}_g(\vec{x}) \approx 1 - N^{-2} \sum_{j=1}^N \sum_{t=1}^N Z_{\lambda^j + \lambda^t}(\vec{x}) \,/\, \left(Z_{\lambda^j}(\vec{x})\, Z_{\lambda^t}(\vec{x})\right)$. The derivation of this formula is in the supplementary material. If we only use the MAP hypothesis λ∗ to approximate the Gibbs error (i.e., the non-Bayesian setting), then N = 1 and $\epsilon^{p_D}_g(\vec{x}) \approx 1 - Z_{2\lambda^*}(\vec{x}) / Z_{\lambda^*}(\vec{x})^2$. This approximation can be done efficiently if we can compute the partition function Zλ(⃗x) efficiently for any λ. This condition holds for a wide range of models, including logistic regression, linear-chain CRFs, semi-Markov CRFs [9], and sparse high-order semi-Markov CRFs [10]. 4.2 Batch maxGEC for Bayesian Transductive Naive Bayes We discuss an algorithm to approximate batch maxGEC for non-adaptive and batch active learning with Bayesian transductive Naive Bayes.
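The identity behind the sampled-hypotheses approximation, $\sum_{\vec{y}} p_{\lambda^j}[\vec{y}]\, p_{\lambda^t}[\vec{y}] = Z_{\lambda^j+\lambda^t}/(Z_{\lambda^j} Z_{\lambda^t})$, holds for any model whose score is linear in λ. It can be checked numerically on a tiny unstructured exponential model (random made-up features, not a CRF; the λ samples are random stand-ins for posterior draws):

```python
import numpy as np

rng = np.random.default_rng(4)
K, d = 4, 3                           # number of labels, feature dimension
F = rng.normal(size=(K, d))           # feature vectors F(y, x) for a fixed x

def Z(lam):
    """Partition function Z_lambda(x) = sum_y exp(lambda . F(y, x))."""
    return np.sum(np.exp(F @ lam))

def p(lam):
    """Model distribution over the K labels under parameters lam."""
    s = np.exp(F @ lam)
    return s / s.sum()

lams = [rng.normal(size=d) for _ in range(5)]   # stand-ins for posterior samples
N = len(lams)
eps_direct = 1 - sum(np.dot(p(a), p(b)) for a in lams for b in lams) / N**2
eps_Z = 1 - sum(Z(a + b) / (Z(a) * Z(b)) for a in lams for b in lams) / N**2
assert abs(eps_direct - eps_Z) < 1e-8           # the two estimators agree
```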
First, we describe the Bayesian transductive Naive Bayes model for text classification. Let Y ∈ Y be a random variable denoting the label of a document and W ∈ W be a random variable denoting a word. In a Naive Bayes model, the parameters are $\theta = \{\theta_y\}_{y \in \mathcal{Y}} \cup \{\theta_{w|y}\}_{w \in \mathcal{W}, y \in \mathcal{Y}}$, where $\theta_y = P[Y = y]$ and $\theta_{w|y} = P[W = w \mid Y = y]$. For a document X with label Y, if $X = \{W_1, W_2, \ldots, W_{|X|}\}$ where Wi is a word in the document, we model the joint distribution as $P[X, Y] = \theta_Y \prod_{i=1}^{|X|} \theta_{W_i \mid Y}$. In the Bayesian setting, we have a prior p0[θ] such that θy ∼ Dirichlet(α) and θw|y ∼ Dirichlet(αy) for each y. When we observe labeled documents, we update the posterior by counting the labels and the words in each document label. The posterior parameters also follow Dirichlet distributions. Let X be the original pool of training examples and $\mathcal{T}$ be the unlabeled testing examples. In the transductive setting, we work with the conditional prior $p^c_0[\theta] = p_0[\theta \mid X; \mathcal{T}]$. For a set D = (T, yT) of labeled examples, where T ⊆ X is a set of previously unlabeled examples and yT is the labeling of T, the conditional posterior is $p^c_D[\theta] = p_0[\theta \mid X; \mathcal{T}; D] = p_D[\theta \mid (X \setminus T) \cup \mathcal{T}]$, where pD[θ] = p0[θ | D] is the Dirichlet posterior of the non-transductive model. To implement the batch maxGEC algorithm, we need to estimate the Gibbs error in Equation (4) from the conditional posterior. Let S be the currently selected batch. For each unlabeled example x ∉ S, we need to estimate $1 - \sum_{y_S, y} \left(p^c_D[h(S) = y_S \wedge h(x) = y]\right)^2 = 1 - \mathbb{E}_{y_S}\!\left[\frac{\sum_{y} \left(p^c_D[h(S) = y_S \wedge h(x) = y]\right)^2}{p^c_D[y_S; S]}\right]$, Table 1: AUC of different learning algorithms with batch size s = 10.
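The counting estimator of Algorithm 2 is easy to sketch. Here the Gibbs samples from the conditional posterior are stood in for by i.i.d. uniform label vectors, purely so the example is self-contained and its true value is known; a real run would draw the M vectors by Gibbs sampling from $p^c_D$.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(3)
M = 2000
# Stand-ins for the M Gibbs samples: label vectors over 4 unlabeled examples.
samples = rng.integers(0, 2, size=(M, 4))

def mc_gibbs_error(S, x):
    """Estimate 1 - sum_{yS, y} p[h(S)=yS and h(x)=y]^2 by counting the
    fraction of sampled label vectors consistent with each (yS, y)."""
    counts = Counter(tuple(row[S + [x]]) for row in samples)
    return 1.0 - sum((c / M) ** 2 for c in counts.values())

est = mc_gibbs_error([0, 1], 2)
# For uniform independent labels, the true value is 1 - 8 * (1/8)^2 = 0.875.
assert abs(est - 0.875) < 0.05
```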
Task | TPass | maxGEC | LC | NPass | LogPass | LogFisher
alt.atheism/comp.graphics | 87.43 | 91.69 | 91.66 | 84.98 | 91.63 | 93.92
talk.politics.guns/talk.politics.mideast | 84.92 | 92.03 | 92.16 | 80.80 | 86.07 | 88.36
comp.sys.mac.hardware/comp.windows.x | 73.17 | 93.60 | 92.27 | 74.41 | 85.87 | 88.71
rec.motorcycles/rec.sport.baseball | 93.82 | 96.40 | 96.23 | 92.33 | 89.46 | 93.90
sci.crypt/sci.electronics | 60.46 | 85.51 | 85.86 | 60.85 | 82.89 | 87.72
sci.space/soc.religion.christian | 92.38 | 95.83 | 95.45 | 89.72 | 91.16 | 94.04
soc.religion.christian/talk.politics.guns | 91.57 | 95.94 | 95.59 | 85.56 | 90.35 | 93.96
Average | 83.39 | 93.00 | 92.75 | 81.24 | 88.21 | 91.52
where the expectation is taken with respect to the distribution $p^c_D[y_S; S]$. We can use Gibbs sampling to approximate this expectation. First, we sample M label vectors $y_{(X \setminus T) \cup \mathcal{T}}$ of the remaining unlabeled examples from $p^c_D$ using Gibbs sampling. Then, for each yS, we estimate $p^c_D[y_S; S]$ by the fraction of the M sampled vectors consistent with yS. For each yS and y, we also estimate $p^c_D[h(S) = y_S \wedge h(x) = y]$ by the fraction of the M sampled vectors consistent with both yS and y on S ∪ {x}. This approximation is equivalent to Algorithm 2, in which $y^i_S$ denotes the labeling of S according to $y^i$. 5 Experiments 5.1 Named Entity Recognition (NER) with CRF In this experiment, we consider the NER task with the Bayesian CRF model described in Section 4.1. We use a subset of the CoNLL 2003 NER task [11] that contains 1928 training and 969 test sentences. Following the setting in [12], we let the cost of querying the label sequence of each sentence be 1. We implement two versions of maxGEC with the approximation algorithm in Section 4.1: the first approximates the Gibbs error using only the MAP hypothesis (maxGEC-MAP), and the second approximates the Gibbs error using 50 hypotheses sampled from the posterior (maxGEC-50). We sample the hypotheses for maxGEC-50 from the posterior by the Metropolis-Hastings algorithm, with the MAP hypothesis as the initial point.
We compare the maxGEC algorithms with four other learning criteria: a passive learner (Passive), an active learner that chooses the longest unlabeled sequence (Longest), an active learner that chooses the unlabeled sequence with maximum Shannon entropy (SegEnt), and an active learner that chooses the unlabeled sequence with the least confidence (LeastConf). For SegEnt and LeastConf, the entropy and confidence are estimated from the MAP hypothesis. For all the algorithms, we use the MAP hypothesis for Viterbi decoding. To our knowledge, there is no simple way to compute the SegEnt or LeastConf criteria from a finite sample of hypotheses other than using only the MAP estimate: the difficulty lies in computing a summation (a minimization for LeastConf) over all the outputs ⃗y of the complex structured model. For maxGEC, the summation can be rearranged to obtain partition functions, which can be computed efficiently using known inference algorithms. This is an advantage of using maxGEC. We compare the total area under the F1 curve (AUC) for each algorithm after querying the first 500 sentences. As a percentage of the maximum score of 500, the algorithms Passive, Longest, SegEnt, LeastConf, maxGEC-MAP, and maxGEC-50 attain 72.8, 67.0, 75.4, 75.5, 75.8, and 76.0, respectively. Hence, the maxGEC algorithms perform better than all the other algorithms, and significantly better than the Passive and Longest algorithms. 5.2 Text Classification with Bayesian Transductive Naive Bayes In this experiment, we consider the text classification model of Section 4.2 with the meta-parameters α = (0.1, . . . , 0.1) and αy = (0.1, . . . , 0.1) for all y.
We implement batch maxGEC (maxGEC) with the approximation in Algorithm 2 and compare it with five other algorithms: a passive learner with the Bayesian transductive Naive Bayes model (TPass), a least confidence active learner with the Bayesian transductive Naive Bayes model (LC), a passive learner with the Bayesian non-transductive Naive Bayes model (NPass), a passive learner with a logistic regression model (LogPass), and a batch active learner with the Fisher information matrix and a logistic regression model (LogFisher) [5]. To implement the least confidence algorithm, we sample M label vectors as in Algorithm 2 and use them to estimate the label distribution of each unlabeled example. The algorithm then selects the s examples whose labels are least confident according to these estimates. We run the algorithms on 7 binary tasks from the 20Newsgroups dataset [13] with batch sizes s = 10, 20, 30 and report the areas under the accuracy curve (AUC) for the case s = 10 in Table 1. The results for s = 20, 30 are in the supplementary material. The results are obtained by averaging over 5 different runs of the algorithms, and the AUCs are normalized to range from 0 to 100. From the results, maxGEC obtains the best AUC scores on 4 of the 7 tasks for each batch size, as well as the best average AUC scores. LC also performs well, and its scores are only slightly lower than maxGEC's. The passive learning algorithms are much worse than the active learning algorithms. 6 Related Work Among pool-based active learning algorithms, greedy methods are the simplest and most common [14]. Often, the greedy algorithms try to maximize the uncertainty, e.g., the Shannon entropy, of the example to be queried [12]. For non-adaptive active learning, greedy optimization of the Shannon entropy guarantees near-optimal performance due to the submodularity of the entropy [2].
However, this has not been shown to extend to adaptive active learning, where each example is labeled as soon as it is selected, and the labeled examples are exploited in selecting the next example to label. Although greedy algorithms work well in practice [12, 14], they usually do not have any theoretical guarantee except in the case where the data are noiseless. In the noiseless Bayesian setting, an algorithm called generalized binary search was proven to be near-optimal: its expected number of queries is within a factor of $\left(\ln \frac{1}{\min_h p_0[h]} + 1\right)$ of the optimum, where p0 is the prior [2]. This result was obtained using the adaptive submodularity of version space reduction. Adaptive submodularity is an adaptive version of submodularity, a natural diminishing-returns property. The adaptive submodularity of version space reduction was also applied to the batch setting to prove the near-optimality of a batch greedy algorithm that maximizes the average version space reduction of each selected batch [3]. The maxGEC and batch maxGEC algorithms that we propose in this paper can be seen as generalizations of these version space reduction algorithms to the noisy setting. When the hypotheses are deterministic, our algorithms are equivalent to these version space reduction algorithms. For the case of noisy data, a noisy version of generalized binary search was proposed [15]. The algorithm was proven to be optimal under the neighborly condition, a very limited setting where "each hypothesis is locally distinguishable from all others" [15]. In another work, Bayesian active learning was modeled by the Equivalence Class Determination problem, and a greedy algorithm called EC2 was proposed for this problem [16]. Although the cost of EC2 is provably near-optimal, this formulation requires an explicit noise model, and the near-optimality bound is only useful when the support of the noise model is small.
Our formulation, in contrast, is simpler and does not require an explicit noise model: the noise model is implicit in the probabilistic model, and our algorithms are limited only by computational concerns. 7 Conclusion We considered a new objective function for Bayesian active learning: the policy Gibbs error. With this objective, we described the maximum Gibbs error criterion for selecting the examples. The algorithm has near-optimality guarantees in the non-adaptive, adaptive, and batch settings. We discussed algorithms to approximate the Gibbs error criterion for Bayesian CRFs and Bayesian transductive Naive Bayes. We also showed that the criterion is useful for NER with the CRF model and for text classification with the Bayesian transductive Naive Bayes model. Acknowledgments This work is supported by DSO grant DSOL11102 and the US Air Force Research Laboratory under agreement number FA2386-12-1-4031. References [1] Andrew McCallum and Kamal Nigam. Employing EM and Pool-Based Active Learning for Text Classification. In International Conference on Machine Learning (ICML), pages 350–358, 1998. [2] Daniel Golovin and Andreas Krause. Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization. Journal of Artificial Intelligence Research, 42(1):427–486, 2011. [3] Yuxin Chen and Andreas Krause. Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization. In International Conference on Machine Learning (ICML), pages 160–168, 2013. [4] Constantino Tsallis and Edgardo Brigatti. Nonextensive statistical mechanics: A brief introduction. Continuum Mechanics and Thermodynamics, 16(3):223–235, 2004. [5] Steven C.H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. Batch Mode Active Learning and Its Application to Medical Image Classification. In International Conference on Machine Learning (ICML), pages 417–424. ACM, 2006. [6] John Lafferty, Andrew McCallum, and Fernando C.N. Pereira.
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In International Conference on Machine Learning (ICML), pages 282–289, 2001. [7] Bassem Sayrafi, Dirk Van Gucht, and Marc Gyssens. The implication problem for measure-based constraints. Information Systems, 33(2):221–239, 2008. [8] G.L. Nemhauser and L.A. Wolsey. Best Algorithms for Approximating the Maximum of a Submodular Set Function. Mathematics of Operations Research, 3(3):177–188, 1978. [9] Sunita Sarawagi and William W. Cohen. Semi-Markov Conditional Random Fields for Information Extraction. Advances in Neural Information Processing Systems (NIPS), 17:1185–1192, 2004. [10] Viet Cuong Nguyen, Nan Ye, Wee Sun Lee, and Hai Leong Chieu. Semi-Markov Conditional Random Field with High-Order Features. In ICML Workshop on Structured Sparsity: Learning and Inference, 2011. [11] Erik F. Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the 7th Conference on Natural Language Learning (HLT-NAACL 2003), pages 142–147, 2003. [12] Burr Settles and Mark Craven. An Analysis of Active Learning Strategies for Sequence Labeling Tasks. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1070–1079. Association for Computational Linguistics, 2008. [13] Thorsten Joachims. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. Technical report, DTIC Document, 1996. [14] Burr Settles. Active Learning Literature Survey. Technical Report 1648, University of Wisconsin-Madison, 2009. [15] Robert Nowak. Noisy Generalized Binary Search. Advances in Neural Information Processing Systems (NIPS), 22:1366–1374, 2009. [16] Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-Optimal Bayesian Active Learning with Noisy Observations. In Advances in Neural Information Processing Systems (NIPS), pages 766–774, 2010.
Two-Target Algorithms for Infinite-Armed Bandits with Bernoulli Rewards

Thomas Bonald* (Department of Networking and Computer Science, Telecom ParisTech, Paris, France, thomas.bonald@telecom-paristech.fr)
Alexandre Proutière*† (Automatic Control Department, KTH, Stockholm, Sweden, alepro@kth.se)

Abstract: We consider an infinite-armed bandit problem with Bernoulli rewards. The mean rewards are independent, uniformly distributed over [0, 1]. Rewards 0 and 1 are referred to as a failure and a success, respectively. We propose a novel algorithm where the decision to exploit any arm is based on two successive targets, namely the total number of successes until the first failure and until the first m failures, respectively, where m is a fixed parameter. This two-target algorithm achieves a long-term average regret in √(2n) for a large parameter m and a known time horizon n. This regret is optimal and strictly less than the regret achieved by the best known algorithms, which is in 2√n. The results are extended to any mean-reward distribution whose support contains 1 and to unknown time horizons. Numerical experiments show the performance of the algorithm for finite time horizons.

1 Introduction

Motivation. While classical multi-armed bandit problems assume a finite number of arms [9], many practical situations involve a large, possibly infinite set of options for the player. This is the case, for instance, of on-line advertisement and content recommendation, where the objective is to propose the most appropriate categories of items to each user in a very large catalogue. In such situations, it is usually not possible to explore all options, a constraint that is best represented by a bandit problem with an infinite number of arms. Moreover, even when the set of options is limited, the time horizon may be too short in practice to enable the full exploration of these options.
Unlike classical algorithms like UCB [10, 1], which rely on an initial phase where all arms are sampled once, algorithms for infinite-armed bandits have an intrinsic stopping rule in the number of arms to explore. We believe that this provides useful insights into the design of efficient algorithms for usual finite-armed bandits when the time horizon is relatively short.

Overview of the results. We consider a stochastic infinite-armed bandit with Bernoulli rewards, the mean reward of each arm having a uniform distribution over [0, 1]. This model is representative of a number of practical situations, such as content recommendation systems with like/dislike feedback and without any prior information on the user preferences. We propose a two-target algorithm based on some fixed parameter m that achieves a long-term average regret in √(2n) for large m and a large known time horizon n. We prove that this regret is optimal. The anytime version of this algorithm achieves a long-term average regret in 2√n for unknown time horizon n, which we conjecture to be also optimal. The results are extended to any mean-reward distribution whose support contains 1. Specifically, if the probability that the mean reward exceeds u is equivalent to α(1 − u)^β as u → 1−, the two-target algorithm achieves a long-term average regret in C(α, β)·n^(β/(β+1)), with some explicit constant C(α, β) that depends on whether the time horizon is known or not. This regret is provably optimal when the time horizon is known. The precise statements and proofs of these more general results are given in the appendix.

Related work. The stochastic infinite-armed bandit problem was first considered in a general setting by Mallows and Robbins [12], and then in the particular case of Bernoulli rewards by Herschkorn, Peköz and Ross [6].

(* The authors are members of the LINCS, Paris, France. See www.lincs.fr. † Alexandre Proutière is also affiliated with INRIA, Paris-Rocquencourt, France.)
The proposed algorithms are first-order optimal in the sense that they minimize the ratio Rn/n for large n, where Rn is the regret after n time steps. In the considered setting of Bernoulli rewards with mean rewards uniformly distributed over [0, 1], this means that the ratio Rn/n tends to 0 almost surely. We are interested in second-order optimality, namely in minimizing the equivalent of Rn for large n. This issue is addressed by Berry et al. [2], who propose various algorithms achieving a long-term average regret in 2√n, conjecture that this regret is optimal, and provide a lower bound in √(2n). Our algorithm achieves a regret that is arbitrarily close to √(2n), which invalidates the conjecture. We also provide a proof of the lower bound in √(2n), since that given in [2, Theorem 3] relies on the incorrect argument that the number of explored arms and the mean rewards of these arms are independent random variables (see footnote 1); the extension to any mean-reward distribution [2, Theorem 11] is based on the same erroneous argument (see footnote 2). The algorithms proposed by Berry et al. [2] and applied in [11, 4, 5, 7] to various mean-reward distributions are variants of the 1-failure strategy, where each arm is played until the first failure, called a run. For instance, the non-recalling √n-run policy consists in exploiting the first arm giving a run larger than √n. For a uniform mean-reward distribution over [0, 1], the average number of explored arms is √n and the selected arm is exploited for the equivalent of n time steps with an expected failure rate of 1/√n, yielding the regret of 2√n. We introduce a second target to improve the expected failure rate of the selected arm, at the expense of a slightly more expensive exploration phase. Specifically, we show that it is optimal to explore √(n/2) arms on average, resulting in the expected failure rate 1/√(2n) of the exploited arm, for the equivalent of n time steps, hence the regret of √(2n).
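To make the comparison concrete, here is a small Monte Carlo sketch (ours, not the authors' code) of the single-target √n-run policy and of the two-target policy. The target values ℓ1 = ⌊(n/2)^(1/3)⌋ and ℓ2 = ⌊m·√(n/2)⌋ are our reading of Proposition 1 below, and all function names are made up for illustration:

```python
import math
import random

def sqrt_n_run_policy(n, rng):
    """Single-target (1-failure) policy: explore fresh uniform arms one run
    at a time; exploit the first arm whose first run reaches sqrt(n)."""
    target = int(math.sqrt(n))
    t = successes = 0
    while t < n:
        theta = rng.random()            # mean reward of a fresh arm
        run = 0
        while t < n:                    # play the arm until its first failure
            t += 1
            if rng.random() < theta:
                successes += 1
                run += 1
            else:
                break
        if run >= target:               # exploit this arm until the horizon
            while t < n:
                t += 1
                successes += rng.random() < theta
            break
    return n - successes                # regret

def two_target_policy(n, m, rng):
    """Two-target policy: discard an arm if its first run is below l1, or if
    its first m runs total below l2; otherwise exploit it until time n."""
    l1 = int((n / 2) ** (1 / 3))        # first target
    l2 = int(m * math.sqrt(n / 2))      # second target
    successes = 0
    theta, L, M, exploit = rng.random(), 0, 0, False
    for _ in range(n):
        X = rng.random() < theta
        successes += X
        if exploit:
            continue
        if X:
            L += 1                      # L: total successes of the current arm
        else:
            M += 1                      # M: failures of the current arm
            if (M == 1 and L < l1) or (M == m and L < l2):
                theta, L, M = rng.random(), 0, 0   # explore a new arm
            elif M == m:
                exploit = True
    return n - successes

rng = random.Random(0)
n, trials = 10_000, 100
st = sum(sqrt_n_run_policy(n, rng) for _ in range(trials)) / trials
tt = sum(two_target_policy(n, 3, rng) for _ in range(trials)) / trials
print(st / math.sqrt(n), tt / math.sqrt(n))  # regret-to-sqrt(n) ratios
```

For large n the first ratio should approach 2 and the second √2 + 1/(m√2); at n = 10,000 both are still inflated by finite-horizon effects.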
For unknown time horizons, anytime versions of the algorithms of Berry et al. [2] are proposed by Teytaud, Gelly and Sebag in [13] and proved to achieve a regret in O(√n). We show that the anytime version of our algorithm achieves a regret arbitrarily close to 2√n, which we conjecture to be optimal. Our results extend to any mean-reward distribution whose support contains 1, the regret depending on the characteristics of this distribution around 1. This problem has been considered in the more general setting of bounded rewards by Wang, Audibert and Munos [15]. When the time horizon is known, their algorithms consist in exploring a pre-defined set of K arms, which depends on the parameter β mentioned above, using variants of the UCB policy [10, 1]. In the present case of Bernoulli rewards and mean-reward distributions whose support contains 1, the corresponding regret is in n^(β/(β+1)), up to logarithmic terms coming from the exploration of the K arms, as in usual finite-armed bandit algorithms [9]. The nature of our algorithm is very different in that it is based on a stopping rule in the exploration phase that depends on the observed rewards. This not only removes the logarithmic terms in the regret but also achieves the optimal constant.

2 Model

We consider a stochastic multi-armed bandit with an infinite number of arms. For any k = 1, 2, ..., the rewards of arm k are Bernoulli with unknown parameter θk. We refer to rewards 0 and 1 as a failure and a success, respectively, and to a run as a consecutive sequence of successes followed by a failure. The mean rewards θ1, θ2, ... are themselves random, uniformly distributed over [0, 1].

Footnote 1: Specifically, it is assumed that for any policy, the mean rewards of the explored arms have a uniform distribution over [0, 1], independently of the number of explored arms. This is incorrect.
For the 1-failure policy, for instance, given that only one arm has been explored until time n, the mean reward of this arm has a beta distribution with parameters 1, n.

Footnote 2: This lower bound is 4√(n/3) for a beta distribution with parameters 1/2, 1, see [11], while our algorithm achieves a regret arbitrarily close to 2√n in this case, since C(α, β) = 2 for α = 1/2 and β = 1, see the appendix. Thus the statement of [2, Theorem 11] is false.

At any time t = 1, 2, ..., we select some arm It and receive the corresponding reward Xt, which is a Bernoulli random variable with parameter θIt. We take I1 = 1 by convention. At any time t = 2, 3, ..., the arm selection only depends on previous arm selections and rewards; formally, the random variable It is Ft−1-measurable, where Ft denotes the σ-field generated by the set {I1, X1, ..., It, Xt}. Let Kt be the number of arms selected until time t. Without any loss of generality, we assume that {I1, ..., It} = {1, 2, ..., Kt} for all t = 1, 2, ..., i.e., new arms are selected sequentially. We also assume that It+1 = It whenever Xt = 1: if the selection of arm It gives a success at time t, the same arm is selected at time t + 1. The objective is to maximize the cumulative reward or, equivalently, to minimize the regret defined by Rn = n − Σ_{t=1}^n Xt. Specifically, we focus on the average regret E(Rn), where the expectation is taken over all random variables, including the sequence of mean rewards θ1, θ2, .... The time horizon n may be known or unknown.

3 Known time horizon

3.1 Two-target algorithm

The two-target algorithm consists in exploring new arms until two successive targets ℓ1 and ℓ2 are reached, in which case the current arm is exploited until the time horizon n. The first target aims at discarding "bad" arms while the second aims at selecting a "good" arm.
Specifically, using the names of the variables indicated in the pseudo-code below, if the length L of the first run of the current arm I is less than ℓ1, this arm is discarded and a new arm is selected; otherwise, arm I is pulled for m − 1 additional runs and exploited until time n if the total length L of the m runs is at least ℓ2, where m ≥ 2 is a fixed parameter of the algorithm. We prove in Proposition 1 below that, for large m, the target values ℓ1 = ⌊(n/2)^(1/3)⌋ and ℓ2 = ⌊m√(n/2)⌋ provide a regret in √(2n) (see footnote 3).

Algorithm 1: Two-target algorithm with known time horizon n.
  Parameters: m, n
  Function Explore: I ← I + 1, L ← 0, M ← 0
  Algorithm:
    ℓ1 ← ⌊(n/2)^(1/3)⌋, ℓ2 ← ⌊m√(n/2)⌋
    I ← 0; Explore; Exploit ← false
    for t = 1, 2, ..., n do
      Get reward X from arm I
      if not Exploit then
        if X = 1 then
          L ← L + 1
        else
          M ← M + 1
          if M = 1 then
            if L < ℓ1 then Explore
          else if M = m then
            if L < ℓ2 then Explore else Exploit ← true

Footnote 3: The first target could actually be any function ℓ1 of the time horizon n such that ℓ1 → +∞ and ℓ1/√n → 0 when n → +∞. Only the second target is critical.

3.2 Regret analysis

Proposition 1. The two-target algorithm with targets ℓ1 = ⌊(n/2)^(1/3)⌋ and ℓ2 = ⌊m√(n/2)⌋ satisfies:

  ∀n ≥ m²/2,  E(Rn) ≤ m + ((ℓ2 + 1)/m) · ((ℓ2 − m + 2)/(ℓ2 − ℓ1 − m + 2))^m · (2 + 1/m + (2m + 1)/(ℓ1 + 1)).

In particular,

  lim sup_{n→+∞} E(Rn)/√n ≤ √2 + 1/(m√2).

Proof. Let U1 = 1 if arm 1 is used until time n and U1 = 0 otherwise. Denote by M1 the total number of 0's received from arm 1. We have:

  E(Rn) ≤ P(U1 = 0)·(E(M1 | U1 = 0) + E(Rn)) + P(U1 = 1)·(m + n·E(1 − θ1 | U1 = 1)),

so that:

  E(Rn) ≤ E(M1 | U1 = 0)/P(U1 = 1) + m + n·E(1 − θ1 | U1 = 1).   (1)

Let Nt be the number of 0's received from arm 1 until time t when this arm is played until time t. Note that n ≥ m²/2 implies n ≥ ℓ2. Since P(N_ℓ1 = 0 | θ1 = u) = u^ℓ1, the probability that the first target is achieved by arm 1 is given by:

  P(N_ℓ1 = 0) = ∫₀¹ u^ℓ1 du = 1/(ℓ1 + 1).
Similarly,

  P(N_{ℓ2−ℓ1} < m | θ1 = u) = Σ_{j=0}^{m−1} C(ℓ2 − ℓ1, j) · u^(ℓ2−ℓ1−j) · (1 − u)^j,

so that the probability that arm 1 is used until time n is given by:

  P(U1 = 1) = ∫₀¹ P(N_ℓ1 = 0 | θ1 = u) · P(N_{ℓ2−ℓ1} < m | θ1 = u) du = Σ_{j=0}^{m−1} [(ℓ2 − ℓ1)!/(ℓ2 − ℓ1 − j)!] · [(ℓ2 − j)!/(ℓ2 + 1)!].

We deduce:

  (m/(ℓ2 + 1)) · ((ℓ2 − ℓ1 − m + 2)/(ℓ2 − m + 2))^m ≤ P(U1 = 1) ≤ m/(ℓ2 + 1).   (2)

Moreover,

  E(M1 | U1 = 0) = 1 + (m − 1)·P(N_ℓ1 = 0 | U1 = 0) ≤ 1 + (m − 1)·P(N_ℓ1 = 0)/P(U1 = 0) ≤ 1 + (2m + 1)/(ℓ1 + 1),

where the last inequality follows from (2) and the fact that ℓ2 ≥ m²/2. It remains to calculate E(1 − θ1 | U1 = 1). Since:

  P(U1 = 1 | θ1 = u) = Σ_{j=0}^{m−1} C(ℓ2 − ℓ1, j) · u^(ℓ2−j) · (1 − u)^j,

we deduce:

  E(1 − θ1 | U1 = 1) = (1/P(U1 = 1)) · ∫₀¹ Σ_{j=0}^{m−1} C(ℓ2 − ℓ1, j) · u^(ℓ2−j) · (1 − u)^(j+1) du
                     = (1/P(U1 = 1)) · Σ_{j=0}^{m−1} [(ℓ2 − ℓ1)!/(ℓ2 − ℓ1 − j)!] · [(ℓ2 − j)!/(ℓ2 + 2)!] · (j + 1)
                     ≤ (1/P(U1 = 1)) · m(m + 1)/(2(ℓ2 + 1)(ℓ2 + 2)),

so that n·E(1 − θ1 | U1 = 1) ≤ (1/P(U1 = 1))·(1 + 1/m), since ℓ2 = ⌊m√(n/2)⌋ implies n ≤ 2(ℓ2 + 1)(ℓ2 + 2)/m². The proof then follows from (1) and (2). □

3.3 Lower bound

The following result shows that the two-target algorithm is asymptotically optimal (for large m).

Theorem 1. For any algorithm with known time horizon n,

  lim inf_{n→+∞} E(Rn)/√n ≥ √2.

Proof. We present the main ideas of the proof; the details are given in the appendix. Assume an oracle reveals the parameter of each arm after the first failure of this arm. With this information, the optimal policy explores a random number of arms, each until the first failure, then plays only one of these arms until time n. Let µ be the parameter of the best known arm at time t. Since the probability that any new arm is better than this arm is 1 − µ, the mean cost of exploration to find a better arm is 1/(1 − µ). The corresponding mean reward has a uniform distribution over [µ, 1], so that the mean gain of exploitation is less than (n − t)·(1 − µ)/2 (it is not equal to this quantity due to the time spent in exploration). Thus if 1 − µ < √(2/(n − t)), it is preferable not to explore new arms and to play the best known arm, with mean reward µ, until time n.
A fortiori, the best known arm is played until time n whenever its parameter is larger than 1 − √(2/n). We denote by An the first arm whose parameter is larger than 1 − √(2/n). We have Kn ≤ An (the optimal policy cannot explore more than An arms) and E(An) = √(n/2). The parameter θ_An of arm An is uniformly distributed over [1 − √(2/n), 1], so that

  E(θ_An) = 1 − 1/√(2n).   (3)

For all k = 1, 2, ..., let L1(k) be the length of the first run of arm k. We have:

  E(L1(1) + ... + L1(An − 1)) = (E(An) − 1)·E(L1(1) | θ1 ≤ 1 − √(2/n)) = (√(n/2) − 1) · (−ln √(2/n))/(1 − √(2/n)),   (4)

using the fact that:

  E(L1(1) | θ1 ≤ 1 − √(2/n)) = [∫₀^(1−√(2/n)) du/(1 − u)] / (1 − √(2/n)).

In particular,

  lim_{n→+∞} (1/n)·E(L1(1) + ... + L1(An − 1)) = 0   (5)

and

  lim_{n→+∞} P(L1(1) + ... + L1(An − 1) ≤ n^(4/5)) = 1.

To conclude, we write:

  E(Rn) ≥ E(Kn) + E((n − L1(1) − ... − L1(An − 1))·(1 − θ_An)).

Observe that, on the event {L1(1) + ... + L1(An − 1) ≤ n^(4/5)}, the number of explored arms satisfies Kn ≥ A′n, where A′n denotes the first arm whose parameter is larger than 1 − √(2/(n − n^(4/5))). Since P(L1(1) + ... + L1(An − 1) ≤ n^(4/5)) → 1 and E(A′n) = √((n − n^(4/5))/2), we deduce that:

  lim inf_{n→+∞} E(Kn)/√n ≥ 1/√2.

By the independence of θ_An and L1(1), ..., L1(An − 1),

  (1/√n)·E((n − L1(1) − ... − L1(An − 1))·(1 − θ_An)) = (1/√n)·(n − E(L1(1) + ... + L1(An − 1)))·(1 − E(θ_An)),

which tends to 1/√2 in view of (3) and (5). The announced bound follows. □

4 Unknown time horizon

4.1 Anytime version of the algorithm

When the time horizon is unknown, the targets depend on the current time t, say ℓ1(t) and ℓ2(t). Now any arm that is exploited may be eventually discarded, in the sense that a new arm is explored. This happens whenever either L1 < ℓ1(t) or L2 < ℓ2(t), where L1 and L2 are the respective lengths of the first run and the first m runs of this arm.
Thus, unlike the previous version of the algorithm, which consists in an exploration phase followed by an exploitation phase, the anytime version of the algorithm continuously switches between exploration and exploitation. We prove in Proposition 2 below that, for large m, the target values ℓ1(t) = ⌊t^(1/3)⌋ and ℓ2(t) = ⌊m√t⌋ given in the pseudo-code achieve an asymptotic regret in 2√n.

Algorithm 2: Two-target algorithm with unknown time horizon.
  Parameter: m
  Function Explore: I ← I + 1, L ← 0, M ← 0
  Algorithm:
    I ← 0; Explore; Exploit ← false
    for t = 1, 2, ... do
      Get reward X from arm I
      ℓ1 ← ⌊t^(1/3)⌋, ℓ2 ← ⌊m√t⌋
      if Exploit then
        if L1 < ℓ1 or L2 < ℓ2 then
          Explore; Exploit ← false
      else if X = 1 then
        L ← L + 1
      else
        M ← M + 1
        if M = 1 then
          if L < ℓ1 then Explore else L1 ← L
        else if M = m then
          if L < ℓ2 then Explore else (L2 ← L; Exploit ← true)

4.2 Regret analysis

Proposition 2. The two-target algorithm with time-dependent targets ℓ1(t) = ⌊t^(1/3)⌋ and ℓ2(t) = ⌊m√t⌋ satisfies:

  lim sup_{n→+∞} E(Rn)/√n ≤ 2 + 1/m.

Proof. For all k = 1, 2, ..., denote by L1(k) and L2(k) the respective lengths of the first run and of the first m runs of arm k when this arm is played continuously. Since arm k cannot be selected before time k, the regret at time n satisfies:

  Rn ≤ Kn + m·Σ_{k=1}^{Kn} 1{L1(k) > ℓ1(k)} + Σ_{t=1}^{n} (1 − Xt)·1{L2(It) > ℓ2(t)}.

First observe that, since the target functions ℓ1(t) and ℓ2(t) are non-decreasing, Kn is less than or equal to K′n, the number of arms selected by a two-target policy with known time horizon n and fixed targets ℓ1(n) and ℓ2(n). In this scheme, let U′1 = 1 if arm 1 is used until time n and U′1 = 0 otherwise. It then follows from (2) that P(U′1 = 1) ∼ 1/√n and E(Kn) ≤ E(K′n) ∼ √n when n → +∞. Now,

  E(Σ_{k=1}^{Kn} 1{L1(k) > ℓ1(k)})
    = Σ_{k=1}^{∞} P(L1(k) > ℓ1(k), Kn ≥ k)
    = Σ_{k=1}^{∞} P(L1(k) > ℓ1(k))·P(Kn ≥ k | L1(k) > ℓ1(k))
    ≤ Σ_{k=1}^{∞} P(L1(k) > ℓ1(k))·P(Kn ≥ k)
    ≤ Σ_{k=1}^{E(Kn)} P(L1(k) > ℓ1(k)),

where the first inequality follows from the fact that for any arm k and all u ∈ [0, 1], P(θk ≥ u | L1(k) > ℓ1(k)) ≥ P(θk ≥ u) and P(Kn ≥ k | θk ≥ u) ≤ P(Kn ≥ k), and the second inequality follows from the fact that the random variables L1(1), L1(2), ... are i.i.d. and the sequence ℓ1(1), ℓ1(2), ... is non-decreasing. Since E(Kn) ≤ E(K′n) ∼ √n when n → +∞ and P(L1(k) > ℓ1(k)) ∼ 1/k^(1/3) when k → +∞, we deduce:

  lim_{n→+∞} (1/√n)·E(Σ_{k=1}^{Kn} 1{L1(k) > ℓ1(k)}) = 0.

Finally,

  E((1 − Xt)·1{L2(It) > ℓ2(t)}) ≤ E(1 − Xt | L2(It) > ℓ2(t)) ∼ ((m + 1)/m)·1/(2√t)

when t → +∞, so that

  lim sup_{n→+∞} (1/√n)·Σ_{t=1}^{n} E((1 − Xt)·1{L2(It) > ℓ2(t)}) ≤ ((m + 1)/m)·lim_{n→+∞} (1/n)·Σ_{t=1}^{n} (1/2)·√(n/t) = ((m + 1)/m)·∫₀¹ du/(2√u) = (m + 1)/m.

Combining the previous results yields:

  lim sup_{n→+∞} E(Rn)/√n ≤ 2 + 1/m. □

4.3 Lower bound

We believe that if E(Rn)/√n tends to some limit, then this limit is at least 2. To support this conjecture, consider an oracle that reveals the parameter of each arm after the first failure of this arm, as in the proof of Theorem 1. With this information, an optimal policy exploits an arm whenever its parameter is larger than some increasing function θ̄t of time t. Assume that 1 − θ̄t ∼ 1/(c√t) for some c > 0 when t → +∞. Then, proceeding as in the proof of Theorem 1, we get:

  lim inf_{n→+∞} E(Rn)/√n ≥ c + lim_{n→+∞} (1/n)·Σ_{t=1}^{n} (1/(2c))·√(n/t) = c + (1/c)·∫₀¹ du/(2√u) = c + 1/c ≥ 2.

5 Numerical results

Figure 1 gives the expected failure rate E(Rn)/n with respect to the time horizon n, which is assumed known. The results are derived from the simulation of 10^5 independent samples and shown with 95% confidence intervals. The mean rewards have (a) a uniform distribution or (b) a Beta(1,2) distribution, corresponding to the probability density function u ↦ 2(1 − u). The single-target algorithm corresponds to the run policy of Berry et al.
[2] with the asymptotically optimal target values √n and (2n)^(1/3), respectively. For the two-target algorithm, we take m = 3 and the target values given in Proposition 1 and Proposition 3 (in the appendix). The results are compared with the respective asymptotic lower bounds √(2/n) and (3/n)^(1/3). The performance gains of the two-target algorithm turn out to be negligible for the uniform distribution but substantial for the Beta(1,2) distribution, where "good" arms are less frequent.

[Figure 1: Expected failure rate E(Rn)/n with respect to the time horizon n, for (a) the uniform mean-reward distribution and (b) the Beta(1,2) mean-reward distribution, comparing the asymptotic lower bound, the single-target algorithm, and the two-target algorithm.]

6 Conclusion

The proposed algorithm uses two levels of sampling in the exploration phase: the first eliminates "bad" arms while the second selects "good" arms. To our knowledge, this is the first algorithm that achieves the optimal regrets in √(2n) and 2√n for known and unknown time horizons, respectively. Future work will be devoted to the proof of the lower bound in the case of an unknown time horizon. We also plan to study various extensions of the present work, including mean-reward distributions whose support does not contain 1 and distribution-free algorithms. Finally, we would like to compare the performance of our algorithm for finite-armed bandits with those of the best known algorithms, like KL-UCB [10, 3] and Thompson sampling [14, 8], over short time horizons where the full exploration of the arms is generally not optimal.

Acknowledgments

The authors acknowledge the support of the European Research Council, of the French ANR (GAP project), of the Swedish Research Council and of the Swedish SSF.
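The anytime variant (Algorithm 2, Section 4.1) can be sketched in the same Monte Carlo style; this is our reading of the pseudo-code, with made-up names, using the known horizon n only to stop the simulation:

```python
import math
import random

def two_target_anytime(n, m, rng):
    """Sketch of the anytime two-target policy with time-dependent targets
    l1(t) = floor(t^(1/3)) and l2(t) = floor(m*sqrt(t))."""
    successes = 0
    theta, L, M = rng.random(), 0, 0     # current arm and its run counters
    L1 = L2 = 0                          # recorded first-run / m-run lengths
    exploit = False
    for t in range(1, n + 1):
        X = rng.random() < theta
        successes += X
        l1, l2 = int(t ** (1 / 3)), int(m * math.sqrt(t))
        if exploit:
            if L1 < l1 or L2 < l2:       # targets outgrew the arm: discard it
                theta, L, M = rng.random(), 0, 0
                exploit = False
        elif X:
            L += 1
        else:
            M += 1
            if M == 1:
                if L < l1:
                    theta, L, M = rng.random(), 0, 0   # explore a new arm
                else:
                    L1 = L
            elif M == m:
                if L < l2:
                    theta, L, M = rng.random(), 0, 0   # explore a new arm
                else:
                    L2, exploit = L, True
    return n - successes

rng = random.Random(0)
n = 10_000
avg = sum(two_target_anytime(n, 3, rng) for _ in range(100)) / 100
print(avg / math.sqrt(n))   # Proposition 2 suggests about 2 + 1/m for large n
```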
References

[1] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[2] Donald A. Berry, Robert W. Chen, Alan Zame, David C. Heath, and Larry A. Shepp. Bandit problems with infinitely many arms. Annals of Statistics, 25(5):2103–2116, 1997.
[3] Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, and Gilles Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. To appear in Annals of Statistics, 2013.
[4] Kung-Yu Chen and Chien-Tai Lin. A note on strategies for bandit problems with infinitely many arms. Metrika, 59(2):193–203, 2004.
[5] Kung-Yu Chen and Chien-Tai Lin. A note on infinite-armed Bernoulli bandit problems with generalized beta prior distributions. Statistical Papers, 46(1):129–140, 2005.
[6] Stephen J. Herschkorn, Erol Peköz, and Sheldon M. Ross. Policies without memory for the infinite-armed Bernoulli bandit under the average-reward criterion. Probability in the Engineering and Informational Sciences, 10:21–28, 1996.
[7] Ying-Chao Hung. Optimal Bayesian strategies for the infinite-armed Bernoulli bandit. Journal of Statistical Planning and Inference, 142(1):86–94, 2012.
[8] Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Algorithmic Learning Theory, pages 199–213. Springer, 2012.
[9] Tze L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[10] Tze Leung Lai. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics, pages 1091–1114, 1987.
[11] Chien-Tai Lin and C. J. Shiau. Some optimal strategies for bandit problems with beta prior distributions. Annals of the Institute of Statistical Mathematics, 52(2):397–405, 2000.
[12] C. L. Mallows and Herbert Robbins. Some problems of optimal sampling strategy. Journal of Mathematical Analysis and Applications, 8(1):90–103, 1964.
[13] Olivier Teytaud, Sylvain Gelly, and Michèle Sebag. Anytime many-armed bandits. In CAP07, 2007.
[14] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.
[15] Yizao Wang, Jean-Yves Audibert, and Rémi Munos. Algorithms for infinitely many-armed bandits. In NIPS, 2008.
Learning to Prune in Metric and Non-Metric Spaces

Leonid Boytsov (Carnegie Mellon University, Pittsburgh, PA, USA, srchvrs@cmu.edu)
Bilegsaikhan Naidan (Norwegian University of Science and Technology, Trondheim, Norway, bileg@idi.ntnu.no)

Abstract: Our focus is on approximate nearest neighbor retrieval in metric and non-metric spaces. We employ a VP-tree and explore two simple yet effective learning-to-prune approaches: density estimation through sampling and "stretching" of the triangle inequality. Both methods are evaluated using data sets with metric (Euclidean) and non-metric (KL-divergence and Itakura-Saito) distance functions. Conditions on spaces where the VP-tree is applicable are discussed. The VP-tree with a learned pruner is compared against the recently proposed state-of-the-art approaches: the bbtree, the multi-probe locality sensitive hashing (LSH), and permutation methods. Our method was competitive against state-of-the-art methods and, in most cases, was more efficient for the same rank approximation quality.

1 Introduction

Similarity search algorithms are essential to multimedia retrieval, computational biology, and statistical machine learning. Resemblance between objects x and y is typically expressed in the form of a distance function d(x, y), where smaller values indicate less dissimilarity. In our work we use the Euclidean distance (L2), the KL-divergence (Σᵢ xᵢ log(xᵢ/yᵢ)), and the Itakura-Saito distance (Σᵢ [xᵢ/yᵢ − log(xᵢ/yᵢ) − 1]). KL-divergence is commonly used in text analysis, image classification, and machine learning [6]. Both KL-divergence and the Itakura-Saito distance belong to a class of distances called Bregman divergences. Our interest is in the nearest neighbor (NN) search, i.e., we aim to retrieve the object o that is closest to the query q. For the KL-divergence and other non-symmetric distances, two types of NN-queries are defined.
The left NN-query returns the object o that minimizes the distance d(o, q), while the right NN-query finds o that minimizes d(q, o). The distance function can be computationally expensive. There has been considerable effort to reduce computational costs through approximating the distance function, projecting data into a low-dimensional space, and/or applying a hierarchical space decomposition. In the case of the hierarchical space decomposition, a retrieval process is a recursion that employs an "oracle" procedure. At each step of the recursion, retrieval can continue in one or more partitions. The oracle allows one to prune partitions without directly comparing the query against data points in these partitions. To this end, the oracle assesses the query and estimates which partitions may contain an answer and, therefore, should be recursively analyzed. A pruning algorithm is essentially a binary classifier. In metric spaces, one can use the classifier based on the triangle inequality. In non-metric spaces, a classifier can be learned from data. There are numerous data structures that speed up the NN-search by creating hierarchies of partitions at index time, most notably the VP-tree [28, 31] and the KD-tree [4]. A comprehensive review of these approaches can be found in books by Zezula et al. [32] and Samet [27]. As dimensionality increases, the filtering efficiency of space-partitioning methods decreases rapidly, which is known as the "curse of dimensionality" [30]. This happens because in high-dimensional spaces histograms of distances and 1-Lipschitz function values become concentrated [25]. The negative effect can be partially offset by creating overlapping partitions (see, e.g., [21]) and, thus, trading index size for retrieval time. The approximate NN-queries are less affected by the curse of dimensionality, because it is possible to reduce retrieval time at the cost of missing some relevant answers [18, 9, 25].
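For concreteness, the three distance functions used in the paper can be written out directly. This is a plain sketch (the function names are ours), assuming strictly positive coordinates for the two Bregman divergences:

```python
import math

def l2_dist(x, y):
    """Euclidean distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def kl_div(x, y):
    """KL-divergence: sum_i x_i * log(x_i / y_i). Non-symmetric."""
    return sum(a * math.log(a / b) for a, b in zip(x, y))

def itakura_saito(x, y):
    """Itakura-Saito distance: sum_i (x_i / y_i - log(x_i / y_i) - 1)."""
    return sum(a / b - math.log(a / b) - 1 for a, b in zip(x, y))

p, q = [0.2, 0.3, 0.5], [0.1, 0.6, 0.3]
print(kl_div(p, q), kl_div(q, p))   # the two values differ
```

The asymmetry of kl_div is exactly why the left and right NN-queries above are distinct.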
Low-dimensional data sets embedded into a high-dimensional space do not exhibit a high concentration of distances, i.e., their intrinsic dimensionality is low. In metric spaces, it was proposed to compute the intrinsic dimensionality as half of the squared signal-to-noise ratio (for the distance distribution) [10]. A well-known approximate NN-search method is locality sensitive hashing (LSH) [18, 17]. It is based on the idea of random projections [18, 20]. There is also an extension of the LSH for symmetric non-metric distances [23]. The LSH employs several hash functions: it is likely that close objects have the same hash values and distant objects have different hash values. In the classic LSH index, the probability of finding an element in one hash table is small and, consequently, many hash tables are to be created during indexing. To reduce space requirements, Lv et al. proposed a multi-probe version of the LSH, which can query multiple buckets of the same hash table [22]. Performance of the LSH depends on the choice of parameters, which can be tuned to fit the distribution of data [11]. For approximate searching, it was demonstrated that an early termination strategy could rely on information about distances from typical queries to their respective nearest neighbors [33, 1]. Amato et al. [1] showed that density estimates can be used to approximate a pruning function in metric spaces. They relied on a hierarchical decomposition method (an M-tree) and proposed to visit partitions in the order defined by density estimates. Chávez and Navarro [9] proposed to relax triangle-inequality-based lower bounds for distances to potential nearest neighbors. The approach, which they dubbed stretching of the triangle inequality, involves multiplying an exact bound by α > 1. Few methods were designed to work in non-metric spaces. One common indexing approach involves mapping the data to a low-dimensional Euclidean space.
The goal is to find a mapping without large distortions of the original similarity measure [19, 16]. Jacobs et al. [19] review various projection methods and argue that such a coercion is often against the nature of a similarity measure, which can be, e.g., intrinsically non-symmetric. A mapping can be found using machine learning methods. This can be done either separately for each data point [12, 24] or by computing one global model [3]. There are also a number of approaches where machine learning is used to estimate optimal parameters of classic search methods [7]. Vermorel [29] applied VP-trees to searching in undisclosed non-metric spaces without trying to learn a pruning function. Like Amato et al. [1], he proposed to visit partitions in the order defined by density estimates and employed the same early termination method as Zezula et al. [33]. Cayton [6] proposed a Bregman ball tree (bbtree), which is an exact search method for Bregman divergences. The bbtree divides data into two clusters (each covered by a Bregman ball) and recursively repeats this procedure for each cluster until the number of data points in a cluster falls below a threshold (a bucket size). At search time, the method relies on properties of Bregman divergences to compute the shortest distances to covering balls. This is an expensive iterative procedure that may require several computations of direct and inverse gradients, as well as of several distances. Additionally, Cayton [6] employed an early termination method: the algorithm can be told to stop after processing a pre-specified number of buckets. The resulting method is an approximate search procedure. Zhang et al. [34] proposed an exact search method based on estimating the maximum distance to a bounding rectangle, but it works with left queries only.
The most efficient variant of this method relies on an optimization technique applicable only to certain decomposable Bregman divergences (a decomposable distance is a sum of values computed separately for each coordinate). Chávez et al. [8] as well as Amato and Savino [2] independently proposed permutation-based search methods. These approximate methods do not involve learning but are, nevertheless, applicable to non-metric spaces. At index time, k pivots are selected. For every data point, we create a list, called a permutation, where pivots are sorted in the order of increasing distances from the data point. At query time, a rank correlation (e.g., Spearman's) is computed between the permutation of the query and the permutations of data points. Candidate points, which have sufficiently small correlation values, are then compared directly with the query (by computing the original distance function). One can sequentially scan the list of permutations and compute the rank correlation between the permutation of the query and the permutation of every data point [8]. Data points are then sorted by rank-correlation values. This approach can be improved by incremental sorting [14], storing permutations as inverted files [2], or prefix trees [13]. In this work we experiment with two approaches to learning a pruning function of the VP-tree, which to our knowledge has not been attempted previously. We compare the resulting method, which can be applied to both metric and non-metric spaces, with the following state-of-the-art methods: the multi-probe LSH, permutation methods, and the bbtree.

2 Proposed Method

2.1 Classic VP-tree

In the VP-tree (also known as a ball tree) the space is partitioned with respect to a (usually randomly) chosen pivot π [28, 31]. Assume that we have computed the distances from all points to the pivot π and that R is a median of these distances.
The sphere centered at π with radius R divides the space into two partitions, each of which contains approximately half of all points. Points inside the pivot-centered sphere are placed into the left subtree, while points outside the pivot-centered sphere are placed into the right subtree (points on the border may be placed arbitrarily). The partitioning proceeds recursively. When the number of data points falls below a certain threshold (the bucket size), these data points are stored as a single bucket. The resulting hierarchical partition is represented by a binary tree whose leaves are buckets.

[Figure 1: Three types of query balls in the VP-tree. The black circle (centered at the pivot π) is the sphere that divides the space.]

The NN-search is a recursive traversal procedure that starts from the root of the tree and iteratively updates the distance r to the closest object found. When it reaches a bucket (i.e., a leaf), bucket elements are searched sequentially. Each internal node stores the pivot π and the radius R. In a metric space with the distance d(x, y), we use the triangle inequality to prune the search space. We visit:

• only the left subtree if d(π, q) < R − r;
• only the right subtree if d(π, q) > R + r;
• both subtrees if R − r ≤ d(π, q) ≤ R + r.

In the third case, we first visit the partition that contains q. These three cases are illustrated in Fig. 1. Let Dπ,R(x) = |R − x|. Then we need to visit both partitions if and only if r ≥ Dπ,R(d(π, q)). If r < Dπ,R(d(π, q)), we visit only the partition containing the query point and prune the other partition. Pruning is thus a classification task with three classes, where the prediction function is defined through Dπ,R(x). The only argument of this function is the distance between the pivot and the query, i.e., d(π, q). The function value is equal to the maximum radius of the query ball that fits inside the partition containing the query (see the red and the blue sample balls in Fig. 1).
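The classic traversal just described can be sketched in a few lines. The following Python sketch is illustrative only (the paper's actual implementation is optimized C++ in the Non-Metric Space Library; the function names and the bucket size below are our own):

```python
import math
import random

BUCKET_SIZE = 20  # leaves store raw points; small buckets are scanned sequentially

def build(points, dist):
    """Build a VP-tree node: ('node', pivot, R, left, right) or a ('leaf', points) bucket."""
    if len(points) <= BUCKET_SIZE:
        return ("leaf", points)
    pivot = random.choice(points)
    dists = [dist(pivot, p) for p in points]
    R = sorted(dists)[len(dists) // 2]                    # median distance to the pivot
    inner = [p for p, d in zip(points, dists) if d < R]   # inside the pivot-centered sphere
    outer = [p for p, d in zip(points, dists) if d >= R]  # outside (border points go right)
    if not inner or not outer:                            # degenerate split: stop recursing
        return ("leaf", points)
    return ("node", pivot, R, build(inner, dist), build(outer, dist))

def nn_search(node, q, dist, best=(math.inf, None)):
    """1-NN search; prunes a subtree whenever r < D(d(pi, q)) = |R - d(pi, q)|."""
    if node[0] == "leaf":
        for p in node[1]:
            d = dist(q, p)
            if d < best[0]:
                best = (d, p)
        return best
    _, pivot, R, left, right = node
    x = dist(pivot, q)
    near, far = (left, right) if x < R else (right, left)
    best = nn_search(near, q, dist, best)                 # visit the query's partition first
    if best[0] >= abs(R - x):                             # query ball may cross the sphere
        best = nn_search(far, q, dist, best)
    return best
```

Because the pruning test uses the triangle inequality, this traversal is exact in metric spaces; the learned pruner of Section 2.2 replaces |R − x| with a fitted decision function.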
2.2 Approximating Dπ,R(x) with a Piece-wise Linear Function

In Section 2 of the supplemental materials, we describe a straightforward sampling algorithm to learn the decision function Dπ,R(x) for every pivot π. This method turned out to be inferior to most state-of-the-art approaches. It is, nevertheless, instructive to examine the decision functions Dπ,R(x) learned by sampling for the Euclidean distance and the KL-divergence (see Table 1 for details on the data sets). Each point in Fig. 2a-2c is a value of the decision function obtained by sampling, and blue curves are fit to these points. For the Euclidean data (Fig. 2a), Dπ,R(x) resembles a piece-wise linear function approximately equal to |R − x|. For the KL-divergence data (Fig. 2b and 2c), Dπ,R(x) looks like a U-shape and a hockey-stick curve, respectively. Yet, most data points concentrate around the median (denoted by a dashed red line), and in this area a piece-wise linear approximation of Dπ,R(x) could still be reasonable.

[Figure 2: The empirically obtained decision function Dπ,R(x) for (a) Colors, L2; (b) RCV-8, KL-divergence; (c) RCV-16, gen. KL-divergence. In each panel, the x-axis is the distance to the pivot and the y-axis is the maximum distance to the query. Each point is a value of the function learned by sampling (see Section 2 of the supplemental materials). Blue curves are fit to these points. The red dashed line denotes the median distance R from data set points to the pivot π.]
Formally, we define the decision function as:

    Dπ,R(x) = αleft · |x − R|,   if x ≤ R
              αright · |x − R|,  if x ≥ R        (1)

Once we obtain values of αleft and αright that permit near-exact searching, we can induce more aggressive pruning by increasing αleft and/or αright, thus exploring trade-offs between retrieval efficiency and effectiveness. This is similar to the stretching of the triangle inequality proposed by Chávez and Navarro [9]. Optimal αleft and αright are determined using a grid search. To this end, we index a small subset of the data points and seek parameters that give the shortest retrieval time at a specified recall threshold. The grid search is initialized with values a and b. Then, recall values and retrieval times are obtained for all αleft = a·ρ^(i/m−0.5) and αright = b·ρ^(j/m−0.5), 1 ≤ i, j ≤ m. The values of m and ρ are chosen so that: (1) the grid step is reasonably small (i.e., ρ^(1/m) is close to one); (2) the search space is manageable (i.e., m is not large). If the obtained recall values are considerably larger than the specified threshold, the procedure repeats the grid search using larger values of a and b. Similarly, if the recall is insufficient, the values of a and b are decreased and the grid search is repeated. Note that perfect recall can be achieved with αleft = 0 and αright = 0: in this case, no pruning is done and the data set is searched sequentially. Conversely, αleft = ∞ and αright = ∞ represent (almost) zero recall, because one of the partitions is always pruned.

2.3 Applicability Conditions

It is possible to apply the classic VP-tree algorithm only to data sets such that Dπ,R(d(π, q)) > 0 whenever d(π, q) ≠ R. In a relaxed version of this applicability condition, we require that Dπ,R(d(π, q)) > 0 for almost all queries and a large subset of data points. More formally:

Property 1.
For any pivot π, probability α, and distance x ≠ R, there exists a radius r > 0 such that, if two randomly selected points q (a potential query) and u (a potential nearest neighbor) satisfy d(π, q) = x and d(u, q) ≤ r, then both u and q belong to the same partition (defined by π and R) with probability at least α.

Property 1, which is true for all metric spaces due to the triangle inequality, also holds for the KL-divergence when the data points u are sampled randomly and uniformly from the simplex {x_i | x_i ≥ 0, Σ x_i = 1}. The proof, given in Section 1 of the supplemental materials, can be trivially extended to other non-negative distance functions d(x, y) ≥ 0 (e.g., the Itakura-Saito distance) that satisfy (additional compactness requirements may be needed): (1) d(x, y) = 0 ⇔ x = y; (2) the set of discontinuities of d(x, y) has measure zero in L2. This suggests that the VP-tree could be applicable to a wide class of non-metric spaces.

Table 1: Description of the data sets

Name           d(x, y)      Data set size  Dimensionality             Source
Colors         L2           1.1 · 10^5     112                        Metric Space Library¹
RCV-i          KL-div, L2   0.5 · 10^6     i ∈ {8, 16, 32, 128, 256}  Cayton [6]
SIFT-signat.   KL-div, L2   1 · 10^4       1111                       Cayton [6]
Uniform        L2           0.5 · 10^6     64                         Sampled from U^64[0, 1]

3 Experiments

We run experiments on a Linux server equipped with an Intel Core i7 2600 (3.40 GHz, 8192 KB of L3 CPU cache) and 16 GB of DDR3 RAM (transfer rate 20 GB/s). The software, including scripts that can be used to reproduce our results, is available online as a part of the Non-Metric Space Library² [5]. The code was written in C++, compiled using GNU C++ 4.7 (optimization flag -Ofast), and executed in a single thread. SIMD instructions were enabled using the flags -msse2 -msse4.1 -mssse3. All distance and rank correlation functions are highly optimized and employ SIMD instructions. Vector elements were single-precision numbers.
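As context for the benchmarks below, the learned pruner of Section 2.2 (Eq. 1) amounts to a very small decision rule. The following Python sketch is illustrative (names are our own; the benchmarked code is the optimized C++ library):

```python
def decision_function(x, R, alpha_left, alpha_right):
    """Piece-wise linear approximation of D(x) from Eq. (1).

    alpha_left = alpha_right = 1 recovers the exact metric rule |R - x|;
    larger values "stretch the triangle inequality" and prune more
    aggressively, trading recall for speed.
    """
    alpha = alpha_left if x <= R else alpha_right
    return alpha * abs(x - R)

def visits_both_partitions(r, x, R, alpha_left, alpha_right):
    # Both partitions are visited only if the query ball of radius r
    # may cross the decision boundary; otherwise one partition is pruned.
    return r >= decision_function(x, R, alpha_left, alpha_right)

def alpha_grid(a, rho, m):
    # Candidate values a * rho**(i/m - 0.5), 1 <= i <= m, centered around a,
    # as used by the grid search for alpha_left (and, with b, for alpha_right).
    return [a * rho ** (i / m - 0.5) for i in range(1, m + 1)]
```

With alpha values above one, the same query ball that would force a visit of both partitions under the exact rule can be pruned, which is how the method trades effectiveness for efficiency.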
For the KL-divergence, though, we also implemented a slower version, which computes logarithms on-line, i.e., for each distance computation. The faster version computes logarithms of vector elements off-line, i.e., during indexing, and stores them with the vectors. Additionally, we need to compute logarithms of query vector elements, but this is done only once per query. The optimized implementation is about 30 times faster than the slower one.

The data sets are described in Table 1. Each data set is randomly divided into two parts. The smaller part (containing 1,000 elements) is used as queries, while the larger part is indexed. This procedure is repeated 5 times (for each data set) and results are aggregated using a classic fixed-effect model [15]. Improvement in efficiency due to indexing is measured as the reduction in retrieval time compared to a sequential, i.e., exhaustive, search. Effectiveness was measured using a simple rank error metric proposed by Cayton [6]: it is equal to the number of points closer to the query than the nearest point returned by the search method. This metric is appropriate mostly for 1-NN queries. We present results only for left queries, but we also verified that in the case of right queries the VP-tree provides similar effectiveness/efficiency trade-offs.

We ran benchmarks for L2, the KL-divergence,³ and the Itakura-Saito distance. Implemented methods included:

• The novel search algorithm based on the VP-tree and a piece-wise linear approximation of Dπ,R(x), as described in Section 2.2. The parameters of the grid search algorithm were m = 7 and ρ = 8.
• The permutation method with incremental sorting [14]. Near-optimal performance was obtained using 16 pivots.
• The permutation prefix index, where permutation profiles are stored in a prefix tree of limited depth [13]. We used 16 pivots and a maximal prefix length of 4 (again selected for best performance).
• The bbtree [6], which is designed for Bregman divergences and thus was not used with L2.
• The multi-probe LSH, which is designed to work only for L2. The implementation employs the LSHKit,⁴ which is embedded in the Non-Metric Space Library. The best-performing configuration that we could find used 10 probes and 50 hash tables. The remaining parameters were selected automatically using the cost model proposed by Dong et al. [11].

² https://github.com/searchivarius/NonMetricSpaceLib
³ In the case of SIFT signatures, we use the generalized KL-divergence (similarly to Cayton).
⁴ Downloaded from http://lshkit.sourceforge.net/

[Figure 3: Performance of NN-search for L2. Panels: Colors, RCV-16, RCV-128, RCV-256, Uniform, and SIFT signatures; x-axis: number of points closer (log. scale); y-axis: improvement in efficiency (log. scale); methods: multi-probe LSH, pref. index, vp-tree, permutation.]

[Figure 4: Performance of NN-search for the KL-divergence and Itakura-Saito distance. Panels: RCV-16, RCV-256, and SIFT signatures for each distance; x-axis: number of points closer (log. scale); y-axis: improvement in efficiency (log. scale); methods: pref. index, bbtree, vp-tree, permutation.]

For the bbtree and the VP-tree, vectors in the same bucket were stored in contiguous chunks of memory (allowing for about a 1.5-2x reduction in retrieval times). It is typically more efficient to search elements of a small bucket sequentially, rather than using an index. Near-optimal performance was obtained with 50 elements in a bucket. The same optimization approach was also used for both permutation methods. Several parameters were manually selected to achieve various effectiveness/efficiency trade-offs. They included: the minimal number/percentage of candidates in permutation methods, the desired recall in the multi-probe LSH and in the VP-tree, as well as the maximum number of processed buckets in the bbtree.

Table 2: Improvement in efficiency and retrieval time (ms) for the bbtree without early termination

Data set             RCV-16       RCV-32       RCV-128       RCV-256       SIFT sign.
                     impr.  time  impr.  time  impr.  time   impr.  time   impr.  time
Slow KL-divergence   15.7   8     6.7    36    1.6    613    1.1    1700   0.9    164
Fast KL-divergence   4.6    2.5   1.9    9.6   0.5    108    0.4    274    0.4    18

The results for L2 are given in Fig. 3. Even though the representational dimensionality of the Uniform data set is only 64, it has the highest intrinsic dimensionality among all sets in Table 1 (according to the definition of Chávez et al. [10]). Thus, for the Uniform data set, no method achieved more than a 10x speedup over sequential searching without substantial quality degradation.
For instance, for the VP-tree, a 160x speedup was possible only when the retrieved object was (on average) the 40th nearest neighbor instead of the first. The multi-probe LSH can be twice as fast as the VP-tree at the expense of having a 4.7x larger index. All the remaining data sets have low or moderate intrinsic dimensionality (smaller than eight). For example, the SIFT signatures have a representational dimensionality of 1111, but an intrinsic dimensionality of only four. For data sets with low and moderate intrinsic dimensionality, the VP-tree is faster than the other methods most of the time; for the data sets Colors and RCV-16 the difference is two orders of magnitude.

The results for the KL-divergence and the Itakura-Saito distance are summarized in Fig. 4. The bbtree is never substantially faster than the VP-tree, while being up to an order of magnitude slower for RCV-16 and RCV-256 in the case of the Itakura-Saito distance. Similarly to the L2 results, in most cases the VP-tree is at least as fast as the other methods. Yet, for the SIFT signatures data set and the Itakura-Saito distance, permutation methods can be twice as fast.

Additional analysis has shown that the VP-tree is a good rank-approximation method, but it is not necessarily the best approach in terms of recall. When the VP-tree misses the nearest neighbor, it often returns the second or third nearest neighbor instead. However, when the other examined methods miss the nearest neighbor, they frequently return elements that are far from the true result. For example, the multi-probe LSH may return the true nearest neighbor 50% of the time, and the 100th nearest neighbor the other 50% of the time. This observation about the LSH is in line with previous findings [26].

Finally, we measured the improvement in efficiency (over exhaustive search) for the bbtree with the early termination algorithm disabled. This was done using both the slow and the fast implementation of the KL-divergence.
The results are given in Table 2. Improvements in efficiency for the slower KL-divergence (reported in the first row) are consistent with those reported by Cayton [6]. The second row shows improvements in efficiency for the faster KL-divergence; these improvements are substantially smaller than those in the first row, even though the faster KL-divergence greatly reduces retrieval times. The reason is that the pruning algorithm of the bbtree is quite expensive: it involves computing logarithms/exponents for coordinates of vectors that are not known at index time, and thus these computations cannot be deferred to indexing.

4 Discussion and conclusions

We evaluated two simple yet effective learning-to-prune methods and showed that the resulting approach was competitive against state-of-the-art methods in both metric and non-metric spaces. In most cases, this method provided better trade-offs between rank approximation quality and retrieval speed. For data sets with low or moderate intrinsic dimensionality, the VP-tree could be one to two orders of magnitude faster than other methods (for the same rank approximation quality). We discussed the applicability of our method (a VP-tree with a learned pruner) and proved a theorem supporting the view that the method is applicable to a class of non-metric distances, which includes the KL-divergence. We also showed that the simple trick of pre-computing logarithms at index time substantially improves the performance of existing methods (e.g., the bbtree) for the studied distances. It should be possible to improve over the basic learning-to-prune methods employed in this work using: (1) a better pivot-selection strategy [31]; (2) a more sophisticated sampling strategy; (3) a more accurate (non-linear) approximation of the decision function Dπ,R(x) (see Section 2.1).
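The log-precomputation trick mentioned above can be illustrated with a short sketch. Python is used for brevity; the library itself uses SIMD-optimized C++, and the function names here are illustrative:

```python
import math

def kl_slow(x, y):
    # Straightforward KL-divergence: logarithms computed on every distance call.
    return sum(xi * math.log(xi / yi) for xi, yi in zip(x, y))

def index_vector(y):
    # Index-time work: store the vector together with element-wise logarithms.
    return (y, [math.log(yi) for yi in y])

def prepare_query(x):
    # Once per query: the constant term sum_i x_i * log(x_i).
    return (x, sum(xi * math.log(xi) for xi in x))

def kl_fast(query, indexed):
    # KL(x || y) = sum x_i log x_i - sum x_i log y_i; no logs at query time.
    (x, x_entropy_term), (_, log_y) = query, indexed
    return x_entropy_term - sum(xi * ly for xi, ly in zip(x, log_y))
```

Because Σ x_i log x_i is computed once per query and the log y_i once per indexed vector, each distance evaluation reduces to a dot-product-like loop with no transcendental functions.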
5 Acknowledgements

We thank Lawrence Cayton for providing the data sets, the bbtree code, and answering our questions; Anna Belova for checking the proof of Property 1 (supplemental materials) and editing the paper.

References

[1] G. Amato, F. Rabitti, P. Savino, and P. Zezula. Region proximity in metric spaces and its use for approximate similarity search. ACM Trans. Inf. Syst., 21(2):192–227, Apr. 2003.
[2] G. Amato and P. Savino. Approximate similarity search in metric spaces using inverted files. In Proceedings of the 3rd International Conference on Scalable Information Systems, InfoScale ’08, pages 28:1–28:10, Brussels, Belgium, 2008. ICST.
[3] V. Athitsos, J. Alon, S. Sclaroff, and G. Kollios. BoostMap: A method for efficient approximate similarity rankings. In Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages II-268–II-275, June 2004.
[4] J. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 1975.
[5] L. Boytsov and B. Naidan. Engineering efficient and effective Non-Metric Space Library. In N. Brisaboa, O. Pedreira, and P. Zezula, editors, Similarity Search and Applications, volume 8199 of Lecture Notes in Computer Science, pages 280–293. Springer Berlin Heidelberg, 2013.
[6] L. Cayton. Fast nearest neighbor retrieval for Bregman divergences. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 112–119, New York, NY, USA, 2008. ACM.
[7] L. Cayton and S. Dasgupta. A learning framework for nearest neighbor search. Advances in Neural Information Processing Systems, 20, 2007.
[8] E. Chávez, K. Figueroa, and G. Navarro. Effective proximity retrieval by ordering permutations.
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(9):1647–1658, Sept. 2008.
[9] E. Chávez and G. Navarro. Probabilistic proximity search: Fighting the curse of dimensionality in metric spaces. Information Processing Letters, 85(1):39–46, 2003.
[10] E. Chávez, G. Navarro, R. Baeza-Yates, and J. L. Marroquín. Searching in metric spaces. ACM Computing Surveys, 33(3):273–321, 2001.
[11] W. Dong, Z. Wang, W. Josephson, M. Charikar, and K. Li. Modeling LSH for performance tuning. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM ’08, pages 669–678, New York, NY, USA, 2008. ACM.
[12] O. Edsberg and M. L. Hetland. Indexing inexact proximity search with distance regression in pivot space. In Proceedings of the Third International Conference on SImilarity Search and APplications, SISAP ’10, pages 51–58, New York, NY, USA, 2010. ACM.
[13] A. Esuli. Use of permutation prefixes for efficient and scalable approximate similarity search. Inf. Process. Manage., 48(5):889–902, Sept. 2012.
[14] E. Gonzalez, K. Figueroa, and G. Navarro. Effective proximity retrieval by ordering permutations. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(9):1647–1658, 2008.
[15] L. V. Hedges and J. L. Vevea. Fixed- and random-effects models in meta-analysis. Psychological Methods, 3(4):486–504, 1998.
[16] G. Hjaltason and H. Samet. Properties of embedding methods for similarity searching in metric spaces. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(5):530–549, 2003.
[17] P. Indyk. Nearest neighbors in high-dimensional spaces. In J. E. Goodman and J. O’Rourke, editors, Handbook of Discrete and Computational Geometry, pages 877–892. Chapman and Hall/CRC, 2004.
[18] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, STOC ’98, pages 604–613, New York, NY, USA, 1998. ACM.
[19] D. Jacobs, D. Weinshall, and Y. Gdalyahu. Classification with nonmetric distances: Image retrieval and class representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(6):583–600, 2000.
[20] E. Kushilevitz, R. Ostrovsky, and Y. Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, STOC ’98, pages 614–623, New York, NY, USA, 1998. ACM.
[21] H. Lejsek, F. Ásmundsson, B. Jónsson, and L. Amsaleg. NV-Tree: An efficient disk-based index for approximate search in very large high-dimensional collections. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):869–883, May 2009.
[22] Q. Lv, W. Josephson, Z. Wang, M. Charikar, and K. Li. Multi-probe LSH: efficient indexing for high-dimensional similarity search. In Proceedings of the 33rd International Conference on Very Large Data Bases, VLDB ’07, pages 950–961. VLDB Endowment, 2007.
[23] Y. Mu and S. Yan. Non-metric locality-sensitive hashing. In AAAI, 2010.
[24] T. Murakami, K. Takahashi, S. Serita, and Y. Fujii. Versatile probability-based indexing for approximate similarity search. In Proceedings of the Fourth International Conference on SImilarity Search and APplications, SISAP ’11, pages 51–58, New York, NY, USA, 2011. ACM.
[25] V. Pestov. Indexability, concentration, and VC theory. Journal of Discrete Algorithms, 13:2–18, 2012. Best papers from the 3rd International Conference on Similarity Search and Applications (SISAP 2010).
[26] P. Ram, D. Lee, H. Ouyang, and A. G. Gray. Rank-approximate nearest neighbor search: Retaining meaning and speed in high dimensions. In Advances in Neural Information Processing Systems, pages 1536–1544, 2009.
[27] H. Samet. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann Publishers Inc., 2005.
[28] J. Uhlmann. Satisfying general proximity similarity queries with metric trees.
Information Processing Letters, 40:175–179, 1991.
[29] J. Vermorel. Near neighbor search in metric and nonmetric space, 2005. http://hal.archives-ouvertes.fr/docs/00/03/04/85/PDF/densitree.pdf, last accessed on Nov. 1, 2012.
[30] R. Weber, H. J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proceedings of the 24th International Conference on Very Large Data Bases, pages 194–205. Morgan Kaufmann, August 1998.
[31] P. N. Yianilos. Data structures and algorithms for nearest neighbor search in general metric spaces. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’93, pages 311–321, Philadelphia, PA, USA, 1993. Society for Industrial and Applied Mathematics.
[32] P. Zezula, G. Amato, V. Dohnal, and M. Batko. Similarity Search: The Metric Space Approach (Advances in Database Systems). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005.
[33] P. Zezula, P. Savino, G. Amato, and F. Rabitti. Approximate similarity retrieval with M-trees. The VLDB Journal, 7(4):275–293, Dec. 1998.
[34] Z. Zhang, B. C. Ooi, S. Parthasarathy, and A. K. H. Tung. Similarity search on Bregman divergence: towards non-metric indexing. Proc. VLDB Endow., 2(1):13–24, Aug. 2009.
Learning from Limited Demonstrations

Beomjoon Kim, School of Computer Science, McGill University, Montreal, Quebec, Canada
Amir-massoud Farahmand, School of Computer Science, McGill University, Montreal, Quebec, Canada
Joelle Pineau, School of Computer Science, McGill University, Montreal, Quebec, Canada
Doina Precup, School of Computer Science, McGill University, Montreal, Quebec, Canada

Abstract

We propose a Learning from Demonstration (LfD) algorithm that leverages expert data even when the demonstrations are very few or inaccurate. We achieve this by using both expert data and reinforcement signals gathered through trial-and-error interactions with the environment. The key idea of our approach, Approximate Policy Iteration with Demonstration (APID), is that the expert's suggestions are used to define linear constraints which guide the optimization performed by Approximate Policy Iteration. We prove an upper bound on the Bellman error of the estimate computed by APID at each iteration. Moreover, we show empirically that APID outperforms pure Approximate Policy Iteration, a state-of-the-art LfD algorithm, and supervised learning in a variety of scenarios, including when very few and/or suboptimal demonstrations are available. Our experiments include simulations as well as a real robot path-finding task.

1 Introduction

Learning from Demonstration (LfD) is a practical framework for learning complex behaviour policies from demonstration trajectories produced by an expert. In most conventional approaches to LfD, the agent observes mappings between states and actions in the expert trajectories, and uses supervised learning to estimate a function that can approximately reproduce this mapping. Ideally, the function (i.e., policy) should also generalize well to regions of the state space that are not observed in the demonstration data.
Many of the recent methods focus on incrementally querying the expert in appropriate regions of the state space to improve the learned policy, or to reduce uncertainty [1, 2, 3]. Key assumptions of most of these works are that (1) the expert exhibits optimal behaviour, (2) the expert demonstrations are abundant, and (3) the expert stays with the learning agent throughout the training. In practice, these assumptions significantly reduce the applicability of LfD. We present a framework that leverages insights and techniques from the reinforcement learning (RL) literature to overcome these limitations of conventional LfD methods. RL is a general framework for learning optimal policies from trial-and-error interactions with the environment [4, 5]. Conventional RL approaches alone, however, might have difficulty achieving good performance from relatively little data. Moreover, they are not particularly cautious about the risk involved in trial-and-error learning, which could lead to catastrophic failures. A combination of both expert and interaction data (i.e., mixing LfD and RL), however, offers a tantalizing way to effectively address challenging real-world policy learning problems under realistic assumptions. Our primary contribution is a new algorithmic framework that integrates LfD, tackled using a large-margin classifier, with a regularized Approximate Policy Iteration (API) method. The method is formulated as a coupled-constraint convex optimization, in which expert demonstrations define a set of linear constraints in API. The optimization is formulated in a way that permits mistakes in the demonstrations provided by the expert, and also accommodates variable availability of demonstrations (i.e., just an initial batch or continued demonstrations). We provide a theoretical analysis describing an upper bound on the Bellman error achievable by our approach.
We evaluate our algorithm in a simulated environment under various scenarios, such as varying the quality and quantity of expert demonstrations. In all cases, we compare our algorithm with Least-Squares Policy Iteration (LSPI) [6], a popular API method, as well as with a state-of-the-art LfD method, Dataset Aggregation (DAgger) [1]. We also evaluate the algorithm's practicality in a real robot path-finding task, where there are few demonstrations and exploration data is expensive due to limited time. In all of the experiments, our method outperformed LSPI, using fewer exploration data and exhibiting significantly less variance. Our method also significantly outperformed DAgger in cases where the expert demonstrations are few or suboptimal.

2 Proposed Algorithm

We consider a continuous-state, finite-action discounted MDP (X, A, P, R, γ), where X is a measurable state space, A is a finite set of actions, P : X × A → M(X) is the transition model, R : X × A → M(R) is the reward model, and γ ∈ [0, 1) is a discount factor.¹ Let r(x, a) = E[R(·|x, a)], and assume that r is uniformly bounded by Rmax. A measurable mapping π : X → A is called a policy. As usual, V^π and Q^π denote the value and action-value functions of π, and V* and Q* denote the corresponding value functions of the optimal policy π* [5].

Our algorithm is couched in the framework of API [7]. A standard API algorithm starts with an initial policy π₀. At the (k + 1)-th iteration, given a policy π_k, the algorithm approximately evaluates π_k to find Q̂_k, usually as an approximate fixed point of the Bellman operator T^{π_k}: Q̂_k ≈ T^{π_k} Q̂_k.² This is called the approximate policy evaluation step. Then, a new policy π_{k+1} is computed, which is greedy with respect to Q̂_k. There are several variants of API that mostly differ in how the approximate policy evaluation is performed.
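The API loop just described can be sketched as follows. The tiny deterministic MDP and all names are purely illustrative; the sketch only shows the surrounding iteration, with exact tabular evaluation standing in for the approximate evaluation step:

```python
def approximate_policy_iteration(pi0, evaluate, greedy, n_iters=10):
    """Generic API loop: alternate (approximate) policy evaluation and greedification."""
    pi = pi0
    for _ in range(n_iters):
        q_hat = evaluate(pi)   # approximate policy evaluation: q_hat ~ T^pi q_hat
        pi = greedy(q_hat)     # policy improvement: greedy w.r.t. q_hat
    return pi

# A toy deterministic 2-state, 2-action MDP for illustration.
GAMMA = 0.9
REWARD = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 2.0}
NEXT_STATE = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}

def evaluate(pi):
    # Tabular policy evaluation by iterating the Bellman operator T^pi.
    q = {sa: 0.0 for sa in REWARD}
    for _ in range(300):
        q = {(s, a): REWARD[(s, a)]
             + GAMMA * q[(NEXT_STATE[(s, a)], pi[NEXT_STATE[(s, a)]])]
             for (s, a) in REWARD}
    return q

def greedy(q):
    return {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}
```

In the actual algorithm, the evaluation step is replaced by an approximate, data-driven procedure; the contribution of the paper is precisely to shape that step with expert constraints.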
Most methods attempt to exploit structure in the value function [8, 9, 10, 11], but in some problems one might have extra information about the structure of good or optimal policies as well. This is precisely our case, since we have expert demonstrations.

To develop the algorithm, we start with regularized Bellman error minimization, which is a common flavour of policy evaluation used in API. Suppose that we want to evaluate a policy π given a batch of data D_RL = {(X_i, A_i)}_{i=1}^n containing n examples, and that the exact Bellman operator T^π is known. Then, the new value function Q̂ is computed as:

    Q̂ ← argmin_{Q ∈ F^{|A|}} ‖Q − T^π Q‖²_n + λ J²(Q),        (1)

where F^{|A|} is the set of action-value functions, the first term is the squared Bellman error evaluated on the data,³ J²(Q) is the regularization penalty, which can prevent overfitting when F^{|A|} is complex, and λ > 0 is the regularization coefficient. The regularizer J(Q) measures the complexity of the function Q. Different choices of F^{|A|} and J lead to different notions of complexity, e.g., various definitions of smoothness, sparsity in a dictionary, etc. For example, F^{|A|} could be a reproducing kernel Hilbert space (RKHS) and J its corresponding norm, i.e., J(Q) = ‖Q‖_H.

In addition to D_RL, we have a set of expert examples D_E = {(X_i, π_E(X_i))}_{i=1}^m, which we would like to take into account in the optimization process. The intuition behind our algorithm is that we want to use the expert examples to "shape" the value function where they are available, while using the RL data to improve the policy everywhere else. Hence, even if we have few demonstration examples, we can still obtain good generalization everywhere due to the RL data. To incorporate the expert examples in the algorithm, one might require that at the states X_i from D_E the demonstrated action π_E(X_i) be optimal, which can be expressed as a large-margin constraint:

¹ For a space Ω with σ-algebra σ_Ω, M(Ω) denotes the set of all probability measures over σ_Ω.
2For discrete state spaces, (T^{πk}Q)(x, a) = r(x, a) + γ Σ_{x′} P(x′|x, a) Q(x′, πk(x′)). 3∥Q − T^π Q∥²_n ≜ (1/n) Σ_{i=1}^n |Q(Xi, Ai) − (T^π Q)(Xi, Ai)|², with (Xi, Ai) from DRL. Q(Xi, πE(Xi)) − max_{a ∈ A\πE(Xi)} Q(Xi, a) ≥ 1. However, this might not always be feasible, or desirable (if the expert itself is not optimal), so we add slack variables ξi ≥ 0 to allow occasional violations of the constraints (similar to soft vs. hard margin in the large-margin literature [12]). The policy evaluation step can then be written as the following constrained optimization problem:

ˆQ ← argmin_{Q ∈ F^{|A|}, ξ ∈ R^m_+} ∥Q − T^π Q∥²_n + λ J²(Q) + (α/m) Σ_{i=1}^m ξi   (2)
s.t. Q(Xi, πE(Xi)) − max_{a ∈ A\πE(Xi)} Q(Xi, a) ≥ 1 − ξi for all (Xi, πE(Xi)) ∈ DE.

The parameter α balances the influence of the data obtained by the RL algorithm (generally by trial and error) vs. the expert data. When α = 0, we obtain (1), while when α → ∞, we essentially solve a structured classification problem based on the expert's data [13]. Note that the right side of the constraints could also be multiplied by a coefficient ∆i > 0, to set the size of the acceptable margin between Q(Xi, πE(Xi)) and max_{a ∈ A\πE(Xi)} Q(Xi, a). Such a coefficient could then be set adaptively for different examples; however, this is beyond the scope of the paper. The above constrained optimization problem is equivalent to the following unconstrained one:

ˆQ ← argmin_{Q ∈ F^{|A|}} ∥Q − T^π Q∥²_n + λ J²(Q) + (α/m) Σ_{i=1}^m [1 − (Q(Xi, πE(Xi)) − max_{a ∈ A\πE(Xi)} Q(Xi, a))]_+   (3)

where [1 − z]_+ = max{0, 1 − z} is the hinge loss. In many problems, we do not have access to the exact Bellman operator T^π, but only to samples DRL = {(Xi, Ai, Ri, X′i)}_{i=1}^n with Ri ∼ R(·|Xi, Ai) and X′i ∼ P(·|Xi, Ai). In this case, one might want to use the empirical Bellman error ∥Q − ˆT^π Q∥²_n (with (ˆT^π Q)(Xi, Ai) ≜ Ri + γ Q(X′i, π(X′i)) for 1 ≤ i ≤ n) instead of ∥Q − T^π Q∥²_n. It is known, however, that this is a biased estimate of the Bellman error, and does not lead to proper solutions [14].
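For a linear parameterization Q(x, a) = φ(x, a)ᵀw, the unconstrained objective (3) can be sketched as follows. This is an illustrative transcription, not the authors' code: the Bellman targets and the features of the best non-expert action are assumed to be precomputed, whereas in (3) the inner max itself depends on w.

```python
import numpy as np

def apid_objective(w, phi_rl, bellman_targets, phi_e_expert, phi_e_other, lam, alpha):
    """Unconstrained APID objective (3) for a linear Q(x, a) = phi(x, a)^T w.

    phi_rl:          (n, p) features of the RL pairs (X_i, A_i)
    bellman_targets: (n,)   values (T^pi Q)(X_i, A_i), assumed precomputed
    phi_e_expert:    (m, p) features of expert pairs (X_i, pi_E(X_i))
    phi_e_other:     (m, p) features of the best non-expert action at X_i
    """
    q_rl = phi_rl @ w
    bellman_err = np.mean((q_rl - bellman_targets) ** 2)   # ||Q - T^pi Q||_n^2
    reg = lam * w @ w                                      # lambda * J^2(Q)
    margins = phi_e_expert @ w - phi_e_other @ w           # Q(X, pi_E(X)) - max_a' Q(X, a')
    hinge = np.mean(np.maximum(0.0, 1.0 - margins))        # hinge loss; mean gives (alpha/m) * sum
    return bellman_err + reg + alpha * hinge
```

With α = 0 this is plain regularized Bellman error minimization (1); with a large α it approaches a large-margin classification loss on the expert data.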
One approach to address this issue is to use the modified Bellman error [14]. Another approach is to use the Projected Bellman error, which leads to an LSTD-like algorithm [8]. Using the latter idea, we formulate our optimization as:

ˆQ ← argmin_{Q ∈ F^{|A|}, ξ ∈ R^m_+} ∥Q − ˆh_Q∥²_n + λ J²(Q) + (α/m) Σ_{i=1}^m ξi   (4)
s.t. ˆh_Q = argmin_{h ∈ F^{|A|}} ∥h − ˆT^π Q∥²_n + λ_h J²(h),
     Q(Xi, πE(Xi)) − max_{a ∈ A\πE(Xi)} Q(Xi, a) ≥ 1 − ξi for all (Xi, πE(Xi)) ∈ DE.

Here λ_h > 0 is the regularization coefficient for ˆh_Q, which might be different from λ. For some choices of the function space F^{|A|} and the regularizer J, the estimate ˆh_Q can be found in closed form. For example, one can use linear function approximators h(·) = φ(·)ᵀu and Q(·) = φ(·)ᵀw, where u, w ∈ R^p are parameter vectors and φ(·) ∈ R^p is a vector of p linearly independent basis functions defined over the space of state-action pairs. Using L2 regularization, J²(h) = uᵀu and J²(Q) = wᵀw, the best parameter vector u∗ can be obtained as a function of w by solving a ridge regression problem: u∗(w) = (ΦᵀΦ + nλ_h I)⁻¹ Φᵀ(r + γΦ′w), where Φ, Φ′ and r are the feature matrices and reward vector, respectively: Φ = (φ(Z1), . . . , φ(Zn))ᵀ, Φ′ = (φ(Z′1), . . . , φ(Z′n))ᵀ, r = (R1, . . . , Rn)ᵀ, with Zi = (Xi, Ai) and Z′i = (X′i, π(X′i)) (for data belonging to DRL). More generally, as discussed above, we might choose the function space F^{|A|} to be a reproducing kernel Hilbert space (RKHS) and J to be its corresponding norm, which provides the flexibility of working with a nonparametric representation while still having a closed-form solution for ˆh_Q. We do not provide the details of this formulation here due to space constraints. The approach presented so far tackles the policy evaluation step of the API algorithm. As usual in API, we alternate this step with the policy improvement step (i.e., greedification). The resulting algorithm is called Approximate Policy Iteration with Demonstration (APID). Up to this point, we have left open the problem of how the datasets DRL and DE are obtained. These datasets might be regenerated at each iteration, or they might be reused, depending on the availability of the expert and the environment.
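The closed-form ridge solution u∗(w) above translates directly into a few lines of linear algebra (a sketch with hypothetical names):

```python
import numpy as np

def h_hat_params(Phi, Phi_next, r, w, gamma, lam_h):
    """Closed-form ridge solution
    u*(w) = (Phi^T Phi + n*lam_h*I)^{-1} Phi^T (r + gamma * Phi' w),
    where Phi stacks phi(Z_i) and Phi_next stacks phi(Z'_i) row-wise."""
    n, p = Phi.shape
    A = Phi.T @ Phi + n * lam_h * np.eye(p)
    b = Phi.T @ (r + gamma * Phi_next @ w)
    return np.linalg.solve(A, b)  # solve is preferred to forming the inverse
```

Substituting u∗(w) back into (4) makes the outer objective a function of w alone.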
In practice, if expert data is scarce, DE will be a single fixed batch, but DRL could be grown, e.g., by running the most current policy (possibly with some exploration) to collect more data. The approach used should be tailored to the application. Note that the values of the regularization coefficients, as well as α, should ideally change from iteration to iteration, as a function of the number of samples as well as of the value function Q^{πk}. The choice of these parameters might be automated by model selection [15]. 3 Theoretical Analysis In this section we focus on the kth iteration of APID and consider the solution ˆQ to the optimization problem (2). The theoretical contribution is an upper bound on the true Bellman error of ˆQ. Such an upper bound allows us to use error propagation results [16, 17] to provide a performance guarantee on the value of the outcome policy πK (the policy obtained after K iterations of the algorithm) compared to the optimal value function V∗. We make the following assumptions in our analysis. Assumption A1 (Sampling) DRL contains n independent and identically distributed (i.i.d.) samples (Xi, Ai) ∼ νRL ∈ M(X × A), where νRL is a fixed distribution (possibly dependent on k), and the states in DE = {(Xi, πE(Xi))}_{i=1}^m are also drawn i.i.d., Xi ∼ νE ∈ M(X), from an expert distribution νE. DRL and DE are independent from each other. We denote N = n + m. Assumption A2 (RKHS) The function space F^{|A|} is an RKHS defined by a kernel function K : (X × A) × (X × A) → R, i.e., F^{|A|} = { z ↦ Σ_{i=1}^N wi K(z, Zi) : w ∈ R^N } with {Zi}_{i=1}^N = DRL ∪ DE. We assume that sup_{z ∈ X×A} K(z, z) ≤ 1. Moreover, the function space F^{|A|} is Qmax-bounded. Assumption A3 (Function Approximation Property) For any policy π, Q^π ∈ F^{|A|}. Assumption A4 (Expansion of Smoothness) For all Q ∈ F^{|A|}, there exist constants 0 ≤ LR, LP < ∞, depending only on the MDP and F^{|A|}, such that for any policy π, J(T^π Q) ≤ LR + γ LP J(Q).
Assumption A5 (Regularizers) The regularizer functionals J : B(X) → R and J : B(X × A) → R are pseudo-norms on F and F^{|A|}, respectively,4 and for all Q ∈ F^{|A|} and a ∈ A, we have J(Q(·, a)) ≤ J(Q). Some of these assumptions are quite mild, while others are only here to simplify the analysis and are not necessary for practical application of the algorithm. For example, the i.i.d. assumption A1 can be relaxed using the independent blocks technique [18] or other techniques for handling dependent data, e.g., [19]. The method is certainly not specific to an RKHS (Assumption A2), so other function spaces can be used without much change in the proof. Assumption A3 holds for "rich" enough function spaces, e.g., universal kernels satisfy it for reasonable Q^π. Assumption A4 ensures that if Q ∈ F^{|A|} then T^π Q ∈ F^{|A|}. It holds if F^{|A|} is rich enough and the MDP is "well-behaved". Assumption A5 is mild and ensures that if we control the complexity of Q ∈ F^{|A|}, the complexity of Q(·, a) ∈ F is controlled too. Finally, note that focusing on the case where we have access to the true Bellman operator simplifies the analysis while allowing us to gain more understanding of APID. We are now ready to state the main theorem of this paper. Theorem 1. For any fixed policy π, let ˆQ be the solution to the optimization problem (2) with the choice of α > 0 and λ > 0. If Assumptions A1–A5 hold, then for any n, m ∈ N and 0 < δ < 1, with probability at least 1 − δ, we have 4B(X) and B(X × A) denote the spaces of bounded measurable functions defined on X and X × A. Here we are slightly abusing notation, as the same symbol is used for the regularizer over both spaces. However, this should not cause any confusion, since the identity of the regularizer should always be clear from the context.
∥ˆQ − T^π ˆQ∥²_{νRL} ≤ (64 Qmax √(n + m) / n) ((1 + γLP) √(R²max + α) / √λ + LR)
  + min{ 2α E_{X∼νE}[1 − (Q^π(X, πE(X)) − max_{a ∈ A\πE(X)} Q^π(X, a))]_+ + λ J²(Q^π),
         2 ∥Q^{πE} − T^π Q^{πE}∥²_{νRL} + 2α E_{X∼νE}[1 − (Q^{πE}(X, πE(X)) − max_{a ∈ A\πE(X)} Q^{πE}(X, a))]_+ + λ J²(Q^{πE}) }
  + 4 Q²max (√(2 ln(4/δ)/n) + 6 ln(4/δ)/n) + 20α (1 + 2Qmax) ln(8/δ) / (3m).

The proof of this theorem is in the supplementary material. Let us now discuss some aspects of the result. The theorem guarantees that when the amount of RL data is large enough (n ≫ m), we indeed minimize the Bellman error if we let α → 0. In that case, the upper bound would be O_P(1/√(nλ)) + min{λ J²(Q^π), 2 ∥Q^{πE} − T^π Q^{πE}∥²_{νRL} + λ J²(Q^{πE})}. Considering only the first term inside the min, the upper bound is minimized by the choice λ = [n^{1/3} J^{4/3}(Q^π)]⁻¹, which leads to O_P(J^{2/3}(Q^π) n^{−1/3}) behaviour of the upper bound. The bound shows that the difficulty of learning depends on J(Q^π), the complexity of the true (but unknown) action-value function Q^π measured according to J in F^{|A|}. Note that Q^π might be "simple" with respect to one choice of function space/regularizer, but complex in another. The choice of F^{|A|} and J reflects prior knowledge regarding a suitable function space and complexity measure. When the number of samples n increases, we can afford to increase the size of the function space by making λ smaller. Since we have two terms inside the min, the complexity of the problem might actually depend on 2 ∥Q^{πE} − T^π Q^{πE}∥²_{νRL} + λ J²(Q^{πE}), which is the Bellman error of Q^{πE} (the true action-value function of the expert) according to π, plus the complexity of Q^{πE} in F^{|A|}. Roughly speaking, if π is close to πE, this Bellman error will be small. Two remarks are in order. First, this result does not provide a proper upper bound on the Bellman error when m dominates n. This is to be expected, because if π is quite different from πE and we do not have enough samples in DRL, we cannot guarantee that the Bellman error, which is measured according to π, will be small.
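The stated choice of λ can be checked with a short calculus argument (a sketch; the constant c absorbs all factors independent of n and λ):

```latex
f(\lambda) = \frac{c}{\sqrt{n\lambda}} + \lambda J^2(Q^\pi),\qquad
f'(\lambda) = -\frac{c}{2\sqrt{n}}\,\lambda^{-3/2} + J^2(Q^\pi) = 0
\;\Longrightarrow\;
\lambda^* = \Big(\frac{c}{2 J^2(Q^\pi)\sqrt{n}}\Big)^{2/3}
\propto \big(n^{1/3} J^{4/3}(Q^\pi)\big)^{-1}.
```

Substituting λ* back, both terms scale as J^{2/3}(Q^π) n^{−1/3}, matching the O_P(J^{2/3}(Q^π) n^{−1/3}) rate stated in the text.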
But one can still provide a guarantee by choosing a large α and using a margin-based error bound (cf. Section 4.1 of [20]). Second, this upper bound is not optimal, as we use a simple proof technique based on controlling the supremum of the empirical process. More advanced empirical process techniques can be used to obtain a faster error rate (cf. [12]). 4 Experiments We evaluate APID on a simulated domain, as well as on a real robot path-finding task. In the simulated environment, we compare APID against other benchmarks under varying availability and optimality of the expert demonstrations. In the real robot task, we evaluate the practicality of deploying APID on a live system, especially when DRL and DE are both expensive to obtain. 4.1 Car Brake Control Simulation In the vehicle brake control simulation [21], the agent's goal is to reach a target velocity and then maintain it. It can either press the acceleration pedal or the brake pedal, but not both simultaneously. A state is represented by four continuous-valued features: the target and current velocities, and the current positions of the brake and acceleration pedals. Given a state, the learned policy has to output one of five actions: acceleration up, acceleration down, brake up, brake down, do nothing. The reward is −10 times the error in velocity. The initial velocity is set to 2 m/s, and the target velocity is set to 7 m/s. The expert was implemented using the dynamics between pedal pressure and output velocity, from which we calculate the optimal velocity at each state. We added random noise to the dynamics to simulate a realistic scenario, in which the output velocity is governed by factors such as friction and wind. The agent has no knowledge of the dynamics, and receives only DE and DRL.
For all experiments, we used a linear Radial Basis Function (RBF) approximator for the value function, and CVX, a package for specifying and solving convex programs [22], to solve the optimization problem (4). We set α/m to 1 if the expert is optimal and to 0.01 otherwise. The regularization parameter was set according to 1/√(n + m). We averaged results over 10 runs and computed confidence intervals as well.

Figure 1: (a) Average reward with m = 15 optimal demonstrations. (b) Average reward with m = 100 sub-optimal demonstrations. Each iteration adds 500 new RL data points to APID and LSPI, while the expert data stays the same. The first iteration has n = 500 − m for APID. LSPI treats all the data at this iteration as RL data.

We compare APID with the regularized version of LSPI [6] in all the experiments. Depending on the availability of expert data, we compare APID either with standard supervised LfD or with DAgger [1], a state-of-the-art LfD algorithm that has strong theoretical guarantees and good empirical performance when the expert is optimal. DAgger is designed to query for more demonstrations at each iteration; it then aggregates all demonstrations and trains a new policy. The number of queries in DAgger increases linearly with the task horizon. For DAgger and supervised LfD, we use random forests to train the policy. We first consider the case of little but optimal expert data, with task horizon 1000. At each iteration, the agent gathers more RL data using a random policy. In this case, shown in Figure 1a, LSPI performs worse than APID on average, and it also has much higher variance, especially when DRL is small. This is consistent with the empirical results in [6], in which LSPI showed significant variance even on simple tasks.
In the first iteration, APID has moderate variance, but it is quickly reduced in the next iteration. This is due to the fact that the expert constraints impose a particular shape on the value function, as noted in Section 2. Supervised LfD performs the worst, as the amount of expert data is insufficient. Results for the case in which the agent has more, but sub-optimal, expert data are shown in Figure 1b. Here, with probability 0.5 the expert gives a random action rather than the optimal action. Compared to supervised LfD, APID and LSPI are both able to overcome the sub-optimality in the expert's behaviour and achieve good performance, by leveraging the RL data. Next, we consider the case of abundant demonstrations from a sub-optimal expert, who gives random actions with probability 0.25, to characterize the difference between APID and DAgger. The task horizon is reduced to 100, due to the number of demonstrations required by DAgger. As can be seen in Figure 2a, the sub-optimal demonstrations cause DAgger to diverge, because it changes the policy at each iteration based on the newly aggregated sub-optimal demonstrations. APID, on the other hand, is able to learn a better policy by leveraging DRL. APID also outperforms LSPI (which uses the same DRL) by generalizing from DE via function approximation. This result illustrates APID's robustness to sub-optimal expert demonstrations. Figure 2b shows the result for the case of optimal and abundant demonstrations. In this case, which fits DAgger's assumptions, the performance of APID is on par with that of DAgger. 4.2 Real Robot Path Finding We now evaluate the practicality of APID on a real robot path-finding task and compare it with LSPI and supervised LfD, using only one demonstrated trajectory. We do not assume that the expert is optimal (and abundant), and therefore do not include DAgger, which was shown to perform poorly in this case.
In this task, the robot needs to get to the goal in an unmapped environment by learning to avoid obstacles.

Figure 2: (a) Performance with a sub-optimal expert. (b) Performance with an optimal expert. Each iteration (x-axis) adds 100 new expert data points to APID and DAgger. We use n = 3000 − m for APID. LSPI treats all data as RL data.

We use an iRobot Create equipped with a Kinect RGB-depth sensor and a laptop. We encode the Kinect observations with 1 × 3 grid cells (each 1 m × 1 m). The robot also has three bumpers to detect collisions from the front, left, and right. Figures 3a and 3b show a picture of the robot and its environment. In order to reach the goal, the robot needs to turn left to avoid a first box and a wall on the right, while not turning too much, to avoid the couch. Next, the robot must turn right to avoid a second box, but make sure not to turn too much or too soon, to avoid colliding with the wall or the first box. Then, the robot needs to get into the hallway, turn right, and move forward to reach the goal position, which is 6 m forward and 1.5 m to the right of the initial position. The state space is represented by 3 non-negative integer features, giving the number of point-cloud points produced by the Kinect in each grid cell, and 2 continuous features (the robot position). The robot has three discrete actions: turn left, turn right, and move forward. The reward is minus the distance to the goal; however, if the robot's front bumper is pressed and the robot moves forward, it receives a penalty equal to 2 times the current distance to the goal. If the robot's left bumper is pressed and the robot does not turn right, or vice versa, it also receives a penalty of 2 times the current distance to the goal. The robot outputs actions at a rate of 1.7 Hz.
We started from a single demonstration trajectory, then incrementally added only RL data. The number of data points added varied at each iteration, but the average was 160 data points, which corresponds to around 1.6 minutes of exploration using an ϵ-greedy exploration policy (with ϵ decreasing over iterations). Over 11 iterations, the training time was approximately 18 minutes. Initially, α/m was set to 0.9, and it was then decreased as new data was acquired. To evaluate the performance of each algorithm, we ran each iteration's policy for a task horizon of 100 (≈1 min); we repeated each run 5 times to compute the mean and standard deviation. As shown in Figure 3c, APID outperformed both LSPI and supervised LfD; in fact, these two methods could not reach the goal. Supervised LfD kept running into the couch, as the state distributions of the expert and the robot differed, as noted in [1]. LSPI had problems due to exploring unnecessary states; specifically, the ϵ-greedy policy of LSPI explored regions of the state space that were not relevant to learning the optimal plan, such as areas far to the left of the initial position. The ϵ-greedy policy of APID, on the other hand, was able to leverage the expert data to efficiently explore the most relevant states and avoid unnecessary collisions. For example, it learned to avoid the first box in the first iteration, and then explored states near the couch, where supervised LfD failed. Table 1 gives the time it took for the robot to reach the goal (within 1.5 m). Only iterations 9, 10 and 11 of APID reached the goal. Note that the times achieved by APID (iteration 11) are similar to the expert's.

Table 1: Average time to reach the goal
                  Demonstration   APID-9th      APID-10th     APID-11th
Time to goal (s)  35.9            38.4 ± 0.81   37.7 ± 0.84   36.1 ± 0.24

Figure 3: (a) Picture of the iRobot Create equipped with Kinect. (b) Top-down view of the environment.
The star represents the goal, the circle represents the initial position, black lines indicate walls, and the three grid cells represent the vicinity of the Kinect. (c) Distance to the goal for LSPI, APID and supervised LfD with random forest. 5 Discussion We proposed an algorithm that learns from limited demonstrations by leveraging a state-of-the-art API method. To our knowledge, this is the first LfD algorithm that learns from few and/or suboptimal demonstrations. Most LfD methods focus on solving the issue of violation of i.i.d. data assumptions by changing the policy slowly [23], by reducing the problem to online learning [1], by querying the expert [2] or by obtaining corrections from the expert [3]. These methods all assume that the expert is optimal or close-to-optimal, and demonstration data is abundant. The TAMER system [24] uses rewards provided by the expert (and possibly blended with MDP rewards), instead of assuming that an action choice is provided. There are a few Inverse RL methods that do not assume optimal experts [25, 26], but their focus is on learning the reward function rather than on planning. Also, these methods require a model of the system dynamics, which is typically not available in practice. In the simulated environment, we compared our method with DAgger (a state-of-the-art LfD method) as well as with a popular API algorithm, LSPI. We considered four scenarios: very few but optimal demonstrations, a reasonable number of sub-optimal demonstrations, abundant sub-optimal demonstrations, and abundant optimal demonstrations. In the first three scenarios, which are more realistic, our method outperformed the others. In the last scenario, in which the standard LfD assumptions hold, APID performed just as well as DAgger. In the real robot path-finding task, our method again outperformed LSPI and supervised LfD. LSPI suffered from inefficient exploration, and supervised LfD was affected by the violation of the i.i.d. 
assumption, as pointed out in [1]. We note that APID accelerated API by utilizing demonstration data. Previous approaches [27, 28] accelerated policy search, e.g., by using LfD to find initial policy parameters. In contrast, APID leverages the expert data to shape the policy throughout the planning. The work most similar to ours, in terms of goals, is [29], where the agent is given multiple suboptimal trajectories and infers a hidden desired trajectory using Expectation Maximization and Kalman filtering. However, their approach is less general, as it assumes a particular noise model in the expert data, whereas APID is able to handle demonstrations that are sub-optimal non-uniformly along the trajectory. In future work, we will explore more applications of APID and study its behaviour with respect to ∆i. For instance, in safety-critical applications, large ∆i could be used at critical states. Acknowledgements Funding for this work was provided by the NSERC Canadian Field Robotics Network, Discovery Grants Program, and Postdoctoral Fellowship Program, as well as by the CIHR (CanWheel team) and the FQRNT (Regroupements stratégiques INTER et REPARTI). References [1] S. Ross, G. Gordon, and J. A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011. [2] S. Chernova and M. Veloso. Interactive policy learning through confidence-based autonomy. Journal of Artificial Intelligence Research, 34, 2009. [3] B. Argall, M. Veloso, and B. Browning. Teacher feedback to scaffold and refine demonstrated motion primitives on a mobile robot. Robotics and Autonomous Systems, 59(3-4), 2011. [4] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [5] Cs. Szepesvári. Algorithms for Reinforcement Learning. Morgan & Claypool Publishers, 2010. [6] M. G. Lagoudakis and R. Parr. Least-squares policy iteration.
Journal of Machine Learning Research, 4:1107–1149, 2003. [7] D. P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011. [8] A.-m. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized policy iteration. In NIPS 21, 2009. [9] J. Z. Kolter and A. Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML, 2009. [10] G. Taylor and R. Parr. Kernelized value function approximation for reinforcement learning. In ICML, 2009. [11] M. Ghavamzadeh, A. Lazaric, R. Munos, and M. Hoffman. Finite-sample analysis of Lasso-TD. In ICML, 2011. [12] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008. [13] I. Tsochantaridis, T. Joachims, T. Hofmann, Y. Altun, and Y. Singer. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(2):1453–1484, 2006. [14] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71:89–129, 2008. [15] A.-m. Farahmand and Cs. Szepesvári. Model selection in reinforcement learning. Machine Learning, 85(3):299–332, 2011. [16] R. Munos. Error bounds for approximate policy iteration. In ICML, 2003. [17] A.-m. Farahmand, R. Munos, and Cs. Szepesvári. Error propagation for approximate policy and value iteration. In NIPS 23, 2010. [18] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94–116, January 1994. [19] P.-M. Samson. Concentration of measure inequalities for Markov chains and φ-mixing processes. The Annals of Probability, 28(1):416–461, 2000. [20] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005. [21] T.
Hester, M. Quinlan, and P. Stone. RTMBA: A real-time model-based reinforcement learning architecture for robot control. In ICRA, 2012. [22] CVX Research, Inc. CVX: Matlab software for disciplined convex programming, version 2.0. http://cvxr.com/cvx, August 2012. [23] S. Ross and J. A. Bagnell. Efficient reductions for imitation learning. In AISTATS, 2010. [24] W. B. Knox and P. Stone. Reinforcement learning from simultaneous human and MDP reward. In AAMAS, 2012. [25] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In IJCAI, 2007. [26] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, 2008. [27] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In NIPS 15, 2002. [28] J. Kober and J. Peters. Policy search for motor primitives in robotics. Machine Learning, 84(1-2):171–203, 2011. [29] A. Coates, P. Abbeel, and A. Y. Ng. Learning for control from multiple demonstrations. In ICML, 2008.
Aggregating Optimistic Planning Trees for Solving Markov Decision Processes Gunnar Kedenburg INRIA Lille - Nord Europe / idalab GmbH gunnar.kedenburg@inria.fr Raphaël Fonteneau University of Liège / INRIA Lille - Nord Europe raphael.fonteneau@ulg.ac.be Rémi Munos INRIA Lille - Nord Europe / Microsoft Research New England remi.munos@inria.fr Abstract This paper addresses the problem of online planning in Markov decision processes using a randomized simulator, under a budget constraint. We propose a new algorithm which is based on the construction of a forest of planning trees, where each tree corresponds to a random realization of the stochastic environment. The trees are constructed using a “safe” optimistic planning strategy combining the optimistic principle (in order to explore the most promising part of the search space first) with a safety principle (which guarantees a certain amount of uniform exploration). In the decision-making step of the algorithm, the individual trees are aggregated and an immediate action is recommended. We provide a finite-sample analysis and discuss the trade-off between the principles of optimism and safety. We also report numerical results on a benchmark problem. Our algorithm performs as well as state-of-the-art optimistic planning algorithms, and better than a related algorithm which additionally assumes the knowledge of all transition distributions. 1 Introduction Adaptive decision making algorithms have been used increasingly in the past years, and have attracted researchers from many application areas, like artificial intelligence [16], financial engineering [10], medicine [14] and robotics [15]. These algorithms realize an adaptive control strategy through interaction with their environment, so as to maximize an a priori performance criterion. A new generation of algorithms based on look-ahead tree search techniques have brought a breakthrough in practical performance on planning problems with large state spaces. 
Techniques based on planning trees, such as Monte Carlo tree search [4, 13], and in particular the UCT algorithm (UCB applied to Trees, see [12]), have made it possible to tackle large-scale problems such as the game of Go [7]. These methods exploit the fact that, in order to decide on an action at a given state, it is not necessary to build an estimate of the value function everywhere. Instead, they search locally in the space of policies, around the current state. We propose a new algorithm for planning in Markov decision processes (MDPs). We assume that a limited budget of calls to a randomized simulator for the MDP (the generative model in [11]) is available for exploring the consequences of actions before making a decision. The intuition behind our algorithm is to achieve a high exploration depth in the look-ahead trees by planning in fixed realizations of the MDP, and to achieve the necessary exploration width by aggregating a forest of planning trees (forming an approximation of the MDP from many realizations). Each of the trees is developed around the state for which a decision has to be made, according to the principle of optimism in the face of uncertainty [13] combined with a safety principle. We provide a finite-sample analysis depending on the budget, split into the number of trees and the number of node expansions in each tree. We show that our algorithm is consistent and that it identifies the optimal action when given a sufficiently large budget. We also give numerical results which demonstrate good performance on a benchmark problem. In particular, we show that our algorithm achieves much better performance on this problem than OP-MDP [2] when both algorithms generate the same number of successor states, despite the fact that OP-MDP assumes knowledge of all successor state probabilities in the MDP, whereas our algorithm only samples states from a simulator. The paper is organized as follows: first, we discuss some related work in section 2.
In section 3, the problem addressed in this paper is formalized, before we describe our algorithm in section 4. Its finite-sample analysis is given in section 5. We provide numerical results on the inverted pendulum benchmark in section 6. In section 7, we discuss and conclude this work. 2 Related work The optimism in the face of uncertainty paradigm has already led to several successful results for solving decision-making problems. Specifically, it has been applied in the following contexts: multi-armed bandit problems [1] (which can be seen as single-state MDPs), planning algorithms for deterministic and stochastic systems [8, 9, 17], and global optimization of stochastic functions that are only accessible through sampling. See [13] for a detailed review of the optimistic principle applied to planning and optimization. The algorithm presented in this paper is particularly closely related to two recently developed online planning algorithms for solving MDPs, namely the OPD algorithm [9] for MDPs with deterministic transitions, and the OP-MDP algorithm [2], which addresses stochastic MDPs where all transition probabilities are known. A Bayesian adaptation of OP-MDP has also been proposed [6] for planning in the context where the MDP is unknown. Our contribution is also related to [5], where random ensembles of state-action independent disturbance scenarios are built, the planning problem is solved for each scenario, and a decision is made based on majority voting. Finally, since our algorithm proceeds by sequentially applying the first decision of a longer plan over a receding horizon, it can also be seen as a Model Predictive Control [3] technique. 3 Formalization Let (S, A, p, r, γ) be a Markov decision process (MDP), where the sets S and A respectively denote the state space and the finite action space, with |A| > 1, of the MDP.
When an action a ∈ A is selected in state s ∈ S, the MDP transitions to a successor state s′ ∈ S(s, a) with probability p(s′|s, a). We further assume that every successor state set S(s, a) is finite, with cardinality bounded by K ∈ N. Associated with the transition is a deterministic instantaneous reward r(s, a, s′) ∈ [0, 1]. While the transition probabilities may be unknown, it is assumed that a randomized simulator is available which, given a state-action pair (s, a), outputs a successor state s′ ∼ p(·|s, a). The ability to sample is a weaker assumption than knowledge of all transition probabilities. In this paper we consider the problem of planning under a budget constraint: only a limited number of samples may be drawn using the simulator. Afterwards, a single decision has to be made. Let π : S → A denote a deterministic policy. Define the value function of the policy π in a state s as the discounted sum of expected rewards:

v^π(s) = E[ Σ_{t=0}^{∞} γ^t r(s_t, π(s_t), s_{t+1}) | s_0 = s ],   (1)

where the constant γ ∈ (0, 1) is called the discount factor. Let π* be an optimal policy (i.e. a policy that maximizes v^π in all states). It is well known that the optimal value function v* := v^{π*} is the solution to the Bellman equation

∀s ∈ S : v*(s) = max_{a∈A} Σ_{s′∈S(s,a)} p(s′|s, a) (r(s, a, s′) + γ v*(s′)).

Given the action-value function Q*(s, a) := Σ_{s′∈S(s,a)} p(s′|s, a) (r(s, a, s′) + γ v*(s′)), an optimal policy can be derived as π*(s) ∈ argmax_{a∈A} Q*(s, a).

4 Algorithm

We name our algorithm ASOP (for “Aggregated Safe Optimistic Planning”). The main idea behind it is to use a simulator to obtain a series of deterministic “realizations” of the stochastic MDP, to plan in each of them individually, and then to aggregate all the information gathered in the deterministic MDPs into an empirical approximation of the original MDP, on the basis of which a decision is made.
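Solving such an empirical MDP amounts to applying the Bellman equations above. As a self-contained illustration (a hypothetical two-state toy MDP with known transition probabilities, not the paper's construction; ASOP itself only samples from a simulator), the fixed point v* and Q* can be computed by value iteration:

```python
# Value iteration for Q* on a toy two-state MDP (hypothetical example with
# known transition probabilities; ASOP itself only samples from a simulator).
# P[s][a] is a list of (successor, probability, reward) triples.
P = {
    0: {0: [(0, 1.0, 0.0)],                   # stay in 0, no reward
        1: [(1, 0.5, 1.0), (0, 0.5, 0.0)]},   # risky move towards 1
    1: {0: [(1, 1.0, 1.0)],                   # stay in 1, reward 1
        1: [(0, 1.0, 0.0)]},                  # move back to 0
}
gamma = 0.9

def value_iteration(P, gamma, tol=1e-10):
    """Iterate the Bellman operator until the value function converges."""
    V = {s: 0.0 for s in P}
    while True:
        Q = {s: {a: sum(p * (r + gamma * V[sn]) for sn, p, r in outcomes)
                 for a, outcomes in actions.items()}
             for s, actions in P.items()}
        V_new = {s: max(Q[s].values()) for s in P}
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return Q, V_new
        V = V_new

Q, V = value_iteration(P, gamma)
pi = {s: max(Q[s], key=Q[s].get) for s in Q}   # greedy optimal policy
```

Here the greedy policy takes the risky move in state 0 and stays put in state 1, and V(1) = 1/(1−γ) = 10, matching the closed-form value of a constant reward-1 stream.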
We refer to the planning trees used here as single successor state trees (S3-trees), to distinguish them from other planning trees used for the same problem (e.g. the OP-MDP tree, where all possible successor states are considered). Every node of an S3-tree represents a state s ∈ S and has at most one child node per action a ∈ A, representing a successor state s′ ∈ S. The successor state is drawn using the simulator during the construction of the S3-tree. The planning tree construction, using the SOP algorithm (for “Safe Optimistic Planning”), is described in section 4.1. The ASOP algorithm, which integrates building the forest and deciding on an action by aggregating the information in the forest, is described in section 4.2.

4.1 Safe optimistic planning in S3-trees: the SOP algorithm

SOP is an algorithm for sequentially constructing an S3-tree. It can be seen as a variant of the OPD algorithm [9] for planning in deterministic MDPs. SOP expands up to two leaves of the planning tree per iteration. The first leaf (the optimistic one) is a maximizer of an upper bound (called the b-value) on the value function of the (deterministic) realization of the MDP explored in the S3-tree. The b-value of a node x is defined as

b(x) := Σ_{i=0}^{d(x)−1} γ^i r_i + γ^{d(x)}/(1−γ),   (2)

where (r_i) is the sequence of rewards obtained along the path to x, and d(x) is the depth of the node (the length of the path from the root to x). Expanding only the optimistic leaf would not be enough to make ASOP consistent; this is shown in the appendix. Therefore, a second leaf (the safe one), defined as the shallowest leaf in the current tree, is also expanded in each iteration. Pseudo-code is given as algorithm 1.

Algorithm 1: SOP
Data: The initial state s0 ∈ S and a budget n ∈ N
Result: A planning tree T
Let T denote a tree consisting only of a leaf, representing s0.
Initialize the cost counter c := 0.
while c < n do
  Form a subset L of leaves of T, containing a leaf of minimal depth and a leaf of maximal b-value (computed according to (2); the two leaves may be identical).
  foreach l ∈ L do
    Let s denote the state represented by l.
    foreach a ∈ A do
      if c < n then
        Use the simulator to draw a successor state s′ ∼ p(·|s, a).
        Create an edge in T from l to a new leaf representing s′.
        Let c := c + 1.
return T

4.2 Aggregation of S3-trees: the ASOP algorithm

ASOP consists of three steps. In the first step, it runs independent instances of SOP to collect information about the MDP, in the form of a forest of S3-trees. It then computes action-values Q̂*(s0, a) of a single “empirical” MDP based on the collected information, in which states are represented by forests: on a transition, the forest is partitioned into groups by successor state, and the corresponding frequencies are taken as the transition probabilities. Leaves are interpreted as absorbing states with zero reward on every action, yielding a trivial lower bound. Pseudo-code for this computation is given as algorithm 2. ASOP then outputs the action

π̂(s0) ∈ argmax_{a∈A} Q̂*(s0, a).

The optimal policy of the empirical MDP has the property that the empirical lower bound of its value, computed from the information collected by planning in the individual realizations, is maximal over the set of all policies. Pseudo-code for the ASOP algorithm is given as algorithm 3.

Algorithm 2: ActionValue
Data: A forest F and an action a, with each tree in F representing the same state s
Result: An empirical lower bound for the value of a in s
Let E denote the edges representing action a at any of the root nodes of F.
if E = ∅ then
  return 0
else
  Let F′ be the set of trees pointed to by the edges in E.
  Enumerate the states represented by any tree in F′ by {s′_i : i ∈ I} for some finite I.
  foreach i ∈ I do
    Denote the set of trees in F′ which represent s′_i by F_i.
    Let ν̂_i := max_{a′∈A} ActionValue(F_i, a′).
    Let p̂_i := |F_i|/|F′|.
  return Σ_{i∈I} p̂_i (r(s, a, s′_i) + γ ν̂_i)

Algorithm 3: ASOP
Data: The initial state s0, a per-tree budget b ∈ N and the forest size m ∈ N
Result: An action to take
for i = 1, . . . , m do
  Let T_i := SOP(s0, b).
return argmax_{a∈A} ActionValue({T_1, . . . , T_m}, a)

5 Finite-sample analysis

In this section, we provide a finite-sample analysis of ASOP in terms of the number of planning trees m and the per-tree budget n. An immediate consequence of this analysis is that ASOP is consistent: the action returned by ASOP converges to the optimal action as both n and m tend to infinity. Our loss measure is the “simple” regret, the expected loss of first playing the action π̂(s0) returned by the algorithm at the initial state s0 and acting optimally from then on, compared to acting optimally from the beginning:

R_{n,m}(s0) = Q*(s0, π*(s0)) − Q*(s0, π̂(s0)).

First, let us use the “safe” part of SOP to show that each S3-tree is fully explored up to a certain depth d when given a sufficiently large per-tree budget n.

Lemma 1. For any d ∈ N, once a budget of n ≥ 2|A| (|A|^{d+1}−1)/(|A|−1) has been spent by SOP on an S3-tree, the state-actions of all nodes up to and including those at depth d have all been sampled exactly once.

Proof. A complete |A|-ary tree contains |A|^l nodes at level l, so it contains Σ_{l=0}^{d} |A|^l = (|A|^{d+1}−1)/(|A|−1) nodes up to and including level d. In each of these nodes, |A| actions need to be explored. We complete the proof by noticing that SOP spends at least half of its budget on shallowest leaves.

Let v^π_ω and v^π_{ω,n} denote the value functions for a policy π in the infinite, completely explored S3-tree defined by a random realization ω, and in the finite S3-tree constructed by SOP with a budget of n in the same realization ω, respectively. From Lemma 1 we deduce that if the per-tree budget is at least

n ≥ 2 |A|/(|A|−1) [ϵ(1−γ)]^{−log|A|/log(1/γ)},   (3)

then |v^π_ω(s0) − v^π_{ω,n}(s0)| ≤ Σ_{i=d+1}^{∞} γ^i r_i ≤ γ^{d+1}/(1−γ) ≤ ϵ for any policy π.
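The budget bound of Lemma 1 and the closed-form bound (3) can be checked numerically; the following is an illustrative Python sketch (the concrete values of |A|, γ and ϵ are our own choices, not from the paper):

```python
import math

# Numerical check of the budget bounds in Lemma 1 and eq. (3).
# The concrete values of |A|, gamma and eps below are illustrative choices.

def lemma1_budget(num_actions, d):
    """Per-tree budget guaranteeing full expansion up to depth d (Lemma 1)."""
    return 2 * num_actions * (num_actions ** (d + 1) - 1) // (num_actions - 1)

def budget_for_accuracy(num_actions, gamma, eps):
    """Per-tree budget from eq. (3): |v - v_n| <= eps for every policy."""
    expo = math.log(num_actions) / math.log(1.0 / gamma)
    return 2 * num_actions / (num_actions - 1) * (eps * (1 - gamma)) ** (-expo)

A, gamma, eps = 3, 0.95, 0.5
# Smallest depth d such that the truncated tail gamma^{d+1}/(1-gamma) <= eps.
d = math.floor(math.log(eps * (1 - gamma)) / math.log(gamma))
n_exact = lemma1_budget(A, d)                 # exact requirement for depth d
n_bound = budget_for_accuracy(A, gamma, eps)  # closed-form bound of eq. (3)
```

For instance, with |A| = 2 and d = 1 the formula gives a budget of 12: the complete binary tree down to depth 1 has 3 nodes with 2 actions each, and SOP devotes at least half of its budget to shallowest leaves.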
ASOP aggregates the trees and computes the optimal policy π̂ of the resulting empirical MDP, whose transition probabilities are defined by the frequencies (over the m S3-trees) of transitions from state-actions to successor states. Therefore, π̂ is actually a policy maximizing the function

π ↦ (1/m) Σ_{i=1}^{m} v^π_{ω_i,n}(s0).   (4)

If the number m of S3-trees and the per-tree budget n are large, we therefore expect the optimal policy π̂ of the empirical MDP to be close to the optimal policy π* of the true MDP. This is the result stated in the following theorem.

Theorem 1. For any δ ∈ (0, 1) and ϵ ∈ (0, 1), if the number of S3-trees is at least

m ≥ (8/(ϵ²(1−γ)²)) [ (K/(K−1)) [ϵ(1−γ)/4]^{−log K/log(1/γ)} log|A| + log(4/δ) ]   (5)

and the per-tree budget is at least

n ≥ 2 |A|/(|A|−1) [ϵ(1−γ)/4]^{−log|A|/log(1/γ)},   (6)

then P(R_{m,n}(s0) < ϵ) ≥ 1 − δ.

Proof. Let δ ∈ (0, 1) and ϵ ∈ (0, 1), and fix realizations {ω_1, . . . , ω_m} of the stochastic MDP, for some m satisfying (5). Each realization ω_i corresponds to an infinite, completely explored S3-tree. Let n denote a per-tree budget satisfying (6). Analogously to (3), we know from Lemma 1 that, given our choice of n, SOP constructs trees which are completely explored up to depth d := ⌊log(ϵ(1−γ)/4)/log γ⌋, fulfilling γ^{d+1}/(1−γ) ≤ ϵ/4.

Consider the following truncated value functions: let ν^π_d(s0) denote the sum of expected discounted rewards obtained in the original MDP when following policy π for d steps and then receiving reward zero from there on, and let ν^π_{ω_i,d}(s0) denote the analogous quantity in the MDP corresponding to realization ω_i. Define, for all policies π, the quantities v̂^π_{m,n} := (1/m) Σ_{i=1}^{m} v^π_{ω_i,n}(s0) and ν̂^π_{m,d} := (1/m) Σ_{i=1}^{m} ν^π_{ω_i,d}(s0). Since the trees are complete up to level d and the rewards are non-negative, we deduce that 0 ≤ v^π_{ω_i,n} − ν^π_{ω_i,d} ≤ ϵ/4 for each i and each policy π; thus the same is true for the averages:

0 ≤ v̂^π_{m,n} − ν̂^π_{m,d} ≤ ϵ/4  for all π.   (7)

Notice that ν^π_d(s0) = E_ω[ν^π_{ω,d}(s0)].
From the Chernoff-Hoeffding inequality, we have for any fixed policy π (since the truncated values lie in [0, 1/(1−γ)]):

P(|ν̂^π_{m,d} − ν^π_d(s0)| ≥ ϵ/4) ≤ 2 e^{−m ϵ²(1−γ)²/8}.

Now we need a uniform bound over the set of all possible policies. The number of distinct policies is |A| · |A|^K · … · |A|^{K^d} (at each level l, there are at most K^l states that can be reached by following a policy at previous levels, so there are |A|^{K^l} different choices that policies can make at level l). Thus, since m ≥ (8/(ϵ²(1−γ)²)) [ (K^{d+1}/(K−1)) log|A| + log(4/δ) ], we have

P(max_π |ν̂^π_{m,d} − ν^π_d(s0)| ≥ ϵ/4) ≤ δ/2.   (8)

The action returned by ASOP is π̂(s0), where π̂ := argmax_π v̂^π_{m,n}. Finally, it follows that with probability at least 1 − δ:

R_{n,m}(s0) = Q*(s0, π*(s0)) − Q*(s0, π̂(s0)) ≤ v*(s0) − v^{π̂}(s0)
  = [v*(s0) − v̂^{π*}_{m,n}]
    + [v̂^{π*}_{m,n} − v̂^{π̂}_{m,n}]   (≤ 0, by definition of π̂)
    + [v̂^{π̂}_{m,n} − ν̂^{π̂}_{m,d}]   (≤ ϵ/4, by (7))
    + [ν̂^{π̂}_{m,d} − ν^{π̂}_d(s0)]   (≤ ϵ/4, by (8))
    + [ν^{π̂}_d(s0) − v^{π̂}(s0)]     (≤ 0, by truncation)
  ≤ [v^{π*}(s0) − ν^{π*}_d(s0)]     (≤ ϵ/4, by truncation)
    + [ν^{π*}_d(s0) − ν̂^{π*}_{m,d}]   (≤ ϵ/4, by (8))
    + [ν̂^{π*}_{m,d} − v̂^{π*}_{m,n}]   (≤ 0, by (7))
    + ϵ/2
  ≤ ϵ.

Remark 1. The total budget (nm) required to return an ϵ-optimal action with high probability is thus of order ϵ^{−2 − log(K|A|)/log(1/γ)}. Notice that this rate is poorer (by a factor of ϵ^{−2}) than the rate obtained for uniform planning in [2]; this is a direct consequence of the fact that we only draw samples, whereas a full model of the transition probabilities is assumed in [2].

Remark 2. Since there is a finite number of actions, denoting by ∆ > 0 the optimality gap between the best and the second-best action values, the optimal action is identified with high probability (i.e. the simple regret is 0) after a total budget of order ∆^{−2 − log(K|A|)/log(1/γ)}.

Remark 3. The optimistic part of the algorithm allows a deep exploration of the MDP.
At the same time, it biases the expression maximized by π̂ in (4) towards near-optimal actions of the deterministic realizations. Under the assumptions of Theorem 1, this bias becomes insignificant.

Remark 4. Notice that we do not use the optimistic properties of the algorithm in the analysis. The analysis only uses the “safe” part of SOP planning, i.e. the fact that one sample out of two is devoted to expanding the shallowest nodes. An analysis of the benefit of the optimistic part of the algorithm, similar to the analyses carried out in [9, 2], would be much more involved and is deferred to future work. However, the impact of the optimistic part of the algorithm is essential in practice, as shown in the numerical results.

6 Numerical results

In this section, we compare the performance of ASOP to OP-MDP [2], UCT [12], and FSSS [17]. We use the (noisy) inverted pendulum benchmark problem from [2], which consists of swinging up and stabilizing a weight attached to an actuated link that rotates in a vertical plane. Since the available power is too low to push the pendulum up in a single rotation from the initial state, the pendulum has to be swung back and forth to gather energy, prior to being pushed up and stabilized. The inverted pendulum is described by the state variables (α, α̇) ∈ [−π, π] × [−15, 15] and the differential equation

α̈ = (mgl sin(α) − b α̇ − K(K α̇ + u)/R) / J,

where J = 1.91 · 10⁻⁴ kg·m², m = 0.055 kg, g = 9.81 m/s², l = 0.042 m, b = 3 · 10⁻⁶ Nm·s/rad, K = 0.0536 Nm/A, and R = 9.5 Ω. The state variable α̇ is constrained to [−15, 15] by saturation. The discrete-time problem is obtained by mapping actions from A = {−3 V, 0 V, 3 V} to segments of a piecewise constant control signal u, each 0.05 s in duration, and then numerically integrating the differential equation on the constant segments using RK4.
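One such discrete-time transition can be sketched as follows. This is our own illustration, not the authors' code: the RK4 substep count is an assumption (the paper does not state the internal integrator step), wrapping of α into [−π, π] is omitted, and the stochastic voltage scaling described below is not applied here.

```python
import math

# Pendulum parameters from the benchmark (section 6).
J, m, g, l = 1.91e-4, 0.055, 9.81, 0.042
b, K, R = 3e-6, 0.0536, 9.5

def deriv(state, u):
    """State derivative (alpha_dot, alpha_ddot) for applied voltage u."""
    alpha, alpha_dot = state
    alpha_ddot = (m * g * l * math.sin(alpha) - b * alpha_dot
                  - K * (K * alpha_dot + u) / R) / J
    return (alpha_dot, alpha_ddot)

def rk4_step(state, u, h):
    """One classical RK4 step of size h under constant voltage u."""
    k1 = deriv(state, u)
    k2 = deriv((state[0] + h / 2 * k1[0], state[1] + h / 2 * k1[1]), u)
    k3 = deriv((state[0] + h / 2 * k2[0], state[1] + h / 2 * k2[1]), u)
    k4 = deriv((state[0] + h * k3[0], state[1] + h * k3[1]), u)
    alpha = state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    alpha_dot = state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return (alpha, max(-15.0, min(15.0, alpha_dot)))  # saturate alpha_dot

def transition(state, action, duration=0.05, substeps=10):
    """Discrete-time transition: hold `action` volts for `duration` seconds."""
    h = duration / substeps
    for _ in range(substeps):
        state = rk4_step(state, action, h)
    return state

s = transition((-math.pi, 0.0), 3.0)   # one step from the initial state
```

Applying +3 V at the hanging-down rest state produces a negative angular acceleration (the motor torque term −Ku/R dominates), so the pendulum starts its first swing.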
The actions are applied stochastically: with probability 0.6, the intended voltage is applied in the control signal, whereas with probability 0.4, the smaller voltage 0.7a is applied. The goal is to stabilize the pendulum in the unstable equilibrium s* = (0, 0) (pointing up, at rest) when starting from state (−π, 0) (pointing down, at rest). This goal is expressed by the penalty function (s, a, s′) ↦ −5α′² − 0.1α̇′² − a², where s′ = (α′, α̇′). The reward function r is obtained by scaling and translating the values of the penalty function so that it maps to the interval [0, 1], with r(s, 0, s*) = 1. The discount factor is set to γ = 0.95.

[Figure 1 plot: sum of discounted rewards vs. calls to the simulator per step, for OP-MDP, UCT, FSSS, and ASOP.]
Figure 1: Comparison of ASOP to OP-MDP, UCT, and FSSS on the inverted pendulum benchmark problem, showing the sum of discounted rewards for simulations of 50 time steps.

The algorithms are compared for several budgets. In the cases of ASOP, UCT, and FSSS, the budget is in terms of calls to the simulator. OP-MDP does not use a simulator: instead, every possible successor state is incorporated into the planning tree, together with its precise probability mass, and each of these states is counted against the budget. As the benchmark problem is stochastic, and internal randomization (for the simulator) is used in all algorithms except OP-MDP, the performance is averaged over 50 repetitions. The algorithm parameters have been selected manually to achieve good performance. For ASOP, we show results for forest sizes of two and three. For UCT, the Chernoff-Hoeffding term multiplier is set to 0.2 (the results are not very sensitive to this value, therefore only one result is shown). For FSSS, we use one to three samples per state-action. For both UCT and FSSS, a rollout depth of seven is used. OP-MDP has no parameters. The results are shown in figure 1.
We observe that on this problem, ASOP performs much better than OP-MDP for every value of the budget, and also performs well in comparison to the other sampling-based methods, UCT and FSSS. Figure 2 shows the impact of optimistic planning on the performance of our aggregation method. For forest sizes of both one and three, optimistic planning leads to considerably increased performance. This is due to the greater planning depth in the look-ahead tree when using optimistic exploration. For the case of a single tree, performance on the stochastic problem decreases (presumably due to overfitting) as the budget increases. The effect disappears when more than one tree is used.

7 Conclusion

We introduced ASOP, a novel algorithm for solving online planning problems using a (randomized) simulator for the MDP, under a budget constraint. The algorithm works by constructing a forest of single successor state trees, each corresponding to a random realization of the MDP transitions. Each tree is constructed using a combination of safe and optimistic planning. An empirical MDP is defined based on the forest, and the first action of the optimal policy of this empirical MDP is returned. In short, our algorithm targets structured problems (where the value function possesses some smoothness property around the optimal policies of the deterministic realizations of the MDP, in a sense defined e.g. in [13]) by using the optimistic principle to focus rapidly on the most promising area(s) of the search space. It can also find a reasonable solution in unstructured problems, since part of the budget is allocated to uniform exploration. ASOP shows good performance on the inverted pendulum benchmark. Finally, our algorithm is also appealing in that the numerically heavy part of constructing the planning trees, in which the simulator is used, can be performed in a distributed way.
[Figure 2 plot: sum of discounted rewards vs. calls to the simulator per step, for the Safe, Optimistic, and Safe+Optimistic strategies with forest sizes one and three.]
Figure 2: Comparison of different planning strategies (on the same problem as in figure 1). The “Safe” strategy uses uniform planning in the individual trees; the “Optimistic” strategy uses OPD. ASOP corresponds to the “Safe+Optimistic” strategy.

Acknowledgements

We acknowledge the support of the BMBF project ALICE (01IB10003B), the European Community’s Seventh Framework Programme FP7/2007-2013 under grant no 270327 CompLACS, and the Belgian PAI DYSCO. Raphaël Fonteneau is a post-doctoral fellow of the F.R.S.-FNRS. We also thank Lucian Busoniu for sharing his implementation of OP-MDP.

Appendix: Counterexample to consistency when using purely optimistic planning in S3-trees

Consider the MDP in figure 3 with k zero-reward transitions in the middle branch, where γ ∈ (0, 1) and k ∈ N are chosen such that 1/2 > γ^k > 1/3 (e.g. γ = 0.95 and k = 14). The trees are constructed iteratively, and every iteration consists of exploring a leaf of maximal b-value, where exploring a leaf means introducing a single successor state per action at the selected leaf. The state-action values are

Q*(x, a) = (1/3)·1/(1−γ) + (2/3)·γ^k/(1−γ) > (1/3)·1/(1−γ) + (2/3)·(1/3)·1/(1−γ) = (5/9)·1/(1−γ)

and Q*(x, b) = (1/2)·1/(1−γ). There are two possible outcomes when sampling action a, which occur with probabilities 1/3 and 2/3, respectively:

Outcome I: The upper branch of action a is sampled. In this case, the contribution to the forest is an arbitrarily long reward-1 path for action a, and a finite reward-1/2 path for action b.

Outcome II: The lower branch of action a is sampled. Because γ^k/(1−γ) < (1/2)·1/(1−γ), the lower branch will be explored only up to k times, as its b-value is then lower than the value (and therefore any b-value) of action b.
The contribution of this case to the forest is a finite reward-0 path for action a and an arbitrarily long (depending on the budget) reward-1/2 path for action b.

For an increasing exploration budget per tree and an increasing number of trees, the approximate action values of actions a and b obtained by aggregation converge to (1/3)·1/(1−γ) and (1/2)·1/(1−γ), respectively. Therefore, the decision rule will select action b for a sufficiently large budget, even though a is the optimal action. This leads to a simple regret of R(x) = Q*(x, a) − Q*(x, b) > (1/18)·1/(1−γ).

[Figure 3 diagram: the counterexample MDP with initial state s0, actions a and b; action a leads with probability 1/3 to an upper reward-1 branch (I) and with probability 2/3 to a middle branch (II) of k zero-reward transitions followed by reward 1; action b leads to a reward-1/2 branch.]
Figure 3: The middle branch (II) of this MDP is never explored deeply enough if only the node with the largest b-value is sampled in each iteration. Transition probabilities are given in gray where not equal to one.

References

[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite time analysis of multiarmed bandit problems. Machine Learning, 47:235–256, 2002.
[2] L. Busoniu and R. Munos. Optimistic planning for Markov decision processes. In International Conference on Artificial Intelligence and Statistics (AISTATS), JMLR W&CP 22, pages 182–189, 2012.
[3] E. F. Camacho and C. Bordons. Model Predictive Control. Springer, 2004.
[4] R. Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. Computers and Games, pages 72–83, 2007.
[5] B. Defourny, D. Ernst, and L. Wehenkel. Lazy planning under uncertainty by optimizing decisions on an ensemble of incomplete disturbance trees. In Recent Advances in Reinforcement Learning – European Workshop on Reinforcement Learning (EWRL), pages 1–14, 2008.
[6] R. Fonteneau, L. Busoniu, and R. Munos. Optimistic planning for belief-augmented Markov decision processes. In IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2013.
[7] S. Gelly, Y. Wang, R. Munos, and O. Teytaud.
Modification of UCT with patterns in Monte-Carlo Go. Technical report, INRIA RR-6062, 2006.
[8] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, 1968.
[9] J. F. Hren and R. Munos. Optimistic planning of deterministic systems. Recent Advances in Reinforcement Learning, pages 151–164, 2008.
[10] J. E. Ingersoll. Theory of Financial Decision Making. Rowman and Littlefield Publishers, Inc., 1987.
[11] M. Kearns, Y. Mansour, and A. Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49(2-3):193–208, 2002.
[12] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. Machine Learning: ECML 2006, pages 282–293, 2006.
[13] R. Munos. From bandits to Monte-Carlo Tree Search: The optimistic principle applied to optimization and planning. To appear in Foundations and Trends in Machine Learning, 2013.
[14] S. A. Murphy. Optimal dynamic treatment regimes. Journal of the Royal Statistical Society, Series B, 65(2):331–366, 2003.
[15] J. Peters, S. Vijayakumar, and S. Schaal. Reinforcement learning for humanoid robotics. In IEEE-RAS International Conference on Humanoid Robots, pages 1–20, 2003.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning. MIT Press, 1998.
[17] T. J. Walsh, S. Goschin, and M. L. Littman. Integrating sample-based planning and model-based reinforcement learning. In AAAI Conference on Artificial Intelligence, 2010.
Action from Still Image Dataset and Inverse Optimal Control to Learn Task Specific Visual Scanpaths

Stefan Mathe¹,³ and Cristian Sminchisescu²,¹
¹Institute of Mathematics of the Romanian Academy of Science
²Department of Mathematics, Faculty of Engineering, Lund University
³Department of Computer Science, University of Toronto
stefan.mathe@imar.ro, cristian.sminchisescu@math.lth.se

Abstract

Human eye movements provide a rich source of information into human visual information processing. The complex interplay between the task and the visual stimulus is believed to determine human eye movements, yet it is not fully understood, making it difficult to develop reliable eye movement prediction systems. Our work makes three contributions towards addressing this problem. First, we complement one of the largest and most challenging static computer vision datasets, VOC 2012 Actions, with human eye movement recordings collected under the primary task constraint of action recognition, as well as, separately, for context recognition, in order to analyze the impact of different tasks. Our dataset is unique among eyetracking datasets of still images in terms of its large scale (over 1 million fixations recorded in 9157 images) and different task controls. Second, we propose Markov models to automatically discover areas of interest (AOI) and introduce novel sequential consistency metrics based on them. Our methods can automatically determine the number, the spatial support, and the transitions between AOIs, in addition to their locations. Based on such encodings, we quantitatively show that, for unconstrained real-world stimuli, task instructions have a significant influence on human visual search patterns, which are stable across subjects.
Finally, we leverage powerful machine learning techniques and computer vision features in order to learn task-sensitive reward functions from eye movement data, within models that allow us to effectively predict human visual search patterns based on inverse optimal control. The methodology achieves state-of-the-art scanpath modeling results.

1 Introduction

Eye movements provide a rich source of knowledge into human visual information processing and result from the complex interplay between the visual stimulus, prior knowledge of the visual world, and the task. This complexity poses a challenge to current models, which often require a complete specification of the cognitive processes and of the way visual input is integrated by them[4, 20]. The advent of modern eyetracking systems, powerful machine learning techniques, and visual features opens up the prospect of learning eye movement models directly from large real human eye movement datasets, collected under task constraints. This trend is still in its infancy; here we aim to advance it on several fronts:

• We introduce a large-scale dataset of human eye movements collected under the task constraints of both action and context recognition from a single image, for the VOC 2012 Actions dataset. The eye movement data is introduced in §3 and is publicly available at http://vision.imar.ro/eyetracking-voc-actions/.

• We present a model to automatically discover areas of interest (AOIs) from eyetracking data, in §4. The model integrates both spatial and sequential eye movement information, in order to better constrain estimates and to automatically identify the spatial support and the transitions between AOIs, in addition to their locations. We use the proposed AOI discovery tools to study inter-subject consistency and show that, on this dataset, task instructions have a significant influence on human visual attention patterns, both spatial and sequential. Our findings are presented in §5.

• We leverage the large amount of collected fixations and saccades in order to develop a novel, fully trainable, eye movement prediction model. The method combines inverse reinforcement learning and advanced computer vision descriptors in order to learn task-sensitive reward functions based on human eye movements. The model has the important property of being able to efficiently predict scanpaths of arbitrary length, by integrating information over a long time horizon. This leads to significantly improved estimates. Section §6.2 gives the model and its assessment.

Figure 1: Saliency maps obtained from the gaze patterns of 12 viewers under action recognition (left image in pair) and context recognition (right, in pair), from a single image. Note that human gaze significantly depends on the task (see tab. 1b for quantitative results). The visualization also suggests the existence of stable, consistently fixated areas of interest (AOIs). See fig. 2 for illustration.

2 Related Work

Human gaze pattern annotations have been collected both for static images[11, 13, 14, 12, 26, 18] and for video[19, 23, 15]; see [24] for a recent overview. Most of the available image datasets have been collected under free viewing, and the few task-controlled ones[14, 7] have been designed for small-scale studies. In contrast, our dataset is both task controlled and more than one order of magnitude larger than the existing image databases. This makes it adequate for applying machine learning techniques to saliency modeling and eye movement prediction. The influence of task on eye movements has been investigated in early human vision studies[25, 3] for picture viewing, but these groundbreaking studies have been fundamentally qualitative. Statistical properties like the saccade amplitude and the fixation duration have been shown to be influenced by the task[5]. A quantitative analysis of task influence on visual search in the context of action recognition from video appears in our prior work[19].
Human visual saliency prediction has received significant interest in computer vision (see [2] for an overview). Recently, the trend has been to learn saliency models from fixation data in images[13, 22] and video[15, 19]. The prediction of eye movements has been less studied. Predefined visual saliency measures can be used to obtain scanpaths[11] in conjunction with non-maximum suppression. Eye movements have also been modeled explicitly by maximizing the expected future information gain[20, 4] (as one step in [20] or until the goal is reached in [4]). These methods operate on pre-specified reward functions, which limits their applicability. The method we propose shares some resemblance with the latter methods, in that we also aim at maximizing the future expected reward, albeit our reward function is learned instead of being pre-specified, and we work in an inverse optimal control setting, which allows, in principle, an arbitrary time horizon. We are not aware of any eye movement models that are learned from eye movement data.

3 Action from a Single Image – New Human Eye Movement Dataset

One objective of this work is to introduce eye movement recordings for the PASCAL VOC image dataset used for action recognition. Presented in [10], it is one of the largest and most challenging available datasets of real-world actions in static images.

Figure 2: Illustration of areas of interest (AOI) obtained from scanpaths of subjects on three stimuli for the action (left) and context (right) recognition tasks. Ellipses depict states, scaled to match the learned spatial support, whereas dotted arrows illustrate high-probability saccades. Visual search patterns are highly consistent both spatially and sequentially and are strongly influenced by task. See fig. 3 and tab. 1 for quantitative results on spatial and sequential consistency.
It contains 9157 images, covering 10 classes (jumping, phoning, playing instrument, reading, riding bike, riding horse, running, taking photo, using computer, walking). Several persons may appear in each image. Multiple actions may be performed by the same person, and some instances belong to none of the 10 target classes.

Human subjects: We have collected data from 12 volunteers (5 male and 7 female) aged 22 to 46.

Task: We split the subjects into two groups based on the given task. The first, action group (8 subjects) was asked to recognize the actions in the image and indicate them from the labels provided by the PASCAL VOC dataset. To assess the effects of task on visual search, we asked the members of the second, context group (4 subjects), to find which of 8 contextual elements occur in the background of each image. Two of these contextual elements – furniture, painting/wallpaper – are typical of indoor scenes, while the remaining 6 – body of water, building, car/truck, mountain/hill, road, tree – occur mostly outdoors.

Recording protocol: The recording setup is identical to the one used in [19]. Before each image was shown, participants were required to fixate a target in the center of a uniform background on the screen. We asked subjects in the action group to solve a multi-target ‘detect and classify’ task: press a key each time they have identified a person performing an action from the given set, and also list the actions they have seen. The exposure time for this task was 3 seconds.¹ Their multiple-choice answers were recorded through a set of check-boxes displayed immediately after each image exposure. Participants in the context group underwent a similar protocol, with a slightly lower exposure time of 2.5 seconds. The images were shown to each subject in a different random order.

Dataset statistics: The dataset contains 1,085,381 fixations.
The average scanpath length is 10.0 for the action subjects and 9.5 for the context subjects, including the initial central fixation. The times elapsed from stimulus display until the first three key presses, averaged over trials in which they occur, are 1, 1.6 and 1.9 seconds, respectively. 4 Automatic Discovery of Areas of Interest and Transitions using HMMs Human fixations tend to cluster on salient regions that generally correspond to objects and object parts (fig. 1). Such areas of interest (AOI) offer an important tool for human visual pattern analysis, e.g. in evaluating inter-subject consistency [19] or the prediction quality of different saliency models. Manually specifying AOIs is both time-consuming and subjective. In this section, we propose a model to automatically discover the AOI locations, their spatial support and the transitions between them, from human scanpaths recorded for a given image. While this may appear straightforward, we are not aware of a similar model in the literature. In deriving the model, we aim for four properties. First, we want to be able to exploit not only human fixations, but also constraints from saccades. Consider the case of several human subjects fixating the face of a person and the book she is reading. Based on fixations alone, it can be difficult to separate the book and the person's face into two distinct AOIs due to proximity. Nevertheless, frequent saccades between the book and the person's face provide valuable hints for hypothesizing two distinct, semantically meaningful AOIs. Second, we wish to adapt to an unknown and varying number of AOIs in different images. Third, we want to estimate not only the center of each AOI, but also its spatial support and location uncertainty. Finally, we wish to find the transition probabilities between AOIs. To meet these criteria in a visual representation, we use a statistical model. (Footnote 1: The protocol may result in multiple keypresses per image.)
Exposure times were set empirically in a pilot study.

consistency measure       action recognition   context recognition
agreement                 92.2% ± 1.1%         81.3% ± 1.5%
cross-stimulus control    64.0% ± 0.7%         59.1% ± 0.9%
random baseline           50.0% ± 0.0%         50.0% ± 0.0%

Figure 3: (a) Spatial inter-subject consistency (table above) for the tasks of action and context recognition, with standard deviations across subjects. (b) ROC curves (detection rate versus false alarm rate, for both tasks) for predicting the fixations of one subject from the fixations of the other subjects in the same group on the same image (blue) or on an image (green) randomly selected from the dataset. See tab. 1 for sequential consistency results.

Image Specific Human Gaze Model: We model human gaze patterns in an image as a Hidden Markov Model (HMM) whose states {s_i}, i = 1, ..., n, correspond to AOIs fixated by the subjects and whose transitions correspond to saccades. The observations are the fixation coordinates z = (x, y). The emission probability for AOI i is a Gaussian: p(z|s_i) = N(z|μ_i, Σ_i), where μ_i and Σ_i model the center and the spatial extent of the area of interest (AOI) i. In training, we are given a set of scanpaths {δ_j = (z_1, z_2, ..., z_{t_j})}, j = 1, ..., k, and we find the parameters θ = {μ_i, Σ_i}, i = 1, ..., n, that maximize the joint log likelihood Σ_{j=1}^{k} log p(δ_j|θ), using EM [9]. We obtain AOIs, for each image and task, by training the HMM using the recorded human eye scanpaths. We compute the number of states N* that maximizes the leave-one-out cross-validation likelihood over the scanpaths within the training set, with N ∈ [1, 10]. We then re-train the model with N* states over the entire set of scanpaths. Results: Fig. 2 shows several HMMs trained from the fixations of subjects performing action recognition.
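The training procedure described above can be sketched as follows: a simplified, self-contained EM for a Gaussian-emission HMM with a scaled forward-backward pass, plus leave-one-out selection of the number of states. Function names, initialization and regularization constants are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def gauss_logpdf(Z, mu, Sigma):
    """Log-density of 2-D points Z (T x 2) under N(mu, Sigma)."""
    d = Z - mu
    Sinv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    q = np.einsum('ti,ij,tj->t', d, Sinv, d)
    return -0.5 * (q + logdet + 2.0 * np.log(2.0 * np.pi))

def fit_gaze_hmm(scanpaths, K, n_iter=30, seed=0):
    """EM for an HMM with Gaussian emissions over fixation coordinates."""
    rng = np.random.default_rng(seed)
    Z = np.concatenate(scanpaths)
    mu = Z[rng.choice(len(Z), K, replace=False)].astype(float)
    Sigma = np.stack([np.cov(Z.T) + 1e-2 * np.eye(2)] * K)
    pi = np.full(K, 1.0 / K)
    A = np.full((K, K), 1.0 / K)
    for _ in range(n_iter):
        N = np.zeros(K); M = np.zeros((K, 2)); S2 = np.zeros((K, 2, 2))
        pi_acc = np.zeros(K); A_acc = np.zeros((K, K))
        for z in scanpaths:
            T = len(z)
            logB = np.stack([gauss_logpdf(z, mu[k], Sigma[k]) for k in range(K)], 1)
            B = np.exp(logB - logB.max(1, keepdims=True))   # rescaled emissions
            alpha = np.zeros((T, K)); c = np.zeros(T)       # scaled forward pass
            alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[t]
                c[t] = alpha[t].sum(); alpha[t] /= c[t]
            beta = np.ones((T, K))                          # scaled backward pass
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
            gamma = alpha * beta                            # state posteriors
            xi = (alpha[:-1, :, None] * A[None] *
                  (B[1:] * beta[1:])[:, None, :] / c[1:, None, None])
            pi_acc += gamma[0]; A_acc += xi.sum(0)
            N += gamma.sum(0); M += gamma.T @ z
            S2 += np.einsum('tk,ti,tj->kij', gamma, z, z)
        pi = pi_acc / pi_acc.sum()
        A = A_acc / A_acc.sum(1, keepdims=True)
        mu = M / N[:, None]
        Sigma = (S2 / N[:, None, None]
                 - np.einsum('ki,kj->kij', mu, mu) + 1e-4 * np.eye(2))
    return pi, A, mu, Sigma

def scanpath_loglik(z, pi, A, mu, Sigma):
    """Log-likelihood of one held-out scanpath (scaled forward recursion)."""
    K = len(pi)
    logB = np.stack([gauss_logpdf(z, mu[k], Sigma[k]) for k in range(K)], 1)
    shift = logB.max(1)
    B = np.exp(logB - shift[:, None])
    a = pi * B[0]
    ll = np.log(a.sum()) + shift[0]; a /= a.sum()
    for t in range(1, len(z)):
        a = (a @ A) * B[t]
        ll += np.log(a.sum()) + shift[t]; a /= a.sum()
    return ll

def select_n_states(scanpaths, candidates=range(1, 11)):
    """Leave-one-out cross-validated choice of the number of AOIs (N*)."""
    best, best_ll = None, -np.inf
    for K in candidates:
        ll = sum(scanpath_loglik(scanpaths[i],
                                 *fit_gaze_hmm(scanpaths[:i] + scanpaths[i + 1:], K))
                 for i in range(len(scanpaths)))
        if ll > best_ll:
            best, best_ll = K, ll
    return best
```

The same forward recursion that scores held-out scanpaths is reused for the leave-one-out selection of N*, after which the model would be refit on all scanpaths with the chosen number of states.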
On average, the model discovers 8.0 AOIs for action recognition and 5.6 for context recognition. The recovered AOIs are task dependent and tend to center on objects and object parts with high task relevance, like phones, books, hands or legs. Context recognition AOIs generally appear on the background and have larger spatial support, in agreement with the scale of the corresponding structures. There is a small subset of AOIs that is common to both tasks. Most of these AOIs fall on faces, an effect that has also been noted in [6]. Interestingly, some AOI transitions suggest the presence of cognitive routines aimed at establishing relevant relationships between object parts, e.g. whether a person is looking at the manipulated object (fig. 2). The HMM allows us to visualize and analyze the sequential inter-subject consistency (§5) among subjects. It also allows us to evaluate the performance of eye movement prediction models (§6.2). 5 Consistency Analysis Qualitative studies in human vision [25, 16] have reported a high degree of agreement between the gaze patterns of humans in answering questions regarding static stimuli and have shown that gaze patterns are highly task dependent, although such findings have not yet been confirmed by large-scale quantitative analysis. In this section, we confirm these effects on our large scale dataset for action and context recognition, from a single image. We first study spatial consistency using saliency maps, then analyze sequential consistency in terms of AOI ordering under various metrics. Spatial Consistency: In this section, we evaluate the spatial inter-subject agreement in images. Evaluation Protocol: To measure the inter-subject agreement, we predict the regions fixated by a particular subject from a saliency map derived from the fixations of the other subjects on the same image. Samples represent image pixels and each pixel's score is given by the empirical saliency map derived from the training subjects [14].
Labels are 1 at pixels fixated by the test subject, and 0 elsewhere. For unbiased cross-stimulus control, we check how well a subject's fixations on one stimulus can be predicted from those of the other subjects on a different, unrelated, stimulus. The average precision for predicting fixations on the same stimulus is expected to be much greater than on different stimuli. Findings: The area under the curve (AUC) measured for the two subject groups and the corresponding ROC curves are shown in fig. 3. We find good inter-subject agreement for both tasks, consistent with previously reported results for both images and video [14, 19]. Sequential Consistency using AOIs: Next we evaluate the degree to which scanpaths agree in the order in which interesting locations are fixated. We do this as a three step process. First, we map each fixation to an AOI obtained with the HMM presented in §4, converting scanpaths to sequences of symbols. Then, we define two metrics for comparing scanpaths, and compute inter-subject agreement in a leave-one-out fashion, for each. Matching fixations to AOIs: We assign a subject's fixation to an AOI if it falls within the ellipse corresponding to that AOI's spatial support (fig. 2). If no match is found, we label the fixation as null. To account for noise, we allow the spatial support to be scaled by a factor. The dashed blue curve in fig. 4c-left shows the fraction (AOIP) of fixations of each human subject, with 2D positions that fall inside AOIs derived from scanpaths of other subjects, as a function of the scale factor. Through the rest of this section, we report results with the threshold set to twice the estimated AOI scale, which ensures a 75% fixation match rate across subjects in both task groups. AOI based inter-subject consistency: Once we have converted each scanpath to a sequence of AOI symbols, we define two metrics for inter-subject agreement.
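The matching step can be made concrete as follows; treating an AOI's spatial support as a 2σ ellipse of its Gaussian, and breaking ties by Mahalanobis distance, are our assumptions (function names ours).

```python
import numpy as np

def assign_fixation(z, mus, Sigmas, scale=2.0, base_radius=2.0):
    """Assign fixation z to the AOI whose scaled ellipse contains it, or None.

    Each AOI's spatial support is taken as the `base_radius`-sigma ellipse of
    its Gaussian (mu, Sigma); `scale` enlarges the support, as in the
    consistency analysis.  Among the AOIs whose ellipse contains z, the one
    with the smallest Mahalanobis distance wins; otherwise the fixation is
    labeled null (None).
    """
    best, best_d = None, np.inf
    for i, (mu, Sigma) in enumerate(zip(mus, Sigmas)):
        d = z - mu
        m = np.sqrt(d @ np.linalg.inv(Sigma) @ d)  # Mahalanobis distance
        if m <= scale * base_radius and m < best_d:
            best, best_d = i, m
    return best
```

Sweeping `scale` reproduces the kind of AOIP-versus-scale-factor curve discussed above.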
Given two sequences of symbols, the AOI transition (AOIT) metric is defined as the number of consecutive non-null symbol pairs (AOI transitions) that the two sequences have in common. The second metric (AOIS) is obtained by sequence alignment, as in [19], and represents the longest common subsequence between the two scanpaths. Both metrics are normalized by the length of the longer scanpath. To measure inter-subject agreement, we match the scanpath of each subject i to the scanpaths belonging to the other subjects, under the two metrics defined above. The value of the metric for the best match defines the leave-one-out agreement for subject i. We then average over all subjects. Baselines: In addition to inter-subject agreement, we define three baselines. First, for cross-stimulus control, we evaluate agreement as in the case of spatial consistency, when the test and reference scanpaths correspond to different randomly selected images. Second, for the random baseline, we generate for each image a set of 100 random scanpaths, where fixations are uniformly distributed across the image. The average metric assigned to these scanpaths with respect to the subjects represents the baseline for sequential inter-subject agreement, in the absence of bias. Third, we randomize the order of each subject's fixations in each image, while keeping their locations fixed, and compute inter-subject agreement with respect to the original scanpaths of the rest of the subjects. The initial central fixation is left unchanged during randomization. This baseline is intended to measure the amount of observed consistency due to the fixation order. Findings: Both metrics reveal considerable inter-subject agreement (table 1), with values significantly higher than for cross-stimulus control and the random baselines.
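The two metrics can be implemented as follows; reading "transitions in common" as the size of the multiset intersection of consecutive non-null pairs is our interpretation, and AOIS is a standard longest-common-subsequence dynamic program, both normalized by the longer scanpath.

```python
from collections import Counter

def aoit(seq_a, seq_b):
    """Shared AOI transitions, normalized by the longer scanpath length."""
    def transitions(seq):
        # consecutive pairs of non-null AOI symbols
        return Counter((u, v) for u, v in zip(seq, seq[1:])
                       if u is not None and v is not None)
    shared = sum((transitions(seq_a) & transitions(seq_b)).values())
    return shared / max(len(seq_a), len(seq_b))

def aois(seq_a, seq_b):
    """Longest-common-subsequence score, normalized by the longer scanpath."""
    n, m = len(seq_a), len(seq_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if seq_a[i - 1] == seq_b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[n][m] / max(n, m)
```

Null symbols (fixations that matched no AOI) contribute nothing to AOIT and, being represented by `None`, never align with each other in AOIS unless one chooses to allow that.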
When each subject's fixations are randomized, the fraction of matched saccades (AOIT) drops sharply, suggesting that sequential effects account for a significant share of the overall inter-subject agreement. The AOIS metric is less sensitive to these effects, as it allows for gaps in matching AOI sequences. (Footnote 2: Although harder to interpret numerically, the negative log likelihood of scanpaths under HMMs also defines a valid sequential consistency measure. We observe the following values for the action recognition task: agreement 9.2, agreement (random order) 13.1, cross-stimulus control 25.8, random baseline 46.6.) Influence of Task: Next, we study the influence of task on human visual search patterns. We compare the visual patterns of the two subject groups using saliency map and sequential AOI metrics. Evaluation Protocol: For each image, we derive a saliency map from the fixations of subjects doing action recognition, and report the average p-statistic at the locations fixated by subjects performing context recognition. We also compute agreement under the AOI-based metrics between the scanpaths of subjects performing context recognition and subjects from the action recognition group. Findings: Only 44.1% of fixations made during context recognition fall onto action recognition AOIs, with an average p-value of 0.28 with respect to the action recognition fixation distribution. Only 10% of the context recognition saccades have also been made by action recognition subjects, and the AOIS metric between the scanpaths of the two groups is 23.8%. This indicates significant differences between the subject groups in terms of their visual search patterns. 6 Task-Specific Human Gaze Prediction In this section, we show that it is possible to effectively predict task-specific human gaze patterns, both spatially and sequentially. To achieve this, we combine the large amounts of information available in our dataset with state-of-the-art visual features and machine learning techniques.
                          action recognition                  context recognition
consistency measure       AOIP        AOIT        AOIS        AOIP        AOIT        AOIS
agreement                 79.9%±1.9%  34.0%±1.3%  39.9%±1.0%  76.4%±2.6%  35.6%±0.9%  44.9%±0.4%
agreement (random order)  79.9%±1.9%  21.8%±0.7%  31.0%±0.7%  76.4%±2.6%  23.2%±0.3%  35.5%±0.3%
cross-stimulus control    29.4%±0.8%   4.9%±0.3%  13.9%±0.3%  40.0%±2.1%   7.9%±0.5%  19.6%±0.2%
random scanpaths          15.5%±0.1%   1.5%±0.0%   2.5%±0.0%  31.9%±0.1%   4.2%±0.0%   7.6%±0.0%

Table 1: Sequential inter-subject consistency measured using AOIs (fig. 2), for both task groups. A large fraction of each subject's fixations falls onto AOIs derived from the scanpaths of the other subjects (AOIP). Significant inter-subject consistency exists in terms of AOI transitions (AOIT) and scanpath alignment score (AOIS).

6.1 Task-Specific Human Visual Saliency Prediction We first study the prediction of human visual saliency maps. Human fixations typically fall onto image regions that are meaningful for the visual task (fig. 2). These regions often contain objects and object parts that have similar identities and configurations for each semantic class involved, e.g. the configuration of the legs while running. We exploit this repeatability and represent each human fixation by HoG descriptors [8]. We then train a sliding window detector on human fixations and compare it with competitive approaches reported in the literature. Evaluation Protocol: For each subject group, we obtain positive examples from fixated locations across the training portion of the dataset. Negative examples are extracted similarly, at random image locations positioned at least 3° away from all human fixations. We extract 7 HoG descriptors with different grid configurations and concatenate them, then represent the resulting descriptor using an explicit, approximate χ2 kernel embedding [17]. We train a linear SVM to obtain a detector, which we run in sliding window fashion over the test set in order to predict saliency maps.
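The idea of an explicit approximate χ2 embedding can be illustrated with a sampled homogeneous-kernel feature map; this is a sketch of the general principle, not the Chebyshev construction of [17] (the sampling period L, the number of frequencies n, and the function names are our choices).

```python
import numpy as np

def chi2_feature_map(x, n=3, L=0.5):
    """Approximate feature map for the additive chi-square kernel
    k(x, y) = sum_i 2 x_i y_i / (x_i + y_i), so that
    phi(x) . phi(y) ~= k(x, y) for nonnegative inputs."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x + 1e-12)
    feats = [np.sqrt(L * x)]                       # zero-frequency component
    for j in range(1, n + 1):
        # spectral density of the chi-square kernel signature: sech(pi * w)
        w = np.sqrt(2.0 * L * x / np.cosh(np.pi * j * L))
        feats.append(w * np.cos(j * L * logx))
        feats.append(w * np.sin(j * L * logx))
    return np.concatenate(feats)

def chi2_kernel(x, y):
    """Exact additive chi-square kernel, for comparison."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = np.maximum(x + y, 1e-12)
    return float(np.sum(2.0 * x * y / s))
```

A linear SVM trained on `chi2_feature_map(descriptor)` then behaves approximately like a χ2-kernel SVM while keeping the linear training and sliding-window evaluation cost.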
We evaluate the detector under the AUC metric and the spatial KL divergence criterion presented in [19]. We use three baselines for comparison. The first two are the uniform saliency map and the central bias map (with intensity inversely proportional to distance from center). As an upper bound on performance, we also compute saliency maps derived from the fixations recorded from subjects. The KL divergence score for this baseline is derived by splitting the human subjects into two groups and computing the KL divergence between the saliency maps derived from these two groups, while the AUC metric is computed in a leave-one-out fashion, as for spatial consistency. We compare the model with two state-of-the-art predictors. The first is the bottom-up saliency model of Itti & Koch [11]. The second is a learned saliency predictor introduced by Judd et al. [13], which integrates low and mid-level features with several high-level object detectors, such as cars and people, and can optimally weight these features given a training set of human fixations. Note that many of these objects often occur in the VOC 2012 actions dataset. Findings: Itti & Koch's model is not designed to predict task-specific saliency and cannot handle task influences on visual attention (fig. 4). Judd's model can adapt to some extent by adjusting feature weights, which were trained on our dataset. Out of the evaluated models, we find that the task-specific HoG detector performs best under both metrics, especially under the spatial KL divergence, which is relevant for computer vision applications [19]. Its flexibility stems from its large scale training using human fixations, the use of general-purpose computer vision features (as opposed, e.g., to the specific object detectors used by Judd et al. [13]), and in part from the use of a powerful nonlinear kernel for which good linear approximations are available [17, 1].
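The AUC evaluation treats every pixel as a scored sample; a minimal rank-based (Mann-Whitney) implementation, with tied scores given average ranks, can be sketched as follows (function names ours).

```python
import numpy as np

def fixation_auc(saliency, fixation_mask):
    """AUC for separating fixated pixels (label 1) from the rest (label 0)
    by their saliency scores, via the rank-sum (Mann-Whitney) statistic."""
    s = np.ravel(saliency).astype(float)
    y = np.ravel(fixation_mask).astype(bool)
    order = np.argsort(s, kind='mergesort')
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    # average the ranks within groups of tied scores
    vals, inv, counts = np.unique(s, return_inverse=True, return_counts=True)
    sums = np.zeros(len(vals))
    np.add.at(sums, inv, ranks)
    ranks = sums[inv] / counts[inv]
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Leave-one-out evaluation then scores the saliency map built from the other subjects' fixations against each held-out subject's fixation mask.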
6.2 Scanpath Prediction via Maximum Entropy Inverse Reinforcement Learning We now consider the problem of eye movement prediction under specific task constraints. Models of human visual saliency can be used to generate scanpaths, e.g. [11]. However, current models are designed to predict saliency for the free-viewing condition and do not capture the focus induced by the cognitive task. Others [20, 4] hypothesize that the reward driving eye movements is the expected future information gain. Here we take a markedly different approach. Instead of specifying the reward function, we learn it directly from large amounts of human eye movement data, by exploiting policies that operate over long time horizons. We cast the problem as Inverse Reinforcement Learning (IRL), where we aim to recover the intrinsic reward function that induces, with high probability, the scanpaths recorded from human subjects solving a specific visual recognition task. Our learned model can imitate useful saccadic strategies associated with cognitive processes involved in complex tasks such as action recognition, but avoids the difficulty of explicitly specifying these processes.

(a) human visual saliency prediction:
                          action recognition   context recognition
feature                   KL      AUC          KL      AUC
baselines:
  uniform baseline        12.00   0.500        11.02   0.500
  central bias             9.59   0.780         8.82   0.685
  human                    6.14   0.922         5.90   0.813
predictors:
  HOG detector*            8.54   0.736         8.10   0.646
  Itti & Koch [11]        16.53   0.533        15.04   0.512
  Judd et al. [13]*       11.00   0.715         9.66   0.636

(b) eye movement prediction:
                          action recognition      context recognition
feature                   AOIP   AOIT   AOIS      AOIP   AOIT   AOIS
baselines:
  human scanpaths         79.9%  34.0%  39.9%     76.4%  35.6%  44.9%
  random scanpaths        15.5%   1.5%   2.5%     31.9%   4.2%   7.6%
predictors:
  IRL*                    35.6%   6.6%  18.4%     44.9%  11.6%  25.7%
  Renninger [20]          24.4%   2.0%  14.6%     40.3%   7.0%  23.9%
  Itti & Koch [11]        28.6%   2.7%  16.8%     42.9%   7.5%  24.1%

(c) AOIP, AOIT and AOIS scores plotted as a function of the AOI scale factor, for the agreement, cross-stimulus, random and cross-task baselines and for the Itti & Koch, Renninger et al. and IRL predictors (curves omitted here).

Figure 4: Task-specific human gaze prediction performance on the VOC 2012 actions dataset. (a) Our trained HOG detector outperforms existing saliency models, when evaluated under both the KL divergence and AUC metrics. (b-c) Learning techniques can also be used to predict eye movements under task constraints. Our proposed Inverse Reinforcement Learning (IRL) model better matches observed human visual search scanpaths when compared with two existing methods, under each of the AOI based metrics we introduce. Methods marked by '*' have been trained on our dataset.

Problem Formulation: We model a scanpath δ as a sequence of states s_t = (x_t, y_t) and actions a_t = (Δx, Δy), where states correspond to fixations, represented by their visual angular coordinates with respect to the center of the screen, and actions model saccades, represented as displacement vectors expressed in visual degrees. We rely on a maximum entropy IRL formulation [27] to model the distribution over the set Δ^(s,T) of all possible scanpaths of length T starting from state s for a given image as:

p_θ^(s,T)(δ) = (1 / Z^(T)(s)) · exp[ Σ_{t=1}^{T} r_θ(s_t, a_t) ],   ∀δ ∈ Δ^(s,T)   (1)

where r_θ(s_t, a_t) is the reward function associated with taking the saccadic action a_t while fixating at position s_t, θ are the model parameters, and Z^(T)(s) is the partition function for paths of length T starting at state s, see (3). The reward function r_θ(s_t, a_t) = f(s_t)^⊤ θ_{a_t} is the inner product between a feature vector f(s_t) extracted at image location s_t and a vector of weights corresponding to action a_t. Note that reward functions in our formulation depend on the subject's action.
This enables the model to encode saccadic preferences conditioned on the current observation, in addition to planning future actions by maximizing the cumulative reward along the entire scanpath, as implied by (1). In our formulation, the goal of Maximum Entropy IRL is to find the weights θ that maximize the likelihood of the demonstrated scanpaths across all the images in the dataset. For a single image and given the set of human scanpaths E, all starting at the image center s_c, the likelihood is:

L_θ = (1/|E|) Σ_{δ∈E} log p_θ^(s_c,T)(δ)   (2)

This maximization problem can be solved using a two-step dynamic programming formulation. In the backward step, we compute the state and state-action partition functions for each possible state s and action a, and for each scanpath length i = 1, ..., T:

Z_θ^(i)(s) = Σ_{δ∈Δ^(s,i)} exp[ Σ_{t=1}^{i} r_θ(s_t, a_t) ],   Z_θ^(i)(s, a) = Σ_{δ∈Δ^(s,i), a_1=a} exp[ Σ_{t=1}^{i} r_θ(s_t, a_t) ]   (3)

The optimal policy π_θ^(i) at the i-th fixation is:

π_θ^(i)(a|s) = Z_θ^(T−i+1)(s, a) / Z_θ^(T−i+1)(s)   (4)

This policy induces the maximum entropy distribution p_θ^(s_c,T) over scanpaths for the image and is used in the forward step to efficiently compute the expected mean feature count for each action a, which is f̂_θ^a = E_{δ∼p_θ^(s_c,T)}[ Σ_{t=1}^{T} f(s_t) · I[a_t = a] ], where I[·] is the indicator function. The gradient of the likelihood function (2) with respect to the parameters θ_a is:

∂L_θ / ∂θ_a = f̃^a − f̂_θ^a   (5)

where f̃^a = (1/|E|) Σ_{δ∈E} Σ_t f(s_t) · I[a_t = a] is the empirical feature count along the training scanpaths. Eqs. (1)–(5) are defined for a given input image. The likelihood and its gradient over the training set are obtained by summing up the corresponding quantities. In our formulation, policies encode the image-specific strategy of the observer, based on a task-specific reward function that is learned across all images. We thus learn two different IRL models, for action and context analysis.
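On a small tabular problem with deterministic saccade dynamics, the backward step (3) and the policy (4) can be sketched in log-space as follows; the toy state/action spaces, the deterministic `step` table and all names are ours, not the paper's implementation.

```python
import numpy as np

def maxent_policies(reward, step, T):
    """Backward dynamic programming for eqs. (3)-(4) on a tabular problem.

    reward : (S, A) array with entries r(s, a)
    step   : (S, A) integer array, deterministic successor state of (s, a)
    Returns a list where policies[i-1][s, a] = pi^(i)(a | s), i = 1..T.
    """
    logZ_sa = reward.copy()                       # log Z^(1)(s, a) = r(s, a)
    tables = {1: logZ_sa}
    logZ_s = {1: np.logaddexp.reduce(logZ_sa, axis=1)}
    for i in range(2, T + 1):
        # log Z^(i)(s, a) = r(s, a) + log Z^(i-1)(next(s, a))
        logZ_sa = reward + logZ_s[i - 1][step]
        tables[i] = logZ_sa
        logZ_s[i] = np.logaddexp.reduce(logZ_sa, axis=1)
    policies = []
    for i in range(1, T + 1):
        h = T - i + 1                             # remaining horizon at fixation i
        policies.append(np.exp(tables[h] - logZ_s[h][:, None]))
    return policies
```

The forward step would then roll these per-fixation policies out from the central state s_c to accumulate the expected feature counts needed for the gradient (5).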
Note that we restrict ourselves to scanpaths of length T starting from the center of the screen and do not predefine goal states. We set T, by validation, to the average scanpath length in the dataset. Experimental Procedure: We use a fine grid with a 0.25° step size for the state space. The space of all possible saccades on this grid is too large to be practical (≈10^5). We obtain a reduced vocabulary of 1,000 actions by clustering the saccades in the training set, using k-means. We then encode all scanpaths in this discrete (state, action) space, with an average positional error of 0.47°. We extract HoG features at each grid point and augment them with the output of our saliency detector. We optimize the weight vector θ in the IRL framework and use a BFGS solver for fast convergence. Findings: The trained MaxEnt IRL eye movement predictor performs better than the bottom-up models of Itti & Koch [11] and Renninger et al. [20] (fig. 4b-c). The model is particularly powerful at predicting saccades (see the AOIT metric), as it matches more than twice the number of AOI transitions generated by bottom-up models for the action recognition task. It also outperforms the other models under the AOIP and AOIS metrics. Note that the latter only captures the overall ranking among AOIs, as defined by the order in which they are fixated. A gap still remains to human performance, underlining the difficulty of predicting eye movements in real world images and for complex tasks such as action recognition. For context recognition, prediction scores are generally closer to the human baseline. This is, at least in part, facilitated by the often larger size of background structures as compared to the humans or the manipulated objects involved in actions (fig. 2). 7 Conclusions We have collected a large set of eye movement recordings for VOC 2012 Actions, one of the most challenging datasets for action recognition in still images.
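Building the action vocabulary amounts to clustering training-set saccade displacement vectors; a minimal Lloyd's-iteration sketch follows (vocabulary size, iteration count and function names are illustrative, not the authors' code).

```python
import numpy as np

def saccade_vocabulary(saccades, k, n_iter=20, seed=0):
    """Cluster saccade displacement vectors (in visual degrees) with k-means."""
    rng = np.random.default_rng(seed)
    centers = saccades[rng.choice(len(saccades), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each saccade to its nearest center, then recompute centers
        d = np.linalg.norm(saccades[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = saccades[labels == j].mean(axis=0)
    return centers, labels

def encode(saccades, centers):
    """Map each saccade to the index of its nearest vocabulary action."""
    d = np.linalg.norm(saccades[:, None, :] - centers[None], axis=2)
    return d.argmin(axis=1)
```

Encoding a scanpath then replaces each continuous saccade by its nearest discrete action, which introduces the kind of small average positional error quoted above.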
Our data is obtained under the task constraints of action and context recognition and is made publicly available. We have leveraged this large amount of data (over 1 million human fixations) in order to develop Hidden Markov Models that allow us to determine fixated AOI locations, their spatial support and the transitions between them automatically from eyetracking data. This technique has made it possible to develop novel evaluation metrics and to perform quantitative analysis regarding inter-subject consistency and the influence of task on eye movements. The results reveal that given real world unconstrained image stimuli, the task has a significant influence on the observed eye movements, both spatially and sequentially. At the same time, such patterns are stable across subjects. We have also introduced a novel eye movement prediction model that combines state-of-the-art reinforcement learning techniques with advanced computer vision operators to learn task-specific human visual search patterns. To our knowledge, the method is the first to learn eye movement models from human eyetracking data. When measured under various evaluation metrics, the model shows superior performance to existing bottom-up eye movement predictors. To close the gap to human performance, we will explore better image features and more complex joint state and action spaces within reinforcement learning schemes in future work. Acknowledgments: Work supported in part by CNCS-UEFISCDI under CT-ERC-2012-1. References [1] E. Bazavan, F. Li, and C. Sminchisescu. Fourier kernel learning. In European Conference on Computer Vision, 2012. [2] A. Borji and L. Itti. State-of-the-art in visual attention modelling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 2011. [3] G. T. Buswell. How People Look at Pictures: A Study of the Psychology of Perception in Art. Chicago University Press, 1935. [4] N. J. Butko and J. R. Movellan. Infomax control of eye movements.
IEEE Transactions on Autonomous Mental Development, 2:91–107, 2010. [5] M. S. Castelhano, M. L. Mack, and J. M. Henderson. Viewing task influences eye movement control during active scene perception. Journal of Vision, 9, 2008. [6] M. Cerf, E. P. Frady, and C. Koch. Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9, 2009. [7] M. Cerf, J. Harel, W. Einhauser, and C. Koch. Predicting human gaze using low-level saliency combined with face detection. In Advances in Neural Information Processing Systems, 2007. [8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE International Conference on Computer Vision and Pattern Recognition, 2005. [9] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 1977. [10] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html. [11] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 2000. [12] T. Judd, F. Durand, and A. Torralba. Fixations on low resolution images. In IEEE International Conference on Computer Vision, 2009. [13] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In IEEE International Conference on Computer Vision, 2009. [14] K. A. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva. Modeling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17, 2009. [15] W. Kienzle, B. Scholkopf, F. Wichmann, and M. Franz. How to find interesting locations in video: a spatiotemporal interest point detector learned from human eye movements. In DAGM, 2007. [16] M. F. Land and B. W. Tatler. Looking and Acting. Oxford University Press, 2009. [17] F. Li, G.
Lebanon, and C. Sminchisescu. Chebyshev approximations to the histogram χ2 kernel. In IEEE International Conference on Computer Vision and Pattern Recognition, 2012. [18] E. Marinoiu, D. Papava, and C. Sminchisescu. Pictorial human spaces: How well do humans perceive a 3d articulated pose? In IEEE International Conference on Computer Vision, 2013. [19] S. Mathe and C. Sminchisescu. Dynamic eye movement datasets and learnt saliency models for visual action recognition. In European Conference on Computer Vision, 2012. [20] L. W. Renninger, J. Coughlan, P. Verghese, and J. Malik. An information maximization model of eye movements. In Advances in Neural Information Processing Systems, pages 1121–1128, 2004. [21] R. Subramanian, H. Katti, N. Sebe, M. Kankanhalli, and T.-S. Chua. An eye fixation database for saliency detection in images. In European Conference on Computer Vision, 2010. [22] A. Torralba, A. Oliva, M. Castelhano, and J. Henderson. Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 2006. [23] E. Vig, M. Dorr, and D. D. Cox. Space-variant descriptor sampling for action recognition based on saliency and eye movements. In European Conference on Computer Vision, 2012. [24] S. Winkler and R. Subramanian. Overview of eye tracking datasets. In International Workshop on Quality of Multimedia Experience, 2013. [25] A. Yarbus. Eye Movements and Vision. Plenum Press, New York, 1967. [26] K. Yun, Y. Peng, D. Samaras, G. J. Zelinsky, and T. L. Berg. Studying relationships between human gaze, description and computer vision. In IEEE International Conference on Computer Vision and Pattern Recognition, 2013. [27] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence, 2008.
Extracting regions of interest from biological images with convolutional sparse block coding Marius Pachitariu1, Adam Packer2, Noah Pettit2, Henry Dalgleish2, Michael Hausser2 and Maneesh Sahani1 1Gatsby Unit, UCL, UK {marius, maneesh}@gatsby.ucl.ac.uk 2The Wolfson Institute for Biomedical Research, UCL, UK {a.packer, noah.pettit.10, henry.dalgleish.09, m.hausser}@ucl.ac.uk Abstract Biological tissue is often composed of cells with similar morphologies replicated throughout large volumes and many biological applications rely on the accurate identification of these cells and their locations from image data. Here we develop a generative model that captures the regularities present in images composed of repeating elements of a few different types. Formally, the model can be described as convolutional sparse block coding. For inference we use a variant of convolutional matching pursuit adapted to block-based representations. We extend the KSVD learning algorithm to subspaces by retaining several principal vectors from the SVD decomposition instead of just one. Good models with little cross-talk between subspaces can be obtained by learning the blocks incrementally. We perform extensive experiments on simulated images and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives. We fit the convolutional model to noisy GCaMP6 two-photon images of spiking neurons and to Nissl-stained slices of cortical tissue and show that it recovers cell body locations without supervision. The flexibility of the block-based representation is reflected in the variability of the recovered cell shapes. 1 Introduction For evolutionary reasons, biological tissue at all spatial scales is composed of repeating patterns. This is because successful biological motifs are reused and multiplied by evolutionary pressures.
At a small spatial scale, eukaryotic cells contain only a few types of major organelles, like mitochondria and vacuoles, and several dozen minor organelles, like vesicles and ribosomes. Each of the organelles is replicated a large number of times within each cell and has a distinctive visual appearance. At the scale of whole cells, most tissue types, like muscle and epithelium, are composed primarily of single cell types. Some of the more diverse biological tissues are probably in the brain, where gray matter contains different types of neurons and glia, often spatially overlapping. Repetition is also encouraged at large spatial scales. Striated muscles are made out of similar axially-aligned fibers called sarcomeres, and human cortical surfaces are highly folded inside the skull, producing repeating surface patterns called gyri and sulci. Much biological data at all spatial scales comes in the form of two- or three-dimensional images. Non-invasive techniques like magnetic resonance imaging allow visualization of details on the order of one millimeter. Cells in tissue can be seen with light microscopy and cellular organelles can be seen with the electron microscope. Given the stereotypical nature of biological motifs, these images often appear as collections of similar elements over a noisy background, as shown in figure 1(a). We developed a generative image model that automatically discovers the repeating motifs, and segments biological images into the most common elements that form them. We apply the model to two-dimensional images composed of several hundred cells of possibly different types, such as images of cortical tissue expressing fluorescent GCaMP6, a calcium indicator, taken with a two-photon microscope in vivo. We also apply the model to Nissl-stained cortical tissue imaged in slice.

Figure 1: (a) Mean image of a two-photon recording of calcium-based fluorescence. (b) Same image as in (a) after local subtractive and divisive normalization.
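The local subtractive and divisive normalization shown in figure 1(b) can be sketched with box-filtered local means and standard deviations computed from an integral image; the window radius and the stabilizing epsilon are illustrative choices, not the values used for the figure.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window at each pixel, edge-padded,
    computed with an integral image."""
    p = np.pad(img, r, mode='edge').astype(float)
    ii = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
    H, W = img.shape
    w = 2 * r + 1
    s = ii[w:, w:] - ii[:H, w:] - ii[w:, :W] + ii[:H, :W]
    return s / w ** 2

def local_normalize(img, r=8, eps=1e-3):
    """Subtractive then divisive normalization with local statistics."""
    m = box_mean(img, r)
    var = np.clip(box_mean(img ** 2, r) - m ** 2, 0.0, None)
    return (img - m) / (np.sqrt(var) + eps)
```

This kind of preprocessing flattens the large variations in luminance and contrast across a single field of view before cell detection.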
Each experimental exposure can contain hundreds of cells and many exposures are usually taken over a single experimental session. Our main aim is to automate the cell detection stage, because tracing cell contours by hand can be a laborious and inexact process, especially given the multitude of confounds usually present in these images. One confound clearly visible in figure 1(a) is the large variation in contrast and luminance over a single image. A second confound, also visible in figure 1(a), is that many cells tend to cluster together and press their boundaries against each other. Assigning pixels to the correct cell can be difficult. A third confound is that calcium, the marker which the fluorescent images report, is present in the entire neuropil (in the dendrites and axons of the cells). Activation of calcium in the neuropil makes a noisy background for the estimation of cell somata. Given such large confounds, a properly-formulated image model is needed to resolve the ambiguities as well as the human eye can resolve them.

1.1 Background on automated extraction of cell somata

Histological examination of biological tissue with light-microscopy is an important application for techniques of cell identification and segmentation. Most algorithms for identifying cell somata from such images are based on hand-crafted filtering and thresholding techniques. For example, [1] proposes a pipeline of as many as fourteen separate steps, each of which is meant to deal with some particular dimension of variability in the images. Our approach is to instead propose a fully generative model of the biological tissue which encapsulates our beliefs about the stereotypical structure of such images. Inference in the model inverts the generative model — or in other words deconvolves the image — and thereby replaces the filtering and thresholding techniques usually employed. Learning the parameters of the generative model replaces the hand-crafting of the filters and thresholds.
For one image type we use here, fluorescent images of neuronal tissue, the approach of [2] is closer in spirit to our methodology of model design and inference. The authors propose an independent components analysis (ICA) model of the movies which expresses their belief that all the pixels belonging to a cell should brighten together, but only rarely. The model effectively uses the temporal correlations between pixels to segment each image, much like [3], but the pipeline of [3] is manual and not model-designed like that of [2]. Both of these studies are different from our approach, because we aim to recover cell bodies from single images alone. The method of [2] applies well to small fields of view and large coherent fluorescence fluctuations in single cells, but fails when applied to our data with large fields of view containing hundreds of small neurons. The failure is due to long-range spatial correlations between many thousands of pixels which overcome the noisy correlations between the few dozen pixels belonging to each cell. Consequently, the independent components extracted by the algorithm of [2] (available online at http://www.snl.salk.edu/~emukamel/) have large spatial domains, as can be seen in supplemental figure 1. Our approach is robust to large non-local correlations because we analyze the mean image alone. One advantage is that the resulting model can be applied not just to data from functional imaging experiments but to data from any imaging technique.

1.2 Background on convolutional image models

Our proposed image model is a novel extension of a family of recent algorithms based on sparse coding that are commonly used in object recognition experiments [4], [5], [6], [7], [8]. A starting point for our model was the convolutional matching pursuit (MP) implementation of [5] (but see [6] for more details). The authors show that convolutional MP learns a diverse set of basis functions from natural images.
Most of these basis functions are edges, but some have a globular appearance and others represent curved edges and corners. Their implied generative model of an image is to pick out randomly a few basis functions and place them at random locations. While this is a poor generative model for natural images, it is much better suited to biological images which are composed of many repeating and seemingly randomly distributed elements of a few different types. One disadvantage of convolutional MP as described by [6] is that it uses fixed templates for each dictionary element. Although it seems like the cells in figure 1(b) might be well described by a single ring shape, there are size and shape variations which could be better captured by more flexible templates. In general, we expect the repeating elements in a biological image to have similar appearances to a first approximation, but patterned variability is unavoidable. A better model of the image of a single cell might be to assume it was generated by combining a few different prototypes with different coefficients, effectively interpolating between the prototypes. We group the prototypes related to a single object into blocks and every image is formed by activating a small number of such blocks. We call this model sparse block coding. Note that the blocking principle is common in natural image modelling, where Gabor filters in quadrature are combined with different coefficients to produce edges of different spatial phases. Independent subspace analysis (ISA [7]) also entails distributing basis functions into non-overlapping blocks. However, in our formulation the blocks are either activated or not, while ISA assumes a continuous distribution on the activations of each block. This property of sparse block coding makes it valuable in making hard assignments of inferred cell locations, rather than giving a continuous coefficient for each location. 
Closer to our formulation, [8] have used a similar sparse block coding model on natural movie patches and added a temporal smoothness prior on the activation probabilities of blocks in consecutive movie frames. The expensive variational iterative techniques used by [8] for inference and learning in small image patches are computationally infeasible for the convolutional model of large images we present here. Instead, we use a convolutional block pursuit technique which is an extension of standard matching pursuit and has similarly low computational complexity even for arbitrarily large blocks and arbitrarily large images.

2 Model

2.1 Convolutional sparse block coding

Following [8], we distinguish between identity and attribute variables in the generative model of each object in an image. An object can be a cell, a cell fragment or any other spatially-localized object. Identity variables h^k_{xy}, where (x, y) is the location of the object and k the type of object, are Bernoulli-distributed with very small prior probabilities. Each of the objects also has several continuous-valued attribute variables x^{kl}_{xy}, with l indexing the attribute. In the generative model these attributes are given a broad uniform probability and specify the coefficients with which a set of basis functions A^{kl} are combined at spatial location (x, y) before being linearly combined with objects generated at other locations.
The full description of the generative process is best captured in terms of two-dimensional convolutions by the following set of equations:

h^k_{xy} ~ Bernoulli(p)
x^{kl}_{xy} ~ N(0, σ_x^2)
y ~ Σ_{k,l} A^{kl} ∗ (x^{kl} ∘ h^k) + N(0, σ_y),

where σ_y is the (small) noise variance for the image, σ_x is the (large) prior variance for the coefficients, p is a small activation probability specific to each object type, h^k and x^{kl} represent the full two-dimensional maps of the binary and continuous coefficients respectively, "∘" represents the elementwise or Hadamard product, and "∗" denotes two-dimensional convolution where the result is taken to have the same dimensions as the input image (in other words, the convolution uses zero-padding). The joint log-likelihood (or negative energy) can now be derived easily:

L(x, h, A) = − ‖y − Σ_{k,l} A^{kl} ∗ (x^{kl} ∘ h^k)‖² / (2σ_y²) − Σ_{k,l,x,y} (x^{kl}_{xy})² / (2σ_x²) + Σ_{k,x,y} [ h^k_{xy} log p + (1 − h^k_{xy}) log(1 − p) ] + constants   (1)

In practice, we used σ_x = ∞ as we found that it gave similar results to finite values of σ_x. This model can be fit by alternately optimizing the cost function in equation 1 over the unobserved variables x and h and the parameters A. The prior bias parameter p will not be optimized over but instead will be adjusted so as to guarantee a mean number of elements per image. We also set ‖A^{kl}‖ = 1 without loss of generality, since the absolute values of x can scale to compensate.

2.2 Inference by convolutional block pursuit

Given a set of basis functions A^{kl} and an image y, we would like to infer the most likely locations of objects of each type in an image. This inference is generally NP-hard, but good solutions can nonetheless be obtained with greedy methods like matching pursuit (MP). In standard matching pursuit, a sequential process is followed where at each step a basis function A^{kl} is chosen which, if activated, most increases the log-likelihood of equation 1. In our model, at each step we activate a full block k which includes multiple templates A^{kl}.
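A sketch of a sampler for this generative process is below. The array shapes, variable names, and FFT-based convolution are our choices; "same"-mode convolution stands in for the zero-padded convolution in the text, and we treat σ_x and σ_y as standard deviations (with σ_x finite rather than the improper σ_x = ∞ used in practice).

```python
import numpy as np
from scipy.signal import fftconvolve

def sample_image(A, p, sigma_x, sigma_y, shape, rng):
    """Draw one image from the convolutional sparse block coding model.

    A       : array (K, L, d, d) -- L basis functions per block, K block types
    p       : Bernoulli prior probability of the identity maps h^k
    sigma_x : prior std of the attribute coefficients x^{kl}
    sigma_y : pixel noise std
    shape   : (height, width) of the generated image
    """
    K, L, d, _ = A.shape
    y = rng.normal(0.0, sigma_y, size=shape)           # pixel noise
    for k in range(K):
        h = rng.random(shape) < p                      # identity map h^k
        for l in range(L):
            x = rng.normal(0.0, sigma_x, size=shape)   # attribute map x^{kl}
            # A^{kl} * (x^{kl} o h^k), summed over blocks and attributes
            y += fftconvolve(x * h, A[k, l], mode="same")
    return y
```

With small p, the sample is a sparse scatter of objects over noise, qualitatively like the simulated images used in section 3.2.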
Due to the quadratic nature of equation 1, for a proposal h^k_{xy} = 1 we can easily compute the MAP estimate for each x^k_{xy} given the current residual image

y_res = y − Σ_{k,l} A^{kl} ∗ (x^{kl} ∘ h^k).

Here we understand x^k_{xy} as a vector concatenating x^{kl}_{xy} for all l. The MAP estimate for x^k_{xy} is

x̂^k_{xy} = ((A^k)^T A^k)^{−1} v^k_{xy},   with   v^k_{xy}(l) = (Ā^{kl} ∗ y_res)_{xy},

where Ā^{kl} is the basis function A^{kl} rotated by 180 degrees and the matrix A^k contains as columns the vectorized basis functions A^{kl}. The corresponding increase in likelihood in equation 1 is

δL^k_{xy} = (v^k_{xy})^T x̂^k_{xy} / (2σ_y²) − log[(1 − p)/p].

Inference stops when the activation penalty log[(1 − p)/p] from the prior overcomes the data term for all possible objects k at all possible locations (x, y). A simple trick common to all matching pursuit algorithms [9], [6] allows us to save computation when sequentially calculating v^{kl}_{xy} = (Ā^{kl} ∗ y_res)_{xy} by keeping track of v and updating it after each new coefficient is turned on:

v_new = v − G_{(····),(k·xy)} x̂^k_{xy},

where G is the grand Gram matrix of all basis functions A^{kl} at all positions (x, y), and the indexing means that every dot runs over all possible values of that index. Because the basis functions are much smaller in length and width than the entire image, most entries in the Gram matrix are actually 0. In practice, we do not keep track of these and instead keep track only of G_{(k′l′x′y′),(klxy)} for |x − x′| < d and |y − y′| < d, where d is the width and length of the basis functions. We also keep track during inference of x̂ and δL^k_{xy}, and only need to update these quantities at positions (x, y) around the extracted object. These caching techniques make the complexity of the inference scale linearly with the number of objects in each image, regardless of image or object size. Thus, our algorithm benefits from the computational efficiency of matching pursuit. One additional computation lies in determining the inverse of (A^k)^T A^k for each k.
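The per-location computation behind one step of block pursuit might be sketched as follows. This scores every location for a single block; the Gram-matrix caching described above is omitted for clarity, and all names are ours.

```python
import numpy as np
from scipy.signal import correlate2d

def block_scores(A_k, y_res, sigma_y, p):
    """Score every location (x, y) for one block k.

    A_k     : (L, d, d) basis functions of block k, assumed unit norm
    y_res   : current residual image
    Returns (x_hat, delta_L): MAP coefficient maps and likelihood gains.
    """
    L = A_k.shape[0]
    # v[l] = (A^{kl} rotated 180 degrees) * y_res == cross-correlation
    v = np.stack([correlate2d(y_res, A_k[l], mode="same") for l in range(L)])
    mats = A_k.reshape(L, -1)                  # rows = vectorized A^{kl}
    gram_inv = np.linalg.inv(mats @ mats.T)    # ((A^k)^T A^k)^{-1}
    x_hat = np.einsum("ij,jxy->ixy", gram_inv, v)
    data_term = (v * x_hat).sum(axis=0) / (2.0 * sigma_y**2)
    delta_L = data_term - np.log((1.0 - p) / p)  # stop when all gains < 0
    return x_hat, delta_L
```

A full greedy pass would repeatedly pick the (k, x, y) with the largest positive gain, subtract that block's reconstruction from the residual, and stop once every gain is negative.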
This cost is negligible, since each block contains a small number of attributes and we only need to do the inversions once per iteration. Every iteration of block pursuit requires updating v, x̂ and δL^k_{xy} locally around the extracted block, which is several times more expensive than the corresponding update in simple matching pursuit. However, this cost is also negligible compared to the cost of finding the best block at each iteration: the single most intensive operation during inference is the loop through all the elements in all the convolutional maps to find the block which most increases the likelihood if activated. All the other update operations are local around the extracted block, and thus negligible. In practice for the datasets we use (for example, 18 images of 256 by 256 pixels each), a model can be learned in minutes on a modern CPU and inference on a single large image takes under one second.

2.3 Learning with block K-SVD

Given the inferred active blocks and their coefficients, we would like to adapt the parameters of the basis functions A^{kl} so as to maximize the cost function in equation 1. This can most easily be accomplished by gradient descent (GD). Unfortunately, for general dictionary learning setups gradient descent can produce suboptimal solutions, where a proportion of the basis functions fail to learn meaningful structure [10]. Similarly, for our block-based representations we found that gradient descent often mixed together subspaces that should have been separated (see figure 2(c)). We therefore considered estimating the subspaces in each A^k incrementally: we run a few iterations of learning with a single subspace in each A^k, and every few iterations we increase the number of subspaces we estimate for A^k. This incremental approach always resulted in demixed subspaces like those in figure 2(a).
Note also that the standard approach in MP-based models is to extract a fixed number of coefficients per image, but in our database of biological images there are large variations in the number of cells present in each image, so we needed the inference method to be flexible enough to accommodate varying numbers of objects. To control the total number of active coefficients, we adjusted during learning the prior activation probability p whenever the average number of active elements was too small or too large compared to our target mean activation rate. Although incremental gradient descent worked well, it tended to be slow in practice. A popular learning algorithm that was proposed to accelerate patch-based dictionary learning is K-SVD [10]. In every iteration of K-SVD, coefficients are extracted for all the image patches in the training set. Then the algorithm modifies each basis function sequentially to exactly minimize the squared reconstruction cost. The convolutional MP implementation of [6] indeed uses K-SVD for learning, and we here show how K-SVD can be adapted to block-based representations. At every iteration of K-SVD, given a set of active basis functions per image obtained with an inference method, the objective is to minimize the reconstruction cost with respect to the basis functions and coefficients simultaneously [10]. We consider each basis function A^{kl} sequentially, extract all image patches {y_i}_i where that basis function is active and assume all coefficients for the other basis functions are fixed. In the convolutional setting, these patches are extracted from locations in the images where each basis function is active [6]. We add back the contribution of basis function A^{kl} to each patch in {y_i}_i and now make the observation that to minimize the reconstruction error with a single basis function Â^{kl} we must find the direction in pixel space where most of the variance in {y_i}_i lies.
This can be done with an SVD decomposition followed by retaining the first principal vector Â^{kl}. The new reconstructions for each patch y_i are y_i − Â^{kl}(Â^{kl})^T y_i, and with this new residual we move on to the next basis function to be re-estimated. By analogy, in block K-SVD we are given a set of active blocks per image, each block consisting of K basis functions. We consider each block A^k sequentially, extract all image patches {y_i}_i where that block is active and assume all coefficients for the other blocks are fixed. We add back the contribution of block A^k to each patch in {y_i}_i and, like before, perform an SVD decomposition of these residuals. However, we are now looking for a K-dimensional subspace where most of the variance in {y_i}_i lies, and this is exactly achieved by retaining the first K principal vectors returned by the SVD. The reconstructions for each patch are y_i − Â^k(Â^k)^T y_i, where Â^k contains the first K principal vectors. On a more technical note, after each iteration of K-SVD we centered the parameters spatially so that the center of mass of the first direction of variability in each block was aligned to the center of its window; otherwise the basis functions did not center by themselves. Although K-SVD was an order of magnitude faster than GD and converged in practice, we noted that in the convolutional setting K-SVD is biased. This is because at the step of re-estimating a block A^k from a set of patches {y_i}_i, some of these patches may be spatially overlapping in the full image. Therefore, the subspaces in A^k are driven to explain the residual at some pixels multiple times. One way around the problem would be to enforce non-overlapping windows during inference,

Figure 2: (a) Typical recovered parameters with incremental gradient descent learning on GCaMP6 fluorescent images. Each column is a block and is sorted in the order of variance from the SVD decomposition.
Left columns capture the structure of cell somata, while right columns represent dendrite fragments. (b) Like (a), but with incremental block K-SVD. Similar subspaces are recovered with ten times fewer iterations. (c) and (d) Typical failure modes of learning with non-incremental gradient descent and block K-SVD, respectively. The subspaces from (a) appear mixed together. (e) Subspaces obtained from Nissl-stained slices of cortex.

but in our images many cell pairs touch and would in fact require overlapping windows. Instead, we decided to fine-tune the parameters returned by block K-SVD with a few iterations of gradient descent, which worked well in practice and in simulations recovered good model parameters with little further computational effort.

3 Results

3.1 Qualitative results on fluorescent images of neurons

The main applications of our work are to Nissl-stained slices and to fields of neurons and neuropil imaged with a two-photon microscope (figure 1(a)). The neurons were densely labeled with the fluorescent calcium indicator GCaMP6 in a small area of the mouse somatosensory (barrel) cortex. While the mice were either anesthetized or awake, their whiskers were stimulated, which activated corresponding barrel cortex neurons, leading to an influx of calcium into the cells and consequently an increase in fluorescence which was reported by the two-photon microscope. Although cell somata receive a large influx of calcium, dendrites and axons can also be seen. Individual images of the fluorescence can be very noisy purely due to the low number of photons released over each exposure. Better spatial accuracy can be obtained at the expense of temporal accuracy or at the expense of a smaller field of view. In practice, cell locations can be identified based on the mean images recorded over the duration of an entire experiment, in our case 1000 or 5000 frames.
Using 18 images like the one in figure 1(b), we learned a full model with two types of objects, each with three subspaces. One of the object types, the left column in figure 2(a), was clearly a model of single neurons. The right column of figure 2(a) represented small pieces of dendrite that were also highly fluorescent. Note how within a block each of the two objects includes dimensions of variability that capture anisotropies in the shape of the cell or dendritic fragments. Figure 3(a) shows in alternating odd rows patches from the training set identified by the algorithm to contain cells, and the respective reconstructions in the even rows. Note that while most cells are ring-shaped, some appear filled and some appear to be larger, and the model's flexibility is sufficient to capture these variations. Figure 2(c) shows a typical failure for gradient-based learning that motivated us to use incremental block learning. The two subspaces recovered in figure 2(a) are mixed in figure 2(c), and the likelihood from equation 1 is correspondingly lower.

3.2 Simulated data

We ran extensive experiments on simulated data to assess the algorithm's ability to learn and infer cell locations. There are two possible failure modes: the inference algorithm might not be accurate enough, or the learning algorithm might not recover good parameters. We address each of these failure modes separately. We wanted the simulated data to be as similar as possible to the real data, so we first fitted a model to the GCaMP6 data. We then took the learned model and generated a new dataset from it using the same number of objects of each type and similar amounts of Gaussian noise as the real images. To generate diverse shapes of cells, we fit a K-dimensional multivariate Gaussian

Figure 3: (a) Patches from the GCaMP6 training images (odd rows) and their reconstructions (even rows) with the subspaces shown in figure 2(b). (b)
One area from a Nissl-stained image together with a human segmentation (open circles) and the model segmentation (stars). Larger zoom versions are available in the supplementary material.

to the posteriors of each block on the real data, and generated coefficients from this model for the simulated images. Supplemental figure 6 shows a simulated image, and it can be seen to resemble images in the training set. Note that we are not modelling some of the structured variability in the noise, for example the blood vessels and dendrites visible in figure 1(b). This structured variability is the likely reason why the model performs better on simulated than on real images.

3.2.1 Inference quality of convolutional block pursuit

We kept the ground truths for the simulated dataset and investigated how well we can recover cell locations when we know exactly what the simulation parameters were. There is one free parameter in our model that we cannot learn automatically, which is the average number of extracted objects per image. We varied this parameter and report ROC curves for true positives and false positives as we vary the number of extracted coefficients. Sometimes we observed that cells were identified not exactly at the correct location but one or a few pixels away. Such small deviations are acceptable in practice, so we considered inferred cells as correctly identified if they were within four pixels of the correct location (cells were 8-16 pixels in diameter). We enforced that a true cell could only be identified once: if the algorithm made two predictions within four pixels of a true cell, only the first of these was considered a true positive. Figure 4(a) reports the typical performance of convolutional block pursuit. We also investigated the quality of inference without considering the full structure of the subspaces in each object.
Using a single subspace per object, which is equivalent to matching pursuit, achieved significantly worse performance and saturated at a smaller number of true positives, because the model could not recognize some of the variations in cell shape.

3.2.2 Learning quality of K-SVD + gradient descent

We next tested how well the algorithm recovers the generative parameters. We assume that the model knows how many object types there are and how many attributes each object type has. To compare the various learning strategies we could in principle just evaluate the joint log-likelihood of equation 1. However, the differences, although consistent, were relatively small and hard to interpret. More relevant to us is the ROC performance in correctly recovering cell locations. Block K-SVD consistently recovers good parameters but does not perform quite as well as the true parameters because of its bias (figure 4(b)). However, refinement with GD consistently recovers the best parameters, which approach the performance of the true generative parameters. We also asked how well the model recovers the parameters when the true number of objects per image is unknown, by running several experiments with different mean numbers of objects per image. The performance of the learned subspaces is reported in figure 4(c). Although the correct number of elements per image was 600, learning with as few as 200 or as many as 1400 objects resulted in equally well-performing models. If performance on simulated data is at all indicative of behavior on real data, we conclude that our algorithm is not sensitive to the only free parameter in the model.
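The counting rule used for these ROC curves (each true cell matched at most once, within a four-pixel tolerance) can be sketched as follows; the greedy nearest-match strategy and all names are our own.

```python
import numpy as np

def score_detections(pred, truth, tol=4.0):
    """Count true and false positives, matching each true cell at most once.

    pred  : (n_pred, 2) detected locations, in confidence order
    truth : (n_true, 2) ground-truth cell centers
    A detection within `tol` pixels of a still-unmatched true cell is a
    true positive; repeats and far-away detections are false positives.
    """
    used = np.zeros(len(truth), dtype=bool)
    tp = fp = 0
    for q in pred:
        if len(truth) == 0:
            fp += 1
            continue
        d = np.linalg.norm(truth - q, axis=1)
        d[used] = np.inf                  # a true cell may be matched once
        j = int(np.argmin(d))
        if d[j] < tol:
            used[j] = True
            tp += 1
        else:
            fp += 1
    return tp, fp
```

Sweeping the number of retained detections and re-scoring traces out one point of the ROC curve per cutoff.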
[Figure 4: five ROC panels (a)-(e) plotting true positives against false positives; panels (d) and (e) compare model variants (BP1, BP2, BP3/BP4) and two human segmenters against Human 3, with an Oracle for reference.]

Figure 4: ROC curves show the model's behavior on simulated data (a-c) and on manually-segmented GCaMP6 images (d) and Nissl-stained images (e). (a) Inference with block pursuit with all three subspaces per object (B3P), as well as block pursuit with only the first or first two principal subspaces (B1P and B2P). We also show for comparison the performance of B3P with model parameters identified by learning. Notice the small number of false negatives when a large proportion of the cells are identified. The cells not identified were too dim to pick out even with a large number of false positives, hence the quick saturation of the ROC curve. (b) Ten runs of block K-SVD followed by gradient descent. Refining with GD improved performance. (c) Not knowing the average number of elements per image does not make a difference on simulated data.

3.3 Comparison with human segmentation on biological images

We compare the segmentation of the model with manual segmentations on one example each of the GCaMP6 and Nissl-stained images (figures 4(d) and 4(e)). The human segmenters were instructed to locate cells in approximately the order of confidence, thus producing an ordering similar to the ordering returned by the algorithm.
As we retain more cells from that ordering, we can build ROC curves showing the agreement of the humans with each other, and of the model's segmentation with the humans'. We found that using multiple templates per block helped the model agree more with the human segmentations. In the case of the Nissl stains, block coding with four templates identified fifty more cells than matching pursuit. Although the model generally performs below inter-human agreement, the gap is sufficiently small to warrant practical use. In addition, a post-hoc analysis suggests that many of the model's false positives are in fact cells that were not selected in the manual segmentations. Examples of these false positives can be seen both in figure 3(b) and in figures in the supplementary material. As we anticipated in the introduction, a standard method based on thresholded and localized correlation maps only reached 25 true positives at 50 false positives and is not shown in figure 4(d).

4 Conclusions

We have presented an image model that can be used to automatically and effectively infer the locations and shapes of cells from biological image data. This application of generative image models is to our knowledge novel and should allow the automation of many types of biological studies. Our contribution to the image modelling literature is to extend the sparse block coding model presented in [8] to the convolutional setting, where each block is allowed to be present at any location in an image. We also derived convolutional block pursuit, a greedy inference algorithm which scales gracefully to images of large dimensions with many possible object types in the generative model. For learning the model, we extended the K-SVD learning algorithm to block-based and convolutional representations. We identified a bias in convolutional K-SVD and used gradient descent to fine-tune the model parameters towards good local optima.
On simulated data, convolutional block pursuit recovers cell locations with good accuracy, and the learning rule recovers the parameters of the generative model well and consistently. Using the block pursuit algorithm recovers significantly more cells than simple matching pursuit. On data from calcium imaging experiments and Nissl-stained tissue, the model succeeds in recovering cell locations and learns good models of the variability among different cell shapes.

References

[1] M Oberlaender, VJ Dercksen, R Egger, M Gensel, B Sakmann, and HC Hege. Automated three-dimensional detection and counting of neuron somata. J Neuroscience Methods, 180:147-160, 2009.
[2] EA Mukamel, A Nimmerjahn, and MJ Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63:747-760, 2009.
[3] I Ozden, HM Lee, MR Sullivan, and SSH Wang. Identification and clustering of event patterns from in vivo multiphoton optical recordings of neuronal ensembles. J Neurophysiol, 100:495-503, 2008.
[4] K Kavukcuoglu, P Sermanet, YL Boureau, K Gregor, M Mathieu, and Y LeCun. Learning convolutional feature hierarchies for visual recognition. Advances in Neural Information Processing Systems, 2010.
[5] K Gregor, A Szlam, and Y LeCun. Structured sparse coding via lateral inhibition. Advances in Neural Information Processing Systems, 2011.
[6] A Szlam, K Kavukcuoglu, and Y LeCun. Convolutional matching pursuit and dictionary training. arXiv:1010.0422v1, 2010.
[7] A Hyvarinen, J Hurri, and PO Hoyer. Natural Image Statistics. Springer, 2009.
[8] P Berkes, RE Turner, and M Sahani. A structured model of video produces primary visual cortical organisation. PLoS Computational Biology, 5, 2009.
[9] SG Mallat and Z Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415, 1993.
[10] M Aharon, M Elad, and A Bruckstein.
K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311-4322, 2006.
How to Hedge an Option Against an Adversary: Black-Scholes Pricing is Minimax Optimal

Jacob Abernethy, University of Michigan, jabernet@umich.edu
Peter L. Bartlett, University of California at Berkeley and Queensland University of Technology, bartlett@cs.berkeley.edu
Rafael M. Frongillo, Microsoft Research, raf@cs.berkeley.edu
Andre Wibisono, University of California at Berkeley, wibisono@cs.berkeley.edu

Abstract

We consider a popular problem in finance, option pricing, through the lens of an online learning game between Nature and an Investor. In the Black-Scholes option pricing model from 1973, the Investor can continuously hedge the risk of an option by trading the underlying asset, assuming that the asset's price fluctuates according to Geometric Brownian Motion (GBM). We consider a worst-case model, in which Nature chooses a sequence of price fluctuations under a cumulative quadratic volatility constraint, and the Investor can make a sequence of hedging decisions. Our main result is to show that the value of our proposed game, which is the "regret" of the hedging strategy, converges to the Black-Scholes option price. We use significantly weaker assumptions than previous work — for instance, we allow large jumps in the asset price — and show that the Black-Scholes hedging strategy is near-optimal for the Investor even in this non-stochastic framework.

1 Introduction

An option is a financial contract that allows the purchase or sale of a given asset, such as a stock, bond, or commodity, for a predetermined price on a predetermined date. The contract is named as such because the transaction in question is optional for the purchaser of the contract. Options are bought and sold for any number of reasons, but in particular they allow firms and individuals with risk exposure to hedge against potential price fluctuations. Airlines, for example, have heavy fuel costs and hence are frequent buyers of oil options.
What ought we pay for the privilege of purchasing an asset at a fixed price on a future expiration date? The difficulty with this question, of course, is that while we know the asset's previous prices, we are uncertain as to its future price. In a seminal paper from 1973, Fischer Black and Myron Scholes introduced what is now known as the Black-Scholes Option Pricing Model, which led to a boom in options trading as well as a huge literature on the problem of derivative pricing [2]. Black and Scholes had a key insight that a firm which had sold/purchased an option could "hedge" against the future cost/return of the option by buying and selling the underlying asset as its price fluctuates. Their model is based on stochastic calculus and requires a critical assumption that the asset's price behaves according to a Geometric Brownian Motion (GBM) with known drift and volatility. The GBM assumption in particular implies that (almost surely) an asset's price fluctuates continuously. The Black-Scholes model additionally requires that the firm be able to buy and sell continuously until the option's expiration date. Neither of these properties are true in practice: the stock market is only open eight hours per day, and stock prices are known to make significant jumps even during regular trading. These and other empirical observations have led to much criticism of the Black-Scholes model.

An alternative model for option pricing was considered¹ by DeMarzo et al. [3], who posed the question: "Can we construct hedging strategies that are robust to adversarially chosen price fluctuations?" Essentially, the authors asked if we may consider hedging through the lens of regret minimization in online learning, an area that has proved fruitful, especially for obtaining guarantees robust to worst-case conditions. Within this minimax option pricing framework, DeMarzo et al. provided a particular algorithm resembling the Weighted Majority and Hedge algorithms [5, 6] with a nice bound.
Recently, Abernethy et al. [1] took the minimax option pricing framework a step further, analyzing the zero-sum game being played between an Investor, who is attempting to replicate the option payoff, and Nature, who is sequentially setting the price changes of the underlying asset. The Investor's goal is to "hedge" the payoff of the option as the price fluctuates, whereas Nature attempts to foil the Investor by choosing a challenging sequence of price fluctuations. The value of this game can be interpreted as the "minimax option price," since it is what the Investor should pay for the option against an adversarially chosen price path. The main result of Abernethy et al. was to show that the game value approaches the Black-Scholes option price as the Investor's trading frequency increases. Put another way, the minimax price tends to the option price under the GBM assumption. This lends significant further credibility to the Black-Scholes model, as it suggests that the GBM assumption may already be a "worst-case model" in a certain sense.

The previous result, while useful and informative, left two significant drawbacks. First, their techniques used minimax duality to compute the value of the game, but no particular hedging algorithm for the Investor is given. This is in contrast to the Black-Scholes framework (as well as to DeMarzo et al.'s result [3]) in which a hedging strategy is given explicitly. Second, the result depended on a strong constraint on Nature's choice of price path: the multiplicative price variance is uniformly constrained, which forbids price jumps and other large fluctuations. In this paper, we resolve these two drawbacks. We consider the problem of minimax option pricing with much weaker constraints: we restrict the sum over the length of the game of the squared price fluctuations to be no more than a constant c, and we allow arbitrary price jumps, up to a bound ζ.
We show that the minimax option price is exactly the Black-Scholes price of the option, up to an additive term of O(cζ^{1/4}). Furthermore, we give an explicit hedging strategy: this upper bound is achieved when the Investor's strategy is essentially a version of the Black-Scholes hedging algorithm.

2 The Black-Scholes Formula

Let us now briefly review the Black-Scholes pricing formula and hedging strategy. The derivation requires some knowledge of continuous random walks and stochastic calculus (Brownian motion, Itô's Lemma, a second-order partial differential equation), and we shall only give a cursory treatment of the material. For further development we recommend a standard book on stochastic calculus, e.g. [8].

Let us imagine we have an underlying asset A whose price is fluctuating. We let W(t) be a Brownian motion, also known as a Wiener process, with zero drift and unit variance; in particular, W(0) = 0 and W(t) ∼ N(0, t) for t > 0. We shall imagine that A's price path G(t) is described by a geometric Brownian motion with drift µ and volatility σ, which we can describe via the definition of a Brownian motion:

    G(t) = exp{(µ − σ²/2)t + σW(t)}   (equality in distribution).

If an Investor purchases a European call option on some asset A (say, MSFT stock) with a strike price of K > 0 that matures at time T, then the Investor has the right to buy a share of A at price K at time T. Of course, if the market price of A at T is G(T), then the Investor will only "exercise" the option if G(T) > K, since the Investor has no benefit of purchasing the asset at a price higher than the market price. Hence, the payoff of a European call option has a profit function of the form max{0, G(T) − K}. Throughout the paper we shall use g_EC(x) := max{0, x − K} to refer to the payout of the European call when the price of asset A at time T is x (the parameter K is implicit).

¹ Although it does not have quite the same flavor, a similar approach was explored in the book of Vovk and Shafer [7].
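The two objects just defined, a GBM price sample and the European call payoff g_EC, are easy to simulate. The sketch below is ours, not code from the paper; the parameter values (S_0 = K = 100, σ = 0.2, T = 1, zero drift) are arbitrary illustrative choices.

```python
import math
import random

def gbm_price(s0, mu, sigma, t, rng):
    """Sample G(t) = s0 * exp((mu - sigma^2/2) t + sigma W(t)),
    where W(t) ~ N(0, t) is a Wiener process."""
    w = rng.gauss(0.0, math.sqrt(t))
    return s0 * math.exp((mu - 0.5 * sigma ** 2) * t + sigma * w)

def g_ec(x, strike):
    """European call payoff g_EC(x) = max{0, x - K}."""
    return max(0.0, x - strike)

# Monte Carlo estimate of the expected call payoff under GBM.
rng = random.Random(0)
prices = [gbm_price(100.0, 0.0, 0.2, 1.0, rng) for _ in range(50000)]
payoff = sum(g_ec(p, 100.0) for p in prices) / len(prices)
```

With zero drift this estimate should land near the Black-Scholes price for these parameters (about 7.97).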
We assume the current time is t. The Black-Scholes derivation begins with a guess: assume that the "value" of the European call option can be described by a smooth function V(G(t), t), depending only on the current price of the asset G(t) and the time to expiration T − t. We can immediately define a boundary condition on V, since at the expiration time T the value of the option is V(G(T), 0) = g_EC(G(T)). So how do we arrive at a value for the option at another time point t?

We assume the Investor has a hedging strategy ∆(x, t) that determines the amount to invest when the current price is x and the time is t. Notice that if the asset's current price is G(t) and the Investor purchases ∆(G(t), t) dollars of asset A at t, then the incremental amount of money made in an infinitesimal amount of time is ∆(G(t), t) dG/G(t), since dG/G(t) is the instantaneous multiplicative price change at time t. Of course, if the earnings of the Investor are guaranteed to exactly cancel out the infinitesimal change in the value of the option dV(G(t), t), then the Investor is totally hedged with respect to the option payout for any sample of G for the remaining time to expiration. In other words, we hope to achieve dV(G, t) = ∆(G, t) dG/G. However, by Itô's Lemma [8] we have the following useful identity:

    dV(G, t) = (∂V/∂x) dG + (∂V/∂t) dt + ½ σ²G² (∂²V/∂x²) dt.   (1)

Black and Scholes proposed a generic hedging strategy: the investor should invest

    ∆(x, t) = x (∂V/∂x)   (2)

dollars in the asset A when the price of A is x at time t. As mentioned, the goal of the Investor is to hedge out risk so that it is always the case that dV(G, t) = ∆(G, t) dG/G. Combining this goal with Equations (1) and (2), we have

    ∂V/∂t + ½ σ²x² (∂²V/∂x²) = 0.
(3)

Notice the latter is an entirely non-stochastic PDE, and indeed it can be solved explicitly:

    V(x, t) = E_Y[g_EC(x · exp(Y))]  where  Y ∼ N(−½σ²(T − t), σ²(T − t)).   (4)

Remark: While we have described the derivation for the European call option, with payoff function g_EC, the analysis above does not rely on this specific choice of g. We refer the reader to a standard text on asset pricing for more on this [8].

3 The Minimax Hedging Game

We now describe a sequential decision protocol in which an Investor makes a sequence of trading decisions on some underlying asset, with the goal of hedging away the risk of some option (or other financial derivative) whose payout depends on the final price of the asset at the expiration time T. We assume the Investor is allowed to make a trading decision at each of n time periods, and before making this trade the Investor observes how the price of the asset has changed since the previous period. Without loss of generality, we can assume that the current time is 0 and the trading periods occur at {T/n, 2T/n, . . . , T}, although this will not be necessary for our analysis. The protocol is as follows.

1: Initial price of asset is S = S_0.
2: for i = 1, 2, . . . , n do
3:   Investor hedges, invests ∆_i ∈ R dollars in asset.
4:   Nature selects a price fluctuation r_i and updates price S ← S(1 + r_i).
5:   Investor receives (potentially negative) profit of ∆_i r_i.
6: end for
7: Investor is charged the cost of the option, g(S) = g(S_0 · ∏_{i=1}^n (1 + r_i)).

Stepping back for a moment, we see that the Investor is essentially trying to minimize the following objective:

    g(S_0 · ∏_{i=1}^n (1 + r_i)) − Σ_{i=1}^n ∆_i r_i.

We can interpret the above expression as a form of regret: the Investor chose to execute a trading strategy, earning him Σ_{i=1}^n ∆_i r_i, but in hindsight might have rather purchased the option instead, with a payout of g(S_0 · ∏_{i=1}^n (1 + r_i)).
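The protocol can be sketched directly as a loop over rounds. This is our own illustrative sketch, not the authors' code; the zero-investment strategy passed in at the end is just a placeholder, so the resulting "regret" equals the bare option payoff.

```python
def play_hedging_game(s0, price_moves, hedge, payoff):
    """Run the sequential protocol: each round the Investor invests
    hedge(S) dollars, Nature reveals a return r_i, and the price updates
    S <- S(1 + r_i).  Returns the Investor's regret:
    option payoff minus accumulated trading profit."""
    s = s0
    profit = 0.0
    for r in price_moves:
        delta = hedge(s)        # dollars invested this round
        profit += delta * r     # per-round profit Delta_i * r_i
        s *= (1.0 + r)
    return payoff(s) - profit

# Toy run with no hedging at all (Delta_i = 0 every round).
moves = [0.01, -0.02, 0.015]
regret = play_hedging_game(100.0, moves, lambda s: 0.0,
                           lambda s: max(0.0, s - 100.0))
```

Here the final price is 100 · 1.01 · 0.98 · 1.015, so the regret is just the call payoff at that price.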
What is the best hedging strategy the Investor can execute to minimize the difference between the option payoff and the gains/losses from hedging? Indeed, how much regret may be suffered against a worst-case sequence of price fluctuations?

Constraining Nature. The cost of playing the above sequential game is clearly going to depend on how much we expect the price to fluctuate. In the original Black-Scholes formulation, the price volatility σ is a major parameter in the pricing function. In the work of Abernethy et al., a key assumption was that Nature may choose any r_1, . . . , r_n with the constraint that E[r_i² | r_1, . . . , r_{i−1}] = O(1/n).² Roughly, this constraint means that in any ε-sized time interval, the price fluctuation variance shall be no more than ε. This constraint, however, does not allow for large price jumps during trading. In the present work, we impose a much weaker set of constraints, described as follows:³

• TotVarConstraint: The total price fluctuation is bounded by a constant c: Σ_{i=1}^n r_i² ≤ c.
• JumpConstraint: Every price jump |r_i| is no more than ζ, for some ζ > 0 (which may depend on n).

The first constraint above says that Nature is bounded by how much, in total, the asset's price path can fluctuate. The latter says that at no given time can the asset's price jump more than a given value. It is worth noting that if c ≥ nζ² then TotVarConstraint is superfluous, whereas JumpConstraint becomes superfluous if c < ζ².

The Minimax Option Price. We are now in a position to define the value of the sequential option pricing game using a minimax formulation. That is, we shall ask how much the Investor loses when making optimal trading decisions against worst-case price fluctuations chosen by Nature.
Let V_ζ^(n)(S; c, m) be the value of the game, measured by the Investor's loss, when the asset's current price is S ≥ 0, the TotVarConstraint is c ≥ 0, the JumpConstraint is ζ > 0, the total number of trading rounds is n ∈ N, and there are 0 ≤ m ≤ n rounds remaining. We define recursively:

    V_ζ^(n)(S; c, m) = inf_{∆ ∈ R} sup_{r : |r| ≤ min{ζ, √c}} [ −∆r + V_ζ^(n)((1 + r)S; c − r², m − 1) ],   (5)

with the base case V_ζ^(n)(S; c, 0) = g(S). Notice that the constraint under the supremum enforces both TotVarConstraint and JumpConstraint. For simplicity, we will write V_ζ^(n)(S; c) := V_ζ^(n)(S; c, n). This is the value of the game that we are interested in analyzing.

Towards establishing an upper bound on the value (5), we shall discuss the question of how to choose the hedge parameter ∆ on each round. We can refer to a "hedging strategy" in this game as a function of the tuple (S, c, m, n, ζ, g(·)) that returns a hedge position. In our upper bound, in fact, we need only consider hedging strategies ∆(S, c) that depend on S and c; there certainly will be a dependence on g(·) as well, but we leave this implicit.

4 Asymptotic Results

The central focus of the present paper is the following question: "For fixed c and S, what is the asymptotic behavior of the value V_ζ^(n)(S; c)?" and "Is there a natural hedging strategy ∆(S, c) that (roughly) achieves this value?" In other words, what is the minimax value of the option, as well as the optimal hedge, when we fix the variance budget c and the asset's current price S, but let the number of rounds tend to ∞? We now give answers to these questions, and devote the remainder of the paper to developing the results in detail. We consider payoff functions g: R_{≥0} → R_{≥0} satisfying three constraints:

² The constraint in [1] was E[r_i² | r_1, . . . , r_{i−1}] ≤ exp(c/n) − 1, but this is roughly equivalent.
³ We note that Abernethy et al.
[1] also assumed that the multiplicative price jumps |r_i| are bounded by ζ̂_n = Ω(√((log n)/n)); this is a stronger assumption than what we impose on (ζ_n) in Theorem 1.

1. g is convex.
2. g is L-Lipschitz, i.e. |g(x) − g(y)| ≤ L|x − y|.
3. g is eventually linear, i.e. there exists K > 0 such that g(x) is a linear function for all x ≥ K; in this case we also say g is K-linear.

We believe the first two conditions are strictly necessary to achieve the desired results. The K-linearity may not be necessary but makes our analysis possible. We note that the constraints above encompass the standard European call and put options.

Henceforth we shall let G be a zero-drift GBM with unit volatility. In particular, we have that log G(t) ∼ N(−t/2, t). For S, c ≥ 0, define the function

    U(S, c) = E_G[g(S · G(c))],

and observe that U(S, 0) = g(S). Our goal will be to show that U is asymptotically the minimax price of the option. Most importantly, this function U(S, c) is identical to V(S, T − c/σ²), the Black-Scholes value of the option in (4) when the GBM volatility parameter is σ in the Black-Scholes analysis. In particular, analogous to (3), U(S, c) satisfies a differential equation:

    ½ S² (∂²U/∂S²) − ∂U/∂c = 0.   (6)

The following is our main result of this paper.

Theorem 1. Let S > 0 be the initial asset price and let c > 0 be the variance budget. Assume we have a sequence {ζ_n} with lim_{n→∞} ζ_n = 0 and lim inf_{n→∞} nζ_n² > c. Then

    lim_{n→∞} V_{ζ_n}^(n)(S; c) = U(S, c).

This statement tells us that the minimax price of an option, when Nature has a total fluctuation budget of c, approaches the Black-Scholes price of the option when the time to expiration is c. This is particularly surprising, since our minimax pricing framework made no assumptions as to the stochastic process generating the price path. This is the same conclusion as in [1], but we obtained our result with a significantly weaker assumption.
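For the European call, U(S, c) has a familiar closed form: it is the Black-Scholes call value with zero rate and total variance c. This can be cross-checked against the defining expectation U(S, c) = E[g(S · G(c))]. Both functions below are our own illustrative sketch, not code from the paper.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def u_call_closed_form(s, c, k):
    """U(S, c) for the European call g(x) = max{0, x - K}:
    Black-Scholes with zero rate, where c plays the role of the
    total variance sigma^2 * (time to expiration)."""
    d1 = (math.log(s / k) + 0.5 * c) / math.sqrt(c)
    d2 = d1 - math.sqrt(c)
    return s * norm_cdf(d1) - k * norm_cdf(d2)

def u_monte_carlo(s, c, g, n, seed=0):
    """U(S, c) = E[g(S * G(c))] with log G(c) ~ N(-c/2, c)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(-0.5 * c, math.sqrt(c))
        total += g(s * math.exp(y))
    return total / n
```

For S = K = 100 and c = 0.04 the closed form gives roughly 7.97, and the Monte Carlo estimate should agree up to sampling error.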
Unlike [1], however, we do not show that the adversary's minimax optimal stochastic price path necessarily converges to a GBM. The convergence of Nature's price path to GBM in [1] was made possible by the uniform per-round variance constraint.

The previous theorem is the result of two main technical contributions. First, we prove a lower bound on the limiting value of V_{ζ_n}^(n)(S; c) by exhibiting a simple randomized strategy for Nature in the form of a stochastic price path, and appealing to the Lindeberg-Feller central limit theorem. Second, we prove an O(cζ^{1/4}) upper bound on the deviation between V_ζ^(n)(S; c) and U(S, c). The upper bound is obtained by providing an explicit strategy for the Investor:

    ∆(S, c) = S ∂U(S, c)/∂S,

and carefully bounding the difference between the output using this strategy and the game value. In the course of doing so, because we are invoking Taylor's remainder theorem, we need to bound the first few derivatives of U(S, c). Bounding these derivatives turns out to be the crux of the analysis; in particular, it uses the full force of the assumptions on g, including that g is eventually linear, to avoid the pathological cases when the derivative of g converges to its limiting value very slowly.

5 Lower Bound

In this section we prove that U(S, c) is a lower bound to the game value V_{ζ_n}^(n)(S; c). We note that the result in this section does not use the assumptions in Theorem 1 that ζ_n → 0, nor that g is convex and eventually linear. In particular, the following result also applies when the sequence (ζ_n) is a constant ζ > 0.

Theorem 2. Let g: R_{≥0} → R_{≥0} be an L-Lipschitz function, and let {ζ_n} be a sequence of positive numbers with lim inf_{n→∞} nζ_n² > c. Then for every S, c > 0,

    lim inf_{n→∞} V_{ζ_n}^(n)(S; c) ≥ U(S, c).

The proof of Theorem 2 is based on correctly "guessing" a randomized strategy for Nature. For each n ∈ N, define the random variables R_{1,n}, . . . , R_{n,n} ∼ Uniform{±√(c/n)} i.i.d.
Note that (R_{i,n})_{i=1}^n satisfies TotVarConstraint by construction. Moreover, the assumption lim inf_{n→∞} nζ_n² > c implies ζ_n > √(c/n) for all sufficiently large n, so eventually (R_{i,n}) also satisfies JumpConstraint. We have the following convergence result for (R_{i,n}), which we prove in Appendix A.

Lemma 3. Under the same setting as in Theorem 2, we have the convergence in distribution

    ∏_{i=1}^n (1 + R_{i,n}) →_d G(c)  as n → ∞.

Moreover, we also have the convergence in expectation

    lim_{n→∞} E[g(S · ∏_{i=1}^n (1 + R_{i,n}))] = U(S, c).   (7)

With the help of Lemma 3, we are now ready to prove Theorem 2.

Proof of Theorem 2. Let n be sufficiently large such that nζ_n² > c. Let R_{i,n} ∼ Uniform{±√(c/n)} i.i.d., for 1 ≤ i ≤ n. As noted above, (R_{i,n}) satisfies both TotVarConstraint and JumpConstraint. Then we have

    V_{ζ_n}^(n)(S; c) = inf_{∆_1} sup_{r_1} · · · inf_{∆_n} sup_{r_n}  g(S · ∏_{i=1}^n (1 + r_i)) − Σ_{i=1}^n ∆_i r_i
                      ≥ inf_{∆_1} · · · inf_{∆_n} E[ g(S · ∏_{i=1}^n (1 + R_{i,n})) − Σ_{i=1}^n ∆_i R_{i,n} ]
                      = E[ g(S · ∏_{i=1}^n (1 + R_{i,n})) ].

The first line follows from unrolling the recursion in the definition (5); the second line from replacing the supremum over (r_i) with expectation over (R_{i,n}); and the third line from E[R_{i,n}] = 0. Taking the limit on both sides and using (7) from Lemma 3 gives us the desired conclusion.

6 Upper Bound

In this section we prove that U(S, c) is an upper bound to the limit of V_ζ^(n)(S; c).

Theorem 4. Let g: R_{≥0} → R_{≥0} be a convex, L-Lipschitz, K-linear function. Let 0 < ζ ≤ 1/16. Then for any S, c > 0 and n ∈ N, we have

    V_ζ^(n)(S; c) ≤ U(S, c) + (18c + 8/√(2π)) L K ζ^{1/4}.

We remark that the right-hand side of the above bound does not depend on the number of trading periods n. The key parameter is ζ, which determines the size of the largest price jump of the stock. However, we expect that as the trading frequency increases, the size of the largest price jump will shrink. Plugging a sequence {ζ_n} in place of ζ in Theorem 4 gives us the following corollary.

Corollary 1. Let g: R_{≥0} →
R_{≥0} be a convex, L-Lipschitz, K-linear function. Let {ζ_n} be a sequence of positive numbers with ζ_n → 0. Then for S, c > 0,

    lim sup_{n→∞} V_{ζ_n}^(n)(S; c) ≤ U(S, c).

Note that the above upper bound relies on the convexity of g, for if g were concave, then we would have the reverse conclusion:

    V_ζ^(n)(S; c) ≥ g(S) = g(S · E[G(c)]) ≥ E[g(S · G(c))] = U(S, c).

Here the first inequality follows from setting all r = 0 in (5), and the second is by Jensen's inequality.

6.1 Intuition

For brevity, we write the partial derivatives U_c(S, c) = ∂U(S, c)/∂c, U_S(S, c) = ∂U(S, c)/∂S, and U_{S²}(S, c) = ∂²U(S, c)/∂S². The proof of Theorem 4 proceeds by providing a "guess" for the Investor's action, which is a modification of the original Black-Scholes hedging strategy. Specifically, when the current price is S and the remaining budget is c, the Investor invests ∆(S, c) := S U_S(S, c). We now illustrate how this strategy gives rise to a bound on V_ζ^(n)(S; c) as stated in Theorem 4.

First suppose for some m ≥ 1 we know that V_ζ^(n)(S; c, m − 1) is a rough approximation to U(S, c). Note that a Taylor approximation of the function r_m ↦ U(S + S r_m, c − r_m²) around U(S, c) gives us

    U(S + S r_m, c − r_m²) = U(S, c) + r_m S U_S(S, c) − r_m² U_c(S, c) + ½ r_m² S² U_{S²}(S, c) + O(r_m³)
                           = U(S, c) + r_m S U_S(S, c) + O(r_m³),

where the last line follows from the Black-Scholes equation (6). Now by setting ∆ = S U_S(S, c) in the definition (5), and using the assumption and the Taylor approximation above, we obtain

    V_ζ^(n)(S; c, m) = inf_{∆ ∈ R} sup_{|r_m| ≤ min{ζ, √c}}  −∆ r_m + V_ζ^(n)(S + S r_m; c − r_m², m − 1)
                     ≤ sup_{r_m}  −r_m S U_S(S, c) + V_ζ^(n)(S + S r_m; c − r_m², m − 1)
                     = sup_{r_m}  −r_m S U_S(S, c) + U(S + S r_m, c − r_m²) + (approx. terms)
                     = U(S, c) + O(r_m³) + (approx. terms).

In other words, on each round of the game we add an O(r_m³) term to the approximation error. By the time we reach V_ζ^(n)(S; c, n) we will have an error term that is roughly on the order of Σ_{m=1}^n |r_m|³. Since Σ_{m=1}^n r_m² ≤ c and |r_m| ≤ ζ by assumption, we get Σ_{m=1}^n |r_m|³ ≤ ζc.
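One can watch this argument play out numerically by simulating the hedging strategy ∆(S, c) = S U_S(S, c) along a ±√(c/n) coin-flip price path (the same randomized path Nature uses in the lower bound of Section 5) and checking that the realized regret lands near U(S, c). The sketch below is our own, specialized to the European call, for which U_S = Φ(d_1) with d_1 = (ln(S/K) + c/2)/√c; it is not code from the paper.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_hedge(s, c, k):
    """Delta(S, c) = S * U_S(S, c); for the European call with remaining
    variance budget c, U_S(S, c) = Phi(d1)."""
    if c <= 0.0:
        return s if s > k else 0.0  # degenerate limit at zero budget
    d1 = (math.log(s / k) + 0.5 * c) / math.sqrt(c)
    return s * norm_cdf(d1)

def hedge_along_path(s0, c, k, n, seed=0):
    """Play the game with Nature flipping +-sqrt(c/n) coins; return the
    realized regret: call payoff minus accumulated hedging profit."""
    rng = random.Random(seed)
    step = math.sqrt(c / n)
    s, budget, profit = s0, c, 0.0
    for _ in range(n):
        delta = call_hedge(s, budget, k)   # hedge with remaining budget
        r = step if rng.random() < 0.5 else -step
        profit += delta * r
        s *= 1.0 + r
        budget -= r * r
    return max(0.0, s - k) - profit

regret = hedge_along_path(100.0, 0.04, 100.0, 400, seed=1)
```

For S_0 = K = 100 and c = 0.04 the regret should come out close to U(S_0, c) ≈ 7.97, with the residual gap shrinking as n grows, which is the content of Theorem 4.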
The details are more intricate because the error O(r_m³) from the Taylor approximation also depends on S and c. Trading off the dependencies on c and ζ leads us to the bound stated in Theorem 4.

6.2 Proof (Sketch) of Theorem 4

In this section we describe an outline of the proof of Theorem 4. Throughout, we assume g is a convex, L-Lipschitz, K-linear function, and 0 < ζ ≤ 1/16. The proofs of Lemma 5 and Lemma 7 are provided in Appendix B, and Lemma 6 is proved in Appendix C.

For S, c > 0 and |r| ≤ √c, we define the (single-round) error term of the Taylor approximation,

    ε_r(S, c) := U(S + Sr, c − r²) − U(S, c) − r S U_S(S, c).   (8)

We also define a sequence {α^(n)(S, c, m)}_{m=0}^n to keep track of the cumulative errors. We define this sequence by setting α^(n)(S, c, 0) = 0, and for 1 ≤ m ≤ n,

    α^(n)(S, c, m) := sup_{|r| ≤ min{ζ, √c}}  ε_r(S, c) + α^(n)(S + Sr, c − r², m − 1).   (9)

For simplicity, we write α^(n)(S, c) ≡ α^(n)(S, c, n). Then we have the following result, which formalizes the notion from the preceding section that V_ζ^(n)(S; c, m) is an approximation to U(S, c).

Lemma 5. For S, c > 0, n ∈ N, and 0 ≤ m ≤ n, we have

    V_ζ^(n)(S; c, m) ≤ U(S, c) + α^(n)(S, c, m).   (10)

It now remains to bound α^(n)(S, c) from above. A key step in doing so is to show the following bounds on ε_r. This is where the assumptions that g be L-Lipschitz and K-linear are important.

Lemma 6. For S, c > 0, and |r| ≤ min{1/16, √c/8}, we have

    ε_r(S, c) ≤ 16LK ( max{c^{−3/2}, c^{−1/2}} |r|³ + max{c^{−2}, c^{−1/2}} r⁴ ).   (11)

Moreover, for S > 0, 0 < c ≤ 1/4, and |r| ≤ √c, we also have

    ε_r(S, c) ≤ (4LK/√(2π)) · r²/√c.   (12)

Using Lemma 6, we have the following bound on α^(n)(S, c).

Lemma 7. For S, c > 0, n ∈ N, and 0 < ζ ≤ 1/16, we have

    α^(n)(S, c) ≤ (18c + 8/√(2π)) L K ζ^{1/4}.

Proof (sketch). By unrolling the inductive definition (9), we can write α^(n)(S, c) as the supremum of f(r_1, . . . , r_n), where

    f(r_1, . . . , r_n) = Σ_{m=1}^n ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ).

Let (r_1, . . . , r_n) be such that |r_m| ≤ ζ and Σ_{m=1}^n r_m² ≤ c. We will show that f(r_1, . . .
, r_n) ≤ (18c + 8/√(2π)) L K ζ^{1/4}. Let 0 ≤ n* ≤ n be the largest index such that Σ_{m=1}^{n*} r_m² ≤ c − √ζ. We split the analysis into two parts.

1. For 1 ≤ m ≤ min{n, n* + 1}: By (11) from Lemma 6 and a little calculation, we have

    ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ 18 L K ζ^{1/4} r_m².

Summing over 1 ≤ m ≤ min{n, n* + 1} then gives us

    Σ_{m=1}^{min{n, n*+1}} ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ 18 L K ζ^{1/4} Σ_{m=1}^{min{n, n*+1}} r_m² ≤ 18 L K ζ^{1/4} c.

2. For n* + 2 ≤ m ≤ n (if n* ≤ n − 2): By (12) from Lemma 6, we have

    ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ (4LK/√(2π)) · r_m² / √(Σ_{i=m}^n r_i²).

Therefore,

    Σ_{m=n*+2}^n ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ (4LK/√(2π)) Σ_{m=n*+2}^n r_m² / √(Σ_{i=m}^n r_i²) ≤ (8LK/√(2π)) ζ^{1/4},

where the last inequality follows from Lemma 8 in Appendix B. Combining the two cases above gives us the desired conclusion.

Proof of Theorem 4. Theorem 4 follows immediately from Lemma 5 and Lemma 7.

Acknowledgments. We gratefully acknowledge the support of the NSF through grant CCF-1115788 and of the ARC through Australian Laureate Fellowship FL110100281.

References

[1] J. Abernethy, R. M. Frongillo, and A. Wibisono. Minimax option pricing meets Black-Scholes in the limit. In Howard J. Karloff and Toniann Pitassi, editors, STOC, pages 1029–1040. ACM, 2012.
[2] F. Black and M. Scholes. The pricing of options and corporate liabilities. The Journal of Political Economy, pages 637–654, 1973.
[3] P. DeMarzo, I. Kremer, and Y. Mansour. Online trading algorithms and robust option pricing. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 477–486. ACM, 2006.
[4] R. Durrett. Probability: Theory and Examples (Fourth Edition). Cambridge University Press, 2010.
[5] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37. Springer, 1995.
[6] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[7] G.
Shafer and V. Vovk. Probability and Finance: It's Only a Game!, volume 373. Wiley-Interscience, 2001.
[8] J. M. Steele. Stochastic Calculus and Financial Applications, volume 45. Springer Verlag, 2001.
Training and Analyzing Deep Recurrent Neural Networks Michiel Hermans, Benjamin Schrauwen Ghent University, ELIS departement Sint Pietersnieuwstraat 41, 9000 Ghent, Belgium michiel.hermans@ugent.be

Abstract

Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing time series. Here, each layer is a recurrent network which receives the hidden state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the structure of time series. We show that they reach state-of-the-art performance for recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent time scales.

1 Introduction

Over the last decade, machine learning has seen the rise of neural networks composed of multiple layers, which are often termed deep neural networks (DNN). In a multitude of forms, DNNs have been shown to be powerful models for tasks such as speech recognition [17] and handwritten digit recognition [4]. Their success is commonly attributed to the hierarchy that is introduced by the several layers. Each layer processes some part of the task we wish to solve, and passes it on to the next. In this sense, the DNN can be seen as a processing pipeline, in which each layer solves a part of the task before passing it on to the next, until finally the last layer provides the output. One type of network that debatably falls into the category of deep networks is the recurrent neural network (RNN). When folded out in time, it can be considered as a DNN with indefinitely many layers.
The comparison to common deep networks falls short, however, when we consider the functionality of the network architecture. For RNNs, the primary function of the layers is to introduce memory, not hierarchical processing. New information is added in every 'layer' (every network iteration), and the network can pass this information on for an indefinite number of network updates, essentially providing the RNN with unlimited memory depth. Whereas in DNNs input is only presented at the bottom layer, and output is only produced at the highest layer, RNNs generally receive input and produce output at each time step. As such, the network updates do not provide hierarchical processing of the information per se, only in the respect that older data (provided several time steps ago) passes through the recursion more often. There is no compelling reason why older data would require more processing steps (network iterations) than newly received data. More likely, the recurrent weights in an RNN learn during the training phase to select what information they need to pass onwards, and what they need to discard. Indeed, this quality forms the core motivation of the so-called Long Short-term memory (LSTM) architecture [11], a special form of RNN.

Figure 1: Schematic illustration of a DRNN. Arrows represent connection matrices, and white, black and grey circles represent input frames, hidden states, and output frames respectively. Left: Standard RNN, folded out in time. Middle: DRNN of 3 layers folded out in time. Each layer can be interpreted as an RNN that receives the time series of the previous layer as input. Right: The two alternative architectures that we study in this paper, where the looped arrows represent the recurrent weights. Either only the top layer connects to the output (DRNN-1O), or all layers do (DRNN-AO).
One potential weakness of a common RNN is that we may need complex, hierarchical processing of the current network input, but this information only passes through one layer of processing before going to the output. Secondly, we may need to process the time series at several time scales. If we consider for example speech, at the lowest level it is built up of phonemes, which exist on a very short time-scale. Next, on increasingly longer time scales, there are syllables, words, phrases, clauses, sentences, and at the highest level for instance a full conversation. Common RNNs do not explicitly support multiple time scales, and any temporal hierarchy that is present in the input signal needs to be embedded implicitly in the network dynamics.

In past research, some hierarchical architectures employing RNNs have been proposed [3, 5, 6]. Reference [5] is especially interesting in that the authors construct a hierarchy of RNNs which all operate on different time-scales (using subsampling); they limit themselves to artificial tasks, however. The architecture we study in this paper has been used in [8]. There, the authors employ stacked bi-directional LSTM networks and train them on the TIMIT phoneme dataset [7], on which they obtain state-of-the-art performance. Their paper is strongly focused on reaching good performance, however, and little analysis of the actual contribution of the network architecture is provided.

The architecture we study in this paper is essentially a common DNN (a multilayer perceptron) with temporal feedback loops in each layer, which we call a deep recurrent neural network (DRNN). At each network update, new information travels up the hierarchy, and temporal context is added in each layer (see Figure 1). This basically combines the concept of DNNs with RNNs. Each layer in the hierarchy is a recurrent neural network, and each subsequent layer receives the hidden state of the previous layer as its input time series.
As we will show, stacking RNNs automatically creates different time scales at different levels, and therefore a temporal hierarchy. In this paper we will study character-based language modelling and provide a more in-depth analysis of how the network architecture relates to the nature of the task. We suspect that DRNNs are well-suited to capture temporal hierarchies, and character-based language modeling is an excellent real-world task to validate this claim, as the distribution of characters is highly nonlinear and covers both short- and long-term dependencies. As we will show, DRNNs embed these different timescales directly in their structure, and they are able to model long-term dependencies. Using only stochastic gradient descent (SGD) we are able to get state-of-the-art performance for recurrent networks on a Wikipedia-based text corpus, which was previously only obtained using the far more advanced Hessian-free training algorithm [19].

2 Deep RNNs

2.1 Hidden state evolution

We define a DRNN with L layers, and N neurons per layer. Suppose we have an input time series s(t) of dimensionality N_in, and a target time series y*(t). In order to simplify notation we will not explicitly write out bias terms, but augment the corresponding variables with an element equal to one. We use the notation x̄ = [x; 1]. We denote the hidden state of the i-th layer with a_i(t). Its update equation is given by:

    a_i(t) = tanh(W_i a_i(t − 1) + Z_i ā_{i−1}(t))   if i > 1,
    a_i(t) = tanh(W_i a_i(t − 1) + Z_i s̄(t))        if i = 1.

Here, W_i and Z_i are the recurrent connections and the connections from the lower layer or input time series, respectively. A schematic drawing of the DRNN is presented in Figure 1. Note that the network structure inherently offers different time scales. The bottom layer has fading memory of the input signal.
The next layer has a fading memory of the hidden state of the bottom layer, and consequently a fading memory of the input which reaches further into the past, and so on for each additional layer.

2.2 Generating output

The task we consider in this paper is a classification task, and we use a softmax function to generate output. The DRNN generates an output which we denote by y(t). We will consider two scenarios: one where only the highest layer in the hierarchy couples to the output (DRNN-1O), and one where all layers do (DRNN-AO). In the two respective cases, y(t) is given by:

y(t) = softmax(U ā_L(t)),   (1)

where U is the matrix with the output weights, and

y(t) = softmax( Σ_{i=1}^{L} U_i ā_i(t) ),   (2)

such that U_i corresponds to the output weights of the i-th layer. The two resulting architectures are depicted in the right part of Figure 1. The reason that we use output connections at each layer is twofold. First, like any deep architecture, DRNNs suffer from pathological curvature in the cost function. If we use backpropagation through time, the error will propagate from the top layer down the hierarchy, but it will have diminished in magnitude once it reaches the lower layers, such that they are not trained effectively. Adding output connections at each layer mitigates this problem to some degree, as the training error reaches all layers directly. Secondly, having output connections at each layer provides us with a crude measure of each layer's role in solving the task. We can for instance measure the decay of performance when leaving out an individual layer's contribution, or study which layer contributes most to predicting characters in specific instances.

2.3 Training setup

In all experiments we used stochastic gradient descent. To avoid extremely large gradients near bifurcations, we applied the often-used trick of normalising the gradient before using it for weight updates.
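The two output modes, Eq. (1) for the DRNN-1O and Eq. (2) for the DRNN-AO, differ only in which layers contribute logits before the softmax. A minimal sketch under our own naming:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def output_1o(a, U_top):
    """Eq. (1): only the top hidden state a_L feeds the softmax."""
    return softmax(U_top @ np.append(a[-1], 1.0))

def output_ao(a, U):
    """Eq. (2): logits are the sum of every layer's output contribution."""
    logits = sum(U_i @ np.append(a_i, 1.0) for U_i, a_i in zip(U, a))
    return softmax(logits)
```

Both return a probability distribution over the character set; the DRNN-AO variant simply adds the per-layer contributions before normalising.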
This simple heuristic seems to be effective at preventing gradient explosions and sudden jumps of the parameters, while not diminishing the end performance. We write the number of batches we train on as T. The learning rate is set at an initial value η0 and drops linearly with each subsequent weight update. Suppose θ(j) is the set of all trainable parameters after j updates, and ∇θ(j) is the gradient of a cost function w.r.t. this parameter set, as computed on a randomly sampled part of the training set. Parameter updates are given by:

θ(j+1) = θ(j) - η0 (1 - j/T) ∇θ(j) / ||∇θ(j)||.   (3)

In the case where we use output connections at the top layer only, we use an incremental layer-wise method to train the network, which was necessary to reach good performance. We add layers one by one, and at all times an output layer only exists at the current top layer. When adding a layer, the previous output weights are discarded and new output weights are initialised connecting from the new top layer. In this way each layer has at least some time during training in which it is directly coupled to the output, and as such can be trained effectively. Over the course of each of these training stages we used the same training strategy as described before: training the full network with BPTT and linearly reducing the learning rate to zero before a new layer is added. Notice the difference from common layer-wise training schemes, where only a single layer is trained at a time. We always train the full network after each layer is added.

3 Text prediction

In this paper we consider next character prediction on a Wikipedia text corpus [19] which was made publicly available1. The total set is about 1.4 billion characters long, of which the final 10 million are used for testing. Each character is represented by one-out-of-N coding.
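Update rule (3), a normalised gradient combined with a linearly decaying learning rate, is straightforward to implement. A sketch under our own naming:

```python
import numpy as np

def sgd_update(theta, grad, j, T, eta0):
    """One update of Eq. (3): theta(j+1) = theta(j) - eta0*(1 - j/T) * grad/||grad||.

    Normalising the gradient caps the step length at the current learning
    rate, which guards against the gradient explosions near bifurcations
    mentioned in the text.
    """
    lr = eta0 * (1.0 - j / T)
    return theta - lr * grad / np.linalg.norm(grad)
```

With η0 = 0.5 the very first step has length exactly 0.5 regardless of the gradient's magnitude, and the step length shrinks linearly to zero as j approaches T.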
We used 95 of the most common characters2 (including small letters, capitals, numbers and punctuation), and one 'unknown' character, used to map any character not part of the 95 common ones, e.g. Cyrillic and Chinese characters. We need on the order of 10 days to train a single network, largely due to the difficulty of exploiting massively parallel computing for SGD. Therefore we only tested three network instantiations3. Each experiment was run on a single GPU (NVIDIA GeForce GTX 680, 4GB RAM). The task is as follows: given a sequence of text, predict the probability distribution of the next character. The performance metric used is the average number of bits-per-character (BPC), given by BPC = -⟨log2 p_c⟩, where p_c is the probability, as predicted by the network, of the correct next character.

3.1 Network setups

The challenge in character-level language modelling lies in the great diversity and sheer number of words that are used. In the case of Wikipedia this difficulty is exacerbated by the large number of names of persons and places, scientific jargon, etc. In order to capture this diversity we need large models with many trainable parameters. All our networks have a number of neurons selected such that in total they each had approximately 4.9 million trainable parameters, which allowed us to make a comparison to other published work [19]. We considered three networks: a common RNN (2119 units), a 5-layer DRNN-1O (727 units per layer), and a 5-layer DRNN-AO (706 units per layer)4. Initial learning rates η0 were chosen at 0.5, except for the top layer of the DRNN-1O, where we picked η0 = 0.25 (as we observed that the nodes started to saturate if we used too high a learning rate). The RNN and the DRNN-AO were trained over T = 5 × 10^5 parameter updates. The network with output connections only at the top layer had a different number of parameter updates per training stage, T = {0.5, 1, 1.5, 2, 2.5} × 10^5, for the 5 layers respectively.
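The BPC metric defined above, BPC = -⟨log2 p_c⟩, can be computed directly from the probabilities the model assigned to the correct next characters (a sketch; the helper name is ours):

```python
import numpy as np

def bits_per_character(p_correct):
    """Average negative base-2 log-probability of the correct next character.

    p_correct: array holding, for each position of the test text, the model
    probability of the character that actually occurred there.
    """
    return float(-np.mean(np.log2(p_correct)))
```

As a sanity check: a model that always assigns probability 0.5 to the right character scores exactly 1 BPC, and one that always assigns 0.25 scores 2 BPC.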
As such, for each additional layer the network is trained for more iterations. All gradients are computed using backpropagation through time (BPTT) on 75 randomly sampled sequences in parallel, drawn from the training set. All sequences were 250 characters long, and the first 50 characters were disregarded during the backwards pass, as they may have insufficient temporal context. In the end the DRNN-AO sees the full training set about 7 times in total, and the DRNN-1O about 10 times. The matrices W_i and Z_{i>1} were initialised with elements drawn from N(0, N^{-1/2}). The input weights Z_1 were drawn from N(0, 1). We chose to have the same number of neurons for every layer, mostly to reduce the number of parameters that need to be optimised. Output weights were always initialised to zero.

1 http://www.cs.toronto.edu/~ilya/mrnns.tar.gz
2 In [19] only 86 characters are used, but most of the additional characters in our set are exceedingly rare, such that cross-entropy is not affected meaningfully by this difference.
3 In our experience the networks are so large that there is very little difference in performance for different initialisations.
4 The decision for 5 layers is based on a previous set of experiments (results not shown).

Model                               BPC (test)
RNN                                 1.610
DRNN-AO                             1.557
DRNN-1O                             1.541
MRNN                                1.55
PAQ                                 1.51
Hutter Prize (current record) [12]  1.276
Human level (estimated) [18]        0.6 - 1.3

Table 1: Results on the Wikipedia character prediction task. The first three numbers are our measurements, the next two the results on the same dataset found in [19]. The bottom two numbers were not measured on the same text corpus.

Figure 2: Increase in BPC on the test set from removing the output contribution of a single layer of the DRNN-AO.

3.2 Results

Performance and text generation

The resulting BPCs for our models and comparative results in the literature are shown in Table 1.
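The initialisation described above can be sketched as follows. Naming is ours, and we read N(0, N^{-1/2}) as a normal with standard deviation N^{-1/2}; this is an assumption, since the text does not state whether the second argument denotes a standard deviation or a variance:

```python
import numpy as np

def init_drnn(n_layers, N, N_in, N_out, rng):
    """Weight initialisation as described in the text (see caveat above)."""
    # Recurrent weights W_i: N(0, N^{-1/2}).
    W = [rng.normal(0.0, N ** -0.5, size=(N, N)) for _ in range(n_layers)]
    # Input weights Z_1: N(0, 1); bottom-up weights Z_{i>1}: N(0, N^{-1/2}).
    # The +1 column absorbs the bias of the augmented input [x; 1].
    Z = [rng.normal(0.0, 1.0, size=(N, N_in + 1))]
    Z += [rng.normal(0.0, N ** -0.5, size=(N, N + 1)) for _ in range(n_layers - 1)]
    # Output weights are always initialised to zero.
    U = [np.zeros((N_out, N + 1)) for _ in range(n_layers)]
    return W, Z, U
```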
The common RNN performs worst and the DRNN-1O best, with the DRNN-AO slightly worse. Both DRNNs perform well and are roughly similar to the state-of-the-art for recurrent networks with the same number of trainable parameters5, which was established with a multiplicative RNN (MRNN), trained with Hessian-free optimization over the course of 5 days on a cluster of 8 GPUs6. The same authors also used the PAQ compression algorithm [14] as a comparison, which we included in the list. In the table we also included two results which were not measured on the same dataset (or even using the same criteria), but which give an estimate of the true number of BPC for natural text. To check how each layer influences performance in the case of the DRNN-AO, we performed tests in which the output of a single layer is set to zero. This can serve as a sanity check to ensure that the model is trained effectively. If, for instance, removing the top layer's output contribution does not significantly harm performance, this essentially means that it is redundant (as it does no preprocessing for higher layers). Furthermore, we can use this test to get an overall indication of what role a particular layer plays in producing output. Note that these experiments have only limited interpretability, as the individual layer contributions are likely not independent. Perhaps some layers provide a strong negative output bias which compensates for the strong positive bias of another, or strong synergies might exist between them. First we measure the increase in test BPC from removing a single layer's output contribution, which can then be used as an indicator of the importance of this layer for directly generating output. Figure 2 shows the result. The contribution of the top layer is the most important, and that of the bottom layer the second most important. The intermediate layers contribute less to the direct output and seem to be more important in preprocessing the data for the top layer.
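The single-layer ablation behind Figure 2 amounts to zeroing one layer's term in the sum of Eq. (2) before the softmax. A minimal sketch (names are ours):

```python
import numpy as np

def ao_logits(a, U, drop=None):
    """Summed per-layer output contributions of a DRNN-AO (the argument of
    the softmax in Eq. (2)), optionally zeroing the term of layer `drop`.

    Comparing test BPC computed with drop=k against drop=None gives the
    per-layer BPC increase of the kind plotted in Figure 2.
    """
    return sum(U[i] @ np.append(a[i], 1.0)
               for i in range(len(a)) if i != drop)
```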
As in [19], we also used the networks in a generative mode, where we use the output probabilities of the DRNN-AO to recursively sample a new input character in order to complete a given sentence. We too used the phrase "The meaning of life is ". We performed three tests: first we generated text with an intact network, next we see how the text quality deteriorates when we leave out the contributions of the bottom and top layer respectively7 (by setting the contribution equal to zero before adding up the layer contributions and applying the softmax function). Resulting text samples are shown in Table 2.

5 This similarity might reflect limitations caused by the network size. We also performed a long-term experiment with a DRNN-AO with 9.6 million trainable parameters, which resulted in a test BPC of 1.472 after 1,000,000 weight updates (training for over a month). More parameters offer more raw storage power, and hence provide a straightforward manner in which to increase performance.
6 This would suggest a computational cost of roughly 4 times ours, but an honest comparison is hard to make as the authors did not specify explicitly how much data their training algorithm went through in total. Likely the cost ratio is smaller than 4, as we use a more modern GPU.
7 Leaving out the contributions of intermediate layers only has a minimal effect on the subjective quality of the produced text.

Table 2: Three examples of text, generated by the DRNN-AO. The first sample is generated by the intact network, the second by leaving out the contribution of the first layer, and the third by leaving out the contribution of the top layer.

Sample 1 (intact network): The meaning of life is the ”decorator of Rose”. The Ju along with its perspective character survive, which coincides with his eromine, water and colorful art called ”Charles VIII”.??In ”Inferno” (also 220: ”The second Note Game Magazine”, a comic at the Old Boys at the Earl of Jerusalem for two years) focused on expanded domestic differences from 60 mm Oregon launching, and are permitted to exchange guidance.

Sample 2 (first layer removed): The meaning of life is impossible to unprecede ?Pok.{* PRER)!—KGOREMFHEAZ CTX=R M —S=6 5?&+——=7xp*= 5FJ4—13/TxI JX=—b28O=&4+E9F=&Z26 —R&N== Z8&A=58=84&T=RESTZINA=L&95Y 2O59&FP85=&&#=&H=S=Z IO =T @—CBOM=6&9Y1= 9 5

Sample 3 (top layer removed): The meaning of life is man sistasteredsteris bus and nuster eril”n ton nis our ousNmachiselle here hereds?d toppstes impedred wisv.”-hor ens htls betwez rese, and Intantored wren in thoug and elit toren on the marcel, gos infand foldedsamps que help sasecre hon Roser and ens in respoted we frequen enctuivat herde pitched pitchugismissedre and loseflowered

Figure 3: Left panel: normalised average distance between hidden states of a perturbed and unperturbed network as a function of presented characters. The perturbation is a single typo at the first character. The coloured full lines are for the individual layers of the DRNN-1O, and the coloured dashed lines are those of the layers of the DRNN-AO. Distances are normalised on the distance of the occurrence of the typo. Right panel: average increase in BPC between a perturbed and unperturbed network as a function of presented characters. The perturbation is by replacing the initial context (see text), and the result is shown for the text having switched back to the correct context. Coloured lines correspond to the individual contributions of the layers in the DRNN-AO.
The text sample of the intact network shows short-term correct grammar, phrases, punctuation and mostly existing words. The text sample with the bottom layer output contribution disabled very rapidly becomes 'unstable', and starts to produce long strings of rare characters, indicating that the contribution of the bottom layer is essential in modeling some of the most basic statistics of the Wikipedia text corpus. We verified this further by using such a random string of characters as initialization of the intact network, and observed that it consistently fell back to producing 'normal' text. The text sample with the top layer disabled is interesting in the sense that it produces roughly word-length strings of common characters (letters and spaces), of which substrings resemble common syllables. This suggests that the top layer output contribution captures text statistics longer than word-length sequences.

Figure 4: Network output example for a particularly long phrase between parentheses (296 characters), sampled from the test set. The vertical dashed lines indicate the opening and closing parentheses in the input text sequence. Top panel: output traces for the closing parenthesis character for each layer in the DRNN-AO. Coloring is identical to that of Figure 3. Bottom panel: total predicted output probability of the closing parenthesis sign of the DRNN-AO.

Time scales

In order to gauge at what time scale each individual layer operates, we have performed several experiments on the models. First of all we considered an experiment in which we run the DRNN on two identical text sequences from the test set, but after 100 characters we introduce a typo in one of them (by replacing it by a character randomly sampled from the full set). We record the hidden states after the typo as a function of time for both the perturbed and unperturbed network
and measure the Euclidean distance between them as a function of time, to see how long the effect of the typo remains present in each layer. Next we measured what length of context the DRNNs effectively employ. In order to do so we measured the average difference in BPC between normal text and a perturbed copy, in which we replaced the first 100 characters by text randomly sampled from elsewhere in the test set. This gives an indication of how long the lack of correct context lingers after the text sequence has switched. All measurements were averaged over 50,000 instances. Results are shown in Figure 3. The left panel shows how fast each individual layer in the DRNNs forgets the typo perturbation. It appears that the layer-wise time scales behave quite differently in the DRNN-1O and the DRNN-AO. The DRNN-AO has very short time scales in the three bottom layers, and longer memory only appears in the two top ones, whereas in the DRNN-1O the bottom two layers have relatively short time scales, but the top three layers have virtually the same, very long time scale. This is almost certainly caused by the way in which we trained the DRNN-1O, such that intermediate layers already assumed long memory when they were at the top of the hierarchy. The effect of the perturbation on the normal RNN is also shown. Even though it decays faster at the start, the effect of the perturbation remains present in the network for a long period as well. The right panel of Figure 3 depicts the effect of switching the context on the actual prediction accuracy, which gives some insight into the actual length of the context used by the networks. Both DRNNs seem to recover more slowly from the context switch than the RNN, indicating that they employ a longer context for prediction.
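The typo experiment behind the left panel of Figure 3 can be sketched generically: run a clean and a perturbed copy of any layered state-update function side by side and record per-layer hidden-state distances over time. Names are ours; `step` stands for one time-step update returning the list of per-layer hidden states:

```python
import numpy as np

def perturbation_traces(step, a0, inputs_clean, inputs_typo):
    """Run a clean and a typo-perturbed copy of a network side by side.

    Returns an array of shape (time, layers) with the Euclidean distance
    between the two hidden states of each layer, normalised per layer by
    the distance at the time of the typo (t = 0), as in Figure 3.
    """
    a, b = a0, a0
    dists = []
    for x_clean, x_typo in zip(inputs_clean, inputs_typo):
        a = step(a, x_clean)
        b = step(b, x_typo)
        dists.append([np.linalg.norm(ai - bi) for ai, bi in zip(a, b)])
    dists = np.asarray(dists)
    return dists / dists[0]   # assumes a nonzero distance at the typo
```

For a layer with fading memory, the normalised trace decays toward zero; the decay rate is a proxy for that layer's time scale.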
The time scales of the individual layers of the DRNN-AO are also depicted (by using the perturbed hidden states of an individual layer and the unperturbed states of the other layers for generating output), which largely confirms the result from the typo-perturbation test. The results shown here verify that a temporal hierarchy develops when training a DRNN. We have also performed a test to see what the time scales of an untrained DRNN are (by performing the typo test), which showed that here the differences in time scales for each layer were far smaller (results not shown). The big differences we see in the trained DRNNs are hence a learned property.

Long-term interactions: parentheses

In order to get a clearer picture of some of the long-term dependencies the DRNNs have learned, we look at their capability of closing parentheses, even when the phrase between parentheses is long. To see how well the networks remember the opening of a parenthesis, we observe the DRNN-AO output for the closing-parenthesis character8. In Figure 4 we show an example for an especially long phrase between parentheses. We show both the output probability and the individual layers' output contribution for the closing parenthesis (before they are added up and sent to the softmax function). The output of the top layer for the closing parenthesis is increased strongly for the whole duration of the phrase, and is reduced immediately after it is closed. The total output probability shows a similar pattern, showing momentary high probabilities for the closing parenthesis only during the parenthesized phrase, and extremely low probabilities elsewhere. These results are quite consistent over the test set, with some notable exceptions.

8 Results on the DRNN-1O are qualitatively similar.
When several sentences appear between parentheses (which occasionally happens in the text corpus), the network reduces the closing-bracket probability (i.e., it essentially 'forgets' it) as soon as a full stop appears9. Similarly, if a sentence starts with an opening bracket it will not increase the closing-parenthesis probability at all, essentially ignoring it. Furthermore, the model seems unable to cope with nested parentheses (perhaps because they are quite rare). The fact that the DRNN is able to remember the opening parenthesis for sequences longer than it has been trained on indicates that it has learned to model parentheses as a pseudo-stable attractor-like state, rather than memorizing parenthesized phrases of different lengths. In order to see how well the networks can close parentheses when they operate in the generative mode, we performed a test in which we initialise the network with a 100-character phrase drawn from the test set ending in an opening bracket, and observe in how many cases the network generates a closing bracket. A test is deemed unsuccessful if the closing parenthesis does not appear within 500 characters, or if the network produces a second opening parenthesis. We averaged the results over 2000 initializations. The DRNN-AO performs best in this test, failing in only 12% of the cases. The DRNN-1O fails in 16%, and the RNN in 28%. The results presented in this section hint at the fact that DRNNs might find it easier to learn long-term relations between input characters than common RNNs. This motivates testing DRNNs on the tasks introduced in [11]. These tasks are challenging in the sense that they require retaining a very long memory of past input while being driven by so-called distractor input. It has been shown that LSTMs, and later common RNNs trained with Hessian-free methods [16] and Echo State Networks [13], are able to model such long-term dependencies.
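The generative parenthesis-closing test has a simple skeleton: sample characters until a closing bracket (success), a second opening bracket (failure), or the 500-character budget runs out (failure). A sketch with our own names; `sample_next` stands for drawing one character from the model's output distribution given the context:

```python
def closes_parenthesis(sample_next, context, max_chars=500):
    """Success/failure criterion of the generative parenthesis test.

    `context` should end in an opening bracket; the test fails if a second
    '(' appears or if ')' has not appeared within `max_chars` samples.
    """
    for _ in range(max_chars):
        c = sample_next(context)
        if c == ")":
            return True
        if c == "(":
            return False
        context += c
    return False
```

Averaging the boolean result over many sampled 100-character initializations yields the failure rates quoted above (12%, 16%, and 28%).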
These tasks, however, purely focus on memory depth, and very little additional processing is required, let alone hierarchical processing. Therefore we do not suspect that DRNNs offer a strong advantage over common RNNs for these tasks in particular.

4 Conclusions and Future Work

We have shown that using a deep recurrent neural network (DRNN) is beneficial for character-level language modeling, reaching state-of-the-art performance for recurrent neural networks on a Wikipedia text corpus and confirming the observation that deep recurrent architectures can boost performance [8]. We also presented experimental evidence for the emergence of a hierarchy of time scales in the layers of the DRNNs. Finally, we have demonstrated that in certain cases the DRNNs can have an extensive memory of several hundred characters. The training method we used for the DRNN-1O indicates that supervised pre-training for deep architectures is helpful, which on its own can provide an interesting line of future research. Another is to extend common pre-training schemes, such as the deep belief network approach [9] and deep auto-encoders [10, 20], to DRNNs. The results in this paper can potentially contribute to the ongoing debate on training algorithms, especially whether SGD or second-order methods are more suited for large-scale machine learning problems [2]. Therefore, applying second-order techniques such as Hessian-free training [15] to DRNNs seems an attractive line of future research in order to obtain a solid comparison.

Acknowledgments

This work is partially supported by the interuniversity attraction pole (IAP) Photonics@be of the Belgian Science Policy Office and the ERC NaResCo Starting grant. We would like to thank Sander Dieleman and Philemon Brakel for helping with implementations. All experiments were performed using Theano [1].

9 It is consistently resilient against full stops appearing in abbreviations such as 'e.g.,' and 'dr.', though.

References

[1] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010.
[2] L. Bottou and O. Bousquet. The tradeoffs of large-scale learning. Optimization for Machine Learning, page 351, 2011.
[3] W.-Y. Chen, Y.-F. Liao, and S.-H. Chen. Speech recognition with hierarchical recurrent neural networks. Pattern Recognition, 28(6):795–805, 1995.
[4] D. Ciresan, U. Meier, L. Gambardella, and J. Schmidhuber. Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220, 2010.
[5] S. El Hihi and Y. Bengio. Hierarchical recurrent neural networks for long-term dependencies. Advances in Neural Information Processing Systems, 8:493–499, 1996.
[6] S. Fernández, A. Graves, and J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI 2007, Hyderabad, India, January 2007.
[7] J. Garofolo et al. TIMIT Acoustic-Phonetic Continuous Speech Corpus. Linguistic Data Consortium, 1993.
[8] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In To appear in ICASSP 2013, 2013.
[9] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] G. E. Hinton. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] M. Hutter. The human knowledge compression prize, 2006.
[13] H. Jaeger. Long short-term memory in echo state networks: Details of a simulation study. Technical report, Jacobs University, 2012.
[14] M. Mahoney. Adaptive weighing of context models for lossless data compression. Florida Tech., Melbourne, USA, Tech. Rep., 2005.
[15] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning, pages 735–742, 2010.
[16] J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning, volume 46, page 68. Omnipress, Madison, WI, 2011.
[17] A. Mohamed, G. Dahl, and G. Hinton. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):14–22, 2012.
[18] C. E. Shannon. Prediction and entropy of printed English. Bell System Technical Journal, 30(1):50–64, 1951.
[19] I. Sutskever, J. Martens, and G. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, pages 1017–1024, 2011.
[20] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103, 2008.
Low-Rank Matrix and Tensor Completion via Adaptive Sampling

Akshay Krishnamurthy, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, akshaykr@cs.cmu.edu
Aarti Singh, Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, aartisingh@cs.cmu.edu

Abstract

We study low rank matrix and tensor completion and propose novel algorithms that employ adaptive sampling schemes to obtain strong performance guarantees. Our algorithms exploit adaptivity to identify entries that are highly informative for learning the column space of the matrix (tensor) and consequently, our results hold even when the row space is highly coherent, in contrast with previous analyses. In the absence of noise, we show that one can exactly recover an n × n matrix of rank r from merely Ω(n r^{3/2} log(r)) matrix entries. We also show that one can recover an order T tensor using Ω(n r^{T-1/2} T^2 log(r)) entries. For noisy recovery, our algorithm consistently estimates a low rank matrix corrupted with noise using Ω(n r^{3/2} polylog(n)) entries. We complement our study with simulations that verify our theory and demonstrate the scalability of our algorithms.

1 Introduction

Recently, the machine learning and signal processing communities have focused considerable attention toward understanding the benefits of adaptive sensing. This theme is particularly relevant to modern data analysis, where adaptive sensing has emerged as an efficient alternative to obtaining and processing the large data sets associated with scientific investigation. These empirical observations have led to a number of theoretical studies characterizing the performance gains offered by adaptive sensing over conventional, passive approaches. In this work, we continue in that direction and study the role of adaptive data acquisition in low rank matrix and tensor completion problems.
Our study is motivated not only by prior theoretical results in favor of adaptive sensing but also by several applications where adaptive sensing is feasible. In recommender systems, obtaining a measurement amounts to asking a user about an item, an interaction that has been deployed in production systems. Another application pertains to network tomography, where a network operator is interested in inferring latencies between hosts in a communication network while injecting few packets into the network. The operator, being in control of the network, can adaptively sample the matrix of pair-wise latencies, potentially reducing the total number of measurements. In particular, the operator can obtain full columns of the matrix by measuring from one host to all others, a sampling strategy we will exploit in this paper. Yet another example centers around gene expression analysis, where the object of interest is a matrix of expression levels for various genes across a number of conditions. There are typically two types of measurements: low-throughput assays provide highly reliable measurements of single entries in this matrix while high-throughput microarrays provide expression levels of all genes of interest across operating conditions, thus revealing entire columns. The completion problem can be seen as a strategy for learning the expression matrix from both low- and high-throughput data while minimizing the total measurement cost.

1.1 Contributions

We develop algorithms with theoretical guarantees for three low-rank completion problems. The algorithms find a small subset of columns of the matrix (tensor) that can be used to reconstruct or approximate the matrix (tensor). We exploit adaptivity to focus on highly informative columns, and this enables us to do away with the usual incoherence assumptions on the row space while achieving competitive (or in some cases better) sample complexity bounds. Specifically our results are: 1.
In the absence of noise, we develop a streaming algorithm that enjoys both low sample requirements and computational overhead. In the matrix case, we show that Ω(n r^{3/2} log r) adaptively chosen samples are sufficient for exact recovery, improving on the best known bound of Ω(n r^2 log^2 n) in the passive setting [21]. This also gives the first guarantee for matrix completion with coherent row space.

2. In the tensor case, we establish that Ω(n r^{T-1/2} T^2 log r) adaptively chosen samples are sufficient for recovering an n × · · · × n order T tensor of rank r. We complement this with a necessary condition for tensor completion under random sampling, showing that our adaptive strategy is competitive with any passive algorithm. These are the first sample complexity upper and lower bounds for exact tensor completion.

3. In the noisy matrix completion setting, we modify the adaptive column subset selection algorithm of Deshpande et al. [10] to give an algorithm that finds a rank-r approximation to a matrix using Ω(n r^{3/2} polylog(n)) samples. As before, the algorithm does not require an incoherent row space, but we are no longer able to process the matrix sequentially.

4. Along the way, we improve on existing results for subspace detection from missing data, the problem of testing if a partially observed vector lies in a known subspace.

2 Related Work

The matrix completion problem has received considerable attention in recent years. A series of papers [6, 7, 13, 21], culminating in Recht's elegant analysis of the nuclear norm minimization program, address the exact matrix completion problem through the framework of convex optimization, establishing that Ω((n1 + n2) r max{µ0, µ1^2} log^2(n2)) randomly drawn samples are sufficient to exactly identify an n1 × n2 matrix with rank r. Here µ0 and µ1 are parameters characterizing the incoherence of the row and column spaces of the matrix, which we will define shortly.
Candes and Tao [7] proved that under random sampling Ω(n1 r µ0 log(n2)) samples are necessary, showing that nuclear norm minimization is near-optimal. The noisy matrix completion problem has also received considerable attention [5, 17, 20]. The majority of these results also involve some parameter that quantifies how much information a single observation reveals, in the same vein as incoherence. Tensor completion, a natural generalization of matrix completion, is less studied. One challenge stems from the NP-hardness of computing most tensor decompositions, pushing researchers to study alternative structure-inducing norms in lieu of the nuclear norm [11, 22]. Both papers derive algorithms for tensor completion, but neither provides sample complexity bounds for the noiseless case. Our approach involves adaptive data acquisition, and consequently our work is closely related to a number of papers focusing on using adaptive measurements to estimate a sparse vector [9, 15]. In these problems, where the sparsity basis is known a priori, we have a reasonable understanding of how adaptive sampling can lead to performance improvements. As a low rank matrix is sparse in its unknown eigenbasis, the completion problem is coupled with learning this basis, which poses a new challenge for adaptive sampling procedures. Another relevant line of work stems from the matrix approximations literature. Broadly speaking, this research is concerned with efficiently computing a structured matrix, i.e. sparse or low rank, that serves as a good approximation to a fully observed input matrix. Two methods that apply to the missing data setting are the Nystrom method [12, 18] and entrywise subsampling [1]. While the sample complexity bounds match ours, the analysis for the Nystrom method has focused on positive-semidefinite kernel matrices and requires incoherence of both the row and column spaces.
On the other hand, entrywise subsampling is applicable, but the guarantees are weaker than ours. It is also worth briefly mentioning the vast body of literature on column subset selection, the task of approximating a matrix by projecting it onto a few of its columns. While the best algorithms, namely volume sampling [14] and sampling according to statistical leverages [3], do not seem to be readily applicable to the missing data setting, some algorithms are. Indeed, our procedure for noisy matrix completion is an adaptation of an existing column subset selection procedure [10].

Our techniques are also closely related to ideas employed for subspace detection (testing whether a vector lies in a known subspace) and subspace tracking (learning a time-evolving low-dimensional subspace from vectors lying close to that subspace). Balzano et al. [2] prove guarantees for subspace detection with a known subspace and a partially observed vector, and we will improve on their result en route to establishing our guarantees. Subspace tracking from partial information has also been studied [16], but little is known theoretically about this problem.

3 Definitions and Preliminaries

Before presenting our algorithms, we clarify some notation and definitions. Let M ∈ R^{n_1×n_2} be a rank-r matrix with singular value decomposition UΣV^T. Let c_1, . . . , c_{n_2} denote the columns of M. Let M ∈ R^{n_1×...×n_T} denote an order-T tensor with canonical decomposition:

M = Σ_{k=1}^r a_k^{(1)} ⊗ a_k^{(2)} ⊗ . . . ⊗ a_k^{(T)}    (1)

where ⊗ is the outer product. Define rank(M) to be the smallest value of r that establishes this equality. Note that the vectors {a_k^{(t)}}_{k=1}^r need not be orthogonal, nor even linearly independent.

The mode-t subtensors of M, denoted M_i^{(t)}, are order-(T−1) tensors obtained by fixing the i-th coordinate of the t-th mode. For example, if M is an order-3 tensor, then the M_i^{(3)} are the frontal slices.
We represent a d-dimensional subspace U ⊂ R^n as a set of orthonormal basis vectors U = {u_i}_{i=1}^d, and in some cases as an n × d matrix whose columns are the basis vectors. The interpretation will be clear from context. Define the orthogonal projection onto U as P_U v = U(U^T U)^{−1} U^T v. For a set Ω ⊂ [n]¹, c_Ω ∈ R^{|Ω|} is the vector whose elements are c_i, i ∈ Ω, indexed lexicographically. Similarly, the matrix U_Ω ∈ R^{|Ω|×d} has rows indexed by Ω lexicographically. Note that if U is an orthobasis for a subspace, U_Ω is a |Ω| × d matrix with columns u_{iΩ}, u_i ∈ U, rather than a set of orthonormal basis vectors. In particular, the matrix U_Ω need not have orthonormal columns.

These definitions extend to the tensor setting with slight modifications. We use the vec operation to unfold a tensor into a single vector and define the inner product ⟨x, y⟩ = vec(x)^T vec(y). For a subspace U ⊂ R^{⊗n_i}, we write it as a (∏ n_i) × d matrix whose columns are vec(u_i), u_i ∈ U. We can then define projections and subsampling as we did in the vector case.

As in recent work on matrix completion [7, 21], we will require a certain amount of incoherence between the column space associated with M (or M) and the standard basis.

Definition 1. The coherence of an r-dimensional subspace U ⊂ R^n is:

μ(U) := (n/r) max_{1≤j≤n} ||P_U e_j||^2    (2)

where e_j denotes the j-th standard basis element.

In previous analyses of matrix completion, the incoherence assumption is that both the row and column spaces of the matrix have coherences upper bounded by μ_0. When both spaces are incoherent, each entry of the matrix reveals roughly the same amount of information, so there is little to be gained from adaptive sampling, which typically involves looking for highly informative measurements. Thus the power of adaptivity for these problems should center around relaxing the incoherence assumption, which is the direction we take in this paper.
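The coherence of Definition 1 is simple to compute once an orthonormal basis is in hand; as a small illustrative sketch (not from the paper), assuming U is given as an n × r matrix with orthonormal columns:

```python
import numpy as np

def coherence(U):
    """mu(U) = (n/r) * max_j ||P_U e_j||^2 (Definition 1).

    Assumes U is an n x r matrix with orthonormal columns, so that
    ||P_U e_j||^2 is the squared norm of the j-th row of U.
    """
    n, r = U.shape
    return (n / r) * np.max(np.sum(U**2, axis=1))
```

The value ranges from 1 (mass spread evenly over coordinates) to n/r (subspace aligned with standard basis vectors).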
Unfortunately, even under adaptive sampling, it is impossible to identify a rank-one matrix that is zero in all but one entry without observing the entire matrix, implying that we cannot completely eliminate the assumption. Instead, we will retain incoherence on the column space, but remove the restrictions on the row space.

¹We write [n] for {1, . . . , n}.

Algorithm 1: Sequential Tensor Completion (M, {m_t}_{t=1}^T)
1. Let U = ∅.
2. Randomly draw entries Ω ⊂ ∏_{t=1}^{T−1} [n_t] uniformly with replacement w.p. m_T / ∏_{t=1}^{T−1} n_t.
3. For each mode-T subtensor M_i^{(T)} of M, i ∈ [n_T]:
   (a) If ||M_{iΩ}^{(T)} − P_{U_Ω} M_{iΩ}^{(T)}||_2^2 > 0:
      i. M̂_i^{(T)} ← recurse on (M_i^{(T)}, {m_t}_{t=1}^{T−1})
      ii. U_i ← P_{U⊥} M̂_i^{(T)} / ||P_{U⊥} M̂_i^{(T)}||. U ← U ∪ {U_i}.
   (b) Otherwise M̂_i^{(T)} ← U(U_Ω^T U_Ω)^{−1} U_Ω^T M_{iΩ}^{(T)}.
4. Return M̂ with mode-T subtensors M̂_i^{(T)}.

4 Exact Completion Problems

In the matrix case, our sequential algorithm builds up the column space of the matrix by selecting a few columns to observe in their entirety. In particular, we maintain a candidate column space Ũ and test whether a column c_i lives in Ũ or not, choosing to completely observe c_i and add it to Ũ if it does not. Balzano et al. [2] observed that we can perform this test with a subsampled version of c_i, meaning that we can recover the column space using few samples. Once we know the column space, recovering the matrix, even from few observations, amounts to solving determined linear systems.

For tensors, the algorithm becomes recursive in nature. At the outer level of the recursion, the algorithm maintains a candidate subspace U for the mode-T subtensors M_i^{(T)}. For each of these subtensors, we test whether M_i^{(T)} lives in U and recursively complete that subtensor if it does not. Once we complete the subtensor, we add it to U and proceed at the outer level. When the subtensor itself is just a column, we observe the column in its entirety. The pseudocode of the algorithm is given in Algorithm 1.
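For intuition, the matrix specialization of Algorithm 1 (no recursion) can be sketched as follows. This is an illustrative simulation, not the authors' code: it assumes full access to M so that entry sampling can be simulated, and it uses a small numerical tolerance in place of the exact `> 0` residual test.

```python
import numpy as np

def sequential_matrix_completion(M, m, tol=1e-8, seed=0):
    """Matrix case of the sequential algorithm (Section 4), simulated.

    m: number of entries sampled per column for the subspace test.
    Columns failing the subsampled residual test are observed fully
    and used to grow the candidate column space U.
    """
    rng = np.random.default_rng(seed)
    n1, n2 = M.shape
    U = np.zeros((n1, 0))                  # orthonormal candidate basis
    M_hat = np.zeros_like(M, dtype=float)
    for i in range(n2):
        omega = rng.choice(n1, size=m, replace=False)
        c_omega = M[omega, i].astype(float)
        U_omega = U[omega, :]
        # best-fit coefficients of c_omega in the subsampled basis
        if U.shape[1]:
            coef = np.linalg.lstsq(U_omega, c_omega, rcond=None)[0]
        else:
            coef = np.zeros(0)
        residual = np.linalg.norm(c_omega - U_omega @ coef)
        if residual > tol:
            # column detected outside U: observe it fully and extend U
            c = M[:, i].astype(float)
            c -= U @ (U.T @ c)
            U = np.hstack([U, (c / np.linalg.norm(c))[:, None]])
            M_hat[:, i] = M[:, i]
        else:
            # column lies in U: reconstruct from the m sampled entries
            M_hat[:, i] = U @ coef
    return M_hat
```

On a noiseless low-rank matrix, only the first few linearly independent columns are observed in full; every other column is recovered exactly from its m sampled entries.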
Our first main result characterizes the performance of the tensor completion algorithm. We defer the proof to the appendix.

Theorem 2. Let M = Σ_{j=1}^r ⊗_{t=1}^T a_j^{(t)} be a rank-r order-T tensor with subspaces A^{(t)} = span({a_j^{(t)}}_{j=1}^r). Suppose that all of A^{(1)}, . . . , A^{(T−1)} have coherence bounded above by μ_0. Set m_t = 36 r^{t−1/2} μ_0^{t−1} log(2r/δ) for each t. Then with probability ≥ 1 − 5δTr^T, Algorithm 1 exactly recovers M and has expected sample complexity

36 (Σ_{t=1}^T n_t) r^{T−1/2} μ_0^{T−1} log(2r/δ)    (3)

In the special case of an n × . . . × n tensor of order T, the algorithm succeeds with high probability using Ω(n r^{T−1/2} μ_0^{T−1} T^2 log(Tr/δ)) samples, exhibiting a linear dependence on the tensor dimensions. In comparison, the only guarantee we are aware of shows that Ω((∏_{t=2}^T n_t) r) samples are sufficient for consistent estimation of a noisy tensor, exhibiting a much worse dependence on tensor dimension [23]. In the noiseless scenario, one can unfold the tensor into an n_1 × ∏_{t=2}^T n_t matrix and apply any matrix completion algorithm. Unfortunately, without exploiting the additional tensor structure, this approach will scale with ∏_{t=2}^T n_t, which is similarly much worse than our guarantee. Note that the naïve procedure that does not perform the recursive step has sample complexity scaling with the product of the dimensions and is therefore much worse than our algorithm.

The most obvious specialization of Theorem 2 is to the matrix completion problem:

Corollary 3. Let M := UΣV^T ∈ R^{n_1×n_2} have rank r, and fix δ > 0. Assume μ(U) ≤ μ_0. Setting m := m_2 ≥ 36 r^{3/2} μ_0 log(2r/δ), the sequential algorithm exactly recovers M with probability at least 1 − 4rδ + δ while using in expectation

36 n_2 r^{3/2} μ_0 log(2r/δ) + r n_1    (4)

observations. The algorithm runs in O(n_1 n_2 r + r^3 m) time.

A few comments are in order.
Recht [21] guaranteed exact recovery for the nuclear norm minimization procedure as long as the number of observations exceeds 32 (n_1 + n_2) r max{μ_0, μ_1^2} β log^2(2n_2), where β controls the probability of failure and ||UV^T||_∞ ≤ μ_1 √(r/(n_1 n_2)), with μ_1 as another coherence parameter. Without additional assumptions, μ_1 can be as large as μ_0 √r. In this case, our bound improves on his in its dependence on r, μ_0, and logarithmic terms.

The Nystrom method can also be applied to the matrix completion problem, albeit under nonuniform sampling. Given a PSD matrix, one uses a randomly sampled set of columns and the corresponding rows to approximate the remaining entries. Gittens showed that if one samples O(r log r) columns, then one can exactly reconstruct a rank-r matrix [12]. This result requires incoherence of both row and column spaces, so it is more restrictive than ours.

Almost all previous results for exact matrix completion require incoherence of both row and column spaces. The one exception is a recent paper by Chen et al. that we became aware of while preparing the final version of this work [8]. They show that sampling the matrix according to the statistical leverages of the rows and columns can eliminate the need for incoherence assumptions. Specifically, when the matrix has an incoherent column space, they show that by first estimating the leverages of the columns, sampling the matrix according to this distribution, and then solving the nuclear norm minimization program, one can recover the matrix with Ω(n r μ_0 log^2 n) samples. Our result improves on theirs when r is small compared to n, specifically when √r log r ≤ log^2 n, which is common.

Our algorithm is also very computationally efficient. Existing algorithms involve successive singular value decompositions (O(n_1 n_2 r) per iteration), resulting in much worse running times. The key ingredient in our proofs is a result pertaining to subspace detection, the task of testing whether a subsampled vector lies in a subspace.
This result, which improves over the results of Balzano et al. [2], is crucial in obtaining our sample complexity bounds, and may be of independent interest.

Theorem 4. Let U be a d-dimensional subspace of R^n and y = x + v, where x ∈ U and v ∈ U^⊥. Fix δ > 0, m ≥ (8/3) d μ(U) log(2d/δ), and let Ω be an index set with entries sampled uniformly with replacement with probability m/n. Then with probability at least 1 − 4δ:

(m(1 − α) − dμ(U) β/(1 − γ)) ||v||_2^2 / n ≤ ||y_Ω − P_{U_Ω} y_Ω||_2^2 ≤ (1 + α) (m/n) ||v||_2^2    (5)

where α = √((2μ(v)/m) log(1/δ)) + (2μ(v)/(3m)) log(1/δ), β = 6 log(d/δ) + (4/3)(dμ(v)/m) log^2(d/δ), γ = √((8dμ(U)/(3m)) log(2d/δ)), and μ(v) = n ||v||_∞^2 / ||v||_2^2.

This theorem shows that if m = Ω(max{μ(v), dμ(U), d√(μ(U)μ(v))} log d), then the orthogonal projection from missing data is within a constant factor of the fully observed one. In contrast, Balzano et al. [2] give a similar result that requires m = Ω(max{μ(v)^2, dμ(U), dμ(U)μ(v)} log d) to get a constant-factor approximation. In the matrix case, this improved dependence on incoherence parameters brings our sample complexity down from n r^2 μ_0^2 log r to n r^{3/2} μ_0 log r. We conjecture that this theorem can be further improved to eliminate another √r factor from our final bound.

4.1 Lower Bounds for Uniform Sampling

We adapt the proof strategy of Candes and Tao [7] to the tensor completion problem and establish the following lower bound for uniform sampling:

Theorem 5 (Passive Lower Bound). Fix 1 ≤ m, r ≤ min_t n_t and μ_0 > 1. Fix 0 < δ < 1/2 and suppose that we do not have the condition:

−log(1 − m/∏_{i=1}^T n_i) ≥ (μ_0^{T−1} r^{T−1} / ∏_{i=2}^T n_i) log(n_1/(2δ))    (6)

Then there exist infinitely many pairs of distinct n_1 × . . . × n_T order-T tensors M ≠ M′ of rank r with coherence parameter μ_0 such that P_Ω(M) = P_Ω(M′) with probability at least δ. Each entry is observed independently with probability p = m/∏_{i=1}^T n_i.
Theorem 5 implies that as long as the right-hand side of Equation 6 is at most ε < 1, and

m ≤ n_1 r^{T−1} μ_0^{T−1} log(n_1/(2δ)) (1 − ε/2)    (7)

then with probability at least δ there are infinitely many matrices that agree on the observed entries. This gives a necessary condition on the number of samples required for tensor completion. Note that when T = 2 we recover the known lower bound for matrix completion.

Theorem 5 gives a necessary condition under uniform sampling. Comparing with Theorem 2 shows that our procedure outperforms any passive procedure in its dependence on the tensor dimensions. However, our guarantee is suboptimal in its dependence on r. The extra factor of √r would be eliminated by a further improvement to Theorem 4, which we conjecture is indeed possible.

For adaptive sampling, one can obtain a lower bound via a parameter-counting argument. Observing the (i_1, . . . , i_T)-th entry leads to a polynomial equation of the form Σ_k ∏_t a_k^{(t)}(i_t) = M_{i_1,...,i_T}. If m < r(Σ_t n_t), this system is underdetermined, showing that Ω((Σ_t n_t) r) observations are necessary for exact recovery, even under adaptive sampling. Thus, our algorithm enjoys sample complexity with optimal dependence on the tensor dimensions.

5 Noisy Matrix Completion

Our algorithm for noisy matrix completion is an adaptation of the column subset selection (CSS) algorithm analyzed by Deshpande et al. [10]. The algorithm builds a candidate column space in rounds; at each round it samples additional columns with probability proportional to their projection on the orthogonal complement of the candidate column space. To concretely describe the algorithm, suppose that at the beginning of the l-th round we have a candidate subspace U_l. Then in the l-th round, we draw s additional columns according to the distribution where the probability of drawing the i-th column is proportional to ||P_{U_l^⊥} c_i||_2^2.
Observing these s columns in full and then adding them to the subspace U_l gives the candidate subspace U_{l+1} for the next round. We initialize the algorithm with U_1 = ∅. After L rounds, we approximate each column c with ĉ = U_L (U_{LΩ}^T U_{LΩ})^{−1} U_{LΩ}^T c_Ω and concatenate these estimates to form M̂. The challenge is that the algorithm cannot compute the sampling probabilities without observing entries of the matrix. However, our results show that with reliable estimates, which can be computed from few observations, the algorithm still performs well.

We assume that the matrix M ∈ R^{n_1×n_2} can be decomposed as a rank-r matrix A plus a random Gaussian matrix R whose entries are independently drawn from N(0, σ^2). We write A = UΣV^T and assume that μ(U) ≤ μ_0. As before, the incoherence assumption is crucial in guaranteeing that one can estimate the column norms, and consequently the sampling probabilities, from missing data.

Theorem 6. Let Ω be the set of all observations over the course of the algorithm, let U_L be the subspace obtained after L = log(n_1 n_2) rounds, and let M̂ be the matrix whose columns are ĉ_i = U_L (U_{LΩ}^T U_{LΩ})^{−1} U_{LΩ}^T c_{iΩ}. Then there are constants c_1, c_2 such that:

||A − M̂||_F^2 ≤ (c_1/(n_1 n_2)) ||A||_F^2 + c_2 ||R_Ω||_F^2

M̂ can be computed from Ω((n_1 + n_2) r^{3/2} μ(U) polylog(n_1 n_2)) observations. In particular, if ||A||_F^2 = 1 and R_ij ∼ N(0, σ^2/(n_1 n_2)), then there is a constant c* for which:

||A − M̂||_F^2 ≤ (c*/(n_1 n_2)) (1 + σ^2 (n_1 + n_2) r^{3/2} μ(U) polylog(n_1 n_2))

The main improvement in the result is in relaxing the assumptions on the underlying matrix A. Existing results for noisy matrix completion require that the energy of the matrix is well spread out across both the rows and the columns (i.e. incoherence), and the sample complexity guarantees deteriorate significantly without such an assumption [5, 17]. As a concrete example, Negahban and Wainwright [20] use a notion of spikiness, measured as √(n_1 n_2) ||A||_∞ / ||A||_F, which can be as large as √(n_2) in our setup, e.g.
when the matrix is zero except on one column and constant across that column.

Figure 1: Probability of success curves for our noiseless matrix completion algorithm (top) and SVT (middle). Top: success probability as a function of: Left: p, the fraction of samples per column; Center: np, total samples per column; and Right: np/log^2 n, expected samples per column for passive completion. Bottom: success probability of our noiseless algorithm for different values of r as a function of p, the fraction of samples per column (left), p/r^{3/2} (middle), and p/r (right).

The choices of ||A||_F^2 = 1 and noise variance rescaled by 1/(n_1 n_2) enable us to compare our results with related work [20]. Thinking of n_1 = n_2 = n and the incoherence parameter as a constant, our results imply consistent estimation as long as σ^2 = ω(n/(r^2 polylog(n))). On the other hand, thinking of the spikiness parameter as a constant, [20] show that the error is bounded by σ^2 n r log n / m, where m is the total number of observations. Using the same number of samples as our procedure, their result implies consistency as long as σ^2 = ω(r polylog(n)). For small r (i.e. r = O(1)), our noise tolerance is much better, but their results apply even with fewer observations, while ours do not.

6 Simulations

We verify Corollary 3's linear dependence on n in Figure 1, where we empirically compute the success probability of the algorithm for varying values of n and p = m/n, the fraction of entries observed per column. Here we study square matrices of fixed rank r = 5 with μ(U) = 1. Figure 1(a) shows that our algorithm can succeed while sampling a smaller and smaller fraction of entries as n increases, as we expect from Corollary 3. In Figure 1(b), we instead plot success probability against the total number of observations per column. The fact that the curves coincide suggests that the samples per column, m, is constant with respect to n, which is precisely what Corollary 3 implies.
Finally, in Figure 1(c), we rescale instead by n/log^2 n, which corresponds to the passive sample complexity bound [21]. Empirically, the fact that these curves do not line up demonstrates that our algorithm requires fewer than log^2 n samples per column, outperforming the passive bound. The second row of Figure 1 plots the same probability of success curves for the Singular Value Thresholding (SVT) algorithm [4]. As is apparent from the plots, SVT does not enjoy a linear dependence on n; indeed, Figure 1(f) confirms the logarithmic dependence that we expect for passive matrix completion, and establishes that our algorithm has empirically better performance.

Figure 2: Reconstruction error as a function of row space incoherence for our noisy algorithm (CSS) and the semidefinite program of [20].

Table 1: Computational results on large low-rank matrices. d_r = r(2n − r) is the degrees of freedom, so m/d_r is the oversampling ratio.

n     | r   | m/d_r | m/n^2 | time (s)
1000  | 10  | 3.4   | 0.07  | 16
1000  | 50  | 3.3   | 0.33  | 29
1000  | 100 | 3.2   | 0.61  | 45
5000  | 10  | 3.4   | 0.01  | 3
5000  | 50  | 3.5   | 0.07  | 27
5000  | 100 | 3.4   | 0.14  | 104
10000 | 10  | 3.4   | 0.01  | 10
10000 | 50  | 3.5   | 0.03  | 84
10000 | 100 | 3.5   | 0.07  | 283

In the third row, we study the algorithm's dependence on r on 500 × 500 square matrices. In Figure 1(g) we plot the probability of success of the algorithm as a function of the sampling probability p for matrices of various ranks, and observe that the sample complexity increases with r. In Figure 1(h) we rescale the x-axis by r^{−3/2}, so that if our theorem is tight, the curves should coincide. In Figure 1(i) we instead rescale the x-axis by r^{−1}, corresponding to our conjecture about the performance of the algorithm. Indeed, the curves line up in Figure 1(i), demonstrating that empirically, the number of samples needed per column is linear in r rather than the r^{3/2} dependence in our theorem.
To confirm the computational improvement over existing methods, we ran our matrix completion algorithm on large-scale matrices, recording the running time and error in Table 1. To contrast with SVT, we refer the reader to Table 5.1 in [4]. As an example, recovering a 10000 × 10000 matrix of rank 100 takes close to 2 hours with SVT, while it takes less than 5 minutes with our algorithm.

For the noisy algorithm, we study the dependence on row-space incoherence. In Figure 2, we plot the reconstruction error as a function of the row space coherence for our procedure and the semidefinite program of Negahban and Wainwright [20], where we ensure that both algorithms use the same number of observations. It is readily apparent that the SDP decays in performance as the row space becomes more coherent, while the performance of our procedure is unaffected.

7 Conclusions and Open Problems

In this work, we demonstrate how sequential active algorithms can offer significant improvements in time and measurement overhead over passive algorithms for matrix and tensor completion. We hope our work motivates further study of sequential active algorithms for machine learning. Several interesting theoretical questions arise from our work:

1. Can we tighten the dependence on rank for these problems? In particular, can we bring the dependence on r down from r^{3/2} to linear? Simulations suggest this is possible.
2. Can one generalize the nuclear norm minimization program for matrix completion to the tensor completion setting while providing theoretical guarantees on sample complexity?

We hope to pursue these directions in future work.

Acknowledgements

This research is supported in part by AFOSR under grant FA9550-10-1-0382 and NSF under grant IIS-1116458. AK is supported in part by an NSF Graduate Research Fellowship. AK would like to thank Martin Azizyan, Sivaraman Balakrishnan and Jayant Krishnamurthy for fruitful discussions.

References

[1] Dimitris Achlioptas and Frank McSherry.
Fast computation of low-rank matrix approximations. Journal of the ACM (JACM), 54(2):9, 2007.
[2] Laura Balzano, Benjamin Recht, and Robert Nowak. High-dimensional matched subspace detection when data are missing. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, pages 1638–1642. IEEE, 2010.
[3] Christos Boutsidis, Michael W Mahoney, and Petros Drineas. An improved approximation algorithm for the column subset selection problem. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 968–977. Society for Industrial and Applied Mathematics, 2009.
[4] Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[5] Emmanuel J Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[6] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[7] Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. Information Theory, IEEE Transactions on, 56(5):2053–2080, 2010.
[8] Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, and Rachel Ward. Coherent matrix completion. arXiv preprint arXiv:1306.2979, 2013.
[9] Mark A Davenport and Ery Arias-Castro. Compressive binary search. In Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on, pages 1827–1831. IEEE, 2012.
[10] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing, 2:225–247, 2006.
[11] Silvia Gandy, Benjamin Recht, and Isao Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[12] Alex Gittens. The spectral norm error of the naive Nystrom extension. arXiv preprint arXiv:1110.5305, 2011.
[13] David Gross. Recovering low-rank matrices from few coefficients in any basis. Information Theory, IEEE Transactions on, 57(3):1548–1566, 2011.
[14] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1207–1214. SIAM, 2012.
[15] Jarvis D Haupt, Richard G Baraniuk, Rui M Castro, and Robert D Nowak. Compressive distilled sensing: Sparse recovery using adaptivity in compressive measurements. In Signals, Systems and Computers, 2009 Conference Record of the Forty-Third Asilomar Conference on, pages 1551–1555. IEEE, 2009.
[16] Jun He, Laura Balzano, and John Lui. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
[17] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. The Journal of Machine Learning Research, 11:2057–2078, 2010.
[18] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling methods for the Nyström method. The Journal of Machine Learning Research, 13:981–1006, 2012.
[19] Béatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
[20] Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 2012.
[21] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011.
[22] Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
[23] Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, and Hisashi Kashima. Statistical performance of convex tensor decomposition. In Advances in Neural Information Processing Systems, pages 972–980, 2011.
[24] Roman Vershynin.
Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
Fast Determinantal Point Process Sampling with Application to Clustering

Byungkon Kang*
Samsung Advanced Institute of Technology
Yongin, Korea
bk05.kang@samsung.com

Abstract

Determinantal Point Process (DPP) has gained much popularity for modeling sets of diverse items. The gist of DPP is that the probability of choosing a particular set of items is proportional to the determinant of a positive definite matrix that defines the similarity of those items. However, computing the determinant requires time cubic in the number of items, and is hence impractical for large sets. In this paper, we address this problem by constructing a rapidly mixing Markov chain, from which we can acquire a sample from the given DPP in sub-cubic time. In addition, we show that this framework can be extended to sampling from cardinality-constrained DPPs. As an application, we show how our sampling algorithm can be used to provide a fast heuristic for determining the number of clusters, resulting in better clustering. There are some crucial errors in the proofs of the theorem which invalidate the theoretical claims of this paper. Please consult the appendix for more details.

1 Introduction

Determinantal Point Process (DPP) [1] is a well-known framework for representing a probability distribution that models diversity. Originally proposed to model repulsion among physical particles, it has found its way into many applications in AI, such as image search [2] and text summarization [3]. In a nutshell, given an itemset S = [n] = {1, 2, . . . , n} and a symmetric positive definite (SPD) matrix L ∈ R^{n×n} that describes pairwise similarities, a (discrete) DPP is a probability distribution over 2^S proportional to the determinant of the corresponding submatrix of L. It is known that this distribution assigns more probability mass to sets of points that have larger diversity, which is quantified by the entries of L.
Although the size of the support is exponential, DPP offers tractable inference and sampling algorithms. However, sampling from a DPP requires O(n^3) time, as an eigen-decomposition of L is necessary [4]. This presents a huge computational problem when there are a large number of items; e.g., n > 10^4.

A motivating problem we consider is that of kernelized clustering [5]. In this problem, we are given a large number of points plus a kernel function that serves as a dot product between the points in a feature space. The objective is to partition the points into some number of clusters, each represented by a point called a centroid, in a way that a certain cost function is minimized. Our approach is to sample the centroids via DPP. This heuristic is based on the fact that each cluster should differ from the others as much as possible, which is precisely what DPPs prefer. Naively using the cubic-complexity sampling algorithm is inefficient, since it can take up to a whole day to eigen-decompose a 10000 × 10000 matrix. In this paper, we present a rapidly mixing Markov chain whose stationary distribution is the DPP of interest. Our Markov chain does not require the eigen-decomposition of L, and is hence time-efficient. Moreover, our algorithm works seamlessly even when new items are added to S (and L), while the previous sampling algorithm requires expensive decompositions whenever S is updated.

*This work was submitted when the author was a graduate student at KAIST.

1.1 Settings

More specifically, a DPP over the set S = [n], given a positive-definite similarity matrix L ≻ 0, is a probability distribution P_L over any Y ⊆ S of the following form:

P_L(Y = Y) = det(L_Y) / Σ_{Y′⊆S} det(L_{Y′}) = det(L_Y) / det(L + I),

where I is the identity matrix of the corresponding dimension, Y is a random subset of S, and L_Y ≻ 0 is the principal minor of L whose rows and columns are restricted to the elements of Y; i.e., L_Y = [L(i, j)]_{i,j∈Y}, where L(i, j) is the (i, j) entry of L.
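As a sanity check on the normalizer det(L + I), one can verify on a tiny example (illustrative, not from the paper) that the subset probabilities sum to one:

```python
import numpy as np
from itertools import chain, combinations

def dpp_prob(L, Y):
    """P_L(Y) = det(L_Y) / det(L + I); the empty minor has determinant 1."""
    Y = list(Y)
    num = np.linalg.det(L[np.ix_(Y, Y)]) if Y else 1.0
    return num / np.linalg.det(L + np.eye(L.shape[0]))

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
L = B @ B.T + 0.1 * np.eye(4)            # symmetric positive definite
subsets = chain.from_iterable(combinations(range(4), k) for k in range(5))
total = sum(dpp_prob(L, Y) for Y in subsets)
```

The identity Σ_{Y⊆S} det(L_Y) = det(L + I) is what makes the partition function of an L-ensemble tractable despite the exponential support.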
Much of the literature introduces DPPs in terms of a marginal kernel that describes marginal probabilities of inclusion. However, since directly modeling probabilities over each subset of S¹ offers a more flexible framework, we will focus on the latter representation. There is a variant of DPPs that places a constraint on the size of the random subsets. Given an integer k, a k-DPP is a DPP over size-k sets [2]:

P_L^k(Y = Y) = det(L_Y) / Σ_{|Y′|=k} det(L_{Y′})  if |Y| = k,  and 0 otherwise.

During the discussion, we will use a characteristic vector representation of each Y ⊆ S; i.e., v_Y ∈ {0, 1}^{|S|}, ∀Y ⊆ S, such that v_Y(i) = 1 if i ∈ Y, and 0 otherwise. With abuse of notation, we will often use set operations on characteristic vectors to indicate the same operation on the corresponding sets; e.g., v_Y \ {u} is equivalent to setting v_Y(u) = 0 and, correspondingly, Y \ {u}.

2 Algorithm

The overall idea of our algorithm is to design a rapidly mixing Markov chain whose stationary distribution is P_L. The state space of our chain consists of the characteristic vectors of each subset of S. This Markov chain is generated by a standard Metropolis-Hastings algorithm, where the transition probability from state v_Y to v_Z is given as the ratio of P_L(Z) to P_L(Y). In particular, we will only consider transitions between adjacent states, i.e., states that have Hamming distance 1. Hence, the transition probability of removing an element u from Y is of the following form:

Pr(Y ∪ {u} → Y) = min{1, det(L_Y) / det(L_{Y∪{u}})}.

The addition probability is defined similarly. The overall chain is an insertion/deletion chain, where a uniformly proposed element is either added to, or subtracted from, the current state. This procedure is outlined in Algorithm 1. Note that this algorithm has a potentially high computational complexity, as the determinant of L_Y for a given Y ⊆ S must be computed on every iteration. If the size of Y is large, then a single iteration will become very costly.
Before discussing how to address this issue in Section 2.1, we analyze the properties of Algorithm 1 to show that it efficiently samples from P_L. First, we state that the chain induced by Algorithm 1 does indeed represent our desired distribution².

Proposition 1. The Markov chain in Algorithm 1 has stationary distribution P_L.

The computational complexity of sampling from P_L using Algorithm 1 depends on the mixing time of the Markov chain; i.e., the number of steps required in the Markov chain to ensure that the current distribution is "close enough" to the stationary distribution. More specifically, we are interested in the ε-mixing time τ(ε), which guarantees a distribution that is ε-close to P_L in terms of total variation. In other words, we must spend at least this many time steps in order to acquire a sample from a distribution close to P_L. Our next result states that the chain in Algorithm 1 mixes rapidly:

¹Also known as L-ensembles.
²All proofs, including those of irreducibility of our chains, are given in the Appendix of the full version of our paper.

Algorithm 1 Markov chain for sampling from P_L
Require: itemset S = [n], similarity matrix L ≻ 0
  Randomly initialize state Y ⊆ S
  while not mixed do
    Sample u ∈ S uniformly at random
    Set p_u^+(Y) ← min{1, det(L_{Y∪{u}}) / det(L_Y)}
        p_u^−(Y) ← min{1, det(L_{Y\{u}}) / det(L_Y)}
    if u ∉ Y then
      Y ← Y ∪ {u} with prob. p_u^+(Y)
    else
      Y ← Y \ {u} with prob. p_u^−(Y)
    end if
  end while
  return Y

Theorem 1. The Markov chain in Algorithm 1 has mixing time τ(ε) = O(n log(n/ε)).

One advantage of having a rapidly mixing Markov chain as a means of sampling from a DPP is that it is robust to the addition/deletion of elements. That is, when a new element is introduced to or removed from S, we may simply continue the current chain until it is mixed again to obtain a sample from the new distribution. The previous sampling algorithm, on the other hand, requires expensively eigen-decomposing the updated L.
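A direct transcription of Algorithm 1, with explicit determinants and before any speedups, might look as follows; the random initialization and the fixed step count are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def dpp_mh_sample(L, n_steps, seed=0):
    """Insertion/deletion Metropolis chain targeting P_L (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    Y = set(np.flatnonzero(rng.random(n) < 0.5))   # random initial state

    def det_sub(S):
        # det(L_S), with the convention det(L_emptyset) = 1
        S = sorted(S)
        return np.linalg.det(L[np.ix_(S, S)]) if S else 1.0

    for _ in range(n_steps):
        u = int(rng.integers(n))                   # uniform proposal
        if u not in Y:
            p_plus = min(1.0, det_sub(Y | {u}) / det_sub(Y))
            if rng.random() < p_plus:
                Y.add(u)
        else:
            p_minus = min(1.0, det_sub(Y - {u}) / det_sub(Y))
            if rng.random() < p_minus:
                Y.remove(u)
    return Y
```

For a tiny kernel, the empirical frequencies of many independent runs can be checked against the exact subset probabilities det(L_Y)/det(L + I).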
The mixing time of the chain in Algorithm 1 seems to offer a promising route to sampling from P_L. However, this presumes an efficient procedure for computing det(L_Y). Unfortunately, computing the determinant already costs O(|Y|³) operations, rendering Algorithm 1 impractical for large Y. In the following sections, we present a linear-algebraic manipulation of the determinant ratio so that explicit computation of the determinants becomes unnecessary.

2.1 Determinant Ratio Computation

It turns out that we never need the determinants themselves, only their ratio. Suppose we wish to compute det(L_{Y∪{u}}) / det(L_Y). Since the determinant is permutation-invariant with respect to the index set, we can write L_{Y∪{u}} in the following block form, owing to its symmetry:

L_{Y∪{u}} = [ L_Y   b_u ; b_u^T   c_u ],

where b_u = (L(i, u))_{i∈Y} ∈ R^{|Y|} and c_u = L(u, u). With this, the determinant of L_{Y∪{u}} is expressed as

det(L_{Y∪{u}}) = det(L_Y) (c_u − b_u^T L_Y^{-1} b_u).   (1)

This allows us to reformulate the insertion transition probability as a determinant-free ratio:

p⁺_u(Y) = min{1, det(L_{Y∪{u}}) / det(L_Y)} = min{1, c_u − b_u^T L_Y^{-1} b_u}.   (2)

The deletion transition probability p⁻_u(Y ∪ {u}) is computed likewise:

p⁻_u(Y ∪ {u}) = min{1, det(L_Y) / det(L_{Y∪{u}})} = min{1, (c_u − b_u^T L_Y^{-1} b_u)^{-1}}.

However, this transformation alone does not yield faster computation, as inverting a matrix is just as expensive as computing its determinant. To avoid recomputing L_Y^{-1} from scratch, we incrementally update the inverse through blockwise matrix inversion. Suppose that L_Y^{-1} has already been computed at the current iteration of the chain. First, consider the case when an element u is added (the 'if' clause): the new inverse L_{Y∪{u}}^{-1} must be updated from the current L_Y^{-1}.
This is achieved by the following block-inversion formula [6]:

L_{Y∪{u}}^{-1} = [ L_Y  b_u ; b_u^T  c_u ]^{-1}
             = [ L_Y^{-1} + L_Y^{-1} b_u b_u^T L_Y^{-1} / d_u ,  −L_Y^{-1} b_u / d_u ;
                 −b_u^T L_Y^{-1} / d_u ,  1 / d_u ],   (3)

where d_u = c_u − b_u^T L_Y^{-1} b_u is the Schur complement of L_Y. Since L_Y^{-1} is already available, computing each block of the new inverse costs O(|Y|²), an order faster than the O(|Y|³) required by the determinant. Moreover, only half of the entries need be computed, by symmetry.

Next, consider the case when an element u is removed (the 'else' clause) from the current set Y. The matrix to be updated is L_{Y\{u}}^{-1}, and it is given by a rank-1 update. We first write the current inverse L_Y^{-1} as

L_Y^{-1} = [ L_{Y\{u}}  b_u ; b_u^T  c_u ]^{-1} ≜ [ D  e ; e^T  f ],

where D ∈ R^{(|Y|−1)×(|Y|−1)}, e ∈ R^{|Y|−1}, and f ∈ R. Then the inverse of the submatrix L_{Y\{u}} is given by

L_{Y\{u}}^{-1} = D − e e^T / f.   (4)

Again, updating L_{Y\{u}}^{-1} requires only matrix arithmetic and hence costs O(|Y|²). However, the initial L_Y^{-1}, from which all subsequent inverses are updated, must be computed in full at the start of the chain. This cost can be reduced by restricting the size of the initial Y: we first randomly initialize Y with a small size, say o(n^{1/3}), and compute L_Y^{-1}. As the chain proceeds, we update L_Y^{-1} using Equations 3 and 4 whenever an insertion or deletion proposal is accepted, respectively. Therefore, the average complexity of acquiring a sample from a distribution that is ε-close to P_L is O(T² n log(n/ε)), where T is the average size of Y encountered during the run of the chain. In Section 3, we introduce a scheme that keeps Y small.

2.2 Extension to k-DPPs

The idea of constructing a Markov chain to obtain a sample extends to k-DPPs. The only previously known algorithm for sampling from a k-DPP also requires the eigendecomposition of L. Extending the idea above, we provide a Markov chain sampling algorithm, similar to Algorithm 1, that samples from P^k_L.
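The two incremental updates, Eq. 3 (grow the inverse by one row/column) and Eq. 4 (shrink it by one), can be written directly; this is a sketch with hypothetical function names, and `remove_inverse` permutes the removed index to the last position before applying Eq. 4.

```python
import numpy as np

def add_inverse(L_inv_Y, b_u, c_u):
    """Eq. 3: update L_Y^{-1} -> L_{Y+u}^{-1} by block inversion, in O(|Y|^2)."""
    g = L_inv_Y @ b_u
    d = c_u - b_u @ g                      # Schur complement d_u of L_Y
    top_left = L_inv_Y + np.outer(g, g) / d
    return np.block([[top_left, -g[:, None] / d],
                     [-g[None, :] / d, np.array([[1.0 / d]])]])

def remove_inverse(L_inv_Y, j):
    """Eq. 4: update L_Y^{-1} -> L_{Y-u}^{-1}, where u occupies index j.

    Permuting rows/columns of a symmetric inverse by P gives the inverse of
    the correspondingly permuted matrix, so we move u last and downdate.
    """
    perm = [i for i in range(L_inv_Y.shape[0]) if i != j] + [j]
    M = L_inv_Y[np.ix_(perm, perm)]
    D, e, f = M[:-1, :-1], M[:-1, -1], M[-1, -1]
    return D - np.outer(e, e) / f          # rank-1 downdate
```

A quick consistency check is that growing by an element and then removing it recovers the original inverse.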
The main idea behind the k-DPP chain is to propose a new configuration by choosing two elements: one to remove from the current set and another to add. We accept this move with probability given by the ratio of the proposed determinant to the current determinant. This amounts to selecting a row and column of L_X and replacing them with those corresponding to the element to be added; i.e., for X = Y ∪ {u},

L_{X=Y∪{u}} = [ L_Y  b_u ; b_u^T  c_u ]  ⇒  L_{X'=Y∪{v}} = [ L_Y  b_v ; b_v^T  c_v ],

where u and v are the elements being removed and added, respectively. Following Equation 2, we set the transition probability to the ratio of the determinants of the two matrices:

det(L_{X'}) / det(L_X) = (c_v − b_v^T L_Y^{-1} b_v) / (c_u − b_u^T L_Y^{-1} b_u).

The final procedure is given in Algorithm 2. As with Algorithm 1, we analyze the stationary distribution and the mixing time of Algorithm 2.

Proposition 2. The Markov chain in Algorithm 2 has stationary distribution P^k_L.

Algorithm 2 Markov chain for sampling from P^k_L
Require: Itemset S = [n], similarity matrix L ≻ 0
  Randomly initialize state X ⊆ S s.t. |X| = k
  while not mixed do
    Sample u ∈ X and v ∈ S \ X uniformly at random
    Letting Y = X \ {u}, set
      p ← min{1, (c_v − b_v^T L_Y^{-1} b_v) / (c_u − b_u^T L_Y^{-1} b_u)}   (5)
    X ← Y ∪ {v} with prob. p
  end while
  return X

Theorem 2. The Markov chain in Algorithm 2 has mixing time τ(ε) = O(k log(k/ε)).

The main computational bottleneck of Algorithm 2 is the inversion of L_Y. Since L_Y is a (k−1)×(k−1) matrix, the per-iteration cost is O(k³). However, this can be reduced by applying Equation 3 on every iteration to update the inverse. This yields a final sampling complexity of O(k³ log(k/ε)) for acquiring a single sample from the chain, which dominates the O(k³) cost of computing the initial inverse. In many cases, k is a constant much smaller than n, so our algorithm is efficient in general.
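The swap chain of Algorithm 2 can be sketched as follows. For clarity this version recomputes L_Y^{-1} at each step rather than maintaining it incrementally via Eq. 3; the function name and initialization are our own.

```python
import numpy as np

def kdpp_mh_chain(L, k, n_steps, rng=None):
    """Swap chain targeting the k-DPP P^k_L (Algorithm 2).

    Each step removes a uniform u in X and adds a uniform v outside X,
    accepting with the Schur-complement ratio of Eq. 5.
    """
    rng = np.random.default_rng(rng)
    n = L.shape[0]
    X = [int(i) for i in rng.choice(n, size=k, replace=False)]
    for _ in range(n_steps):
        u = X[rng.integers(k)]
        v = int(rng.choice([i for i in range(n) if i not in X]))
        Y = [i for i in X if i != u]
        L_inv_Y = (np.linalg.inv(L[np.ix_(Y, Y)]) if Y
                   else np.zeros((0, 0)))

        def schur(w):
            b = L[Y, w]
            return L[w, w] - b @ L_inv_Y @ b   # c_w - b_w^T L_Y^{-1} b_w

        p = min(1.0, schur(v) / schur(u))      # Eq. 5
        if rng.random() < p:
            X = Y + [v]
    return set(X)
```

For k = 1 the stationary distribution is simply proportional to the diagonal of L, which gives an easy check of the acceptance rule.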
3 Application to Clustering

Finally, we show how our algorithms lead to an efficient heuristic for the k-means clustering problem when the number of clusters is not known. First, we briefly review the k-means problem. Given a set of points P = {x_i ∈ R^d}_{i=1}^n, the goal of clustering is to construct a partition C = {C_1, ..., C_k | C_i ⊆ P} of P such that the distortion

D_C = Σ_{i=1}^k Σ_{x∈C_i} ‖x − m_i‖²₂   (6)

is minimized, where m_i is the centroid of cluster C_i. It is known that the optimal centroid is the mean of the points of C_i, i.e., m_i = (Σ_{x∈C_i} x)/|C_i|. Iteratively minimizing this expression converges to a local optimum, and is hence the preferred approach in many works. However, determining the number of clusters k is what makes this problem NP-hard [7]. Note that the problem of unknown k also arises in other clustering algorithms, such as kernel k-means [5] and spectral clustering [8]: kernel k-means is exactly regular k-means with the inner products replaced by a positive semi-definite kernel function, and spectral clustering uses regular k-means as a subroutine. Common techniques for determining k include density-based analysis of the data [9] and selecting the k that minimizes the Bayesian information criterion (BIC) [10].

In this work, we propose to sample the initial centroids of the clustering via our DPP sampling algorithms. The similarity matrix L is the Gram matrix determined by L(i, j) = κ(x_i, x_j), where κ(·,·) is simply the inner product for regular k-means and a specified kernel function for kernel k-means. Since DPPs naturally capture the notion of diversity, the sampled points tend to be diverse, and thus serve well as initial representatives for each cluster. Once we have a sample, we perform a Voronoi partition around its elements to obtain a clustering3. Note that it is not necessary to determine k beforehand, as it can be read off from the DPP samples.
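The two ingredients used repeatedly below, the Voronoi partition around sampled representatives and the distortion of Eq. 6, are straightforward; this sketch covers the plain Euclidean (inner-product) case, and the function names are our own.

```python
import numpy as np

def voronoi_partition(points, centroid_idx):
    """Assign each point to its nearest sampled representative (Euclidean case)."""
    centers = points[centroid_idx]
    # squared distances, shape (n_points, n_centers)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def distortion(points, labels):
    """Distortion D_C of Eq. 6: sum of squared distances to cluster means."""
    total = 0.0
    for c in np.unique(labels):
        cluster = points[labels == c]
        total += ((cluster - cluster.mean(axis=0)) ** 2).sum()
    return total
```

For a kernel κ, the same partition is obtained by replacing the squared distance with κ(x,x) − 2κ(x,y) + κ(y,y), as in footnote 3.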
This approach is closely related to the MAP inference problem for DPPs [11], which is also known to be NP-hard. We use the proposed algorithms to sample representative points that have high probability under P_L, and cluster the remaining points around the sample. Subsequently, standard (kernel) k-means can be applied to improve this initial clustering.

3The distance between x and y is defined as sqrt(κ(x,x) − 2κ(x,y) + κ(y,y)), for any positive semi-definite kernel κ.

Since DPPs model both size and diversity, it may seem that we could simply collect samples from Algorithm 1 directly and use them as representatives. However, as pointed out by [2], modeling both properties simultaneously can negatively bias the quality of the diversity. To reduce this possible negative influence, we adopt a two-step sampling strategy: first, we gather C samples from Algorithm 1 and construct a histogram H over the sizes of the samples; next, we sample from a k-DPP, via Algorithm 2, with k drawn from H. This last sample gives the representatives used for clustering.

Another problem we may encounter in this scheme is sensitivity to outliers. The presence of an outlier in P can cause the DPP in the first phase to favor its inclusion, resulting in a possibly bad clustering. To make our approach more robust to outliers, we introduce the following cardinality-penalized DPP:

P_{L;λ}(Y = Y) ∝ exp(tr(log(L_Y)) − λ|Y|) = det(L_Y) exp(−λ|Y|),

where λ ≥ 0 is a hyper-parameter that controls the weight placed on |Y|. This regularization scheme smooths the original P_L by exponentially discounting the size of Y. It does not increase the order of the mixing time of the induced chain, since the transition probabilities are only multiplied by a constant factor of exp(±λ). Algorithm 3 describes the overall procedure of our DPP-based clustering.
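The cardinality penalty only rescales the acceptance ratios of Algorithm 1: since P_{L;λ}(Y) ∝ det(L_Y) exp(−λ|Y|), an insertion picks up a factor exp(−λ) and a deletion a factor exp(+λ). A sketch of the penalized insertion probability, combining this with the determinant-free ratio of Eq. 2 (function name ours):

```python
import numpy as np

def penalized_insert_prob(L, Y, u, lam):
    """Insertion acceptance p+_u(Y) under the cardinality-penalized DPP P_{L;lam}.

    The det ratio det(L_{Y+u})/det(L_Y) equals the Schur complement (Eq. 1),
    and the penalty contributes the constant factor exp(-lam).
    """
    idx = sorted(Y)
    b = L[idx, u]
    L_inv_Y = (np.linalg.inv(L[np.ix_(idx, idx)]) if idx
               else np.zeros((0, 0)))
    schur = L[u, u] - b @ L_inv_Y @ b      # c_u - b_u^T L_Y^{-1} b_u
    return min(1.0, np.exp(-lam) * schur)
```

With λ = 0 this reduces to the unpenalized p⁺_u(Y) of Algorithm 1, and larger λ monotonically discourages growth of Y.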
Algorithm 3 DPP-based Clustering
Require: L ≻ 0, λ ≥ 0, R > 0, C > 0
  Gather {S_1, ..., S_C | S_i ∼ P_{L;λ}} (Algorithm 1)
  Construct histogram H = {|S_i|}_{i=1}^C of the sizes of the S_i
  for j = 1, ..., R do
    Sample M_j ∼ P^{k_j}_L (Algorithm 2), where k_j ∼ H
    Voronoi partition around M_j
  end for
  return clustering with lowest distortion (Equation 6)

Choosing the right value of λ usually requires a priori knowledge of the data set. Since this information is not always available, one may use a small subset of P to choose λ heuristically. For example, examine the BIC of the initial clustering with respect to centroids sampled from O(√n) randomly chosen elements P' ⊂ P, with λ = 0. Then increase λ by 1 until the BIC reaches a local maximum, and take that value as final. An additional binary-search step between λ and λ + 1 may be used to fine-tune the value. Because we only use O(√n) points to sample from the DPP, each search step has at most linear complexity, leaving ample time to choose better values of λ. This procedure may not appear to have an advantage over standard BIC-based model selection for the number of clusters k. However, tuning λ not only determines k, but also yields better initial partitions in terms of distortion.

4 Experiments

In this section, we empirically demonstrate how our proposed method, denoted DPP-MC, for choosing an initial clustering compares to other methods in terms of distortion and running time. The methods we compare against are:
• DPP-Full: sample using the full DPP sampling procedure given in [4].
• DPP-MAP: sample the initial centroids from the MAP configuration, using the algorithm of [11].
• KKM: plain kernel k-means clustering as given in [5], run with the "true" number of clusters.
DPP-Full and DPP-MAP were used only in the first phase of Algorithm 3. To summarize the testing procedure: DPP-MC, DPP-Full, and DPP-MAP were used to choose the initial centroids.
After this initialization, KKM was run to improve the initial partitioning. Hence, the only difference between the algorithms tested and KKM is the initialization.

The real-world data sets we use are the letter recognition data set [12] (LET) and a subset of the power consumption data set [13] (PWC). The LET set consists of 10,000 points in R^16, and the PWC set of 10,000 points in R^7. While the LET set has 26 ground-truth clusters, the PWC set is labeled only with timestamps; we therefore manually divided its points into four clusters based on the month of the timestamps. Since this partitioning is not a ground truth supplied by the data collector, we expected the KKM algorithm to perform poorly on this set. In addition, we tested our algorithm on an artificially generated set of 15,000 points in R^10 drawn from a mixture of five Gaussians (MG). This task is made challenging by roughly merging the five Gaussians, so that it is more likely that fewer clusters are discovered; the purpose of this set is to examine how well our algorithm finds the appropriate number of clusters. For the MG set, we also report results for DBSCAN [9], another clustering algorithm that does not require k beforehand.

We used a simple polynomial kernel of the form κ(x, y) = (x · y + 0.05)³ for the real-world data sets, and the dot product for the artificial set. Algorithm 3 was run with τ₁ = n log(n/0.01) and τ₂ = k log(k/0.01) mixing steps for the first and second phases, respectively, and C = R = 10. The running time of our algorithm includes the time taken to heuristically search for λ using the following BIC [14]:

BIC_k ≜ Σ_{x∈P} log Pr(x | {m_i}_{i=1}^k, σ) − (kd/2) log n,

where σ is the average of each cluster's distortion and d is the dimension of the data set. The tuning procedure is the one given at the end of the previous section, without the binary-search step.
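The BIC above can be computed in a few lines under a spherical-Gaussian likelihood. This is only one plausible reading of the paper's σ ("the average of each cluster's distortion" taken here as total distortion divided by n); the function name and that interpretation are assumptions, not the authors' code.

```python
import numpy as np

def bic_score(points, labels, k):
    """BIC_k = sum_x log Pr(x | {m_i}, sigma) - (k d / 2) log n,
    with a spherical Gaussian likelihood around each cluster mean.

    ASSUMPTION: sigma^2 is taken as average per-point distortion;
    the paper does not pin this down precisely.
    """
    n, d = points.shape
    means = np.stack([points[labels == c].mean(axis=0) for c in range(k)])
    sq = ((points - means[labels]) ** 2).sum(axis=1)  # per-point distortion
    sigma2 = max(sq.sum() / n, 1e-12)
    loglik = (-0.5 * d * np.log(2 * np.pi * sigma2) - sq / (2 * sigma2)).sum()
    return loglik - 0.5 * k * d * np.log(n)
```

On well-separated data this score prefers the correct k over a single merged cluster, which is the behavior the λ-tuning loop relies on.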
4.1 Real-World Data Sets

The distortion and running time for the LET set over the clustering iterations are plotted in Figure 1. Recall that KKM was run with the true number of clusters as input, so one might expect it to perform relatively better, in terms of distortion and running time, than the other algorithms, which must compute k. The plots show that this is indeed the case, with our DPP-MC outperforming its competitors. Both DPP-Full and DPP-MAP require long running times for the eigendecomposition of the similarity matrix. It is interesting to note that DPP-MAP does not perform better than plain DPP-Full; we conjecture that this phenomenon is due to the approximate nature of the MAP inference.

Figure 1: Distortion (left) and cumulative runtime (right) of the clustering induced by the competing algorithms on the LET set.

In Table 1, we give a summary of the DPP-based initialization procedures. The reported values are the immediate results of the initialization. For DPP-MC, the running time includes the automated λ tuning; taking this into account, DPP-MC was able to recover the true value of k quickly.

In Figure 2, we show the same results on the PWC set. As in the previous case, DPP-MC exhibits the lowest distortion with the fastest running time. For this set, we have omitted the plots for DPP-MAP, as it yielded a degenerate result of a single cluster; its final numbers are nevertheless given in Table 1.

                     LET set                        PWC set
             DPP-MC  DPP-Full  DPP-MAP     DPP-MC  DPP-Full  DPP-MAP
Distortion   36020   42841     43719       9.78    20.15     150
Time (sec.)  20      820       2850        15      50        220
k            26      6         16          13      6         1
λ            2       –         –           4       –         –

Table 1: Comparison among the DPP-based initializations for the LET set (left) and the PWC set (right).
Figure 2: Distortion (left) and time (right) of the clustering induced by the competing algorithms on the PWC set.

4.2 Artificial Data Set

Finally, we present results on clustering the artificial MG set. Here we compare our algorithm with DBSCAN, another clustering algorithm that does not require k a priori. Due to page constraints, we summarize the results in Table 2.

             DPP-MC   DBSCAN
Distortion   6.127    35.967
Time (sec.)  416      60
k            34       1

Table 2: Comparison between DPP-MC and DBSCAN on the MG set.

Due to the merged configuration of the MG set, DBSCAN is unable to discover multiple clusters and ends up with a singleton clustering. DPP-MC, on the other hand, managed to find many distinct clusters in a way that lowers the distortion.

5 Discussion and Future Work

We have proposed a fast method for sampling from an ε-close DPP distribution, together with an application to kernelized clustering. Although the computational complexity of sampling, O(T² n log(n/ε)), is not explicitly superior to that of the previous approach, O(n³), we show empirically that T is generally small enough to account for our algorithm's efficiency. Furthermore, the extension to k-DPP sampling yields a very large speed-up over the previous sampling algorithm. One must keep in mind, however, that the mixing-time analysis is for a single sample only; i.e., the chain must be re-mixed for each sample needed. For a small number of samples, this may compensate for the cubic complexity of the previous approach. For a larger number of samples, we must further investigate the effect of sample correlation after mixing in order to establish long-term efficiency.

References

[1] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. ArXiv, 2012.
[2] A. Kulesza and B. Taskar.
k-DPPs: Fixed-size determinantal point processes. In Proceedings of ICML, 2011.
[3] A. Kulesza and B. Taskar. Learning determinantal point processes. In Proceedings of UAI, 2011.
[4] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 3, 2006.
[5] I. Dhillon, Y. Guan, and B. Kulis. Kernel k-means, spectral clustering and normalized cuts. In Proceedings of ACM SIGKDD, 2004.
[6] G. Golub and C. van Loan. Matrix Computations. Johns Hopkins University Press, 1996.
[7] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. NP-hardness of Euclidean sum-of-squares clustering. Machine Learning, 75:245–248, 2009.
[8] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Proceedings of NIPS, 2001.
[9] M. Ester, H. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of KDD, 1996.
[10] C. Fraley and A. E. Raftery. How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8), 1998.
[11] J. Gillenwater, A. Kulesza, and B. Taskar. Near-optimal MAP inference for determinantal point processes. In Proceedings of NIPS, 2012.
[12] D. Slate. Letter recognition data set. http://archive.ics.uci.edu/ml/datasets/Letter+Recognition, 1991.
[13] G. Hébrail and A. Bérard. Individual household electric power consumption data set. http://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption, 2012.
[14] C. Goutte, L. K. Hansen, M. G. Liptrot, and E. Rostrup. Feature-space clustering for fMRI meta-analysis. Human Brain Mapping, 13, 2001.
Transportability from Multiple Environments with Limited Experiments Elias Bareinboim∗ UCLA Sanghack Lee∗ Penn State University Vasant Honavar Penn State University Judea Pearl UCLA Abstract This paper considers the problem of transferring experimental findings learned from multiple heterogeneous domains to a target domain, in which only limited experiments can be performed. We reduce questions of transportability from multiple domains and with limited scope to symbolic derivations in the causal calculus, thus extending the original setting of transportability introduced in [1], which assumes only one domain with full experimental information available. We further provide different graphical and algorithmic conditions for computing the transport formula in this setting, that is, a way of fusing the observational and experimental information scattered throughout different domains to synthesize a consistent estimate of the desired effects in the target domain. We also consider the issue of minimizing the variance of the produced estimand in order to increase power. 1 Motivation Transporting and synthesizing experimental knowledge from heterogeneous settings are central to scientific discovery. Conclusions that are obtained in a laboratory setting are transported and applied elsewhere in an environment that differs in many aspects from that of the laboratory. In data-driven sciences, experiments are conducted on disparate domains, but the intention is almost invariably to fuse the acquired knowledge, and translate it into some meaningful claim about a target domain, which is usually different than any of the individual study domains. However, the conditions under which this extrapolation can be legitimized have not been formally articulated until very recently. 
Although the problem has been discussed in many areas of statistics, economics, and the health sciences, under rubrics such as "external validity" [2, 3], "meta-analysis" [4], "quasi-experiments" [5], and "heterogeneity" [6], these discussions are limited to verbal narratives in the form of heuristic guidelines for experimental researchers; no formal treatment of the problem has been attempted to answer the practical challenge, posed in this paper, of generalizing causal knowledge across multiple heterogeneous domains with disparate experimental data. The fields of artificial intelligence and statistics provide the theoretical underpinnings necessary for tackling transportability. First, the distinction between statistical and causal knowledge has received syntactic representation through causal diagrams [7, 8, 9], which became a popular tool for causal inference in data-driven fields. Second, the inferential machinery provided by the causal calculus (do-calculus) [7, 9, 10] is particularly suitable for handling knowledge transfer across domains. Armed with these techniques, [1] introduced a formal language for encoding differences and commonalities between domains, accompanied by necessary or sufficient conditions under which transportability of empirical findings is feasible between two domains, a source and a target; these conditions were then extended to a complete characterization of transportability in one domain with unrestricted experimental data [11]. Subsequently, these results were generalized to settings in which only limited experiments are available in the source domain [12, 13], and further to settings with multiple source domains with unrestricted experimental information [14, 15].

∗These authors contributed equally to this paper. The authors' addresses are respectively eb@cs.ucla.edu, sxl439@ist.psu.edu, vhonavar@ist.psu.edu, judea@cs.ucla.edu.
This paper broadens these discussions by introducing a more general setting in which multiple heterogeneous sources with limited and distinct experiments are available, a task that we call here "mz-transportability".1 More formally, the mz-transportability problem concerns the transfer of causal knowledge from a heterogeneous collection of source domains Π = {π₁, ..., πₙ} to a target domain π*. In each domain πᵢ ∈ Π, experiments over a set of variables Zᵢ can be performed and causal knowledge gathered. In π*, potentially different from the πᵢ, only passive observations can be collected (this constraint is weakened later on). The problem is to infer a causal relationship R in π* using knowledge obtained in Π. Clearly, if nothing is known about the relationship between Π and π*, the problem is trivial; no transfer can be justified. Yet the fact that all scientific experiments are conducted with the intent of being used elsewhere (e.g., outside the lab) implies that scientific progress relies on the assumption that certain domains share common characteristics and that, owing to these commonalities, causal claims would be valid in new settings even where experiments cannot be conducted. The problem stated in this paper generalizes both the one-dimensional version of transportability with limited scope and the multi-dimensional version with unlimited scope. Remarkably, while the effects of interest might not be individually transportable to the target domain from the experiments in any one of the available sources, combining different pieces from the various sources may enable the estimation of the desired effects (to be shown later on). The goal of this paper is to understand formally under which conditions the target quantity is (non-parametrically) estimable from the available data.

2 Previous work and our contributions

Consider Fig. 1(a), in which the node S represents factors that produce differences between the source and target populations.
Assume that we conduct a randomized trial in Los Angeles (LA) and estimate the causal effect of treatment X on outcome Y for every age group Z = z, denoted by P(y|do(x), z). We now wish to generalize the results to the population of the United States (U.S.), but we find the distribution P(x, y, z) in LA to be different from the one in the U.S. (call the latter P*(x, y, z)). In particular, the average age in the U.S. is significantly higher than that in LA. How are we to estimate the causal effect of X on Y in the U.S., denoted R = P*(y|do(x))?2,3 The selection diagram for this example (Fig. 1(a)) conveys the assumption that the only difference between the two populations lies in the factors determining age distributions, shown as S → Z, while the age-specific effects P*(y|do(x), Z = z) are invariant across populations. Difference-generating factors are represented by a special set of variables called selection variables S (or simply S-variables), which are graphically depicted as square nodes (■). From this assumption, the overall causal effect in the U.S. can be derived as follows:

R = Σ_z P*(y|do(x), z) P*(z) = Σ_z P(y|do(x), z) P*(z).   (1)

The last line is the transport formula for R. It combines experimental results obtained in LA, P(y|do(x), z), with observational aspects of the U.S. population, P*(z), to obtain an experimental claim P*(y|do(x)) about the U.S. In this trivial example, the transport formula amounts to a simple re-calibration (or re-weighting) of the age-specific effects to account for the new age distribution. In general, however, a more involved mixture of experimental and observational findings would be necessary to obtain a bias-free estimate of the target relation R. Fig. 1(b) depicts the smallest example in which transportability is not feasible even when experiments over X in π are available.
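As a concrete illustration, the re-weighting in Eq. 1 is a single inner product between the source's stratum-specific effects and the target covariate distribution. The numbers below are hypothetical, not from the paper, and the function name is ours.

```python
import numpy as np

def transport_formula(p_y_dox_z, p_star_z):
    """Eq. 1: re-weight source effects P(y|do(x), z) by the target
    covariate distribution P*(z) to obtain P*(y|do(x)).

    p_y_dox_z: shape (n_z,), P(y=1|do(x), z) estimated in the source (LA).
    p_star_z:  shape (n_z,), target distribution P*(z), summing to 1.
    """
    return float(np.dot(p_y_dox_z, p_star_z))
```

Shifting mass in P*(z) toward strata with stronger effects moves the transported estimate accordingly, which is exactly the re-calibration the text describes.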
In real-world applications, it may happen that certain controlled experiments cannot be conducted in the source environment (for financial, ethical, or technical reasons), so only a limited amount of experimental information can be gathered. A natural question arises whether an investigator in possession of a limited set of experiments would still be able to estimate the desired effects in the target domain. For instance, we assume in Fig. 1(c) that experiments over Z₁ are available and that the target quantity is R = P*(y|do(x)), which can be shown to be equivalent to P(y|x, do(Z₁)), the conditional distribution of Y given X in the experimental study where Z₁ is randomized.4

Figure 1: The selection variables S are depicted as square nodes (■). (a) Selection diagram illustrating when transportability between two domains is trivially solved through simple recalibration. (b) The smallest possible selection diagram in which a causal relation is not transportable. (c) Selection diagram illustrating transportability when only experiments over {Z₁} are available in the source.

1The machine learning literature has been concerned with discrepancies among domains almost exclusively in the context of predictive or classification tasks, as opposed to learning causal or counterfactual measures [16, 17]. Interestingly enough, recent work on anticausal learning moves towards more general modalities of learning and also leverages knowledge about the underlying data-generating structure [18, 19].
2We will use P_x(y | z) interchangeably with P(y | do(x), z).
3We use the structural interpretation of causal diagrams as described in [9, pp. 205].

One might surmise that multiple pairwise z-transportability would be sufficient to solve the mz-transportability problem, but this is not the case. To witness, consider Fig.
2(a,b), which concerns the transport of experimental results from two sources ({π_a, π_b}) to infer the effect of X on Y in π*, R = P*(y|do(x)). In these diagrams, X may represent the treatment (e.g., cholesterol level), Z₁ a pre-treatment variable (e.g., diet), Z₂ an intermediate variable (e.g., biomarker), and Y the outcome (e.g., heart failure). We assume that experimental studies randomizing {Z₁, Z₂} can be conducted in both domains. A simple analysis based on [12] shows that R cannot be z-transported from either source alone, but it turns out that combining experiments from both sources in a special way allows one to determine the effect in the target. More interestingly, we consider the more stringent scenario where only certain experiments can be performed in each of the domains. For instance, assume that it is only possible to conduct experiments over {Z₂} in π_a and over {Z₁} in π_b. Obviously, R cannot be z-transported individually from these domains, but it turns out that taking both sets of experiments into account,

R = Σ_{z₂} P^(a)(y|do(z₂)) P^(b)(z₂|x, do(Z₁)),

which fully uses all the pieces of experimental data available. In other words, we were able to decompose R into subrelations such that each is separately z-transportable from the source domains, and so is the desired target quantity. Interestingly, in this example, if the domains in which experiments were conducted were reversed (i.e., {Z₁} randomized in π_a, {Z₂} in π_b), it would not be possible to transport R by any method; the target relation is simply not computable from the available data (formally shown later on). This illustrates some of the subtle issues mz-transportability entails, which cannot be immediately cast in terms of previous instances of the transportability class.
In the sequel, we try to better understand some of these issues, and we develop sufficient or (specific) necessary conditions for deciding special cases of transportability for arbitrary collections of selection diagrams and sets of experiments. We further construct an algorithm for deciding mz-transportability of joint causal effects and returning the correct transport formula whenever this is possible. We also consider issues relating to the variance of the estimand, aiming to improve sample efficiency and increase statistical power.

3 Graphical conditions for mz-transportability

The basic semantical framework of our analysis rests on structural causal models as defined in [9, pp. 205], also called data-generating models. In the structural causal framework [9, Ch. 7], actions are modifications of functional relationships, and each action do(x) on a causal model M produces a new model M_x = ⟨U, V, F_x, P(U)⟩, where F_x is obtained by replacing f_X ∈ F, for every X ∈ X, with a new function that outputs the constant value x given by do(x).5 We follow the conventions given in [9]. We denote variables by capital letters and their realized values by small letters. Similarly, sets of variables are denoted by bold capital letters, and sets of realized values by bold small letters.

Figure 2: Selection diagrams illustrating the impossibility of estimating R = P*(y|do(x)) through individual transportability from π_a and π_b even when Z = {Z₁, Z₂} (for (a, b) and (c, d)). If we assume, more stringently, availability of experiments Z_a = {Z₂}, Z_b = {Z₁}, Z* = {}, a more elaborate analysis can show that R can be estimated by combining different pieces from both domains.

4A typical example is whether we can estimate the effect of cholesterol (X) on heart failure (Y) by experiments on diet (Z₁), given that cholesterol levels cannot be randomized [20].
We use the typical graph-theoretic terminology with the corresponding abbreviations Pa(Y)G and An(Y)G, which denote respectively the sets of observable parents and ancestors of the node set Y in G. A graph GY denotes the subgraph of G induced by the nodes in Y, containing all arrows between such nodes. Finally, GXZ stands for the edge subgraph of G where all arrows incoming into X and all arrows outgoing from Z are removed. Key to the analysis of transportability is the notion of "identifiability," defined below, which expresses the requirement that causal effects be computable from a combination of data P and assumptions embodied in a causal graph G.

Definition 1 (Causal Effects Identifiability (Pearl, 2000, pp. 77)). The causal effect of an action do(x) on a set of variables Y such that Y ∩ X = ∅ is said to be identifiable from P in G if Px(y) is uniquely computable from P(V) in any model that induces G.

Causal models and their induced graphs are usually associated with one particular domain (also called setting, study, population, or environment). In ordinary transportability, this representation was extended to capture properties of two domains simultaneously. This is possible if we assume that the structural equations share the same set of arguments, though the functional forms of the equations may vary arbitrarily [11].6

Definition 2 (Selection Diagram). Let ⟨M, M ∗⟩ be a pair of structural causal models [9, pp. 205] relative to domains ⟨π, π∗⟩, sharing a causal diagram G. ⟨M, M ∗⟩ is said to induce a selection diagram D if D is constructed as follows: 1. every edge in G is also an edge in D; 2. D contains an extra edge Si → Vi whenever there might exist a discrepancy fi ≠ f ∗i or P(Ui) ≠ P ∗(Ui) between M and M ∗.
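Definition 2 is straightforward to operationalize. As a toy illustration (the chain graph and variable names below are invented for the example, not taken from Fig. 2), a selection diagram can be represented as a parent map augmented with S-nodes:

```python
def selection_diagram(G, discrepant):
    """Augment a causal diagram G (variable -> list of parents) with
    selection nodes S_i pointing at each variable whose mechanism may
    differ between the two domains (Definition 2)."""
    D = {v: list(parents) for v, parents in G.items()}
    for i, v in enumerate(discrepant, start=1):
        D[v] = D[v] + [f"S{i}"]
    return D

# toy chain Z1 -> X -> Z2 -> Y with suspected discrepancies at Z1 and Z2
G = {"Z1": [], "X": ["Z1"], "Z2": ["X"], "Y": ["Z2"]}
D = selection_diagram(G, ["Z1", "Z2"])
```

The absence of an S-node parent (as for Y here) encodes the assumption that the corresponding mechanism is shared across the two domains.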
In words, the S-variables locate the mechanisms where structural discrepancies between the two domains are suspected to take place.7 Alternatively, the absence of a selection node pointing to a variable represents the assumption that the mechanism responsible for assigning values to that variable is identical in both domains.

5 The results presented here also hold in other formalisms for causality based on potential outcomes.
6 As discussed in the reference, the assumption of no structural changes between domains can be relaxed, but some structural assumptions regarding the discrepancies between domains must still hold.
7 Transportability assumes that enough structural knowledge about both domains is available to substantiate the construction of their respective causal diagrams. In the absence of such knowledge, causal discovery algorithms might be used to infer the diagrams from data [8, 9].

Armed with the concepts of identifiability and selection diagrams, mz-transportability of causal effects can be defined as follows:

Definition 3 (mz-Transportability). Let D = {D(1), ..., D(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π∗, respectively, and let Zi (and Z∗) be the variables in which experiments can be conducted in domain πi (and π∗). Let ⟨P i, Ii z⟩ be the pair of observational and interventional distributions of πi, where Ii z = ∪_{Z′⊆Zi} P i(v|do(z′)), and, analogously, let ⟨P ∗, I∗ z⟩ be the observational and interventional distributions of π∗. The causal effect R = P ∗x(y|w) is said to be mz-transportable from Π to π∗ in D if P ∗x(y|w) is uniquely computable from ∪_{i=1,...,n} ⟨P i, Ii z⟩ ∪ ⟨P ∗, I∗ z⟩ in any model that induces D.

The requirement that R be uniquely computable from ⟨P ∗, I∗ z⟩ and the ⟨P i, Ii z⟩ from all sources has a syntactic image in the causal calculus, which is captured by the following sufficient condition.

Theorem 1.
Let D = {D(1), ..., D(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π∗, respectively, and let Si represent the collection of S-variables in the selection diagram D(i). Let {⟨P i, Ii z⟩} and ⟨P ∗, I∗ z⟩ be, respectively, the pairs of observational and interventional distributions in the sources Π and target π∗. The relation R = P ∗(y|do(x), w) is mz-transportable from Π to π∗ in D if the expression P(y|do(x), w, S1, ..., Sn) is reducible, using the rules of the causal calculus, to an expression in which (1) do-operators that apply to subsets of Ii z have no Si-variables, or (2) do-operators apply only to subsets of I∗ z.

This result provides a powerful way to establish mz-transportability syntactically, but it is not immediately obvious whether a sequence of applications of the rules of the causal calculus that achieves the required reduction exists and, even if such a sequence exists, how to obtain it. For concreteness, we illustrate this result using the selection diagrams in Fig. 2(a,b).

Corollary 1. P ∗(y|do(x)) is mz-transportable in Fig. 2(a,b) with Za = {Z2} and Zb = {Z1}.

Proof. The goal is to show that R = P ∗(y|do(x)) is mz-transportable from {πa, πb} to π∗ using experiments conducted over {Z2} in πa and {Z1} in πb. Note that naively trying to transport R from each of the domains individually is not possible, but R can be decomposed as follows:

P ∗(y|do(x)) = P ∗(y|do(x), do(Z1))   (2)
             = Σ_{z2} P ∗(y|do(x), do(Z1), z2) P ∗(z2|do(x), do(Z1))   (3)
             = Σ_{z2} P ∗(y|do(x), do(Z1), do(z2)) P ∗(z2|do(x), do(Z1)),   (4)

where Eq. (2) follows by rule 3 of the causal calculus since (Z1 ⊥⊥ Y | X) holds in DX,Z1, we condition on Z2 in Eq. (3), and Eq. (4) follows by rule 2 of the causal calculus since (Z2 ⊥⊥ Y | X, Z1) holds in DX,Z1,Z2, where D is the diagram of π∗ (regardless of the location of the S-nodes). Now we can rewrite the first term of Eq. (4) as indicated by the Theorem (and suggested by Def. 2):

P ∗(y|do(x), do(Z1), do(z2)) = P(y|do(x), do(Z1), do(z2), Sa, Sb)   (5)
                             = P(y|do(x), do(Z1), do(z2), Sb)   (6)
                             = P(y|do(z2), Sb)   (7)
                             = P (a)(y|do(z2)),   (8)

where Eq. (5) follows from the theorem (and the definition of selection diagram), Eq. (6) follows from rule 1 of the causal calculus since (Sa ⊥⊥ Y | Z1, Z2, X) holds in D(a)Z1,Z2,X, and Eq. (7) follows from rule 3 of the causal calculus since (Z1, X ⊥⊥ Y | Z2) holds in D(a)Z1,Z2,X. Note that this derivation matches the syntactic goal of Theorem 1, since do(z2) is separated from Sa (and Z2 ∈ Ia z); hence we can rewrite the expression as Eq. (8) by the definition of selection diagram. Finally, we can rewrite the second term of Eq. (4) as follows:

P ∗(z2|do(x), do(Z1)) = P(z2|do(x), do(Z1), Sa, Sb)   (9)
                      = P(z2|do(x), do(Z1), Sa)   (10)
                      = P(z2|x, do(Z1), Sa)   (11)
                      = P (b)(z2|x, do(Z1)),   (12)

where Eq. (9) follows from the theorem (and the definition of selection diagram), Eq. (10) follows from rule 1 of the causal calculus since (Sb ⊥⊥ Z2 | Z1, X) holds in D(b)Z1,X, and Eq. (11) follows from rule 2 of the causal calculus since (X ⊥⊥ Z2 | Z1) holds in D(b)Z1,X. This derivation again matches the condition of the theorem: do(Z1) is separated from Sb (i.e., experiments over Z1 can be used since they are available in πb), so we can rewrite Eq. (11) as Eq. (12) using the definition of selection diagram, and the corollary follows.

The next sufficient condition for mz-transportability is more transparent than Theorem 1 (albeit weaker), and it also illustrates the challenge of relating mz-transportability to other types of transportability.

Corollary 2. R = P ∗(y|do(x)) is mz-transportable in D if there exists Z′i ⊆ Zi such that all paths from Z′i to Y are blocked by X, (Si ⊥⊥ Y | X, Z′i) holds in D(i)X,Z′i, and R is computable from do(Zi).
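Once the two experimental pieces P (a)(y|do(z2)) and P (b)(z2|x, do(Z1)) have been estimated, the transport formula of Corollary 1 is a single sum over z2; for finite-valued variables it is just a matrix product. A sketch with made-up conditional probability tables (the numbers are purely illustrative):

```python
import numpy as np

# P^(a)(y | do(z2)) from source a: rows index y, columns index z2
Pa_y_do_z2 = np.array([[0.9, 0.3],
                       [0.1, 0.7]])
# P^(b)(z2 | x, do(Z1)) from source b: rows index z2, columns index x
Pb_z2_x_doZ1 = np.array([[0.8, 0.2],
                         [0.2, 0.8]])

# R(y|do(x)) = sum_{z2} P^(a)(y|do(z2)) * P^(b)(z2|x, do(Z1))
R = Pa_y_do_z2 @ Pb_z2_x_doZ1
```

Since each factor is a proper conditional distribution (columns sum to one), the resulting R is a proper distribution over y for each x.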
Remarkably, randomizing Z2 when applying Corollary 1 was instrumental in yielding transportability in the previous example, despite the fact that the directed paths from Z2 to Y were not blocked by X; this suggests how different mz-transportability is from z-identifiability. It is therefore not immediately obvious how to combine the topological relations of the Zi's with X and Y into a general condition for mz-transportability, and the relationships between the distributions in the different domains can become quite intricate; we defer this discussion for now and consider a simpler case. It is usually not trivial to pursue a derivation of mz-transportability in the causal calculus, and next we show an example in which no such derivation even exists. Consider again the diagrams in Fig. 2(a,b), and assume that randomized experiments are available over {Z1} in πa and {Z2} in πb.

Theorem 2. P ∗(y|do(x)) is not mz-transportable in Fig. 2(a,b) with Za = {Z1} and Zb = {Z2}.

Proof. Formally, we need to display two models M1, M2 such that the following relations hold (as implied by Def. 3):

P (a)M1(Z1, X, Z2, Y) = P (a)M2(Z1, X, Z2, Y),
P (b)M1(Z1, X, Z2, Y) = P (b)M2(Z1, X, Z2, Y),
P (a)M1(X, Z2, Y | do(Z1)) = P (a)M2(X, Z2, Y | do(Z1)),
P (b)M1(Z1, X, Y | do(Z2)) = P (b)M2(Z1, X, Y | do(Z2)),
P ∗M1(Z1, X, Z2, Y) = P ∗M2(Z1, X, Z2, Y),   (13)

for all values of Z1, X, Z2, and Y, and also

P ∗M1(Y | do(X)) ≠ P ∗M2(Y | do(X)),   (14)

for some value of X and Y. Let V be the set of observable variables and U the set of unobservable variables in D. Assume that all variables in U ∪ V are binary. Let U1, U2 ∈ U be the common causes of Z1 and X and Z2, respectively; let U3, U4, U5 ∈ U be random disturbances exclusive to Z1, Z2, and Y, respectively; let U6 ∈ U be an extra random disturbance exclusive to Z2, and U7, U8 ∈ U extra disturbances exclusive to Y.
Let Sa and Sb index the model in the following way: the tuples ⟨Sa = 1, Sb = 0⟩, ⟨Sa = 0, Sb = 1⟩, and ⟨Sa = 0, Sb = 0⟩ represent domains πa, πb, and π∗, respectively. Define the two models as follows:

M1:  Z1 = U1 ⊕ U2 ⊕ (U3 ∧ Sa)
     X  = Z1 ⊕ U1
     Z2 = (X ⊕ U2 ⊕ (U4 ∧ Sa)) ∨ U6
     Y  = (Z2 ∧ U5) ⊕ (U5 ∧ U7) ⊕ (Sb ∧ U8)

M2:  Z1 = U1 ⊕ U2 ⊕ (U3 ∧ Sa)
     X  = U1
     Z2 = (U4 ∧ Sa ∧ U6) ⊕ U6
     Y  = (Z2 ∧ U5) ⊕ (U5 ∧ U7) ⊕ (Sb ∧ U8)

where ⊕ represents the exclusive-or function. Both models agree with respect to P(U), which is defined as P(Ui) = 1/2, i = 1, ..., 8. It is not difficult to evaluate these models and verify that the constraints given in Eqs. (13) and (14) are satisfied (including positivity); the theorem follows.

4 Algorithm for computing mz-transportability

In this section, we build on previous analyses of identifiability [7, 21, 22, 23] to obtain a mechanical procedure that takes as input a collection of selection diagrams and experimental data, and returns a transport formula whenever it is able to produce one. More specifically,

PROCEDURE TRmz(y, x, P, I, S, W, D)
INPUT: x, y: value assignments; P: local distribution relative to domain S (S = 0 indexes π∗) and active experiments I; W: weighting scheme; D: backbone of selection diagram; Si: selection nodes in πi (S0 = ∅ relative to π∗). [The following set and distributions are globally defined: Zi, P ∗, P (i)Zi.]
OUTPUT: P ∗x(y) in terms of P ∗, P ∗Z, P (i)Zi, or FAIL(D, C0).
1  if x = ∅, return Σ_{V\Y} P.
2  if V \ An(Y)D ≠ ∅, return TRmz(y, x ∩ An(Y)D, Σ_{V\An(Y)D} P, I, S, W, DAn(Y)).
3  set W = (V \ X) \ An(Y)DX. if W ≠ ∅, return TRmz(y, x ∪ w, P, I, S, W, D).
4  if C(D \ X) = {C0, C1, ..., Ck}, return Σ_{V\{y,x}} Π_i TRmz(ci, v \ ci, P, I, S, W, D).
5  if C(D \ X) = {C0},
6    if C(D) ≠ {D},
7      if C0 ∈ C(D), return Π_{i|Vi∈C0} Σ_{V\V(i)D} P / Σ_{V\V(i−1)D} P.
8      if (∃C′) C0 ⊂ C′ ∈ C(D), for {i | Vi ∈ C′}, set κi = κi ∪ v(i−1)D \ C′,
         and return TRmz(y, x ∩ C′, Π_{i|Vi∈C′} P(Vi | V(i−1)D ∩ C′, κi), I, S, W, C′).
9    else,
10     if I = ∅, for i = 0, ..., |D|,
         if ((Si ⊥⊥ Y | X) in D(i)X ∧ (Zi ∩ X ≠ ∅)),
           Ei = TRmz(y, x \ zi, P, Zi ∩ X, i, W, D \ {Zi ∩ X}).
11   if |E| > 0, return Σ_{i=1}^{|E|} w(j)i Ei.
12   else, FAIL(D, C0).

Figure 3: Modified version of the identification algorithm capable of recognizing mz-transportability.

our algorithm is called TRmz (see Fig. 3) and is based on the C-component decomposition for identification of causal effects [22, 23] (and a version of the identification algorithm called ID). The rationale behind TRmz is to apply Tian's factorization to decompose the target relation into smaller, more manageable sub-expressions, and then to evaluate whether each sub-expression can be computed in the target domain. Whenever this evaluation fails, TRmz tries to use the experiments available from the target and, if possible, from the sources; this essentially implements the declarative condition delineated in Theorem 1. Next, we consider the soundness of the algorithm.

Theorem 3 (soundness). Whenever TRmz returns an expression for P ∗x(y), it is correct.

In the sequel, we demonstrate how the algorithm works through the mz-transportability of Q = P ∗(y|do(x)) in Fig. 2(c,d) with Z∗ = {Z1}, Za = {Z2}, and Zb = {Z1}. Since (V \ X) \ An(Y)DX = {Z2}, TRmz invokes line 3 with {Z2} ∪ {X} as the interventional set. The new call triggers line 4 with C(D \ {X, Z2}) = {C0, C1, C2, C3}, where C0 = DZ1, C1 = DZ3, C2 = DU, and C3 = DW,Y, and we try to mz-transport individually Q0 = P ∗x,z2,z3,u,w,y(z1), Q1 = P ∗x,z1,z2,u,w,y(z3), Q2 = P ∗x,z1,z2,z3,w,y(u), and Q3 = P ∗x,z1,z2,z3,u(w, y). Thus the original problem reduces to evaluating the equivalent expression Σ_{z1,z3,u,w} P ∗x,z2,z3,u,w,y(z1) P ∗x,z1,z2,u,w,y(z3) P ∗x,z1,z2,z3,w,y(u) P ∗x,z1,z2,z3,u(w, y).

First, TRmz evaluates the expression Q0 and triggers line 2, noting that all nodes can be ignored since they are not ancestors of {Z1}, which implies after line 1 that P ∗x,z2,z3,u,w,y(z1) = P ∗(z1).
Second, TRmz evaluates the expression Q1, triggering line 2, which implies that P ∗x,z1,z2,u,w,y(z3) = P ∗x,z1,z2(z3) with induced subgraph D1 = DX,Z1,Z2,Z3. TRmz goes to line 5, where in the local call C(D \ {X, Z1, Z2}) = {DZ3}. It thus proceeds to line 6, testing whether C(D \ {X, Z1, Z2}) is different from D1, which is false. In this call, ordinary identifiability would fail, but TRmz proceeds to line 9. The goal of this line is to test whether some experiment can help in computing Q1. In this case, πa immediately fails the test in line 10, but πb and π∗ succeed, which means experiments in these domains may eventually help; the new call is P (i)x,z2(z3) in D \ Z1, for i = {b, ∗}, with induced graph D′1 = DX,Z2,Z3. Finally, TRmz triggers line 8 since X is not part of Z3's component in D′1 (i.e., Z3 ∈ C′ = {Z2 ↔ Z3}); line 2 is then triggered since Z2 is no longer an ancestor of Z3 in D′1, and then line 1 is triggered since the interventional set is empty in this local call, so P ∗x,z1,z2(z3) = Σ_{Z′2} P (i)z1(z3|x, Z′2) P (i)z1(Z′2), for i = {b, ∗}.

Third, evaluating the expression Q2, TRmz goes to line 2, which implies that P ∗x,z1,z2,z3,w,y(u) = P ∗x,z1,z2,z3,w(u) with induced subgraph D2 = DX,Z1,Z2,Z3,W,U. TRmz goes to line 5, and in this local call C(D \ {X, Z1, Z2, Z3, W}) = {DU}, and the test in line 6 succeeds, since there are more components in D. It thus triggers line 8, since W is not part of U's component in D2. The algorithm sets P ∗x,z1,z2,z3,w(u) = P ∗x,z1,z2,z3(u) in D2 given W (and updates the working distribution); note that in this call, ordinary identifiability would fail, since the nodes are in the same C-component and the test in line 6 fails. But TRmz proceeds to line 9, trying to find experiments that can help in computing Q2.
In this case, πb cannot help, but πa and π∗ perhaps can: new calls are launched for computing P (a)x,z1,z3(u) in D2 \ Z2 given W relative to πa, and P ∗x,z2,z3(u) in D2 \ Z1 given W relative to π∗, with the corresponding data structures set. In πa, the algorithm triggers line 7, which yields P (a)x,z1,z3(u) = P (a)z2(u|w, z3, x, z1), and a somewhat more involved analysis for π∗ yields (after simplification)

P ∗x,z2,z3(u) = ( Σ_{Z′2} P ∗z1(u|w, z3, x, Z′2) P ∗z1(z3|x, Z′2) P ∗z1(Z′2) ) / ( Σ_{Z′′2} P ∗z1(z3|x, Z′′2) P ∗z1(Z′′2) ).

Fourth, TRmz evaluates the expression Q3 and triggers line 5, with C(D \ {X, Z1, Z2, Z3, U}) = {DW,Y}. In turn, both tests at lines 6 and 7 succeed, which makes the procedure return P ∗x,z1,z2,z3,u(w, y) = P ∗(w|z3, x, z1, z2) P ∗(y|w, x, z1, z2, z3, u). The composition of the returns of these calls generates the following expression:

P ∗x(y) = Σ_{z1,z3,w,u} P ∗(z1)
  × [ w(1)1 Σ_{Z′2} P ∗z1(z3|x, Z′2) P ∗z1(Z′2) + w(1)2 Σ_{Z′2} P (b)z1(z3|x, Z′2) P (b)z1(Z′2) ]
  × [ w(2)1 ( Σ_{Z′2} P ∗z1(u|w, z3, x, Z′2) P ∗z1(z3|x, Z′2) P ∗z1(Z′2) ) / ( Σ_{Z′′2} P ∗z1(z3|x, Z′′2) P ∗z1(Z′′2) ) + w(2)2 P (a)z2(u|w, z3, x, z1) ]
  × P ∗(w|x, z1, z2, z3) P ∗(y|x, z1, z2, z3, w, u),   (15)

where w(k)i represents the weight of each factor in estimand k (i = 1, ..., nk), and nk is the number of feasible estimands for k. Eq. (15) depicts a powerful way to estimate P ∗(y|do(x)) in the target domain; depending on the choice of weights, a different estimand is entailed. For instance, one might use an analogue of inverse-variance weighting, which sets each weight to the normalized inverse of the corresponding variance (i.e., w(k)i = σi^{−2} / Σ_{j=1}^{nk} σj^{−2}, where σj^2 is the variance of the jth component of estimand k). Our strategy resembles the approach taken in meta-analysis [4], although the latter usually disregards the intricacies of the relationships between the variables, thus producing a statistically less powerful estimand.
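The inverse-variance weighting rule mentioned above is a small computation in itself; a minimal sketch (the component variances below are illustrative, not estimated from data):

```python
import numpy as np

def inverse_variance_weights(variances):
    """w_i = sigma_i^{-2} / sum_j sigma_j^{-2}: components estimated with
    smaller variance receive proportionally larger weight, and the weights
    sum to one."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    return inv / inv.sum()

# two feasible estimands of the same factor, with variances 1.0 and 4.0
w = inverse_variance_weights([1.0, 4.0])
```

For these numbers the less noisy estimand receives weight 0.8 and the noisier one 0.2, mirroring the fixed-effect weighting used in meta-analysis.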
Our method leverages these non-trivial and highly structured relationships, as exemplified in Eq. (15), which yields an estimand with smaller variance and greater statistical power.

5 Conclusions

In this paper, we treat a special type of transportability in which experiments can be conducted only over limited sets of variables in the sources and target domains, and the goal is to infer whether a certain effect can be estimated in the target using the information scattered throughout the domains. We provide general sufficient graphical conditions for transportability based on the causal calculus, along with a necessary condition for a specific scenario, which should be generalized to arbitrary structures. We further provide a procedure for computing transportability, that is, for generating a formula that fuses the available observational and experimental data to synthesize an estimate of the desired causal effects. Our algorithm also allows for generic weighting schemes, which generalizes standard statistical procedures and leads to the construction of statistically more powerful estimands.

Acknowledgment

The work of Judea Pearl and Elias Bareinboim was supported in part by grants from NSF (IIS-1249822, IIS-1302448) and ONR (N00014-13-1-0153, N00014-10-1-0933). The work of Sanghack Lee and Vasant Honavar was partially completed while they were with the Department of Computer Science at Iowa State University. The work of Vasant Honavar while working at the National Science Foundation (NSF) was supported by the NSF. The work of Sanghack Lee was supported in part by the grant from NSF (IIS-0711356). Any opinions, findings, and conclusions contained in this article are those of the authors and do not necessarily reflect the views of the sponsors.

References

[1] J. Pearl and E. Bareinboim. Transportability of causal and statistical relations: A formal approach. In W. Burgard and D.
Roth, editors, Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence, pages 247–254. AAAI Press, Menlo Park, CA, 2011. [2] D. Campbell and J. Stanley. Experimental and Quasi-Experimental Designs for Research. Wadsworth Publishing, Chicago, 1963. [3] C. Manski. Identification for Prediction and Decision. Harvard University Press, Cambridge, Massachusetts, 2007. [4] L. V. Hedges and I. Olkin. Statistical Methods for Meta-Analysis. Academic Press, January 1985. [5] W.R. Shadish, T.D. Cook, and D.T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton-Mifflin, Boston, second edition, 2002. [6] S. Morgan and C. Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research). Cambridge University Press, New York, NY, 2007. [7] J. Pearl. Causal diagrams for empirical research. Biometrika, 82(4):669–710, 1995. [8] P. Spirtes, C.N. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, Cambridge, MA, 2nd edition, 2000. [9] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000. 2nd edition, 2009. [10] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009. [11] E. Bareinboim and J. Pearl. Transportability of causal effects: Completeness results. In J. Hoffmann and B. Selman, editors, Proceedings of the Twenty-Sixth National Conference on Artificial Intelligence, pages 698–704. AAAI Press, Menlo Park, CA, 2012. [12] E. Bareinboim and J. Pearl. Causal transportability with limited experiments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 95–101, Menlo Park, CA, 2013. AAAI Press. [13] S. Lee and V. Honavar. Causal transportability of experiments on controllable subsets of variables: ztransportability. In A. Nicholson and P. 
Smyth, editors, Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), pages 361–370. AUAI Press, 2013. [14] E. Bareinboim and J. Pearl. Meta-transportability of causal effects: A formal approach. In C. Carvalho and P. Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 135–143. JMLR W&CP 31, 2013. [15] S. Lee and V. Honavar. m-transportability: Transportability of a causal effect from multiple environments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 583–590, Menlo Park, CA, 2013. AAAI Press. [16] H. Daumé III and D. Marcu. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126, 2006. [17] A.J. Storkey. When training and test sets are different: characterising learning transfer. In J. Candela, M. Sugiyama, A. Schwaighofer, and N.D. Lawrence, editors, Dataset Shift in Machine Learning, pages 3–28. MIT Press, Cambridge, MA, 2009. [18] B. Schölkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal learning. In J. Langford and J. Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255–1262, New York, NY, USA, 2012. Omnipress. [19] K. Zhang, B. Schölkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In Proceedings of the 30th International Conference on Machine Learning (ICML). JMLR: W&CP volume 28, 2013. [20] E. Bareinboim and J. Pearl. Causal inference by surrogate experiments: z-identifiability. In N. Freitas and K. Murphy, editors, Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI), pages 113–120. AUAI Press, 2012. [21] M. Kuroki and M. Miyakawa. Identifiability criteria for causal effects of joint interventions.
Journal of the Royal Statistical Society, 29:105–117, 1999. [22] J. Tian and J. Pearl. A general identification condition for causal effects. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 567–573. AAAI Press/The MIT Press, Menlo Park, CA, 2002. [23] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, pages 1219–1226. AAAI Press, Menlo Park, CA, 2006.
Matrix factorization with Binary Components

Martin Slawski, Matthias Hein and Pavlo Lutsik
Saarland University
{ms,hein}@cs.uni-saarland.de, p.lutsik@mx.uni-saarland.de

Abstract

Motivated by an application in computational biology, we consider low-rank matrix factorization with {0,1}-constraints on one of the factors and optionally convex constraints on the second one. In addition to the non-convexity shared with other matrix factorization schemes, our problem is further complicated by a combinatorial constraint set of size 2^{m·r}, where m is the dimension of the data points and r the rank of the factorization. Despite apparent intractability, we provide, in the line of recent work on non-negative matrix factorization by Arora et al. (2012), an algorithm that provably recovers the underlying factorization in the exact case with O(mr2^r + mnr + r^2n) operations for n data points. To obtain this result, we use theory around the Littlewood-Offord lemma from combinatorics.

1 Introduction

Low-rank matrix factorization techniques like the singular value decomposition (SVD) constitute an important tool in data analysis, yielding a compact representation of data points as linear combinations of a comparatively small number of 'basis elements', commonly referred to as factors, components or latent variables. Depending on the specific application, the basis elements may be required to fulfill additional properties, e.g. non-negativity [1, 2], smoothness [3] or sparsity [4, 5]. In the present paper, we consider the case in which the basis elements are constrained to be binary, i.e. we aim at factorizing a real-valued data matrix D into a product TA with T ∈ {0,1}^{m×r} and A ∈ R^{r×n}, r ≪ min{m, n}. Such decompositions arise, e.g.,
in blind source separation in wireless communication with binary source signals [6]; in network inference from gene expression data [7, 8], where T encodes the connectivity of transcription factors and genes; in unmixing of cell mixtures from DNA methylation signatures [9], in which case T represents presence/absence of methylation; or in clustering with overlapping clusters, with T as a matrix of cluster assignments [10, 11]. Several other matrix factorizations involving binary matrices have been proposed in the literature. In [12] and [13], matrix factorization for binary input data, but with non-binary factors T and A, is discussed, whereas a factorization TWA with both T and A binary and a real-valued W is proposed in [14], which is more restrictive than the model of the present paper. The model in [14] in turn encompasses binary matrix factorization as proposed in [15], where all of D, T and A are constrained to be binary. It is important to note that this line of research is fundamentally different from Boolean matrix factorization [16], which is sometimes also referred to as binary matrix factorization. A major drawback of matrix factorization schemes is non-convexity. As a result, there is in general no algorithm that is guaranteed to compute the desired factorization. Algorithms such as block coordinate descent, EM, MCMC, etc., commonly employed in practice, lack theoretical guarantees beyond convergence to a local minimum. Substantial progress in this regard has been achieved recently for non-negative matrix factorization (NMF) by Arora et al. [17] and follow-up work in [18], where it is shown that under certain additional conditions, the NMF problem can be solved to global optimality by means of linear programming.
Apart from being a non-convex problem, the matrix factorization studied in the present paper is further complicated by the {0,1}-constraints imposed on the left factor T, which yields a combinatorial optimization problem that appears to be computationally intractable except for tiny dimensions m and r, even in case the right factor A were already known. Despite the obvious hardness of the problem, we present as our main contribution an algorithm that provably provides an exact factorization D = TA whenever such a factorization exists. Our algorithm has exponential complexity only in the rank r of the factorization, but scales linearly in m and n. In particular, the problem remains tractable even for large values of m as long as r remains small. We extend the algorithm to the approximate case D ≈ TA and empirically show superior performance relative to heuristic approaches to the problem. Moreover, we establish uniqueness of the exact factorization under the separability condition from the NMF literature [17, 19], or alternatively with high probability for T drawn uniformly at random. As a corollary, we obtain that, at least for these two models, the suggested algorithm continues to be fully applicable if additional constraints, e.g. non-negativity, are imposed on the right factor A. We demonstrate the practical usefulness of our approach in unmixing DNA methylation signatures of blood samples [9].

Notation. For a matrix M and index sets I, J, M_{I,J} denotes the submatrix corresponding to I and J; M_{I,:} and M_{:,J} denote the submatrices formed by the rows in I respectively the columns in J. We write [M; M′] and [M, M′] for the row- respectively column-wise concatenation of M and M′. The affine hull generated by the columns of M is denoted by aff(M). The symbols 1/0 denote vectors or matrices of ones/zeroes and I denotes the identity matrix. We use | · | for the cardinality of a set.

Supplement.
The supplement contains all proofs, additional comments and experimental results.

2 Exact case

We start by considering the exact case, i.e. we suppose that a factorization having the desired properties exists. We first discuss the geometric ideas underlying our basic approach for recovering such a factorization from the data matrix, before presenting conditions under which the factorization is unique. It is shown that the question of uniqueness, as well as the computational performance of our approach, is intimately connected to the Littlewood-Offord problem in combinatorics [20].

2.1 Problem formulation. Given D ∈ R^{m×n}, we consider the following problem:

find T ∈ {0,1}^{m×r} and A ∈ R^{r×n}, A⊤1_r = 1_n, such that D = TA.   (1)

The columns {T:,k}, k = 1, ..., r, of T, which are vertices of the hypercube [0,1]^m, are referred to as components. The requirement A⊤1_r = 1_n entails that the columns of D are affine instead of linear combinations of the columns of T. This additional constraint is not essential to our approach; it is imposed for reasons of presentation, in order to avoid that the origin is treated differently from the other vertices of [0,1]^m, since otherwise the zero vector could be dropped from T, leaving the factorization unchanged. We further assume w.l.o.g. that r is minimal, i.e. there is no factorization of the form (1) with r′ < r, and in turn that the columns of T are affinely independent, i.e. for all λ ∈ R^r with λ⊤1_r = 0, Tλ = 0 implies λ = 0. Moreover, it is assumed that rank(A) = r. Combined with the affine independence of the columns of T, this ensures the existence of a submatrix A_{:,C} of r linearly independent columns and of a corresponding submatrix D_{:,C} of affinely independent columns: for all λ ∈ R^r with λ⊤1_r = 0,

D_{:,C}λ = 0 ⟺ T(A_{:,C}λ) = 0 ⟹ A_{:,C}λ = 0 ⟹ λ = 0,   (2)

using at the second step that 1⊤_r A_{:,C}λ = 1⊤_r λ = 0 together with the affine independence of the columns of T.
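The affine-independence assumption on the columns of T is easy to check numerically: the columns are affinely independent iff appending a row of ones leaves the matrix with full column rank r. A quick sketch (the example matrices are ours):

```python
import numpy as np

def affinely_independent(T):
    """Columns of T are affinely independent iff rank([1^T; T]) equals
    the number of columns r."""
    r = T.shape[1]
    return np.linalg.matrix_rank(np.vstack([np.ones((1, r)), T])) == r

# three affinely independent vertices of {0,1}^3: 0, e1, e2
T_good = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])
# a duplicated vertex makes the columns affinely dependent
T_bad = np.array([[0, 1, 1],
                  [0, 0, 0],
                  [0, 0, 0]])
```

The same test underlies step 2 of Algorithm 2 below, where r affinely independent elements must be selected from the recovered vertex set.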
Note that the assumption rank(A) = r is natural; otherwise, the data would reside in an affine subspace of lower dimension, so that D would not contain enough information to reconstruct T.

2.2 Approach. Property (2) already provides the entry point of our approach. From D = TA, it is obvious that aff(T) ⊇ aff(D). Since D contains the same number of affinely independent columns as T, it must also hold that aff(D) ⊇ aff(T), and in particular aff(D) ⊇ {T:,k}, k = 1, ..., r. Consequently, (1) can in principle be solved by enumerating all vertices of [0,1]^m contained in aff(D) and selecting a maximal affinely independent subset thereof (see Figure 1). This procedure, however, is exponential in the dimension m, with 2^m vertices to be checked for containment in aff(D) by solving a linear system. Remarkably, the following observation, along with its proof, which prompts Algorithm 1 below, shows that the number of elements to be checked can be reduced to 2^{r−1} irrespective of m.

Proposition 1. The affine subspace aff(D) contains no more than 2^{r−1} vertices of [0,1]^m. Moreover, Algorithm 1 provides all vertices contained in aff(D).

Algorithm 1 FINDVERTICES EXACT
1. Fix p ∈ aff(D) and compute P = [D:,1 − p, ..., D:,n − p].
2. Determine r − 1 linearly independent columns C of P, obtaining P:,C, and subsequently r − 1 linearly independent rows R, obtaining P_{R,C} ∈ R^{(r−1)×(r−1)}.
3. Form Z = P:,C (P_{R,C})^{−1} ∈ R^{m×(r−1)} and T̂ = Z(B^{(r−1)} − p_R 1⊤_{2^{r−1}}) + p 1⊤_{2^{r−1}} ∈ R^{m×2^{r−1}}, where the columns of B^{(r−1)} correspond to the elements of {0,1}^{r−1}.
4. Set T = ∅. For u = 1, ..., 2^{r−1}, if T̂:,u ∈ {0,1}^m, set T = T ∪ {T̂:,u}.
5. Return T = {0,1}^m ∩ aff(D).

Algorithm 2 BINARYFACTORIZATION EXACT
1. Obtain T as output from FINDVERTICES EXACT(D).
2. Select r affinely independent elements of T to be used as the columns of T.
3. Obtain A as the solution of the linear system [1⊤_r; T]A = [1⊤_n; D].
4. Return (T, A) solving problem (1).

Figure 1: Illustration of the geometry underlying our approach in dimension m = 3.
Dots represent data points and the shaded areas their affine hulls aff(D) ∩[0, 1]m. Left: aff(D) intersects with r + 1 vertices of [0, 1]m. Right: aff(D) intersects with precisely r vertices. Comments. In step 2 of Algorithm 1, determining the rank of P and an associated set of linearly independent columns/rows can be done by means of a rank-revealing QR factorization [21, 22]. The crucial step is the third one, which is a compact description of first solving the linear systems PR,Cλ = b−pR for all b ∈{0, 1}r−1 and back-substituting the result to compute candidate vertices P:,Cλ + p stacked into the columns of bT; the addition/subtraction of p is merely because we have to deal with an affine instead of a linear subspace, in which p serves as origin. In step 4, the pool of 2r−1 ’candidates’ is filtered, yielding T = aff(D) ∩{0, 1}m. Determining T is the hardest part in solving the matrix factorization problem (1). Given T , the solution can be obtained after few inexpensive standard operations. Note that step 2 in Algorithm 2 is not necessary if one does not aim at finding a minimal factorization, i.e. if it suffices to have D = T A with T ∈{0, 1}m×r′ but r′ possibly being larger than r. As detailed in the supplement, the case without sum-to-one constraints on A can be handled similarly, as can be the model in [14] with binary left and right factor and real-valued middle factor. Computational complexity. The dominating cost in Algorithm 1 is computation of the candidate matrix bT and checking whether its columns are vertices of [0, 1]m. Note that bTR,: = ZR,:(B(r−1)−pR1⊤ 2r−1)+pR1⊤ 2r−1 = Ir−1(B(r−1)−pR1⊤ 2r−1)+pR1⊤ 2r−1 = B(r−1), (3) i.e. the r −1 rows of bT corresponding to R do not need to be taken into account. Forming the matrix bT would hence require O((m −r + 1)(r −1)2r−1) and the subsequent check for vertices in the fourth step O((m −r + 1)2r−1) operations. All other operations are of lower order provided e.g. (m −r + 1)2r−1 > n. 
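The enumeration just analyzed can be sketched in a few lines. The following is an illustrative numpy version of Algorithm 1 for exact data, with a greedy rank test standing in for the rank-revealing QR of step 2; it is a sketch, not the authors' implementation:

```python
import itertools
import numpy as np

def _greedy_independent(M, k):
    """Indices of k linearly independent columns of M, found greedily."""
    idx = []
    for j in range(M.shape[1]):
        if np.linalg.matrix_rank(M[:, idx + [j]]) == len(idx) + 1:
            idx.append(j)
        if len(idx) == k:
            return idx
    raise ValueError("matrix has fewer than k independent columns")

def find_vertices_exact(D, r, tol=1e-9):
    """Sketch of Algorithm 1 (FINDVERTICES EXACT): all vertices of [0,1]^m
    contained in aff(D), assuming exact data D = T A as in problem (1)."""
    p = D[:, 0]
    P = D - p[:, None]
    C = _greedy_independent(P, r - 1)              # r-1 independent columns
    R = _greedy_independent(P[:, C].T, r - 1)      # r-1 independent rows
    Z = P[:, C] @ np.linalg.inv(P[np.ix_(R, C)])   # Z = P_{:,C} (P_{R,C})^{-1}
    verts = []
    for b in itertools.product([0.0, 1.0], repeat=r - 1):
        t = Z @ (np.asarray(b) - p[R]) + p         # candidate vertex
        if np.all(np.minimum(np.abs(t), np.abs(t - 1.0)) < tol):
            verts.append(np.round(t))
    return verts
```

Given the returned vertex set, Algorithm 2 completes the factorization by selecting r affinely independent vertices as the columns of T and solving the linear system [1⊤r; T]A = [1⊤n; D].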
The second most expensive operation is forming the matrix PR,C in step 2 with the help of a QR decomposition requiring O(mn(r −1)) operations in typical cases [21]. Computing the matrix factorization (1) after the vertices have been identified (steps 2 to 4 in Algorithm 2) has complexity O(mnr + r3 + r2n). Here, the dominating part is the solution of a linear system in r variables and n right hand sides. Altogether, our approach for solving (1) has exponential complexity in r, but only linear complexity in m and n. Later on, we will argue that under additional assumptions on T , the O((m−r+1)2r−1) terms can be reduced to O((r−1)2r−1). 2.3 Uniqueness. In this section, we study uniqueness of the matrix factorization problem (1) (modulo permutation of columns/rows). First note that in view of the affine independence of the columns of T , the factorization is unique iff T is, which holds iff aff(D) ∩{0, 1}m = aff(T ) ∩{0, 1}m = {T:,1, . . . , T:,r}, (4) i.e. if the affine subspace generated by {T:,1, . . . , T:,r} contains no other vertices of [0, 1]m than the r given ones (cf. Figure 1). Uniqueness is of great importance in applications, where one aims at an interpretation in which the columns of T play the role of underlying data-generating elements. Such an interpretation is not valid if (4) fails to hold, since it is then possible to replace one of the columns of a specific choice of T by another vertex contained in the same affine subspace. Solution of a non-negative variant of our factorization. In the sequel, we argue that property (4) plays an important role from a computational point of view when solving extensions of problem (1) in which further constraints are imposed on A. One particularly important extension is the following. find T ∈{0, 1}m×r and A ∈Rr×n + , A⊤1r = 1n such that D = T A. (5) Problem (5) is a special instance of non-negative matrix factorization.
Problem (5) is of particular interest in the present paper, leading to a novel real-world application of matrix factorization techniques as presented in Section 4.2 below. It is natural to ask whether Algorithm 2 can be adapted to solve problem (5). A change is obviously required for the second step when selecting r vertices from T , since in (5) the columns of D now have to be expressed as convex instead of only affine combinations of columns of T : picking an affinely independent collection from T does not take into account the non-negativity constraint imposed on A. If, however, (4) holds, we have |T | = r and Algorithm 2 must return a solution of (5) provided that there exists one. Corollary 1. If problem (1) has a unique solution, i.e. if condition (4) holds, and if there exists a solution of (5), then it is returned by Algorithm 2. To appreciate that result, consider the converse case |T | > r. Since the aim is a minimal factorization, one has to find a subset of T of cardinality r such that (5) can be solved. In principle, this can be achieved by solving a linear program for each of the (|T | choose r) subsets of T , but this is in general not computationally feasible: the upper bound of Proposition 1 indicates that |T | = 2r−1 in the worst case. This worst case is attained by the example below, in which T consists of all 2r−1 vertices contained in an (r−1)-dimensional face of [0, 1]m: T = [0_{(m−r)×r}; (I_{r−1} 0_{r−1}); 0_r⊤] with T = { T λ : λ1 ∈ {0, 1}, . . . , λr−1 ∈ {0, 1}, λr = 1 − Σ_{k=1}^{r−1} λk }. (6) Uniqueness under separability. In view of the negative example (6), one might ask whether uniqueness according to (4) can be achieved under additional conditions on T . We prove uniqueness under separability, a condition introduced in [19] and imposed recently in [17] to show solvability of the NMF problem by linear programming. We say that T is separable if there exists a permutation Π such that ΠT = [M; Ir], where M ∈ {0, 1}(m−r)×r. Proposition 2. If T is separable, condition (4) holds and thus problem (1) has a unique solution.
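Condition (4) can be checked by brute force for small m, which makes Proposition 2 and the negative example (6) easy to illustrate; the enumeration below is only feasible for toy sizes:

```python
import itertools
import numpy as np

def vertices_in_affine_hull(T, tol=1e-9):
    """Brute force over {0,1}^m: all vertices lying in aff(columns of T).
    v lies in aff(T) iff the system [1^T; T] lam = [1; v] is solvable."""
    m, r = T.shape
    E = np.vstack([np.ones(r), T])
    found = []
    for v in itertools.product([0.0, 1.0], repeat=m):
        rhs = np.concatenate([[1.0], v])
        lam = np.linalg.lstsq(E, rhs, rcond=None)[0]
        if np.linalg.norm(E @ lam - rhs) < tol:
            found.append(v)
    return found

# Separable T = [M; I_r]: Proposition 2 predicts exactly r vertices (here r = 3).
M = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
T_sep = np.vstack([M, np.eye(3)])
assert len(vertices_in_affine_hull(T_sep)) == 3
```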
Uniqueness under generic random sampling. Both the negative example (6) as well as the positive result of Proposition 2 are associated with special matrices T . This raises the question whether uniqueness holds or fails for broader classes of binary matrices. In order to gain insight into this question, we consider random T with i.i.d. entries from a Bernoulli distribution with parameter 1/2 and study the probability of the event {aff(T ) ∩ {0, 1}m = {T:,1, . . . , T:,r}}. This question has essentially been studied in combinatorics [23], with further improvements in [24]. The results therein rely crucially on Littlewood-Offord theory (see Section 2.4 below). Theorem 1. Let T be a random m × r-matrix whose entries are drawn i.i.d. from {0, 1} with probability 1/2. Then there is a constant C so that if r ≤ m − C, P( aff(T ) ∩ {0, 1}m = {T:,1, . . . , T:,r} ) ≥ 1 − (1 + o(1)) 4 (r choose 3) (3/4)^m − (3/4 + o(1))^m as m → ∞. Theorem 1 suggests a positive answer to the question of uniqueness posed above. For m large enough and r small compared to m (in fact, following [24] one may conjecture that Theorem 1 holds with C = 1), the probability that the affine hull of r vertices of [0, 1]m selected uniformly at random contains some other vertex is exponentially small in the dimension m. We have empirical evidence that the result of Theorem 1 continues to hold if the entries of T are drawn from a Bernoulli distribution with parameter in (0, 1) sufficiently far away from the boundary points (cf. supplement). As a byproduct, these results imply that also the NMF variant of our matrix factorization problem (5) can in most cases be reduced to identifying a set of r vertices of [0, 1]m (cf. Corollary 1). 2.4 Speeding up Algorithm 1. In Algorithm 1, an m × 2r−1 matrix bT of potential vertices is formed (Step 3).
We have discussed the case (6) where all candidates must indeed be vertices, in which case it seems to be impossible to reduce the computational cost of O((m − r)r2r−1), which becomes significant once m is in the thousands and r ≥ 25. On the positive side, Theorem 1 indicates that for many instances of T , only r out of the 2r−1 candidates are in fact vertices. In that case, noting that a column of bT cannot be a vertex if a single coordinate is not in {0, 1} (and that the vast majority of columns of bT must have one such coordinate), it is computationally more favourable to compute subsets of rows of bT incrementally and to immediately discard those columns with coordinates not in {0, 1}. We have observed empirically that this scheme rapidly reduces the candidate set: already checking a single row of bT eliminates a substantial portion (see Figure 2). Littlewood-Offord theory. Theoretical underpinning for the last observation can be obtained from a result in combinatorics, the Littlewood-Offord (L-O) lemma. Various extensions of that result have been developed until recently; see the survey [25]. We here cite the L-O lemma in its basic form. Theorem 2. [20] Let a1, . . . , aℓ ∈ R \ {0} and y ∈ R. (i) |{b ∈ {0, 1}ℓ : Σ_{i=1}^ℓ ai bi = y}| ≤ (ℓ choose ⌊ℓ/2⌋). (ii) If |ai| ≥ 1, i = 1, . . . , ℓ, then |{b ∈ {0, 1}ℓ : Σ_{i=1}^ℓ ai bi ∈ (y, y + 1)}| ≤ (ℓ choose ⌊ℓ/2⌋). The two parts of Theorem 2 are referred to as the discrete and the continuous L-O lemma, respectively. The discrete L-O lemma provides an upper bound on the number of {0, 1}-vectors whose weighted sum with given weights {ai}ℓ i=1 is equal to some given number y, whereas the stronger continuous version, under a more stringent condition on the weights, upper bounds the number of {0, 1}-vectors whose weighted sum is contained in some interval (y, y + 1). In order to see the relation of Theorem 2 to Algorithm 1, let us re-inspect the third step of that algorithm.
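The incremental row-checking scheme described above can be sketched as follows; variable names mirror the quantities formed in step 3 of Algorithm 1, and the function is an illustration rather than the authors' implementation:

```python
import itertools
import numpy as np

def prune_candidates(Z, p, pR, rows, tol=1e-9):
    """Incrementally filter the 2^(r-1) candidate columns of
    T_hat = Z (B - pR 1^T) + p 1^T by checking the given rows one at a time;
    a candidate survives row i only if its i-th coordinate is in {0, 1}."""
    r1 = Z.shape[1]
    B = np.array(list(itertools.product([0.0, 1.0], repeat=r1))).T
    surviving = np.arange(B.shape[1])
    for i in rows:
        vals = Z[i] @ (B[:, surviving] - pR[:, None]) + p[i]
        keep = np.minimum(np.abs(vals), np.abs(vals - 1.0)) < tol
        surviving = surviving[keep]        # discard early, as in Section 2.4
    return B[:, surviving]
```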
To obtain a reduction of candidates by checking a single row of bT = Z(B(r−1) − pR1⊤ 2r−1) + p1⊤ 2r−1, pick i /∈ R (recall that coordinates in R do not need to be checked, cf. (3)) and u ∈ {1, . . . , 2r−1} arbitrary. The u-th candidate can be a vertex only if bTi,u ∈ {0, 1}. The condition bTi,u = 0 can be written as Zi,: B(r−1) :,u = Zi,:pR − pi, (7) in which Zi,: plays the role of the weights {ak} of Theorem 2, B(r−1) :,u that of b, and Zi,:pR − pi that of y. A similar reasoning applies when setting bTi,u = 1. Provided none of the entries of Zi,: is zero, the discrete L-O lemma implies that there are at most 2 (r−1 choose ⌊(r−1)/2⌋) out of the 2r−1 candidates for which the i-th coordinate is in {0, 1}. This yields a reduction of the candidate set by a factor of 2 (r−1 choose ⌊(r−1)/2⌋)/2r−1 = O(1/√(r−1)). Admittedly, this reduction may appear insignificant given the total number of candidates to be checked. The reduction achieved empirically (cf. Figure 2) is typically larger. Stronger reductions have been proven under additional assumptions on the weights {ai}ℓ i=1: e.g. for distinct weights, one obtains a reduction of O((r − 1)−3/2) [25]. Furthermore, when picking successively d rows of bT and if one assumes that each row yields a reduction according to the discrete L-O lemma, one would obtain the reduction (r − 1)−d/2, so that d = r − 1 rows would suffice to identify all vertices provided r ≥ 4. Evidence for the rate (r − 1)−d/2 can be found in [26]. This indicates a reduction in complexity of Algorithm 1 from O((m − r)r2r−1) to O(r22r−1). Achieving further speed-up with integer linear programming. The continuous L-O lemma (part (ii) of Theorem 2) combined with the derivation leading to (7) allows us to tackle even the case r = 80 (2^80 ≈ 10^24). In view of the continuous L-O lemma, a reduction in the number of candidates can still be achieved if the requirement is weakened to bTi,u ∈ [0, 1].
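The discrete L-O bound invoked here is easy to verify numerically on small instances; the weights below are illustrative only:

```python
import itertools
import math

def count_hits(a, y):
    """|{b in {0,1}^l : sum_i a_i b_i = y}| -- the quantity bounded by the
    discrete Littlewood-Offord lemma (Theorem 2(i))."""
    return sum(1 for b in itertools.product([0, 1], repeat=len(a))
               if sum(ai * bi for ai, bi in zip(a, b)) == y)

l = 10
bound = math.comb(l, l // 2)               # binom(l, floor(l/2)) = 252
# Equal weights attain the bound ...
assert count_hits([1] * l, l // 2) == bound
# ... while arbitrary nonzero integer weights stay below it for every y.
a = [3, -2, 1, 5, -1, 2, 4, -3, 1, 2]
assert all(count_hits(a, y) <= bound for y in range(-20, 21))
```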
According to (7), the candidates satisfying the relaxed constraint for the i-th coordinate can be obtained from the feasibility problem find b ∈ {0, 1}r−1 subject to 0 ≤ Zi,:(b − pR) + pi ≤ 1, (8) which is an integer linear program that can be solved e.g. by CPLEX. The L-O theory suggests that the branch-and-bound strategy employed therein is likely to be successful. With the help of CPLEX, it is affordable to solve problem (8) with all m − r + 1 constraints (one for each of the rows of bT to be checked) imposed simultaneously. In our experiments, we always directly recovered the underlying vertices, and only these, without the need to prune the solution pool (which could be achieved by Algorithm 1, replacing the 2r−1 candidates by a potentially much smaller solution pool). Figure 2: Left: Speeding up the algorithm by checking single coordinates: remaining number of candidate vertices (out of 2r−1, maximum over 100 trials) vs. the number of coordinates checked (m = 1000). Right: Speed-up achieved by CPLEX compared to Algorithm 1 (CPU time vs. r, m = 1000). For both plots, T is drawn entry-wise from a Bernoulli distribution with parameter p. 3 Approximate case In the sequel, we discuss an extension of our approach to handle the approximate case D ≈ T A with T and A as in (1). In particular, we have in mind the case of additive noise, i.e. D = T A + E with ∥E∥F small. While the basic concept of Algorithm 1 can be adopted, changes are necessary because, first, D may have full rank min{m, n} and, second, aff(D) ∩ {0, 1}m = ∅, i.e.
the distances of aff(D) from the {T:,k}r k=1 may be strictly positive (but are at least assumed to be small). Algorithm 3 FINDVERTICES APPROXIMATE 1. Let p = D1n/n and compute P = [D:,1 − p, . . . , D:,n − p]. 2. Compute U (r−1) ∈ Rm×r−1, the left singular vectors corresponding to the r − 1 largest singular values of P. Select r − 1 linearly independent rows R of U (r−1), obtaining U (r−1) R,: ∈ Rr−1×r−1. 3. Form Z = U (r−1)(U (r−1) R,: )−1 and bT = Z(B(r−1) − pR1⊤ 2r−1) + p1⊤ 2r−1. 4. Compute bT 01 ∈ Rm×2r−1: for u = 1, . . . , 2r−1, i = 1, . . . , m, set bT 01 i,u = I( bTi,u > 1/2). 5. For u = 1, . . . , 2r−1, set δu = ∥bT:,u − bT 01 :,u∥2. Order increasingly s.t. δu1 ≤ . . . ≤ δu2r−1. 6. Return T = [ bT 01 :,u1 . . . bT 01 :,ur]. As distinguished from the exact case, Algorithm 3 requires the number of components r to be specified in advance, as is typically the case in noisy matrix factorization problems. Moreover, the vector p subtracted from all columns of D in step 1 is chosen as the mean of the data points, which is in particular a reasonable choice if D is contaminated with additive noise distributed symmetrically around zero. The truncated SVD of step 2 achieves the desired dimension reduction and potentially reduces noise corresponding to small singular values that are discarded. The last change arises in step 5. While in the exact case one identifies all columns of bT that are in {0, 1}m, one instead only identifies columns close to {0, 1}m. Given the output of Algorithm 3, we solve the approximate matrix factorization problem via least squares, obtaining the right factor from minA ∥D − T A∥2 F. Refinements. Improved performance for higher noise levels can be achieved by running Algorithm 3 multiple times with different sets of rows selected in step 2, which yields candidate matrices {T (l)}s l=1, and subsequently using T = argmin{T (l)} minA ∥D − T (l)A∥2 F, i.e. one picks the candidate yielding the best fit.
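Algorithm 3 admits a compact numpy sketch; a greedy rank test stands in for the selection of independent rows in step 2, and the code is an illustration, not the authors' implementation:

```python
import itertools
import numpy as np

def find_vertices_approximate(D, r):
    """Sketch of Algorithm 3: noise-robust variant using a truncated SVD.
    Returns the r binary candidates whose raw estimates are closest to {0,1}^m."""
    m, n = D.shape
    p = D.mean(axis=1)                              # step 1: p = D 1_n / n
    P = D - p[:, None]
    U = np.linalg.svd(P, full_matrices=False)[0][:, :r - 1]   # step 2 (SVD)
    R = []                                          # r-1 independent rows of U
    for i in range(m):
        if np.linalg.matrix_rank(U[R + [i], :]) == len(R) + 1:
            R.append(i)
        if len(R) == r - 1:
            break
    Z = U @ np.linalg.inv(U[R, :])                  # step 3
    B = np.array(list(itertools.product([0.0, 1.0], repeat=r - 1))).T
    T_hat = Z @ (B - p[R][:, None]) + p[:, None]
    T01 = (T_hat > 0.5).astype(float)               # step 4: round coordinates
    delta = np.linalg.norm(T_hat - T01, axis=0)     # step 5: distance to {0,1}^m
    return T01[:, np.argsort(delta)[:r]]            # step 6: keep the r best
```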
Alternatively, we may form a candidate pool by merging the {T (l)}s l=1 and then use a backward elimination scheme, in which candidates that yield the smallest improvement in fitting D are successively dropped until r candidates are left. Apart from that, the T returned by Algorithm 3 can be used for initializing the block optimization scheme of Algorithm 4 below. Algorithm 4 is akin to standard block coordinate descent schemes proposed in the matrix factorization literature, e.g. [27]. An important observation (step 3) is that the optimization of T is separable along the rows of T , so that for small r it is feasible to perform exhaustive search over all 2r possibilities per row (or to use CPLEX). However, Algorithm 4 is impractical as a stand-alone scheme, because without proper initialization it may take many iterations to converge, with each single iteration being more expensive than Algorithm 3. When initialized with the output of the latter, however, we have observed convergence of the block scheme after only a few steps. Algorithm 4 Block optimization scheme for solving minT ∈{0,1}m×r, A ∥D − T A∥2 F 1. Set k = 0 and set T (k) equal to a starting value. 2. A(k) ← argminA ∥D − T (k)A∥2 F and set k = k + 1. 3. T (k) ← argminT ∈{0,1}m×r ∥D − T A(k)∥2 F = argmin{Ti,: ∈{0,1}r} Σ_{i=1}^m ∥Di,: − Ti,:A(k)∥2 2 (9) 4. Alternate between steps 2 and 3. 4 Experiments In Section 4.1 we demonstrate with the help of synthetic data that the approach of Section 3 performs well on noisy datasets. In the second part, we present an application to a real dataset. 4.1 Synthetic data. Setup. We generate D = T ∗A∗ + αE, where the entries of T ∗ are drawn i.i.d. from {0, 1} with probability 0.5, the columns of A∗ are drawn i.i.d. uniformly from the probability simplex and the entries of E are i.i.d. standard Gaussian. We let m = 1000, r = 10 and n = 2r, and let the noise level α vary along a grid starting from 0.
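The generative setup just described can be sketched as follows; the helper is hypothetical, and the simplex-uniform columns are drawn via normalized i.i.d. exponentials (a standard construction):

```python
import numpy as np

def make_synthetic(m=1000, r=10, alpha=0.05, seed=0):
    """Data generator of Section 4.1: D = T* A* + alpha * E with n = 2 r,
    T* entry-wise Bernoulli(0.5), columns of A* uniform on the probability
    simplex (via normalized i.i.d. exponentials), E standard Gaussian."""
    rng = np.random.default_rng(seed)
    n = 2 * r
    T = (rng.random((m, r)) < 0.5).astype(float)
    A = rng.exponential(size=(r, n))
    A /= A.sum(axis=0, keepdims=True)
    E = rng.standard_normal((m, n))
    return T @ A + alpha * E, T, A
```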
Small sample sizes n as considered here yield more challenging problems and are motivated by the real-world application of the next subsection. Evaluation. Each setup is run 20 times and we report averages over the following performance measures: the normalized Hamming distance ∥T ∗ − T ∥2 F/(m r) and the two RMSEs ∥T ∗A∗ − T A∥F /(m n)1/2 and ∥T A − D∥F/(m n)1/2, where (T, A) denotes the output of one of the following approaches that are compared. FindVertices: our approach in Section 3. oracle: we solve problem (9) with A(k) = A∗. box: we run the block scheme of Algorithm 4, relaxing the integer constraint into a box constraint. Five random initializations are used and we take the result yielding the best fit, subsequently rounding the entries of T to fulfill the {0, 1}-constraints and refitting A. quad pen: as box, but a (concave) quadratic penalty λ Σi,k Ti,k(1 − Ti,k) is added to push the entries of T towards {0, 1}. D.C. programming [28] is used for the block updates of T . Figure 3: Top: comparison against block schemes. Bottom: comparison against HOTTOPIXX. Left/Middle/Right: ∥T ∗ − T ∥2 F/(m r), ∥T ∗A∗ − T A∥F/(m n)1/2 and ∥T A − D∥F /(m n)1/2, each plotted against the noise level α. Comparison to HOTTOPIXX [18].
HOTTOPIXX (HT) is a linear programming approach to NMF equipped with guarantees such as correctness in the exact case and robustness in the non-exact case as long as T is (nearly) separable (cf. Section 2.3). HT does not require T to be binary, but applies to the generic NMF problem D ≈ T A, T ∈ Rm×r + and A ∈ Rr×n + . Since separability is crucial to the performance of HT, we restrict our comparison to separable T = [M; Ir], generating the entries of M i.i.d. from a Bernoulli distribution with parameter 0.5. For runtime reasons, we lower the dimension to m = 100. Apart from that, the experimental setup is as above. We use an implementation of HT from [29]. We first pre-normalize D to have unit row sums as required by HT, and obtain A as first output. Given A, the non-negative least squares problem minT ∈Rm×r + ∥D − T A∥2 F is solved. The entries of T are then re-scaled to match the original scale of D, and thresholding at 0.5 is applied to obtain a binary matrix. Finally, A is re-optimized by solving the above fitting problem with respect to A in place of T . In the noisy case, HT needs a tuning parameter to be specified that depends on the noise level, and we consider a grid of 12 values for that parameter. The range of the grid is chosen based on knowledge of the noise matrix E. For each run, we pick the parameter that yields best performance in favour of HT. Results. From Figure 3, we find that unlike the other approaches, box does not always recover T ∗ even if the noise level α = 0. FindVertices outperforms box and quad pen throughout. For α ≤ 0.06, its performance closely matches that of the oracle. In the separable case, our approach performs favourably as compared to HT, a natural benchmark in this setting. 4.2 Analysis of DNA methylation data. Background. Unmixing of DNA methylation profiles is a problem of high interest in cancer research. DNA methylation is a chemical modification of the DNA occurring at specific sites, so-called CpGs.
DNA methylation affects gene expression and in turn various processes such as cellular differentiation. A site is either unmethylated (’0’) or methylated (’1’). DNA methylation microarrays allow one to measure the methylation level for thousands of sites. In the dataset considered here, the measurements D (the rows corresponding to sites, the columns to samples) result from a mixture of cell types. The methylation profiles of the latter are in {0, 1}m, whereas, depending on the mixture proportions associated with each sample, the entries of D take values in [0, 1]m. In other words, we have the model D ≈ T A, with T representing the methylation of the cell types and the columns of A being elements of the probability simplex. It is often of interest to recover the mixture proportions of the samples, because specific diseases, in particular cancer, can be associated with shifts in these proportions. The matrix T is frequently unknown, and determining it experimentally is costly. Without T , however, recovering the mixing matrix A is challenging, in particular since the number of samples in typical studies is small. Dataset. We consider the dataset studied in [9], with m = 500 CpG sites and n = 12 samples of blood cells composed of four major cell types (B cells, T cells, granulocytes, monocytes), i.e. r = 4. Ground truth is partially available: the proportions of the samples, denoted by A∗, are known. Figure 4: Left: Mixture proportions of the ground truth. Middle: mixture proportions as estimated by our method. Right: RMSEs ∥D − T A∥F/(m n)1/2 in dependency of r. Analysis. We apply our approach to obtain an approximate factorization D ≈ T A, T ∈ {0, 1}m×r, A ∈ Rr×n + and A⊤1r = 1n.
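Fitting the mixing matrix under exactly these constraints is a quadratic program over the probability simplex, one column at a time. As an illustrative stand-in for a QP solver, a simple projected-gradient sketch (step size and iteration count are ad hoc choices, not from the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def fit_mixing_matrix(D, T, n_iter=500):
    """Projected gradient for min_A ||D - T A||_F^2 s.t. A >= 0, A^T 1_r = 1_n,
    applied column-wise on the simplex (a stand-in for a QP solver)."""
    r, n = T.shape[1], D.shape[1]
    A = np.full((r, n), 1.0 / r)
    step = 1.0 / np.linalg.norm(T.T @ T, 2)   # G below is half the gradient,
    for _ in range(n_iter):                   # so this is a standard 1/L step
        G = T.T @ (T @ A - D)
        A = np.apply_along_axis(project_simplex, 0, A - step * G)
    return A
```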
We first obtained T as outlined in Section 3, replacing {0, 1} by {0.1, 0.9} in order to account for measurement noise in D that slightly pushes values towards 0.5. This can be accommodated by re-scaling bT 01 in step 4 of Algorithm 3 by 0.8 and then adding 0.1. Given T, we solve the quadratic program A = argminA∈Rr×n + ,A⊤1r=1n ∥D − TA∥2 F and compare A to the ground truth A∗. In order to judge the fit as well as the matrix T returned by our method, we compute T ∗ = argminT ∈{0,1}m×r ∥D − T A∗∥2 F as in (9). We obtain 0.025 as the average mean squared difference of T and T ∗, which corresponds to an agreement of 96 percent. Figure 4 indicates at least a qualitative agreement of A∗ and A. In the rightmost plot, we compare the RMSEs of our approach for different choices of r relative to the RMSE of (T ∗, A∗). The error curve flattens after r = 4, which suggests that with our approach, we can recover the correct number of cell types. References [1] P. Paatero and U. Tapper. Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. Environmetrics, 5:111–126, 1994. [2] D. Lee and H. Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788–791, 1999. [3] J. Ramsay and B. Silverman. Functional Data Analysis. Springer, New York, 2006. [4] F. Bach, J. Mairal, and J. Ponce. Convex Sparse Matrix Factorization. Technical report, ENS, Paris, 2008. [5] D. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10:515–534, 2009. [6] A.-J. van der Veen. Analytical Method for Blind Binary Signal Separation. IEEE Signal Processing, 45:1078–1082, 1997. [7] J. Liao, R. Boscolo, Y. Yang, L. Tran, C. Sabatti, and V. Roychowdhury. Network component analysis: reconstruction of regulatory signals in biological systems. PNAS, 100(26):15522–15527, 2003. [8] S. Tu, R. Chen, and L. Xu.
Transcription Network Analysis by a Sparse Binary Factor Analysis Algorithm. Journal of Integrative Bioinformatics, 9:198, 2012. [9] E. Houseman et al. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinformatics, 13:86, 2012. [10] A. Banerjee, C. Krumpelman, J. Ghosh, S. Basu, and R. Mooney. Model-based overlapping clustering. In KDD, 2005. [11] E. Segal, A. Battle, and D. Koller. Decomposing gene expression into cellular processes. In Proceedings of the 8th Pacific Symposium on Biocomputing, 2003. [12] A. Schein, L. Saul, and L. Ungar. A generalized linear model for principal component analysis of binary data. In AISTATS, 2003. [13] A. Kaban and E. Bingham. Factorisation and denoising of 0-1 data: a variational approach. Neurocomputing, 71:2291–2308, 2008. [14] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent factors. In NIPS, 2007. [15] Z. Zhang, C. Ding, T. Li, and X. Zhang. Binary matrix factorization with applications. In IEEE ICDM, 2007. [16] P. Miettinen, T. Mielikäinen, A. Gionis, G. Das, and H. Mannila. The discrete basis problem. In PKDD, 2006. [17] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization – provably. In STOC, 2012. [18] V. Bittorf, B. Recht, C. Re, and J. Tropp. Factoring nonnegative matrices with linear programs. In NIPS, 2012. [19] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In NIPS, 2003. [20] P. Erdős. On a lemma of Littlewood and Offord. Bull. Amer. Math. Soc., 51:898–902, 1951. [21] M. Gu and S. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17:848–869, 1996. [22] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, 1996. [23] A. Odlyzko. On Subspaces Spanned by Random Selections of ±1 vectors. Journal of Combinatorial Theory A, 47:124–133, 1988. [24] J.
Kahn, J. Komlós, and E. Szemerédi. On the Probability that a ±1 matrix is singular. Journal of the American Mathematical Society, 8:223–240, 1995. [25] H. Nguyen and V. Vu. Small ball probability, Inverse theorems, and applications. arXiv:1301.0019. [26] T. Tao and V. Vu. The Littlewood-Offord problem in high dimensions and a conjecture of Frankl and Füredi. Combinatorica, 32:363–372, 2012. [27] C.-J. Lin. Projected gradient methods for non-negative matrix factorization. Neural Computation, 19:2756–2779, 2007. [28] P. Tao and L. An. Convex analysis approach to D.C. programming: theory, algorithms and applications. Acta Mathematica Vietnamica, pages 289–355, 1997. [29] https://sites.google.com/site/nicolasgillis/publications.
Reshaping Visual Datasets for Domain Adaptation Boqing Gong U. of Southern California Los Angeles, CA 90089 boqinggo@usc.edu Kristen Grauman U. of Texas at Austin Austin, TX 78701 grauman@cs.utexas.edu Fei Sha U. of Southern California Los Angeles, CA 90089 feisha@usc.edu Abstract In visual recognition problems, the common data distribution mismatches between training and testing make domain adaptation essential. However, image data is difficult to manually divide into the discrete domains required by adaptation algorithms, and the standard practice of equating datasets with domains is a weak proxy for all the real conditions that alter the statistics in complex ways (lighting, pose, background, resolution, etc.) We propose an approach to automatically discover latent domains in image or video datasets. Our formulation imposes two key properties on domains: maximum distinctiveness and maximum learnability. By maximum distinctiveness, we require the underlying distributions of the identified domains to be different from each other to the maximum extent; by maximum learnability, we ensure that a strong discriminative model can be learned from the domain. We devise a nonparametric formulation and efficient optimization procedure that can successfully discover domains among both training and test data. We extensively evaluate our approach on object recognition and human activity recognition tasks. 1 Introduction A domain refers to an underlying data distribution. Generally, there are two: the one with which classifiers are trained, and the other to which classifiers are applied. While many learning algorithms assume the two are the same, in real-world applications, the distributions are often mismatched, causing significant performance degradation when the classifiers are applied. Domain adaptation techniques are crucial in building robust classifiers to address mismatched new and unexpected target environments. 
As such, the subject has been intensively studied in computer vision [1, 2, 3, 4], speech and language processing [5, 6], and statistics and learning [7, 8, 9, 10]. While domain adaptation research largely focuses on how adaptation should proceed, there are also vital questions concerning the domains themselves: what exactly is a domain composed of? and how are domains different from each other? For some applications, the answers come naturally. For example, in speech recognition, we can organize data into speaker-specific domains where each domain contains a single speaker’s utterances. In language processing, we can organize text data into language-specific domains. For those types of data, we can neatly categorize each instance with a discrete set of semantically meaningful properties; a domain is thus naturally composed of instances of the same (subset of) properties. For visual recognition, however, the same is not possible. In addition to large intra-category appearance variations, images and video of objects (or scenes, attributes, activities, etc.) are also significantly affected by many extraneous factors such as pose, illumination, occlusion, camera resolution, and background. Many of these factors simply do not naturally lend themselves to deriving discrete domains. Furthermore, the factors overlap and interact in images in complex ways. In fact, even coming up with a comprehensive set of such properties is a daunting task in its own right—not to mention automatically detecting them in images! 1 Partially due to these conceptual and practical constraints, datasets for visual recognition are not deliberately collected with clearly identifiable domains [11, 12, 13, 14, 15]. Instead, standard image/video collection is a product of trying to ensure coverage of the target category labels on one hand, and managing resource availability on the other. 
As a result, a troubling practice in visual domain adaptation research is to equate datasets with domains and study the problem of cross-dataset generalization or correcting dataset bias [16, 17, 18, 19]. One pitfall of this ad hoc practice is that a dataset could be an agglomeration of several distinctive domains. Thus, modeling the dataset as a single domain would necessarily blend the distinctions, potentially damaging visual discrimination. Consider the following human action recognition task, which is also studied empirically in this work. Suppose we have a training set containing videos of multiple subjects taken at view angles of 30◦and 90◦, respectively. Unaware of the distinction of these two views of videos, a model for the training set as a single training domain needs to account for both inter-subject and inter-view variations. Presumably, applying the model to recognizing videos taken at view angle of 45◦(i.e., from the test domain) would be less effective than applying models accounting for the two view angles separately, i.e., modeling inter-subject variations only. How can we avoid such pitfalls? More specifically, how can we form characteristic domains, without resorting to the hopeless task of manually defining properties along which to organize them? We propose novel learning methods to automatically reshape datasets into domains. This is a challenging unsupervised learning problem. At the surface, we are not given any information about the domains that the datasets contain, such as the statistical properties of the domains, or even the number of domains. Furthermore, the challenge cannot be construed as a traditional clustering problem; simply clustering images by their appearance is prone to reshaping datasets into per-category domains, as observed in [20] and our own empirical studies. 
Moreover, there may be many complex factors behind the domains, making it difficult to model them with the parametric mixture models on which traditional clustering algorithms (e.g., K-means or Gaussian mixtures) are based. Our key insights are two axiomatic properties that latent domains should possess: maximum distinctiveness and maximum learnability. By maximum distinctiveness, we identify domains that are maximally different in distribution from each other. This ensures domains are characteristic in terms of their large inter-domain variations. By maximum learnability, we identify domains from which we can derive strong discriminative models to apply to new test data. In section 2, we describe our learning methods for extracting domains with these desirable properties. We derive nonparametric approaches to measure domain discrepancies and show how to optimize them to achieve maximum distinctiveness. We also show how to achieve maximum learnability by monitoring an extracted domain's discriminative learning performance, and we use that property to automatically choose the number of latent domains. To the best of our knowledge, [20] is the first and only prior work addressing latent domain discovery. We postpone a detailed discussion and comparison to their method until section 3, after we have described our own. In section 4, we demonstrate the effectiveness of our approach on several domain adaptation tasks for object recognition and human activity recognition. We show that we achieve far better classification results using adapted classifiers learned on the discovered domains. We conclude in section 5.

2 Proposed approach

We assume access to one or more annotated datasets with a total of M data instances. The data instances take the form (x_m, y_m), where x_m ∈ R^D is the feature vector and y_m ∈ [C] the corresponding label out of C categories. Moreover, we assume that each data instance comes from a latent domain z_m ∈ [K], where K is the number of domains.
In what follows, we start by describing our algorithm for inferring z_m assuming K is known. Then we describe how to infer K from the data.

2.1 Maximally distinctive domains

Given K, we denote the distributions of the unknown domains D_k by P_k(x, y) for k ∈ [K]. We do not impose any parametric form on P_k(·, ·). Instead, the marginal distribution P_k(x) is approximated by the empirical distribution

\hat{P}_k(x) = \frac{1}{M_k} \sum_m \delta_{x_m} z_{mk},

where M_k is the number of data instances assigned to domain k and \delta_{x_m} is an atom at x_m. z_{mk} ∈ {0, 1} is a binary indicator variable that takes the value 1 when z_m = k. Note that M_k = \sum_m z_{mk} and \sum_k M_k = M. What kind of properties do we expect from \hat{P}_k(x)? Intuitively, we would like any two different domains \hat{P}_k(x) and \hat{P}_{k'}(x) to be as distinctive as possible. In the context of modeling visual data, this implies that intra-class variations between domains are often far more pronounced than inter-class variations within the same domain. As a concrete example, consider the task of differentiating commercial jetliners from fighter jets. While the two categories are easily distinguishable when viewed from the same pose (frontal view, side view, etc.), there is a significant change in appearance when either category undergoes a pose change. Clearly, defining domains by simply clustering the images by appearance is insufficient; the inter-category and inter-pose variations will both contribute to the clustering procedure and may lead to unreasonable clusters. Instead, to identify characteristic domains, we need to look for divisions of the data that yield maximally distinctive distributions. To quantify this intuition, we need a way to measure the difference in distributions. To this end, we apply a kernel-based method to examine whether two samples are from the same distribution [21]. Concretely, let k(·, ·) denote a characteristic positive semidefinite kernel (such as the Gaussian kernel).
We compute the difference between the means of the two empirical distributions in the reproducing kernel Hilbert space (RKHS) \mathcal{H} induced by the kernel function,

d(k, k') = \left\| \frac{1}{M_k} \sum_m k(\cdot, x_m) z_{mk} - \frac{1}{M_{k'}} \sum_m k(\cdot, x_m) z_{mk'} \right\|_{\mathcal{H}}^2    (1)

where k(·, x_m) is the image (or kernel-induced feature) of x_m under the kernel. As the number of samples tends to infinity, this measure approaches zero if and only if the two domains are the same, P_k = P_{k'}. We define the total domain distinctiveness (TDD) as the sum of this quantity over all possible pairs of domains:

\mathrm{TDD}(K) = \sum_{k \neq k'} d(k, k'),    (2)

and choose the domain assignments z_m such that TDD is maximized. We first discuss this optimization problem in its native formulation as an integer program, followed by a more computationally convenient continuous relaxation.

Optimization In addition to the binary constraints on z_{mk}, we also enforce

\sum_{k=1}^{K} z_{mk} = 1, \ \forall m \in [M], \quad \text{and} \quad \frac{1}{M_k} \sum_{m=1}^{M} z_{mk} y_{mc} = \frac{1}{M} \sum_{m=1}^{M} y_{mc}, \ \forall c \in [C], k \in [K]    (3)

where y_{mc} is a binary indicator variable taking the value 1 if y_m = c. The first constraint stipulates that every instance is assigned to exactly one domain. The second constraint, which we refer to as the label prior constraint (LPC), requires that within each domain the class labels are distributed according to the prior distribution (of the labels), estimated empirically from the labeled data. The LPC does not restrict the absolute numbers of instances of different labels in each domain. It only reflects the intuition that in the process of data collection, the relative percentages of different classes are approximately in accordance with a prior distribution that is independent of the domains. For example, in action recognition, if the "walking" category occurs relatively frequently in a domain corresponding to brightly lit video, we also expect it to be frequent in the darker videos. Thus, when data instances are re-arranged into latent domains, the same percentages are likely to be preserved. The optimization problem is NP-hard due to the integer constraints. In the following, we relax it into a continuous optimization, which is more accessible with off-the-shelf optimization packages.
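As a concrete illustration, once the kernel matrix is precomputed, the RKHS distance in Eq. (1) reduces to a quadratic form in the normalized domain indicator vectors. The sketch below (an illustrative NumPy implementation, not the authors' code; the function names are ours) evaluates d(k, k') and the total domain distinctiveness of Eq. (2) for a given hard assignment:

```python
import numpy as np

def domain_distance(K, z, k, kp):
    """Squared RKHS distance between the kernel mean embeddings of
    domains k and k' (Eq. 1). K is the M x M kernel matrix; z holds
    the hard domain assignments z_m."""
    a = (z == k).astype(float)
    b = (z == kp).astype(float)
    a /= a.sum()          # weights 1/M_k on the members of domain k
    b /= b.sum()          # weights 1/M_{k'} on the members of domain k'
    d = a - b
    return float(d @ K @ d)

def total_domain_distinctiveness(K, z, num_domains):
    """TDD(K) of Eq. (2): sum of d(k, k') over all ordered pairs k != k'."""
    return sum(domain_distance(K, z, k, kp)
               for k in range(num_domains)
               for kp in range(num_domains) if k != kp)
```

Since d(k, k') is symmetric in its arguments and nonnegative for a positive semidefinite kernel, the sum over ordered pairs simply counts each unordered pair twice.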
Relaxation We introduce new variables β_{mk} = z_{mk}/M_k and relax them to live on the simplex

\beta_k = (\beta_{1k}, \dots, \beta_{Mk})^T \in \Delta = \left\{ \beta_k : \beta_{mk} \ge 0, \ \sum_{m=1}^{M} \beta_{mk} = 1 \right\}

for k = 1, ..., K. With the new variables, our optimization problem becomes

\max_\beta \ \mathrm{TDD}(K) = \sum_{k \neq k'} (\beta_k - \beta_{k'})^T K (\beta_k - \beta_{k'})    (4)

s.t. \ 1/M \le \sum_k \beta_{mk} \le 1/C, \quad m = 1, 2, \dots, M,    (5)

(1 - \delta)/M \sum_m y_{mc} \le \sum_m \beta_{mk} y_{mc} \le (1 + \delta)/M \sum_m y_{mc}, \quad c = 1, \dots, C, \ k = 1, \dots, K,

where K is the M × M kernel matrix. The first constraint stems from the (default) requirement that every domain should have at least one instance per category, namely M_k ≥ C, and that every domain should have at most M instances (M_k ≤ M). The second constraint is a relaxed version of the LPC, allowing a small deviation from the prior distribution; we set δ = 1%. We assign x_m to the domain k for which β_{mk} is the maximum of β_{m1}, ..., β_{mK}. This relaxed problem is the maximization of a convex quadratic function subject to linear constraints. Though still NP-hard in general, this type of optimization problem has been studied extensively, and we have found that existing solvers yield satisfactory solutions.

2.2 Maximally learnable domains: determining the number of domains

Given M instances, how many domains hide inside? Note that the total domain distinctiveness TDD(K) increases as K increases; in the extreme case, each domain has only a few instances and their distributions would be maximally different from each other. However, such tiny domains would offer insufficient data to separate the categories of interest reliably. To infer the optimal K, we appeal to maximum learnability, another desirable property we impose on the identified domains. Specifically, for any identified domain, we would like the data instances it contains to be adequate for building a strong classifier for the labeled data; failing to do so would cripple the domain's adaptability to new test data.
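To give a flavor of the continuous relaxation, the sketch below runs projected gradient ascent on the relaxed objective (4), keeping each β_k on the simplex. This is a deliberate simplification for illustration, not the paper's method: it omits the box constraints (5) and the relaxed LPC, which the paper handles with an off-the-shelf solver, and all function names are ours. Because the objective is convex and the feasible set is convex, each projected ascent step cannot decrease the objective.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def relaxed_tdd(K, beta):
    """Objective (4): sum over ordered pairs of (b_k - b_k')^T K (b_k - b_k')."""
    return sum(float((beta[k] - beta[kp]) @ K @ (beta[k] - beta[kp]))
               for k in range(len(beta)) for kp in range(len(beta)) if k != kp)

def maximize_relaxed_tdd(K, num_domains, steps=200, lr=0.05, seed=0):
    """Projected gradient ascent on (4) over a product of simplices."""
    rng = np.random.default_rng(seed)
    M = K.shape[0]
    beta = np.array([project_simplex(rng.random(M)) for _ in range(num_domains)])
    for _ in range(steps):
        # gradient of the ordered-pair sum w.r.t. beta_k (up to a positive constant)
        grad = np.array([sum(K @ (beta[k] - beta[kp])
                             for kp in range(num_domains) if kp != k)
                         for k in range(num_domains)])
        beta = np.array([project_simplex(beta[k] + lr * grad[k])
                         for k in range(num_domains)])
    z = beta.argmax(axis=0)   # assign x_m to the domain with the largest beta_mk
    return z, beta
```

In practice one would enforce (5) and the LPC as well, e.g. with a QP solver; this sketch only conveys the shape of the relaxed problem.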
Following this line of reasoning, we propose domain-wise cross-validation (DWCV) to identify the optimal K. DWCV consists of the following steps. First, starting from K = 2, we use the method described in the previous section to identify K domains. Second, for each identified domain, we build discriminative classifiers using the label information and evaluate them with cross-validation. Denoting the cross-validation accuracy for the k-th domain by A_k, we combine all the accuracies with a weighted sum

A(K) = \frac{1}{M} \sum_{k=1}^{K} M_k A_k.

For very large K, such that each domain contains only a few examples, A(K) approaches the accuracy obtained by classifying according to the class prior probability alone. Thus, starting at K = 2 (and assuming A(2) is greater than the prior probability's classification accuracy), we choose K* as the value that attains the highest cross-validation accuracy: K* = arg max_K A(K). For N-fold cross-validation, a practical bound on the largest K we need to examine is K_max ≤ min{M/(NC), C}; beyond this bound cross-validation is no longer meaningful.

3 Related work

Domain adaptation is a fundamental research subject in statistical machine learning [9, 22, 23, 10], and is also extensively studied in speech and language processing [5, 6, 8] and computer vision [1, 2, 3, 4, 24, 25]. These approaches are mostly validated by adapting between datasets, which, as discussed above, do not necessarily correspond to well-defined domains. In our previous work, we proposed to identify landmark data points in the source domain which are distributed similarly to the target domain [26]. While that approach also slices the training set, it differs in its objective: we discover the underlying domains of the training datasets, each of which will be adaptable, whereas the landmarks in [26] are intentionally biased towards the single given target domain. Hoffman et al.'s work [20] is the most relevant to ours.
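The DWCV selection rule is easy to state in code. The sketch below is our own illustrative helper, assuming the per-domain cross-validation accuracies A_k have already been computed by some classifier; it forms the weighted score A(K) and picks K*:

```python
def dwcv_score(domain_accs, domain_sizes):
    """A(K) = (1/M) * sum_k M_k * A_k (size-weighted cross-validation accuracy)."""
    M = sum(domain_sizes)
    return sum(Mk * Ak for Mk, Ak in zip(domain_sizes, domain_accs)) / M

def choose_num_domains(accs_by_K, sizes_by_K):
    """K* = argmax_K A(K) over the candidate numbers of domains."""
    scores = {K: dwcv_score(accs_by_K[K], sizes_by_K[K]) for K in accs_by_K}
    return max(scores, key=scores.get), scores
```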
They also aim to discover the latent domains from datasets, by modeling the data with a hierarchical distribution consisting of Gaussian mixtures. However, their explicit distributional assumptions may not be easily satisfied by real data. In contrast, we appeal to nonparametric methods, overcoming this limitation without assuming any parametric form of the distributions. In addition, we examine the new scenario in which the test set is also composed of heterogeneous domains. A generalized clustering approach by Jegelka et al. [27] shares the idea of a maximum distinctiveness (or "discriminability", as used in [27]) criterion with our approach. However, their focus is unsupervised clustering, whereas ours is domain discovery. As such, they adopt a regularization term different from ours, which exploits the labels in the datasets. Multi-domain adaptation methods suppose that multiple source domains are given as input, and the learner must adapt from (some of) them to do well on a novel target domain [28, 29, 10]. In contrast, in the problem we tackle, the division of data into domains is not given; our algorithm must discover the latent domains. After our approach slices the training data into multiple domains, it is natural to apply multi-domain techniques to achieve good performance on a test domain. We present some related experiments in the next section.

4 Experimental Results

We validate our approach on visual object recognition and human activity recognition tasks. We first describe our experimental settings, and then report the results of identifying latent domains and of using the identified domains to adapt classifiers to a new mono-domain test set. After that, we report experiments on reshaping heterogeneous test datasets into domains matched to the identified training domains. Finally, we give some qualitative analyses and details on choosing the number of domains.
4.1 Experimental setting

Data For object recognition, we use images from Caltech-256 (C) [14] and the image datasets of Amazon (A), DSLR (D), and Webcam (W) provided by Saenko et al. [2]. There are 10 common categories across the 4 datasets. These images mainly differ in their data collection sources: Caltech-256 was collected from webpages on the Internet, Amazon images from amazon.com, and DSLR and Webcam images from an office environment. We represent images with bag-of-visual-words descriptors following previous work on domain adaptation [2, 4]. In particular, we extract SURF [30] features from the images, use K-means to build a codebook of 800 clusters, and finally obtain an 800-bin histogram for each image. For action recognition from video sequences, we use the IXMAS multi-view action dataset [15]. There are five views (Camera 0, 1, ..., 4) of eleven actions in the dataset. Each action is performed three times by twelve actors and is captured by the five cameras. We keep the first five actions performed by alba, andreas, daniel, hedlena, julien, and nicolas, so that the irregularly performed actions [15] are excluded. In each view, 20 sequences are randomly selected per actor per action. We use shape-flow descriptors to characterize the motion of the actions [31].

Evaluation strategy The four image datasets are commonly used as distinctive domains in visual domain adaptation research [2, 3, 4, 32]. Likewise, each view in the IXMAS dataset is often taken as a domain in action recognition [33, 34, 35, 24]. Similarly, in our experiments, we use a subset of these datasets (views) as source domains for training classifiers and the rest of the datasets (views) as target domains for testing. However, the key difference is that we do not compare the performance of different adaptation algorithms that assume the domains are already given.
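The bag-of-visual-words step above can be sketched as follows. This is our own illustrative code, not the paper's pipeline: it assumes the local SURF descriptors have already been extracted (descriptor extraction itself needs an image library), and the L1 normalization is an assumption of ours rather than a detail stated in the paper.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a codebook of visual words and
    return an L1-normalized histogram (the paper uses 800 codewords)."""
    # squared distance from each descriptor to each codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                     # nearest visual word per descriptor
    hist = np.bincount(words, minlength=codebook.shape[0]).astype(float)
    return hist / hist.sum()
```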
Instead, we evaluate the effectiveness of our approach by investigating whether its automatically identified domains improve adaptation, that is, whether recognition accuracy on the target domains can be improved by reshaping the datasets into their latent source domains.

Table 1: Oracle recognition accuracy on target domains by adapting original or identified domains

S            A, C   D, W   C, D, W   Cam 0, 1      Cam 2, 3, 4
T            D, W   A, C   A         Cam 2, 3, 4   Cam 0, 1
G_ORIG       41.0   32.6   41.8      44.6          47.1
G_OTHER [20] 39.5   33.7   34.6      43.9          45.1
G_OURS       42.6   35.5   44.6      47.3          50.3

Table 2: Adaptation recognition accuracies, using original and identified domains with different multi-source adaptation methods

Latent    Multi-DA   A, C   D, W   C, D, W   Cam 0, 1      Cam 2, 3, 4
Domains   method     D, W   A, C   A         Cam 2, 3, 4   Cam 0, 1
ORIGINAL  UNION      41.7   35.8   41.0      45.1          47.8
[20]      ENSEMBLE   31.7   34.4   38.9      43.3          29.6
[20]      MATCHING   39.6   34.0   34.6      43.2          45.2
OURS      ENSEMBLE   38.7   35.8   42.8      45.0          40.5
OURS      MATCHING   42.6   35.5   44.6      47.3          50.3

We use the geodesic flow kernel for adapting classifiers [4]. To use the kernel-based method for computing distribution differences, we use Gaussian kernels (cf. section 2). We set the kernel bandwidth to twice the median distance over all pairwise data points. The number of latent domains K is determined by the DWCV procedure (cf. section 2.2).

4.2 Identifying latent domains from training datasets

Notation Let S = {S_1, S_2, ..., S_J} denote the J datasets we use as training source datasets and T = {T_1, T_2, ..., T_L} the L datasets we use as testing target datasets. Furthermore, let K denote the optimal number of domains discovered by our DWCV procedure and U = {U_1, U_2, ..., U_K} the K hidden domains identified by our approach. Let r(A → B) denote the recognition accuracy on the target domain B with A as the source domain.
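The Gaussian-kernel bandwidth heuristic described above (twice the median of all pairwise distances) can be sketched as follows; this is an illustrative implementation with our own function name:

```python
import numpy as np

def gaussian_kernel_median(X):
    """RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), with the
    bandwidth sigma set to twice the median pairwise distance."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dists = np.sqrt(sq[np.triu_indices_from(sq, k=1)])  # distinct pairs only
    sigma = 2.0 * np.median(dists)
    return np.exp(-sq / (2.0 * sigma ** 2))
```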
Goodness of the identified domains We examine whether {U_k} is a set of good domains by computing the expected best possible accuracy of using the identified domains separately for adaptation,

G_{\mathrm{OURS}} = \mathbb{E}_{B \sim \mathcal{P}} \left[ \max_k r(U_k \to B) \right] \approx \frac{1}{L} \sum_l \max_k r(U_k \to T_l)    (6)

where B is a target domain drawn from a distribution \mathcal{P} over domains. Since this distribution is not obtainable, we approximate the expectation with the empirical average over the observed testing datasets {T_l}. Likewise, we define G_ORIG, where we compute the best possible accuracy for the original domains {S_j}, and G_OTHER, where we compute the same quantity for a competing method for identifying latent domains, proposed in [20]. Note that the max operation requires the target domains to be annotated; the accuracies are thus the most optimistic estimates for all methods, and upper bounds for practical algorithms. Table 1 reports the three quantities on different pairs of source and target domains. Clearly, our method yields a better set of identified domains, which are always better than the original datasets. We also experimented with K-means and random partitioning for clustering data instances into domains; neither yields competitive performance, and the results are omitted for brevity.

Practical utility of identified domains In practical applications of domain adaptation algorithms, however, the target domains are not annotated. The oracle accuracies reported in Table 1 are thus not achievable in general. In the following, we examine how closely the performance of the identified domains can approximate the oracle if we employ multi-source adaptation. To this end, we consider several choices of multi-source domain adaptation methods:

• UNION The most naive way is to combine all the source domains into a single dataset and adapt from this "mega" domain to the target domains. We use this as a baseline.
• ENSEMBLE A more sophisticated strategy is to adapt each source domain to the target domain and combine the adaptation results in the form of combining multiple classifiers [20].

• MATCHING This strategy compares the empirical (marginal) distributions of the source domains and the target domain and selects the single source domain with the smallest difference to the target domain for adaptation. We use the kernel-based method to compare distributions, as explained in section 2. Note that since we compare only the marginal distributions, we do not require the target domains to be annotated.

Table 3: Results of reshaping the test set when it consists of data from multiple domains

          From identified (reshaping training only)   No reshaping     Conditional reshaping
          A′ → F   B′ → F   C′ → F                    A ∪ B ∪ C → F    X → F_X, ∀X ∈ {A′, B′, C′}
Cam 012   36.4     37.1     37.7                      37.3             38.5
Cam 123   40.4     38.7     39.6                      39.9             41.1
Cam 234   46.5     45.7     46.1                      47.8             49.2
Cam 340   50.7     50.6     50.5                      52.3             54.9
Cam 401   43.6     41.8     43.9                      43.3             44.8

Table 2 reports the averaged recognition accuracies on the target domains, using either the original datasets/domains or the identified domains as the source domains. The latent domains identified by our method generally perform well, especially when using MATCHING to select the single best source domain to match the target domain for adaptation. In fact, contrasting Table 2 with Table 1, the MATCHING strategy for adaptation is able to match the oracle accuracies, even though the matching process does not use label information from the target domains.

4.3 Reshaping the test datasets

So far we have concentrated on reshaping multiple annotated datasets (for training classifiers) into domains for adapting to test datasets. However, test datasets can also be made of multiple latent domains. Hence, it is also instructive to investigate whether we can reshape the test datasets into multiple domains to achieve better adaptation results.
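The MATCHING strategy can be sketched with the same kernel-mean machinery used for domain distinctiveness. The code below is illustrative (our own function names): given a kernel matrix over all source and target points together, it picks the source domain whose empirical mean embedding is closest to the target's; no target labels are needed.

```python
import numpy as np

def mean_weights(M, idx):
    """Uniform weights 1/|idx| on the points of one domain, zeros elsewhere."""
    w = np.zeros(M)
    w[idx] = 1.0 / len(idx)
    return w

def select_source_by_matching(K_full, source_index_sets, target_idx):
    """Return the index of the source domain with the smallest squared RKHS
    distance between its mean embedding and the target's (marginals only)."""
    M = K_full.shape[0]
    t = mean_weights(M, target_idx)
    dists = []
    for idx in source_index_sets:
        d = mean_weights(M, idx) - t
        dists.append(float(d @ K_full @ d))
    return int(np.argmin(dists))
```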
However, reshaping test datasets differs critically from reshaping training datasets. Specifically, we should reshape a test dataset conditioned on the domains already identified from the training datasets: the goal is to discover latent domains in the test data that match the training domains as closely as possible. We term this conditional reshaping. Computationally, conditional reshaping is more tractable than identifying latent domains from the training datasets. Concretely, we minimize the distribution differences between the latent domains in the test datasets and the domains in the training datasets, using the kernel-based measure explained in section 2. This optimization problem can be relaxed into a convex quadratic program; details are in the Suppl. Material. Table 3 demonstrates the benefit of conditionally reshaping the test datasets on cross-view action recognition. This problem inherently needs test set reshaping, since the person may be viewed from any direction at test time. (In contrast, the test sets for the object recognition datasets above are less heterogeneous.) The first column shows five groups of training datasets, each group consisting of three different views, denoted by A, B, and C. In each group, the remaining views D and E are merged into a new test dataset, denoted by F = D ∪ E. Two baselines are included: (1) adapting from the identified domains A′, B′, and C′ to the merged dataset F; (2) adapting from the merged dataset A ∪ B ∪ C to F. These are contrasted with adapting from the identified domains in the training datasets to the matched domains in F. In most groups, conditional reshaping yields a significant improvement in recognition accuracy over no reshaping of either training or testing data, and over reshaping the training data only.

4.4 Analysis of identified domains and the optimal number of domains

It is also interesting to see which factors are dominant in the identified domains.
Object appearance, illumination, or background? Do they coincide with the factors controlled by the dataset collectors? Some exemplar images are shown in Figure 1, where each row corresponds to an original dataset, and each column to an identified domain across two datasets. On the left of Figure 1 we reshape Amazon and Caltech-256 into two domains. In Domain II, all the "laptop" images 1) are taken from the front view and 2) have colorful screens, while Domain I images are less colorful and have more diversified views. It appears that the domains in Amazon and Caltech-256 are mainly determined by object pose and appearance (color). The figures on the right come from reshaping DSLR and Webcam, whose "keyboard" images were taken in an office environment with various lighting conditions, object poses, and backgrounds controlled by the dataset creators [2]. We can see that the images in Domain II have a gray background, while in Domain I the background is either white or wooden. Moreover, keyboards of the same model, characterized by color and shape, are almost perfectly assigned to the same domain. In sum, the main factors here are probably background and object appearance (color and shape).

[Figure 1: Exemplar images from the original datasets (rows: Amazon, Caltech; DSLR, Webcam) and the identified domains (columns: Identified Domain I, Identified Domain II) after reshaping. Note that the identified domains contain images from both datasets.]

[Figure 2: Domain-wise cross-validation (DWCV) for choosing the number of domains. Four panels plot accuracy (%) against the number of domains (2 to 5) for the dataset groups (A, C), (C, D, W), (Cam 1, 2, 3), and (Cam 2, 3, 4), each comparing the DWCV accuracy with the domain adaptation accuracy.]
Figure 2 plots some intermediate results of the domain-wise cross-validation (DWCV) used to determine the number of domains K to identify from the multiple training datasets. In addition to the DWCV accuracy A(K), the average classification accuracies on the target domain(s) are included for reference. We set A(K) to 0 when some category in a domain is assigned only one or no data points (as a result of the optimization). Generally, A(K) rises and then drops at some point; the value just before the drop is the optimal K* we use in the experiments. Interestingly, the number favored by DWCV coincides with the number of datasets we mix, even though, as our experiments above show, the ideal domain boundaries do not coincide with the dataset boundaries.

5 Conclusion

We introduced two domain properties, maximum distinctiveness and maximum learnability, for discovering latent domains from datasets. Accordingly, we proposed nonparametric approaches that encourage the extracted domains to satisfy these properties. Since visual discrimination is more consistent within each domain than within the heterogeneous datasets, better prediction performance can be achieved on the target domain. The proposed approach is extensively evaluated on visual object recognition and human activity recognition tasks. Our identified domains outperform not only the original datasets but also the domains discovered by [20], validating the effectiveness of our approach. Our analysis may also shed light on future dataset construction by examining the main factors behind the domains discovered from existing datasets.

Acknowledgments K.G. is supported by ONR ATL N00014-11-1-0105. B.G. and F.S. are supported by ARO Award # W911NF-12-1-0241, DARPA Contract # D11AP00278, and IARPA via DoD/ARL contract # W911NF-12-C-0012. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.

References
[1] L. Duan, D. Xu, I. W. Tsang, and J. Luo. Visual event recognition in videos by learning from web data. In CVPR, 2010.
[2] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
[3] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011.
[4] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[5] H. Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.
[6] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In EMNLP, 2006.
[7] J. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. In NIPS, 2007.
[8] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. IEEE Trans. NN, (99):1–12, 2009.
[9] J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
[10] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In NIPS, 2009.
[11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[12] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
[13] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool for image annotation. IJCV, 77:157–173, 2008.
[14] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset.
Technical report, California Institute of Technology, 2007.
[15] D. Weinland, E. Boyer, and R. Ronfard. Action recognition from arbitrary views using 3D exemplars. In ICCV, 2007.
[16] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In CVPR, 2011.
[17] B. Gong, F. Sha, and K. Grauman. Overcoming dataset bias: An unsupervised domain adaptation approach. In NIPS Workshop on Large Scale Visual Recognition and Retrieval, 2012.
[18] L. Cao, Z. Liu, and T. S. Huang. Cross-dataset action detection. In CVPR, 2010.
[19] T. Tommasi, N. Quadrianto, B. Caputo, and C. Lampert. Beyond dataset bias: multi-task unaligned shared knowledge transfer. In ACCV, 2012.
[20] J. Hoffman, B. Kulis, T. Darrell, and K. Saenko. Discovering latent domains for multisource domain adaptation. In ECCV, 2012.
[21] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In NIPS, 2007.
[22] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[23] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation. In NIPS, 2007.
[24] R. Li and T. Zickler. Discriminative virtual views for cross-view action recognition. In CVPR, 2012.
[25] K. Tang, V. Ramanathan, L. Fei-Fei, and D. Koller. Shifting weights: Adapting object detectors from image to video. In NIPS, 2012.
[26] B. Gong, K. Grauman, and F. Sha. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In ICML, 2013.
[27] S. Jegelka, A. Gretton, B. Schölkopf, B. K. Sriperumbudur, and U. von Luxburg. Generalized clustering via kernel embeddings. In Advances in Artificial Intelligence, 2009.
[28] Q. Sun, R. Chattopadhyay, S. Panchanathan, and J. Ye. A two-stage weighting framework for multi-source domain adaptation. In NIPS, 2011.
[29] L. Duan, I. W. Tsang, D.
Xu, and T. Chua. Domain adaptation from multiple sources via auxiliary classifiers. In ICML, 2009.
[30] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded up robust features. In ECCV, 2006.
[31] D. Tran and A. Sorokin. Human activity recognition with metric learning. In ECCV, 2008.
[32] A. Bergamo and L. Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, 2010.
[33] A. Farhadi and M. Tabrizi. Learning to recognize activities from the wrong view point. In ECCV, 2008.
[34] C.-H. Huang, Y.-R. Yeh, and Y.-C. Wang. Recognizing actions across cameras by exploring the correlated subspace. In ECCV, 2012.
[35] J. Liu, M. Shah, B. Kuipers, and S. Savarese. Cross-view action recognition via view knowledge transfer. In CVPR, 2011.
Perfect Associative Learning with Spike-Timing-Dependent Plasticity

Christian Albers, Institute of Theoretical Physics, University of Bremen, 28359 Bremen, Germany. calbers@neuro.uni-bremen.de
Maren Westkott, Institute of Theoretical Physics, University of Bremen, 28359 Bremen, Germany. maren@neuro.uni-bremen.de
Klaus Pawelzik, Institute of Theoretical Physics, University of Bremen, 28359 Bremen, Germany. pawelzik@neuro.uni-bremen.de

Abstract

Recent extensions of the Perceptron, such as the Tempotron and the Chronotron, suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity of anti-Hebbian form for excitatory synapses, together with spike-timing-dependent plasticity of Hebbian form for inhibitory synapses, is sufficient to realize the original Perceptron Learning Rule, provided these plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics, Tempotrons and Chronotrons can be learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underlie the acquisition of long-term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interaction of synaptic plasticity mechanisms with the dynamics of the participating neurons.

1 Introduction

Perceptrons are paradigmatic building blocks of neural networks [1]. The original Perceptron Learning Rule (PLR) is a supervised learning rule that employs a threshold to control weight changes, which also serves as a margin to enhance robustness [2, 3].
If the learning set is separable, the PLR algorithm is guaranteed to converge in a finite number of steps [1], which justifies the term 'perfect learning'. Associative learning can be considered a special case of supervised learning where the activity of the output neuron is used as a teacher signal, such that after learning missing activities are filled in. For this reason the PLR is very useful for building associative memories in recurrent networks, where it can serve to learn arbitrary patterns in a 'quasi-unsupervised' way. Here it turned out to be far more efficient than the simple Hebb rule, leading to a superior memory capacity and non-symmetric weights [4]. Note also that over-learning from repetitions of training examples is not possible with the PLR, because weight changes vanish as soon as the accumulated inputs are sufficient, a property which, in contrast to the naive Hebb rule, makes it suitable also for incremental learning of associative memories from sequential presentation of patterns.

On the other hand, it is not known if and how real synaptic mechanisms might realize the success-dependent self-regulation of the PLR in networks of spiking neurons in the brain. For example, in the Tempotron [5], a generalization of the Perceptron to spatio-temporal patterns, learning was conceived as even less biological than the PLR, since here it depends not only on the classification success, but also on a process that is not local in time, namely the localization of the absolute maximum of the (virtual) postsynaptic membrane potential of the post-synaptic neuron. The simplified Tempotron learning rule, while biologically more plausible, still relies on a reward signal which tells each neuron specifically whether it should have spiked or not. Taken together, while highly desirable, the feature of self-regulation in the PLR still poses a challenge for biologically realistic synaptic mechanisms.
The classical form of spike-timing-dependent plasticity (STDP) for excitatory synapses (here denoted CSTDP) states that the causal temporal order of first presynaptic and then postsynaptic activity leads to long-term potentiation of the synapse (LTP), while the reverse order leads to long-term depression (LTD) [6, 7, 8]. More recently, however, it became clear that STDP can exhibit different dependencies on the temporal order of spikes. In particular, it was found that the reversed temporal order (first post-, then presynaptic spiking) can lead to LTP (and vice versa; RSTDP), depending on the location on the dendrite [9, 10]. For inhibitory synapses, some experiments indicate that STDP exists as well and has the form of CSTDP [11]. Note that CSTDP of inhibitory synapses is, in its effect on the postsynaptic neuron, equivalent to RSTDP of excitatory synapses. Additionally, it has been shown that CSTDP does not always rely on spikes: strong subthreshold depolarization can replace the postsynaptic spike for LTD while keeping the usual timing dependence [12]. We therefore assume that there exists a second threshold for the induction of timing-dependent LTD. For simplicity and without loss of generality, we restrict the study to RSTDP for synapses that, in contradiction to Dale's law, can change their sign.

It is very likely that plasticity rules and dynamical properties of neurons co-evolved to take advantage of each other, and combining them could reveal new and desirable effects. A modeling example of a beneficial effect of such an interplay was investigated in [13], where CSTDP interacted with spike-frequency adaptation of the postsynaptic neuron to perform gradient descent on a squared error. Several other studies investigate the effect of STDP on network function, however mostly with a focus on stability issues (e.g. [14, 15, 16]). In contrast, we here focus on the constructive role of STDP for associative learning.
First we prove that RSTDP of excitatory synapses (or CSTDP of inhibitory synapses), when acting in concert with neuronal after-hyperpolarisation and depolarization-dependent LTD, is sufficient for realizing the classical Perceptron Learning Rule, and then show that this plasticity dynamics realizes a learning rule suited for the Tempotron and the Chronotron [17].

2 Ingredients

2.1 Neuron model and network structure

We assume a feed-forward network of N presynaptic neurons and one postsynaptic integrate-and-fire neuron with a membrane potential U governed by

    τ_U dU/dt = −U + I_syn + I_ext,   (1)

where I_syn denotes the input from the presynaptic neurons, and I_ext is an input which can be used to drive the postsynaptic neuron to spike at certain times. When the neuron reaches a threshold potential U_thr, it is reset to a reset potential U_reset < 0, from where it decays back to the resting potential U_∞ = 0 with time constant τ_U. Spikes and other signals (depolarization) take finite times to travel down the axon (τ_a) and the dendrite (τ_d). Synaptic transmission takes the form of delta pulses, which reach the soma of the postsynaptic neuron after time τ_a + τ_d, and are modulated by the synaptic weight w. We denote the presynaptic spike train of neuron i as x_i with spike times t^i_pre:

    x_i(t) = Σ_{t^i_pre} δ(t − t^i_pre).   (2)

Figure 1: Illustration of the STDP mechanism. A: The upper trace (red) is the membrane potential of the postsynaptic neuron. Shown are the firing threshold U_thr and the threshold for LTD, U_st. The middle trace (black) is the variable z(t), the train of LTD threshold-crossing events. Please note that the first spike in z(t) occurs at a different time than the neuronal spike. The bottom traces show w(t) (yellow) and x̄ (blue) of a selected synapse. The second event in z reads out the trace of the presynaptic spike x̄, leading to LTD. B: Learning rule (4) is equivalent to RSTDP.
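The membrane dynamics of Eq. (1), with delayed delta-pulse synapses and reset to U_reset, can be sketched with simple Euler integration. This is a minimal illustration, not the authors' simulation code; the parameter values and the convention that each delta pulse kicks the membrane directly by its weight (absorbing the 1/τ_U factor into w) are assumptions.

```python
import numpy as np

def simulate_lif(pre_spikes, w, T=0.1, dt=1e-4, tau_U=0.015,
                 U_thr=0.02, U_reset=-0.02, tau_a=0.001, tau_d=0.001):
    """Euler integration of tau_U * dU/dt = -U + I_syn (Eq. 1, I_ext = 0).

    pre_spikes: list of (t_pre, i) pairs; spike of presynaptic neuron i
    arrives at the soma after the axonal + dendritic delay tau_a + tau_d
    and kicks the membrane by the synaptic weight w[i].
    """
    n_steps = int(T / dt)
    delay = tau_a + tau_d
    # Bin the delayed delta pulses onto the time grid.
    I = np.zeros(n_steps)
    for (t_pre, i) in pre_spikes:
        k = int(round((t_pre + delay) / dt))
        if k < n_steps:
            I[k] += w[i]
    U = np.zeros(n_steps)
    out_spikes = []
    for k in range(1, n_steps):
        U[k] = U[k - 1] - dt / tau_U * U[k - 1] + I[k]  # leak + pulse kick
        if U[k] >= U_thr:                               # threshold crossing
            out_spikes.append(k * dt)
            U[k] = U_reset                              # reset below rest
    return U, out_spikes
```

With five synchronous presynaptic spikes whose summed weight exceeds U_thr, the neuron fires once and is hyperpolarised to U_reset, from where it decays back toward U_∞ = 0.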
A postsynaptic spike leads to an instantaneous jump in the trace ȳ (top left, red line), which decays exponentially. Subsequent presynaptic spikes (dark blue bars and corresponding thin gray bars in the STDP window) "read out" the state of the trace for the respective ∆t = t_pre − t_post. Similarly, z(t) reads out the presynaptic trace x̄ (lower left, blue line). Sampling over all possible times results in the STDP window (right).

The postsynaptic neuron receives the input I_syn(t) = Σ_i w_i x_i(t − τ_a − τ_d). The postsynaptic spike train is similarly denoted by y(t) = Σ_{t_post} δ(t − t_post).

2.2 The plasticity rule

The plasticity rule we employ mimics reverse STDP: a postsynaptic spike which arrives at the synapse shortly before a presynaptic spike leads to synaptic potentiation. For synaptic depression the relevant signal is not the spike, but the point in time where U(t) crosses an additional threshold U_st from below, with U_∞ < U_st < U_thr (a "subthreshold threshold"). These events are modelled as δ-pulses in the function z(t) = Σ_k δ(t − t_k), where the t_k are the times of the aforementioned threshold-crossing events (see Fig. 1A for an illustration of the principle). The temporal characteristic of (reverse) STDP is preserved: if a presynaptic spike occurs shortly before the membrane potential crosses this threshold, the synapse depresses. Timing-dependent LTD without postsynaptic spiking has been observed, although with classical timing requirements [12]. We formalize this by letting pre- and postsynaptic spikes each drive a synaptic trace:

    τ_pre dx̄/dt = −x̄ + x(t − τ_a),
    τ_post dȳ/dt = −ȳ + y(t − τ_d).   (3)

The learning rule is a read-out of the traces by spiking and threshold-crossing events, respectively:

    dw/dt ∝ ȳ x(t − τ_a) − γ x̄ z(t − τ_d),   (4)

where γ is a factor which scales depression and potentiation relative to each other. Fig. 1B shows how this plasticity rule creates RSTDP.
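Rule (4), driven by the exponential traces of Eq. (3), produces the pairwise RSTDP window of Fig. 1B. The sketch below computes the weight change for a single pre/post pairing under the idealizing assumption (ours, not the paper's) that the LTD event in z coincides with the postsynaptic spike and that the delays τ_a, τ_d are zero; the time constants are illustrative.

```python
import math

def rstdp_dw(dt, tau_pre=0.015, tau_post=0.2, gamma=2.0):
    """Pairwise weight change of rule (4) for one pre/post spike pairing.

    dt = t_pre - t_post.
    dt > 0: the presynaptic spike reads out the postsynaptic trace y-bar,
            giving potentiation (reverse STDP).
    dt < 0: the LTD event z (idealized here as coinciding with the
            postsynaptic spike) reads out the presynaptic trace x-bar,
            giving depression scaled by gamma.
    """
    if dt >= 0:
        return math.exp(-dt / tau_post) / tau_post
    return -gamma * math.exp(dt / tau_pre) / tau_pre
```

Sampling `rstdp_dw` over a range of dt reproduces the two exponential lobes of the window: LTP for post-before-pre, LTD for pre-before-post, each strongest near coincidence.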
3 Equivalence to the Perceptron Learning Rule

The Perceptron Learning Rule (PLR) for positive binary inputs and outputs is given by

    ∆w_i^µ ∝ x_0^{i,µ} (2y_0^µ − 1) Θ[κ − (2y_0^µ − 1)(h^µ − ϑ)],   (5)

where x_0^{i,µ} ∈ {0, 1} denotes the activity of presynaptic neuron i in pattern µ ∈ {1, ..., P}, y_0^µ ∈ {0, 1} signals the desired response to pattern µ, κ > 0 is a margin which ensures a certain robustness against noise after convergence, h^µ = Σ_i w_i x_0^{i,µ} is the input to the postsynaptic neuron, ϑ denotes the firing threshold, and Θ(x) denotes the Heaviside step function. If the P patterns are linearly separable, the perceptron will converge to a correct solution for the weights in a finite number of steps. For random patterns this is generally the case for P < 2N; a finite margin κ reduces the capacity.

Interestingly, for the case of temporally well-separated synchronous spike patterns, the combination of RSTDP-like synaptic plasticity with depolarization-dependent LTD and neuronal hyperpolarization leads to a plasticity rule which can be mapped to the Perceptron Learning Rule. To cut down unnecessary notation in the derivation, we drop the indices i and µ except where necessary and consider only times 0 ≤ t ≤ τ_a + 2τ_d. We consider a single postsynaptic neuron with N presynaptic neurons, with the condition τ_d < τ_a. During learning, presynaptic spike patterns consisting of synchronous spikes at time t = 0 are induced, concurrent with a possibly occurring postsynaptic spike which signals the class the presynaptic pattern belongs to. This is equivalent to the setting of a single-layered perceptron with binary neurons. With x_0 and y_0 used as above, we can write the pre- and postsynaptic activity as x(t) = x_0 δ(t) and y(t) = y_0 δ(t). The membrane potential of the postsynaptic neuron depends on y_0:

    U(t) = y_0 U_reset exp(−t/τ_U),
    U(τ_a + τ_d) = y_0 U_reset exp(−(τ_a + τ_d)/τ_U) = y_0 U_ad.   (6)

Similarly, the synaptic current is

    I_syn(t) = Σ_i w_i x_0^i δ(t − τ_a − τ_d),
    I_syn(τ_a + τ_d) = Σ_i w_i x_0^i = I_ad.   (7)

The activity traces at the synapses are

    x̄(t) = x_0 Θ(t − τ_a) exp(−(t − τ_a)/τ_pre) / τ_pre,
    ȳ(t) = y_0 Θ(t − τ_d) exp(−(t − τ_d)/τ_post) / τ_post.   (8)

The variable of threshold crossing, z(t), depends on the history of the postsynaptic neuron, which again can be written with the aid of y_0 as

    z(t) = Θ(I_ad + y_0 U_ad − U_st) δ(t − τ_a − τ_d).   (9)

Here, Θ reflects the condition for the induction of LTD: only when the postsynaptic input at time t = τ_a + τ_d is greater than the residual hyperpolarization (note U_ad < 0) plus the threshold U_st does a potential LTD event get registered. These are the ingredients for the plasticity rule (4):

    ∆w ∝ ∫ [ȳ x(t − τ_a) − γ x̄ z(t − τ_d)] dt
       = x_0 y_0 exp(−(τ_a + τ_d)/τ_post)/τ_post − γ x_0 exp(−2τ_d/τ_pre)/τ_pre · Θ(I_ad + y_0 U_ad − U_st).   (10)

We shorten this expression by choosing γ such that the prefactors of both terms are equal, which we can then drop:

    ∆w ∝ x_0 (y_0 − Θ(I_ad + y_0 U_ad − U_st)).   (11)

We expand the equation by adding and subtracting y_0 Θ(I_ad + y_0 U_ad − U_st):

    ∆w ∝ x_0 [y_0 (1 − Θ(I_ad + y_0 U_ad − U_st)) − (1 − y_0) Θ(I_ad + y_0 U_ad − U_st)]
       = x_0 [y_0 Θ(−I_ad − U_ad + U_st) − (1 − y_0) Θ(I_ad − U_st)].   (12)

We used 1 − Θ(x) = Θ(−x) in the last transformation, and dropped y_0 from the arguments of the Heaviside functions, as the two terms separate into the cases y_0 = 0 and y_0 = 1. We perform a similar transformation to construct an expression G that turns into the argument of either the left or the right Heaviside function, depending on y_0. That expression is

    G = I_ad − U_st + y_0 (−2I_ad − U_ad + 2U_st),   (13)

with which we replace the arguments:

    ∆w ∝ x_0 y_0 Θ(G) − x_0 (1 − y_0) Θ(G) = x_0 (2y_0 − 1) Θ(G).   (14)

The last task is to show that G and the argument of the Heaviside function in equation (5) are equivalent. For this we choose I_ad = h, U_ad = −2κ and U_st = ϑ − κ, keeping in mind that ϑ = U_thr is the firing threshold. Substituting into G gives

    G = I_ad − U_st + y_0 (−2I_ad − U_ad + 2U_st)
      = h − ϑ + κ − 2y_0 h + 2y_0 κ + 2y_0 ϑ − 2y_0 κ
      = κ − (2y_0 − 1)(h − ϑ),   (15)

which is the same as the argument of the Heaviside function in equation (5).
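The algebraic equivalence of the biological rule (11) and the PLR (5) under the substitutions I_ad = h, U_ad = −2κ, U_st = ϑ − κ can be checked numerically. This is a verification sketch of our own; Θ is taken as the strict step Θ(x) = 1 for x > 0, so measure-zero boundary cases are avoided by sampling h at random.

```python
import random

def step(x):
    """Heaviside step, strict convention: 1 for x > 0, else 0."""
    return 1.0 if x > 0 else 0.0

def dw_plr(x0, y0, h, theta, kappa):
    """Perceptron Learning Rule, Eq. (5) (up to the learning rate)."""
    return x0 * (2 * y0 - 1) * step(kappa - (2 * y0 - 1) * (h - theta))

def dw_stdp(x0, y0, h, theta, kappa):
    """Biological rule, Eq. (11), with the proof's substitutions
    I_ad = h, U_ad = -2*kappa, U_st = theta - kappa."""
    I_ad, U_ad, U_st = h, -2 * kappa, theta - kappa
    return x0 * (y0 - step(I_ad + y0 * U_ad - U_st))
```

Sweeping x_0, y_0 ∈ {0, 1} and random inputs h confirms that the two updates coincide exactly, as Eqs. (12)-(15) show.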
Therefore, we have shown the equivalence of both learning rules.

4 Associative learning of spatio-temporal spike patterns

4.1 Tempotron learning with RSTDP

The condition of exact spike synchrony used for the above equivalence proof can be relaxed to include the association of spatio-temporal spike patterns with a desired postsynaptic activity. In the following we take the perspective of the postsynaptic neuron, which during learning is externally activated (or not) to signal the respective class by spiking at time t = 0 (or not). During learning, in each trial presynaptic spatio-temporal spike patterns are presented in the time span 0 < t < T, and plasticity is ruled by (4). Under these conditions the resulting synaptic weights realize a Tempotron with substantial memory capacity. A Tempotron is an integrate-and-fire neuron with input weights adjusted to perform arbitrary classifications of (sparse) spike patterns [5, 18]. To implement a Tempotron, we make two changes to the model. First, we separate the time scales of membrane potential and hyperpolarization by introducing a variable ν:

    τ_ν dν/dt = −ν.   (16)

Immediately after a postsynaptic spike, ν is reset to ν_spike < 0. The reason is that the length of the hyperpolarization determines the time window in which significant learning can take place. To improve comparability with the Tempotron as presented originally in [5], we set T = 0.5 s and τ_ν = τ_post = 0.2 s, so that the postsynaptic neuron can learn to spike almost anywhere within the time window, and we introduce postsynaptic potentials (PSPs) with a finite rise time:

    τ_s dI_syn/dt = −I_syn + Σ_i w_i x_i(t − τ_a),   (17)

where w_i denotes the synaptic weight of presynaptic neuron i. With τ_s = 3 ms and τ_U = 15 ms the PSPs match the ones used in the original Tempotron study. This second change has little impact on the capacity or otherwise. With these changes, the membrane potential is governed by

    τ_U dU/dt = (ν − U) + I_syn(t − τ_d).   (18)

A postsynaptic spike resets U to ν_spike = U_reset < 0.
U_reset is the initial hyperpolarization which is induced after a spike and relaxes back to zero with the time constant τ_ν ≫ τ_U. Presynaptic spikes add up linearly, and for simplicity we assume that both the axonal and the dendritic delay are negligibly small: τ_a = τ_d = 0. It is a natural choice to set τ_U = τ_pre and τ_ν = τ_post: τ_U sets the time scale for the summation of EPSPs contributing to spurious spikes, while τ_ν sets the time window in which the desired spikes can lie. They should therefore coincide with LTD and LTP, respectively.

Figure 2: Illustration of Perceptron learning with RSTDP, subthreshold LTD and postsynaptic hyperpolarization. Shown are the traces x̄, ȳ and U; pre- and postsynaptic spikes are displayed as black bars at t = 0. A: Learning in the case y_0 = 1, i.e. a postsynaptic spike as the desired output. Initially the weights are too low and the synaptic current (summed PSPs) is smaller than U_st. The weight change is LTP only, until during pattern presentation the membrane potential hits U_st; at this point LTP and LTD cancel exactly, and learning stops. B: Pattern completion for y_0 = 1. Shown are the same traces as in A in the absence of an initial postsynaptic spike. The membrane potential after learning is drawn as a dashed line to highlight the amplitude. Without the initial hyperpolarization, the synaptic current after learning is large enough to cross the spiking threshold, and the postsynaptic neuron fires the desired spike. Learning until U_st is reached ensures a minimum height of the synaptic currents and therefore robustness against noise. C: Pattern presentation and completion for y_0 = 0. Initially, the synaptic current during pattern presentation causes a spike and consequently LTD. Learning stops when the membrane potential stays below U_st. Again, this ensures a certain robustness against noise, analogous to the margin in the PLR.

Figure 3: Performance of the Tempotron and the Chronotron after convergence.
A: Classification performance of the Tempotron. Shown is the fraction of patterns which elicit the desired postsynaptic activity upon presentation. Perfect recall for all N is achieved up to α = 0.18; beyond that mark, some of the patterns become incorrectly classified. The inset shows the learning curves for α = 7/16. The final fraction of correctly classified patterns is the average over the last 500 blocks of each run. B: Performance of the Chronotron. Shown is the fraction of patterns which during recall succeed in producing the correct postsynaptic spike time within a window of length 30 ms after the teacher spike. See the supplemental material for a detailed description. Please note that the scale of the load axis differs between A and B.

Table 1: Parameters for Tempotron learning

    τ_U, τ_pre = 15 ms | τ_ν, τ_post = 200 ms | τ_s = 3 ms | U_thr = 20 mV | U_st = 19 mV | ν_spike = −20 mV | η = 10⁻⁵ | γ = 2

4.1.1 Learning performance

We test the performance of networks of N input neurons at classifying spatio-temporal spike patterns by generating P = αN patterns, which we repeatedly present to the network. In each pattern, each presynaptic neuron spikes exactly once at a fixed time in each presentation, with spike times uniformly distributed over the trial. Learning is organized in learning blocks: in each block all P patterns are presented in randomized order. Synaptic weights are initialized at zero and are updated after each pattern presentation. After each block, we test whether the postsynaptic output matches the desired activity for each pattern. If during training a postsynaptic spike at t = 0 was induced, the output may lie anywhere in the testing trial for a positive outcome. To test the scaling of the capacity, we generate networks of 100 to 600 neurons and present the patterns until the classification error reaches a plateau. Examples of learning curves (classification error over time) are shown in Fig. 3. For each combination of α and N, we run 40 simulations.
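The pattern ensemble described above (P = αN patterns, each presynaptic neuron spiking exactly once at a fixed time drawn uniformly over the trial, plus a binary class label per pattern) can be generated as follows. This is a sketch; the function name and the use of NumPy are our own.

```python
import numpy as np

def make_patterns(N, alpha, T=0.5, seed=0):
    """Generate P = alpha * N spatio-temporal spike patterns.

    Each of the N presynaptic neurons spikes exactly once, at a time
    drawn uniformly from [0, T); each pattern gets a random binary
    label (desired postsynaptic activity: spike at t = 0 or not).
    """
    rng = np.random.default_rng(seed)
    P = int(alpha * N)
    spike_times = rng.uniform(0.0, T, size=(P, N))  # one spike per neuron
    labels = rng.integers(0, 2, size=P)             # y_0 in {0, 1}
    return spike_times, labels
```

For N = 100 and α = 0.18 this yields 18 patterns of 100 spike times each, matching the load at which the paper reports perfect recall.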
The final classification error is the mean over the last 500 blocks, averaged over all runs. The parameters used in the simulations are shown in Table 1. Fig. 3 shows the final classification performance as a function of the memory load α for all network sizes used. Up to a load of 0.18, the networks learn to classify each pattern perfectly. Higher loads leave a residual error which increases with load, and the drop in performance is steeper for larger networks. In comparison, the simplified Tempotron learning rule proposed in [5] achieves perfect classification up to α ≈ 1.5, i.e. one order of magnitude higher.

4.2 Chronotron learning with RSTDP

In the Chronotron [17], input spike patterns become associated with desired spike trains. There are different learning rules which can achieve this mapping, including E-learning, I-learning, ReSuMe and PBSNLR [17, 19, 20]. The plasticity mechanism presented here has the tendency to generate postsynaptic spikes as close in time as possible to the teacher spike during recall; the presented learning principle is therefore a candidate for Chronotron learning. The average distance of these spikes depends on the time constants of hyperpolarization and of the learning window, especially τ_post. The modifications of the model necessary to implement Chronotron learning are described in the supplement. The resulting capacity, i.e. the ability to generate the desired spike times within a short window in time, is shown in Fig. 3B. Up to a load of α = 0.01, recall is perfect within the limits of the learning window τ_lw = 30 ms. Inspection of the spike times reveals that the average distance of output spikes to the respective teacher spike is much shorter than the learning window (≈ 2 ms for α = 0.01; see supplemental Fig. 1).

5 Discussion

We present a new and biologically highly plausible approach to learning in neuronal networks.
RSTDP with subthreshold LTD, in concert with hyperpolarisation, is shown to be mathematically equivalent to the Perceptron Learning Rule for activity patterns consisting of synchronous spikes, thereby inheriting the highly desirable properties of the PLR: convergence in finite time, a stop condition once performance is sufficient, and robustness against noise. This provides a biologically plausible mechanism for building associative memories with a capacity close to the theoretical maximum. Equivalence of STDP with the PLR was shown before in [21], but that equivalence only holds on average; we would like to stress that we here present a novel approach that ensures exact mathematical equivalence to the PLR. The mechanism proposed here is complementary to a previous approach [13] which uses CSTDP in combination with spike-frequency adaptation to perform gradient descent on a squared error. However, that approach relies on an explicit teacher signal and is not applicable to auto-associative memories in recurrent networks. Most importantly, the approach presented here inherits the important features of self-regulation and fast convergence from the original Perceptron, which are absent in [13]. For sparse spatio-temporal spike patterns, extensive simulations show that the same mechanism is able to learn Tempotrons and Chronotrons with substantial memory capacity. In the case of the Tempotron, the capacity achieved with this mechanism is lower than with a comparably plausible learning rule; in the case of the Chronotron, however, the capacity comes close to the one obtained with a commonly employed, supervised spike-time learning rule. Moreover, those rules are biologically quite unrealistic. A prototypical example of such a supervised learning rule is the Tempotron rule proposed by Gütig and Sompolinsky [5].
Essentially, after a pattern presentation the complete time course of the membrane potential during the presentation is examined, and if the classification was erroneous, the synaptic weights which contributed most to the absolute maximum of the potential are changed. In other words, the neurons would have to be able to retrospectively disentangle contributions to their membrane potential at a certain time in the past. As we show here, RSTDP with subthreshold LTD together with postsynaptic hyperpolarization for the first time provides a realistic mechanism for Tempotron and Chronotron learning.

Spike after-hyperpolarization is often neglected in theoretical studies or assumed only to play a role in network stabilization by providing refractoriness. Depolarization-dependent STDP receives little attention in modeling studies (but see [22]), possibly because there are only a few studies which show that such a mechanism exists [12, 23]. The novelty of the learning mechanism presented here lies in the constructive roles both play in concert. After-hyperpolarization allows synaptic potentiation for presynaptic inputs immediately after the teacher spike without causing additional non-teacher spikes, which would be detrimental for learning; during recall, the absence of the hyperpolarization ensures the then-desired threshold crossing of the membrane potential (see Fig. 2B). Subthreshold LTD guarantees convergence of learning: it counteracts synaptic potentiation when the membrane potential becomes sufficiently high after the teacher spike. The combination of both provides the learning margin, which makes the resulting network robust against noise in the input. Taken together, our results show that the interplay of neuronal dynamics and synaptic plasticity rules can give rise to powerful learning dynamics.

Acknowledgments

This work was in part funded by the German Ministry for Science and Education (BMBF), grant number 01GQ0964.
We are grateful to the anonymous reviewers who pointed out an error in the first version of the proof.

References

[1] Hertz J, Krogh A, Palmer RG (1991) Introduction to the Theory of Neural Computation. Addison-Wesley.
[2] Rosenblatt F (1957) The Perceptron: a perceiving and recognizing automaton. Report 85-460-1.
[3] Minsky ML, Papert SA (1969) Perceptrons. Cambridge, MA: MIT Press.
[4] Diederich S, Opper M (1987) Learning of correlated patterns in spin-glass networks by local learning rules. Physical Review Letters 58(9):949-952.
[5] Gütig R, Sompolinsky H (2006) The Tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience 9(3):420-428.
[6] Dan Y, Poo M (2004) Spike timing-dependent plasticity of neural circuits. Neuron 44:23-30.
[7] Dan Y, Poo M (2006) Spike timing-dependent plasticity: from synapse to perception. Physiological Reviews 86(3):1033-1048.
[8] Caporale N, Dan Y (2008) Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience 31:25-46.
[9] Froemke RC, Poo MM, Dan Y (2005) Spike-timing-dependent synaptic plasticity depends on dendritic location. Nature 434:221-225.
[10] Sjöström PJ, Häusser M (2006) A cooperative switch determines the sign of synaptic plasticity in distal dendrites of neocortical pyramidal neurons. Neuron 51:227-238.
[11] Haas JS, Nowotny T, Abarbanel HDI (2006) Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. Journal of Neurophysiology 96(6):3305-3313.
[12] Sjöström PJ, Turrigiano GG, Nelson SB (2004) Endocannabinoid-dependent neocortical layer-5 LTD in the absence of postsynaptic spiking. Journal of Neurophysiology 92:3338-3343.
[13] D'Souza P, Liu SC, Hahnloser RHR (2010) Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity. PNAS 107(10):4722-4727.
[14] Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience 3:919-926.
[15] Izhikevich EM, Desai NS (2003) Relating STDP to BCM. Neural Computation 15:1511-1523.
[16] Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W (2011) Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334(6062):1569-1573.
[17] Florian RV (2012) The Chronotron: a neuron that learns to fire temporally precise spike patterns. PLoS ONE 7(8):e40233.
[18] Rubin R, Monasson R, Sompolinsky H (2010) Theory of spike timing-based neural classifiers. Physical Review Letters 105(21):218102.
[19] Ponulak F, Kasinski A (2010) Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Computation 22:467-510.
[20] Xu Y, Zeng X, Zhong S (2013) A new supervised learning algorithm for spiking neurons. Neural Computation 25:1475-1511.
[21] Legenstein R, Naeger C, Maass W (2005) What can a neuron learn with spike-timing-dependent plasticity? Neural Computation 17:2337-2382.
[22] Clopath C, Büsing L, Vasilaki E, Gerstner W (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13:344-355.
[23] Fino E, Deniau JM, Venance L (2009) Brief subthreshold events can act as Hebbian signals for long-term plasticity. PLoS ONE 4(8):e6557.
| 2013 | 42 | 5,117 |
Tracking Time-varying Graphical Structure

Erich Kummerfeld (ekummerf@andrew.cmu.edu), David Danks (ddanks@andrew.cmu.edu)
Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

Structure learning algorithms for graphical models have focused almost exclusively on stable environments in which the underlying generative process does not change; that is, they assume that the generating model is globally stationary. In real-world environments, however, such changes often occur without warning or signal: real-world data often come from generating models that are only locally stationary. In this paper, we present LoSST, a novel, heuristic structure learning algorithm that tracks changes in graphical model structure or parameters in a dynamic, real-time manner. We show by simulation that the algorithm performs comparably to batch-mode learning when the generating graphical structure is globally stationary, and significantly better when it is only locally stationary.

1 Introduction

Graphical models are used in a wide variety of domains, both to provide compact representations of probability distributions for rapid, efficient inference, and also to represent complex causal structures. Almost all standard algorithms for learning graphical model structure [9, 10, 12, 3] assume that the underlying generating structure does not change over the course of data collection, and so the data are i.i.d. (or can be transformed into i.i.d. data). In the real world, however, generating structures often change, and it can be critical to quickly detect the structure change and then learn the new one. In many of these real-world contexts, we also do not have the luxury of collecting large amounts of data and then retrospectively determining when (if ever) the structure changed. That is, we cannot learn in "batch mode," but must instead learn the novel structure in an online manner, processing the data as it arrives.
Current online learning algorithms can detect and handle changes in the learning environment, but none are capable of general graphical model structure learning. In this paper, we develop a heuristic algorithm that fills this gap: it assumes only that our data are locally i.i.d., and learns graphical model structure in an online fashion. In the next section, we quickly survey related methods and show that they are individually insufficient for this task. We then present the details of our algorithm and provide simulation evidence that it can successfully learn graphical model structure in an online manner. Importantly, when there is a stable generating structure, the algorithm's performance is indistinguishable from that of a standard batch-mode structure learning algorithm; using this algorithm thus incurs no additional costs in "normal" structure learning situations.

2 Related work

We focus here on graphical models based on directed acyclic graphs (DAGs) over random variables with corresponding quantitative components, whether Bayesian networks or recursive Structural Equation Models (SEMs) [3, 12, 10]. All of our observations in this paper, as well as the core algorithm, are readily adaptable to learning structure for models based on undirected graphs, such as Markov random fields or Gaussian graphical models [6, 9].

Standard graphical model structure learning algorithms divide into two rough types. Bayesian/score-based methods aim to find the model M that maximizes P(M | Data), but in practice score the models using a decomposable measure based on P(Data | M) and the number of parameters in M [3]. Constraint-based structure learning algorithms leverage the fact that every graphical model predicts a pattern of (conditional) independencies over the variables, though multiple models can predict the same pattern; these algorithms (e.g., [10, 12]) find the set of graphical models that best predict the (conditional) independencies in the data.
Both types of structure learning algorithms assume that the data come from a single generating structure, and so neither is directly usable for learning when structure change is possible: they learn from the sufficient statistics, but neither has any mechanism for detecting change, responding to it, or learning the new structure. Bayesian learning algorithms, or various approximations to them, are often used for online learning, but precisely because case-by-case Bayesian updating yields the same output as batch-mode processing (assuming the data are i.i.d.). Since we are focused on situations in which the underlying structure can change, we do not want the same output.

One could instead look to online learning methods that track some environmental feature. The classic TDL algorithm, TD(0) [13], provides a dynamic estimate E_t(X) of a univariate random variable X using a simple update rule:

    E_{t+1}(X) ← E_t(X) + α (X_t − E_t(X)),

where X_t is the value of X at time t. The static α parameter encodes the learning rate and trades off convergence rate against robustness to noise (in stable environments). In general, TDL methods are good at tracking slow-moving environmental changes, but perform suboptimally during times of either high stability or dramatic change, such as when the generating model structure abruptly changes.

Both Bayesian [1] and frequentist [4] online changepoint detection (CPD) algorithms are effective at detecting abrupt changes, but do so by storing substantial portions of the input data. For example, a Bayesian CPD [1] outputs the probability of a changepoint having occurred r timesteps ago, and so the algorithm must store more than r datapoints. Furthermore, CPD algorithms assume a model of the environment that has only abrupt changes separated by periods of stability; environments that evolve slowly but continuously will have their time-series discretized in seemingly arbitrary fashion, or not at all.
Two previous papers have aimed to learn time-indexed graph structures from time-series data, though both require full datasets as input, and so cannot function in real-time [14, 11]. Talih and Hengartner (2005) take an ordered data set and divide it into a fixed number of (possibly empty) data intervals, each with an associated undirected graph that differs by one edge from its neighbors. In contrast with our work, they focus on a particular type of graph structure change (single edge addition or deletion), operate solely in "batch mode," and use undirected graphs instead of directed acyclic graph models. Siracusa and Fisher III (2009) use a Bayesian approach to find the posterior uncertainty over the possible directed edges at different points in a time-series. Our approach differs by using frequentist methods instead of Bayesian ones (since we would otherwise need to maintain a probability distribution over the superexponential number of graphical models), and by being able to operate in real-time on an incoming data stream.

3 Locally Stationary Structure Tracker (LoSST) Algorithm

Given a set of continuous variables V, we assume that there is, at each time r, a true underlying generative model G_r over V. G_r is assumed to be a recursive Structural Equation Model (SEM): a pair ⟨G, F⟩, where G denotes a DAG over V, and F is a set of linear equations of the form V_i = Σ_{V_j ∈ pa(V_i)} a_ji · V_j + ε_i, where pa(V_i) denotes the variables V_j ∈ G such that V_j → V_i, and the ε_i are normally distributed noise/error terms. In contrast to previous work on structure learning, we assume only that the generating process is locally stationary: for each time r, data are generated i.i.d. from G_r, but it is not necessarily the case that G_r = G_s for r ≠ s. Notice that G_r can change in both structure (i.e., adding, removing, or reorienting edges) and parameters (i.e., changes in the a_ji's or the ε_i distributions).
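The recursive SEM just described can be sketched as follows; the specific graph, coefficients, and noise scales below are illustrative assumptions, not from the paper:

```python
import numpy as np

def sample_sem(n_samples, coeffs, noise_std, rng):
    """Draw i.i.d. samples from a recursive linear SEM.

    coeffs[i][j] = a_ji, the weight on parent V_j in the equation for V_i;
    variables are assumed topologically ordered (parents have lower index),
    so each V_i can be computed from already-generated columns.
    """
    d = len(noise_std)
    X = np.zeros((n_samples, d))
    for i in range(d):
        eps = rng.normal(0.0, noise_std[i], size=n_samples)
        X[:, i] = sum(a * X[:, j] for j, a in coeffs.get(i, {}).items()) + eps
    return X

rng = np.random.default_rng(0)
# Hypothetical chain V0 -> V1 -> V2 with a_01 = 2.0 and a_12 = -1.0
data = sample_sem(50_000, {1: {0: 2.0}, 2: {1: -1.0}}, [1.0, 1.0, 1.0], rng)
```

A "locally stationary" stream in the paper's sense would concatenate draws from several such SEMs, switching `coeffs` (and possibly the graph) at changepoints.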
At a high level, the Locally Stationary Structure Tracker (LoSST) algorithm takes, at each timestep r, a new datapoint as input and outputs a graphical model M_r. Obviously, a single datapoint is insufficient to learn graphical model structure. The LoSST algorithm instead tracks the locally stationary sufficient statistics (for recursive SEMs: the means, covariances, and sample size) in an online fashion, and then dynamically (re)learns the graphical model structure as appropriate. The LoSST algorithm processes each datapoint only once, and so LoSST can also function as a single-pass, graphical model structure learner for very large datasets. Let X^r be the r-th multivariate datapoint and let X^r_i be the value of V_i for that datapoint. To track the potentially changing generating structure, the datapoints must potentially be differentially weighted. In particular, datapoints should be weighted more heavily after a change occurs. Let a_r ∈ (0, ∞) be the weight on X^r, and let b_r = Σ_{k=1}^r a_k be the sum of those weights over time. The weighted mean of V_i after datapoint r is μ^r_i = Σ_{k=1}^r (a_k / b_r) X^k_i, which can be computed in an online fashion using the update equation:

μ^{r+1}_i = (b_r / b_{r+1}) μ^r_i + (a_{r+1} / b_{r+1}) X^{r+1}_i    (1)

The (weighted) covariance between V_i and V_j after datapoint r is provably equal to C^r_{V_i,V_j} = Σ_{k=1}^r (a_k / b_r)(X^k_i − μ^r_i)(X^k_j − μ^r_j). Let δ_i = μ^{r+1}_i − μ^r_i = (a_{r+1} / b_{r+1})(X^{r+1}_i − μ^r_i). The update equation for C^{r+1} can be written (after some algebra) as:

C^{r+1}_{X_i,X_j} = (1 / b_{r+1}) [b_r C^r_{X_i,X_j} + b_r δ_i δ_j + a_{r+1}(X^{r+1}_i − μ^{r+1}_i)(X^{r+1}_j − μ^{r+1}_j)]    (2)

If a_k = c for all k and some constant c > 0, then the estimated covariance matrix is identical to the batch-mode estimated covariance matrix. If a_r = α b_r, then the learning is the same as if one uses TD(0) learning for each covariance with a learning rate of α.
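Equations (1) and (2) can be implemented directly; a minimal NumPy sketch (names are ours), with constant weights run as a sanity check against the batch estimates:

```python
import numpy as np

def update_stats(mu, C, b, a_new, x):
    """One step of the online weighted mean/covariance updates,
    equations (1) and (2) in the text."""
    b_new = b + a_new
    delta = (a_new / b_new) * (x - mu)   # delta_i = mu^{r+1}_i - mu^r_i
    mu_new = mu + delta                  # eq. (1), rearranged
    resid = x - mu_new
    C_new = (b * C + b * np.outer(delta, delta)
             + a_new * np.outer(resid, resid)) / b_new  # eq. (2)
    return mu_new, C_new, b_new

# With constant weights a_k = 1, the online estimates should match the
# batch-mode mean and (biased) covariance, as the text claims.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
mu, C, b = np.zeros(3), np.zeros((3, 3)), 0.0
for x in X:
    mu, C, b = update_stats(mu, C, b, 1.0, x)
```

Only `mu`, `C`, and the running weight total `b` are stored, matching the paper's claim that tracking requires remembering just the previous values and b_r.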
The sample size S^r is more complicated, since datapoints are weighted differently and so the "effective" sample size can differ from the actual sample size (though it should always be less than or equal to it). Because X^{r+1} comes from the current generating structure, it should always contribute 1 to the effective sample size. In addition, X^{r+1} is weighted a_{r+1}/a_r times as much as X^r. If we adjust the natural sample size update equation to satisfy these two constraints, then the update equation becomes:

S^{r+1} = (a_r / a_{r+1}) S^r + 1    (3)

If a_{r+1} ≥ a_r for all r (as in the method we use below), then S^{r+1} ≤ S^r + 1. If a_{r+1} = a_r for all r, then S^r = r; that is, if the datapoint weights are constant, then S^r is the true sample size. Sufficient statistics tracking (of μ^{r+1}, C^{r+1}, and S^{r+1}) thus requires remembering only their previous values and b_r, assuming that a_{r+1} can be efficiently computed. The a_{r+1} weights are based on the "fit" between the current estimated covariance matrix and the input data: poor fit implies that a change in the underlying generating structure is more likely. For multivariate Gaussian data, the "fit" between X^{r+1} and the current estimated covariance matrix C^r is given by the Mahalanobis distance D^{r+1} [8]: D^{r+1} = (X^{r+1} − μ^r)(C^r)^{-1}(X^{r+1} − μ^r)^T. A large Mahalanobis distance (i.e., poor fit) for some datapoint could indicate simply an outlier; inferring that the underlying generating structure has changed requires large Mahalanobis distances over multiple datapoints. The likelihood of the (weighted) sequence of D^r's is analytically intractable, and so we cannot use the D^r values directly. We instead base the a_{r+1} weights on the (weighted) pooled p-value of the individual p-values for the Mahalanobis distance of each datapoint. The Mahalanobis distance of a V-dimensional datapoint from a covariance matrix estimated from a sample of size N is distributed as Hotelling's T^2 with parameters p = V and m = N − 1.
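Equation (3) and the Mahalanobis fit score can be sketched as follows (names and the toy weight sequences are ours; the Hotelling T^2 p-value computation is omitted here):

```python
import numpy as np

def update_sample_size(S, a_prev, a_new):
    """Eq. (3): S_{r+1} = (a_r / a_{r+1}) * S_r + 1."""
    return (a_prev / a_new) * S + 1.0

def mahalanobis_sq(x, mu, C):
    """Squared Mahalanobis distance of datapoint x from N(mu, C)."""
    diff = x - mu
    return float(diff @ np.linalg.solve(C, diff))

# Constant weights recover the true sample size...
S = 0.0
for _ in range(100):
    S = update_sample_size(S, 1.0, 1.0)
# ...while weights that double every step (an extreme illustration) keep
# the effective sample size bounded (here it converges to 2).
S_bounded = 0.0
for _ in range(100):
    S_bounded = update_sample_size(S_bounded, 1.0, 2.0)
```

The bounded case illustrates the diligence mechanism discussed in Section 3.1: growing weights cap the effective sample size, so recent datapoints dominate.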
The p-value for the Mahalanobis distance D^{r+1} is thus: p_{r+1} = T^2(x > D^{r+1} | p = V, m = S^r − 1), where S^r is the effective sample size. Let Φ(x, y) be the cdf of a Gaussian with mean 0 and variance y, evaluated at x. Then Liptak's method for weighted pooling of the individual p-values [7] gives the following definition: ρ_{r+1} = Φ(Σ_{i=1}^r a_i Φ^{-1}(p_i, 1), Σ_{i=1}^r a_i^2) = Φ(η_{r+1}, τ_{r+1}), where the update equations are η_{r+1} = η_r + a_r Φ^{-1}(p_r, 1) and τ_{r+1} = τ_r + a_r^2. (ρ_{r+1} cannot include p_{r+1} without being circular: p_{r+1} would have to be appropriately weighted by a_{r+1}, but that weight depends on ρ_{r+1}.)

There are many ways to convert the pooled p-value ρ_{r+1} into a weight a_{r+1}. We use the following strategy: if ρ_{r+1} is greater than some threshold T (i.e., the data sequence is sufficiently likely given the current model), then keep the weight constant; if ρ_{r+1} is less than T, then increase a_{r+1} linearly and inversely in ρ_{r+1}, up to a maximum of γ a_r at ρ_{r+1} = 0. Mathematically, this transformation is:

a_{r+1} = a_r · max(1, (γT − (γ − 1) ρ_{r+1}) / T)    (4)

Efficient computation of a_{r+1} thus only requires additionally tracking ρ_r, η_r, and τ_r. We can efficiently track the relevant sufficient statistics in an online fashion, and so the only remaining step is to learn the corresponding graphical model. The implementation in this paper uses the PC algorithm [12], a standard constraint-based structure learning algorithm. A range of alternative structure learning algorithms could be used instead, depending on the assumptions one is able to make. Learning graphical model structure is computationally expensive [2], and so one should balance the accuracy of the current model against the computational cost of relearning. More precisely, graph relearning should be most frequent after an inferred underlying change, though there should be a non-zero chance of relearning even when the structure appears to be relatively stable (since the structure could be slowly drifting).
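Liptak pooling and the weight rule in equation (4) can be sketched with the Python standard library; the default parameter values T = 0.05 and γ = 3 mirror those the paper reports choosing for its simulations, while the function names and test inputs are ours:

```python
from math import sqrt
from statistics import NormalDist

def pooled_p(pvals, weights):
    """Liptak's weighted pooled p-value:
    rho = Phi(sum_i a_i * Phi^{-1}(p_i), variance = sum_i a_i^2).
    (eta and tau here are the running sums the text tracks recursively.)"""
    std = NormalDist()                       # Phi with mean 0, variance 1
    eta = sum(a * std.inv_cdf(p) for a, p in zip(weights, pvals))
    tau = sum(a * a for a in weights)
    return NormalDist(0.0, sqrt(tau)).cdf(eta)

def next_weight(a, rho, T=0.05, gamma=3.0):
    """Eq. (4): constant weight when rho >= T; otherwise scale up
    linearly in (T - rho), reaching gamma * a at rho = 0."""
    return a * max(1.0, (gamma * T - (gamma - 1.0) * rho) / T)
```

A run of individually unremarkable but consistently small p-values drives the pooled value down, which is what lets LoSST distinguish sustained change from a one-off outlier.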
In practice, the LoSST algorithm probabilistically relearns based on the inverse of ρ_r: the probability of relearning at time r + 1 is a noisy-OR gate of the probability of relearning at time r and a weighted (1 − ρ_{r+1}). Mathematically, P_{r+1}(relearn) = P_r(relearn) + ν(1 − ρ_{r+1}) − P_r(relearn) · ν(1 − ρ_{r+1}), where ν ∈ [0, 1] modifies the frequency of graph relearning: large values result in more frequent relearning and small values in less. If a relearning event is triggered at datapoint r, then a new graphical model structure and parameters are learned, and P_r(relearn) is reset to 0. In general, ρ_r is lower when changepoints are detected, so P_r(relearn) will increase more quickly around changepoints, and graph relearning will become more frequent. During times of stability, ρ_r will be comparatively large, resulting in a slower increase of P_r(relearn) and thus less frequent graph relearning.

3.1 Convergence vs. diligence in LoSST

LoSST is capable of exhibiting different long-run properties, depending on its parameters. Convergence is a standard desideratum: if there is a stable structure in the limit, then the algorithm's output should stabilize on that structure. In contexts in which the true structure can change, another desirable property for learning algorithms is diligence: if the generating structure has a change of a given size (that manifests in the data), then the algorithm should detect and respond to that change within a fixed number of datapoints (regardless of the amount of previous data). Both diligence and convergence are desirable methodological virtues, but they are provably incompatible: no learning algorithm can be both diligent and convergent [5]. Intuitively, they are incompatible because they must respond differently to improbable datapoints: convergent algorithms must tolerate them (since such data occur with probability 1 in the infinite limit), while diligent algorithms must regard them as signals that the structure has changed.
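The noisy-OR accumulation of relearning probability can be sketched as follows; ν = .005 matches the value used in the paper's third simulation, while the ρ sequences are illustrative:

```python
def update_relearn_prob(P, rho, nu=0.005):
    """Noisy-OR update: P_{r+1} = P_r + nu*(1 - rho) - P_r * nu*(1 - rho),
    equivalently 1 - (1 - P_r) * (1 - nu*(1 - rho))."""
    q = nu * (1.0 - rho)
    return P + q - P * q

# Unlikely data (small pooled p-value rho) drives the probability up faster.
P_stable, P_change = 0.0, 0.0
for _ in range(200):
    P_stable = update_relearn_prob(P_stable, rho=0.9)   # likely data
    P_change = update_relearn_prob(P_change, rho=0.0)   # unlikely data
```

The noisy-OR form keeps the probability in [0, 1] no matter how many updates accumulate, and resetting it to 0 after a relearning event restarts the accumulation.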
If γ = 1, then LoSST is a convergent algorithm, since it follows that a_{r+1} = a_r for all r (which is a sufficient condition for convergence). For γ > 1, the behavior of LoSST depends on T. If T < 0, then we again have a_{r+1} = a_r for all r, and so LoSST is convergent. LoSST is also provably convergent if T is time-indexed such that T_r = f(S^r) for some f with range (0, 1], where Σ_{i=1}^∞ (1 − f(i)) converges (footnote 4). In contrast, if T > 1 and γ > 1, then LoSST is provably diligent (footnote 5). We conjecture that there are sequences of time-indexed T_r < 1 that will also yield diligent versions of LoSST, analogously to the condition given above for convergence. Interestingly, if γ > 1 and 0 < T < 1, then LoSST is neither convergent nor diligent, but rather strikes a balance between the two desiderata.

Footnote 2: Recall that the sufficient statistics are updated after every datapoint.
Footnote 3: Recall that ρ_r is a pooled p-value, so low values indicate unlikely data.
Footnote 4 (proof sketch): Σ_{i=r}^∞ (1 − q_i) can be shown to be an upper bound on the probability that (1 − ρ_i) > q_i will occur for some i in [r, ∞), where q_i is the i-th element of the sequence Q of lower threshold values. Any sequence Q such that Σ_{i=1}^∞ (1 − q_i) < 1 will then guarantee that an infinite amount of unbiased data will be accumulated in the infinite limit. This provides probability-1 convergence for LoSST, since the structure learning method converges with probability 1 in the limit. If Q is prepended with arbitrary strictly positive threshold values, the first element of Q will still be reached infinitely many times with probability 1 in the infinite limit, and so LoSST will still converge with probability 1, even using these expanded sequences.
In particular, these versions (a) tend to converge towards stable structures, but provably do not actually converge, since they remain sensitive to outliers; and (b) respond quickly to changes in generating structure, but only exponentially fast in the number of previous datapoints, rather than within a fixed interval. The full behavior of LoSST in this parameter regime, including the extent and sensitivity of the trade-offs, is an open question for future research. For the simulations below, unsystematic investigation led to T = 0.05 and γ = 3, which seemed to appropriately trade off convergence vs. diligence in that context.

4 Simulation results

We used synthetic data to evaluate the performance of LoSST given known ground truth. All simulations used scenarios in which either the ground truth parameters or the ground truth graph (and parameters) changed during the course of data collection. Before the first changepoint, there should be no significant difference between LoSST and a standard batch-mode learner, since those datapoints are globally i.i.d. Performance on these datapoints thus provides information about the performance cost (if any) of online learning using LoSST, relative to traditional algorithms. After a changepoint, one is interested both in the absolute performance of LoSST (i.e., can it track the changes?) and in its performance relative to a standard batch-mode algorithm (i.e., what performance gain does it provide?). We used the PC algorithm [12] as our baseline batch-mode learning algorithm; we conjecture that any other standard graphical model structure learning algorithm would perform similarly, given the graphs and sample sizes in our simulations. In order to directly compare the performance of LoSST and PC, we imposed a fixed "graph relearning" schedule (footnote 6) on LoSST. The first set of simulations used datasets with 2000 datapoints, where the SEM graph and parameters both changed after the first 1000 datapoints.
500 datasets were generated for each of a range of ⟨#variables, MaxDegree⟩ pairs (footnote 7), where each dataset used two different, randomly generated SEMs of the specified size and degree. Figures 1(a-c) show the mean edge addition, removal, and orientation errors (respectively) of LoSST as a function of time, and Figures 1(d-f) show the means of #errors_PC − #errors_LoSST for each error type (i.e., higher numbers imply that LoSST outperforms PC). In all figures, each ⟨variables, degree⟩ pair is a distinct line. As expected, LoSST was basically indistinguishable from PC for the first 1000 datapoints; the lines in Figures 1(d-f) for that interval are all essentially zero. After the underlying generating model changes, however, there are significant differences. The PC algorithm performs quite poorly because the full dataset is essentially a mixture from two different distributions, which induces a large number of spurious associations. In contrast, the LoSST algorithm finds large Mahalanobis distances for those datapoints, which lead to higher weights, which lead it to learn (approximately) the new underlying graphical model. In practice, LoSST typically stabilized on a new model by roughly 250 datapoints after the changepoint. The second set of simulations was identical to the first (500 runs each for various pairs of variable number and edge degree), except that the graph was held constant throughout and only the SEM parameters changed after 1000 datapoints. Figures 2(a-c) and 2(d-f) report, for these simulations, the same measures as Figures 1(a-c) and 1(d-f). Again, LoSST and PC performed basically identically for the first 1000 datapoints. Performance after the parameter change did not follow quite the same pattern as before, however.
LoSST again does much better on edge addition and orientation errors, but performed significantly worse on edge removal errors for the first 200 points following the change. When a change occurs, PC initially responds by adding edges to the output, while LoSST responds by being more cautious in its inferences (since the effective sample size shrinks after a change). The short-term impact on each algorithm is thus: PC's output tends to be a superset of the original edges, while LoSST outputs fewer edges. As a result, PC can outperform LoSST for a brief time on the edge removal metric in these types of cases, in which the change involves only parameters, not graph structure. The third set of simulations was designed to explore in detail the performance with probabilistic relearning. We randomly generated a single dataset with 10,000 datapoints, where the underlying SEM graph and parameters changed after every 1000 datapoints. Each SEM had 10 variables and a maximum degree of 7. We then ran LoSST with probabilistic relearning (ν = .005) 500 times on this dataset.

Footnote 5 (proof sketch): By equation (4), T > 1 and γ > 1 imply γ − (γ − 1)/T > 1, which implies a_{r+1} ≥ a_r (γ − (γ − 1)/T) > a_r for all r. This last strict inequality implies that the effective sample size has a finite upper bound (= (γT − γ + 1) / ((γ − 1)(T − 1)) if ρ_r = 1 for all r), and the majority of the effective sample comes from recent data points. These two conditions are jointly sufficient for diligence.
Footnote 6: LoSST relearned graphs and PC was rerun after datapoints {25, 50, 100, 200, 300, 500, 750, 1000, 1025, 1050, 1100, 1200, 1300, 1500, 1750, 2000}.
Footnote 7: Specifically, ⟨4, 3⟩, ⟨8, 3⟩, ⟨10, 3⟩, ⟨10, 7⟩, ⟨15, 4⟩, ⟨15, 9⟩, ⟨20, 5⟩, and ⟨20, 12⟩.

Figure 1: Structure & parameter changes: (a-c) LoSST errors; (d-f) LoSST improvement over PC.
Figure 2: Parameter changes: (a-c) LoSST errors; (d-f) LoSST improvement over PC.
Figure 3(a) shows the (observed) expected number of "relearnings" in each 25-datapoint window. As expected, there are substantial relearning peaks after each structure shift, and the expected number of relearnings persisted at roughly 0.1 per 25 datapoints throughout the stable periods. Figures 3(b-d) provide error information: the smooth green lines indicate the mean edge addition, removal, and orientation errors (respectively) during learning, and the blocky blue lines indicate the LoSST errors if graph relearning occurred after every datapoint. Although there are many fewer graph relearnings with the probabilistic schedule, overall errors did not significantly increase.

Figure 3: (a) LoSST expected relearnings; (b-d) Expected edge additions, removals, and flips, against constant relearning.
Figure 4: (a) Effective sample size during LoSST run on BLS data; (b) Pooled p-values; (c) Mahalanobis distances.

5 Application to US price index volatility

To test the performance of the LoSST algorithm on real-world data, we applied it to seasonally adjusted price index data from the U.S. Bureau of Labor Statistics. We limited the data to commodities/services with data going back to at least 1967, resulting in a data set of 6 variables: Apparel, Food, Housing, Medical, Other, and Transportation. The data were collected monthly from 1967 to 2011, resulting in 529 data points. Because of significant trends in the indices over time, we used month-to-month differences. Figure 4(a) shows the change in effective sample size, where the key observation is that change detection prompts significant drops in the effective sample size. Figures 4(b) and 4(c) show the pooled p-value and Mahalanobis distance for each month, which are the drivers of the sample size changes. The Great Moderation was a well-known macroeconomic phenomenon between 1980 and 2007, in which the U.S. financial market underwent a slow but steady reduction in volatility.
LoSST appears to detect exactly such a shift in the volatility of the relationships between these price indexes, though it detects another shift shortly after 2000 (footnote 8). This real-world case study also demonstrates the importance of using pooled p-values, as that is why LoSST does not respond to the single-month spike in Mahalanobis distance in 1995, but does respond to the extended sequence of slightly-above-average Mahalanobis distances around 1980.

6 Discussion and future research

The LoSST algorithm is suitable for locally stationary structures, but there are obviously limits. In particular, it will perform poorly if the generating structure changes very rapidly, or if the datapoints are a random-order mixture from multiple structures. An important future research direction is to characterize, and then improve, LoSST's performance on more rapidly varying structures. Various heuristic aspects of LoSST could also potentially be replaced by more normative procedures, though as noted earlier, many will not work without substantial revision (e.g., obvious Bayesian methods). The algorithm could also be extended to have the current learned model influence the a_r weights. Suppose particular graphical edges or adjacencies have not changed over a long period of time, or have been stable over multiple relearnings. In that case, one might plausibly conclude that those connections are less likely to change, and so much greater error should be required to relearn those connections. In practice, this extension would require the a_r weights to vary across ⟨V_i, V_j⟩ pairs, which significantly complicates the mathematics and memory requirements of the sufficient statistic tracking. It is an open question whether the (presumably) improved tracking would compensate for the additional computational and memory cost in particular domains.
We have focused on SEMs, but there are many other types of graphical models; for example, Bayesian networks have the same graph type but are defined over discrete variables with conditional probability tables. Tracking the sufficient statistics for Bayes net structure learning is substantially more costly, and we are currently investigating ways to learn the necessary information in a tractable, online fashion. Similarly, our graph learning relies on constraint-based structure learning, since the relevant scores in score-based methods (such as [3]) do not decompose in a manner that is suitable for online learning. We are thus investigating alternative scores, as well as heuristic approximations to principled score-based search. There are many real-world contexts in which batch-mode structure learning is either infeasible or inappropriate. In particular, the real world frequently involves dynamically varying structures that our algorithms must track over time. The online structure learning algorithm presented here has great potential to perform well in a range of challenging contexts, and at little cost in "traditional" settings.

Acknowledgments

Thanks to Joe Ramsey and Rob Tillman for help with the simulations, and three anonymous reviewers for helpful comments. DD was partially supported by a James S. McDonnell Foundation Scholar Award.

Footnote 8: This shift is almost certainly due to the U.S. recession that occurred in March to November of that year.

References

[1] R. P. Adams and D. J. C. MacKay. Bayesian online changepoint detection. Technical report, University of Cambridge, Cambridge, UK, 2007. arXiv:0710.3742v1 [stat.ML].
[2] D. M. Chickering. Learning Bayesian networks is NP-complete. In Proceedings of AI and Statistics, 1995.
[3] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507–554, 2002.
[4] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection algorithm.
IEEE Transactions on Signal Processing, 8:2961–2974, 2005.
[5] E. Kummerfeld and D. Danks. Model change and methodological virtues in scientific inference. Technical report, Carnegie Mellon University, Pittsburgh, Pennsylvania, 2013.
[6] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996.
[7] T. Liptak. On the combination of independent tests. Magyar Tud. Akad. Mat. Kutato Int. Kozl., 3:171–197, 1958.
[8] P. C. Mahalanobis. On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India, 2:49–55, 1936.
[9] A. McCallum, D. Freitag, and F. C. N. Pereira. Maximum entropy Markov models of information extraction and segmentation. In Proceedings of ICML-2000, pages 591–598, 2000.
[10] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[11] M. R. Siracusa and J. W. Fisher III. Tractable Bayesian inference of time-series dependence structure. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, 2009.
[12] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.
[13] R. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[14] M. Talih and N. Hengartner. Structural learning with time-varying components: tracking the cross-section of financial time series. Journal of the Royal Statistical Society, Series B, 67(3):321–341, 2005.
Phase Retrieval using Alternating Minimization

Praneeth Netrapalli, Department of ECE, The University of Texas at Austin, Austin, TX 78712, praneethn@utexas.edu
Prateek Jain, Microsoft Research India, Bangalore, India, prajain@microsoft.com
Sujay Sanghavi, Department of ECE, The University of Texas at Austin, Austin, TX 78712, sanghavi@mail.utexas.edu

Abstract

Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. Over the last two decades, a popular generic empirical approach to the many variants of this problem has been one of alternating minimization; i.e., alternating between estimating the missing phase information and the candidate solution. In this paper, we show that a simple alternating minimization algorithm geometrically converges to the solution of one such problem: finding a vector x from y and A, where y = |A^T x| and |z| denotes a vector of the element-wise magnitudes of z, under the assumption that A is Gaussian. Empirically, our algorithm performs similarly to recently proposed convex techniques for this variant (which are based on "lifting" to a convex matrix problem) in sample complexity and robustness to noise. However, our algorithm is much more efficient and can scale to large problems. Analytically, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close-to-optimal scaling for the case when the unknown vector is sparse. Our work represents the only known theoretical guarantee for alternating minimization for any variant of phase retrieval problems in the non-convex setting.

1 Introduction

In this paper we are interested in recovering a complex vector x* ∈ C^n (footnote 1) from the magnitudes of its linear measurements. That is, for a_i ∈ C^n, if

y_i = |⟨a_i, x*⟩|, for i = 1, . . . , m    (1)

then the task is to recover x* using y and the measurement matrix A = [a_1 a_2 . . . a_m].
The above problem arises in many settings where it is harder or infeasible to record the phase of measurements, while recording the magnitudes is significantly easier. This problem, known as phase retrieval, is encountered in several applications in crystallography, optics, spectroscopy and tomography [14]. Moreover, the problem is broadly studied in the following two settings:

(i) The measurements in (1) correspond to the Fourier transform (the number of measurements here is equal to n) and there is some a priori information about the signal.

(ii) The set of measurements y is overcomplete (i.e., m > n), while some a priori information about the signal may or may not be available.

Footnote 1: Our results also cover the real case, i.e., where all quantities are real.

In the first case, various types of a priori information about the underlying signal, such as positivity, magnitude information on the signal [11], sparsity [25] and so on, have been studied. In the second case, algorithms for various measurement schemes, such as Fourier oversampling [21], multiple random illuminations [4, 28] and wavelet transforms [28], have been suggested. By and large, the most well-known methods for solving this problem are the error reduction algorithms due to Gerchberg and Saxton [13] and Fienup [11], and variants thereof. These algorithms are alternating projection algorithms that iterate between the unknown phases of the measurements and the unknown underlying vector. Though the empirical performance of these algorithms has been well studied [11, 19], and they are used in many applications [20], there are not many theoretical guarantees regarding their performance. More recently, a line of work [7, 6, 28] has approached this problem from a different angle, based on the realization that recovering x* is equivalent to recovering the rank-one matrix x* x*^T, i.e., its outer product.
Inspired by the recent literature on trace norm relaxation of the rank constraint, they design SDPs to solve this problem. Refer to Section 1.1 for more details. In this paper we go back to the empirically more popular ideology of alternating minimization; we develop a new alternating minimization algorithm, for which we show that (a) empirically, it noticeably outperforms convex methods, and (b) analytically, a natural resampled version of this algorithm requires O(n log^3 n) i.i.d. random Gaussian measurements to geometrically converge to the true vector.

Our contribution:
• The iterative part of our algorithm is implicit in previous work [13, 11, 28, 4]; the novelty in our algorithmic contribution is the initialization step, which makes it more likely for the iterative procedure to succeed; see Figures 1 and 2.
• Our analytical contribution is the first theoretical guarantee regarding the convergence of alternating minimization for the phase retrieval problem in a non-convex setting.
• When the underlying vector is sparse, we design another algorithm that achieves a sample complexity of O((x*_min)^{-4} log n + log^3 k), where k is the sparsity and x*_min is the minimum non-zero entry of x*. This algorithm also runs over C^n and scales much better than SDP-based methods.

Besides being an empirically better algorithm for this problem, our work is also interesting in a broader sense: there are many problems in machine learning where the natural formulation of the problem is non-convex; examples include rank-constrained problems, applications of EM algorithms, etc., and alternating minimization has good empirical performance. However, the methods with the best (or only) analytical guarantees involve convex relaxations (e.g., by relaxing the rank constraint and penalizing the trace norm). In most of these settings, correctness of alternating minimization is an open question.
We believe that our results in this paper are of interest, and may have implications, in this larger context. The rest of the paper is organized as follows: In Section 1.1, we briefly review related work. We clarify our notation in Section 2. We present our algorithm in Section 3 and the main results in Section 4. We present our results for the sparse case in Section 5. Finally, we present experimental results in Section 6.

1.1 Related Work

Phase Retrieval via Non-Convex Procedures: Despite the huge amount of work it has attracted, phase retrieval has been a long-standing open problem. Early work in this area focused on using holography to capture the phase information along with magnitude measurements [12]. However, computational methods for reconstruction of the signal using only magnitude measurements received a lot of attention, due to their applicability in resolving spurious noise, fringes, optical system aberrations and so on, and due to difficulties in the implementation of interferometer setups [9]. Though such methods have been developed to solve this problem in various practical settings [8, 20], our theoretical understanding of this problem is still far from complete. Many papers have focused on determining conditions under which (1) has a unique solution; see [24] and references therein. However, the uniqueness results of these papers do not resolve the algorithmic question of how to find the solution to (1). Since the seminal work of Gerchberg and Saxton [13] and Fienup [11], many iterated projection algorithms have been developed, targeted towards various applications [1, 10, 2]. [21] first suggested the use of multiple magnitude measurements to resolve the phase problem. This approach has been successfully used in many practical applications; see [9] and references therein.
Following the empirical success of these algorithms, researchers were able to explain their success in some instances [29] using Bregman's theory of iterated projections onto convex sets [3]. However, many instances, such as the one we consider in this paper, are out of reach of this theory, since they involve magnitude constraints, which are non-convex. To the best of our knowledge, there are no theoretical results on the convergence of these approaches in a non-convex setting.

Phase Retrieval via Convex Relaxation: An interesting recent approach for solving this problem formulates it as one of finding the rank-one solution to a system of linear matrix equations. The papers [7, 6] then take the approach of relaxing the rank constraint with a trace norm penalty, making the overall algorithm a convex program (called PhaseLift) over n × n matrices. Another recent line of work [28] takes a similar but different approach: it uses an SDP relaxation (called PhaseCut) that is inspired by the classical SDP relaxation for the max-cut problem. To date, these convex methods are the only ones with analytical guarantees on statistical performance [5, 28] (i.e., the number m of measurements required to recover x*), under an i.i.d. random Gaussian model on the measurement vectors a_i. However, by "lifting" a vector problem to a matrix one, these methods lead to a much larger representation of the state space, and to higher computational cost as a result.

Sparse Phase Retrieval: A special case of the phase retrieval problem which has received a lot of attention recently is when the underlying signal x* is known to be sparse. Though this problem is closely related to the compressed sensing problem, the lack of phase information makes it harder. However, the ℓ1 regularization approach of compressed sensing has been successfully used in this setting as well. In particular, if x* is sparse, then the corresponding lifted matrix x* x*^T is also sparse.
[22, 18] use this observation to design ℓ1 regularized SDP algorithms for phase retrieval of sparse vectors. For random Gaussian measurements, [18] shows that ℓ1 regularized PhaseLift recovers x∗ correctly if the number of measurements is Ω(k² log n). By the results of [23], this result is tight up to logarithmic factors for ℓ1 and trace norm regularized SDP relaxations. Alternating Minimization (a.k.a. ALS): Alternating minimization has been successfully applied to many applications in the low-rank matrix setting, for example clustering, sparse PCA, non-negative matrix factorization, and signed network prediction; see [15] and the references therein. However, despite this empirical success, for most of these problems there are no theoretical guarantees regarding its convergence, except to a local minimum. The only exceptions are the results in [16, 15], which give provable guarantees for alternating minimization for the problems of matrix sensing and matrix completion. 2 Notation We use bold capital letters (A, B, etc.) for matrices, bold lower-case letters (x, y, etc.) for vectors, and non-bold letters (α, U, etc.) for scalars. For every complex vector w ∈ Cn, |w| ∈ Rn denotes its element-wise magnitude vector. wT and AT denote the Hermitian transposes of the vector w and the matrix A respectively. e1, e2, etc. denote the canonical basis vectors in Cn. z̄ denotes the complex conjugate of the complex number z. In this paper we use the standard Gaussian (or normal) distribution over Cn: a is said to be distributed according to this distribution if a = a1 + i a2, where a1 and a2 are independent and each distributed according to N(0, I). We also define Ph(z) := z / |z| for every z ∈ C, and dist(w1, w2) := √(1 − |⟨w1, w2⟩|² / (∥w1∥₂² ∥w2∥₂²)) for every w1, w2 ∈ Cn. Finally, we use the shorthand wlog for “without loss of generality” and whp for “with high probability”. 3 Algorithm In this section, we present our alternating minimization based algorithm for solving the phase retrieval problem.
Let A ∈ Cn×m be the measurement matrix, with ai as its ith column; similarly, let y be the vector of recorded magnitudes, so that y = |AT x∗|. Recall that, given y and A, the goal is to recover x∗.

Algorithm 1 AltMinPhase
input A, y, t0
1: Initialize x0 ← top singular vector of Σi yi² ai aiT
2: for t = 0, · · · , t0 − 1 do
3:   Ct+1 ← Diag(Ph(AT xt))
4:   xt+1 ← argminx∈Rn ∥AT x − Ct+1 y∥2
5: end for
output xt0

If we had access to the true phase c∗ of AT x∗ (i.e., c∗i = Ph(⟨ai, x∗⟩)) and m ≥ n, then our problem would reduce to one of solving a system of linear equations: C∗ y = AT x∗, where C∗ := Diag(c∗) is the diagonal matrix of phases. Of course we do not know C∗; hence one approach to recovering x∗ is to solve: argminC,x ∥AT x − Cy∥2, (2) where x ∈ Cn and C ∈ Cm×m is a diagonal matrix with each diagonal entry of magnitude 1. Note that the above problem is not convex, since C is restricted to be a diagonal phase matrix, and hence one cannot use standard convex optimization methods to solve it. Instead, our algorithm uses the well-known alternating minimization approach: alternately update x and C so as to minimize (2). Note that given C, the vector x can be obtained by solving the least squares problem minx ∥AT x − Cy∥2. Since the number of measurements m is larger than the dimensionality n and since each entry of A is sampled from independent Gaussians, AAT is invertible with probability 1. Hence, the above least squares problem has a unique solution. On the other hand, given x, the optimal C is given by C = Diag(Ph(AT x)). While the above algorithm is simple and intuitive, it is known that with bad initial points the solution might not converge to x∗. In fact, this algorithm with a uniformly random initial point has been empirically evaluated, for example in [28], where it performs worse than SDP based methods. Moreover, since the underlying problem is non-convex, standard analysis techniques fail to guarantee convergence to the global optimum x∗. Hence, the key challenges here are: a) a good initialization step for this method, and b) establishing this method’s convergence to x∗.
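As a concrete illustration, here is a minimal NumPy sketch of Algorithm 1 under the i.i.d. complex Gaussian measurement model. This is our own illustrative code, not the authors' implementation; in particular it solves the least squares step exactly rather than by conjugate gradient:

```python
import numpy as np

def alt_min_phase(A, y, t0):
    """Sketch of AltMinPhase: A is n x m with the measurement vectors a_i as
    columns; y = |A^T x*| holds the recorded magnitudes (A^T denotes the
    Hermitian transpose, following the paper's notation)."""
    n, m = A.shape
    # Step 1: spectral initialization, top eigenvector of (1/m) sum_i y_i^2 a_i a_i^T.
    S = (A * y**2) @ A.conj().T / m
    eigvals, eigvecs = np.linalg.eigh(S)   # S is Hermitian; eigenvalues ascending
    x = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
    for _ in range(t0):
        # Step 3: C_{t+1} <- Diag(Ph(A^T x_t)), kept as a vector of unit-modulus phases.
        c = A.conj().T @ x
        c = c / np.abs(c)
        # Step 4: x_{t+1} <- argmin_x || A^T x - C_{t+1} y ||_2  (least squares).
        x, *_ = np.linalg.lstsq(A.conj().T, c * y, rcond=None)
    return x
```

Since x∗ is only identifiable up to a global phase, accuracy should be measured with the phase-invariant dist(·,·) of Section 2 (equivalently, the normalized inner product |⟨x̂, x∗⟩|) rather than a plain Euclidean norm.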
We address the first key challenge in our AltMinPhase algorithm (Algorithm 1) by initializing x as the largest singular vector of the matrix S = (1/m) Σi yi² ai aiT. Theorem 4.1 shows that when A is sampled from the standard complex normal distribution, this initialization is accurate. In particular, if m ≥ C1 n log³ n for a large enough C1 > 0, then whp we have ∥x0 − x∗∥2 ≤ 1/100 (or any other constant). Theorem 4.2 addresses the second key challenge and shows that a variant of AltMinPhase (see Algorithm 2) actually converges to the global optimum x∗ at a linear rate. See Section 4 for a detailed analysis of our algorithm. We would like to stress that not only does a natural variant of our proposed algorithm have rigorous theoretical guarantees, it is also effective practically, as each of its iterations is fast, has a closed form solution, and does not require SVD computation. AltMinPhase has statistical complexity similar to that of PhaseLift and PhaseCut while being much more efficient computationally. In particular, for accuracy ǫ, we need to solve each least squares problem only up to accuracy O(ǫ). Now, since the measurement matrix A is sampled from a Gaussian with m > Cn, it is well conditioned. Hence, using the conjugate gradient method, each such step takes O(mn log(1/ǫ)) time. When m = O(n) and we have geometric convergence, the total time taken by the algorithm is O(n² log²(1/ǫ)). SDP based methods, on the other hand, require Ω(n³/√ǫ) time. Moreover, our initialization step increases the likelihood of successful recovery as opposed to a random initialization (which has been considered so far in prior work). Refer to Figure 1 for an empirical validation of these claims. Figure 1: Sample and time complexity of various methods for Gaussian measurement matrices A. Figure 1(a) compares the number of measurements required for successful recovery by various methods.
We note that our initialization improves the sample complexity over that of random initialization (AltMin (random init)) by a factor of 2. AltMinPhase requires a similar number of measurements as PhaseLift and PhaseCut. Figure 1(b) compares the running time of various algorithms on a log scale. Note that AltMinPhase is almost two orders of magnitude faster than PhaseLift and PhaseCut. 4 Main Results: Analysis In this section we describe the main contribution of this paper: provable statistical guarantees for the success of alternating minimization in solving the phase recovery problem. To this end, we consider the setting where each measurement vector ai is i.i.d. and sampled from the standard complex normal distribution. We would like to stress that all the existing guarantees for phase recovery also use exactly the same setting [6, 5, 28]. Table 1 presents a comparison of the theoretical guarantees of Algorithm 2 as compared to PhaseLift and PhaseCut.

Table 1: Comparison of Algorithm 2 with PhaseLift and PhaseCut.
- Algorithm 2: sample complexity O(n (log³ n + log(1/ǫ) log log(1/ǫ))); computational complexity O(n² (log³ n + log²(1/ǫ) log log(1/ǫ)))
- PhaseLift [5]: sample complexity O(n); computational complexity O(n³/ǫ²)
- PhaseCut [28]: sample complexity O(n); computational complexity O(n³/√ǫ)

Though the sample complexity of Algorithm 2 is off by log factors from that of PhaseLift and PhaseCut, it is a factor of O(n) better than them in computational complexity. Note that we can solve the least squares problem in each iteration approximately by using the conjugate gradient method, which requires only O(mn) time. Our proof of the convergence of alternating minimization can be broken into two key results. We first show that if m ≥ Cn log³ n, then whp the initialization step used by AltMinPhase returns x0 which is at most a constant distance away from x∗. Furthermore, that constant can be controlled by using more samples (see Theorem 4.1).
We then show that if xt is a fixed vector such that dist(xt, x∗) < c (small enough) and A is sampled independently of xt with m > Cn (C large enough), then whp xt+1 satisfies dist(xt+1, x∗) < (3/4) dist(xt, x∗) (see Theorem 4.2). Note that our analysis critically requires xt to be “fixed” and independent of the sample matrix A. Hence, we cannot re-use the same A in each iteration; instead, we need to resample A in every iteration. Using these results, we prove the correctness of Algorithm 2, which is a natural resampled version of AltMinPhase. We now present the two results mentioned above. For our proofs, wlog, we assume that ∥x∗∥2 = 1. Our first result guarantees a good initial vector. Theorem 4.1. There exists a constant C1 such that if m > (C1/c²) n log³ n, then in Algorithm 2, with probability greater than 1 − 4/m² we have: ∥x0 − x∗∥2 < c.

Algorithm 2 AltMinPhase with Resampling
input A, y, ǫ
1: t0 ← c log(1/ǫ)
2: Partition y and (the corresponding columns of) A into t0 + 1 equal disjoint sets: (y0, A0), (y1, A1), · · · , (yt0, At0)
3: x0 ← top singular vector of Σℓ (y0ℓ)² a0ℓ (a0ℓ)T
4: for t = 0, · · · , t0 − 1 do
5:   Ct+1 ← Diag(Ph((At+1)T xt))
6:   xt+1 ← argminx∈Rn ∥(At+1)T x − Ct+1 yt+1∥2
7: end for
output xt0

The second result proves geometric decay of the error assuming a good initialization. Theorem 4.2. There exist constants c, ĉ and c̃ such that in iteration t of Algorithm 2, if dist(xt, x∗) < c and the number of columns of At is greater than ĉ n log(1/η), then, with probability more than 1 − η, we have: dist(xt+1, x∗) < (3/4) dist(xt, x∗), and ∥xt+1 − x∗∥2 < c̃ dist(xt, x∗). Proof. For simplicity of notation in the proof of the theorem, we will use A for At+1, C for Ct+1, x for xt, x+ for xt+1, and y for yt+1. Now consider the update in the (t + 1)th iteration: x+ = argminx̃∈Rn ∥AT x̃ − Cy∥2 = (AAT)⁻¹ ACy = (AAT)⁻¹ A D AT x∗, (3) where D is a diagonal matrix with Dℓℓ := Ph((aℓT x) · conj(aℓT x∗)). Now (3) can be rewritten as: x+ = (AAT)⁻¹ A D AT x∗ = x∗ + (AAT)⁻¹ A (D − I) AT x∗, (4) that is, x+ can be viewed as a perturbation of x∗, and the goal is to bound the error term (the second term above). We break the proof into two main steps: 1. there exists a constant c1 such that |⟨x∗, x+⟩| ≥ 1 − c1 dist(x, x∗) (see Lemma A.2), and 2. |⟨z, x+⟩| ≤ (5/9) dist(x, x∗) for all z such that zT x∗ = 0 (see Lemma A.4). Assuming the above two bounds and choosing c < 1/(100 c1), we can prove the theorem: dist(x+, x∗)² < (25/81) · dist(x, x∗)² / (1 − c1 dist(x, x∗))² ≤ (9/16) dist(x, x∗)², proving the first part of the theorem. The second part follows easily from (3) and Lemma A.2. Intuition and key challenge: If we look at step 6 of Algorithm 2, we see that, for the measurements, we use magnitudes calculated from x∗ and phases calculated from x. Intuitively, this means that we are trying to push x+ towards x∗ (since we use its magnitudes) and towards x (since we use its phases) at the same time. The key intuition behind the success of this procedure is that the push towards x∗ is stronger than the push towards x when x is close to x∗. The key lemma that captures this effect is stated below: Lemma 4.3. Let w1 and w2 be two independent standard complex Gaussian random variables². Let U = |w1| |w2| · |Ph(1 + (√(1 − α²) w2)/(α |w1|)) − 1|. Fix δ > 0. Then, there exists a constant γ > 0 such that if √(1 − α²) < γ, then: E[U] ≤ (1 + δ) √(1 − α²).
² z is standard complex Gaussian if z = z1 + i z2, where z1 and z2 are independent standard normal random variables.

Algorithm 3 SparseAltMinPhase
input A, y, k
1: S ← indices of the k largest values of Σi=1..m |aij yi| over j ∈ [n] {pick the indices of the k largest absolute-value inner products}
2: Apply Algorithm 2 on AS, yS and output the resulting vector with the elements in Sc set to zero.

Table 2: Comparison of Algorithm 3 with ℓ1-PhaseLift when x∗min = Ω(1/√k).
- Algorithm 3: sample complexity O(k (k log n + log(1/ǫ) log log(1/ǫ))); computational complexity O(k² (kn log n + log²(1/ǫ) log log(1/ǫ)))
- ℓ1-PhaseLift [18]: sample complexity O(k² log n); computational complexity O(n³/ǫ²)

Note that the complexity of Algorithm 3 is dominated by the support finding step. If k = O(1), Algorithm 3 runs in quasi-linear time. See Appendix A for a proof of the above lemma and how we use it to prove Theorem 4.2. Combining Theorems 4.1 and 4.2, and the simple observation that ∥xT − x∗∥2 < c̃ dist(xT, x∗) for a constant c̃, we can establish the correctness of Algorithm 2. Theorem 4.4. Suppose the measurement vectors in (1) are independent standard complex normal vectors. For every η > 0, there exists a constant c such that if m > cn (log³ n + log(1/ǫ) log log(1/ǫ)), then, with probability greater than 1 − η, Algorithm 2 outputs xt0 such that ∥xt0 − x∗∥2 < ǫ. 5 Sparse Phase Retrieval In this section, we consider the case where x∗ is known to be sparse, with sparsity k. A natural and practical question to ask here is: can the sample and computational complexity of the recovery algorithm be improved when k ≪ n? Recently, [18] studied this problem for Gaussian A and showed that for ℓ1 regularized PhaseLift, m = O(k² log n) samples suffice for exact recovery of x∗. However, the computational complexity of this algorithm is still O(n³/ǫ²). In this section, we provide a simple extension of our AltMinPhase algorithm, which we call SparseAltMinPhase, for the case of sparse x∗. The main idea behind our algorithm is to first recover the support of x∗. The problem then reduces to phase retrieval of a k-dimensional signal, and we solve this reduced problem using Algorithm 2. The pseudocode for SparseAltMinPhase is presented in Algorithm 3. Table 2 provides a comparison of Algorithm 3 with ℓ1-regularized PhaseLift in terms of sample complexity as well as computational complexity.
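The support-finding step of SparseAltMinPhase (Algorithm 3) is simple enough to state directly in code. A sketch under the same complex Gaussian model (illustrative, not the authors' code):

```python
import numpy as np

def recover_support(A, y, k):
    """Step 1 of SparseAltMinPhase: for each coordinate j, compute the score
    sum_{i=1}^m |a_ij y_i| and keep the k highest-scoring coordinates.
    A is n x m with measurement vectors as columns; y holds the magnitudes."""
    scores = np.abs(A) @ np.abs(y)   # score_j = sum_i |a_ij| * y_i  (y >= 0)
    return np.sort(np.argpartition(scores, -k)[-k:])
```

On the true support the scores are inflated by the correlation between a_ij and y_i, which is what Lemma 5.1 quantifies; off the support, a_ij is independent of y_i.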
The following lemma shows that if the number of measurements is large enough, step 1 of SparseAltMinPhase recovers the support of x∗ correctly. Lemma 5.1. Suppose x∗ is k-sparse with support S and ∥x∗∥2 = 1. If the ai are standard complex Gaussian random vectors and m > (c/(x∗min)⁴) log(n/δ), then Algorithm 3 recovers S with probability greater than 1 − δ, where x∗min is the minimum non-zero entry of x∗ in magnitude. The key step of our proof is to show that if j ∈ supp(x∗), then the random variable Zj = Σi |aij yi| has a significantly higher mean than in the case when j ∉ supp(x∗). Then, by applying appropriate concentration bounds, we can ensure that minj∈supp(x∗) Zj > maxj∉supp(x∗) Zj, and hence our algorithm never picks an element outside the true support set supp(x∗). See Appendix B for a detailed proof of the above lemma. The correctness of Algorithm 3 is now a direct consequence of Lemma 5.1 and Theorem 4.4. For the special case where each non-zero value in x∗ is from {−1/√k, 1/√k}, we have the following corollary: Corollary 5.2. Suppose x∗ is k-sparse with non-zero elements ±1/√k. If the number of measurements m > c (k² log(n/δ) + k log² k + k log(1/ǫ)), then Algorithm 3 will recover x∗ up to accuracy ǫ with probability greater than 1 − δ. Figure 2: (a) & (b): Sample and time complexity for successful recovery using random Gaussian illumination filters. Similar to Figure 1, we observe that AltMinPhase requires a similar number of filters (J) as PhaseLift and PhaseCut, but is computationally much more efficient. We also see that AltMinPhase performs better than AltMin (random init). (c): Recovery error ∥x − x∗∥2 incurred by various methods with an increasing amount of noise (σ). AltMinPhase and PhaseCut perform comparably, while PhaseLift incurs significantly larger error. 6 Experiments In this section, we present an experimental evaluation of AltMinPhase (Algorithm 1) and compare its performance with the SDP based methods PhaseLift [6] and PhaseCut [28].
We also empirically demonstrate the advantage of our initialization procedure over random initialization (denoted AltMin (random init)), which has thus far been considered in the literature [13, 11, 28, 4]. AltMin (random init) is the same as AltMinPhase, except that step 1 of Algorithm 1 is replaced with: x0 ← uniformly random vector from the unit sphere. We first choose x∗ uniformly at random from the unit sphere. In the noiseless setting, a trial is said to succeed if the output x satisfies ∥x − x∗∥2 < 10⁻². For a given dimension, we do a linear search for the smallest m (number of samples) such that the empirical success ratio over 20 runs is at least 0.8. We implemented our methods in Matlab, while we obtained the code for PhaseLift and PhaseCut from the authors of [22] and [28] respectively. We now present results from our experiments in three different settings. Independent Random Gaussian Measurements: Each measurement vector ai is generated from the standard complex Gaussian distribution. This measurement scheme was first suggested by [6] and, to date, is the only scheme with theoretical guarantees. Multiple Random Illumination Filters: We now present our results for the setting where the measurements are obtained using multiple illumination filters; this setting was suggested by [4]. In particular, we choose J vectors z(1), · · · , z(J) and compute the following discrete Fourier transforms: x̂(u) = DFT(x∗ ·∗ z(u)), where ·∗ denotes component-wise multiplication. Our measurements are then the magnitudes of the components of the vectors x̂(1), · · · , x̂(J). The above measurement scheme can be implemented by modulating the light beam or by the use of masks; see [4] for more details. We again perform the same experiments as in the previous setting. Figures 2(a) and (b) present the results.
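Under our reading of this measurement scheme, the magnitude measurements for one illumination filter are simply the moduli of an FFT of the masked signal. A minimal sketch (illustrative; the variable names are ours):

```python
import numpy as np

def filter_measurements(x, filters):
    """For each illumination filter z^(u), return |DFT(x .* z^(u))|:
    the Fourier magnitudes of the component-wise-masked signal."""
    return [np.abs(np.fft.fft(x * z)) for z in filters]
```

Stacking the J magnitude vectors gives Jn measurements of the n-dimensional signal, which is why the plots report the number of filters J rather than m directly.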
We again see that the measurement complexity of AltMinPhase is similar to that of PhaseCut and PhaseLift, but AltMinPhase is orders of magnitude faster than PhaseLift and PhaseCut. Noisy Phase Retrieval: Finally, we study our method in the following noisy measurement scheme: yi = |⟨ai, x∗⟩ + wi| for i = 1, . . . , m, (5) where wi is the noise in the i-th measurement, sampled from N(0, σ²). We fix n = 64 and m = 6n. We then vary the amount of noise σ and measure the ℓ2 error in recovery, i.e., ∥x − x∗∥2, where x is the recovered vector. Figure 2(c) compares the performance of various methods with a varying amount of noise. We observe that our method outperforms PhaseLift and has recovery error similar to that of PhaseCut. Acknowledgments S. Sanghavi would like to acknowledge support from NSF grants 0954059, 1302435, ARO grant W911NF-11-1-0265 and a DTRA YIP award. References [1] J. Abrahams and A. Leslie. Methods used in the structure determination of bovine mitochondrial F1 ATPase. Acta Crystallographica Section D: Biological Crystallography, 52(1):30–42, 1996. [2] H. H. Bauschke, P. L. Combettes, and D. R. Luke. Hybrid projection–reflection method for phase retrieval. JOSA A, 20(6):1025–1034, 2003. [3] L. Bregman. Finding the common point of convex sets by the method of successive projection (Russian). In Dokl. Akad. Nauk SSSR, volume 162, pages 487–490, 1965. [4] E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199–225, 2013. [5] E. J. Candes and X. Li. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. arXiv preprint arXiv:1208.6247, 2012. [6] E. J. Candes, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 2012. [7] A. Chai, M. Moscoso, and G. Papanicolaou. Array imaging using intensity-only measurements.
Inverse Problems, 27(1):015005, 2011. [8] J. C. Dainty and J. R. Fienup. Phase retrieval and image reconstruction for astronomy. Image Recovery: Theory and Application, ed. by H. Stark, Academic Press, San Diego, pages 231–275, 1987. [9] H. Duadi, O. Margalit, V. Mico, J. A. Rodrigo, T. Alieva, J. Garcia, and Z. Zalevsky. Digital holography and phase retrieval. In Holography, Research and Technologies. InTech, 2011. [10] V. Elser. Phase retrieval by iterated projections. JOSA A, 20(1):40–55, 2003. [11] J. R. Fienup et al. Phase retrieval algorithms: a comparison. Applied Optics, 21(15):2758–2769, 1982. [12] D. Gabor. A new microscopic principle. Nature, 161(4098):777–778, 1948. [13] R. W. Gerchberg and W. O. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35:237, 1972. [14] N. E. Hurt. Phase Retrieval and Zero Crossings: Mathematical Methods in Image Reconstruction, volume 52. Kluwer Academic Print on Demand, 2001. [15] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. arXiv preprint arXiv:1212.0467, 2012. [16] R. H. Keshavan. Efficient algorithms for collaborative filtering. PhD thesis, Stanford University, 2012. [17] W. V. Li and A. Wei. Gaussian integrals involving absolute value functions. In Proceedings of the Conference in Luminy, 2009. [18] X. Li and V. Voroninski. Sparse signal recovery from quadratic measurements via convex programming. arXiv preprint arXiv:1209.4785, 2012. [19] S. Marchesini. Invited article: A unified evaluation of iterative projection algorithms for phase retrieval. Review of Scientific Instruments, 78(1):011301, 2007. [20] J. Miao, P. Charalambous, J. Kirz, and D. Sayre. Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens. Nature, 400(6742):342–344, 1999. [21] D. Misell. A method for the solution of the phase problem in electron microscopy.
Journal of Physics D: Applied Physics, 6(1):L6, 1973. [22] H. Ohlsson, A. Y. Yang, R. Dong, and S. S. Sastry. Compressive phase retrieval from squared output measurements via semidefinite programming. arXiv preprint arXiv:1111.6323, 2011. [23] S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. arXiv preprint arXiv:1212.3753, 2012. [24] J. L. Sanz. Mathematical considerations for the problem of Fourier transform phase retrieval from magnitude. SIAM Journal on Applied Mathematics, 45(4):651–664, 1985. [25] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing. arXiv preprint arXiv:1104.4406, 2011. [26] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012. [27] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010. [28] I. Waldspurger, A. d’Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. arXiv preprint arXiv:1206.0102, 2012. [29] D. C. Youla and H. Webb. Image restoration by the method of convex projections: Part 1, theory. Medical Imaging, IEEE Transactions on, 1(2):81–94, 1982.
Unsupervised Structure Learning of Stochastic And-Or Grammars Kewei Tu Maria Pavlovskaia Song-Chun Zhu Center for Vision, Cognition, Learning and Art Departments of Statistics and Computer Science University of California, Los Angeles {tukw,mariapavl,sczhu}@ucla.edu Abstract Stochastic And-Or grammars compactly represent both compositionality and reconfigurability and have been used to model different types of data such as images and events. We present a unified formalization of stochastic And-Or grammars that is agnostic to the type of the data being modeled, and propose an unsupervised approach to learning the structures as well as the parameters of such grammars. Starting from a trivial initial grammar, our approach iteratively induces compositions and reconfigurations in a unified manner and optimizes the posterior probability of the grammar. In our empirical evaluation, we applied our approach to learning event grammars and image grammars and achieved comparable or better performance than previous approaches. 1 Introduction Stochastic grammars are traditionally used to represent natural language syntax and semantics, but they have also been extended to model other types of data like images [1, 2, 3] and events [4, 5, 6, 7]. It has been shown that stochastic grammars are powerful models of patterns that combine compositionality (i.e., a pattern can be decomposed into a certain configuration of sub-patterns) and reconfigurability (i.e., a pattern may have multiple alternative configurations). Stochastic grammars can be used to parse data samples into their compositional structures, which help solve tasks like classification, annotation and segmentation in a unified way. We study stochastic grammars in the form of stochastic And-Or grammars [1], which are an extension of stochastic grammars in natural language processing [8, 9] and are closely related to sum-product networks [10]. 
Stochastic And-Or grammars have been used to model spatial structures of objects and scenes [1, 3] as well as temporal structures of actions and events [7]. Manual specification of a stochastic grammar is typically very difficult and therefore machine learning approaches are often employed to automatically induce unknown stochastic grammars from data. In this paper we study unsupervised learning of stochastic And-Or grammars in which the training data are unannotated (e.g., images or action sequences). The learning of a stochastic grammar involves two parts: learning the grammar rules (i.e., the structure of the grammar) and learning the rule probabilities or energy terms (i.e., the parameters of the grammar). One strategy in unsupervised learning of stochastic grammars is to manually specify a fixed grammar structure (in most cases, the full set of valid grammar rules) and try to optimize the parameters of the grammar. Many approaches of learning natural language grammars (e.g., [11, 12]) as well as some approaches of learning image grammars [10, 13] adopt this strategy. The main problem of this strategy is that in some scenarios the full set of valid grammar rules is too large for practical learning and inference, while manual specification of a compact grammar structure is challenging. For example, in an image grammar the number of possible grammar rules to decompose an image patch is exponential in the size of the patch; previous approaches restrict the valid ways of decomposing an image patch (e.g., allowing only horizontal and vertical segmentations), which however reduces the expressive power of the image grammar. In this paper, we propose an approach to learning both the structure and the parameters of a stochastic And-Or grammar. Our approach extends the previous work on structure learning of natural language grammars [14, 15, 16], and improves upon the recent work on structure learning of And-Or grammars of images [17] and events [18].
Starting from a trivial initial grammar, our approach iteratively inserts new fragments into the grammar to optimize its posterior probability. Most of the previous structure learning approaches learn the new compositions and reconfigurations modeled in the grammar in a separate manner, which can be error-prone when the training data is scarce or ambiguous; in contrast, we induce And-Or fragments of the grammar, which unifies the search for new compositions and reconfigurations, making our approach more efficient and robust. Our main contributions are as follows. • We present a formalization of stochastic And-Or grammars that is agnostic to the types of atomic patterns and their compositions. Consequently, our learning approach is capable of learning from different types of data, e.g., text, images, events. • Unlike some previous approaches that rely on heuristics for structure learning, we explicitly optimize the posterior probability of both the structure and the parameters of the grammar. The optimization procedure is made efficient by deriving and utilizing a set of sufficient statistics from the training data. • We learn compositions and reconfigurations modeled in the grammar in a unified manner that is more efficient and robust to data scarcity and ambiguity than previous approaches. • We empirically evaluated our approach in learning event grammars and image grammars and it achieved comparable or better performance than previous approaches. 2 Stochastic And-Or Grammars Stochastic And-Or grammars were first proposed to model images [1] and later adapted to model events [7]. Here we provide a unified definition of stochastic And-Or grammars that is agnostic to the type of the data being modeled. We restrict ourselves to the context-free subclass of stochastic And-Or grammars, which can be seen as an extension of stochastic context-free grammars in formal language theory [8] as well as an extension of decomposable sum-product networks [10].
A stochastic context-free And-Or grammar is defined as a 5-tuple ⟨Σ, N, S, R, P⟩. Σ is a set of terminal nodes representing atomic patterns that are not decomposable; N is a set of nonterminal nodes representing decomposable patterns, which is divided into two disjoint sets: And-nodes N^AND and Or-nodes N^OR; S ∈ N is a start symbol that represents a complete entity; R is a set of grammar rules, each of which represents the generation from a nonterminal node to a set of nonterminal or terminal nodes; P is the set of probabilities assigned to the grammar rules. The set of grammar rules R is divided into two disjoint sets: And-rules and Or-rules.
• An And-rule represents the decomposition of a pattern into a configuration of non-overlapping sub-patterns. It takes the form of A → a1 a2 . . . an, where A ∈ N^AND is a nonterminal And-node and a1 a2 . . . an is a set of terminal or nonterminal nodes representing the sub-patterns. A set of relations are specified between the sub-patterns and between the nonterminal node A and the sub-patterns, which configure how these sub-patterns form the composite pattern represented by A. The probability of an And-rule is specified by the energy terms defined on the relations. Note that one can specify different types of relations in different And-rules, which allows multiple types of compositions to be modeled in the same grammar.
• An Or-rule represents an alternative configuration of a composite pattern. It takes the form of O → a, where O ∈ N^OR is a nonterminal Or-node, and a is either a terminal or a nonterminal node representing a possible configuration. The set of Or-rules with the same left-hand side can be written as O → a1 | a2 | . . . | an. The probability of an Or-rule specifies how likely the alternative configuration represented by the Or-rule is selected.
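To make the 5-tuple concrete, here is a minimal Python sketch of a context-free stochastic And-Or grammar and its generative process. This is our own illustrative encoding, not the paper's; the relations between sub-patterns (and their energy terms) are omitted for brevity:

```python
import random

class AndOrGrammar:
    """G = <Sigma, N, S, R, P>, context-free case. And-rules map an And-node
    to its fixed list of children; Or-rules map an Or-node to
    (alternative, probability) pairs. Terminals are any other symbols."""
    def __init__(self, and_rules, or_rules, start):
        self.and_rules = and_rules   # {And-node: [child, ...]}
        self.or_rules = or_rules     # {Or-node: [(child, prob), ...]}
        self.start = start

    def generate(self, node=None):
        """Expand nonterminals recursively until only terminals remain."""
        node = self.start if node is None else node
        if node in self.and_rules:   # And-node: compose all sub-patterns
            return [t for child in self.and_rules[node] for t in self.generate(child)]
        if node in self.or_rules:    # Or-node: sample one alternative
            children, probs = zip(*self.or_rules[node])
            return self.generate(random.choices(children, weights=probs)[0])
        return [node]                # terminal: an atomic pattern
```

For example, a grammar with S → A1 | A2, A1 → a O, A2 → b b, O → c | d generates exactly the strings ac, ad, and bb, with probabilities determined by the Or-rule weights.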
A stochastic And-Or grammar defines generative processes of valid entities, i.e., starting from an entity containing only the start symbol S and recursively applying the grammar rules in R to convert nonterminal nodes until the entity contains only terminal nodes (atomic patterns).

Table 1: Examples of stochastic And-Or grammars.
- Natural language grammar: terminal node = word; nonterminal node = phrase; relations in And-rules = deterministic “concatenating” relations.
- Event And-Or grammar [7]: terminal node = atomic action (e.g., standing, drinking); nonterminal node = event or sub-event; relations in And-rules = temporal relations (e.g., those proposed in [19]).
- Image And-Or grammar [1]: terminal node = visual word (e.g., Gabor bases); nonterminal node = image patch; relations in And-rules = spatial relations (e.g., those specifying relative positions, rotations and scales).

Figure 1: An illustration of the learning process. (a) The initial grammar. (b) Iteration 1: learning a grammar fragment rooted at N1. (c) Iteration 2: learning a grammar fragment rooted at N2.
Table 1 gives a few examples of stochastic context-free And-Or grammars that model different types of data.

3 Unsupervised Structure Learning

3.1 Problem Definition

In unsupervised learning of stochastic And-Or grammars, we aim to learn a grammar from a set of unannotated i.i.d. data samples (e.g., natural language sentences, quantized images, action sequences). The objective function is the posterior probability of the grammar given the training data:

P(G|X) ∝ P(G)P(X|G) = (1/Z) e^{−α∥G∥} ∏_{x_i∈X} P(x_i|G)

where G is the grammar, X = {x_i} is the set of training samples, Z is the normalization factor of the prior, α is a constant, and ∥G∥ is the size of the grammar. By adopting a sparsity prior that penalizes the size of the grammar, we hope to learn a compact grammar with good generalizability. In order to ease the learning process, during learning we approximate the likelihood P(x_i|G) with the Viterbi likelihood (the probability of the best parse of the data sample x_i). The Viterbi likelihood has been empirically shown to lead to better grammar learning results [20, 10] and can be interpreted as combining the standard likelihood with an unambiguity bias [21].

3.2 Algorithm Framework

We first define an initial grammar that generates the exact set of training samples. Specifically, for each training sample x_i ∈ X, there is an Or-rule S → A_i in the initial grammar, where S is the start symbol and A_i is an And-node, and the probability of the rule is 1/∥X∥, where ∥X∥ is the number of training samples; for each x_i there is also an And-rule A_i → a_i1 a_i2 ... a_in, where a_ij (j = 1...n) are the terminal nodes representing the set of atomic patterns contained in sample x_i, and a set of relations is specified between these terminal nodes such that they compose sample x_i. Figure 1(a) shows an example initial grammar. This initial grammar attains the maximal likelihood on the training data but has a very small prior probability because of its large size.
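The construction of the initial grammar can be sketched directly from this description; the dict-based encoding below is a hypothetical representation, and the relations attached to And-rules are omitted:

```python
def build_initial_grammar(samples):
    """Construct the initial grammar that generates exactly the training set.

    Each sample is a tuple of atomic patterns (terminal names). Returns a
    plain-dict grammar: one Or-rule S -> A_i with probability 1/|X| per
    sample, and one And-rule A_i -> a_i1 ... a_in reproducing sample i.
    """
    n = len(samples)
    or_rules = {"S": [(f"A{i}", 1.0 / n) for i in range(n)]}
    and_rules = {f"A{i}": tuple(x) for i, x in enumerate(samples)}
    terminals = {a for x in samples for a in x}
    return {"or": or_rules, "and": and_rules, "terminals": terminals}
```

This grammar parses every training sample with probability 1/∥X∥ and nothing else, matching the maximal-likelihood / small-prior trade-off described above.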
Starting from the initial grammar, we introduce new intermediate nonterminal nodes between the terminal nodes and the top-level nonterminal nodes in an iterative, bottom-up fashion to generalize the grammar and increase its posterior probability. At each iteration, we add into the grammar a grammar fragment that is rooted at a new nonterminal node and contains a set of grammar rules specifying how the new nonterminal node generates one or more configurations of existing terminal or nonterminal nodes; we also try to reduce each training sample using the new grammar rules and update the top-level And-rules accordingly. Figure 1 illustrates this learning process. There are typically multiple candidate grammar fragments that can be added at each iteration, and we employ greedy search or beam search to explore the search space and maximize the posterior probability of the grammar. We also restrict the types of grammar fragments that can be added in order to reduce the number of candidate grammar fragments, which will be discussed in the next subsection. The algorithm terminates when no grammar fragment can be found that increases the posterior probability of the grammar.

3.3 And-Or Fragments

In each iteration of our learning algorithm, we search for a new grammar fragment and add it into the grammar. There are many different types of grammar fragments, and the choice among them greatly influences the efficiency and accuracy of the learning algorithm. The two simplest types of grammar fragments are And-fragments and Or-fragments. An And-fragment contains a new And-node A and an And-rule A → a1 a2 ... an specifying the generation from the And-node A to a configuration of existing nodes a1 a2 ... an. An Or-fragment contains a new Or-node O and a set of Or-rules O → a1|a2|...|an, each specifying the generation from the Or-node O to an existing node ai.
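The iterative framework above can be sketched as a generic greedy loop. The three callbacks (fragment proposal, posterior-gain computation, fragment application) are placeholders for the steps described in this section and the next; the beam-search variant is omitted:

```python
def learn_grammar(grammar, samples, propose_fragments, posterior_gain,
                  apply_fragment):
    """Skeleton of the iterative bottom-up structure learning loop (greedy
    variant). propose_fragments enumerates candidate fragments,
    posterior_gain scores them (a gain > 1 increases the posterior), and
    apply_fragment adds the fragment and reduces the training samples."""
    while True:
        candidates = propose_fragments(grammar, samples)
        scored = [(posterior_gain(grammar, samples, f), f) for f in candidates]
        scored = [(g, f) for g, f in scored if g > 1.0]  # keep improvements only
        if not scored:                  # no fragment increases the posterior
            return grammar              # probability: terminate
        _, best = max(scored, key=lambda t: t[0])
        grammar, samples = apply_fragment(grammar, samples, best)
```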
While these two types of fragments are simple and intuitive, each has important disadvantages if it is searched for separately in the learning algorithm. For And-fragments, when the training data are scarce, many compositions modeled by the target grammar would be missing from the training data and hence cannot be learned by searching for And-fragments alone; besides, if the search for And-fragments is not properly coupled with the search for Or-fragments, the learned grammar becomes large and redundant. For Or-fragments, it can be shown that in most cases adding an Or-fragment into the grammar decreases the posterior probability of the grammar even if the target grammar does contain the Or-fragment, so in order to learn Or-rules we would need more expensive search techniques than the greedy or beam search employed in our algorithm; in addition, the search for Or-fragments can be error-prone if different Or-rules can generate the same node in the target grammar. Instead of And-fragments and Or-fragments, we propose to search for And-Or fragments in the learning algorithm. An And-Or fragment contains a new And-node A, a set of new Or-nodes O1, O2, ..., On, an And-rule A → O1 O2 ... On, and a set of Or-rules Oi → ai1|ai2|...|aimi for each Or-node Oi (where ai1, ai2, ..., aimi are existing nodes of the grammar). Such an And-Or fragment can generate ∏_{i=1}^{n} m_i configurations of existing nodes. Figure 2(a) shows an example And-Or fragment. It can be shown that by adding only And-Or fragments, our algorithm is still capable of constructing any context-free And-Or grammar.
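The set of configurations covered by an And-Or fragment is simply the Cartesian product of the alternatives of its Or-nodes, i.e., ∏_i m_i tuples; a one-function sketch:

```python
from itertools import product

def fragment_configurations(or_alternatives):
    """Enumerate all configurations covered by an And-Or fragment
    A -> O1 O2 ... On with Oi -> ai1 | ... | aimi: the Cartesian product
    of the Or-nodes' alternatives, i.e. prod_i m_i tuples in total."""
    return list(product(*or_alternatives))
```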
Using And-Or fragments can avoid or alleviate the problems associated with And-fragments and Or-fragments: since an And-Or fragment systematically covers multiple compositions, the data scarcity problem of And-fragments is alleviated; since And-rules and Or-rules are learned in a more unified manner, the resulting grammar is often more compact; reasonable And-Or fragments usually increase the posterior probability of the grammar, therefore easing the search procedure; finally, ambiguous Or-rules can be better distinguished since they are learned jointly with their sibling Or-nodes in the And-Or fragments. To perform greedy search or beam search, in each iteration of our learning algorithm we need to find the And-Or fragments that lead to the highest gain in the posterior probability of the grammar. Computing the posterior gain by re-parsing the training samples can be very time-consuming if the training set or the grammar is large. Fortunately, we show that by assuming grammar unambiguity, the posterior gain of adding an And-Or fragment can be formulated based on a set of sufficient statistics of the training data and is efficient to compute. Since the posterior probability is proportional to the product of the likelihood and the prior probability, the posterior gain is equal to the product of the likelihood gain and the prior gain, which we formulate separately below.

Likelihood Gain.
Remember that in our learning algorithm, when an And-Or fragment is added into the grammar, we try to reduce the training samples using the new grammar rules and update the top-level And-rules accordingly. Denote the set of reductions being made on the training samples by RD. Suppose in reduction rd ∈ RD, we replace a configuration e of nodes a_{1j_1} a_{2j_2} ... a_{nj_n} with the new And-node A, where a_{ij_i} (i = 1...n) is an existing terminal or nonterminal node that can be generated by the new Or-node O_i in the And-Or fragment. With reduction rd, the Viterbi likelihood of the training sample x where rd occurs is changed by two factors. First, since the grammar now generates the And-node A first, which then generates a_{1j_1} a_{2j_2} ... a_{nj_n}, the Viterbi likelihood of sample x is reduced by a factor of P(A → a_{1j_1} a_{2j_2} ... a_{nj_n}). Second, the reduction may make sample x identical to some other training samples, which increases the Viterbi likelihood of sample x by a factor equal to the ratio of the numbers of such identical samples after and before the reduction.

Figure 2: (a) An example And-Or fragment. (b) The n-gram tensor of the And-Or fragment based on the training data (here n = 3). (c) The context matrix of the And-Or fragment based on the training data.
To facilitate the computation of this factor, we can construct a context matrix CM in which each row is a configuration of existing nodes covered by the And-Or fragment, each column is a context, i.e., the surrounding patterns of a configuration, and each element is the number of times that the corresponding configuration and context co-occur in the training set. See Figure 2(c) for the context matrix of the example And-Or fragment. Putting these two types of changes to the likelihood together, we can formulate the likelihood gain of adding the And-Or fragment as follows (see the supplementary material for the full derivation):

P(X|G_{t+1}) / P(X|G_t) = [∏_{i=1}^{n} ∏_{j=1}^{m_i} ∥RD_i(a_{ij})∥^{∥RD_i(a_{ij})∥} / ∥RD∥^{n∥RD∥}] × [∏_c (∑_e CM[e,c])^{∑_e CM[e,c]} / ∏_{e,c} CM[e,c]^{CM[e,c]}]

where G_t and G_{t+1} are the grammars before and after learning from the And-Or fragment, RD_i(a_{ij}) denotes the subset of reductions in RD in which the i-th node of the configuration being reduced is a_{ij}, e in the summations and products ranges over all the configurations covered by the And-Or fragment, and c in the product ranges over all the contexts that appear in CM. It can be shown that the likelihood gain can be factorized as the product of two tensor/matrix coherence measures as defined in [22]. The first is the coherence of the n-gram tensor of the And-Or fragment (which tabulates the number of times each configuration covered by the And-Or fragment appears in the training samples, as illustrated in Figure 2(b)). The second is the coherence of the context matrix. These two factors provide a surrogate measure of how much the training data support the context-freeness within the And-Or fragment and the context-freeness of the And-Or fragment against its context, respectively. See the supplementary material for the derivation and discussion. The formulation of the likelihood gain also entails the optimal probabilities of the Or-rules in the And-Or fragment.
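Assuming the reduction counts ∥RD_i(a_ij)∥ and the context matrix CM are available, the likelihood gain can be evaluated directly from this formula. The sketch below computes its logarithm to avoid numerical overflow; the data structures are illustrative choices:

```python
import math

def log_likelihood_gain(rd_counts, cm):
    """Log of the likelihood gain of adding an And-Or fragment.

    rd_counts[i][a] = |RD_i(a)|: number of reductions whose i-th slot is node a.
    cm[(e, c)]      = co-occurrence count of configuration e and context c.
    Uses the convention 0 * log 0 = 0.
    """
    def xlogx(k):
        return k * math.log(k) if k > 0 else 0.0

    n = len(rd_counts)                    # number of Or-nodes in the fragment
    n_rd = sum(rd_counts[0].values())     # |RD|: total number of reductions
    # First factor: prod_i prod_j |RD_i(a_ij)|^|RD_i(a_ij)| / |RD|^(n|RD|)
    log_gain = sum(xlogx(k) for slot in rd_counts for k in slot.values())
    log_gain -= n * xlogx(n_rd)
    # Second factor: prod_c (sum_e CM[e,c])^(sum_e CM[e,c])
    #              / prod_{e,c} CM[e,c]^CM[e,c]
    col_sums = {}
    for (_, c), k in cm.items():
        col_sums[c] = col_sums.get(c, 0) + k
    log_gain += sum(xlogx(s) for s in col_sums.values())
    log_gain -= sum(xlogx(k) for k in cm.values())
    return log_gain
```

For instance, a fragment whose single configuration always appears in the same context yields a log-gain of zero (gain 1), while two configurations appearing in disjoint contexts yield a negative log-gain, reflecting weak support for context-freeness.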
∀i, j: P(O_i → a_{ij}) = ∥RD_i(a_{ij})∥ / ∑_{j′=1}^{m_i} ∥RD_i(a_{ij′})∥ = ∥RD_i(a_{ij})∥ / ∥RD∥

Prior Gain. The prior probability of the grammar is determined by the grammar size. When the And-Or fragment is added into the grammar, the size of the grammar changes in two ways: first, the size of the grammar is increased by the size of the And-Or fragment; second, the size of the grammar is decreased because of the reductions from configurations of multiple nodes to the new And-node. Therefore, the prior gain of learning from the And-Or fragment is:

P(G_{t+1}) / P(G_t) = e^{−α(∥G_{t+1}∥−∥G_t∥)} = e^{−α((n s_a + ∑_{i=1}^{n} m_i s_o) − ∥RD∥(n−1) s_a)}

where s_a and s_o are the numbers of bits needed to encode each node on the right-hand side of an And-rule and an Or-rule, respectively. It can be seen that the prior gain penalizes And-Or fragments that have a large size but cover only a small number of configurations in the training data. In order to find the And-Or fragments with the highest posterior gain, we could construct n-gram tensors from all the training samples for different values of n and different And-rule relations, and within these n-gram tensors search for sub-tensors that correspond to And-Or fragments with the highest posterior gain. Figure 3 illustrates this procedure.

Figure 3: An illustration of the procedure of finding the best And-Or fragment. r1, r2, r3 denote different relations between patterns. (a) Collecting statistics from the training samples to construct or update the n-gram tensors. (b) Finding one or more sub-tensors that lead to the highest posterior gain and constructing the corresponding And-Or fragments.

Figure 4: An example video and the action annotations from the human activity dataset [23]. Each colored bar denotes the start/end time of an occurrence of an action.
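Both the optimal Or-rule probabilities and the prior gain are simple closed-form expressions of the reduction counts and the fragment dimensions; a sketch (argument names are illustrative):

```python
import math

def or_rule_probs(rd_counts_i):
    """Optimal probabilities P(Oi -> aij) = |RD_i(aij)| / |RD| for one
    Or-node, estimated from the reduction counts of its alternatives."""
    total = sum(rd_counts_i.values())
    return {a: k / total for a, k in rd_counts_i.items()}

def prior_gain(n, m, n_rd, alpha, s_and=1.0, s_or=1.0):
    """Prior gain of adding an And-Or fragment with n Or-nodes whose numbers
    of alternatives are m[0..n-1], given |RD| = n_rd reductions; s_and and
    s_or are the per-node encoding costs on And- and Or-rule right-hand
    sides (the paper's s_a and s_o)."""
    delta_size = (n * s_and + sum(m) * s_or) - n_rd * (n - 1) * s_and
    return math.exp(-alpha * delta_size)
```

A fragment that is large (big n and m_i) but supports few reductions gets a prior gain below 1, matching the penalty described above.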
In practice, we find it sufficient to use greedy search or beam search with random restarts to identify good And-Or fragments. See the supplementary material for the pseudocode of the complete grammar learning algorithm. The algorithm runs reasonably fast: our prototype implementation finishes within a few minutes on a desktop given 5000 training samples, each containing more than 10 atomic patterns.

4 Experiments

4.1 Learning Event Grammars

We applied our approach to learn event grammars from human activity data. The first dataset contains 61 videos of indoor activities, e.g., using a computer and making a phone call [23]. The atomic actions and their start/end times are annotated in each video, as shown in Figure 4. Based on this dataset, we also synthesized a more complicated second dataset by dividing each of the two most frequent actions, sitting and standing, into three subtypes and assigning each occurrence of the two actions randomly to one of the subtypes. This simulates scenarios in which the actions are detected in an unsupervised way and therefore actions of the same type may be regarded as different because of differences in posture or viewpoint.

We employed three different methods to apply our grammar learning approach to these two datasets. The first method is similar to that proposed in [18]. For each frame of a video in the dataset, we construct a binary vector that indicates which of the atomic actions are observed in this frame. In this way, each video is represented by a sequence of vectors. Consecutive vectors that are identical are merged. We then map each distinct vector to a unique ID and thus convert each video into a sequence of IDs. Our learning approach is applied to the ID sequences, where each terminal node represents an ID and each And-node specifies the temporal "followed-by" relation between its child nodes.

Figure 5: An example event And-Or grammar with two types of relations ("followed-by" and "co-occurring") that grounds to atomic actions.

Table 2: The experimental results (F-measure) on the event datasets. For our approach, f, c+f and cf denote the first, second and third methods respectively.
              Data 1   Data 2
  ADIOS [15]  0.810    0.204
  SPYZ [18]   0.756    0.582
  Ours (f)    0.831    0.702
  Ours (c+f)  0.768    0.624
  Ours (cf)   0.767    0.813

In the second and third methods, instead of the ID sequences, our learning approach is applied directly to the vector sequences. Each terminal node now represents an occurrence of an atomic action. In addition to the "followed-by" relation, an And-node may also specify the "co-occurring" relation between its child nodes. In this way, the resulting And-Or grammar is directly grounded to the observed atomic actions and is therefore more flexible and expressive than the grammar learned from IDs as in the first method. Figure 5 shows such a grammar. The difference between the second and third methods is as follows: in the second method we require the And-nodes with the "co-occurring" relation to be learned before any And-node with the "followed-by" relation, which is equivalent to applying the first method based on a set of IDs that are also learned; the third method, on the other hand, does not restrict the order in which the two types of And-nodes are learned.

Note that our learning algorithm assumes that each training sample consists of a single pattern generated from the target grammar, but here each video may contain multiple unrelated events. We slightly modified our algorithm to accommodate this issue: right before the algorithm terminates, we change the top-level And-nodes in the grammar to Or-nodes, which removes any temporal relation between the learned events in each training sample and renders them independent of each other.
When parsing a new sample using the learned grammar, we employ the CYK algorithm to efficiently identify all the subsequences that can be parsed as an event by the grammar. We used 55 samples of each dataset as the training set and evaluated the learned grammars on the remaining 6 samples. On each testing sample, the events identified by the learned grammars were compared against manual annotations. We measured the purity (the percentage of the identified event durations overlapping with the annotated event durations) and the inverse purity (the percentage of the annotated event durations overlapping with the identified event durations), and report the F-measure (the harmonic mean of purity and inverse purity).

We compared our approach with two previous approaches [15, 18], both of which can only learn from ID sequences. Table 2 shows the experimental results. Our approach is competitive with the previous approaches on the first dataset and outperforms them on the more complicated second dataset. Among the three methods of applying our approach, the second method has the worst performance, mostly because the restriction of learning the "co-occurring" relation first often leads to prematurely equating different vectors. The third method achieves the best overall performance, which implies the advantage of grounding the grammar to atomic actions and simultaneously learning different relations. Note that the third method performs better on the more complicated second dataset; our analysis suggests that the division of sitting/standing into subtypes in the second dataset actually helps the third method avoid learning erroneous compositions of continuous sitting or standing.

4.2 Learning Image Grammars

We first tested our approach on learning image grammars from a synthetic dataset of animal face sketches [24]. Figure 6 shows some example images from the dataset.
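As a sketch, purity, inverse purity and their harmonic mean can be computed from event intervals as follows; this simplified version assumes non-overlapping intervals within each list and is an illustration, not the paper's exact evaluation code:

```python
def overlap(a, b):
    """Length of overlap between two intervals (start, end)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def event_f_measure(identified, annotated):
    """F-measure of purity and inverse purity over event intervals.

    purity: fraction of the identified duration overlapping annotations;
    inverse purity: fraction of the annotated duration overlapping detections.
    Assumes the intervals within each list do not overlap each other.
    """
    def covered(src, ref):
        total = sum(e - s for s, e in src)
        hit = sum(overlap(a, b) for a in src for b in ref)
        return hit / total if total else 0.0

    p = covered(identified, annotated)
    ip = covered(annotated, identified)
    return 2 * p * ip / (p + ip) if p + ip else 0.0
```

For example, a detection twice as long as the single annotated event has purity 0.5 and inverse purity 1.0, giving an F-measure of 2/3.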
We constructed 15 training sets of 5 different sizes and ran our approach three times on each training set. We set the terminal nodes to represent the atomic sketches in the images and set the relations in And-rules to represent relative positions between image patches. The hyperparameter α of our approach is fixed to 0.5. We evaluated the learned grammars against the true grammar. We estimated the precision and recall of the sets of images generated from the learned grammars versus the true grammar, from which we computed the F-measure. We also estimated the KL-divergence of the probability distributions defined by the learned grammars from that of the true grammar. We compared our approach with the image grammar learning approach proposed in [17]. Figure 7 shows the experimental results. It can be seen that our approach significantly outperforms the competing approach.

Figure 6: Example images from the synthetic dataset.

Figure 7: The experimental results on the synthetic image dataset (F-measure and KL-divergence versus the number of training samples, for our approach and SZ [17]).

We then ran our approach on a real dataset of animal faces that was used in [17]. The dataset contains 320 images of four categories of animals: bear, cat, cow and wolf. We followed the method described in [17] to quantize the images and learn the atomic patterns, which become the terminal nodes of the grammar. Figure 8 shows some images from the dataset, the quantization examples and the atomic patterns learned. We again used the relative positions between image patches as the type of relations in And-rules.

Figure 8: Example images, example quantized images and atomic patterns (terminal nodes) of the real dataset [17].

Table 3: The average perplexity on the testing sets from the real image experiments (lower is better).
            Perplexity
  Ours       67.5
  SZ [17]    129.4
Since the true grammar is unknown, we evaluated the learned grammars by measuring their perplexity (the reciprocal of the geometric mean probability of a sample from a testing set). We ran 10-fold cross-validation on the dataset: learning an image grammar from each training set and then evaluating its perplexity on the testing set. Before estimating the perplexity, the probability distribution represented by each learned grammar was smoothed to avoid zero probability on the testing images. Table 3 shows the results of our approach and the approach from [17]. Once again, our approach significantly outperforms the competing approach.

5 Conclusion

We have presented a unified formalization of stochastic And-Or grammars that is agnostic to the type of the data being modeled, and have proposed an unsupervised approach to learning both the structures and the parameters of such grammars. Our approach optimizes the posterior probability of the grammar and induces compositions and reconfigurations in a unified manner. Our experiments in learning event grammars and image grammars show satisfactory performance of our approach.

Acknowledgments

The work is supported by grants from DARPA MSEE project FA 8650-11-1-7149, ONR MURI N00014-10-1-0933, NSF CNS 1028381, and NSF IIS 1018751.

References
[1] S.-C. Zhu and D. Mumford, "A stochastic grammar of images," Found. Trends Comput. Graph. Vis., vol. 2, no. 4, pp. 259–362, 2006.
[2] Y. Jin and S. Geman, "Context and hierarchy in a probabilistic image model," in CVPR, 2006.
[3] Y. Zhao and S.-C. Zhu, "Image parsing with stochastic scene grammar," in NIPS, 2011.
[4] Y. A. Ivanov and A. F. Bobick, "Recognition of visual activities and interactions by stochastic parsing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 852–872, 2000.
[5] M. S. Ryoo and J. K. Aggarwal, "Recognition of composite human activities through context-free grammar based representation," in CVPR, 2006.
[6] Z. Zhang, T.
Tan, and K. Huang, "An extended grammar system for learning and recognizing complex visual events," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 2, pp. 240–255, Feb. 2011.
[7] M. Pei, Y. Jia, and S.-C. Zhu, "Parsing video events with goal inference and intent prediction," in ICCV, 2011.
[8] C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing. Cambridge, MA, USA: MIT Press, 1999.
[9] P. Liang, M. I. Jordan, and D. Klein, "Probabilistic grammars and hierarchical Dirichlet processes," The Handbook of Applied Bayesian Analysis, 2009.
[10] H. Poon and P. Domingos, "Sum-product networks: A new deep architecture," in Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
[11] J. K. Baker, "Trainable grammars for speech recognition," in Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, 1979.
[12] D. Klein and C. D. Manning, "Corpus-based induction of syntactic structure: Models of dependency and constituency," in Proceedings of ACL, 2004.
[13] S. Wang, Y. Wang, and S.-C. Zhu, "Hierarchical space tiling for scene modeling," in Computer Vision – ACCV 2012. Springer, 2013, pp. 796–810.
[14] A. Stolcke and S. M. Omohundro, "Inducing probabilistic grammars by Bayesian model merging," in ICGI, 1994, pp. 106–118.
[15] Z. Solan, D. Horn, E. Ruppin, and S. Edelman, "Unsupervised learning of natural languages," Proc. Natl. Acad. Sci., vol. 102, no. 33, pp. 11629–11634, August 2005.
[16] K. Tu and V. Honavar, "Unsupervised learning of probabilistic context-free grammar using iterative biclustering," in Proceedings of the 9th International Colloquium on Grammatical Inference (ICGI 2008), ser. LNCS 5278, 2008.
[17] Z. Si and S.-C. Zhu, "Learning And-Or templates for object modeling and recognition," IEEE Trans. Pattern Anal. Mach. Intell., 2013.
[18] Z. Si, M. Pei, B. Yao, and S.-C.
Zhu, "Unsupervised learning of event And-Or grammar and semantics from video," in ICCV, 2011.
[19] J. F. Allen, "Towards a general theory of action and time," Artificial Intelligence, vol. 23, no. 2, pp. 123–154, 1984.
[20] V. I. Spitkovsky, H. Alshawi, D. Jurafsky, and C. D. Manning, "Viterbi training improves unsupervised dependency parsing," in Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL 2010), 2010.
[21] K. Tu and V. Honavar, "Unambiguity regularization for unsupervised learning of probabilistic grammars," in Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), 2012.
[22] S. C. Madeira and A. L. Oliveira, "Biclustering algorithms for biological data analysis: A survey," IEEE/ACM Trans. Comput. Biol. Bioinformatics, vol. 1, no. 1, pp. 24–45, 2004.
[23] P. Wei, N. Zheng, Y. Zhao, and S.-C. Zhu, "Concurrent action detection with structural prediction," in Proc. Intl. Conference on Computer Vision (ICCV), 2013.
[24] A. Barbu, M. Pavlovskaia, and S.-C. Zhu, "Rates for inductive learning of compositional models," in AAAI Workshop on Learning Rich Representations from Low-Level Sensors (RepLearning), 2013.
Learning Multi-level Sparse Representations

Ferran Diego    Fred A. Hamprecht
Heidelberg Collaboratory for Image Processing (HCI)
Interdisciplinary Center for Scientific Computing (IWR)
University of Heidelberg, Heidelberg 69115, Germany
{ferran.diego,fred.hamprecht}@iwr.uni-heidelberg.de

Abstract

Bilinear approximation of a matrix is a powerful paradigm of unsupervised learning. In some applications, however, there is a natural hierarchy of concepts that ought to be reflected in the unsupervised analysis. For example, in the neurosciences image sequence considered here, there are the semantic concepts of pixel → neuron → assembly that should find their counterpart in the unsupervised analysis. Driven by this concrete problem, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels. In contrast to prior work, we allow for both hierarchical and heterarchical relations of lower-level to higher-level concepts. In addition, we learn the nature of these relations rather than imposing them. Finally, we describe an optimization scheme that allows us to optimize the decomposition over all levels jointly, rather than in a greedy level-by-level fashion. The proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that allows one to simultaneously interpret a calcium imaging sequence in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. Experiments show that the proposed model fully recovers the structure from difficult synthetic data designed to imitate the experimental data. More importantly, bilevel SHMF yields plausible interpretations of real-world calcium imaging data.

1 Introduction

This work was stimulated by a concrete problem, namely the decomposition of state-of-the-art 2D+time calcium imaging sequences as shown in Fig. 1 into neurons, and assemblies of neurons [20].
Calcium imaging is an increasingly popular tool for unraveling the network structure of local circuits of the brain [11, 6, 7]. Leveraging sparsity constraints seems natural, given that the neural activations are sparse in both space and time. The experimentally achievable optical slice thickness still results in spatial overlap of cells, meaning that each pixel can show intensity from more than one neuron. In addition, it is anticipated that one neuron can be part of more than one assembly. All neurons of an assembly are expected to fire at roughly the same time [20]. A standard sparse decomposition of the set of vectorized images into a dictionary and a set of coefficients would not conform with the prior knowledge that we have entities at three levels: the pixels, the neurons, and the assemblies, see Fig. 2. Also, it would not allow including structured constraints [10] in a meaningful way. As a consequence, we propose a multi-level decomposition (Fig. 3) that
• allows enforcing (structured) sparsity constraints at each level,
• admits both hierarchical or heterarchical relations between levels (Fig. 2),
• can be learned jointly (sections 2 and 2.4), and
• yields good results on real-world experimental data (Fig. 2).

Figure 1: Left: frames from a calcium imaging sequence showing firing neurons that were recorded by an epi-fluorescence microscope. Right: two frames from a synthetic sequence. The underlying biological aim motivating these experiments is to study the role of neuronal assemblies in memory consolidation.

1.1 Relation to Previous Work

Most important unsupervised data analysis methods such as PCA, NMF/pLSA, ICA, cluster analysis, sparse coding and others can be written in terms of a bilinear decomposition of, or approximation to, a two-way matrix of raw data [22]. One natural generalization is to perform multilinear decompositions of multi-way arrays [4] using methods such as higher-order SVD [1].
This is not the direction pursued here, because the image sequence considered does not have a tensorial structure. On the other hand, there is a relation to (hierarchical) topic models (e.g., [8]). These do not use structured sparsity constraints, but go beyond our approach in automatically estimating the appropriate number of levels using nonparametric Bayesian models. Closest to our proposal are four lines of work that we build on. Jenatton et al. [10] introduce structured sparsity constraints that we use to find dictionary basis functions representing single neurons. The works [9] and [13] enforce hierarchical (tree-structured) sparsity constraints; these authors find the tree structure using extraneous methods, such as a separate clustering procedure. In contrast, the method proposed here can infer either hierarchical (tree-structured) or heterarchical (directed acyclic graph) relations between entities at different levels. Cichocki and Zdunek [3] proposed a multilayer approach to non-negative matrix factorization; this is a multi-stage procedure which iteratively decomposes the rightmost matrix of the decomposition that was previously found. Similar approaches are explored in [23] and [24]. Finally, Rubinstein et al. [21] proposed a novel dictionary structure where each basis function in a dictionary is a linear combination of a few elements from a fixed base dictionary. In contrast to these last two methods, we optimize over all factors (including the base dictionary) jointly. Note that our semantics of "bilevel factorization" (section 2.2) are different from the one in [25].

Notation. A matrix is a set of columns and rows, respectively: X = [x_{:1}, ..., x_{:n}] = [x_{1:}; ...; x_{m:}]. The zero matrix or vector is denoted 0, with dimensions inferred from the context. For any vector x ∈ R^m, ∥x∥_α = (∑_{i=1}^{m} |x_i|^α)^{1/α} is the ℓ_α (quasi-)norm of x, and ∥·∥_F is the Frobenius norm.
2 Learning a Sparse Heterarchical Structure

2.1 Dictionary Learning: Single-Level Sparse Matrix Factorization

Let X ∈ R^{m×n} be a matrix whose n columns represent an m-dimensional observation each. The idea of dictionary learning is to find a decomposition X ≈ D[U0]^T, see Fig. 3(a). D is called the dictionary, and its columns hold the basis functions in terms of which the sparse coefficients in U0 approximate the original observations. The regularization term Ω_U encourages sparsity of the coefficient matrix. Ω_D prevents the inflation of dictionary entries to compensate for small coefficients, and induces, if desired, additional structure on the learned basis functions [16]. Interesting theoretical results on support recovery, furthered by an elegantly compact formulation and the ready availability of optimizers [17], have spawned a large number of intriguing and successful applications, e.g., image denoising [19] and detection of unusual events [26]. Dictionary learning is a special instance of our framework, involving only a single-level decomposition. In the following we first generalize to two, then to more levels.

Figure 2: Bottom left: Shown are the temporal activation patterns of individual neurons U0 (lower level), and assemblies of neurons U1 (upper level). Neurons D and assemblies are related by a bipartite graph A1, the estimation of which is a central goal of this work. The signature of five neuronal assemblies (five columns of DA1) in the spatial domain is shown at the top.
The outlines in the middle of the bottom show the union of all neurons found in D, superimposed onto a maximum intensity projection across the background-subtracted raw image sequence. The graphs on the right show a different view of the transients estimated for single neurons, that is, the rows of U0. The raw data comes from a mouse hippocampal slice, where single neurons can indeed be part of more than one assembly [20]. Analogous results on synthetic data are shown in the supplemental material.

Figure 3: Decomposition of X into 1, 2, 3, and L + 1 levels (panels a–d), with corresponding equations.

2.2 Bilevel Sparse Matrix Factorization

We now come to the heart of this work. To build intuition, we first refer to the application that has motivated this development, before giving mathematical details. The relation between the symbols used in the following is sketched in Fig. 3(b), while actual matrix contents are partially visualized in Fig. 2. We are given a sequence of n noisy sparse images, which we vectorize and collect in the columns of matrix X. We would like to find the following:

• a dictionary D of q0 vectorized images comprising m pixels each. Ideally, each basis function should correspond to a single neuron.

• a matrix A1 indicating to what extent each of the q0 neurons is associated with any of the q1 neuronal assemblies. We will call this matrix interchangeably assignment or adjacency matrix in the following. It is this matrix which encapsulates the quintessential structure we extract from the raw data, viz., which lower-level concept is associated with which higher-level concept.

• a coefficient matrix [U1]^T that encodes in its rows the temporal evolution (activation) of the q1 neuronal assemblies across n time steps.

• a coefficient matrix [U0]^T (shown in the equation, but not in the sketch of Fig. 3(b)) that encodes in its rows the temporal activation of the q0 neuron basis functions across n time steps.
The quantities D, A1, [U0], [U1] in this redundant representation need to be consistent. Let us now turn to equations. At first sight, it seems like minimizing ∥X − DA1[U1]^T∥²_F over D, A1, U1 subject to constraints should do the job. However, this could be too much of a simplification! To illustrate, assume for the moment that only a single neuronal assembly is active at any given time. Then all neurons associated with that assembly would follow an absolutely identical time course. While it is expected that neurons from an assembly show similar activation patterns [20], this is something we want to glean from the data, and not absolutely impose. In response, we introduce an auxiliary matrix U0 ≈ U1[A1]^T showing the temporal activation pattern of individual neurons. These two matrices, U0 and U1, are also shown in the false color plots of the collage of Fig. 2, bottom left. The full equation involving coefficient and auxiliary coefficient matrices is shown in Fig. 3(b). The terms involving X are data fidelity terms, while ∥U0 − U1[A1]^T∥²_F enforces consistency. Parameters η trade off the various terms, and constraints of a different kind can be applied selectively to each of the matrices that we optimize over. Jointly optimizing over D, A1, U0, and U1 is a hard and non-convex problem that we address using a block coordinate descent strategy described in section 2.4 and the supplemental material.

2.3 Trilevel and Multi-level Sparse Matrix Factorization

We now discuss the generalization to an arbitrary number of levels that may be relevant for applications other than calcium imaging. To give a better feeling for the structure of the equations, the trilevel case is spelled out explicitly in Fig. 3(c), while Fig. 3(d) shows the general case of L + 1 levels. The most interesting matrices, in many ways, are the assignment matrices A1, A2, etc. Assume, first, that the relations between lower-level and higher-level concepts obey a strict inclusion hierarchy.
Such relations can be expressed in terms of a forest of trees: each highest-level concept is the root of a tree which fans out to all subordinate concepts. Each subordinate concept has a single parent only. Such a forest can also be seen as a (special case of an (L + 1)-partite) graph, with an adjacency matrix A^l specifying the parents of each concept at level l − 1. To impose an inclusion hierarchy, one can enforce the nestedness condition by requiring that ∥a^l_{k:}∥_0 ≤ 1. In general, and in the application considered here, one will not want to impose an inclusion hierarchy. In that case, the relations between concepts can be expressed in terms of a concatenation of bipartite graphs that conform with a directed acyclic graph. Again, the adjacency matrices encode the structure of such a directed acyclic graph. In summary, the general equation in Fig. 3(d) is a principled alternative to simpler approaches that would impose the relations between concepts, or estimate them separately using, for instance, clustering algorithms, and would then find a sparse factorization subject to this structure. Instead, we simultaneously estimate the relation between concepts at different levels, as well as find a sparse approximation to the raw data.

2.4 Optimization

The optimization problem in Fig. 3(d) is not jointly convex, but becomes convex w.r.t. one variable while keeping the others fixed, provided that the norms Ω_U, Ω_D, and Ω_A are also convex. Indeed, it is possible to define convex norms that not only induce sparse solutions, but also favor non-zero patterns of a specific structure, such as sets of variables in a convex polygon with certain symmetry constraints [10]. Following [5], we use such norms to bias towards neuron basis functions holding a single neuron only. We employ a block coordinate descent strategy [2, Section 2.7] that iteratively optimizes one group of variables while fixing all others.
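To make the alternating scheme concrete, here is a deliberately simplified NumPy sketch of block coordinate descent for the bilevel model: all sparsity regularizers and the second data-fidelity term of Fig. 3(b) are dropped, so each block update is a plain (lightly ridge-stabilized) least-squares problem with a closed form. The dimensions, the weight η, and the tiny ridge are illustrative choices of ours, not the values used in the paper.

```python
import numpy as np

# Toy dimensions: m pixels, n frames, q0 neurons, q1 assemblies.
rng = np.random.default_rng(0)
m, n, q0, q1 = 20, 30, 5, 3
X = rng.standard_normal((m, n))

D  = rng.standard_normal((m, q0))   # dictionary (neuron basis functions)
U0 = rng.standard_normal((n, q0))   # neuron time courses
U1 = rng.standard_normal((n, q1))   # assembly time courses
A1 = rng.standard_normal((q0, q1))  # neuron-to-assembly assignments
eta, ridge = 1.0, 1e-6

def ls(A, B, lam):
    """Minimize ||A W - B||_F^2 + lam ||W||_F^2 over W (closed form)."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ B)

def objective():
    return (np.linalg.norm(X - D @ U0.T) ** 2
            + eta * np.linalg.norm(U0 - U1 @ A1.T) ** 2)

start = objective()
for _ in range(25):
    D  = ls(U0, X.T, ridge).T            # fidelity term, D block
    # U0 couples fidelity and consistency: stack the two systems.
    U0 = ls(np.vstack([D, np.sqrt(eta) * np.eye(q0)]),
            np.vstack([X, np.sqrt(eta) * (U1 @ A1.T).T]), ridge).T
    A1 = ls(U1, U0, ridge).T             # consistency term, A1 block
    U1 = ls(A1, U0.T, ridge).T           # consistency term, U1 block
final = objective()
print(start, final)   # each block update cannot increase the objective
```

With the structured-sparsity norms of the paper in place, each block update would instead be a proximal or constrained problem, but the alternation pattern stays the same.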
Due to space limitations, the details and implementation of the optimization are described in the supplemental material.

3 Methods

3.1 Decomposition into neurons and their transients only

Cell Sorting [18] and Adina [5] focus only on the detection of cell centroids and of cell shape, and on the estimation and analysis of calcium transient signals. However, these methods provide no means to detect and identify neuronal co-activation. The key idea of both is to decompose calcium imaging data into constituent signal sources, i.e. temporal and spatial components. Cell Sorting combines principal component analysis (PCA) and independent component analysis (ICA). In contrast, Adina relies on a matrix factorization based on sparse coding and dictionary learning [15], exploiting the fact that neuronal activity is sparsely distributed in both space and time. Both methods are combined with a subsequent image segmentation, since the spatial components (basis functions) often contain more than one neuron. Without such a segmentation step, overlapping cells or those with highly correlated activity are often associated with the same basis function.

3.2 Decomposition into neurons, their transients, and assemblies of neurons

MNNMF+Adina: Here, we combine a multilayer extension of non-negative matrix factorization with the segmentation from Adina. MNNMF [3] is a multi-stage procedure that iteratively decomposes the rightmost matrix of the decomposition that was previously found. In the first stage, we decompose the calcium imaging data into spatial and temporal components, just like the methods cited above, but using NMF and a non-negative least squares loss function [12] as implemented in [14]. We then use the segmentation from [5] to obtain single neurons in an updated dictionary¹ D. Given this purged dictionary, the temporal components U0 are updated under the NMF criterion. Next, the temporal components U0 are further decomposed into two low-rank matrices, U0 ≈ U1[A1]^T, again using NMF.
Altogether, this procedure allows identifying neuronal assemblies and their temporal evolution. However, the exact number of assemblies q1 must be defined a priori.

KSVDS+Adina allows estimating a sparse decomposition [21] X ≈ DA1[U1]^T provided that i) a dictionary of basis functions and ii) the exact number of assemblies are supplied as input. In addition, the assignment matrix A1 is typically dense and needs to be thresholded. We obtain good results when supplying the purged dictionary¹ of single neurons resulting from Adina [5].

SHMF – Sparse Heterarchical Matrix Factorization in its bilevel formulation decomposes the raw data simultaneously into neuron basis functions D, a mapping of these to assemblies A1, as well as time courses of neurons U0 and assemblies U1; see the equation in Fig. 3. Sparsity is induced by setting Ω_U and Ω_A to the ℓ1-norm. In addition, we impose the ℓ2-norm at the assembly level (Ω¹_D), and let Ω_D be the structured sparsity-inducing norm proposed by Jenatton et al. [10]. In contrast to all other approaches described above, this already suffices to produce basis functions that contain, in most cases, only single neurons. Exceptions arise only in the case of cells which both overlap in space and have high temporal correlation. For this reason, and for a fair comparison with the other methods, we again use the segmentation from [5]. For the optimization, D and U0 are initialized with the results from Adina. U1 is initialized randomly with positive-truncated Gaussian noise, and A1 by the identity matrix as in KSVDS [21]. Finally, the numbers of neurons q0 and of neuronal assemblies q1 are set to generous upper bounds on the expected true numbers and, for simplicity, to equal values (here: q0 = q1 = 60). Note that a precise specification, as required for the above methods, is not needed.

¹Without such a segmentation step, the dictionary atoms often comprise more than one neuron, and overall results (not shown) are poor.
4 Results

To obtain quantitative results, we first evaluate the proposed methods on synthetic image sequences designed to exhibit similar characteristics as the real data. We also report a qualitative analysis of the performance on real data from [20]. Since neuronal assemblies are still the subject of ongoing research, ground truth is not available for such real-world data.

4.1 Artificial Sequences

For evaluation, we created 80 synthetic sequences with 450 frames of size 128 × 128 pixels at a frame rate of 30 fps. The data is created by randomly selecting cell shapes from 36 different active cells extracted from real data, and placing them at different locations with an overlap of up to 30%. Each cell is randomly assigned to up to three out of a total of five assemblies. Each assembly fires according to a dependent Poisson process, with transient shapes following a one-sided exponential decay with a scale of 500 to 800 ms that is convolved with a Gaussian kernel with σ = 50 ms. The dependency is induced by eliminating all transients that overlap by more than 20%. Within such a transient, the neurons associated with the assembly fire with a probability of 90% each. The number of cells per assembly varies from 1 to 10, and we use five assemblies in all experiments. Finally, the synthetic movies are distorted by white Gaussian noise with a relative amplitude (max. intensity − mean intensity)/σ_noise ∈ {3, 5, 7, 10, 12, 15, 17, 20}. By construction, the identity, location and activity patterns of all cells, along with their membership in assemblies, are known. The supplemental material shows one example, and two frames are shown in Fig. 1.

Identification of assemblies. First, we want to quantify the ability to correctly infer assemblies from an image sequence. To that end, we compute the graph edit distance of the estimated assignments of neurons to assemblies, encoded in matrices A1, to the known ground truth.
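One way to compute such an edit distance, matching assemblies by the permutation that minimizes the Hamming distance between binarized assignment matrices, is a brute-force search over column permutations. A sketch of ours: it assumes equal column counts in the two matrices, and the function name and threshold are illustrative.

```python
import numpy as np
from itertools import permutations

def assembly_edge_errors(A_est, A_true, thresh=0.0):
    """Count (false positive, false negative) edges between binarized
    assignment matrices, matching assembly columns by the permutation
    with minimal total Hamming error. Brute force: fine for small q1."""
    B_est = np.abs(A_est) > thresh
    B_true = np.abs(A_true) > thresh
    best = None
    for perm in permutations(range(B_true.shape[1])):
        P = B_est[:, perm]
        fp = int(np.sum(P & ~B_true))   # spurious edges
        fn = int(np.sum(~P & B_true))   # missed edges
        if best is None or fp + fn < sum(best):
            best = (fp, fn)
    return best

A_true = np.array([[1, 0], [1, 0], [0, 1]])
A_est  = np.array([[0, 1], [0, 1], [1, 1]])  # columns swapped + one extra edge
print(assembly_edge_errors(A_est, A_true))   # (1, 0)
```

For larger numbers of assemblies one would replace the permutation loop with an assignment solver, but the evaluated quantity is the same.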
We count the number of false positive and false negative edges in the assignment graphs, where vertices (assemblies) are matched by minimizing the Hamming distance between binarized assignment matrices over all permutations. Remember that MNNMF+Adina and KSVDS+Adina require a specification of the precise number of assemblies, which is unknown for real data. Accordingly, adjacency matrices A1 ∈ R^{q0×q1} were estimated for different values of the number of assemblies, q1 ∈ [3, 7]. Bilevel SHMF only needs an upper bound on the number of assemblies. Its performance is independent of the precise value, but the computational cost increases with the bound. In these experiments, q1 was set to 60. Fig. 4 shows that all methods from section 3.2 give respectable performance in the task of inferring neuronal assemblies from nontrivial synthetic image sequences. For the true number of assemblies (q1 = 5), Bilevel SHMF reaches a higher sensitivity than the alternative methods, with a median difference of 14%. According to the quartiles, the precisions achieved are broadly comparable, with MNNMF+Adina reaching the highest value. All methods from section 3.2 also infer the temporal activity of all assemblies, U1. We omit a comparison of these matrices for lack of a good metric that would also take into account the correctness of the assemblies themselves: a fine time course has little worth if its associated assembly is deficient, for instance by having lost some neurons with respect to ground truth.

Figure 4 (left: sensitivity, right: precision): Performance on learning correct assignments of neurons to assemblies from nontrivial synthetic data with ground truth. KSVDS+Adina and MNNMF+Adina require that the number of assemblies q1 be fixed in advance. In contrast, bilevel SHMF estimates the number of assemblies given an upper bound. Its performance is hence shown as a constant over the q1-axis. Plots show the median as well as the band between the lower and the upper quartile for all 80 sequences.
Colors at non-integer q1-values are a guide to the eye.

Detection of calcium transients. While the detection of assemblies as evaluated above is completely new in the literature, we now turn to a better studied [18, 5] problem: the detection of calcium transients of individual neurons. Some estimates for these characteristic waveforms are also shown, for real-world data, on the right-hand side of Fig. 2. To quantify transient detection performance, we compute the sensitivity and precision as in [20]. Here, sensitivity is the ratio of correctly detected to all neuronal activities, and precision is the ratio of correctly detected to all detected neuronal activities. Results are shown in Fig. 5.

Figure 5 (panels: sensitivity (%) and precision (%) for Bilevel SHMF, Single-level SHMF, MNNMF+Adina, Cell Sorting, and Adina): Sensitivity and precision of transient detection for individual neurons. Methods that estimate both assemblies and neuron transients perform at least as well as their simpler counterparts that focus on the latter.

Perhaps surprisingly, the methods from section 3.2 (MNNMF+Adina and Bilevel SHMF²) fare at least as well as those from section 3.1 (Cell Sorting and Adina). This is not self-evident, because a bilevel factorization could be expected to be more ill-posed than a single-level factorization. We make two observations: Firstly, it seems that using a bilevel representation with suitable regularization constraints helps stabilize the activity estimates also for single neurons. Secondly, the higher sensitivity and similar precision of bilevel SHMF compared to MNNMF+Adina suggest that a joint estimation of neurons, assemblies and their temporal activities as described in section 2 increases the robustness, and compensates for errors that may not be corrected in greedy level-per-level estimation.
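The two scores are simple ratios of event counts; a toy computation with hypothetical counts (the matching of detections to true transients is left abstract here):

```python
def sensitivity_precision(detected, true_events, matched):
    """Sensitivity = correct detections / all true events;
    precision   = correct detections / all detections.
    `matched` is the number of detections coinciding with a true
    transient under whatever matching criterion is used."""
    return matched / true_events, matched / detected

# Hypothetical counts: 90 detections, 100 true transients, 81 correct.
s, p = sensitivity_precision(detected=90, true_events=100, matched=81)
print(s, p)  # 0.81 0.9
```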
Incidentally, the great spread of both sensitivities and precisions results from the great variety of noise levels used in the simulations, and attests to the difficulty of part of the synthetic data sets.

²KSVDS is not evaluated here because it does not yield activity estimates for individual neurons.

Figure 6 (columns: raw data, Cell Sorting [18], Adina [5], neurons (D[U0]^T), assemblies (DA1[U1]^T)): Three examples of raw data and reconstructed images at the times indicated in Fig. 2. The other examples are shown in the supplemental material.

4.2 Real Sequences

We have applied bilevel SHMF to epifluorescent data sets from mouse (C57BL6) hippocampal slice cultures. As shown in Fig. 2, the method is able to distinguish overlapping cells and highly correlated cells, while at the same time estimating neuronal co-activation patterns (assemblies). Exploiting spatio-temporal sparsity and convex cell shape priors allows the transient events to be inferred accurately.

5 Discussion

The proposed multi-level sparse factorization essentially combines a clustering of concepts across several levels (expressed by the assignment matrices) with the finding of a basis dictionary, shared by concepts at all levels, and the finding of coefficient matrices for different levels. The formalism allows imposing different regularizers at different levels. Users need to choose tradeoff parameters η, λ that indirectly determine the number of concepts (clusters) found at each level, and the sparsity. The ranks q_l, on the other hand, are less important: Figure 2 shows that the ranks of estimated matrices can be lower than their nominal dimensionality: superfluous degrees of freedom are simply not used. On the application side, the proposed method accomplishes the detection of neurons, assemblies and their relations in a single framework, exploiting sparseness in the temporal and spatial domains in the process.
Bilevel SHMF in particular is able to detect automatically, and differentiate between, overlapping and highly correlated cells, and to estimate the underlying co-activation patterns. As shown in Fig. 6, this approach is able to reconstruct the raw data at both levels of representation, and to make plausible proposals for neuron and assembly identification. Given the experimental importance of calcium imaging, automated methods in the spirit of the one described here can be expected to become an essential tool for the investigation of complex activation patterns in live neural tissue.

Acknowledgement. We are very grateful for partial financial support by CellNetworks Cluster (EXC81). We also thank Susanne Reichinnek, Martin Both and Andreas Draguhn for their comments on the manuscript.

References
[1] G. Bergqvist and E. G. Larsson. The higher-order singular value decomposition: theory and an application. IEEE Signal Processing Magazine, 27(3):151–154, 2010.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[3] A. Cichocki and R. Zdunek. Multilayer nonnegative matrix factorization. Electronics Letters, 42:947–948, 2006.
[4] A. Cichocki, R. Zdunek, A. H. Phan, and S. Amari. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley, 2009.
[5] F. Diego, S. Reichinnek, M. Both, and F. A. Hamprecht. Automated identification of neuronal activity from calcium imaging by sparse dictionary learning. In International Symposium on Biomedical Imaging, in press, 2013.
[6] W. Goebel and F. Helmchen. In vivo calcium imaging of neural network function. Physiology, 2007.
[7] C. Grienberger and A. Konnerth. Imaging calcium in neurons. Neuron, 2011.
[8] Q. Ho, J. Eisenstein, and E. P. Xing. Document hierarchies from text and links. In Proc. of the 21st Int. World Wide Web Conference (WWW 2012), pages 739–748. ACM, 2012.
[9] R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach, and B. Thirion. Multi-scale mining of fMRI data with hierarchical structured sparsity. SIAM Journal on Imaging Sciences, 5(3), 2012.
[10] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[11] J. Kerr and W. Denk. Imaging in vivo: watching the brain in action. Nature Reviews Neuroscience, 2008.
[12] H. Kim and H. Park. Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method. SIAM Journal on Matrix Analysis and Applications, 2008.
[13] S. Kim and E. P. Xing. Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. Annals of Applied Statistics, 2012.
[14] Y. Li and A. Ngom. The non-negative matrix factorization toolbox for biological data mining. BMC Source Code for Biology and Medicine, 2013.
[15] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
[16] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 2010.
[17] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and R. Jenatton. Sparse modeling software. http://spamsdevel.gforge.inria.fr/.
[18] E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 2009.
[19] M. Protter and M. Elad. Image sequence denoising via sparse and redundant representations. IEEE Transactions on Image Processing, 18(1), 2009.
[20] S. Reichinnek, A. von Kameke, A. M. Hagenston, E. Freitag, F. C. Roth, H. Bading, M. T. Hasan, A. Draguhn, and M. Both. Reliable optical detection of coherent neuronal activity in fast oscillating networks in vitro. NeuroImage, 60(1), 2012.
[21] R. Rubinstein, M. Zibulevsky, and M. Elad. Double sparsity: learning sparse dictionaries for sparse signal approximation. IEEE Transactions on Signal Processing, 2010.
[22] A. P. Singh and G. J. Gordon. A unified view of matrix factorization models. In ECML PKDD, 2008.
[23] M. Sun and H. Van Hamme. A two-layer non-negative matrix factorization model for vocabulary discovery. In Symposium on Machine Learning in Speech and Language Processing, 2011.
[24] Q. Sun, P. Wu, Y. Wu, M. Guo, and J. Lu. Unsupervised multi-level non-negative matrix factorization model: binary data case. Journal of Information Security, 2012.
[25] J. Yang, Z. Wang, Z. Lin, X. Shu, and T. S. Huang. Bilevel sparse coding for coupled feature spaces. In CVPR'12, pages 2360–2367. IEEE, 2012.
[26] B. Zhao, L. Fei-Fei, and E. P. Xing. Online detection of unusual events in videos via dynamic sparse coding. In The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.
Estimation, Optimization, and Parallelism when Data is Sparse

John C. Duchi¹² and Michael I. Jordan¹
University of California, Berkeley¹, Berkeley, CA 94720
{jduchi,jordan}@eecs.berkeley.edu

H. Brendan McMahan²
Google, Inc.², Seattle, WA 98103
mcmahan@google.com

Abstract

We study stochastic optimization problems when the data is sparse, which is in a sense dual to current perspectives on high-dimensional statistical learning and optimization. We highlight both the difficulties—in terms of increased sample complexity that sparse data necessitates—and the potential benefits, in terms of allowing parallelism and asynchrony in the design of algorithms. Concretely, we derive matching upper and lower bounds on the minimax rate for optimization and learning with sparse data, and we exhibit algorithms achieving these rates. We also show how leveraging sparsity leads to (still minimax optimal) parallel and asynchronous algorithms, providing experimental evidence complementing our theoretical results on several medium to large-scale learning tasks.

1 Introduction and problem setting

In this paper, we investigate stochastic optimization problems in which the data is sparse. Formally, let {F(·; ξ), ξ ∈ Ξ} be a collection of real-valued convex functions, each of whose domains contains the convex set X ⊂ R^d. For a probability distribution P on Ξ, we consider the following optimization problem:

minimize_{x∈X} f(x) := E[F(x; ξ)] = ∫_Ξ F(x; ξ) dP(ξ).  (1)

By data sparsity, we mean that the samples ξ are sparse: assuming that samples ξ lie in R^d, and defining the support supp(x) of a vector x to be the set of indices of its non-zero components, we assume

supp ∇F(x; ξ) ⊂ supp ξ.  (2)

The sparsity condition (2) means that F(x; ξ) does not "depend" on the values of x_j for indices j such that ξ_j = 0.¹ This type of data sparsity is prevalent in statistical optimization problems and machine learning applications; in spite of its prevalence, study of such problems has been limited.
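Condition (2) is easy to check numerically for a generalized linear model. A sketch for a logistic-type loss F(x; ξ) = log(1 + exp(⟨ξ, x⟩)), whose gradient σ(⟨ξ, x⟩)·ξ is supported on supp ξ (the vectors below are illustrative):

```python
import numpy as np

def logistic_loss_grad(x, xi):
    """Gradient of F(x; xi) = log(1 + exp(<xi, x>)):
    sigma(<xi, x>) * xi, hence zero wherever xi is zero."""
    s = 1.0 / (1.0 + np.exp(-(xi @ x)))   # sigma(<xi, x>)
    return s * xi

x  = np.array([0.5, -1.0, 2.0, 0.3])
xi = np.array([1.0, 0.0, -1.0, 0.0])      # sparse sample
g = logistic_loss_grad(x, xi)
# supp(grad) is contained in supp(xi): condition (2) holds.
print(np.flatnonzero(g), np.flatnonzero(xi))
```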
As a motivating example, consider a text classification problem: data ξ ∈ R^d represents words appearing in a document, and we wish to minimize a logistic loss F(x; ξ) = log(1 + exp(⟨ξ, x⟩)) on the data (we encode the label implicitly with the sign of ξ). Such generalized linear models satisfy the sparsity condition (2), and while instances are of very high dimension, in any given instance very few entries of ξ are non-zero [8]. From a modelling perspective, it thus makes sense to allow a dense predictor x: any non-zero entry of ξ is potentially relevant and important. In a sense, this is dual to the standard approaches to high-dimensional problems; one usually assumes that the data ξ may be dense, but that there are only a few relevant features, so that a parsimonious model x is desirable [2]. So while such sparse data problems are prevalent—natural language processing, information retrieval, and other large data settings all have significant data sparsity—they do not appear to have attracted as much study as their high-dimensional "duals" of dense data and sparse predictors. In this paper, we investigate algorithms and their inherent limitations for solving problem (1) under natural conditions on the data generating distribution. Recent work in the optimization and machine learning communities has shown that data sparsity can be leveraged to develop parallel (and even asynchronous [12]) optimization algorithms [13, 14], but this work does not consider the statistical effects of data sparsity. In another line of research, Duchi et al. [4] and McMahan and Streeter [9] develop "adaptive" stochastic gradient algorithms to address problems in sparse data regimes (2).

¹Formally, if π_ξ denotes the coordinate projection zeroing all indices j of its argument where ξ_j = 0, then F(π_ξ(x); ξ) = F(x; ξ) for all x, ξ. This follows from the first-order conditions for convexity [6].
These algorithms exhibit excellent practical performance and have theoretical guarantees on their convergence, but it is not clear whether they are optimal—in the sense that no algorithm can attain better statistical performance—or whether they can leverage parallel computing as in the papers [12, 14]. In this paper, we take a two-pronged approach. First, we investigate the fundamental limits of optimization and learning algorithms in sparse data regimes. In doing so, we derive lower bounds on the optimization error of any algorithm for problems of the form (1) with sparsity condition (2). These results have two main implications. They show that in some scenarios, learning with sparse data is quite difficult, as essentially each coordinate j ∈ [d] can be relevant and must be optimized for. In spite of this seemingly negative result, we are also able to show that the ADAGRAD algorithms of [4, 9] are optimal, and we show examples in which their dependence on the dimension d can be made exponentially better than standard gradient methods. As the second facet of our two-pronged approach, we study how sparsity may be leveraged in parallel computing frameworks to give substantially faster algorithms that still achieve optimal sample complexity in terms of the number of samples ξ used. We develop two new algorithms, asynchronous dual averaging (ASYNCDA) and asynchronous ADAGRAD (ASYNCADAGRAD), which allow asynchronous parallel solution of the problem (1) for general convex f and X. Combining insights of Niu et al.'s HOGWILD! [12] with a new analysis, we prove our algorithms achieve linear speedup in the number of processors while maintaining optimal statistical guarantees. We also give experiments on text-classification and web-advertising tasks to illustrate the benefits of the new algorithms.

2 Minimax rates for sparse optimization

We begin our study of sparse optimization problems by establishing their fundamental statistical and optimization-theoretic properties.
To do this, we derive bounds on the minimax convergence rate of any algorithm for such problems. Formally, let x̂ denote any estimator for a minimizer of the objective (1). We define the optimality gap ε_N for the estimator x̂ based on N samples ξ¹, . . . , ξᴺ from the distribution P as

ε_N(x̂, F, X, P) := f(x̂) − inf_{x∈X} f(x) = E_P[F(x̂; ξ)] − inf_{x∈X} E_P[F(x; ξ)].

This quantity is a random variable, since x̂ is a random variable (it is a function of ξ¹, . . . , ξᴺ). To define the minimax error, we thus take expectations of the quantity ε_N, though we require a bit more than simply E[ε_N]. We let P denote a collection of probability distributions, and we consider a collection of loss functions F specified by a collection F of convex losses F : X × Ξ → R. We can then define the minimax error for the family of losses F and distributions P as

ε*_N(X, P, F) := inf_{x̂} sup_{P∈P} sup_{F∈F} E_P[ε_N(x̂(ξ^{1:N}), F, X, P)],  (3)

where the infimum is taken over all possible estimators x̂ (an estimator is an optimization scheme, or a measurable mapping x̂ : Ξᴺ → X).

2.1 Minimax lower bounds

Let us now give a more precise characterization of the (natural) set of sparse optimization problems we consider to provide the lower bound. For the next proposition, we let P consist of distributions supported on Ξ = {−1, 0, 1}^d, and we let p_j := P(ξ_j ≠ 0) be the marginal probability of appearance of feature j ∈ {1, . . . , d}. For our class of functions, we set F to consist of functions F satisfying the sparsity condition (2) and with the additional constraint that for g ∈ ∂_x F(x; ξ), the jth coordinate satisfies |g_j| ≤ M_j for a constant M_j < ∞. We obtain

Proposition 1. Let the conditions of the preceding paragraph hold, and let R be a constant such that X ⊃ [−R, R]^d. Then

ε*_N(X, P, F) ≥ (1/8) R ∑_{j=1}^d M_j min{ p_j, √p_j / √(N log 3) }.

We provide the proof of Proposition 1 in Supplement A.1 of the full version of the paper, giving a few remarks here.
We begin by giving a corollary to Proposition 1 that applies when the data ξ obeys a type of power law: let p_0 ∈ [0, 1], and assume that P(ξ_j ≠ 0) = p_0 j^{−α}. We have

Corollary 2. Let α ≥ 0. Let the conditions of Proposition 1 hold with M_j ≡ M for all j, and assume the power-law condition P(ξ_j ≠ 0) = p_0 j^{−α} on coordinate appearance probabilities. Then

(1) If d > (p_0 N)^{1/α},

ε*_N(X, P, F) ≥ (MR/8) [ (2/(2 − α)) √(p_0/N) ( (p_0 N)^{(2−α)/(2α)} − 1 ) + (p_0/(1 − α)) ( d^{1−α} − (p_0 N)^{(1−α)/α} ) ].

(2) If d ≤ (p_0 N)^{1/α},

ε*_N(X, P, F) ≥ (MR/8) √(p_0/N) [ (1/(1 − α/2)) d^{1−α/2} − 1/(1 − α/2) ].

Expanding Corollary 2 slightly, for simplicity assume the number of samples is large enough that d ≤ (p_0 N)^{1/α}. Then we find that the lower bound on optimization error is of order

MR √(p_0/N) d^{1−α/2} when α < 2,  MR √(p_0/N) log d when α → 2,  and  MR √(p_0/N) when α > 2.  (4)

These results beg the question of tightness: are they improvable? As we see presently, they are not.

2.2 Algorithms for attaining the minimax rate

To show that the lower bounds of Proposition 1 and its subsequent specializations are sharp, we review a few stochastic gradient algorithms. We begin with stochastic gradient descent (SGD): SGD repeatedly samples ξ ∼ P, computes g ∈ ∂_x F(x; ξ), then performs the update x ← Π_X(x − ηg), where η is a stepsize parameter and Π_X denotes Euclidean projection onto X. Standard analyses of stochastic gradient descent [10] show that after N samples ξ^i, the SGD estimator x̂(N) satisfies

E[f(x̂(N))] − inf_{x∈X} f(x) ≤ O(1) R_2 M (∑_{j=1}^d p_j)^{1/2} / √N,  (5)

where R_2 denotes the ℓ2-radius of X. Dual averaging, due to Nesterov [11] (sometimes called "follow the regularized leader" [5]), is a more recent algorithm. In dual averaging, one again samples g ∈ ∂_x F(x; ξ), but instead of updating the parameter vector x one updates a dual vector z by z ← z + g, then computes

x ← argmin_{x∈X} { ⟨z, x⟩ + (1/η) ψ(x) },

where ψ(x) is a strongly convex function defined over X (often one takes ψ(x) = (1/2)∥x∥²₂).
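Both updates just described have closed forms when X is a box [−R, R]^d and ψ(x) = (1/2)∥x∥²₂: the dual-averaging argmin is then the coordinate-wise clip of −ηz. A sketch (the stepsize, radius, and sample gradient are illustrative values of ours):

```python
import numpy as np

R, eta, d = 1.0, 0.1, 5

def project_box(x, R):
    """Euclidean projection onto X = [-R, R]^d (coordinate-wise clip)."""
    return np.clip(x, -R, R)

def sgd_step(x, g, eta, R):
    """SGD: x <- Pi_X(x - eta * g)."""
    return project_box(x - eta * g, R)

def dual_averaging_step(z, g, eta, R):
    """Dual averaging with psi(x) = 0.5 ||x||_2^2 on a box:
    z <- z + g, then x = argmin_x <z, x> + (1/eta) psi(x),
    which separates coordinate-wise into Pi_X(-eta * z)."""
    z = z + g
    return z, project_box(-eta * z, R)

x = np.zeros(d)
z = np.zeros(d)
g = np.array([1.0, 0.0, -2.0, 0.0, 0.5])   # a sparse subgradient
x = sgd_step(x, g, eta, R)
z, x_da = dual_averaging_step(z, g, eta, R)
print(x)      # after one step from zero the two methods coincide
print(x_da)
```

The methods differ after the first step, of course; the point of the dual vector z is that it accumulates the full gradient history, which is what makes the asynchronous variant below natural.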
As we discuss presently, the dual averaging algorithm is somewhat more natural in asynchronous and parallel computing environments, and it enjoys the same type of convergence guarantees (5) as SGD. The ADAGRAD algorithm [4, 9] is an extension of the preceding stochastic gradient methods. It maintains a diagonal matrix $S$, where upon receiving a new sample $\xi$, ADAGRAD performs the following: it computes $g \in \partial_x F(x; \xi)$, then updates $S_j \leftarrow S_j + g_j^2$ for $j \in [d]$. The dual averaging variant of ADAGRAD updates the usual dual vector $z \leftarrow z + g$; the update to $x$ is based on $S$ and a stepsize $\eta$ and computes

$$x \leftarrow \operatorname*{argmin}_{x' \in X} \left\{ \langle z, x' \rangle + \frac{1}{2\eta} \left\langle x', S^{1/2} x' \right\rangle \right\}.$$

After $N$ samples $\xi$, the averaged parameter $\hat{x}(N)$ returned by ADAGRAD satisfies

$$\mathbb{E}[f(\hat{x}(N))] - \inf_{x \in X} f(x) \le O(1) \frac{R_\infty M}{\sqrt{N}} \sum_{j=1}^d \sqrt{p_j}, \qquad (6)$$

where $R_\infty$ denotes the $\ell_\infty$-radius of $X$ (cf. [4, Section 1.3 and Theorem 5]). By inspection, the ADAGRAD rate (6) matches the lower bound in Proposition 1 and is thus optimal. It is interesting to note, though, that in the power law setting of Corollary 2 (recall the error order (4)), a calculation shows that the multiplier for the SGD guarantee (5) becomes $R_\infty \sqrt{d} \max\{d^{(1-\alpha)/2}, 1\}$, while ADAGRAD attains rate at worst $R_\infty \max\{d^{1-\alpha/2}, \log d\}$. For $\alpha > 1$, the ADAGRAD rate is no worse, and for $\alpha \ge 2$, it is more than $\sqrt{d}/\log d$ better: an exponential improvement in the dimension.

3 Parallel and asynchronous optimization with sparsity

As we note in the introduction, recent works [12, 14] have suggested that sparsity can yield benefits in our ability to parallelize stochastic gradient-type algorithms. Given the optimality of ADAGRAD-type algorithms, it is natural to focus on their parallelization in the hope that we can leverage their ability to "adapt" to sparsity in the data. To provide the setting for our further algorithms, we first revisit Niu et al.'s HOGWILD! [12]. HOGWILD!
is an asynchronous (parallelized) stochastic gradient algorithm for optimization over product-space domains, meaning that $X$ in problem (1) decomposes as $X = X_1 \times \cdots \times X_d$, where $X_j \subset \mathbb{R}$. Fix a stepsize $\eta > 0$. A pool of independently running processors then performs the following updates asynchronously to a centralized vector $x$:

1. Sample $\xi \sim P$
2. Read $x$ and compute $g \in \partial_x F(x; \xi)$
3. For each $j$ s.t. $g_j \neq 0$, update $x_j \leftarrow \Pi_{X_j}(x_j - \eta g_j)$.

Here $\Pi_{X_j}$ denotes projection onto the $j$th coordinate of the domain $X$. The key of HOGWILD! is that in step 2, the parameter $x$ is allowed to be inconsistent: it may have received partial gradient updates from many processors, and for appropriate problems this inconsistency is negligible. Indeed, Niu et al. [12] show linear speedup in optimization time as the number of processors grows; they show this empirically in many scenarios, providing a proof under the somewhat restrictive assumptions that there is at most one non-zero entry in any gradient $g$ and that $f$ has Lipschitz gradients.

3.1 Asynchronous dual averaging

A weakness of HOGWILD! is that it appears applicable only to problems for which the domain $X$ is a product space, and its analysis assumes $\|g\|_0 = 1$ for all gradients $g$. In an effort to alleviate these difficulties, we now develop and present our asynchronous dual averaging algorithm, ASYNCDA. ASYNCDA maintains and updates a centralized dual vector $z$ instead of a parameter $x$, and a pool of processors performs asynchronous updates to $z$, where each processor independently iterates:

1. Read $z$ and compute $x := \operatorname*{argmin}_{x \in X} \{ \langle z, x \rangle + \frac{1}{\eta} \psi(x) \}$  // Implicitly increment "time" counter $t$ and let $x(t) = x$
2. Sample $\xi \sim P$ and let $g \in \partial_x F(x; \xi)$  // Let $g(t) = g$.
3. For $j \in [d]$ such that $g_j \neq 0$, update $z_j \leftarrow z_j + g_j$.

Because the actual computation of the vector $x$ in ASYNCDA is performed locally on each processor in step 1 of the algorithm, the algorithm can be executed with any proximal function $\psi$ and domain $X$.
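The three steps above can be sketched directly with threads. The toy below is our own illustration, not the authors' code; CPython's GIL means no true parallel speedup here, so it only exercises the algorithm's lock-free structure. It assumes $\psi(x) = \frac{1}{2}\|x\|_2^2$ and $X = [-2, 2]^d$, for which step 1's argmin is a coordinate-wise clipping of $-\eta z$:

```python
import random
import threading

# Shared dual vector z receives lock-free coordinate additions from
# several workers.  Toy loss: F(x; xi) = 0.5 * (x[j] - 1)^2 on the
# single active coordinate j of xi, so every coordinate's minimizer
# is 1.0.
d, eta, R, T = 4, 0.01, 2.0, 5000
z = [0.0] * d

def x_from_z():
    # Step 1: x = argmin <z, x> + psi(x)/eta  =  clip(-eta * z).
    return [max(-R, min(R, -eta * zj)) for zj in z]

def worker(seed):
    rng = random.Random(seed)
    for _ in range(T):
        x = x_from_z()           # read a possibly stale z
        j = rng.randrange(d)     # sample a sparse xi: one active feature
        g = x[j] - 1.0           # subgradient on coordinate j only
        z[j] += g                # step 3: commutative, associative add

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
x_final = x_from_z()
```

Because step 3 is a plain addition, interleavings from different workers commute, which is exactly the property the text identifies as the natural synchronization-free point of the algorithm.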
The only communication point between any of the processors is the addition operation in step 3. Since addition is commutative and associative, forcing all asynchrony to this point of the algorithm is a natural strategy for avoiding synchronization problems. In our analysis of ASYNCDA, and in our subsequent analysis of the adaptive methods, we require a measurement of time elapsed. With that in mind, we let $t$ denote a time index that exists (roughly) behind the scenes. We let $x(t)$ denote the vector $x \in X$ computed in the $t$th step 1 of the ASYNCDA algorithm, that is, whichever $x$ is the $t$th actually computed by any of the processors. This quantity exists and is recoverable from the algorithm, and it is possible to track the running sum $\sum_{\tau=1}^t x(\tau)$. Additionally, we state two assumptions encapsulating the conditions underlying our analysis.

Assumption A. There is an upper bound $m$ on the delay of any processor. In addition, for each $j \in [d]$ there is a constant $p_j \in [0, 1]$ such that $P(\xi_j \neq 0) \le p_j$.

We also require certain continuity (Lipschitzian) properties of the loss functions; these amount to a second moment constraint on the instantaneous $\partial F$ and a rough measure of gradient sparsity.

Assumption B. There exist constants $M$ and $(M_j)_{j=1}^d$ such that the following bounds hold for all $x \in X$: $\mathbb{E}[\|\partial_x F(x; \xi)\|_2^2] \le M^2$, and for each $j \in [d]$ we have $\mathbb{E}[|\partial_{x_j} F(x; \xi)|] \le p_j M_j$.

With these definitions, we have the following theorem, which captures the convergence behavior of ASYNCDA under the assumption that $X$ is a Cartesian product, meaning that $X = X_1 \times \cdots \times X_d$ with $X_j \subset \mathbb{R}$, and that $\psi(x) = \frac{1}{2} \|x\|_2^2$. Note that the algorithm itself can still be efficiently parallelized for more general convex $X$, even if the theorem does not apply.

Theorem 3. Let Assumptions A and B and the conditions in the preceding paragraph hold. Then

$$\mathbb{E}\left[ \sum_{t=1}^T F(x(t); \xi^t) - F(x^*; \xi^t) \right] \le \frac{1}{2\eta} \|x^*\|_2^2 + \frac{\eta}{2} T M^2 + \eta T m \sum_{j=1}^d p_j^2 M_j^2.$$

We now provide a few remarks to explain and simplify the result.
Under the more stringent condition that $|\partial_{x_j} F(x; \xi)| \le M_j$, Assumption A implies $\mathbb{E}[\|\partial_x F(x; \xi)\|_2^2] \le \sum_{j=1}^d p_j M_j^2$. Thus, for the remainder of this section we take $M^2 = \sum_{j=1}^d p_j M_j^2$, which upper bounds the Lipschitz continuity constant of the objective function $f$. We then obtain the following corollary.

Corollary 4. Define $\hat{x}(T) = \frac{1}{T} \sum_{t=1}^T x(t)$, and set $\eta = \|x^*\|_2 / M\sqrt{T}$. Then

$$\mathbb{E}[f(\hat{x}(T)) - f(x^*)] \le \frac{M \|x^*\|_2}{\sqrt{T}} + \frac{m \|x^*\|_2}{2 M \sqrt{T}} \sum_{j=1}^d p_j^2 M_j^2.$$

Corollary 4 is nearly immediate: since $\xi^t$ is independent of $x(t)$, we have $\mathbb{E}[F(x(t); \xi^t) \mid x(t)] = f(x(t))$; applying Jensen's inequality to $f(\hat{x}(T))$ and performing an algebraic manipulation give the result. If the data is suitably sparse, meaning that $p_j \le 1/m$, the bound in Corollary 4 simplifies to

$$\mathbb{E}[f(\hat{x}(T)) - f(x^*)] \le \frac{3}{2} \frac{M \|x^*\|_2}{\sqrt{T}} = \frac{3}{2} \frac{\sqrt{\sum_{j=1}^d p_j M_j^2}\, \|x^*\|_2}{\sqrt{T}}, \qquad (7)$$

which is the convergence rate of stochastic gradient descent even in centralized settings (5). The convergence guarantee (7) shows that after $T$ timesteps the error scales as $1/\sqrt{T}$; however, if we have $k$ processors, updates occur roughly $k$ times as quickly, as they are asynchronous, and in time scaling as $N/k$ we can evaluate $N$ gradient samples: a linear speedup.

3.2 Asynchronous AdaGrad

We now turn to extending ADAGRAD to asynchronous settings, developing ASYNCADAGRAD (asynchronous ADAGRAD). As in the ASYNCDA algorithm, ASYNCADAGRAD maintains a shared dual vector $z$ (the sum of gradients) and the shared matrix $S$, which is the diagonal sum of squares of gradient entries (recall Section 2.2). The matrix $S$ is initialized as $\operatorname{diag}(\delta^2)$, where $\delta_j \ge 0$ is an initial value. Each processor asynchronously performs the following iterations:

1. Read $S$ and $z$ and set $G = S^{1/2}$. Compute $x := \operatorname*{argmin}_{x \in X} \{ \langle z, x \rangle + \frac{1}{2\eta} \langle x, G x \rangle \}$  // Implicitly increment "time" counter $t$ and let $x(t) = x$, $S(t) = S$
2. Sample $\xi \sim P$ and let $g \in \partial F(x; \xi)$
3. For $j \in [d]$ such that $g_j \neq 0$, update $S_j \leftarrow S_j + g_j^2$ and $z_j \leftarrow z_j + g_j$.
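A single sequential iteration of this scheme is easy to write down. Here is a sketch (ours, not the paper's implementation) for a box domain $X = [-R, R]^d$, where the diagonal-scaled argmin is coordinate-wise $-\eta z_j / \sqrt{S_j}$, clipped to the box:

```python
import math

def adagrad_da_step(z, S, g, eta, R):
    """One iteration of the dual averaging variant of ADAGRAD (sketch:
    box domain assumed; initialize S to [delta**2] * d with delta > 0).
    Minimizing <z, x> + (1/(2*eta)) * <x, S^{1/2} x> separately in
    each coordinate gives x_j = clip(-eta * z_j / sqrt(S_j), [-R, R])."""
    for j, gj in enumerate(g):
        if gj != 0.0:
            S[j] += gj * gj
            z[j] += gj
    return [max(-R, min(R, -eta * zj / math.sqrt(Sj)))
            for zj, Sj in zip(z, S)]

# Rare coordinates accumulate a small S_j and so keep a large
# effective stepsize eta / sqrt(S_j): the adaptation to sparsity
# requires no prior knowledge of the appearance probabilities p_j.
z, S = [0.0, 0.0], [1e-6, 1e-6]
x = adagrad_da_step(z, S, [0.5, 0.0], eta=0.1, R=1.0)
```

Only coordinates with non-zero gradient touch $S$ and $z$, which is the property the asynchronous version exploits: sparse samples rarely contend for the same entries.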
As in the description of ASYNCDA, we note that $x(t)$ is the vector $x \in X$ computed in the $t$th "step" of the algorithm (step 1), and we similarly associate $\xi^t$ with $x(t)$. To analyze ASYNCADAGRAD, we make a somewhat stronger assumption on the sparsity properties of the losses $F$ than Assumption B.

Assumption C. There exist constants $(M_j)_{j=1}^d$ such that $\mathbb{E}[(\partial_{x_j} F(x; \xi))^2 \mid \xi_j \neq 0] \le M_j^2$ for all $x \in X$.

Indeed, taking $M^2 = \sum_j p_j M_j^2$ shows that Assumption C implies Assumption B with specific constants. We then have the following convergence result.

Theorem 5. In addition to the conditions of Theorem 3, let Assumption C hold. Assume that for all $j$ we have $\delta^2 \ge M_j^2 m$ and $X \subset [-R_\infty, R_\infty]^d$. Then

$$\sum_{t=1}^T \mathbb{E}\left[ F(x(t); \xi^t) - F(x^*; \xi^t) \right] \le \sum_{j=1}^d \min\left\{ \frac{1}{\eta} R_\infty^2\, \mathbb{E}\left[ \left( \delta^2 + \sum_{t=1}^T g_j(t)^2 \right)^{1/2} \right] + \eta\, \mathbb{E}\left[ \left( \sum_{t=1}^T g_j(t)^2 \right)^{1/2} \right] (1 + p_j m),\ M_j R_\infty p_j T \right\}.$$

It is possible to relax the condition on the initial constant diagonal term; we defer this to the full version of the paper. It is natural to ask in which situations the bound provided by Theorem 5 is optimal. We note that, as in the case of Theorem 3, we may obtain a convergence rate for $f(\hat{x}(T)) - f(x^*)$ using convexity, where $\hat{x}(T) = \frac{1}{T} \sum_{t=1}^T x(t)$. By Jensen's inequality, we have for any $\delta$ that

$$\mathbb{E}\left[ \left( \delta^2 + \sum_{t=1}^T g_j(t)^2 \right)^{1/2} \right] \le \left( \delta^2 + \sum_{t=1}^T \mathbb{E}[g_j(t)^2] \right)^{1/2} \le \sqrt{\delta^2 + T p_j M_j^2}.$$

For interpretation, let us now make a few assumptions on the probabilities $p_j$. If we assume that $p_j \le c/m$ for a universal (numerical) constant $c$, then Theorem 5 guarantees that

$$\mathbb{E}[f(\hat{x}(T)) - f(x^*)] \le O(1) \left( \frac{1}{\eta} R_\infty^2 + \eta \right) \sum_{j=1}^d M_j \min\left\{ \frac{\sqrt{\log T}}{T} + \sqrt{\frac{p_j}{T}},\ p_j \right\}, \qquad (8)$$

which is the convergence rate of ADAGRAD except for a small factor of $\min\{\sqrt{\log T}/T, p_j\}$ in addition to the usual $\sqrt{p_j/T}$ rate. In particular, optimizing by choosing $\eta = R_\infty$, and assuming $p_j \gtrsim \frac{1}{T} \log T$, we have the convergence guarantee

$$\mathbb{E}[f(\hat{x}(T)) - f(x^*)] \le O(1) R_\infty \sum_{j=1}^d M_j \min\left\{ \frac{\sqrt{p_j}}{\sqrt{T}},\ p_j \right\},$$

which is minimax optimal by Proposition 1.
In fact, however, the bounds of Theorem 5 are somewhat stronger: they provide bounds using the expectation of the squared gradients $g_j(t)$ rather than the maximal value $M_j$, though the bounds are perhaps clearer in the form (8). We note also that our analysis applies to more adversarial settings than stochastic optimization (e.g., to online convex optimization [5]). Specifically, an adversary may choose an arbitrary sequence of functions subject to the random data sparsity constraint (2), and our results provide an expected regret bound, which is strictly stronger than the stochastic convergence guarantees provided (and guarantees high-probability convergence in stochastic settings [3]). Moreover, our comments in Section 2 about the relative optimality of ADAGRAD versus standard gradient methods apply. When the data is sparse, we indeed should use asynchronous algorithms, but using adaptive methods yields even more improvement than simple gradient-based methods.

4 Experiments

In this section, we give experimental validation of our theoretical results on ASYNCADAGRAD and ASYNCDA, giving results on two datasets selected for their high-dimensional sparsity.2

2 In our experiments, ASYNCDA and HOGWILD! had effectively identical performance.

[Figure 1. Experiments with URL data. Left: speedup relative to one processor. Middle: training dataset loss versus number of processors. Right: test set error rate versus number of processors. A-ADAGRAD abbreviates ASYNCADAGRAD.]
[Figure 2: Relative accuracy for various stepsize choices on a click-through-rate prediction dataset. Panels: fixed stepsizes on training data (L2 = 0), fixed stepsizes on test data (L2 = 0), and the impact of L2 regularization on test error, comparing A-AdaGrad and A-DA at several values of $\eta$.]

4.1 Malicious URL detection

For our first set of experiments, we consider the speedup attainable by applying ASYNCADAGRAD and ASYNCDA, investigating the performance of each algorithm on a malicious URL prediction task [7]. The dataset in this case consists of an anonymized collection of URLs labeled as malicious (e.g., spam, phishing, etc.) or benign over a span of 120 days. The data consists of $2.4 \cdot 10^6$ examples with dimension $d = 3.2 \cdot 10^6$ (sparse) features. We perform several experiments, randomly dividing the dataset into $1.2 \cdot 10^6$ training and test samples for each experiment. In Figure 1 we compare the performance of ASYNCADAGRAD and ASYNCDA after a single pass through the training dataset. (For each algorithm, we choose the stepsize $\eta$ for optimal training set performance.) We perform the experiments on a single machine running Ubuntu Linux with six cores (with two-way hyperthreading) and 32 GB of RAM. From the left-most plot in Fig. 1, we see that up to six processors, both ASYNCDA and ASYNCADAGRAD enjoy the expected linear speedup, and from 6 to 12 they continue to enjoy a speedup that is linear in the number of processors, though at a lesser slope (this is the effect of hyperthreading).
For more than 12 processors, there is no further benefit to parallelism on this machine. The two right plots in Figure 1 plot performance of the different methods (with standard errors) versus the number of worker threads used. Both are essentially flat; increasing the amount of parallelism does nothing to the average training loss or the test error rate for either method. It is clear, however, that for this dataset, the adaptive ASYNCADAGRAD algorithm provides substantial performance benefits over ASYNCDA.

4.2 Click-through-rate prediction experiments

We also experiment on a proprietary dataset consisting of search ad impressions. Each example corresponds to showing a search-engine user a particular text ad in response to a query string. From this, we construct a very sparse feature vector based on the text of the ad displayed and the query string (no user-specific data is used). The target label is 1 if the user clicked the ad and -1 otherwise.

[Figure 3. (A) Relative test-set log-loss for ASYNCDA and ASYNCADAGRAD, choosing the best stepsize (within a factor of about 1.4×) individually for each number of passes. (B) Effective speedup for ASYNCADAGRAD. (C) The best stepsize $\eta$, expressed as a scaling factor on the stepsize used for one pass. (D) Five runs with different random seeds for each algorithm (with $\ell_2$ penalty 80).]

We fit logistic regression models using both ASYNCDA and ASYNCADAGRAD.
We run extensive experiments on a moderate-sized dataset (about $10^7$ examples, split between training and testing), which allows thorough investigation of the impact of the stepsize $\eta$, the number of training passes,3 and $\ell_2$ regularization on accuracy. For these experiments we used 32 threads on 16-core machines for each run, as ASYNCADAGRAD and ASYNCDA achieve similar speedups from parallelization. On this dataset, ASYNCADAGRAD typically achieves an effective additional speedup over ASYNCDA of 4× or more. That is, to reach a given level of accuracy, ASYNCDA generally needs four times as many effective passes over the dataset. We measure accuracy with log-loss (the logistic loss) averaged over five runs using different random seeds (which control the order in which the algorithms sample examples during training). We report relative values in Figures 2 and 3, that is, the ratio of the mean loss for the given datapoint to the lowest (best) mean loss obtained. Our results are not particularly sensitive to the choice of relative log-loss as the metric of interest; we also considered AUC (the area under the ROC curve) and observed similar results. Figure 2 shows relative log-loss as a function of the number of training passes for various stepsizes. Without regularization, ASYNCADAGRAD is prone to overfitting: it achieves significantly higher accuracy on the training data (Fig. 2 (left)), but unless the stepsize is tuned carefully to the number of passes, it will overfit (Fig. 2 (middle)). Fortunately, the addition of $\ell_2$ regularization largely solves this problem. Indeed, Figure 2 (right) shows that while adding an $\ell_2$ penalty of 80 has very little impact on ASYNCDA, it effectively prevents the overfitting of ASYNCADAGRAD.4 Fixing the $\ell_2$ regularization multiplier to 80, we varied the stepsize $\eta$ over a multiplicative grid with resolution $\sqrt{2}$ for each number of passes and for each algorithm.
Figure 3 reports the results obtained by selecting the best stepsize in terms of test-set log-loss for each number of passes. Figure 3(A) shows relative log-loss of the best stepsize for each algorithm; 3(B) shows the relative time ASYNCDA requires with respect to ASYNCADAGRAD to achieve a given loss. Specifically, Fig. 3(B) shows the ratio of the number of passes the algorithms require to achieve a fixed loss, which gives a broader estimate of the speedup obtained by using ASYNCADAGRAD; speedups range from 3.6× to 12×. Figure 3(C) shows the optimal stepsizes as a function of the best setting for one pass. The optimal stepsize decreases moderately for ASYNCADAGRAD, but is somewhat noisy for ASYNCDA. It is interesting to note that ASYNCADAGRAD's accuracy is largely independent of the ordering of the training data, while ASYNCDA shows significant variability. This can be seen both in the error bars on Figure 3(A) and explicitly in Figure 3(D), where we plot one line for each of the five random seeds used. Thus, while on the one hand ASYNCDA requires somewhat less tuning of the stepsize and $\ell_2$ parameter, tuning ASYNCADAGRAD is much easier because of its predictable response.

3 Here "number of passes" more precisely means the expected number of times each example in the dataset is trained on. That is, each worker thread randomly selects a training example from the dataset for each update, and we continued making updates until (dataset size) × (number of passes) updates have been processed.

4 For both algorithms, this is accomplished by adding the term $80\eta \|x\|_2^2$ to the $\psi$ function. We can achieve slightly better results for ASYNCADAGRAD by varying the $\ell_2$ penalty with the number of passes.

References

[1] P. Auer and C. Gentile. Adaptive and self-confident online learning algorithms. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[2] P. Bühlmann and S. van de Geer.
Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, 2011.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057, September 2004.
[4] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[5] E. Hazan. The convex optimization approach to regret minimization. In Optimization for Machine Learning, chapter 10. MIT Press, 2012.
[6] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I & II. Springer, New York, 1996.
[7] J. Ma, L. K. Saul, S. Savage, and G. M. Voelker. Identifying malicious URLs: An application of large-scale online learning. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[8] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[9] B. McMahan and M. Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the Twenty Third Annual Conference on Computational Learning Theory, 2010.
[10] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[11] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):261–283, 2009.
[12] F. Niu, B. Recht, C. Ré, and S. Wright. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 24, 2011.
[13] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. arXiv:1212.0873 [math.OC], 2012. URL http://arxiv.org/abs/1212.0873.
[14] M. Takáč, A. Bijral, P. Richtárik, and N. Srebro. Mini-batch primal and dual methods for SVMs.
In Proceedings of the 30th International Conference on Machine Learning, 2013.
Predictive PAC Learning and Process Decompositions Cosma Rohilla Shalizi Statistics Department Carnegie Mellon University Pittsburgh, PA 15213 USA cshalizi@cmu.edu Aryeh Kontorovich Computer Science Department Ben Gurion University Beer Sheva 84105 Israel karyeh@cs.bgu.ac.il Abstract We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example). A mixture of learnable processes need not be learnable itself, and certainly its generalization error need not decay at the same rate. In this paper, we argue that it is natural in predictive PAC to condition not on the past observations but on the mixture component of the sample path. This definition not only matches what a realistic learner might demand, but also allows us to sidestep several otherwise grave problems in learning from dependent data. In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. We also provide a characterization of mixtures of absolutely regular (β-mixing) processes, of independent probability-theoretic interest. 1 Introduction Statistical learning theory, especially the theory of “probably approximately correct” (PAC) learning, has mostly developed under the assumption that data are independent and identically distributed (IID) samples from a fixed, though perhaps adversarially-chosen, distribution. As recently as 1997, Vidyasagar [1] named extending learning theory to stochastic processes of dependent variables as a major open problem. Since then, considerable progress has been made for specific classes of processes, particularly strongly-mixing sequences and exchangeable sequences. 
(Especially relevant contributions, for our purposes, came from [2, 3] on exchangeability, from [4, 5] on absolute regularity1, and from [6, 7] on ergodicity; others include [8, 9, 10, 11, 12, 13, 14, 15, 16, 17].) Our goals in this paper are to point out that many practically-important classes of stochastic processes possess a special sort of structure, namely that they are convex combinations of simpler, extremal processes. This both demands something of a re-orientation of the goals of learning and makes the new goals vastly easier to attain than they might seem.

Main results. Our main contribution is threefold: a conceptual definition of learning from non-IID data (Definition 1) and a technical result establishing tight generalization bounds for mixtures of learnable processes (Theorem 2), with a direct corollary about exchangeable sequences (Corollary 1), and an application to mixtures of absolutely regular sequences, for which we provide a new characterization.

Notation. $X_1, X_2, \ldots$ will be a sequence of dependent random variables taking values in a common measurable space $\mathcal{X}$, which we assume to be "standard" [18, ch. 3] to avoid technicalities, implying that their σ-field has a countable generating basis; the reader will lose little if they think of $\mathcal{X}$ as $\mathbb{R}^d$. (We believe our ideas apply to stochastic processes with multidimensional index sets as well, but use sequences here.) $X_i^j$ will stand for the block $(X_i, X_{i+1}, \ldots, X_{j-1}, X_j)$. Generic infinite-dimensional distributions, measures on $\mathcal{X}^\infty$, will be $\mu$ or $\rho$; these are probability laws for $X_1^\infty$.

1 Absolutely regular processes are ones where the joint distribution of past and future events approaches independence, in $L_1$, as the separation between events goes to infinity; see §4 below for a precise statement and extensive discussion. Absolutely regular sequences are now more commonly called "β-mixing", but we use the older name to avoid confusion with the other sort of "mixing".
Any such stochastic process can be represented through the shift map $\tau : \mathcal{X}^\infty \to \mathcal{X}^\infty$ (which just drops the first coordinate, $(\tau x)_t = x_{t+1}$) and a suitable distribution of initial conditions. When we speak of a set being invariant, we mean invariant under the shift. The collection of all such probability measures is itself a measurable space, and a generic measure there will be $\pi$.

2 Process Decompositions

Since the time of de Finetti and von Neumann, an important theme of the theory of stochastic processes has been finding ways of representing complicated but structured processes, obeying certain symmetries, as mixtures of simpler processes with the same symmetries, as well as (typically) some sort of 0-1 law. (See, for instance, the beautiful paper by Dynkin [19], and the statistically-motivated [20].) In von Neumann's original ergodic decomposition [18, §7.9], stationary processes, whose distributions are invariant over time, proved to be convex combinations of stationary ergodic measures, ones where all invariant sets have either probability 0 or probability 1. In de Finetti's theorem [21, ch. 1], exchangeable sequences, which are invariant under permutation, are expressed as mixtures of IID sequences2. Similar results are now also known for asymptotically mean stationary sequences [18, §8.4], for partially-exchangeable sequences [22], for stationary random fields, and even for infinite exchangeable arrays (including networks and graphs) [21, ch. 7]. The common structure shared by these decompositions is as follows.

1. The probability law $\rho$ of the composite but symmetric process is a convex combination of the simpler, extremal processes $\mu \in \mathcal{M}$ with the same symmetry. The infinite-dimensional distributions of these extremal processes are, naturally, mutually singular.

2. Sample paths from the composite process are generated hierarchically, first by picking an extremal process $\mu$ from $\mathcal{M}$ according to some measure $\pi$ supported on $\mathcal{M}$, and then generating a sample path from $\mu$.
Symbolically,

$$\mu \sim \pi, \qquad X_1^\infty \mid \mu \sim \mu.$$

3. Each realization of the composite process therefore gives information about only a single extremal process, as this is an invariant along each sample path.

3 Predictive PAC

These points raise subtle but key issues for PAC learning theory. Recall the IID case: random variables $X_1, X_2, \ldots$ are all generated from a common distribution $\mu^{(1)}$, leading to an infinite-dimensional process distribution $\mu$. Against this, we have a class $\mathcal{F}$ of functions $f$. The goal in PAC theory is to find a sample complexity function3 $s(\epsilon, \delta, \mathcal{F}, \mu)$ such that, whenever $n \ge s$,

$$P_\mu\left( \sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{t=1}^n f(X_t) - \mathbb{E}_\mu[f] \right| \ge \epsilon \right) \le \delta. \qquad (1)$$

That is, PAC theory seeks a finite-sample uniform law of large numbers for $\mathcal{F}$. Because of its importance, it will be convenient to abbreviate the supremum,

$$\sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{t=1}^n f(X_t) - \mathbb{E}_\mu[f] \right| \equiv \Gamma_n,$$

using the letter "Γ" as a reminder that when this goes to zero, $\mathcal{F}$ is a Glivenko-Cantelli class (for $\mu$). $\Gamma_n$ is also a function of $\mathcal{F}$ and of $\mu$, but we suppress this in the notation for brevity. We will also pass over the important and intricate, but fundamentally technical, issue of establishing that $\Gamma_n$ is measurable (see [23] for a thorough treatment of this topic). What one has in mind, of course, is that there is a space $\mathcal{H}$ of predictive models (classifiers, regressions, ...) $h$, and that $\mathcal{F}$ is the image of $\mathcal{H}$ through an appropriate loss function $\ell$, i.e., each $f \in \mathcal{F}$ can be written as $f(x) = \ell(x, h(x))$ for some $h \in \mathcal{H}$. If $\Gamma_n \to 0$ in probability for this "loss function" class, then empirical risk minimization is reliable. That is, the function $\hat{f}_n$ which minimizes the empirical risk $n^{-1} \sum_t f(X_t)$ has an expected risk in the future which is close to the best attainable risk over all of $\mathcal{F}$, $R(\mathcal{F}, \mu) = \inf_{f \in \mathcal{F}} \mathbb{E}_\mu[f]$.

2 This is actually a special case of the ergodic decomposition [21, pp. 25–26].
3 Standard PAC is defined as distribution-free, but here we maintain the dependence on $\mu$ for consistency with future notation.
Indeed, since when $n \ge s$, with high ($\ge 1 - \delta$) probability all functions have empirical risks within $\epsilon$ of their true risks, with high probability the true risk $\mathbb{E}_\mu[\hat{f}_n]$ is within $2\epsilon$ of $R(\mathcal{F}, \mu)$. Although empirical risk minimization is not the only conceivable learning strategy, it is, in a sense, a canonical one (computational considerations aside). The latter is an immediate consequence of the VC-dimension characterization of PAC learnability:

Theorem 1. Suppose that the concept class $\mathcal{H}$ is PAC learnable from IID samples. Then $\mathcal{H}$ is learnable via empirical risk minimization.

PROOF: Since $\mathcal{H}$ is PAC-learnable, it must necessarily have a finite VC dimension [24]. But for finite-dimensional $\mathcal{H}$ and IID samples, $\Gamma_n \to 0$ in probability (see [25] for a simple proof). This implies that the empirical risk minimizer is a PAC learner for $\mathcal{H}$. □

In extending these ideas to non-IID processes, a subtle issue arises concerning which expectation value we would like empirical means to converge towards. In the IID case, because $\mu$ is simply the infinite product of $\mu^{(1)}$ and $f$ is a function on $\mathcal{X}$, we can without trouble identify expectations under the two measures with each other, and with expectations conditional on the first $n$ observations:

$$\mathbb{E}_\mu[f(X)] = \mathbb{E}_{\mu^{(1)}}[f(X)] = \mathbb{E}_\mu[f(X_{n+1}) \mid X_1^n].$$

Things are not so tidy when $\mu$ is the law of a dependent stochastic process. In introducing a notion of "predictive PAC learning", Pestov [3], like Berti and Rigo [2] earlier, proposes that the target should be the conditional expectation, in our notation $\mathbb{E}_\mu[f(X_{n+1}) \mid X_1^n]$. This however presents two significant problems. First, in general there is no single value for this: it truly is a function of the past $X_1^n$, or at least some part of it. (Consider even the case of a binary Markov chain.) The other, and related, problem with this idea of predictive PAC is that it presents learning with a perpetually moving target.
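The "no single value" problem is concrete in even the simplest mixture. For a Uniform[0,1] (i.e., Beta(1,1)) mixture of IID Bernoulli($\theta$) components, averaging the per-component conditional expectations against the posterior over $\theta$ gives Laplace's rule of succession, which moves with the observed past (a toy sketch under these assumptions, not an example from the paper):

```python
def mixture_predictive(path):
    """E_rho[X_{n+1} | X_1^n] for a Beta(1,1) mixture of IID
    Bernoulli(theta) processes.  Each component's conditional
    expectation is just theta; averaging theta over the posterior
    Beta(1 + k, 1 + n - k), with k = sum(path), yields (k+1)/(n+2)."""
    n, k = len(path), sum(path)
    return (k + 1) / (n + 2)

# The prediction depends on the entire past through k, so the mixture
# is a dependent process even though every component is IID.
assert mixture_predictive([]) == 0.5          # no data: prior mean
assert mixture_predictive([1, 1, 1]) == 0.8   # (3 + 1) / (3 + 2)
assert mixture_predictive([0] * 8) == 0.1     # (0 + 1) / (8 + 2)
```

The extra dependence created by averaging over the posterior is exactly the phenomenon discussed next.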
Whether the function which minimizes the empirical risk is going to do well by this criterion involves rather arbitrary details of the process. To truly pursue this approach, one would have to actually learn the predictive dependence structure of the process, a quite formidable undertaking, though perhaps not hopeless [26]. Both of these complications are exacerbated when the process producing the data is actually a mixture over simpler processes, which, as we have seen, is very common in interesting applied settings. This is because, in addition to whatever dependence may be present within each extremal process, $X_1^n$ gives (partial) information about what that process is. Finding $\mathbb{E}_\rho[X_{n+1} \mid X_1^n]$ amounts to first finding all the individual conditional expectations $\mathbb{E}_\mu[X_{n+1} \mid X_1^n]$, and then averaging them with respect to the posterior distribution $\pi(\mu \mid X_1^n)$. This averaging over the posterior produces additional dependence between past and future. (See [27] on quantifying how much extra apparent Shannon information this creates.) As we show in §4 below, continuous mixtures of absolutely regular processes are far from being absolutely regular themselves. This makes it exceedingly hard, if not impossible, to use a single sample path, no matter how long, to learn anything about global expectations. These difficulties all simply dissolve if we change the target distribution. What a learner should care about is not averaging over some hypothetical prior distribution of inaccessible alternative dynamical systems, but rather what will happen in the future of the particular realization which provided the training data, and must continue to provide the testing data. To get a sense of what this means, notice that for an ergodic $\mu$,

$$\mathbb{E}_\mu[f] = \lim_{m \to \infty} \frac{1}{m} \sum_{i=1}^m \mathbb{E}[f(X_{n+i}) \mid X_1^n]$$

(from [28, Cor. 4.4.1]).
That is, matching expectations under the process measure µ means controlling the long-run average behavior, and not just the one-step-ahead expectation suggested in [3, 2]. Empirical risk minimization now makes sense: it is attempting to find a model which will work well not just at the next step (which may be inherently unstable), but will continue to work well, on average, indefinitely far into the future. We are thus led to the following definition.

Definition 1 Let X_1^∞ be a stochastic process with law µ, and let I be the σ-field generated by all the invariant events. We say that µ admits predictive PAC learning of a function class F when there exists a sample-complexity function s(ϵ, δ, F, µ) such that, if n ≥ s, then

P_µ( sup_{f∈F} |(1/n) Σ_{t=1}^n f(X_t) − E_µ[f | I]| ≥ ϵ ) ≤ δ

A class of processes P admits distribution-free predictive PAC learning if there exists a common sample-complexity function for all µ ∈ P, in which case we write s(ϵ, δ, F, µ) = s(ϵ, δ, F, P).

As is well known, distribution-free predictive PAC learning, in this sense, is possible for IID processes (F must have finite VC dimension). For an ergodic µ, [6] shows that s(ϵ, δ, F, µ) exists and is finite if and only if, once again, F has a finite VC dimension; this implies predictive PAC learning, but not distribution-free predictive PAC. Since ergodic processes can converge arbitrarily slowly, some extra condition must be imposed on P to ensure that dependence decays fast enough for each µ. A sufficient restriction is that all processes in P be stationary and absolutely regular (β-mixing), with a common upper bound on the β-dependence coefficients, as [5, 14] show how to turn algorithms which are PAC on IID data into ones which are PAC on such sequences, with a penalty in extra sample complexity depending on µ only through the rate of decay of correlations⁴.
We may apply these familiar results straightforwardly because, when µ is ergodic, all invariant sets have either measure 0 or measure 1, conditioning on I has no effect, and E_µ[f | I] = E_µ[f]. Our central result is now almost obvious.

Theorem 2 Suppose that distribution-free predictive PAC learning is possible for a class of functions F and a class of processes M, with sample-complexity function s(ϵ, δ, F, M). Then the class of processes P formed by taking convex mixtures from M admits distribution-free PAC learning with the same sample-complexity function.

PROOF: Suppose that n ≥ s(ϵ, δ, F, M). Then, by the law of total expectation,

P_ρ(Γ_n ≥ ϵ) = E_ρ[P_ρ(Γ_n ≥ ϵ | µ)]   (2)
             = E_ρ[P_µ(Γ_n ≥ ϵ)]       (3)
             ≤ E_ρ[δ] = δ              (4)

□

In words, if the same bound holds for each component of the mixture, then it still holds after averaging over the mixture. It is important here that we are only attempting to predict the long-run average risk along the continuation of the same sample path as that which provided the training data; with this as our goal, almost all sample paths look like (indeed, are) realizations of single components of the mixture, and so the bound for extremal processes applies directly to them⁵. By contrast, there may be no distribution-free bounds at all if one does not condition on I.

⁴ We suspect that similar results could be derived for many of the weak dependence conditions of [29].
⁵ After formulating this idea, we came across a remarkable paper by Wiener [30], where he presents a qualitative version of highly similar considerations, using the ergodic decomposition to argue that a full dynamical model of the weather is neither necessary nor even helpful for meteorological forecasting. The same paper also lays out the idea of sensitive dependence on initial conditions, and the kernel trick of turning nonlinear problems into linear ones by projecting into infinite-dimensional feature spaces.
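The conditioning in Theorem 2 can be illustrated numerically. The sketch below (the component parameters 0.2 and 0.8 are made up) simulates an equal mixture of two IID Bernoulli processes and checks that, along each sample path, the empirical mean tracks the mean of the generating component, E_µ[f | I], rather than the ensemble average E_ρ[f] = 0.5:

```python
import random

random.seed(1)

# Hypothetical sketch: rho is an equal mixture of two IID Bernoulli processes
# (p = 0.2 and p = 0.8). Along any single sample path, the empirical mean
# tracks the mean of the component that generated the path, E_mu[f | I],
# not the mixture average E_rho[f] = 0.5.
def sample_path(n):
    p = random.choice([0.2, 0.8])        # draw the component from pi
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    return p, xs

n = 50000
for _ in range(20):
    p, xs = sample_path(n)
    emp = sum(xs) / n
    assert abs(emp - p) < 0.02           # concentrates on the component mean
    assert abs(emp - 0.5) > 0.25         # stays far from the ensemble mean
```

The per-path bound is exactly the IID bound for the realized component, which is the content of the proof above.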
A useful consequence of this innocent-looking result is that it lets us immediately apply PAC learning results for extremal processes to composite processes, without any penalty in the sample complexity. For instance, we have the following corollary:

Corollary 1 Let F have finite VC dimension, with distribution-free sample complexity s(ϵ, δ, F, M) for all IID measures µ ∈ M. Then the class P of exchangeable processes composed from M admits distribution-free PAC learning with the same sample complexity, s(ϵ, δ, F, P) = s(ϵ, δ, F, M).

This is in contrast with, for instance, the results obtained by [2, 3], which both go from IID sequences (laws in M) to exchangeable ones (laws in P) at the cost of considerably increased sample complexity. The easiest point of comparison is with Theorem 4.2 of [3], which, in our notation, shows that s(ϵ, δ, P) ≤ s(ϵ/2, δϵ, M). We pay no such penalty in sample complexity because our target of learning is E_µ[f | I], not E_ρ[f | X_1^n]. This means we do not have to use data points to narrow the posterior distribution π(µ | X_1^n), and that we can give a much more direct argument. In [3], Pestov asks whether "one [can] maintain the initial sample complexity" when going from IID to exchangeable sequences; the answer is yes, if one picks the right target. This holds whenever the data-generating process is a mixture of extremal processes for which learning is possible. A particularly important special case is that of absolutely regular processes.

4 Mixtures of Absolutely Regular Processes

Roughly speaking, an absolutely regular process is one which is asymptotically independent in a particular sense: the joint distribution of past and future events approaches, in L1, the product of the marginal distributions as the time-lag between past and future grows. These processes are particularly important for PAC learning, since much of the existing IID learning theory translates directly to this setting.
To be precise, let X_{−∞}^∞ be a two-sided⁶ stationary process. The β-dependence coefficient at lag k is

β(k) ≡ ||P_{−∞}^0 ⊗ P_k^∞ − P_{−(1:k)}||_TV    (5)

where P_{−(1:k)} is the joint distribution of X_{−∞}^0 and X_k^∞; that is, β(k) is the total variation distance between the actual joint distribution of past and future and the product of their marginals. Equivalently [31, 32],

β(k) = E[ sup_{B∈σ(X_k^∞)} |P(B | X_{−∞}^0) − P(B)| ]    (6)

which, roughly, is the expected total variation distance between the marginal distribution of the future and its distribution conditional on the past. As is well known, β(k) is non-increasing in k, and of course ≥ 0, so β(k) must have a limit as k → ∞; it will be convenient to abbreviate this as β(∞). When β(∞) = 0, the process is said to be beta-mixing, or absolutely regular. All absolutely regular processes are also ergodic [32].

The importance of absolutely regular processes for learning comes essentially from a result due to Yu [4]. Let X_1^n be part of a sample path from an absolutely regular process µ, whose dependence coefficients are β(k). Fix integers m and a such that 2ma = n, so that the sequence is divided into 2m contiguous blocks of length a, and define µ^(m,a) to be the distribution of m length-a blocks. (That is, µ^(m,a) approximates µ by IID blocks.) Then |µ(C) − µ^(m,a)(C)| ≤ (m − 1)β(a) [4, Lemma 4.1]. Since in particular the event C could be taken to be {Γ_n > ϵ}, this approximation result allows distribution-free learning bounds for IID processes to translate directly into distribution-free learning bounds for absolutely regular processes with bounded β coefficients.

⁶ We have worked with one-sided processes so far, but the devices for moving between the two representations are standard, and this definition is more easily stated in its two-sided version.

If M contains only absolutely regular processes, then a measure π on M creates a ρ which is a mixture of absolutely regular processes, or a MAR process. It is easy to see that absolute regularity of the component processes (β_µ(k) → 0) does not imply absolute regularity of the mixture process (β_ρ(k) ̸→ 0).
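The blocking construction behind Yu's lemma can be sketched in a few lines; `alternate_blocks` is a hypothetical helper name:

```python
# A minimal sketch of the blocking device behind Yu's lemma: split X_1^n into
# 2m contiguous blocks of length a and keep every other block, so that for a
# beta-mixing process the kept blocks are nearly independent (the discrepancy
# from genuinely IID blocks is at most (m - 1) * beta(a)).
def alternate_blocks(xs, m, a):
    assert len(xs) == 2 * m * a
    blocks = [xs[i * a:(i + 1) * a] for i in range(2 * m)]
    # The m odd-numbered blocks, each separated by a gap of length a.
    return blocks[0::2]

xs = list(range(24))
kept = alternate_blocks(xs, m=3, a=4)
assert kept == [[0, 1, 2, 3], [8, 9, 10, 11], [16, 17, 18, 19]]
```

An IID-process bound is then applied to the m kept blocks, and the (m − 1)β(a) term pays for pretending they are independent.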
To see this, suppose that M consists of two processes: µ0, which puts unit probability mass on the sequence of all zeros, and µ1, which puts unit probability on the sequence of all ones; and that π gives these two equal probability. Then β_{µi}(k) = 0 for both i, but the past and the future of ρ are not independent of each other.

More generally, suppose π is supported on just two absolutely regular processes, µ and µ′. For each such µ, there exists a set of typical sequences T_µ ⊂ X^∞, in which the infinite sample path of µ lies almost surely⁷, and these sets do not overlap⁸, T_µ ∩ T_{µ′} = ∅. This implies that ρ(T_µ) = π(µ), but

ρ(T_µ | X_{−∞}^0) = 1 if X_{−∞}^0 ∈ T_µ, and 0 otherwise.

Thus the change in the probability of T_µ due to conditioning on the past is π({µ}) if the selected component was µ′, and 1 − π({µ}) = π({µ′}) if the selected component was µ. Reasoning in parallel for T_{µ′}, and writing π1 = π({µ}) and π2 = π({µ′}), the average change in probability is

π1(1 − π1) + π2(1 − π2) = 2π1(1 − π1)

and this must be β_ρ(∞). Similar reasoning when π is supported on q extremal processes shows

β_ρ(∞) = Σ_{i=1}^q πi(1 − πi)

so that the general case is

β_ρ(∞) = ∫ [1 − π({µ})] dπ(µ)

This implies that if π has no atoms, β_ρ(∞) = 1 always. Since β_ρ(k) is non-increasing, β_ρ(k) = 1 for all k, for a continuous mixture of absolutely regular processes. Conceptually, this arises because of the use of infinite-dimensional distributions for both past and future in the definition of the β-dependence coefficient. Having seen an infinite past is sufficient, for an ergodic process, to identify the process, and of course the future must be a continuation of this past. MARs thus display a rather odd separation between the properties of individual sample paths, which approach independence asymptotically in time, and the ensemble-level behavior, where there is ineradicable dependence, and indeed maximal dependence for continuous mixtures.
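The closed-form expression for β_ρ(∞) derived above is easy to check numerically; `beta_inf` is a hypothetical helper:

```python
from fractions import Fraction

# Sketch of the computation in the text: for a mixture over q absolutely
# regular components with atomic weights pi_i, the long-lag dependence of
# the mixture is beta_rho(inf) = sum_i pi_i * (1 - pi_i).
def beta_inf(weights):
    assert sum(weights) == 1
    return sum(p * (1 - p) for p in weights)

# Two equally weighted components: 2 * (1/2) * (1/2) = 1/2.
assert beta_inf([Fraction(1, 2), Fraction(1, 2)]) == Fraction(1, 2)

# Finer and finer mixtures drive beta_rho(inf) toward 1, matching the
# claim that an atomless pi gives beta_rho(inf) = 1.
for q in (10, 100, 1000):
    assert beta_inf([Fraction(1, q)] * q) == 1 - Fraction(1, q)
```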
Any one realization of a MAR, no matter how long, is indistinguishable from a realization of an absolutely regular process which is a component of the mixture. The distinction between a MAR and a single absolutely regular process only becomes apparent at the level of ensembles of paths. It is desirable to characterize MARs. They are stationary, but non-ergodic, and have non-vanishing β(∞). However, this is not sufficient to characterize them. Bernoulli shifts are stationary and ergodic, but not absolutely regular⁹. It follows that a mixture of Bernoulli shifts will be stationary, and by the preceding argument will have positive β(∞), but will not be a MAR. Roughly speaking, given the infinite past of a MAR, events in the future become asymptotically independent as the separation between them increases¹⁰. A more precise statement needs to control the approach to independence of the component processes in a MAR. We say that ρ is a β̃-uniform MAR when ρ is a MAR and, for π-almost-all µ, β_µ(k) ≤ β̃(k), with β̃(k) → 0. Then if we condition on finite histories X_{−n}^0 and let n grow, widely separated future events become asymptotically independent.

⁷ Since X is "standard", so is X^∞, and the latter's σ-field σ(X_{−∞}^∞) has a countable generating basis, say B. For each B ∈ B, the set T_{µ,B} = {x ∈ X^∞ : lim_{n→∞} n^{−1} Σ_{t=0}^{n−1} 1_B(τ^t x) = µ(B)} exists, is measurable, is dynamically invariant, and, by Birkhoff's ergodic theorem, µ(T_{µ,B}) = 1 [18, §7.9]. Then T_µ ≡ ∩_{B∈B} T_{µ,B} also has µ-measure 1, because B is countable.
⁸ Since µ ̸= µ′, there exists at least one set C with µ(C) ̸= µ′(C). The set T_{µ,C} then cannot overlap at all with the set T_{µ′,C}, and so T_µ cannot intersect T_{µ′}.
⁹ They are, however, mixing in the sense of ergodic theory [28].
¹⁰ ρ-almost-surely, X_{−∞}^0 belongs to the typical set of a unique absolutely regular process, say µ, and so the posterior concentrates on that µ, π(· | X_{−∞}^0) = δ_µ.
Hence ρ(X_1^l, X_{l+k}^∞ | X_{−∞}^0) = µ(X_{−∞}^l, X_{l+k}^∞), which tends to µ(X_{−∞}^l) µ(X_{l+k}^∞) as k → ∞, because µ is absolutely regular.

Theorem 3 A stationary process ρ is a β̃-uniform MAR if and only if

lim_{k→∞} lim_{n→∞} E[ sup_l sup_{B∈σ(X_{k+l}^∞)} |ρ(B | X_1^l, X_{−n}^0) − ρ(B | X_{−n}^0)| ] = 0    (7)

Before proceeding to the proof, it is worth remarking on the order of the limits: for finite n, conditioning on X_{−n}^0 still gives a MAR, not a single (albeit random) absolutely regular process. Hence the k → ∞ limit for fixed n would always be positive, and indeed 1 for a continuous π.

PROOF "Only if": Rewrite Eq. 7, expressing ρ in terms of π and the extremal processes:

lim_{k→∞} lim_{n→∞} E[ sup_l sup_{B∈σ(X_{k+l}^∞)} |∫ (µ(B | X_1^l, X_{−n}^0) − µ(B | X_{−n}^0)) dπ(µ | X_{−n}^0)| ]

As n → ∞, µ(B | X_{−n}^0) → µ(B | X_{−∞}^0), and µ(B | X_1^l, X_{−n}^0) → µ(B | X_{−∞}^l). But, in expectation, both of these are within β̃(k) of µ(B), and so within 2β̃(k) of each other.

"If": Consider the contrapositive. If ρ is not a uniform MAR, either it is a non-uniform MAR, or it is not a MAR at all. If it is not a uniform MAR, then no matter what function β̃(k) tending to zero we propose, the set of µ with β_µ ≥ β̃ must have positive π measure, i.e., a positive-measure set of processes must converge arbitrarily slowly. Therefore there must exist a set B (or a sequence of such sets) witnessing this arbitrarily slow convergence, and hence the limit in Eq. 7 will be strictly positive. If ρ is not a MAR at all, we know from the ergodic decomposition of stationary processes that it must be a mixture of ergodic processes, and so it must give positive π weight to processes which are not absolutely regular at all, i.e., µ for which β_µ(∞) > 0. The witnessing events B for these processes a fortiori drive the limit in Eq. 7 above zero. □

5 Discussion and future work

We have shown that with the right conditioning, a natural and useful notion of predictive PAC emerges.
This notion is natural in the sense that, for a learner sampling from a mixture of ergodic processes, the only thing that matters is the future behavior of the component he is "stuck" in, and certainly not the process average over all the components. This insight enables us to adapt the recent PAC bounds for mixing processes to mixtures of such processes. An interesting question then is to characterize those processes that are convex mixtures of a given kind of ergodic process (de Finetti's theorem was the first such characterization). In this paper, we have addressed this question for mixtures of uniformly absolutely regular processes. Another fascinating question is how to extend predictive PAC to real-valued functions [33, 34].

References

[1] Mathukumalli Vidyasagar. A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Springer-Verlag, Berlin, 1997.
[2] Patrizia Berti and Pietro Rigo. A Glivenko-Cantelli theorem for exchangeable random variables. Statistics and Probability Letters, 32:385–391, 1997.
[3] Vladimir Pestov. Predictive PAC learnability: A paradigm for learning from exchangeable input data. In Proceedings of the 2010 IEEE International Conference on Granular Computing (GrC 2010), pages 387–391, Los Alamitos, California, 2010. IEEE Computer Society. URL http://arxiv.org/abs/1006.1129.
[4] Bin Yu. Rates of convergence for empirical processes of stationary mixing sequences. Annals of Probability, 22:94–116, 1994. URL http://projecteuclid.org/euclid.aop/1176988849.
[5] M. Vidyasagar. Learning and Generalization: With Applications to Neural Networks. Springer-Verlag, Berlin, second edition, 2003.
[6] Terrence M. Adams and Andrew B. Nobel. Uniform convergence of Vapnik-Chervonenkis classes under ergodic sampling. Annals of Probability, 38:1345–1367, 2010. URL http://arxiv.org/abs/1010.3162.
[7] Ramon van Handel. The universal Glivenko-Cantelli property.
Probability Theory and Related Fields, 155:911–934, 2013. doi: 10.1007/s00440-012-0416-5. URL http://arxiv.org/abs/1009.4434.
[8] Dharmendra S. Modha and Elias Masry. Memory-universal prediction of stationary random processes. IEEE Transactions on Information Theory, 44:117–133, 1998. doi: 10.1109/18.650998.
[9] Ron Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, 39:5–34, 2000. URL http://www.ee.technion.ac.il/~rmeir/Publications/MeirTimeSeries00.pdf.
[10] Rajeeva L. Karandikar and Mathukumalli Vidyasagar. Rates of uniform convergence of empirical means with mixing processes. Statistics and Probability Letters, 58:297–307, 2002. doi: 10.1016/S0167-7152(02)00124-4.
[11] David Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Transactions on Information Theory, 49:338–345, 2003. doi: 10.1145/307400.307478.
[12] Ingo Steinwart and Andreas Christmann. Fast learning from non-i.i.d. observations. In Y. Bengio, D. Schuurmans, John Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22 [NIPS 2009], pages 1768–1776. MIT Press, Cambridge, Massachusetts, 2009. URL http://books.nips.cc/papers/files/nips22/NIPS2009_1061.pdf.
[13] Mehryar Mohri and Afshin Rostamizadeh. Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research, 11, 2010. URL http://www.jmlr.org/papers/v11/mohri10a.html.
[14] Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-I.I.D. processes. In Daphne Koller, D. Schuurmans, Y. Bengio, and Léon Bottou, editors, Advances in Neural Information Processing Systems 21 [NIPS 2008], pages 1097–1104, 2009. URL http://books.nips.cc/papers/files/nips21/NIPS2008_0419.pdf.
[15] Pierre Alquier and Olivier Wintenberger. Model selection for weakly dependent time series forecasting. Bernoulli, 18:883–913, 2012. doi: 10.3150/11-BEJ359. URL http://arxiv.org/abs/0902.2924.
[16] Ben London, Bert Huang, and Lise Getoor. Improved generalization bounds for large-scale structured prediction. In NIPS Workshop on Algorithmic and Statistical Approaches for Large Social Networks, 2012. URL http://linqs.cs.umd.edu/basilic/web/Publications/2012/london:nips12asalsn/.
[17] Ben London, Bert Huang, Benjamin Taskar, and Lise Getoor. Collective stability in structured prediction: Generalization from one example. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning [ICML-13], volume 28, pages 828–836, 2013. URL http://jmlr.org/proceedings/papers/v28/london13.html.
[18] Robert M. Gray. Probability, Random Processes, and Ergodic Properties. Springer-Verlag, New York, second edition, 2009. URL http://ee.stanford.edu/~gray/arp.html.
[19] E. B. Dynkin. Sufficient statistics and extreme points. Annals of Probability, 6:705–730, 1978. URL http://projecteuclid.org/euclid.aop/1176995424.
[20] Steffen L. Lauritzen. Extreme point models in statistics. Scandinavian Journal of Statistics, 11:65–91, 1984. URL http://www.jstor.org/pss/4615945. With discussion and response.
[21] Olav Kallenberg. Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York, 2005.
[22] Persi Diaconis and David Freedman. De Finetti's theorem for Markov chains. Annals of Probability, 8:115–130, 1980. doi: 10.1214/aop/1176994828. URL http://projecteuclid.org/euclid.aop/1176994828.
[23] R. M. Dudley. A course on empirical processes. In École d'été de probabilités de Saint-Flour, XII–1982, volume 1097 of Lecture Notes in Mathematics, pages 1–142. Springer, Berlin, 1984.
[24] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36:929–965, 1989. doi: 10.1145/76359.76371. URL http://users.soe.ucsc.edu/~manfred/pubs/J14.pdf.
[25] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005. URL http://www.numdam.org/item?id=PS_2005__9__323_0.
[26] Frank B. Knight. A predictive view of continuous time processes. Annals of Probability, 3:573–596, 1975. URL http://projecteuclid.org/euclid.aop/1176996302.
[27] William Bialek, Ilya Nemenman, and Naftali Tishby. Predictability, complexity and learning. Neural Computation, 13:2409–2463, 2001. URL http://arxiv.org/abs/physics/0007070.
[28] Andrzej Lasota and Michael C. Mackey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Springer-Verlag, Berlin, 1994. First edition, Probabilistic Properties of Deterministic Systems, Cambridge University Press, 1985.
[29] Jérôme Dedecker, Paul Doukhan, Gabriel Lang, José Rafael León R., Sana Louhichi, and Clémentine Prieur. Weak Dependence: With Examples and Applications. Springer, New York, 2007.
[30] Norbert Wiener. Nonlinear prediction and dynamics. In Jerzy Neyman, editor, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 3, pages 247–252, Berkeley, 1956. University of California Press. URL http://projecteuclid.org/euclid.bsmsp/1200502197.
[31] Paul Doukhan. Mixing: Properties and Examples. Springer-Verlag, New York, 1995.
[32] Richard C. Bradley. Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2:107–144, 2005. URL http://arxiv.org/abs/math/0511078.
[33] Noga Alon, Shai Ben-David, Nicolò Cesa-Bianchi, and David Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44:615–631, 1997. doi: 10.1145/263867.263927. URL http://tau.ac.il/~nogaa/PDFS/learn3.pdf.
[34] Peter L. Bartlett and Philip M. Long. Prediction, learning, uniform convergence, and scale-sensitive dimensions. Journal of Computer and Systems Science, 56:174–190, 1998.
doi: 10.1006/jcss.1997.1557. URL http://www.phillong.info/publications/more_theorems.pdf.
Scalable Inference for Logistic-Normal Topic Models

Jianfei Chen, Jun Zhu, Zi Wang, Xun Zheng and Bo Zhang
State Key Lab of Intelligent Tech. & Systems; Tsinghua National TNList Lab; Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
{chenjf10,wangzi10}@mails.tsinghua.edu.cn; {dcszj,dcszb}@mail.tsinghua.edu.cn; xunzheng@cs.cmu.edu

Abstract

Logistic-normal topic models can effectively discover correlation structures among latent topics. However, their inference remains a challenge because of the non-conjugacy between the logistic-normal prior and multinomial topic mixing proportions. Existing algorithms either make restricting mean-field assumptions or are not scalable to large-scale applications. This paper presents a partially collapsed Gibbs sampling algorithm that approaches the provably correct distribution by exploring the ideas of data augmentation. To improve time efficiency, we further present a parallel implementation that can deal with large-scale applications and learn the correlation structures of thousands of topics from millions of documents. Extensive empirical results demonstrate the promise.

1 Introduction

In Bayesian models, though conjugate priors normally result in easier inference problems, non-conjugate priors could be more expressive in capturing desired model properties. One popular example is admixture topic models, which have obtained much success in discovering latent semantic structures from data. For the most popular latent Dirichlet allocation (LDA) [5], a Dirichlet distribution is used as the conjugate prior for multinomial mixing proportions. But a Dirichlet prior is unable to model topic correlation, which is important for understanding/visualizing the semantic structures of complex data, especially in large-scale applications.
One elegant extension of LDA is the logistic-normal topic models (aka correlated topic models, CTMs) [3], which use a logistic-normal prior to capture the correlation structures among topics effectively. Along this line, many subsequent extensions have been developed, including dynamic topic models [4] that deal with time series via a dynamic linear system on the Gaussian variables, and infinite CTMs [11] that can infer the number of topics from data. The modeling flexibility comes with computational cost. Although significant progress has been made on developing scalable inference algorithms for LDA using either distributed [10, 16, 1] or online [7] learning methods, the inference of logistic-normal topic models still remains a challenge, due to the non-conjugate priors. Existing algorithms on learning logistic-normal topic models mainly rely on approximate techniques, e.g., variational inference with unwarranted mean-field assumptions [3]. Although variational methods have a deterministic objective to optimize and are usually efficient, they can only achieve an approximate solution. If the mean-field assumptions are not made appropriately, the approximation can be unsatisfactory. Furthermore, existing algorithms can only deal with small corpora and learn a limited number of topics. It is important to develop scalable algorithms in order to apply the models to large collections of documents, which are becoming increasingly common in both scientific and engineering fields. To address the limitations listed above, we develop a scalable Gibbs sampling algorithm for logistic-normal topic models, without making any restricting assumptions on the posterior distribution.
Technically, to deal with the non-conjugate logistic-normal prior, we introduce auxiliary Polya-Gamma variables [13], following the statistical ideas of data augmentation [17, 18, 8]; the augmented posterior distribution leads to conditional distributions from which we can draw samples easily, without accept/reject steps. Moreover, the auxiliary variables are locally associated with each individual document, and this locality naturally allows us to develop a distributed sampler by splitting the documents into multiple subsets and allocating them to multiple machines. The global statistics can be updated asynchronously without sacrificing the predictive ability on unseen testing documents. We successfully apply the scalable inference algorithm to learning a correlation graph of thousands of topics on large corpora with millions of documents. These results are the largest automatically learned topic correlation structures to our knowledge.

2 Logistic-Normal Topic Models

Let W = {w_d}_{d=1}^D be a set of documents, where w_d = {w_dn}_{n=1}^{N_d} denotes the words appearing in document d of length N_d. A hierarchical Bayesian topic model posits each document as an admixture of K topics, where each topic Φ_k is a multinomial distribution over a V-word vocabulary. For a logistic-normal topic model (e.g., CTM), the generating process of document d is:

η_d ∼ N(µ, Σ),   θ_d^k = e^{η_d^k} / Σ_{j=1}^K e^{η_d^j},   ∀n ∈ {1, ..., N_d}: z_dn ∼ Mult(θ_d), w_dn ∼ Mult(Φ_{z_dn}),

where Mult(·) denotes the multinomial distribution; z_dn is a K-dimensional binary vector with only one non-zero element; and Φ_{z_dn} denotes the topic selected by the non-zero entry of z_dn. For Bayesian CTM, the topics are samples drawn from a prior, e.g., Φ_k ∼ Dir(β), where Dir(·) is a Dirichlet distribution. Note that for identifiability, normally we assume η_d^K = 0. Given a set of documents W, CTM infers the posterior distribution p(η, Z, Φ | W) ∝ p_0(η, Z, Φ) p(W | Z, Φ) by Bayes' rule.
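The generating process above can be sketched as follows; the dimensions, mean µ, and uniform topics Φ are made up, and a diagonal Σ is assumed for brevity:

```python
import math
import random

random.seed(2)

# Minimal sketch of the CTM generative process for one document, with
# made-up dimensions: K - 1 free logistic-normal coordinates (eta^K = 0
# for identifiability), a softmax to get theta_d, then multinomial draws.
K, V, Nd = 3, 5, 50
mu = [0.5, -0.5]                          # K - 1 free coordinates
Phi = [[1.0 / V] * V for _ in range(K)]   # topics; uniform here for simplicity

eta = [random.gauss(m, 1.0) for m in mu] + [0.0]   # diagonal Sigma assumed
Z = sum(math.exp(e) for e in eta)
theta = [math.exp(e) / Z for e in eta]             # topic proportions

doc = []
for _ in range(Nd):
    z = random.choices(range(K), weights=theta)[0]   # z_dn ~ Mult(theta_d)
    w = random.choices(range(V), weights=Phi[z])[0]  # w_dn ~ Mult(Phi_z)
    doc.append((z, w))

assert abs(sum(theta) - 1.0) < 1e-12
assert len(doc) == Nd
```

Correlation between topics would enter through a non-diagonal Σ, which the diagonal shortcut above deliberately omits.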
This problem is generally hard because of the non-conjugacy between the normal prior and the logistic transformation function (which can be seen as a likelihood model for θ). Existing approaches resort to approximate variational methods [3] with strict factorization assumptions. To avoid mean-field assumptions and improve the inference accuracy, below we present a partially collapsed Gibbs sampler, which is simple to implement and can be naturally parallelized for large-scale applications.

3 Gibbs Sampling with Data Augmentation

We now present a block-wise Gibbs sampling algorithm for logistic-normal topic models. To improve mixing rates, we first integrate out the Dirichlet variables Φ, by exploiting the conjugacy between a Dirichlet prior and the multinomial likelihood. Specifically, we can integrate out Φ and perform Gibbs sampling for the marginalized distribution:

p(η, Z | W) ∝ p(W | Z) Π_{d=1}^D Π_{n=1}^{N_d} θ_d^{z_dn} N(η_d | µ, Σ)
            ∝ Π_{k=1}^K [δ(C_k + β) / δ(β)] Π_{d=1}^D Π_{n=1}^{N_d} [e^{η_d^{z_dn}} / Σ_{j=1}^K e^{η_d^j}] N(η_d | µ, Σ),

where C_k^t is the number of times topic k is assigned to term t over the whole corpus; C_k = {C_k^t}_{t=1}^V; and δ(x) = [Π_{i=1}^{dim(x)} Γ(x_i)] / Γ(Σ_{i=1}^{dim(x)} x_i) is a function defined with the Gamma function Γ(·).

3.1 Sampling Topic Assignments

When the variables η = {η_d}_{d=1}^D are given, we draw samples from p(Z | η, W). In our Gibbs sampler, this is done by iteratively drawing a sample for each word in each document. The local conditional distribution is:

p(z_dn^k = 1 | Z_¬n, w_dn, W_¬dn, η) ∝ p(w_dn | z_dn^k = 1, Z_¬n, W_¬dn) e^{η_d^k}
                                    ∝ [(C_{k,¬n}^{w_dn} + β_{w_dn}) / (Σ_{j=1}^V C_{k,¬n}^j + Σ_{j=1}^V β_j)] e^{η_d^k},    (1)

where C_{·,¬n}^· indicates that term n is excluded from the corresponding document or topic.

3.2 Sampling Logistic-Normal Parameters

When the topic assignments Z are given, we draw samples from the posterior distribution p(η | Z, W) ∝ Π_{d=1}^D Π_{n=1}^{N_d} [e^{η_d^{z_dn}} / Σ_{j=1}^K e^{η_d^j}] N(η_d | µ, Σ), which is a Bayesian logistic regression model with Z as the multinomial observations.
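The local conditional in Eq. (1) can be sketched as follows, with made-up counts and a symmetric smoothing parameter β; `sample_topic` is a hypothetical helper:

```python
import math
import random

random.seed(3)

# Sketch of the local conditional in Eq. (1): given eta_d and the
# leave-one-out topic-term counts C, the probability of z_dn = k is
# proportional to the smoothed topic-word probability times exp(eta_d^k).
def sample_topic(w, eta_d, C, beta):
    K, V = len(C), len(C[0])
    weights = []
    for k in range(K):
        smoothed = (C[k][w] + beta) / (sum(C[k]) + V * beta)
        weights.append(smoothed * math.exp(eta_d[k]))
    total = sum(weights)
    probs = [x / total for x in weights]
    return random.choices(range(K), weights=probs)[0], probs

# Toy leave-one-out counts: topic 0 strongly prefers word 0.
C = [[90, 5, 5], [10, 45, 45]]
k, probs = sample_topic(0, eta_d=[0.0, 0.0], C=C, beta=0.1)
assert probs[0] > probs[1]
```

With equal η coordinates the draw is driven entirely by the counts, mirroring a collapsed LDA step; unequal η tilts it toward the document's favored topics.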
Though it is hard to draw samples directly due to non-conjugacy, we can leverage recent advances in data augmentation to solve this inference task efficiently, with analytical local conditionals for Gibbs sampling, as detailed below. Specifically, we have the likelihood of "observing" the topic assignments z_d for document d¹ as p(z_d | η_d) = Π_{n=1}^{N_d} e^{η_d^{z_dn}} / Σ_{j=1}^K e^{η_d^j}. Following Holmes & Held [8], the likelihood for η_d^k conditioned on η_d^¬k is:

ℓ(η_d^k | η_d^¬k) = Π_{n=1}^{N_d} (e^{ρ_d^k} / (1 + e^{ρ_d^k}))^{z_dn^k} (1 / (1 + e^{ρ_d^k}))^{1 − z_dn^k} = (e^{ρ_d^k})^{C_d^k} / (1 + e^{ρ_d^k})^{N_d},

where ρ_d^k = η_d^k − ζ_d^k; ζ_d^k = log(Σ_{j≠k} e^{η_d^j}); and C_d^k = Σ_{n=1}^{N_d} z_dn^k is the number of words assigned to topic k in document d. Therefore, we have the conditional distribution

p(η_d^k | η_d^¬k, Z, W) ∝ ℓ(η_d^k | η_d^¬k) N(η_d^k | µ_d^k, σ_k^2),    (2)

where µ_d^k = µ_k − Λ_{kk}^{−1} Λ_{k¬k}(η_d^¬k − µ^¬k) and σ_k^2 = Λ_{kk}^{−1}; Λ = Σ^{−1} is the precision matrix of the Gaussian distribution. This is the posterior distribution of a Bayesian logistic model with a Gaussian prior, where the z_dn^k are binary response variables. Due to the non-conjugacy between the normal prior and the logistic likelihood, we do not have an analytical form of this posterior distribution. Although standard Monte Carlo methods (e.g., rejection sampling) can be applied, they normally require a good proposal distribution and may suffer from poor acceptance rates. Data augmentation techniques have been developed: e.g., [8] presented a two-layer data augmentation representation with logistic distributions, and [9] applied another data augmentation with uniform variables and truncated Gaussian distributions, which may involve sophisticated accept/reject strategies [14]. Below, we develop a simple exact sampling method without a proposal distribution.
Our method is based on a new data augmentation representation, following recent developments in Bayesian logistic regression [13]; it is a direct data augmentation scheme with only one layer of auxiliary variables and does not need to be tuned to obtain optimal performance. Specifically, for the above posterior inference problem, we can show the following lemma.

Lemma 1 (Scale Mixture Representation). The likelihood ℓ(η_d^k | η_d^¬k) can be expressed as

(e^{ρ_d^k})^{C_d^k} / (1 + e^{ρ_d^k})^{N_d} = (1 / 2^{N_d}) e^{κ_d^k ρ_d^k} ∫_0^∞ e^{−λ_d^k (ρ_d^k)^2 / 2} p(λ_d^k | N_d, 0) dλ_d^k,

where κ_d^k = C_d^k − N_d/2 and p(λ_d^k | N_d, 0) is the Polya-Gamma distribution PG(N_d, 0).

The lemma suggests that p(η_d^k | η_d^¬k, Z, W) is the marginal of the complete distribution

p(η_d^k, λ_d^k | η_d^¬k, Z, W) ∝ (1 / 2^{N_d}) exp(κ_d^k ρ_d^k − λ_d^k (ρ_d^k)^2 / 2) p(λ_d^k | N_d, 0) N(η_d^k | µ_d^k, σ_k^2).

Therefore, we can draw samples from the complete distribution. By discarding the augmented variable λ_d^k, we get samples of the posterior distribution p(η_d^k | η_d^¬k, Z, W).

For η_d^k: we have

p(η_d^k | η_d^¬k, Z, W, λ_d^k) ∝ exp(κ_d^k ρ_d^k − λ_d^k (ρ_d^k)^2 / 2) N(η_d^k | µ_d^k, σ_k^2) = N(γ_d^k, (τ_d^k)^2),

where the posterior mean is γ_d^k = (τ_d^k)^2 (σ_k^{−2} µ_d^k + κ_d^k + λ_d^k ζ_d^k) and the variance is (τ_d^k)^2 = (σ_k^{−2} + λ_d^k)^{−1}. Therefore, we can easily draw a sample from a univariate Gaussian distribution.

For λ_d^k: the conditional distribution of the augmented variable is

p(λ_d^k | Z, W, η) ∝ exp(−λ_d^k (ρ_d^k)^2 / 2) p(λ_d^k | N_d, 0) = PG(λ_d^k; N_d, ρ_d^k),

which is again a Polya-Gamma distribution, by the construction of the general PG(a, b) class through an exponential tilting of the PG(a, 0) density [13]. To draw samples from the Polya-Gamma distribution, note that a naive implementation using the infinite sum-of-Gammas representation is not efficient and also involves a potentially inaccurate step of truncating the infinite sum.
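The Gaussian update for η_d^k can be sketched directly from the formulas above; the numerical values are loosely borrowed from the Figure 3 caption, except λ, which is made up:

```python
import math
import random

random.seed(4)

# Sketch of the augmented Gibbs update for eta_d^k: given the Polya-Gamma
# variable lambda, the conditional is Gaussian with variance
#   tau^2 = (sigma^{-2} + lambda)^{-1}
# and mean
#   gamma = tau^2 * (sigma^{-2} * mu + kappa + lambda * zeta),
# where kappa = C_d^k - N_d / 2. The lambda value below is made up.
def sample_eta(mu, sigma2, kappa, lam, zeta):
    tau2 = 1.0 / (1.0 / sigma2 + lam)
    gamma = tau2 * (mu / sigma2 + kappa + lam * zeta)
    return random.gauss(gamma, math.sqrt(tau2)), gamma, tau2

eta, gamma, tau2 = sample_eta(mu=0.40, sigma2=0.31, kappa=19 - 1155 / 2,
                              lam=250.0, zeta=5.35)
assert tau2 < 0.31   # conditioning on lambda can only shrink the variance
```

The entire non-conjugacy is absorbed into λ: conditioned on it, the update is a one-line univariate Gaussian draw.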
Here we adopt the exact method proposed in [13], which draws a $\mathcal{PG}(N_d, \rho_d^k)$ sample as the sum of $N_d$ draws from $\mathcal{PG}(1, \rho_d^k)$. Since $N_d$ is normally large, we will develop a fast and effective approximation in the next section.

¹Due to the independence, we can treat documents separately.

[Figure 3: (a) frequency of $f(z)$ with $z \sim \mathcal{PG}(m, \rho)$; and (b) frequency of samples from $\eta_d^k \sim p(\eta_d^k \mid \eta_d^{\neg k}, Z, W)$, for $m \in \{1, 2, 4, 8\}$ and the exact $m = n$. Though $z$ is not from the exact distribution, the distribution of $\eta_d^k$ is very accurate. The parameters $\rho_d^k = -4.19$, $C_d^k = 19$, $N_d = 1155$, $\mu_d^k = 0.40$, $\sigma_k^2 = 0.31$, and $\zeta = 5.35$ are from a real distribution when training on the NIPS data set.]

3.3 Fully-Bayesian Models

We can treat $\mu$ and $\Sigma$ as random variables and perform fully-Bayesian inference, using the conjugate Normal-Inverse-Wishart prior $p_0(\mu, \Sigma) = \mathcal{NIW}(\mu_0, \rho, \kappa, W)$, that is,
$$\Sigma \mid \kappa, W \sim \mathcal{IW}(\Sigma; \kappa, W^{-1}), \qquad \mu \mid \Sigma, \mu_0, \rho \sim \mathcal{N}(\mu; \mu_0, \Sigma/\rho),$$
where
$$\mathcal{IW}(\Sigma; \kappa, W^{-1}) = \frac{|W|^{\kappa/2}}{2^{\kappa M/2}\,\Gamma_M(\kappa/2)}\, |\Sigma|^{-\frac{\kappa+M+1}{2}} \exp\big(-\tfrac{1}{2}\mathrm{Tr}(W\Sigma^{-1})\big)$$
is the inverse Wishart distribution and $(\mu_0, \rho, \kappa, W)$ are hyper-parameters. Then the conditional distribution is
$$p(\mu, \Sigma \mid \eta, Z, W) \propto p_0(\mu, \Sigma) \prod_d p(\eta_d \mid \mu, \Sigma) = \mathcal{NIW}(\mu_0', \rho', \kappa', W'), \qquad (3)$$
which is still a Normal-Inverse-Wishart distribution by conjugacy, with parameters $\mu_0' = \frac{\rho}{\rho+D}\mu_0 + \frac{D}{\rho+D}\bar\eta$, $\rho' = \rho + D$, $\kappa' = \kappa + D$, and $W' = W + Q + \frac{\rho D}{\rho+D}(\bar\eta - \mu_0)(\bar\eta - \mu_0)^\top$, where $\bar\eta = \frac{1}{D}\sum_d \eta_d$ is the empirical mean of the data and $Q = \sum_d (\eta_d - \bar\eta)(\eta_d - \bar\eta)^\top$.
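The conjugate update in Eq. (3) is a few lines of code; the per-document sums below are exactly the quantities aggregated across machines in the parallel implementation of Section 4 (an illustrative sketch):

```python
import numpy as np

def niw_posterior(eta, mu0, rho, kappa, W):
    """Conjugate NIW update of Eq. (3), given a D x M matrix of per-document eta_d."""
    D = eta.shape[0]
    eta_bar = eta.mean(axis=0)                    # empirical mean
    Q = (eta - eta_bar).T @ (eta - eta_bar)       # scatter matrix
    mu0p = (rho * mu0 + D * eta_bar) / (rho + D)
    rhop = rho + D
    kappap = kappa + D
    Wp = W + Q + (rho * D / (rho + D)) * np.outer(eta_bar - mu0, eta_bar - mu0)
    return mu0p, rhop, kappap, Wp

rng = np.random.default_rng(2)
eta = rng.standard_normal((100, 3))
mu0p, rhop, kappap, Wp = niw_posterior(eta, np.zeros(3), 1.0, 3.0, np.eye(3))
assert rhop == 101 and kappap == 103
assert np.all(np.linalg.eigvalsh(Wp) > 0)   # W' stays symmetric positive definite
```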
4 Parallel Implementation and Fast Approximate Sampling

The above Gibbs sampler can be naturally parallelized to extract large correlation graphs from millions of documents, due to the following observations. First, the $\eta_d$ and $\lambda_d$ are conditionally independent given $\mu$ and $\Sigma$, which makes it natural to distribute documents over machines and infer the local $\eta_d$ and $\lambda_d$; no communication is needed for this sampling step. Second, the global variables $\mu$ and $\Sigma$ can be inferred and broadcast to every machine after each iteration. As mentioned in Section 3.3, this involves: 1) computing the NIW posterior parameters, and 2) sampling from Eq. (3). Notice that the $\eta_d$ contribute to the posterior parameters $\mu_0'$ and $W'$ only through simple summations, so we can perform local summation on each machine, followed by a global aggregation. Similarly, the NIW sample can be drawn distributively: after broadcasting $W'$, the sample covariance of $x_1, \ldots, x_{\kappa'}$ drawn from $\mathcal{N}(x \mid 0, W')$ is computed across machines. Finally, the topic assignments $z_d$ are conditionally independent given the topic counts $C_k$. We synchronize the $C_k$ globally by leveraging recent advances in scalable inference for LDA [1, 16], which provide a general framework for synchronizing such counts.

To further speed up the inference algorithm, we designed a fast approximate sampling method to draw $\mathcal{PG}(n, \rho)$ samples, reducing the time complexity from $O(n)$ in [13] to $O(1)$. Specifically, Polson et al. [13] show how to efficiently generate $\mathcal{PG}(1, \rho)$ random variates. By the additive property of the Polya-Gamma distribution, $y \sim \mathcal{PG}(n, \rho)$ if $x_i \sim \mathcal{PG}(1, \rho)$ independently and $y = \sum_{i=1}^n x_i$. However, this sampler can be slow when $n$ is large; for our Gibbs sampler, $n$ is the document length, often in the hundreds. Fortunately, an effective approximation achieves constant-time sampling of PG variates: since $n$ is relatively large, the sum $y$ should be almost normally distributed, by the central limit theorem. Fig. 3(a) confirms this intuition.
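A sketch of the resulting constant-time sampler: draw one cheap PG variate and rescale it to match the mean and variance of $\mathcal{PG}(n, \rho)$. The base $\mathcal{PG}(1, \rho)$ draw below uses a naive truncated sum-of-gammas representation purely for illustration; the paper uses the exact sampler of [13]:

```python
import numpy as np

rng = np.random.default_rng(3)

def pg_naive(b, c, trunc=200, size=1):
    """Truncated sum-of-gammas PG(b, c) draw (naive base sampler, for illustration)."""
    k = np.arange(1, trunc + 1)
    denom = (k - 0.5) ** 2 + c ** 2 / (4 * np.pi ** 2)
    g = rng.gamma(b, 1.0, size=(size, trunc))
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

def pg_mean(b, c):
    """E[PG(b, c)] = b tanh(c/2) / (2c), with the limit b/4 at c = 0."""
    return b / (2.0 * c) * np.tanh(c / 2.0) if c != 0 else b / 4.0

def pg_approx(n, c, m=1, size=1):
    """Approximate PG(n, c) by moment-matching a PG(m, c) draw: O(1) in n,
    since Var(z)/Var(y) = m/n by the additive property."""
    z = pg_naive(m, c, size=size)
    return np.sqrt(n / m) * (z - pg_mean(m, c)) + pg_mean(n, c)

n, c = 1155, -4.19                    # magnitudes as in Fig. 3
y = pg_approx(n, c, m=1, size=50_000)
assert abs(y.mean() - pg_mean(n, c)) / pg_mean(n, c) < 0.05
```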
Consider another PG variable $z \sim \mathcal{PG}(m, \rho)$. If both $m$ and $n$ are large, $y$ and $z$ are both approximately normally distributed, so we can apply a simple linear transformation of $z$ to approximate $y$. Specifically, we use
$$f(z) = \sqrt{\mathrm{Var}(y)/\mathrm{Var}(z)}\,(z - \mathbb{E}[z]) + \mathbb{E}[y],$$
where $\mathbb{E}[y] = \frac{n}{2\rho}\tanh(\rho/2)$ from [12], and $\frac{\mathrm{Var}(z)}{\mathrm{Var}(y)} = \frac{m}{n}$ since both $y$ and $z$ are sums of $\mathcal{PG}(1, \rho)$ variates. It can be shown that $f(z)$ and $y$ have the same mean and variance. In practice, we found that even with $m = 1$ the algorithm still draws good samples from $p(\eta_d^k \mid \eta_d^{\neg k}, Z, W)$ (see Fig. 3(b)). Hence, we are able to speed up the Polya-Gamma sampling process significantly by applying this approximation. More empirical analysis can be found in the appendix.

Furthermore, we can perform sparsity-aware fast sampling [19] in the Gibbs sampler. Specifically, let
$$A_k = \frac{C_{k,\neg n}^{w_{dn}}}{\sum_{j=1}^V C_{k,\neg n}^j + \sum_{j=1}^V \beta_j}\, e^{\eta_d^k}, \qquad B_k = \frac{\beta_{w_{dn}}}{\sum_{j=1}^V C_{k,\neg n}^j + \sum_{j=1}^V \beta_j}\, e^{\eta_d^k};$$
then Eq. (1) can be written as $p(z_{dn}^k = 1 \mid Z_{\neg n}, w_{dn}, W_{\neg dn}, \eta) \propto A_k + B_k$. Let $Z_A = \sum_k A_k$ and $Z_B = \sum_k B_k$. The sampling of $z_{dn}$ can then be done by sampling from $\mathrm{Mult}(\frac{A}{Z_A})$ or $\mathrm{Mult}(\frac{B}{Z_B})$, due to the fact
$$p(z_{dn}^k = 1 \mid Z_{\neg n}, w_{dn}, W_{\neg dn}, \eta) = \frac{A_k}{Z_A + Z_B} + \frac{B_k}{Z_A + Z_B} = (1-p)\,\frac{A_k}{Z_A} + p\,\frac{B_k}{Z_B}, \qquad (4)$$
where $p = \frac{Z_B}{Z_A + Z_B}$. Note that Eq. (4) is a marginalization with respect to an auxiliary binary variable. Thus a sample of $z_{dn}$ can be drawn by flipping a coin that lands heads with probability $p$: if it is tails, we draw $z_{dn}$ from $\mathrm{Mult}(\frac{A}{Z_A})$; otherwise from $\mathrm{Mult}(\frac{B}{Z_B})$. The advantage is that we only need to consider the non-zero entries of $A$ to sample from $\mathrm{Mult}(\frac{A}{Z_A})$, and $A$ has few non-zero entries due to the sparsity of the topic counts $C_k$. Thus, the time complexity is reduced from $O(K)$ to $O(s(K))$, where $s(K)$ is the average number of non-zero entries in $C_k$. In practice, $C_k$ is very sparse, hence $s(K) \ll K$ when $K$ is large. To sample from $\mathrm{Mult}(\frac{B}{Z_B})$, we iterate over all $K$ potential assignments.
But since $p$ is typically small, this $O(K)$ step is rarely taken and its cost is acceptable. With the above techniques, the per-document time complexity of the Gibbs sampler is $O(N_d s(K))$ for sampling $z_d$, $O(K^2)$ for computing $(\mu_d^k, \sigma_k^2)$, and $O(SK)$ for sampling $\eta_d$ with Eq. (2), where $S$ is the number of sub-burn-in steps for sampling $\eta_d^k$. Thus the overall time complexity is $O(N_d s(K) + K^2 + SK)$, which is higher than the $O(N_d s(K))$ complexity of LDA [1] when $K$ is large, indicating the cost of CTM's enriched representation compared to LDA.

5 Experiments

We now present qualitative and quantitative evaluations to demonstrate the efficacy and scalability of the Gibbs sampler for CTM (denoted gCTM). Experiments are conducted on a 40-node cluster, where each node is equipped with two 6-core CPUs (2.93 GHz). For all experiments, unless explicitly mentioned otherwise, we set the hyper-parameters as β = 0.01, T = 350, S = 8, m = 1, ρ = κ = 0.01D, µ0 = 0, and W = κI, where T is the number of burn-in steps. We use M to denote the number of machines and P the number of CPU cores. For baselines, we compare with the variational CTM (vCTM) [3] and the state-of-the-art LDA implementation, Yahoo! LDA (Y!LDA) [1]. For a fair comparison, for both vCTM and gCTM we select T such that the models converge sufficiently, as discussed in Section 5.3.

Data Sets: Experiments are conducted on several benchmark data sets, including the NIPS paper abstracts, 20Newsgroups, and NYTimes (New York Times) corpora from [2] and the Wikipedia corpus from [20]. All data sets are randomly split into training and testing sets. Following the settings in [3], we partition each document in the testing set into an observed part and a held-out part.

5.1 Qualitative Evaluation

We first examine the correlation structure of 1,000 topics learned by CTM using our scalable sampler on the NYTimes corpus with 285,000 documents.
Since the entire correlation graph is too large, we build a 3-layer hierarchy by clustering the learned topics, with their learned correlation strength as the similarity measure. Fig. 4 shows a part of the hierarchy², where subgraph A represents the top layer with 10 clusters. Subgraphs B and C are two second-layer clusters, and D and E are two correlation subgraphs consisting of leaf nodes (i.e., learned topics). To represent their semantic meanings, we present the 4 most frequent words for each topic; for each topic cluster, we also show the most frequent words of a hyper-topic that aggregates all the included topics. On the top layer, the font size of each word in a word cloud is proportional to its frequency in the hyper-topic. Clearly, many topics have strong correlations, and the structure helps humans understand and browse the large collection of topics. With 40 machines, our parallel Gibbs sampler finishes training in 2 hours, meaning that we are able to process a real-world corpus at considerable speed. More details on scalability are provided below.

²The entire correlation graph can be found at http://ml-thu.net/~scalable-ctm

[Figure 4: A hierarchical visualization of the correlation graph with 1,000 topics learned from 285,000 articles of the NYTimes. A denotes the top-layer subgraph with 10 big clusters; B and C denote two second-layer clusters; and D and E are two subgraphs with leaf nodes (i.e., topics); numbers on cluster nodes (e.g., 113) denote the number of topics a cluster contains. We present the most frequent words of each topic cluster. Edges denote a correlation (above some threshold) and the distance between two nodes represents the strength of their correlation. The node size of a cluster is determined by the number of topics included in that cluster.]
[Figure 5: (a)(b): Perplexity and training time of vCTM, single-core gCTM, and multi-core gCTM on the NIPS data set; (c)(d): Perplexity and training time of single-machine gCTM, multi-machine gCTM, and multi-machine Y!LDA on the NYTimes data set.]

5.2 Performance

We begin with an empirical assessment on the small NIPS data set, whose training set contains 1.2K documents. Fig. 5(a)&(b) show the performance of three single-machine methods: vCTM (M = 1, P = 1), sequential gCTM (M = 1, P = 1), and parallel gCTM (M = 1, P = 12). Fig. 5(a) shows that both versions of gCTM produce similar or better perplexity compared to vCTM. Moreover, Fig. 5(b) shows that when K is large, the advantage of gCTM becomes salient; e.g., sequential gCTM is about 7.5 times faster than vCTM, and multi-core gCTM achieves almost two orders of magnitude of speed-up over vCTM.

Table 1: Training time of vCTM and gCTM (M = 40) on various datasets.

  data set | D    | K    | vCTM   | gCTM
  NIPS     | 1.2K | 100  | 1.9 hr | 8.9 min
  20NG     | 11K  | 200  | 16 hr  | 9 min
  NYTimes  | 285K | 400  | N/A*   | 0.5 hr
  Wiki     | 6M   | 1000 | N/A*   | 17 hr
  *not finished within 1 week.

In Table 1, we compare the efficiency of vCTM and gCTM on different sized data sets. It can be observed that vCTM becomes impractical once the data size reaches 285K, while by utilizing additional computing resources, gCTM is able to process larger data sets with considerable speed, making it applicable to real world problems. Note that gCTM has almost the same training time on the NIPS and 20Newsgroups data sets, due to their small sizes.
In such cases, the algorithm is dominated by synchronization rather than computation. Fig. 5(c)&(d) show the results on the NYTimes corpus, which contains over 285K training documents and cannot be handled well by non-parallel methods. We therefore concentrate on three parallel methods: single-machine gCTM (M = 1, P = 12), multi-machine gCTM (M = 40, P = 480), and multi-machine Y!LDA (M = 40, P = 480). We can see that: 1) both versions of gCTM obtain perplexity comparable to Y!LDA; and 2) gCTM (M = 40) is over an order of magnitude faster than the single-machine method, achieving considerable speed-up with additional computing resources. These observations suggest that gCTM is able to handle large data sets without sacrificing the quality of inference. Also note that Y!LDA is faster than gCTM because of the model difference: LDA does not learn correlation structure among topics. As analyzed in Section 4, the time complexity of gCTM is $O(K^2 + SK + N_d s(K))$ per document, while for LDA it is $O(N_d s(K))$.

5.3 Sensitivity

Burn-In and Sub-Burn-In: Fig. 6(a)&(b) show the effect of burn-in and sub-burn-in steps on the NIPS data set with K = 100. We also include vCTM for comparison; for vCTM, T denotes the number of iterations of its variational EM loop. Our main observations are twofold: 1) for all values of S, every version of gCTM reaches a similar level of perplexity that is better than vCTM's; and 2) a moderate number of sub-iterations, e.g. S = 8, leads to the fastest convergence. This experiment also provides insight into determining the number of outer iterations T that assures convergence for both models. We adopt Cauchy's criterion [15] for convergence: given an ε > 0, an algorithm converges at iteration T if $\forall i, j \ge T,\ |\mathrm{Perp}_i - \mathrm{Perp}_j| < \varepsilon$, where $\mathrm{Perp}_i$ is the perplexity at iteration i. In practice, we set ε = 20 and run experiments with a very large number of iterations.
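The stopping rule amounts to a scan over the recorded perplexities; a minimal sketch (the values below are made up for illustration):

```python
def convergence_iteration(perp, eps):
    """Smallest T such that |perp[i] - perp[j]| < eps for all i, j >= T.
    Equivalent to requiring max(perp[T:]) - min(perp[T:]) < eps, which we
    check with suffix maxima/minima in one backward pass."""
    n = len(perp)
    hi, lo, best = float("-inf"), float("inf"), None
    for t in range(n - 1, -1, -1):
        hi, lo = max(hi, perp[t]), min(lo, perp[t])
        if hi - lo < eps:
            best = t
    return best

perp = [2400, 2100, 1900, 1810, 1805, 1803, 1802, 1802]
assert convergence_iteration(perp, 20) == 3   # suffix from index 3 has range 8 < 20
```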
As a result, we obtained T = 350 for gCTM and T = 8 for vCTM, as indicated by the corresponding vertical line segments in Fig. 6(a)&(b).

[Figure 6: Sensitivity analysis with respect to key hyper-parameters: (a) perplexity at each iteration with different S; (b) convergence speed with different S; (c) perplexity tested with different priors.]

Prior: Fig. 6(c) shows perplexity under different prior settings. To avoid an expensive search in a huge space, we set $(\mu_0, \rho, W, \kappa) = (0, a, aI, a)$ to test the effect of the NIW prior, where a larger a implies more pseudo-observations of µ = 0, Σ = I. We can see that for both K = 50 and K = 100, the perplexity is invariant under a wide range of prior settings. This suggests that gCTM is insensitive to prior values.

5.4 Scalability

[Figure 7: Scalability analysis. We set M = 8, 16, 24, 32, 40 so that each machine processes 150K documents.]

Fig. 7 shows the scalability of gCTM on the large Wikipedia data set with K = 500. A practical problem in real world machine learning is that when computing resources are limited, the running time soon rises to an intolerable level as the data size grows. Ideally, this problem can be solved by adding computing nodes in the same proportion. Our experiment demonstrates that gCTM performs well in this scenario: as we add data and machines in the same proportion, the training time stays almost constant. In fact, the largest deviation from the ideal curve is about 1,000 seconds, which is almost unobservable in the figure. This suggests that parallel gCTM enjoys good scalability.
6 Conclusions and Discussions

We present a scalable Gibbs sampling algorithm for logistic-normal topic models. Our method builds on a novel data augmentation formulation and addresses the non-conjugacy without making strict mean-field assumptions. The algorithm is naturally parallelizable and can be further boosted by approximate sampling techniques. Empirical results demonstrate significant improvement in time efficiency over existing variational methods, with slightly better perplexity. Our method enjoys good scalability, suggesting the ability to extract large structures from massive data. In the future, we plan to study the performance of Gibbs CTM on industry-scale clusters with thousands of machines. We are also interested in developing scalable sampling algorithms for other logistic-normal topic models, e.g., the infinite CTM and dynamic topic models. Finally, the fast sampler of Polya-Gamma distributions can be used in relational and supervised topic models [6, 21].

Acknowledgments

This work is supported by the National Basic Research Program (973 Program) of China (Nos. 2013CB329403, 2012CB316301), National Natural Science Foundation of China (Nos. 61322308, 61305066), Tsinghua University Initiative Scientific Research Program (No. 20121088071), and Tsinghua National Laboratory for Information Science and Technology, China.

References

[1] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. Smola. Scalable inference in latent variable models. In International Conference on Web Search and Data Mining (WSDM), 2012.
[2] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[3] D. Blei and J. Lafferty. Correlated topic models. In Advances in Neural Information Processing Systems (NIPS), 2006.
[4] D. Blei and J. Lafferty. Dynamic topic models. In International Conference on Machine Learning (ICML), 2006.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[6] N. Chen, J. Zhu, F.
Xia, and B. Zhang. Generalized relational topic models with data augmentation. In International Joint Conference on Artificial Intelligence (IJCAI), 2013.
[7] M. Hoffman, D. Blei, and F. Bach. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems (NIPS), 2010.
[8] C. Holmes and L. Held. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1(1):145–168, 2006.
[9] D. Mimno, H. Wallach, and A. McCallum. Gibbs sampling for logistic normal topic models with graph-based priors. In NIPS Workshop on Analyzing Graphs, 2008.
[10] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed algorithms for topic models. Journal of Machine Learning Research, (10):1801–1828, 2009.
[11] J. Paisley, C. Wang, and D. Blei. The discrete infinite logistic normal distribution for mixed-membership modeling. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[12] N. G. Polson and J. G. Scott. Default Bayesian analysis for multi-way tables: a data-augmentation approach. arXiv:1109.4180, 2011.
[13] N. G. Polson, J. G. Scott, and J. Windle. Bayesian inference for logistic models using Polya-Gamma latent variables. arXiv:1205.0310v2, 2013.
[14] C. P. Robert. Simulation of truncated normal variables. Statistics and Computing, 5:121–125, 1995.
[15] W. Rudin. Principles of Mathematical Analysis. McGraw-Hill Book Co., 1964.
[16] A. Smola and S. Narayanamurthy. An architecture for parallel topic models. Very Large Data Bases (VLDB), 2010.
[17] M. A. Tanner and W. H. Wong. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398):528–540, 1987.
[18] D. van Dyk and X. Meng. The art of data augmentation. Journal of Computational and Graphical Statistics, 10(1):1–50, 2001.
[19] L. Yao, D. Mimno, and A. McCallum. Efficient methods for topic model inference on streaming document collections.
In International Conference on Knowledge Discovery and Data Mining (SIGKDD), 2009.
[20] A. Zhang, J. Zhu, and B. Zhang. Sparse online topic models. In International Conference on World Wide Web (WWW), 2013.
[21] J. Zhu, X. Zheng, and B. Zhang. Improved Bayesian supervised topic models with data augmentation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2013.
On model selection consistency of M-estimators with geometrically decomposable penalties

Jason D. Lee, Yuekai Sun (Institute for Computational and Mathematical Engineering, Stanford University, {jdl17,yuekai}@stanford.edu), Jonathan E. Taylor (Department of Statistics, Stanford University, jonathan.taylor@stanford.edu)

Abstract

Penalized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Often, the penalties are geometrically decomposable, i.e. can be expressed as a sum of support functions over convex sets. We generalize the notion of irrepresentability to geometrically decomposable penalties and develop a general framework for establishing consistency and model selection consistency of M-estimators with such penalties. We then use this framework to derive results for some special cases of interest in bioinformatics and statistical learning.

1 Introduction

The principle of parsimony is used in many areas of science and engineering to promote "simple" models over more complex ones. In machine learning, signal processing, and high-dimensional statistics, this principle motivates the use of sparsity-inducing penalties for model selection and signal recovery from incomplete/noisy measurements. In this work, we consider M-estimators of the form
$$\underset{\theta\in\mathbb{R}^p}{\text{minimize}}\;\; \ell^{(n)}(\theta) + \lambda\rho(\theta), \quad \text{subject to } \theta \in S, \qquad (1.1)$$
where $\ell^{(n)}$ is a convex, twice continuously differentiable loss function, ρ is a penalty function, and $S \subseteq \mathbb{R}^p$ is a subspace. Many commonly used penalties are geometrically decomposable, i.e. can be expressed as a sum of support functions over convex sets. We describe this notion of decomposability in Section 2 and then develop a general framework for analyzing the consistency and model selection consistency of M-estimators with geometrically decomposable penalties. When specialized to various statistical models, our framework yields some known and some new model selection consistency results.
This paper is organized as follows: First, we review existing work on consistency and model selection consistency of penalized M-estimators. Then, in Section 2, we describe the notion of geometric decomposability and give some examples of geometrically decomposable penalties. In Section 3, we generalize the notion of irrepresentability to geometrically decomposable penalties and state our main result (Theorem 3.4). We prove our main result and develop a converse result concerning the necessity of the irrepresentable condition in the Supplementary Material. In Section 4, we use our main result to derive consistency and model selection consistency results for the generalized lasso (total variation) and maximum likelihood estimation in exponential families.

1.1 Consistency of penalized M-estimators

The consistency of penalized M-estimators has been studied extensively.¹ The three most well-studied problems are (i) the lasso [2, 26], (ii) generalized linear models (GLMs) with the lasso penalty [10], and (iii) inverse covariance estimators with sparsity-inducing penalties (equivalent to sparse maximum likelihood estimation for a Gaussian graphical model) [21, 20]. There are also consistency results for M-estimators with group and structured variants of the lasso penalty [1, 7]. Negahban et al. [17] propose a unified framework for establishing consistency and convergence rates for M-estimators with penalties ρ that are decomposable with respect to a pair of subspaces $\mathcal{M}, \bar{\mathcal{M}}$:
$$\rho(x + y) = \rho(x) + \rho(y), \quad \text{for all } x \in \mathcal{M},\ y \in \bar{\mathcal{M}}^\perp.$$
Many commonly used penalties, such as the lasso, group lasso, and nuclear norm, are decomposable in this sense. Negahban et al. prove a general result that establishes the consistency of M-estimators with decomposable penalties and, using their framework, derive consistency results for special cases like sparse and group-sparse regression. The current work is in a similar vein as Negahban et al.
[17], but we focus on establishing the more stringent result of model selection consistency rather than consistency; see Section 3 for a comparison of the two notions. The model selection consistency of penalized M-estimators has also been studied extensively. The most commonly studied problems are (i) the lasso [30, 26], (ii) GLMs with the lasso penalty [4, 19, 28], and (iii) covariance estimation [15, 12, 20] and, more generally, structure learning [6, 14]. There are also general results concerning M-estimators with sparsity-inducing penalties [29, 16, 11, 22, 8, 18, 24]. Despite the extensive work on model selection consistency, to our knowledge, this is the first work to establish a general framework for model selection consistency of penalized M-estimators.

2 Geometrically decomposable penalties

Let $C \subset \mathbb{R}^p$ be a closed convex set. Then the support function over C is
$$h_C(x) = \sup_y \{y^T x \mid y \in C\}. \qquad (2.1)$$
Support functions are sublinear and should be thought of as semi-norms. If C is a norm ball, i.e. $C = \{x \mid \|x\| \le 1\}$, then $h_C$ is the dual norm: $\|y\|_* = \sup_x \{x^T y \mid \|x\| \le 1\}$. The support function is a supremum of linear functions, hence the subdifferential consists of the linear functions that attain the supremum: $\partial h_C(x) = \{y \in C \mid y^T x = h_C(x)\}$. The support function (as a function of the convex set C) is also additive over Minkowski sums: if C and D are convex sets, then $h_{C+D}(x) = h_C(x) + h_D(x)$. We use this property to express penalty functions as sums of support functions. E.g., if ρ is a norm and the dual norm ball can be expressed as a (Minkowski) sum of convex sets $C_1, \ldots, C_k$, then ρ can be expressed as a sum of support functions: $\rho(x) = h_{C_1}(x) + \cdots + h_{C_k}(x)$. If a penalty ρ can be expressed as
$$\rho(\theta) = h_A(\theta) + h_I(\theta) + h_{S^\perp}(\theta), \qquad (2.2)$$
where A and I are closed convex sets and S is a subspace, then we say ρ is a geometrically decomposable penalty. This form is general; if ρ can be expressed as a sum of support functions, i.e.
$\rho(\theta) = h_{C_1}(\theta) + \cdots + h_{C_k}(\theta)$, then we can take A, I, and $S^\perp$ to be sums of the sets $C_1, \ldots, C_k$ to express ρ in geometrically decomposable form (2.2). In many cases of interest, A + I is a norm ball and $h_{A+I} = h_A + h_I$ is the dual norm. In our analysis, we assume:

1. A and I are bounded.
2. I contains a relative neighborhood of the origin, i.e. $0 \in \mathrm{relint}(I)$.

¹Given the extensive work on consistency of penalized M-estimators, our review and referencing is necessarily incomplete.

We do not require A + I to contain a neighborhood of the origin. This generality allows for unpenalized variables. The notation A and I should be read as "active" and "inactive": span(A) should contain the true parameter vector, and span(I) should contain deviations from the truth that we want to penalize. E.g., if we know the sparsity pattern of the unknown parameter vector, then A should span the subspace of all vectors with the correct sparsity pattern. The third term enforces a subspace constraint θ ∈ S, because the support function of a subspace is the (convex) indicator function of the orthogonal complement:
$$h_{S^\perp}(x) = \mathbf{1}_S(x) = \begin{cases} 0 & x \in S \\ \infty & \text{otherwise.} \end{cases}$$
Such subspace constraints arise in many problems, either naturally (e.g. the constrained lasso [9]) or after reformulation (e.g. the group lasso with overlapping groups). We give three examples of penalized M-estimators with geometrically decomposable penalties, i.e.
$$\underset{\theta\in\mathbb{R}^p}{\text{minimize}}\;\; \ell^{(n)}(\theta) + \lambda\rho(\theta), \qquad (2.3)$$
where ρ is a geometrically decomposable penalty. We also compare our notion of geometric decomposability to two other notions of decomposable penalties, by Negahban et al. [17] and van de Geer [25], in the Supplementary Material.

2.1 The lasso and group lasso penalties

Two geometrically decomposable penalties are the lasso and group lasso penalties. Let A and I be complementary subsets of {1, . . . , p}.
We can decompose the lasso penalty component-wise to obtain
$$\|\theta\|_1 = h_{B_{\infty,A}}(\theta) + h_{B_{\infty,I}}(\theta),$$
where $h_{B_{\infty,A}}$ and $h_{B_{\infty,I}}$ are support functions of the sets
$$B_{\infty,A} = \{\theta \in \mathbb{R}^p \mid \|\theta\|_\infty \le 1 \text{ and } \theta_I = 0\}, \qquad B_{\infty,I} = \{\theta \in \mathbb{R}^p \mid \|\theta\|_\infty \le 1 \text{ and } \theta_A = 0\}.$$
If the groups do not overlap, then we can also decompose the group lasso penalty group-wise (A and I are now sets of groups) to obtain
$$\sum_{g \in G} \|\theta_g\|_2 = h_{B_{(2,\infty),A}}(\theta) + h_{B_{(2,\infty),I}}(\theta),$$
where $h_{B_{(2,\infty),A}}$ and $h_{B_{(2,\infty),I}}$ are support functions of the sets
$$B_{(2,\infty),A} = \{\theta \in \mathbb{R}^p \mid \max_{g\in G} \|\theta_g\|_2 \le 1 \text{ and } \theta_g = 0,\ g \in I\},$$
$$B_{(2,\infty),I} = \{\theta \in \mathbb{R}^p \mid \max_{g\in G} \|\theta_g\|_2 \le 1 \text{ and } \theta_g = 0,\ g \in A\}.$$
If the groups overlap, then we can duplicate the parameters in overlapping groups and enforce equality constraints.

2.2 The generalized lasso penalty

Another geometrically decomposable penalty is the generalized lasso penalty [23]. Let $D \in \mathbb{R}^{m\times p}$ be a matrix and A and I be complementary subsets of {1, . . . , m}. We can express the generalized lasso penalty in decomposable form:
$$\|D\theta\|_1 = h_{D^T B_{\infty,A}}(\theta) + h_{D^T B_{\infty,I}}(\theta), \qquad (2.4)$$
where $h_{D^T B_{\infty,A}}$ and $h_{D^T B_{\infty,I}}$ are support functions of the sets
$$D^T B_{\infty,A} = \{x \in \mathbb{R}^p \mid x = D_A^T y,\ \|y\|_\infty \le 1\}, \qquad (2.5)$$
$$D^T B_{\infty,I} = \{x \in \mathbb{R}^p \mid x = D_I^T y,\ \|y\|_\infty \le 1\}. \qquad (2.6)$$
We can also formulate any generalized lasso penalized M-estimator as a linearly constrained, lasso penalized M-estimator. After a change of variables, a generalized lasso penalized M-estimator is equivalent to
$$\underset{\theta\in\mathbb{R}^k,\ \gamma\in\mathbb{R}^p}{\text{minimize}}\;\; \ell^{(n)}(D^\dagger\theta + \gamma) + \lambda\|\theta\|_1, \quad \text{subject to } \gamma \in N(D),$$
where N(D) is the nullspace of D. The lasso penalty can then be decomposed component-wise to obtain $\|\theta\|_1 = h_{B_{\infty,A}}(\theta) + h_{B_{\infty,I}}(\theta)$. We enforce the subspace constraint $\gamma \in N(D)$ with the support function of $N(D)^\perp$. This yields the convex optimization problem
$$\underset{\theta\in\mathbb{R}^k,\ \gamma\in\mathbb{R}^p}{\text{minimize}}\;\; \ell^{(n)}(D^\dagger\theta + \gamma) + \lambda\big(h_{B_{\infty,A}}(\theta) + h_{B_{\infty,I}}(\theta) + h_{N(D)^\perp}(\gamma)\big).$$
There are many interesting applications of the generalized lasso in signal processing and statistical learning. We refer to Section 2 in [23] for some examples.
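Both decompositions in this section are easy to verify numerically, since the support function of a restricted ℓ∞ ball is the ℓ1 norm of the corresponding coordinates (a minimal check; all names illustrative):

```python
import numpy as np

def support_linf_restricted(x, idx):
    """h over {y : ||y||_inf <= 1, y_j = 0 for j not in idx} = sum_{j in idx} |x_j|,
    since the sup of y^T x over the restricted box is attained at y_j = sign(x_j)."""
    return np.abs(x[idx]).sum()

rng = np.random.default_rng(4)
p = 6
theta = rng.standard_normal(p)
A_idx = np.array([0, 2, 5])
I_idx = np.array([1, 3, 4])

# Lasso: ||theta||_1 = h_{B_inf,A}(theta) + h_{B_inf,I}(theta)
lhs = np.abs(theta).sum()
rhs = support_linf_restricted(theta, A_idx) + support_linf_restricted(theta, I_idx)
assert np.isclose(lhs, rhs)

# Generalized lasso: ||D theta||_1 = h_{D^T B_A}(theta) + h_{D^T B_I}(theta),
# where h_{D^T B_A}(theta) = sum over rows in A of |(D theta)_i|
D = rng.standard_normal((4, p))
GA, GI = np.array([0, 1]), np.array([2, 3])
glhs = np.abs(D @ theta).sum()
grhs = np.abs((D @ theta)[GA]).sum() + np.abs((D @ theta)[GI]).sum()
assert np.isclose(glhs, grhs)
```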
2.3 "Hybrid" penalties

A large class of geometrically decomposable penalties are so-called "hybrid" penalties: infimal convolutions of penalties that promote solutions which are sums of simple components, e.g. $\theta = \theta_1 + \theta_2$, where $\theta_1$ and $\theta_2$ are simple. If the constituent simple penalties are geometrically decomposable, then the resulting hybrid penalty is also geometrically decomposable. For example, let $\rho_1$ and $\rho_2$ be geometrically decomposable penalties, i.e. there are sets $A_1, I_1, S_1$ and $A_2, I_2, S_2$ such that
$$\rho_1(\theta) = h_{A_1}(\theta) + h_{I_1}(\theta) + h_{S_1^\perp}(\theta), \qquad \rho_2(\theta) = h_{A_2}(\theta) + h_{I_2}(\theta) + h_{S_2^\perp}(\theta).$$
The M-estimator with penalty $\rho(\theta) = \inf_\gamma\{\rho_1(\gamma) + \rho_2(\theta - \gamma)\}$ is equivalent to the solution of the convex optimization problem
$$\underset{\theta\in\mathbb{R}^{2p}}{\text{minimize}}\;\; \ell^{(n)}(\theta_1 + \theta_2) + \lambda(\rho_1(\theta_1) + \rho_2(\theta_2)). \qquad (2.7)$$
This is an M-estimator with a geometrically decomposable penalty:
$$\underset{\theta\in\mathbb{R}^{2p}}{\text{minimize}}\;\; \ell^{(n)}(\theta_1 + \theta_2) + \lambda(h_A(\theta) + h_I(\theta) + h_{S^\perp}(\theta)),$$
where $h_A$, $h_I$, and $h_{S^\perp}$ are support functions of the sets
$$A = \{(\theta_1, \theta_2) \mid \theta_1 \in A_1 \subset \mathbb{R}^p,\ \theta_2 \in A_2 \subset \mathbb{R}^p\},$$
$$I = \{(\theta_1, \theta_2) \mid \theta_1 \in I_1 \subset \mathbb{R}^p,\ \theta_2 \in I_2 \subset \mathbb{R}^p\},$$
$$S = \{(\theta_1, \theta_2) \mid \theta_1 \in S_1 \subset \mathbb{R}^p,\ \theta_2 \in S_2 \subset \mathbb{R}^p\}.$$
There are many interesting applications of hybrid penalties in signal processing and statistical learning. Two examples are the Huber function, $\rho(\theta) = \inf_{\theta=\gamma_1+\gamma_2} \|\gamma_1\|_1 + \|\gamma_2\|_2^2$, and the multitask group regularizer, $\rho(\Theta) = \inf_{\Theta=B+S} \|B\|_{1,\infty} + \|S\|_1$. See [27] for recent work on model selection consistency with hybrid penalties.

3 Main result

We assume the unknown parameter vector $\theta^\star$ is contained in the model subspace
$$M := \mathrm{span}(I)^\perp \cap S, \qquad (3.1)$$
and we seek estimates of $\theta^\star$ that are "correct". We consider two notions of correctness: (i) an estimate $\hat\theta$ is consistent (in the ℓ2 norm) if the estimation error in the ℓ2 norm decays to zero in probability as the sample size grows:
$\|\hat\theta - \theta^\star\|_2 \xrightarrow{p} 0$ as $n \to \infty$; and (ii) $\hat\theta$ is model selection consistent if the estimator selects the correct model with probability tending to one as the sample size grows: $\Pr(\hat\theta \in M) \to 1$ as $n \to \infty$.

NOTATION: We use $P_C$ to denote the orthogonal projector onto span(C) and $\gamma_C$ to denote the gauge function of a convex set C containing the origin:
$$\gamma_C(x) = \inf\{\lambda \in \mathbb{R}_+ \mid x \in \lambda C\}.$$
Further, we use κ(ρ) to denote the compatibility constant between a semi-norm ρ and the ℓ2 norm over the model subspace:
$$\kappa(\rho) := \sup_x \{\rho(x) \mid \|x\|_2 \le 1,\ x \in M\}.$$
Finally, we choose a norm $\|\cdot\|_\varepsilon$ to make
$\|\nabla\ell^{(n)}(\theta^\star)\|_\varepsilon$ small. This norm is usually the dual norm to the penalty. Before we state our main result, we state our assumptions on the problem. Our two main assumptions are stated in terms of the Fisher information matrix $Q^{(n)} = \nabla^2\ell^{(n)}(\theta^\star)$.

Assumption 3.1 (Restricted strong convexity). We assume the loss function $\ell^{(n)}$ is locally strongly convex with constant m over the model subspace, i.e.
$$\ell^{(n)}(\theta_1) - \ell^{(n)}(\theta_2) \ge \nabla\ell^{(n)}(\theta_2)^T(\theta_1 - \theta_2) + \frac{m}{2}\|\theta_1 - \theta_2\|_2^2 \qquad (3.2)$$
for some m > 0 and all $\theta_1, \theta_2 \in B_r(\theta^\star) \cap M$.

We require this assumption to make the maximum likelihood estimate unique over the model subspace; otherwise, we cannot hope for consistency. This assumption requires the loss function to be curved along certain directions in the model subspace, and it is very similar to Negahban et al.'s notion of restricted strong convexity [17] and Buhlmann and van de Geer's notion of compatibility [3]. Intuitively, this assumption means the "active" predictors are not overly dependent on each other. We also require $\nabla^2\ell^{(n)}$ to be locally Lipschitz continuous, i.e.
$$\|\nabla^2\ell^{(n)}(\theta_1) - \nabla^2\ell^{(n)}(\theta_2)\|_2 \le L\,\|\theta_1 - \theta_2\|_2$$
for some L > 0 and all $\theta_1, \theta_2 \in B_r(\theta^\star) \cap M$. This condition automatically holds for all twice continuously differentiable loss functions, hence we do not state it as an assumption.

To obtain model selection consistency results, we must first generalize the key notion of irrepresentability to geometrically decomposable penalties.

Assumption 3.2 (Irrepresentability). There exists $\tau \in (0, 1)$ such that
$$\sup_z \big\{V\big(P_{M^\perp}(Q^{(n)}P_M(P_M Q^{(n)} P_M)^\dagger P_M z - z)\big) \;\big|\; z \in \partial h_A(B_r(\theta^\star) \cap M)\big\} < 1 - \tau,$$
where V is the infimal convolution of $\gamma_I$ and $\mathbf{1}_{S^\perp}$:
$$V(z) = \inf_u \{\gamma_I(u) + \mathbf{1}_{S^\perp}(z - u)\}.$$
If $u_I(z)$ and $u_{S^\perp}(z)$ achieve V(z) (i.e. $V(z) = \gamma_I(u_I(z))$), then V(z) < 1 means $u_I(z) \in \mathrm{relint}(I)$. Hence the irrepresentable condition requires any $z \in M^\perp$ to be decomposable into $u_I + u_{S^\perp}$, where $u_I \in \mathrm{relint}(I)$ and $u_{S^\perp} \in S^\perp$.

Lemma 3.3. V is a bounded semi-norm over $M^\perp$, i.e. V is finite and sublinear over $M^\perp$.
Let ∥·∥ε be an error norm, usually chosen to make ∥∇ℓ(n)(θ⋆)∥ε small. V is a bounded semi-norm over M⊥, hence there exists some ¯τ such that
V (PM⊥(Q(n)PM(PMQ(n)PM)†PMx −x)) ≤ ¯τ ∥x∥ε. (3.3)
Such a ¯τ surely exists because (i) ∥·∥ε is a norm, so the set {x ∈Rp | ∥x∥ε ≤1} is compact, and (ii) V is finite over M⊥, so the left side of (3.3) is a continuous function of x. Intuitively, ¯τ quantifies how large the irrepresentable term can be compared to the error norm.
The irrepresentable condition is a standard assumption for model selection consistency and has been shown to be almost necessary for sign consistency of the lasso [30, 26]. Intuitively, the irrepresentable condition requires the active predictors to be not overly dependent on the inactive predictors. In the Supplementary Material, we show our (generalized) irrepresentable condition is also necessary for model selection consistency with some geometrically decomposable penalties.
Theorem 3.4. Suppose Assumptions 3.1 and 3.2 are satisfied. If we select λ such that
λ > (2¯τ/τ) ∥∇ℓ(n)(θ⋆)∥ε
and
λ < min{ (m^2/L) τ / [2¯τ κ(∥·∥ε) (2κ(hA) + (τ/¯τ) κ(∥·∥ε^*))^2 ], mr / (2κ(hA) + (τ/¯τ) κ(∥·∥ε^*)) },
then the penalized M-estimator is unique, consistent (in the ℓ2 norm), and model selection consistent, i.e. the optimal solution to (2.3) satisfies
1. ∥ˆθ −θ⋆∥2 ≤ (2/m) (κ(hA) + (τ/2¯τ) κ(∥·∥ε^*)) λ,
2. ˆθ ∈M := span(I)⊥ ∩S.
Remark 1. Theorem 3.4 makes a deterministic statement about the optimal solution to (2.3). To use this result to derive consistency and model selection consistency results for a statistical model, we must first verify Assumptions 3.1 and 3.2 are satisfied with high probability. Then, we must choose an error norm ∥·∥ε and select λ such that the two conditions above hold with high probability. In Section 4, we use this theorem to derive consistency and model selection consistency results for the generalized lasso and penalized likelihood estimation for exponential families.
4 Examples
We use Theorem 3.4 to establish the consistency and model selection consistency of the generalized lasso and a group lasso penalized maximum likelihood estimator in the high-dimensional setting. Our results are nonasymptotic, i.e. we obtain bounds in terms of the sample size n and problem dimension p that hold with high probability.
4.1 The generalized lasso
Consider the linear model y = Xθ⋆ + ϵ, where X ∈Rn×p is the design matrix, and θ⋆ ∈Rp are unknown regression parameters. We assume the columns of X are normalized so ∥xi∥2 ≤ √n, and ϵ ∈Rn is i.i.d., zero mean, sub-Gaussian noise with parameter σ^2.
We seek an estimate of θ⋆ with the generalized lasso:
minimize_{θ∈Rp} (1/(2n)) ∥y −Xθ∥2^2 + λ ∥Dθ∥1, (4.1)
where D ∈Rm×p. The generalized lasso penalty is geometrically decomposable:
∥Dθ∥1 = hDT B∞,A(θ) + hDT B∞,I(θ),
where hDT B∞,A and hDT B∞,I are support functions of the sets
DT B∞,A = {x ∈Rp | x = DT y, yI = 0, ∥y∥∞ ≤1}
DT B∞,I = {x ∈Rp | x = DT y, yA = 0, ∥y∥∞ ≤1}.
The sample Fisher information matrix is Q(n) = (1/n) XT X. Q(n) does not depend on θ, hence the Lipschitz constant of Q(n) is zero. The restricted strong convexity constant is
m = λmin(Q(n)) = inf {xT Q(n)x | ∥x∥2 = 1}.
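Problem (4.1) can be solved by a standard ADMM splitting on z = Dθ. The sketch below is a minimal illustration, not the authors' implementation; the step size rho, the iteration count, and the synthetic data are assumptions. With D = I it reduces to the ordinary lasso:

```python
import numpy as np

def generalized_lasso(X, y, D, lam, rho=1.0, iters=500):
    """ADMM sketch for: minimize (1/2n)||y - X th||_2^2 + lam ||D th||_1."""
    n, p = X.shape
    m = D.shape[0]
    z, u = np.zeros(m), np.zeros(m)
    A = X.T @ X / n + rho * D.T @ D      # normal equations for the theta-step
    b = X.T @ y / n
    for _ in range(iters):
        theta = np.linalg.solve(A, b + rho * D.T @ (z - u))
        w = D @ theta + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # prox of l1
        u += D @ theta - z
    return theta

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[:2] = [3.0, -2.0]             # two active coordinates
y = X @ theta_star + 0.1 * rng.standard_normal(n)

theta_hat = generalized_lasso(X, y, np.eye(p), lam=0.1)  # D = I: plain lasso
```

Other choices of D (e.g. a difference operator) recover the fused-lasso-type estimators covered by the theory above.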
The model subspace is
M = span(DT B∞,I)⊥ = R(DT_I)⊥ = N(DI),
where I is a subset of the row indices of D. The compatibility constants κ(ℓ1), κ(hA) are
κ(ℓ1) = sup {∥x∥1 | ∥x∥2 ≤1, x ∈N(DI)}
κ(hA) = sup {hDT B∞,A(x) | ∥x∥2 ≤1, x ∈M} ≤ ∥DA∥2 √|A|.
If we select λ > 2√2 σ (¯τ/τ) √(log p / n), then there exists c such that
Pr(λ ≥ (2¯τ/τ) ∥∇ℓ(n)(θ⋆)∥∞) ≥ 1 −2 exp(−cλ^2 n).
Thus the assumptions of Theorem 3.4 are satisfied with probability at least 1 −2 exp(−cλ^2 n), and we deduce the generalized lasso is consistent and model selection consistent.
Corollary 4.1. Suppose y = Xθ⋆ + ϵ, where X ∈Rn×p is the design matrix, θ⋆ are unknown coefficients, and ϵ is i.i.d., zero mean, sub-Gaussian noise with parameter σ^2. If we select λ > 2√2 σ (¯τ/τ) √(log p / n), then, with probability at least 1 −2 exp(−cλ^2 n), the solution to the generalized lasso is unique, consistent, and model selection consistent, i.e. the optimal solution to (4.1) satisfies
1. ∥ˆθ −θ⋆∥2 ≤ (2/m) (∥DA∥2 √|A| + (τ/2¯τ) κ(ℓ1)) λ,
2. (Dˆθ)i = 0 for any i such that (Dθ⋆)i = 0.
4.2 Learning exponential families with redundant representations
Suppose X is a random vector, and let φ be a vector of sufficient statistics. The exponential family associated with these sufficient statistics is the set of distributions of the form
Pr(x; θ) = exp(θT φ(x) −A(θ)).
Suppose we are given samples x(1), . . . , x(n) drawn i.i.d. from an exponential family with unknown parameters θ⋆ ∈Rp. We seek a maximum likelihood estimate (MLE) of the unknown parameters:
minimize_{θ∈Rp} ℓ(n)_ML(θ) + λ ∥θ∥2,1, subject to θ ∈S, (4.2)
where ℓ(n)_ML is the (negative) log-likelihood function
ℓ(n)_ML(θ) = −(1/n) Σ_{i=1}^n log Pr(x(i); θ) = −(1/n) Σ_{i=1}^n θT φ(x(i)) + A(θ)
and ∥θ∥2,1 is the group lasso penalty
∥θ∥2,1 = Σ_{g∈G} ∥θg∥2.
It is also straightforward to change the maximum likelihood estimator to the more computationally tractable pseudolikelihood estimator [13, 6], to the neighborhood selection procedure [15], and to include covariates [13]. For brevity, we only explain the details for the maximum likelihood estimator.
Many undirected graphical models can be naturally viewed as exponential families, so estimating the parameters of exponential families is equivalent to learning undirected graphical models, a problem of interest in many application areas such as bioinformatics. Below, we state a corollary that results from applying Theorem 3.4 to exponential families. Please see the supplementary material for the proof and definitions of the quantities involved.
Corollary 4.2. Suppose we are given samples x(1), . . . , x(n) drawn i.i.d. from an exponential family with unknown parameters θ⋆.
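The group lasso penalty ∥θ∥2,1 in (4.2) is typically handled through its proximal operator, which soft-thresholds each group as a block: a group whose ℓ2 norm falls below the threshold is zeroed exactly, which is what produces groupwise sparsity. A minimal sketch (the groups and numeric values are illustrative, not from the paper):

```python
import numpy as np

def prox_group_lasso(theta, groups, t):
    """prox of t * ||.||_{2,1}: blockwise soft-thresholding of each group."""
    out = theta.copy()
    for g in groups:
        ng = np.linalg.norm(theta[g])
        # Zero the whole group if its norm is below t, else shrink it radially.
        out[g] = 0.0 if ng <= t else (1.0 - t / ng) * theta[g]
    return out

theta = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = prox_group_lasso(theta, groups, t=1.0)
# group {0,1} has norm 5 and is shrunk by factor 0.8; group {2,3} is zeroed
```

This operator is the building block of proximal-gradient solvers for (4.2).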
If we select
λ > 2√(2L1) (¯τ/τ) √((max_{g∈G} |g|) log |G| / n)
and the sample size n is larger than
max{ (32 L1 L2^2 ¯τ^2 / (m^4 τ^4)) (2 + τ/¯τ)^4 (max_{g∈G} |g|) |A|^2 log |G|, (16 L1 / (m^2 r^2)) (2 + τ/¯τ)^2 (max_{g∈G} |g|) |A| log |G| },
then, with probability at least 1 −2 max_{g∈G} |g| exp(−cλ^2 n), the penalized maximum likelihood estimator is unique, consistent, and model selection consistent, i.e. the optimal solution to (4.2) satisfies
1. ∥ˆθ −θ⋆∥2 ≤ (2/m) (1 + τ/(2¯τ)) √|A| λ,
2. ˆθg = 0 for g ∈I, and ˆθg ̸= 0 if ∥θ⋆_g∥2 > (1/m) (1 + τ/(2¯τ)) √|A| λ.
5 Conclusion
We proposed the notion of geometric decomposability and generalized the irrepresentable condition to geometrically decomposable penalties. This notion of decomposability builds on those by Negahban et al. [17] and Candès and Recht [5] and includes many common sparsity-inducing penalties. It also allows us to enforce linear constraints. We developed a general framework for establishing the model selection consistency of M-estimators with geometrically decomposable penalties. Our main result gives deterministic conditions on the problem that guarantee consistency and model selection consistency; in this sense, it extends the work of [17] from estimation consistency to model selection consistency. We combine our main result with probabilistic analysis to establish the consistency and model selection consistency of the generalized lasso and group lasso penalized maximum likelihood estimators.
Acknowledgements
We thank Trevor Hastie and three anonymous reviewers for their insightful comments. J. Lee was supported by a National Defense Science and Engineering Graduate Fellowship (NDSEG) and an NSF Graduate Fellowship. Y. Sun was supported by the NIH, award number 1U01GM102098-01. J.E. Taylor was supported by the NSF, grant DMS 1208857, and by the AFOSR, grant 113039.
References
[1] F. Bach. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res., 9:1179–1225, 2008.
[2] P.J. Bickel, Y. Ritov, and A.B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. Ann. Statis., 37(4):1705–1732, 2009.
[3] P. Bühlmann and S. van de Geer. Statistics for high-dimensional data: Methods, theory and applications. 2011.
[4] F. Bunea. Honest variable selection in linear and logistic regression models via ℓ1 and ℓ1+ℓ2 penalization. Electron. J. Stat., 2:1153–1194, 2008.
[5] E. Candès and B. Recht. Simple bounds for recovering low-complexity models. Math. Prog. Ser. A, pages 1–13, 2012.
[6] J. Guo, E. Levina, G. Michailidis, and J. Zhu. Asymptotic properties of the joint neighborhood selection method for estimating categorical Markov networks. arXiv preprint.
[7] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In Int. Conf. Mach. Learn. (ICML), pages 433–440. ACM, 2009.
[8] A. Jalali, P. Ravikumar, V. Vasuki, S. Sanghavi, and UT ECE. On learning discrete graphical models using group-sparse regularization. In Int. Conf. Artif. Intell. Stat. (AISTATS), 2011.
[9] G.M. James, C. Paulson, and P. Rusmevichientong. The constrained lasso. Technical report, University of Southern California, 2012.
[10] S.M. Kakade, O. Shamir, K. Sridharan, and A. Tewari. Learning exponential families in high-dimensions: Strong convexity and sparsity. In Int. Conf. Artif. Intell. Stat. (AISTATS), 2010.
[11] M. Kolar, L. Song, A. Ahmed, and E. Xing. Estimating time-varying networks. Ann. Appl. Stat., 4(1):94–123, 2010.
[12] C. Lam and J. Fan. Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statis., 37(6B):4254, 2009.
[13] J.D. Lee and T. Hastie. Learning mixed graphical models. arXiv preprint arXiv:1205.5012, 2012.
[14] P.L. Loh and M.J. Wainwright. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. arXiv:1212.0478, 2012.
[15] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Ann. Statis., 34(3):1436–1462, 2006.
[16] Y. Nardi and A. Rinaldo. On the asymptotic properties of the group lasso estimator for linear models. Electron. J. Stat., 2:605–633, 2008.
[17] S.N. Negahban, P. Ravikumar, M.J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statist. Sci., 27(4):538–557, 2012.
[18] G. Obozinski, M.J. Wainwright, and M.I. Jordan. Support union recovery in high-dimensional multivariate regression. Ann. Statis., 39(1):1–47, 2011.
[19] P. Ravikumar, M.J. Wainwright, and J.D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Ann. Statis., 38(3):1287–1319, 2010.
[20] P. Ravikumar, M.J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat., 5:935–980, 2011.
[21] A.J. Rothman, P.J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electron. J. Stat., 2:494–515, 2008.
[22] Y. She. Sparse regression with exact clustering. Electron. J. Stat., 4:1055–1096, 2010.
[23] R.J. Tibshirani and J.E. Taylor. The solution path of the generalized lasso. Ann. Statis., 39(3):1335–1371, 2011.
[24] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. IEEE Trans. Inform. Theory, 59(4):2001–2016, 2013.
[25] S. van de Geer. Weakly decomposable regularization penalties and structured sparsity. arXiv preprint arXiv:1204.4813, 2012.
[26] M.J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (lasso). IEEE Trans. Inform. Theory, 55(5):2183–2202, 2009.
[27] E. Yang and P. Ravikumar. Dirty statistical models. In Adv. Neural Inf. Process. Syst. (NIPS), pages 827–835, 2013.
[28] E. Yang, P. Ravikumar, G.I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. arXiv:1301.4183, 2013.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol., 68(1):49–67, 2006.
[30] P. Zhao and B. Yu. On model selection consistency of lasso. J. Mach. Learn. Res., 7:2541–2563, 2006.
A multi-agent control framework for co-adaptation in brain-computer interfaces ∗Josh Merel1, ∗Roy Fox2, Tony Jebara3, Liam Paninski4 1Department of Neurobiology and Behavior, 3Department of Computer Science, 4Department of Statistics, Columbia University, New York, NY 10027 2School of Computer Science and Engineering, Hebrew University, Jerusalem 91904, Israel jsm2183@columbia.edu, royf@cs.huji.ac.il, jebara@cs.columbia.edu, liam@stat.columbia.edu Abstract In a closed-loop brain-computer interface (BCI), adaptive decoders are used to learn parameters suited to decoding the user’s neural response. Feedback to the user provides information which permits the neural tuning to also adapt. We present an approach to model this process of co-adaptation between the encoding model of the neural signal and the decoding algorithm as a multi-agent formulation of the linear quadratic Gaussian (LQG) control problem. In simulation we characterize how decoding performance improves as the neural encoding and adaptive decoder optimize, qualitatively resembling experimentally demonstrated closed-loop improvement. We then propose a novel, modified decoder update rule which is aware of the fact that the encoder is also changing and show it can improve simulated co-adaptation dynamics. Our modeling approach offers promise for gaining insights into co-adaptation as well as improving user learning of BCI control in practical settings. 1 Introduction Neural signals from electrodes implanted in cortex [1], electrocorticography (ECoG) [2], and electroencephalography (EEG) [3] all have been used to decode motor intentions and control motor prostheses. Standard approaches involve using statistical models to decode neural activity to control some actuator (e.g. a cursor on a screen [4], a robotic manipulator [5], or a virtual manipulator [6]). 
Performance of offline decoders is typically different from the performance of online, closed-loop decoders where the user gets immediate feedback and neural tuning changes are known to occur [7, 8]. In order to understand how decoding will be performed in closed-loop, it is necessary to model how the decoding algorithm updates and neural encoding updates interact in a coordinated learning process, termed co-adaptation. There have been a number of recent efforts to learn improved adaptive decoders specifically tailored for the closed-loop setting [9, 10], including an approach relying on stochastic optimal control theory [11]. In other contexts, emphasis has been placed on training users to improve closed-loop control [12]. Some efforts towards modeling the co-adaptation process have sought to model properties of different decoders when used in closed-loop [13, 14, 15], with emphasis on ensuring the stability of the decoder and tuning the adaptation rate. One recent simulation study also demonstrated how modulating task difficulty can improve the rate of co-adaptation when feedback noise limits performance [16]. However, despite speculation that exploiting co-adaptation will be integral to state-of-the-art BCI [17], general models of co-adaptation and methods which exploit those models to improve co-adaptation dynamics are lacking. ∗These authors contributed equally. We propose that we should be able to leverage our knowledge of how the encoder changes in order to better update the decoder. In the current work, we present a simple model of the closed-loop co-adaptation process and show how we can use this model to improve decoder learning on simulated experiments. Our model is a novel control setting which uses a split Linear Quadratic Gaussian (LQG) system. Optimal decoding is performed by Linear Quadratic Estimation (LQE), effectively the Kalman filter model.
Encoding model updates are performed by the Linear Quadratic Regulator (LQR), the dual control problem of the Kalman filter. The system is split insofar as each agent has different information available and each performs optimal updates given the state of the other side of the system. We take advantage of this model from the decoder side by anticipating changes in the encoder and pre-emptively updating the decoder to match the estimate of the further optimized encoding model. We demonstrate that this approach can improve the co-adaptation process. 2 Model framework 2.1 Task model For concreteness, we consider a motor-cortical neuroprosthesis setting. We assume a naive user, placed into a BCI control setting, and propose a training scheme which permits the user and decoder to adapt. We provide a visual target cue at a 3D location and the user controls the BCI via neural signals which, in a natural setting, relate to hand kinematics. The target position is moved each timestep to form a trajectory through the 3D space reachable by the user’s hand. The BCI user receives visual feedback via the displayed location of their decoded hand position. The user’s objective is to control their cursor to be as close to the continuously moving target cursor as possible. A key feature of this scheme is that we know the “intention” of the user, assuming it corresponds to the target. The complete graphical model of this system is provided in figure 1. xt in our simulations is a three dimensional position vector (Cartesian Coordinates) corresponding to the intended hand position. This variable could be replaced or augmented by other variables of interest (e.g. velocity). We randomly evolve the target signal using a linear-Gaussian drift model (eq. (1)). The neural encoding model is linear-Gaussian in response to intended position xt and feedback ˆxt−1 (eq. (2)), giving a vector of neural responses ut (e.g. local field potential or smoothed firing rates of neural units). 
Since we do not observe the whole brain region, we subsample the neural units from which we collect information. The transformation C is conceptually equivalent to electrode sampling and yt is the observable neural response vector via the electrodes (eq. (3)). Lastly, ˆxt is the decoded hand position estimate, which also serves as visual feedback (eq. (4)).
xt = Pxt−1 + ξt; ξt ∼N(0, Q) (1)
ut = Axt + Bˆxt−1 + ηt; ηt ∼N(0, R) (2)
yt = Cut + ǫt; ǫt ∼N(0, S) (3)
ˆxt = Fyt + Gˆxt−1. (4)
Figure 1: Graphical model relating target signal (xt), neural response (ut), electrode observation of neural response (yt), and decoded feedback signal (ˆxt).
During training, the decoding system is allowed access to the target position, interpreted as the real intention xt. The decoded ˆxt is only used as feedback, to inform the user of the gradually learned dynamics of the decoder. After training, the system is tested on a task with the same parameters of the trajectory dynamics, but with the actual intention only known to the user, and hidden from the decoder. A natural objective is to minimize tracking error, measured as accumulated mean squared error between the target and neurally decoded pose over time. For contemporary BCI applications, the Kalman filter is a reasonable baseline decoder, so we do not consider even simpler models. However, for other applications one might wish to consider a model in which the state at each timestep is encoded independently. It is possible to find a closed form for the optimal encoder and decoder that minimizes the error in this case [18, 19]. Sections 2.2 and 2.3 describe the model presented in figure 1 as seen from the distinct viewpoints of the two agents involved – the encoder and the decoder. The encoder observes xt and ˆxt−1, and selects A and B to generate a control signal ut. The decoder observes yt, and selects F and G to estimate the intention as ˆxt.
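Equations (1)-(4) define a linear-Gaussian loop that is straightforward to simulate. The sketch below runs the loop with fixed, arbitrary matrices just to show the data flow; in the paper A, B and F, G are adapted, and all dimensions, gains, and noise scales here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_u, dim_y, T = 3, 12, 6, 200

P = 0.95 * np.eye(dim_x)                        # intention drift, eq. (1)
A = 0.5 * rng.standard_normal((dim_u, dim_x))   # tuning to intention
B = 0.1 * rng.standard_normal((dim_u, dim_x))   # tuning to feedback
C = np.eye(dim_u)[:dim_y]                       # electrodes see 6 of 12 units
F = 0.05 * rng.standard_normal((dim_x, dim_y))  # decoder gain
G = 0.5 * np.eye(dim_x)                         # decoder feedback dynamics

x = np.zeros(dim_x)
xhat = np.zeros(dim_x)
mse = 0.0
for t in range(T):
    x = P @ x + 0.1 * rng.standard_normal(dim_x)              # eq. (1)
    u = A @ x + B @ xhat + 0.1 * rng.standard_normal(dim_u)   # eq. (2)
    y = C @ u + 0.1 * rng.standard_normal(dim_y)              # eq. (3)
    xhat = F @ y + G @ xhat                                   # eq. (4)
    mse += np.sum((x - xhat) ** 2) / T
```

The accumulated tracking error `mse` is the natural objective mentioned above; the adaptive procedures in the following sections are what drive it down.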
We assume that both agents are free to perform unconstrained optimization on their parameters.
2.2 Encoding model and optimal decoder
Our encoding model is quite simple, with neural units responding in a linear-Gaussian fashion to intended position xt and feedback ˆxt−1 (eq. (2)). This is a standard model of neural responses for BCI. The matrices A and B effectively correspond to the tuning response functions of the neural units, and we will allow these parameters to be adjusted under the control of the user. The matrix C corresponds to the observation of the neural units by the electrodes, so we treat it as fixed (in our case C will down-sample the neurons). For this paper, we assume noise covariances are fixed and known, but this can be generalized. Given the encoder, the decoder will estimate the intention xt, which follows a hidden Markov chain (eq. (1)). The observations available to the decoder are the electrode samples yt (eqs. (2) and (3)):
yt = CAxt + CBˆxt−1 + ǫ′t; ǫ′t ∼N(0, RC) (5)
RC = CRCT + S. (6)
Given all the electrode samples up to time t, the problem of finding the most likely hidden intention is a Linear-Quadratic Estimation problem (figure 2); its standard solution is the Kalman filter, and this decoder is widely used in similar contexts. To choose an appropriate Kalman gain F and mean dynamics G, the decoding system needs a good model of the dynamics of the underlying intention process (P, Q of eq. (1)) and the electrode observations (CA, CB, and RC of eqs. (5) & (6)). We can assume that P and Q are known since the decoding algorithm is controlled by the same experimenter who specifies the intention process for the training phase. We discuss the estimation of the observation model in section 4.
Figure 2: Decoder's point of view – target signal (xt) directly generates observed responses (yt), with the encoding model collapsed to omit the full signal (ut).
Decoded feedback signal (ˆxt) is generated by the steady state Kalman filter.
Figure 3: Encoder's point of view – target signal (xt) and decoded feedback signal (ˆxt−1) generate neural response (ut). The model of the decoder collapses over the responses (yt), which are unseen by the encoder side.
Given an encoding model, and assuming a very long horizon1, there exist standard methods to optimize the stationary value of the decoder parameters [20]. The stationary covariance Σ of xt given ˆxt−1 is the unique positive-definite fixed point of the Riccati equation
Σ = PΣP T −PΣ(CA)T (RC + (CA)Σ(CA)T )−1(CA)ΣP T + Q. (7)
The Kalman gain is then
F = Σ(CA)T ((CA)Σ(CA)T + RC)−1 (8)
with mean dynamics
G = P −F(CA)P −F(CB). (9)
1Our task is control of the BCI for arbitrarily long duration, so it makes sense to look for the stationary decoder. Similarly the BCI user will look for a stationary encoder. We could also handle the finite horizon case (see section 2.3 for further discussion).
We estimate ˆxt using eq. (4); this is the most likely value, as well as the expected value, of xt given the electrode observations y1, . . . , yt. Using this estimate as the decoded intention is equivalent to minimizing the expectation of a quadratic cost
clqe = Σt (1/2)∥xt −ˆxt∥2. (10)
2.3 Model of co-adaptation
At the same time as the decoder-side agent optimizes the decoder parameters F and G, the encoder-side agent can optimize the encoder parameters A and B. We formulate encoder updates for the BCI application as a standard LQR problem. This framework requires that the encoder-side agent has an intention model (same as eq. (1)) and a model of the decoder. The decoder model combines eqs. (3) and (4) into
ˆxt = FCut + Gˆxt−1 + Fǫt. (11)
This model is depicted in figure 3. We assume that the encoder has access to a perfect estimate of the intention-model parameters P and Q (task knowledge).
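The stationary decoder of eqs. (7)-(9) can be computed by simply iterating the Riccati map to its fixed point. A minimal sketch (the toy matrices are assumptions, not the paper's simulation settings):

```python
import numpy as np

def stationary_decoder(P, Q, CA, CB, RC, iters=500):
    """Iterate the Riccati map (7) to a fixed point, then form F (8) and G (9)."""
    Sigma = np.eye(P.shape[0])
    for _ in range(iters):
        S = RC + CA @ Sigma @ CA.T
        Sigma = (P @ Sigma @ P.T
                 - P @ Sigma @ CA.T @ np.linalg.solve(S, CA @ Sigma @ P.T)
                 + Q)
    F = Sigma @ CA.T @ np.linalg.inv(CA @ Sigma @ CA.T + RC)
    G = P - F @ CA @ P - F @ CB
    return Sigma, F, G

# Toy problem with a stable intention process and 4 electrode channels.
rng = np.random.default_rng(0)
P = 0.9 * np.eye(3)
Q = 0.1 * np.eye(3)
CA = rng.standard_normal((4, 3))
CB = 0.1 * rng.standard_normal((4, 3))
RC = 0.5 * np.eye(4)
Sigma, F, G = stationary_decoder(P, Q, CA, CB, RC)
```

In practice one would stop once successive iterates of Σ agree to tolerance rather than run a fixed iteration count.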
We also assume that the encoder is free to change its parameters A and B arbitrarily given the decoder-side parameters (which it can estimate as discussed in section 4). As a model of real neural activity, there must be some cost to increasing the power of the neural signal; without such a cost, the solutions diverge. We add an additional cost term (a regularizer), which is quadratic in the magnitude of the neural response ut and penalizes a large neural signal:
clqr = Σt (1/2)∥xt −ˆxt∥2 + (1/2) uT_t ˜R ut. (12)
Since the decoder has no direct influence on this additional term, it can be viewed as optimizing for this target cost function as well. The LQR problem is solved similarly to eq. (7), by assuming a very long horizon and optimizing the stationary value of the encoder parameters [20]. We next formulate our objective function in terms of standard LQR parameters. The control depends on the joint process of the intention and the feedback (xt, ˆxt−1), but the cost is defined between xt and ˆxt. To compute the expected cost given xt, ˆxt−1 and ut, we use eq. (11) to get
E ∥ˆxt −xt∥2 = ∥FCut + Gˆxt−1 −xt∥2 + const (13)
= (Gˆxt−1 −xt)T (Gˆxt−1 −xt) + (FCut)T (FCut) + 2(Gˆxt−1 −xt)T (FCut) + const.
Equation (13) provides the error portion of the quadratic objective of the LQR problem. The standard solution for the stationary case involves computing the Hessian V of the cost-to-go in the joint state (xt, ˆxt−1) as the unique positive-definite fixed point of the Riccati equation
V = ˜P T V ˜P −( ˜N + ˜P T V ˜D)( ˜R + ˜S + ˜DT V ˜D)−1( ˜N T + ˜DT V ˜P) + ˜Q. (14)
Here ˜P is the process dynamics for the joint state of xt and ˆxt−1 and ˜D is the controllability of this dynamics. ˜Q, ˜S and ˜N are the cost parameters, which can be determined by inspection of eq. (13). ˜R is the Hessian of the neural response cost term, which is chosen in simulations so that the resulting increase in neural signal strength is reasonable.
˜P = [P, 0; 0, G], ˜D = [0; FC], ˜Q = [I, −G; −GT, GT G], ˜S = (FC)T (FC), ˜N = [−FC; GT (FC)].
In our formulation, the encoding model (A, B) is equivalent to the feedback gain
[A B] = −( ˜DT V ˜D + ˜R + ˜S)−1( ˜N T + ˜DT V ˜P). (15)
This is the optimal stationary control, and is generally not optimal for shorter planning horizons. In the co-adaptation setting, the encoding model (At, Bt) regularly changes to adapt to the changing decoder. This means that (At, Bt) is only used for one timestep (or a few) before it is updated. The effective planning horizon is thus shortened from its ideal infinity, and now depends on the rate and magnitude of the perturbations introduced in the encoding model. Eq. (14) can be solved for this finite horizon, but here for simplicity we assume the encoder updates introduce small or infrequent enough changes to keep the planning horizon very long, and the stationary control close to optimal.
Figure 4: (a) Each curve plots single trial changes in decoding mean squared error (MSE) over the whole timeseries as a function of the number of update half-iterations. The encoder is updated in even steps, the decoder in odd ones. Distinct curves are for multiple, random initializations of the encoder. (b) Plots the corresponding changes in encoder parameter updates; the y-axis, ρ, is the correlation between the vectorized encoder parameters after each update and their final values.
3 Perfect estimation setting
We can consider co-adaptation in a hypothetical setting where each agent has instant access to a perfect estimate of the other's parameters as soon as they change.
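The encoder update of eqs. (14)-(15) is a standard stationary LQR computation with a state-control cross term. The sketch below iterates the generic Riccati recursion of this form on a double-integrator toy system; the system and cost matrices are illustrative assumptions (with the cross term Nc = 0 that part of the formula is inactive, but the formulas are the general ones):

```python
import numpy as np

def stationary_lqr(A, B, Qc, Rc, Nc, iters=2000):
    """Riccati recursion for cost sum x'Qc x + 2 x'Nc u + u'Rc u, x+ = Ax + Bu."""
    V = np.zeros_like(Qc)
    for _ in range(iters):
        H = Rc + B.T @ V @ B
        L = Nc + A.T @ V @ B
        V = Qc + A.T @ V @ A - L @ np.linalg.solve(H, L.T)
    # Feedback gain, analogous to eq. (15): u = K x.
    K = -np.linalg.solve(Rc + B.T @ V @ B, Nc.T + B.T @ V @ A)
    return V, K

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double integrator
B = np.array([[0.0],
              [0.1]])
Qc, Rc, Nc = np.eye(2), np.array([[1.0]]), np.zeros((2, 1))
V, K = stationary_lqr(A, B, Qc, Rc, Nc)
rho = max(abs(np.linalg.eigvals(A + B @ K)))   # closed-loop spectral radius
```

A closed-loop spectral radius below one confirms the stationary gain stabilizes the system, mirroring the role of the stationary encoder here.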
To keep this setting comparable to the setting of section 4, where parameter estimation is needed, we only allow each agent access to those variables that it could, in principle, estimate. We assume both agents know the parameters P and Q of the intention dynamics, that the encoder knows FC and G of eq. (11), and that the decoder knows CA, CB and RC of eqs. (5) and (6). These are the same parameters needed by each agent for its own re-optimization. This process of parameter updates is performed by alternating between the decoder update equations (7)-(9) and the encoder update equations (14)-(15). Since the agents take turns minimizing the expected infinite-horizon objective of eq. (12) given the other, this cost will tend to decrease, approximately converging. Note that neither of these steps depends explicitly on the observed values of the neural signal ut or the decoded output ˆxt. In other words, co-adaptation can be simulated without ever actually generating the stochastic process of intention, encoding and decoding. However, this process and the signal-feedback loop become crucial when estimation is involved, as in section 4; then each agent's update indirectly depends on its observations through its estimated model of the other agent. To examine the dynamics in this idealized setting, we hold fixed the target trajectory x1...T as well as the realization of the noise terms. We initialize the simulation with a random encoding model and observe empirically that, as the encoder and the decoder are updated in alternation, the error rapidly reduces to a plateau. As the improvement saturates, the joint encoder-decoder pair approximates a locally optimal solution to the co-adaptation problem. Figure 4(a) plots the error as a function of the number of model update iterations – the different curves correspond to distinct, random initializations of the encoder parameters A, B with everything else held fixed.
We emphasize that for a fixed encoder, the first decoder update would yield the infinite-horizon optimal update if the encoder could not adapt, and the error can be interpreted relative to this initial optimal decoding (see supplementary fig. 1(a) for a depiction of the initial error and fig. 1(b) for the improvement from encoder adaptation). This method obtains optimized encoder-decoder pairs with moderate sensitivity to the initial parameters of the encoding model. Interpreted in the context of BCI, this suggests that the initial tuning of the observed neurons may affect the local optima attainable for BCI performance under standard co-adaptation. We may also be able to optimize the final error by cleverly choosing updates to the decoder parameters in a fashion that shifts which optimum is reached. Figure 4(b) displays the corresponding approximate convergence of the encoder parameters: as the error decreases, the encoder parameters settle to a stable set (the actual final values vary across initializations). The parameters left free from the standpoint of the simulation are the neural noise covariance RC and the Hessian ˜R of the neural signal cost. We set these to reasonable values - the noise to a moderate
In this work we use a recursive least squares (RLS) which is presented in the supplement section 3 for this estimation. RLS has a forgetting factor λ which regulates how quickly the routine expects the parameters it estimates to change. This co-adaptation process is detailed in procedure 1. We elect to use the same estimation routine for each agent and assume that the user performs idealobserver style optimal estimation. In general, if more knowledge is available about how a real BCI user updates their estimates of the decoder parameters, such a model could easily be used. We could also explore in simulation how various suboptimal estimation models employed by the user affect co-adaptation. As noted previously, we will assume the noise model is fixed and that the decoder side knows the neural signal noise covariance RC (eq. (6)). The encoder-side will use a scaled identity matrix as the estimate of the electrodes-decoder noise model. To jointly estimate the decoder parameters and the noise model, an EM-based scheme would be a natural approach (such estimation of the BCI user’s internal model of the decoder has been treated explicitly in [21]). Procedure 1 standard co-adaptation for t = 1 to lengthT raining do Encoder-side Get xt and ˆxt−1 Update encoder-side estimate of decoder d FC, bG (RLS) Update optimal encoder A, B using current decoder estimate d FC, bG (LQR) Encode current intention using A, B and send signal yt Decoder-side Get xt and yt Update decoder-side estimate of encoder d CA, d CB (RLS) Update optimal decoder F, G using current encoder estimate d CA, d CB (LQE) Decode current signal using F, G and display as feedback ˆxt end for Standard co-adaptation yields improvements in decoding performance over time as the encoder and decoder agents estimate each others’ parameters and update based on those estimates. Appropriately, that model will improve the encoder-decoder pair over time, as in the blue curves of figure 5 below. 
5 Encoder-aware decoder updates

In this section, we present an approach to model the encoder updates from the decoder side. We will use this to "take an extra step" towards optimizing the decoder for what the anticipated future encoder ought to look like. In the most general case, the encoder can update A_t and B_t in an unconstrained fashion at each timestep t. From the decoder side, we do not know C and therefore cannot know FC, an estimate of which is needed by the user to update the encoder. However, the decoder sets F and can predict updates to [CA CB] directly, instead of to [A B] as the actual encoder does (equation 15). We emphasize that this update is not actually how the user will update the encoder; rather, it captures how the encoder ought to change the signals observed by the decoder (from the decoder's perspective).

Figure 5: In each subplot, the blue line corresponds to decreasing error as a function of simulated time from standard co-adaptation (Procedure 1). The green line corresponds to the improved one-step-ahead co-adaptation (Procedure 2). Plots from left to right have decreasing RLS forgetting factor used by the encoder side to estimate the decoder parameters. Curves depict the median error across 20 simulations with confidence intervals of 25% and 75% quantiles. Error at each timestep is appropriately cross-validated: it corresponds to taking the encoder-decoder pair of that timestep and computing error on "test" data.

We can find the update [CA_pred CB_pred] by solving a modified version of the LQR problem presented in section 2.3, eq. (15):

    [CA_pred \;\; CB_pred] = -(\tilde{D}'^T V \tilde{D}' + \tilde{R}' + \tilde{S}')^{-1} (\tilde{N}'^T + \tilde{D}'^T V \tilde{P}),   (16)

with terms defined similarly to section 2.3, except

    \tilde{D}' = \begin{bmatrix} 0 \\ F \end{bmatrix}, \qquad \tilde{S}' = F^T F, \qquad \tilde{N}' = \begin{bmatrix} -F \\ G^T F \end{bmatrix}.   (17)

We also note that the quadratic penalty used in this approximation has been transformed from a cost on the responses of all of the neural units to a cost only on the observed ones.
\tilde{R}' serves as a regularization parameter which now must be tuned so that the decoder-side estimate of the encoding update is reasonable. For simplicity we let \tilde{R}' = γI for some constant, coarsely tuned γ, though in general this cost need not be a scaled identity matrix. Equations 16 & 17 only use information available at the decoder side, with terms dependent on FC having been replaced by terms dependent instead on F. These predictions will be used only to engineer decoder update schemes that can be used to improve co-adaptation (as in Procedure 2).

Procedure 2: r-step-ahead co-adaptation
for t = 1 to lengthTraining do
  Encoder-side:
    As in Procedure 1
  Decoder-side:
    Get x_t and y_t
    Update decoder-side estimate of encoder \hat{CA}, \hat{CB} (RLS)
    Update optimal decoder F, G using current encoder estimate \hat{CA}, \hat{CB} (LQE)
    for r = 1 to numStepsAhead do
      Anticipate encoder update CA_pred, CB_pred to updated decoder F, G (modified LQR)
      Update r-step-ahead optimal decoder F, G using CA_pred, CB_pred (LQE)
    end for
    Decode current signal using r-step-ahead F, G and display as feedback x̂_t
end for

The ability to compute decoder-side approximate encoder updates opens the opportunity to improve encoder-decoder update dynamics by anticipating encoder-side adaptation, guiding the process towards faster convergence and possibly better solutions. For the current estimate of the encoder, we update the optimal decoder, anticipate the encoder update by the method of the preceding section, and then update the decoder in response to the anticipated encoder update. This allows r-step-ahead updating, as presented in Procedure 2. Figure 5 demonstrates how the one-step-ahead scheme can improve the co-adaptation dynamics. It is not a priori obvious that this method would help: the decoder-side estimate of the encoder update is not identical to the actual update. An encoder-side agent more permissive of rapid changes in the decoder may better handle r-step-ahead co-adaptation.
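The decoder-side inner loop of Procedure 2 can be sketched schematically. Here `anticipate_encoder` and `reoptimize_decoder` are placeholders standing in for the modified-LQR prediction (16) and the LQE decoder update, which we do not reproduce; only the alternating structure is from the procedure above.

```python
def r_step_ahead(decoder, r, anticipate_encoder, reoptimize_decoder):
    """Schematic r-step-ahead refinement from Procedure 2: alternately
    predict the encoder's response to the current decoder, then
    re-optimize the decoder against that predicted encoder."""
    for _ in range(r):
        predicted_encoder = anticipate_encoder(decoder)
        decoder = reoptimize_decoder(predicted_encoder)
    return decoder
```

With r = 0 this reduces to the standard decoder update of Procedure 1; larger r extrapolates further along the anticipated best-response dynamics.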
We have also tried r-step-ahead updates for r > 1. However, this did not outperform the one-step-ahead method, and in some cases yielded a decline relative to standard co-adaptation. These simulations are sensitive to the setting of the forgetting factor used by each agent in the RLS estimation, the initial uncertainty of the parameters, and the quadratic cost \tilde{R}' used in the one-step-ahead approximation. The encoder-side RLS parameters in a real setting will be determined by the BCI user, and \tilde{R}' should be tuned (as a regularization parameter). The encoder-side forgetting factor would correspond roughly to the plasticity of the BCI user with respect to the task. A high forgetting factor permits the user to tolerate very large changes in the decoder, and a low forgetting factor corresponds to the user assuming more decoder stability. From left to right in the subplots of figure 5, the encoder-side forgetting factor decreases: the regime where augmenting co-adaptation may offer the most benefit corresponds to a user that is most uncertain about the decoder and willing to tolerate decoder changes. Whether or not co-adaptation gains are possible in our model depends upon the parameters of the system. Nevertheless, for appropriately selected parameters, attempting to augment the co-adaptation should not hurt performance even if the user were outside of the regime where the most benefit is possible. A real user will likely perform their half of co-adaptation sub-optimally relative to our idealized BCI user, and the structure of such suboptimalities will likely increase the opportunity for co-adaptation to be augmented. The timescale of these simulation results is unspecified, but would correspond to the timescale on which the biological neural encoding can change. This varies by task and implicated brain region, ranging from a few training sessions [22, 23] to days [24].
6 Conclusion

Our work represents a step in the direction of exploiting co-adaptation to jointly optimize the neural encoding and the decoder parameters, rather than simply optimizing the decoder parameters without taking the encoder parameter adaptation into account. We model the process of co-adaptation that occurs in closed-loop BCI use between the user and the decoding algorithm. Moreover, the results using our modified decoding update demonstrate a proof of concept that reliable improvement can be obtained relative to naive adaptive decoders by encoder-aware updates to the decoder in a simulated system. It remains open how well methods based on this approach will extend to experimental data. BCI is a two-agent system, and we may view co-adaptation as we have formulated it within multi-agent control theory. As both agents adapt to reduce the error of the decoded intention given their respective estimates of the other agent, a fixed point of this co-adaptation process is a Nash equilibrium. This equilibrium is only known to be unique in the case where the intention at each timestep is independent [25]. In our more general setting, there may be more than one encoder-decoder pair for which each is optimal given the other. Moreover, there may exist non-linear encoders with which non-linear decoders can be in equilibrium. These connections will be explored in future work. Obviously our model of the neural encoding, and of the process by which the neural encoding is updated, are idealizations. Future experimental work will determine how well our co-adaptive model can be applied in the real neuroprosthetic context. For rapid, low-cost experiments it might be best to begin with human closed-loop experiments intended to simulate a BCI [26]. As the Kalman filter is a standard decoder, it will be useful to begin experimental investigations with this choice (as analyzed in this work).
More complicated decoding schemes also appear to improve decoding performance [23] by better accounting for the non-linearities in the real neural encoding, and such methods scale to BCI contexts with many output degrees of freedom [27]. An important extension of the co-adaptation model presented in this work is to non-linear encoding and decoding schemes. Even in more complicated, realistic settings, we hope the framework presented here will offer similar practical benefits for improving BCI control.

Acknowledgments
This project is supported in part by the Gatsby Charitable Foundation. Liam Paninski receives support from an NSF CAREER award.

References
[1] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[2] K. J. Miller et al., "Cortical activity during motor execution, motor imagery, and imagery-based online feedback," PNAS, vol. 107, no. 9, pp. 4430–4435, 2010.
[3] D. J. McFarland, W. A. Sarnacki, and J. R. Wolpaw, "Electroencephalographic (EEG) control of three-dimensional movement," Journal of Neural Engineering, vol. 7, no. 3, p. 036007, 2010.
[4] V. Gilja et al., "A high-performance neural prosthesis enabled by control algorithm design," Nat Neurosci, 2012.
[5] L. R. Hochberg et al., "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm," Nature, vol. 485, no. 7398, pp. 372–375, 2012.
[6] D. Putrino et al., "Development of a closed-loop feedback system for real-time control of a high-dimensional brain machine interface," Conf Proc IEEE EMBS, vol. 2012, pp. 4567–4570, 2012.
[7] S. Koyama et al., "Comparison of brain-computer interface decoding algorithms in open-loop and closed-loop control," Journal of Computational Neuroscience, vol. 29, no. 1-2, pp. 73–87, 2010.
[8] J. M. Carmena et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no.
2, p. E42, 2003.
[9] V. Gilja et al., "A brain machine interface control algorithm designed from a feedback control perspective," Conf Proc IEEE Eng Med Biol Soc, vol. 2012, pp. 1318–22, 2012.
[10] Z. Li, J. E. O'Doherty, M. A. Lebedev, and M. A. L. Nicolelis, "Adaptive decoding for brain-machine interfaces through Bayesian parameter updates," Neural Comput., vol. 23, no. 12, pp. 3162–204, 2011.
[11] K. Kowalski, B. He, and L. Srinivasan, "Dynamic analysis of naive adaptive brain-machine interfaces," Neural Comput., vol. 25, no. 9, pp. 2373–2420, 2013.
[12] C. Vidaurre, C. Sannelli, K.-R. Müller, and B. Blankertz, "Machine-learning based co-adaptive calibration for brain-computer interfaces," Neural Computation, vol. 23, no. 3, pp. 791–816, 2011.
[13] M. Lagang and L. Srinivasan, "Stochastic optimal control as a theory of brain-machine interface operation," Neural Comput., vol. 25, pp. 374–417, Feb. 2013.
[14] R. Heliot, K. Ganguly, J. Jimenez, and J. M. Carmena, "Learning in closed-loop brain-machine interfaces: Modeling and experimental validation," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 5, pp. 1387–1397, 2010.
[15] S. Dangi, A. L. Orsborn, H. G. Moorman, and J. M. Carmena, "Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces," Neural Computation, pp. 1–39, Apr. 2013.
[16] Y. Zhang, A. B. Schwartz, S. M. Chase, and R. E. Kass, "Bayesian learning in assisted brain-computer interface tasks," Conf Proc IEEE Eng Med Biol Soc, vol. 2012, pp. 2740–3, 2012.
[17] S. Waldert et al., "A review on directional information in neural signals for brain-machine interfaces," Journal of Physiology Paris, vol. 103, no. 3-5, pp. 244–254, 2009.
[18] G. P. Papavassilopoulos, "Solution of some stochastic quadratic Nash and leader-follower games," SIAM J. Control Optim., vol. 19, pp. 651–666, Sept. 1981.
[19] E. Doi and M. S.
Lewicki, "Characterization of minimum error linear coding with sensory and neural noise," Neural Computation, vol. 23, no. 10, pp. 2498–2510, 2011.
[20] M. Athans, "The discrete time linear-quadratic-Gaussian stochastic control problem," Annals of Economic and Social Measurement, vol. 1, pp. 446–488, Sept. 1972.
[21] M. D. Golub, S. M. Chase, and B. M. Yu, "Learning an internal dynamics model from control demonstration," 30th International Conference on Machine Learning, 2013.
[22] R. Shadmehr, M. A. Smith, and J. W. Krakauer, "Error correction, sensory prediction, and adaptation in motor control," Annual Review of Neuroscience, vol. 33, pp. 89–108, 2010.
[23] L. Shpigelman, H. Lalazar, and E. Vaadia, "Kernel-ARMA for hand tracking and brain-machine interfacing during 3D motor control," in NIPS, pp. 1489–1496, 2008.
[24] A. C. Koralek, X. Jin, J. D. Long II, R. M. Costa, and J. M. Carmena, "Corticostriatal plasticity is necessary for learning intentional neuroprosthetic skills," Nature, vol. 483, no. 7389, pp. 331–335, 2012.
[25] T. Basar, "On the uniqueness of the Nash solution in linear-quadratic differential games," International Journal of Game Theory, vol. 5, no. 2-3, pp. 65–90, 1976.
[26] J. P. Cunningham et al., "A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces," Journal of Neurophysiology, vol. 105, no. 4, pp. 1932–1949, 2010.
[27] Y. T. Wong et al., "Decoding arm and hand movements across layers of the macaque frontal cortices," Conf Proc IEEE Eng Med Biol Soc, vol. 2012, pp. 1757–60, 2012.
Conditional Random Fields via Univariate Exponential Families
Eunho Yang, Department of Computer Science, University of Texas at Austin, eunho@cs.utexas.edu
Pradeep Ravikumar, Department of Computer Science, University of Texas at Austin, pradeepr@cs.utexas.edu
Genevera I. Allen, Department of Statistics and Electrical & Computer Engineering, Rice University, gallen@rice.edu
Zhandong Liu, Department of Pediatrics-Neurology, Baylor College of Medicine, zhandonl@bcm.edu

Abstract

Conditional random fields, which model the distribution of a multivariate response conditioned on a set of covariates using undirected graphs, are widely used in a variety of multivariate prediction applications. Popular instances of this class of models, such as categorical-discrete CRFs, Ising CRFs, and conditional Gaussian based CRFs, are not well suited to the varied types of response variables in many applications, including count-valued responses. We thus introduce a novel subclass of CRFs, derived by imposing node-wise conditional distributions of response variables conditioned on the rest of the responses and the covariates as arising from univariate exponential families. This allows us to derive novel multivariate CRFs given any univariate exponential distribution, including the Poisson, negative binomial, and exponential distributions. In particular, this addresses the common CRF problem of specifying "feature" functions determining the interactions between response variables and covariates. We develop a class of tractable penalized M-estimators to learn these CRF distributions from data, as well as a unified sparsistency analysis for this general class of CRFs showing that exact structure recovery can be achieved with high probability.

1 Introduction

Conditional random fields (CRFs) are a popular class of models that combine the advantages of discriminative modeling and undirected graphical models.
They are widely used across structured prediction domains such as natural language processing, computer vision, and bioinformatics. The key idea in this class of models is to represent the joint distribution of a set of response variables conditioned on a set of covariates using a product of clique-wise compatibility functions. Given an underlying graph over the response variables, each of these compatibility functions depends on all the covariates, but only on a subset of response variables within any clique of the underlying graph. They are thus a discriminative counterpart of undirected graphical models, where we have covariates that provide information about the multivariate response, and the underlying graph structure encodes conditional independence assumptions among the responses conditioned on the covariates. There is a key model specification question that arises, however, in any application of CRFs: how do we specify the clique-wise sufficient statistics, or compatibility functions (sometimes also called feature functions), that characterize the conditional graphical model between responses? In particular, how do we tune these to the particular types of variables being modeled? Traditionally, these questions have been addressed either by hand-crafted feature functions, or more generally by discretizing the multivariate response vectors into a set of indicator vectors and then letting the compatibility functions be linear combinations of the product of indicator functions [1]. This approach, however, may not be natural for continuous, skewed continuous, or count-valued random variables. Recently, spurred in part by applications in bioinformatics, there has been much research on other sub-classes of CRFs. The Ising CRF, which models binary responses, was studied by [2] and extended to higher-order interactions by [3]. Several versions and extensions of Gaussian-based CRFs have also been proposed [4, 5, 6, 7, 8].
These sub-classes of CRFs, however, are specific to Gaussian and binary variable types, and may not be appropriate for multivariate count data or skewed continuous data, for example, which are increasingly seen in big-data settings such as high-throughput genomic sequencing. In this paper, we seek to (a) formulate a novel subclass of CRFs that have the flexibility to model responses of varied types, (b) address how to specify compatibility functions for such a family of CRFs, and (c) develop a tractable procedure with strong statistical guarantees for learning this class of CRFs from data. We first show that when node-conditional distributions of responses conditioned on other responses and covariates are specified by univariate exponential family distributions, there exists a consistent joint CRF distribution that necessarily has a specific form: with terms that are tensorial products of functions over the responses and functions over the covariates. This subclass of "exponential family" CRFs can be viewed as a conditional extension of the MRF framework of [9, 10]. As such, this broadens the class of off-the-shelf CRF models to encompass data that follows distributions other than the standard discrete, binary, or Gaussian instances. Given this new family of CRFs, we additionally show that if the covariates also follow node-conditional univariate exponential family distributions, then the functions over features in turn are precisely specified by the exponential family sufficient statistics. Thus, our twin results definitively answer, for the first time, the key model specification question of specifying compatibility or feature functions for a broad family of CRF distributions. We then provide a unified M-estimation procedure, via penalized neighborhood estimation, to learn our family of CRFs from i.i.d.
observations that simultaneously addresses all three sub-tasks of CRF learning: feature selection (where we select a subset of the covariates for any response variable), structure recovery (where we learn the graph structure among the response variables), and parameter learning (where we learn the parameters specifying the CRF distribution). We also present a single theorem giving statistical guarantees that, with high probability, our M-estimator achieves each of these three sub-tasks. Our result can be viewed as an extension of neighborhood selection results for MRFs [11, 12, 13]. Overall, this paper provides a family of CRFs that generalizes many of the sub-classes in the existing literature and broadens the utility and applicability of CRFs to model many other types of multivariate responses.

2 Conditional Graphical Models via Exponential Families

Suppose we have a p-variate random response vector Y = (Y_1, \ldots, Y_p), with each response variable Y_s taking values in a set \mathcal{Y}_s. Suppose we also have a set of covariates X = (X_1, \ldots, X_q) associated with this response vector Y. Suppose G = (V, E) is an undirected graph over p nodes corresponding to the p response variables. Given the underlying graph G and the set of cliques (fully-connected sub-graphs) C of the graph G, the corresponding conditional random field (CRF) is a set of distributions over the response conditioned on the covariates that satisfy Markov independence assumptions with respect to the graph G. Specifically, letting \{\phi_c(Y_c, X)\}_{c \in C} denote a set of clique-wise sufficient statistics, any strictly positive distribution of Y conditioned on X within the conditional random field family takes the form

    P(Y \mid X) \propto \exp\Big\{ \sum_{c \in C} \phi_c(Y_c, X) \Big\}.

With a pair-wise conditional random field distribution, the set of cliques consists of the set of nodes V and the set of edges E, so that

    P(Y \mid X) \propto \exp\Big\{ \sum_{s \in V} \phi_s(Y_s, X) + \sum_{(s,t) \in E} \phi_{st}(Y_s, Y_t, X) \Big\}.
A key model specification question is how to select the class of sufficient statistics, φ. We have a considerable understanding of how to specify univariate distributions over various types of variables, as well as of how to model their conditional response through regression. Consider the univariate exponential family class of distributions: P(Z) = \exp(\theta B(Z) + C(Z) - D(\theta)), with sufficient statistic B(Z), base measure C(Z), and log-normalization constant D(\theta). Such exponential family distributions include a wide variety of commonly used distributions, such as the Gaussian, Bernoulli, multinomial, Poisson, exponential, gamma, chi-squared, and beta, any of which can be instantiated with particular choices of the functions B(·) and C(·). Such univariate exponential family distributions are thus used to model a wide variety of data types, including skewed continuous data and count data. Additionally, through generalized linear models, they are used to model the response of various data types conditional on a set of covariates. Here, we seek to use our understanding of univariate exponential families and generalized linear models to specify a conditional graphical model distribution. Consider the conditional extension of the construction in [14, 9, 10]. Suppose that the node-conditional distribution of each response variable Y_s, conditioned on the rest of the response variables Y_{V \setminus s} and the covariates X, is given by a univariate exponential family:

    P(Y_s \mid Y_{V \setminus s}, X) = \exp\{ E_s(Y_{V \setminus s}, X)\, B_s(Y_s) + C_s(Y_s) - \bar{D}_s(Y_{V \setminus s}, X) \}.   (1)

Here, the functions B_s(·), C_s(·) are specified by the choice of the exponential family, and the parameter E_s(Y_{V \setminus s}, X) is an arbitrary function of the variables Y_t in N(s) and the covariates X, where N(s) is the set of neighbors of node s according to an undirected graph G = (V, E). Would these node-conditional distributions be consistent with a joint distribution? Would this joint distribution factor according to a conditional random field given by graph G?
And would there be restrictions on the form of the functions E_s(Y_{V \setminus s}, X)? The following theorem answers these questions. We note that it generalizes the MRF framework of [9, 10] in two ways: it allows for the presence of conditional covariates, and moreover allows for heterogeneous types and domains of distributions with the different choices of B_s(·) and C_s(·) at each individual node.

Theorem 1. Consider a p-dimensional random vector Y = (Y_1, Y_2, \ldots, Y_p) denoting the set of responses, and let X = (X_1, \ldots, X_q) be a q-dimensional covariate vector. Consider the following two assertions: (a) the node-conditional distributions P(Y_s \mid Y_{V \setminus s}, X) are specified by univariate exponential family distributions as detailed in (1); and (b) the joint multivariate conditional distribution P(Y \mid X) factors according to the graph G = (V, E) with clique-set C, but with factors over response-variable-cliques of size at most k. These assertions on the conditional and joint distributions respectively are consistent if and only if the conditional distribution in (1) has the tensor-factorized form

    P(Y_s \mid Y_{V \setminus s}, X; \theta) = \exp\Big\{ B_s(Y_s) \Big( \theta_s(X) + \sum_{t \in N(s)} \theta_{st}(X)\, B_t(Y_t) + \ldots + \sum_{t_2, \ldots, t_k \in N(s)} \theta_{s\,t_2 \ldots t_k}(X) \prod_{j=2}^{k} B_{t_j}(Y_{t_j}) \Big) + C_s(Y_s) - \bar{D}_s(Y_{V \setminus s}) \Big\},   (2)

where \theta_{s\cdot}(X) := \{\theta_s(X), \theta_{st}(X), \ldots, \theta_{s\,t_2 \ldots t_k}(X)\} is a set of functions that depend only on the covariates X. Moreover, the corresponding joint conditional random field distribution has the form

    P(Y \mid X; \theta) = \exp\Big\{ \sum_{s} \theta_s(X) B_s(Y_s) + \sum_{s \in V} \sum_{t \in N(s)} \theta_{st}(X)\, B_s(Y_s) B_t(Y_t) + \ldots + \sum_{(t_1, \ldots, t_k) \in C} \theta_{t_1 \ldots t_k}(X) \prod_{j=1}^{k} B_{t_j}(Y_{t_j}) + \sum_{s} C_s(Y_s) - A\big(\theta(X)\big) \Big\},   (3)

where A\big(\theta(X)\big) is the log-normalization constant.

Theorem 1 specifies the form of the function E_s(Y_{V \setminus s}, X) defining the canonical parameter in the univariate exponential family distribution (1). This function is a tensor factorization of products of sufficient statistics of Y_{V \setminus s} and "observation functions", \theta(X), of the covariates X alone.
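To make the node-conditional construction concrete, the following is a small illustrative Gibbs sampler (our sketch, not code from the paper) for a pairwise Poisson instance of (2) with linear sufficient statistics B(y) = y. The per-node offsets play the role of θ_s(X) at a fixed covariate value, and the pairwise weights are kept non-positive, anticipating the normalizability restriction discussed later for Poisson CRFs.

```python
import numpy as np

def gibbs_poisson_crf(theta_node, Theta, n_burn=200, n_samples=200, seed=0):
    """Gibbs sampling for a pairwise Poisson CRF with B(y) = y.

    Node s has conditional log-rate theta_node[s] + sum_t Theta[s, t] * y[t],
    i.e. the node-conditional univariate exponential family of eq. (2);
    theta_node[s] absorbs theta_s(X) for a fixed covariate vector X.
    Off-diagonal entries of Theta must be <= 0 for normalizability.
    """
    rng = np.random.default_rng(seed)
    theta_node = np.asarray(theta_node, dtype=float)
    Theta = np.array(Theta, dtype=float)  # copy: we zero the diagonal below
    p = len(theta_node)
    assert np.all(Theta[~np.eye(p, dtype=bool)] <= 0), "need non-positive edges"
    np.fill_diagonal(Theta, 0.0)          # no self-interaction
    y = np.zeros(p)
    samples = []
    for it in range(n_burn + n_samples):
        for s in range(p):                # one full Gibbs sweep
            rate = np.exp(theta_node[s] + Theta[s] @ y)
            y[s] = rng.poisson(rate)
        if it >= n_burn:
            samples.append(y.copy())
    return np.array(samples)
```

With Theta = 0 the nodes decouple into independent Poissons; negative off-diagonal weights induce the mutual suppression that keeps the joint distribution normalizable.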
A key point to note is that the observation functions, \theta(X), in the CRF distribution (3) should ensure that the density is normalizable, that is, A\big(\theta(X)\big) < +\infty. We also note that we can allow different exponential families for each of the node-conditional distributions of the response variables, meaning that the domains, \mathcal{Y}_s, or the sufficient statistics functions, B_s(·), can vary across the response variables Y_s. A common setting of these sufficient statistics functions, however, for many popular distributions (Gaussian, Bernoulli, etc.) is a linear function, so that B_s(Y_s) = Y_s.

An important special case of the above result is when the joint CRF has response-variable-clique factors of size at most two. The node-conditional distributions (2) would then have the form

    P(Y_s \mid Y_{V \setminus s}, X; \theta) \propto \exp\Big\{ B_s(Y_s) \cdot \Big( \theta_s(X) + \sum_{t \in N(s)} \theta_{st}(X)\, B_t(Y_t) \Big) + C_s(Y_s) \Big\},

while the joint distribution in (3) has the form

    P(Y \mid X; \theta) = \exp\Big\{ \sum_{s \in V} \theta_s(X) B_s(Y_s) + \sum_{(s,t) \in E} \theta_{st}(X)\, B_s(Y_s) B_t(Y_t) + \sum_{s \in V} C_s(Y_s) - A\big(\theta(X)\big) \Big\},   (4)

with the log-partition function, A\big(\theta(X)\big), given the covariates, X, defined as

    A\big(\theta(X)\big) := \log \int_{\mathcal{Y}^p} \exp\Big\{ \sum_{s \in V} \theta_s(X) B_s(Y_s) + \sum_{(s,t) \in E} \theta_{st}(X)\, B_s(Y_s) B_t(Y_t) + \sum_{s \in V} C_s(Y_s) \Big\}.   (5)

Theorem 1 then addresses the model specification question of how to select the compatibility functions in CRFs for varied types of responses. Our framework permits arbitrary observation functions, \theta(X), with the only stipulation that the log-partition function must be finite. (This only provides a restriction when the domain of the response variables is not finite.) In the next section, we address the second model specification question of how to set the covariate functions.

2.1 Setting Covariate Functions

A candidate approach to specifying the observation functions, \theta(X), in the CRF distribution above would be to make distributional assumptions on X.
Since Theorem 1 specifies the conditional distribution P(Y \mid X), specifying the marginal distribution P(X) would allow us to specify the joint distribution P(Y, X) without further restrictions on P(Y \mid X), using the simple product rule P(X, Y) = P(Y \mid X)\, P(X). As an example, suppose that the covariates X follow an MRF distribution with graph G' = (V', E') and parameters \vartheta:

    P(X) = \exp\Big\{ \sum_{u \in V'} \vartheta_u \phi_u(X_u) + \sum_{(u,v) \in V' \times V'} \vartheta_{uv} \phi_{uv}(X_u, X_v) - A(\vartheta) \Big\}.

Then, for any CRF distribution P(Y \mid X) in (4), we have

    P(X, Y) = \exp\Big\{ \sum_{u} \vartheta_u \phi_u(X_u) + \sum_{(u,v)} \vartheta_{uv} \phi_{uv}(X_u, X_v) + \sum_{s} \theta_s(X) Y_s + \sum_{(s,t)} \theta_{st}(X) Y_s Y_t + \sum_{s} C_s(Y_s) - A(\vartheta) - A\big(\theta(X)\big) \Big\}.

The joint distribution, P(X, Y), is valid provided P(Y \mid X) and P(X) are valid distributions. Thus, a distributional assumption on P(X) does not restrict the set of covariate functions in any way. On the other hand, specifying the conditional distribution, P(X \mid Y), naturally entails restrictions on the form of P(Y \mid X). Consider the case where the conditional distributions P(X_u \mid X_{V' \setminus u}, Y) are also specified by univariate exponential families:

    P(X_u \mid X_{V' \setminus u}, Y) = \exp\{ E_u(X_{V' \setminus u}, Y)\, B_u(X_u) + C_u(X_u) - \bar{D}_u(X_{V' \setminus u}, Y) \},   (6)

where E_u(X_{V' \setminus u}, Y) is an arbitrary function of the rest of the variables, and B_u(·), C_u(·), \bar{D}_u(·) are specified by the univariate exponential family. Under these additional distributional assumptions in (6), what form would the CRF distribution in Theorem 1 take? Specifically, what would be the form of the observation functions \theta(X)? The following theorem provides an answer to this question. (In the following, we use the shorthand s_1^m to denote the sequence (s_1, \ldots, s_m).)

Theorem 2. Consider the following assertions: (a) the conditional CRF distribution of the responses Y = (Y_1, \ldots, Y_p) given covariates X = (X_1, \ldots
, X_q) is given by the family (4); and (b) the conditional distributions of individual covariates given the rest of the variables, P(X_u \mid X_{V' \setminus u}, Y), are given by an exponential family of the form in (6); and (c) the joint distribution P(X, Y) belongs to a graphical model with graph \bar{G} = (V \cup V', \bar{E}), with clique-set C, with factors of size at most k. These assertions are consistent if and only if the CRF distribution takes the form

    P(Y \mid X) = \exp\Big\{ \sum_{l=1}^{k} \sum_{\substack{t_1^r \in V,\; s_1^{l-r} \in V' \\ (t_1^r, s_1^{l-r}) \in C}} \alpha_{t_1^r, s_1^{l-r}} \prod_{j=1}^{l-r} B_{s_j}(X_{s_j}) \prod_{j=1}^{r} B_{t_j}(Y_{t_j}) + \sum_{t \in V} C_t(Y_t) - A(\alpha, X) \Big\},   (7)

so that the observation functions \theta_{t_1, \ldots, t_r}(X) in the CRF distribution (4) are tensor products of univariate functions:

    \theta_{t_1, \ldots, t_r}(X) = \sum_{l=1}^{k} \sum_{\substack{s_1^{l-r} \in V' \\ (t_1^r, s_1^{l-r}) \in C}} \alpha_{t_1^r, s_1^{l-r}} \prod_{j=1}^{l-r} B_{s_j}(X_{s_j}).

Let us examine the consequences of this theorem for the pair-wise CRF distributions (4). Theorem 2 then entails that the observation functions, \{\theta_s(X), \theta_{st}(X)\}, have the following form when the distribution has factors of size at most two:

    \theta_s(X) = \theta_s + \sum_{u \in V'} \theta_{su} B_u(X_u), \qquad \theta_{st}(X) = \theta_{st},   (8)

for some constant parameters \theta_s, \theta_{su}, and \theta_{st}. Similarly, if the joint distribution has factors of size at most three, we have

    \theta_s(X) = \theta_s + \sum_{u \in V'} \theta_{su} B_u(X_u) + \sum_{(u,v) \in V' \times V'} \theta_{suv} B_u(X_u) B_v(X_v), \qquad \theta_{st}(X) = \theta_{st} + \sum_{u \in V'} \theta_{stu} B_u(X_u).   (9)

(Remark 1) While we have derived the covariate functions in Theorem 2 by assuming a distributional form on X, using the resulting covariate functions does not necessarily impose distributional assumptions on X. This is similar to "generative-discriminative" pairs of models [15]: a "generative" Naive Bayes distribution for P(X \mid Y) corresponds to a "discriminative" logistic regression model for P(Y \mid X), but the converse need not hold. We can thus leverage the parametric CRF distributional form in Theorem 2 without necessarily imposing stringent distributional assumptions on X.

(Remark 2) Consider the form of the covariate functions given by (8) compared to (9).
What does sparsity in the parameters entail in terms of conditional independence assumptions? \theta_{st} = 0 in (8) entails that Y_s is conditionally independent of Y_t given the other responses and all the covariates. Thus, the parametrization in (8) corresponds to pair-wise conditional independence assumptions between the responses (structure learning) and between the responses and covariates (feature selection). In contrast, (9) lets the edge weights between the responses, \theta_{st}(X), vary as a linear combination of the covariates. Letting \theta_{stu} = 0 entails the lack of a third-order interaction between the pair of responses Y_s and Y_t and the covariate X_u, conditioned on all other responses and covariates.

(Remark 3) Our general subclasses of CRFs specified by Theorems 1 and 2 encompass many existing CRF families as special cases, in addition to providing many novel forms of CRFs.
• The Gaussian CRF presented in [7], as well as the reparameterization in [8], can be viewed as an instance of our framework by substituting Gaussian sufficient statistics into (8): here the Gaussian mean of the CRF depends on the covariates, but not the covariance. We can correspondingly derive a novel Gaussian CRF formulation from (9), where the Gaussian covariance of Y \mid X would also depend on X.
• By using the Bernoulli distribution as the node-conditional distribution, we can derive the Ising CRF, recently studied in [2] with an application to studying tumor suppressor genes.
• Several novel forms of CRFs can be derived by specifying node-conditional distributions as Poisson or exponential, for example. With certain distributions, such as the multivariate Poisson, we would have to enforce constraints on the parameters to ensure normalizability of the distribution. For the Poisson CRF distribution, it can be verified that for the log-partition function to be finite, A\big(\theta_{st}(X)\big) < \infty, the observation functions are constrained to be non-positive, \theta_{st}(X) \le 0.
Such restrictions are typically needed for cases where the variables have infinite domains.

3 Graphical Model Structure Learning

We now address the task of learning a CRF distribution from our general family given i.i.d. observations of the multivariate response vector and covariates. Structure recovery and estimation for CRFs has not attracted as much attention as that for MRFs. Schmidt et al. [16] and Torralba et al. [17] empirically study greedy methods and block-ℓ1-regularized pseudo-likelihood, respectively, to learn the discrete CRF graph structure. Bradley and Guestrin [18] and Shahaf et al. [19] provide guarantees on structure recovery for low tree-width discrete CRFs using graph cuts and a maximum-weight spanning tree based method, respectively. Cai et al. [4] and Liu et al. [6] provide structure recovery guarantees for their two-stage procedure for recovering (a reparameterization of) a conditional-Gaussian-based CRF and the semi-parametric partition-based Gaussian CRF, respectively. Here, we provide a single theorem giving structure recovery guarantees for any CRF from our class of exponential family CRFs, which encompasses not only Ising- and Gaussian-based CRFs, but all other instances within our class, such as Poisson CRFs, exponential CRFs, and so on.

We are given n i.i.d. samples Z := {X^(i), Y^(i)}_{i=1}^n from a pair-wise CRF distribution of the form specified by Theorems 1 and 2, with covariate functions as given in (8):

P(Y|X; θ*) ∝ exp{ Σ_{s∈V} (θ*_s + Σ_{u∈N′(s)} θ*_su B_u(X_u)) B_s(Y_s) + Σ_{(s,t)∈E} θ*_st B_s(Y_s) B_t(Y_t) + Σ_s C(Y_s) },   (10)

with unknown parameters θ*. The task of CRF parameter learning corresponds to estimating the parameters θ*; structure learning corresponds to recovering the edge-set E; and feature selection corresponds to recovering the neighborhoods N′(s) in (10). Note that the log-partition function A(θ*) is intractable to compute in general (other than in special cases such as Gaussian CRFs).
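To see the general flavor of how one can sidestep the intractable normalization, here is a minimal sketch of node-wise ℓ1-regularized estimation with Gaussian node-conditionals, so that each node's loss reduces to least squares; the ISTA solver, synthetic data, and regularization levels are illustrative choices, not the authors' implementation:

```python
import numpy as np

def node_lasso(Ys, F, n_cov, lam_x, lam_y, iters=2000):
    """Ys: (n,) response at one node; F: (n, q + p - 1) design whose first
    n_cov columns are covariate features, the rest other responses.
    Separate penalties lam_x / lam_y for the two blocks, as in (11)."""
    n, d = F.shape
    lam = np.r_[np.full(n_cov, lam_x), np.full(d - n_cov, lam_y)]
    step = 1.0 / (np.linalg.norm(F, 2) ** 2 / n)   # 1 / Lipschitz const of grad
    theta = np.zeros(d)
    for _ in range(iters):                          # ISTA: gradient + soft-threshold
        grad = F.T @ (F @ theta - Ys) / n
        z = theta - step * grad
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return theta

rng = np.random.default_rng(0)
n, q = 400, 3
X = rng.normal(size=(n, q))                 # covariates
Yt = rng.normal(size=(n, 2))                # two "other" responses
F = np.hstack([X, Yt])
true = np.array([1.0, 0.0, 0.0, 0.8, 0.0])  # sparse ground truth: one covariate, one edge
Ys = F @ true + 0.1 * rng.normal(size=n)
theta = node_lasso(Ys, F, n_cov=q, lam_x=0.05, lam_y=0.05)
support = np.flatnonzero(np.abs(theta) > 1e-3)
print(support)
```

The nonzero coordinates of the fitted vector play the role of the estimated covariate neighborhood (first block) and response neighborhood (second block) for that node.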
Accordingly, we adopt the node-based neighborhood estimation approach of [12, 13, 9, 10]. Given the joint distribution in (10), the node-wise conditional distribution of Y_s given the rest of the nodes and covariates is

P(Y_s | Y_{V\s}, X; θ*) = exp{ η · B_s(Y_s) + C_s(Y_s) − D_s(η) },

which is a univariate exponential family with parameter η = θ*_s + Σ_{u∈V′} θ*_su B_u(X_u) + Σ_{t∈V\s} θ*_st B_t(Y_t), as discussed in the previous section. The corresponding negative log-conditional-likelihood can be written as ℓ(θ; Z) := −(1/n) log ∏_{i=1}^n P(Y_s^(i) | Y_{V\s}^(i), X^(i); θ). For each node s, we have three components of the parameter set θ := (θ_s, θ_x, θ_y): a scalar θ_s, a length-q vector θ_x := [θ_su]_{u∈V′}, and a length-(p−1) vector θ_y := [θ_st]_{t∈V\s}. Then, given samples Z, these parameters can be selected by the following ℓ1-regularized M-estimator:

min_{θ ∈ R^{1+(p−1)+q}} ℓ(θ; Z) + λ_{x,n} ‖θ_x‖_1 + λ_{y,n} ‖θ_y‖_1,   (11)

where λ_{x,n}, λ_{y,n} are the regularization constants. Note that λ_{x,n} and λ_{y,n} need not be the same: λ_{y,n} determines the degree of sparsity between Y_s and Y_{V\s}, and similarly λ_{x,n} determines the degree of sparsity between Y_s and the covariates X. Given this M-estimator, we can recover the response-variable neighborhood of response Y_s as N̂(s) = {t ∈ V\s : θ̂^y_st ≠ 0}, and the feature neighborhood of the response Y_s as N̂′(s) = {u ∈ V′ : θ̂^x_su ≠ 0}. Armed with this machinery, we can provide statistical guarantees on successful learning of all three sub-tasks of CRFs:

Theorem 3. Consider a CRF distribution as specified in (10). Suppose that the regularization parameters in (11) are chosen such that

λ_{x,n} ≥ M_1 √(log q / n),   λ_{y,n} ≥ M_1 √(log p / n),   and   max{λ_{x,n}, λ_{y,n}} ≤ M_2,

where M_1 and M_2 are constants depending on the node-conditional exponential family distribution.
Further suppose that min_{t∈N(s)} |θ*_st| ≥ (10/ρ_min) max{√d_x λ_{x,n}, √d_y λ_{y,n}}, where ρ_min is the minimum eigenvalue of the Hessian of the loss function at (θ_x*, θ_y*), and d_x, d_y are the number of nonzero elements in θ_x* and θ_y*, respectively. Then, for some positive constants L, c_1, c_2, and c_3, if n ≥ L (d_x + d_y)² (log p + log q)(max{log n, log(p + q)})², then with probability at least 1 − c_1 max{n, p + q}^{−2} − exp(−c_2 n) − exp(−c_3 n), the following statements hold.

(a) (Parameter Error) For each node s ∈ V, the solution θ̂ of the M-estimation problem in (11) is unique, with parameter error bound

‖θ̂_x − θ_x*‖_2 + ‖θ̂_y − θ_y*‖_2 ≤ (5/ρ_min) max{√d_x λ_{x,n}, √d_y λ_{y,n}}.

Figure 1: (a) ROC curves averaged over 50 simulations from a Gaussian CRF with p = 50 responses, q = 50 covariates, and (left) n = 100 and (right) n = 250 samples. Our method (G-CRF) is compared to that of [7] (cGGM) and [8] (pGGM). (b) ROC curves for simulations from an Ising CRF with p = 100 responses, q = 10 covariates, and (left) n = 50 and (right) n = 150 samples. Our method (I-CRF) is compared to the unconditional Ising MRF (I-MRF). (c) ROC curves for simulations from a Poisson CRF with p = 100 responses, q = 10 covariates, and (left) n = 50 and (right) n = 150 samples. Our method (P-CRF) is compared to the Poisson MRF (P-MRF).

(b) (Feature Selection) The M-estimate recovers the response-feature neighborhoods exactly, so that N̂′(s) = N′(s), for all s ∈ V.
(c) (Structure Recovery) The M-estimate recovers the true response neighborhoods exactly, so that N̂(s) = N(s), for all s ∈ V.

The proof requires modifying that of Theorem 1 in [9, 10] to allow for two different regularization parameters, λ_{x,n} and λ_{y,n}, and for two distinct sets of random variables (responses and covariates). This introduces subtleties related to interactions in the analyses. Extending our statistical analysis in Theorem 3 for pair-wise CRFs to general CRF distributions (3), as well as to general covariate functions such as those in (9), is omitted for space reasons and left for future work.

4 Experiments

Simulation Studies. In order to evaluate the generality of our framework, we simulate data from three different instances of our model: those given by Gaussian, Bernoulli (Ising), and Poisson node-conditional distributions. We assume the true conditional distribution P(Y|X) follows (7) with the parameters

θ_s(X) = θ_s + Σ_{u∈V′} θ_su X_u,   θ_st(X) = θ_st + Σ_{u∈V′} θ_stu X_u,

for some constant parameters θ_s, θ_su, θ_st and θ_stu. In other words, we permit both the mean, θ_s(X), and the covariance or edge weights, θ_st(X), to depend on the covariates. For the Gaussian CRFs, our goal is to infer the precision (or inverse covariance) matrix. We first generate covariates as X ∼ U[−0.05, 0.05]. Given X, the precision matrix of Y, Θ(X), is generated as follows. All the diagonal elements are set to 1. For each node s, the 4 nearest neighbors in the √p × √p lattice structure are selected, and θ_st = 0 for non-neighboring nodes. For a given edge structure, the edge strength is a function of the covariates X: θ_st(X) = c + ⟨ω_st, X⟩, where c is a constant bias term and ω_st is a vector of length q. Data of size p = 50 responses and q = 50 covariates was generated for n = 100 and n = 250 samples.
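A sketch of this data-generating design, with hypothetical values for c and the ω_st scale, and a diagonal shift to guarantee positive definiteness (details the text leaves unspecified):

```python
import numpy as np

def lattice_precision(p_side, X, c=0.25, scale=1.0, rng=None):
    # 4-nearest-neighbor lattice: each node connects to its right and down
    # neighbors; edge weight theta_st(X) = c + <omega_st, X>.
    rng = rng or np.random.default_rng(0)
    p = p_side * p_side
    Theta = np.eye(p)
    for i in range(p_side):
        for j in range(p_side):
            s = i * p_side + j
            for di, dj in [(0, 1), (1, 0)]:
                ii, jj = i + di, j + dj
                if ii < p_side and jj < p_side:
                    t = ii * p_side + jj
                    omega = rng.normal(scale=scale, size=X.shape[0])
                    Theta[s, t] = Theta[t, s] = c + omega @ X
    return Theta

q = 5
X = np.random.default_rng(1).uniform(-0.05, 0.05, size=q)   # covariates
Theta = lattice_precision(5, X)                             # p = 25 for brevity
# Positive definiteness is not automatic; shift the diagonal if needed
# before sampling Y | X from N(0, Theta(X)^{-1}).
eigmin = np.linalg.eigvalsh(Theta).min()
if eigmin <= 0:
    Theta += (1e-3 - eigmin) * np.eye(Theta.shape[0])
Y = np.random.default_rng(2).multivariate_normal(
    np.zeros(Theta.shape[0]), np.linalg.inv(Theta))
print(Theta.shape, Y.shape)
```

Repeating the draw of Y for n samples (with fresh X per sample) yields the simulated Gaussian CRF data described above.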
Figure 1(a) reports the receiver operating characteristic (ROC) curves averaged over 50 trials for three different methods: the model of [7] (denoted cGGM), the model of [8] (denoted pGGM), and our method (denoted G-CRF). The results show that our method outperforms the competing methods: their edge weights are restricted to be constants, while our method allows them to depend linearly on the covariates. Data was similarly generated using a 4-nearest-neighbor lattice structure for Ising and Poisson CRFs with p = 100 responses, q = 10 covariates, and n = 50 or n = 150 samples. Figure 1(b) and Figure 1(c) report the ROC curves averaged over 50 trials for the Ising and Poisson CRFs, respectively. The performance of our method is compared to that of the unconditional Ising and Poisson MRFs of [9, 10].

Figure 2: From left to right: Gaussian MRF, mean-specified Gaussian CRF, and the set corresponding to the covariance-specified Gaussian CRF. The latter shows the third-order interactions between gene-pairs and each of the five common aberration covariates (EGFR, PTEN, CDKN2A, PDGFRA, and CDK4). The models were learned from gene expression array data of Glioblastoma samples, and the plots display the response neighborhoods of gene TWIST1.

Real Data Example: Genetic Networks of Glioblastoma. We demonstrate the performance of our CRF models by learning genetic networks of Glioblastoma conditioned on common copy number aberrations. Level III gene expression data measured by Agilent arrays for n = 465 Glioblastoma tumor samples, as well as copy number variation measured by CGH-arrays, were downloaded from the Cancer Genome Atlas data portal [20]. The data was processed according to standard techniques, and we only consider genes from the C2 Pathway Database. The five most common copy number aberrations across all subjects were taken as covariates.
We fit our Gaussian "mean-specified" CRFs (with covariate functions given in (8)) and Gaussian "covariance-specified" CRFs (with covariate functions given in (9)) by penalized neighborhood estimation to learn the graph structure of gene expression responses, p = 876, conditional on q = 5 aberrations: EGFR, PTEN, CDKN2A, PDGFRA, and CDK4. Stability selection [21] was used to determine the sparsity of the network. Due to space limitations, the entire network structures are not shown. Instead, we show the results of the mean- and covariance-specified Gaussian CRFs and that of the Gaussian graphical model (GGM) for one particularly important gene neighborhood: TWIST1 is a transcription factor for the epithelial-to-mesenchymal transition [22] and has been shown to promote tumor invasion in multiple cancers including Glioblastoma [23]. The neighborhoods of TWIST1 learned by GGMs and mean-specified CRFs share many of the known interactors of TWIST1, such as SNAI2, MGP, and PMAIP1 [24]. The mean-specified CRF is sparser, as conditioning on copy number aberrations may explain many of the conditional dependencies with TWIST1 that are captured by GGMs, demonstrating the utility of conditional modeling via CRFs. For the covariance-specified Gaussian CRF, we plot the neighborhood given by θ_stu in (9) for the five values of u corresponding to each aberration. The results of this network denote third-order effects between gene-pairs and aberrations, and are thus even sparser, with no neighbors for the interactions between TWIST1 and PTEN, CDK4, and EGFR. TWIST1 has different interactions with PDGFRA and CDKN2A, which have high frequency in proneural subtypes of Glioblastoma tumors. Thus, our covariance-specified CRF network may indicate that these two aberrations are the most salient in interacting with pairs of genes that include the gene TWIST1.
Overall, our analysis has demonstrated the applied advantages of our CRF models; namely, one can study the network structure between responses conditional on covariates and/or between pairs of responses that interact with particular covariates.

Acknowledgments

The authors acknowledge support from the following sources: ARO via W911NF-12-1-0390 and NSF via IIS-1149803 and DMS-1264033 to E.Y. and P.R.; the Ken Kennedy Institute for Information Technology at Rice to G.A. and Z.L.; NSF DMS-1264058 and DMS-1209017 to G.A.; and NSF DMS-1263932 to Z.L.

References

[1] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, December 2008.
[2] J. Cheng, E. Levina, P. Wang, and J. Zhu. Sparse Ising models with covariates. Arxiv preprint arXiv:1209.6419, 2012.
[3] S. Ding, G. Wahba, and X. Zhu. Learning higher-order graph structure with features by structure penalty. In NIPS, 2011.
[4] T. Cai, H. Li, W. Liu, and J. Xie. Covariate adjusted precision matrix estimation with an application in genetical genomics. Biometrika, 2011.
[5] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 2009.
[6] H. Liu, X. Chen, J. Lafferty, and L. Wasserman. Graph-valued regression. In NIPS, 2010.
[7] J. Yin and H. Li. A sparse conditional Gaussian graphical model for analysis of genetical genomics data. Annals of Applied Statistics, 5(4):2630–2650, 2011.
[8] X. Yuan and T. Zhang. Partial Gaussian graphical model estimation. Arxiv preprint arXiv:1209.6419, 2012.
[9] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In NIPS, 2012.
[10] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. Arxiv preprint arXiv:1301.4183, 2013.
[11] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi.
On learning discrete graphical models using group-sparse regularization. In AISTATS, 2011.
[12] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[13] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[14] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192–236, 1974.
[15] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In NIPS, 2002.
[16] M. Schmidt, K. Murphy, G. Fung, and R. Rosales. Structure learning in random fields for heart motion abnormality detection. In CVPR, pages 1–8, 2008.
[17] A. Torralba, K. P. Murphy, and W. T. Freeman. Contextual models for object detection using boosted random fields. In NIPS, 2004.
[18] J. K. Bradley and C. Guestrin. Learning tree conditional random fields. In ICML, 2010.
[19] D. Shahaf, A. Chechetka, and C. Guestrin. Learning thin junction trees via graph cuts. In AISTATS, 2009.
[20] Cancer Genome Atlas Research Network. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature, 455(7216):1061–1068, October 2008.
[21] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. Arxiv preprint arXiv:1006.3316, 2010.
[22] J. Yang, S. A. Mani, J. L. Donaher, S. Ramaswamy, R. A. Itzykson, C. Come, P. Savagner, I. Gitelman, A. Richardson, and R. A. Weinberg. Twist, a master regulator of morphogenesis, plays an essential role in tumor metastasis. Cell, 117(7):927–939, 2004.
[23] S. A. Mikheeva, A. M. Mikheev, A. Petit, R. Beyer, R. G.
Oxford, L. Khorasani, J.-P. Maxwell, C. A. Glackin, H. Wakimoto, I. González-Herrero, et al. Twist1 promotes invasion through mesenchymal change in human glioblastoma. Molecular Cancer, 9:194, 2010.
[24] M. A. Smit, T. R. Geiger, J.-Y. Song, I. Gitelman, and D. S. Peeper. A Twist-Snail axis critical for TrkB-induced epithelial-mesenchymal transition-like transformation, anoikis resistance, and metastasis. Molecular and Cellular Biology, 29(13):3722–3737, 2009.
[25] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55:2183–2202, May 2009.
[26] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Arxiv preprint arXiv:1010.2731, 2010.
Adaptivity to Local Smoothness and Dimension in Kernel Regression

Samory Kpotufe, Toyota Technological Institute-Chicago*, samory@ttic.edu
Vikas K. Garg, Toyota Technological Institute-Chicago, vkg@ttic.edu

Abstract

We present the first result for kernel regression where the procedure adapts locally at a point x to both the unknown local dimension of the metric space X and the unknown Hölder-continuity of the regression function at x. The result holds with high probability simultaneously at all points x in a general metric space X of unknown structure.

1 Introduction

Contemporary statistical procedures are making inroads into a diverse range of applications in the natural sciences and engineering. However, it is difficult to use these procedures "off-the-shelf" because they have to be properly tuned to the particular application; without proper tuning their prediction performance can suffer greatly. This is true in nonparametric regression (e.g. tree-based, k-NN and kernel regression), where regression performance is particularly sensitive to how well the method is tuned to the unknown problem parameters. In this work, we present an adaptive kernel regression procedure, i.e. a procedure which self-tunes, optimally, to the unknown parameters of the problem at hand.

We consider regression on a general metric space X of unknown metric dimension, where the output Y is given as f(x) + noise. We are interested in adaptivity at any input point x ∈ X: the algorithm must self-tune to the unknown local parameters of the problem at x. The most important such parameters (see e.g. [1, 2]) are (1) the unknown smoothness of f, and (2) the unknown intrinsic dimension, both defined over a neighborhood of x. Existing results on adaptivity have typically treated these two problem parameters separately, resulting in methods that solve only part of the self-tuning problem. In kernel regression, the main algorithmic parameter to tune is the bandwidth h of the kernel.
The problem of (local) bandwidth selection at a point x ∈ X has received considerable attention in both the theoretical and applied literature (see e.g. [3, 4, 5]). In this paper we present the first method which provably adapts to both the unknown local intrinsic dimension and the unknown Hölder-continuity of the regression function f at any point x in a metric space of unknown structure. The intrinsic dimension and Hölder-continuity are allowed to vary with x in the space, and the algorithm must thus choose the bandwidth h as a function of the query x, for all possible x ∈ X.

It is unclear how to extend global bandwidth selection methods such as cross-validation to the local bandwidth selection problem at x. The main difficulty is that of evaluating the regression error at x, since the output Y at x is unobserved. We do, however, have the labeled training sample to guide us in selecting h(x), and we will show an approach that guarantees a regression rate optimal in terms of the local problem complexity at x.

*Other affiliation: Max Planck Institute for Intelligent Systems, Germany

The result combines various insights from previous work on regression. In particular, to adapt to Hölder-continuity, we build on the acclaimed results of Lepski et al. [6, 7, 8]. Some of Lepski's adaptive methods consist of monitoring the change in regression estimates f_{n,h}(x) as the bandwidth h is varied. The selected estimate has to meet a stability criterion, designed to ensure that the selected f_{n,h}(x) is sufficiently close to a target estimate f_{n,h̃}(x) for a bandwidth h̃ known to yield an optimal regression rate. These methods, however, are generally instantiated for regression in R, and extend to high-dimensional regression only if the dimension of the input space X is known. In this work the dimension of X is unknown, and in fact X is allowed to be a general metric space with significantly less regularity than usual Euclidean spaces.
To adapt to local dimension, we build on recent insights of [9], where a k-NN procedure is shown to adapt locally to intrinsic dimension. The general idea for selecting k = k(x) is to balance surrogates of the unknown bias and variance of the estimate. As a surrogate for the bias, nearest neighbor distances are used, assuming f is globally Lipschitz. Since Lipschitz-continuity is a special case of Hölder-continuity, the work of [9] corresponds in the present context to knowing the smoothness of f everywhere. In this work we do not assume knowledge of the smoothness of f, but simply that f is locally Hölder-continuous with unknown Hölder parameters.

Suppose we knew the smoothness of f at x; then we could derive an approach for selecting h(x), similar to that of [9], by balancing the proper surrogates for the bias and variance of a kernel estimate. Let h̄ be the hypothetical bandwidth so obtained. Since we don't actually know the local smoothness of f, our approach, similar to Lepski's, is to monitor the change in estimates f_{n,h}(x) as h varies, and pick the estimate f_{n,ĥ}(x) which is deemed close to the hypothetical estimate f_{n,h̄}(x) under some stability condition.

We prove nearly optimal local rates Õ(λ^{2d/(2α+d)} n^{−2α/(2α+d)}) in terms of the local dimension d at any point x and Hölder parameters λ, α, depending also on x. Furthermore, the result holds with high probability, simultaneously at all x ∈ X, for n sufficiently large. Note that we cannot union-bound over all x ∈ X, so the uniform result relies on proper conditioning on particular events in our variance bounds on the estimates f_{n,h}(·).

We start with definitions and theoretical setup in Section 2. The procedure is given in Section 3, followed by a technical overview of the result in Section 4. The analysis follows in Section 5.

2 Setup and Notation

2.1 Distribution and sample

We assume the input X belongs to a metric space (X, ρ) of bounded diameter ∆_X ≥ 1. The output Y belongs to a space Y of bounded diameter ∆_Y.
We let µ denote the marginal measure on X and µ_n denote the corresponding empirical distribution on an i.i.d. sample of size n. We assume for simplicity that ∆_X and ∆_Y are known. The algorithm runs on an i.i.d. training sample {(X_i, Y_i)}_{i=1}^n of size n. We use the notation X := {X_i}_1^n and Y := {Y_i}_1^n.

Regression function. We assume the regression function f(x) := E[Y | x] satisfies local Hölder assumptions: for every x ∈ X and r > 0, there exist λ, α > 0 depending on x and r, such that f is (λ, α)-Hölder at x on B(x, r):

∀x′ ∈ B(x, r),   |f(x) − f(x′)| ≤ λ ρ(x, x′)^α.

We note that the α parameter is usually assumed to be in the interval (0, 1] for global definitions of Hölder continuity, since a global α > 1 implies that f is constant (for differentiable f). Here, however, the definition being given relative to x, we can simply assume α > 0. For instance the function f(x) = x^α is clearly locally α-Hölder at x = 0 with constant λ = 1 for any α > 0. With higher α = α(x), f gets flatter locally at x, and regression gets easier.

Notion of dimension. We use the following notion of metric dimension, also employed in [9]. This notion extends some global notions of metric dimension to local regions of space, and thus allows the intrinsic dimension of the data to vary over space. As argued in [9] (see also [10] for a more general theory), it often coincides with other natural measures of dimension such as manifold dimension.

Definition 1. Fix x ∈ X and r > 0. Let C ≥ 1 and d ≥ 1. The marginal µ is (C, d)-homogeneous on B(x, r) if we have µ(B(x, r′)) ≤ C ϵ^{−d} µ(B(x, ϵr′)) for all r′ ≤ r and 0 < ϵ < 1.

In the above definition, d will be viewed as the local dimension at x. We will require a general upper bound d_0 on the local dimension d(x) over any x in the space. This is defined below and can be viewed as the worst-case intrinsic dimension over regions of space.

Assumption 1. The marginal µ is (C_0, d_0)-maximally-homogeneous for some C_0 ≥ 1 and d_0 ≥ 1, i.e.
the following holds for all x ∈ X and r > 0: if there exist C ≥ 1 and d ≥ 1 such that µ is (C, d)-homogeneous on B(x, r), then µ is (C_0, d_0)-homogeneous on B(x, r).

Notice that if µ is (C, d)-homogeneous on some B(x, r), then it is (C_0, d_0)-homogeneous on B(x, r) for any C_0 > C and d_0 > d. Thus, C_0, d_0 can be viewed as global upper bounds on the local homogeneity constants. By the definition, it can be the case that µ is (C_0, d_0)-maximally-homogeneous without being (C_0, d_0)-homogeneous on the entire space X.

The algorithm is assumed to know the upper bound d_0. This is a minor assumption: in many situations where X is a subset of a Euclidean space R^D, D can be used in place of d_0; more generally, the global metric entropy (log of covering numbers) of X can be used in place of d_0 (using known relations between the present notion of dimension and metric entropies [9, 10]). The metric entropy is relatively easy to estimate since it is a global quantity independent of any particular query x.

Finally we require that the local dimension is tight in small regions. This is captured by the following assumption.

Assumption 2. There exist r_µ > 0 and C′ > 0 such that if µ is (C, d)-homogeneous on some B(x, r) with r < r_µ, then for any r′ ≤ r, µ(B(x, r′)) ≤ C′ r′^d.

This last assumption extends (to local regions of space) the common assumption that µ has an upper-bounded density (relative to Lebesgue). It is however more general, in that µ is not required to have a density.

2.2 Kernel Regression

We consider a positive kernel K on [0, 1], highest at 0, decreasing on [0, 1], and 0 outside [0, 1]. The kernel estimate is defined as follows: if B(x, h) ∩ X ≠ ∅,

f_{n,h}(x) = Σ_i w_i(x) Y_i,   where   w_i(x) = K(ρ(x, X_i)/h) / Σ_j K(ρ(x, X_j)/h).

We set w_i(x) = 1/n for all i ∈ [n] if B(x, h) ∩ X = ∅.

3 Procedure for Bandwidth Selection at x

Definition 2 (Global cover size). Let ϵ > 0. Let N_ρ(ϵ) denote an upper bound on the size of the smallest ϵ-cover of (X, ρ).
We assume the global quantity N_ρ(ϵ) is known or pre-estimated. Recall that, as discussed in Section 2, d_0 can be picked to satisfy ln(N_ρ(ϵ)) = O(d_0 log(∆_X/ϵ)); in other words the procedure requires only knowledge of upper bounds N_ρ(ϵ) on global cover sizes.

The procedure is given as follows. Fix ϵ = ∆_X/n. For any x ∈ X, the set of admissible bandwidths is

Ĥ_x = { h ≥ 16ϵ : µ_n(B(x, h/32)) ≥ 32 ln(N_ρ(ϵ/2)/δ)/n } ∩ { ∆_X/2^i }_{i=0}^{⌈log(∆_X/ϵ)⌉}.

Let C_{n,δ} ≥ 2 (K(0)/K(1)) (4 ln(N_ρ(ϵ/2)/δ) + 9 C_0 4^{d_0}). For any h ∈ Ĥ_x, define

σ̂_h = 2 ∆_Y² C_{n,δ} / (n · µ_n(B(x, h/2)))   and   D_h = [ f_{n,h}(x) − √(2σ̂_h), f_{n,h}(x) + √(2σ̂_h) ].

At every x ∈ X select the bandwidth

ĥ = max { h ∈ Ĥ_x : ∩_{h′∈Ĥ_x : h′<h} D_{h′} ≠ ∅ }.

The main difference with Lepski's-type methods is in the parameter σ̂_h. In Lepski's method, since d is assumed known, a better surrogate depending on d would be used.

4 Discussion of Results

We have the following main theorem.

Theorem 1. Let 0 < δ < 1/e. Fix ϵ = ∆_X/n. Let C_{n,δ} ≥ 2 (K(0)/K(1)) (9 C_0 4^{d_0} + 4 ln(N_ρ(ϵ/2)/δ)). Define C_2 = 4^{−d_0}/(6C_0). There exists N such that, for n > N, the following holds with probability at least 1 − 2δ over the choice of (X, Y), simultaneously for all x ∈ X and all r satisfying

r_µ > r > r_n ≜ 2 ( 2^{d_0} C_0² ∆_X^{d_0} / (C_2 λ²) )^{1/(2α+d_0)} ( ∆_Y² C_{n,δ} / n )^{1/(2α+d_0)}.

Let x ∈ X, and suppose f is (λ, α)-Hölder at x on B(x, r). Suppose µ is (C, d)-homogeneous on B(x, r). Let C_r ≜ (1/(C C_0 ∆_X^{d_0})) r^{d_0−d}. We have

|f_ĥ(x) − f(x)|² ≤ 96 C_0 2^{d_0} · λ^{2d/(2α+d)} ( 2^d ∆_Y² C_{n,δ} / (C_2 C_r λ² n) )^{2α/(2α+d)}.

The result holds with high probability for all x ∈ X, and for all r_µ > r > r_n, where r_n → 0 as n → ∞. Thus, as n grows, the procedure is eventually adaptive to the Hölder parameters in any neighborhood of x. Note that the dimension d is the same for all r < r_µ by definition of r_µ. As previously discussed, the definition of r_µ corresponds to a requirement that the intrinsic dimension is tight in small enough regions. We believe this is a technical requirement due to our proof technique.
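To make the selection rule concrete, here is an illustrative one-dimensional implementation using the simplified surrogate σ̂_h = var̂_Y/(n µ_n(B(x, h))) that the experiments later in the paper adopt; the dyadic grid, the test function, and all constants are our own placeholder choices, not the theoretical ones:

```python
import numpy as np

def kernel_estimate(x, Xs, Ys, h):
    # Box kernel: the estimate is the average of Y over the ball B(x, h).
    in_ball = np.abs(Xs - x) <= h
    return Ys[in_ball].mean() if in_ball.any() else Ys.mean()

def select_bandwidth(x, Xs, Ys, grid):
    n = len(Xs)
    vy = Ys.var()
    lo_run, hi_run = -np.inf, np.inf   # running intersection of smaller D_h'
    chosen = None
    for h in sorted(grid):             # scan bandwidths in increasing order
        mass = np.mean(np.abs(Xs - x) <= h)   # empirical mass mu_n(B(x, h))
        if mass == 0:
            continue
        f = kernel_estimate(x, Xs, Ys, h)
        w = np.sqrt(2 * vy / (n * mass))      # interval half-width sqrt(2 * sigma_hat_h)
        if lo_run <= hi_run:                  # intersection over all h' < h nonempty
            chosen = h
        lo_run = max(lo_run, f - w)           # shrink running intersection by D_h
        hi_run = min(hi_run, f + w)
    return chosen

rng = np.random.default_rng(0)
Xs = rng.uniform(-1, 1, 500)
Ys = np.sin(3 * Xs) + 0.1 * rng.normal(size=500)
grid = [2.0 / 2 ** i for i in range(8)]       # dyadic bandwidth grid
h_hat = select_bandwidth(0.0, Xs, Ys, grid)
print(h_hat, kernel_estimate(0.0, Xs, Ys, h_hat))
```

The selected ĥ is the largest bandwidth whose smaller admissible intervals D_{h′} still share a common point, mirroring the rule ĥ = max{h : ∩_{h′<h} D_{h′} ≠ ∅} above.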
We hope this requirement might be removed in a longer version of the paper. Notice that r is a factor of n in the upper bound. Since the result holds simultaneously for all r_µ > r > r_n, the best tradeoff in terms of smoothness and size of r is achieved. A similar tradeoff is observed in the result of [9].

As previously mentioned, the main idea behind the proof is to introduce hypothetical bandwidths h̄ and h̃ which balance, respectively, σ̂_h against λ²h^{2α}, and O(∆_Y²/(nh^d)) against λ²h^{2α} (see Figure 1). In the figure, d and α are the unknown parameters in some neighborhood of the point x. The first part of the proof consists in showing that the variance of the estimate using a bandwidth h is at most σ̂_h. With high probability σ̂_h is bounded above by O(∆_Y²/(nh^d)). Thus, by balancing O(∆_Y²/(nh^d)) and λ²h^{2α} using h̃, we would achieve a rate of n^{−2α/(2α+d)}. We then have to show that the error of f_{n,h̄} cannot be too far from that of f_{n,h̃}. Finally, the error of f_{n,ĥ}, ĥ being selected by the procedure, will be related to that of f_{n,h̄}. The argument is a bit more nuanced than just described above and in Figure 1: the respective curves O(∆_Y²/(nh^d)) and λ²h^{2α} change with h, since dimension and smoothness at x depend on the size of the region considered. Special care has to be taken in the analysis to handle this technicality.

Figure 1: (Left) The proof argues over h̄, h̃ which balance, respectively, σ̂_h against λ²h^{2α}, and O(∆_Y²/(nh^d)) against λ²h^{2α}. The estimate under the ĥ selected by the procedure is shown to be close to that of h̄, which in turn is shown to be close to that of h̃, which is of the right adaptive form. (Right) Simulation results (NMSE against training sizes from 3000 to 10000) comparing the error of the proposed method to that of a global h selected by cross-validation. The test size is 1000 for all experiments.
X ⊂ R^{70} has diameter 1, and is a collection of 3 disjoint flats (clusters) of dimension d_1 = 2, d_2 = 5, d_3 = 10, and equal mass 1/3. For each x from cluster i, the output is Y = (sin ‖x‖)^{k_i} + N(0, 1), where k_1 = 0.8, k_2 = 0.6, k_3 = 0.4. For the implementation of the proposed method, we set σ̂_h(x) = var̂_Y / (n µ_n(B(x, h))), where var̂_Y is the variance of Y on the training sample. For both our method and cross-validation, we use a box kernel, and we vary h on an equidistant 100-knot grid on the interval from the smallest to the largest interpoint distance on the training sample.

5 Analysis

We will make use of the following bias-variance decomposition throughout the analysis. For any x ∈ X and bandwidth h, define the expected regression estimate

f̃_{n,h}(x) ≜ E_{Y|X} f_{n,h}(x) = Σ_i w_i f(X_i).

We have

|f_{n,h}(x) − f(x)|² ≤ 2 |f_{n,h}(x) − f̃_{n,h}(x)|² + 2 |f̃_{n,h}(x) − f(x)|².   (1)

The bias term above is easily bounded in a standard way. This is stated in the lemma below.

Lemma 1 (Bias). Let x ∈ X, and suppose f is (λ, α)-Hölder at x on B(x, h). For any h > 0, we have |f̃_{n,h}(x) − f(x)|² ≤ λ²h^{2α}.

Proof. We have |f̃_{n,h}(x) − f(x)| ≤ Σ_i w_i(x) |f(X_i) − f(x)| ≤ λh^α.

The rest of this section is dedicated to the analysis of the variance term of (1). We will need various supporting lemmas relating the empirical mass of balls to their true mass. This is done in the next subsection. The variance results follow in the subsequent subsection.

5.1 Supporting Lemmas

We often argue over the following distributional counterpart to Ĥ_x(ϵ).

Definition 3. Let x ∈ X and ϵ > 0. Define

H_x(ϵ) = { h ≥ 8ϵ : µ(B(x, h/8)) ≥ 12 ln(N_ρ(ϵ/2)/δ)/n } ∩ { ∆_X/2^i }_{i=0}^{⌈log(∆_X/ϵ)⌉}.

Lemma 2. Fix ϵ > 0, let Z denote an ϵ/2-cover of X, and let S_ϵ = { ∆_X/2^i }_{i=0}^{⌈log(∆_X/ϵ)⌉}. Define γ_n ≜ 4 ln(N_ρ(ϵ/2)/δ)/n. With probability at least 1 − δ, for all z ∈ Z and h ∈ S_ϵ we have

µ_n(B(z, h)) ≤ µ(B(z, h)) + √(γ_n · µ(B(z, h))) + γ_n/3,   (2)
µ(B(z, h)) ≤ µ_n(B(z, h)) + √(γ_n · µ_n(B(z, h))) + γ_n/3.   (3)

Proof idea.
Apply Bernstein’s inequality followed by a union bound on Z and Sϵ. The following two lemmas result from the above Lemma 2. Lemma 3. Fix ϵ > 0 and 0 < δ < 1. With probability at least 1 −δ, for all x ∈X and h ∈Hx(ϵ), we have for C1 = 3C04d0 and C2 = 4−d0 6C0 , C2µ(B(x, h/2)) ≤µn(B(x, h/2)) ≤C1µ(B(x, h/2)). Lemma 4. Let 0 < δ < 1, and ϵ > 0. With probability at least 1−δ, for all x ∈X, ˆHx(ϵ) ⊂Hx(ϵ). Proof. Again, let Z be an ϵ/2 cover and define Sϵ and γn as in Lemma 2. Assume (2) in the statement of Lemma 2. Let h > 16ϵ, we have for any z ∈Z and x within ϵ/2 of z, µn(B(x, h/32)) ≤µn(B(z, h/16)) ≤2µ(B(z, h/16)) + 2γn ≤2µ(B(x, h/8)) + 2γn, and we therefore have µ(B(x, h/8)) ≥1 2µn(B(x, h/32)) −γn. Pick h ∈ˆHx and conclude. 5.2 Bound on the variance The following two results of Lemma 5 to 6 serve to bound the variance of the kernel estimate. These results are standard and included here for completion. The main result of this section is the variance bound of Lemma 7. This last lemma bounds the variance term of (1) with high probability simultaneously for all x ∈X and for values of h relevant to the algorithm. Lemma 5. For any x ∈X and h > 0: EY|X fn,h(x) −efn,h(x) 2 ≤ X i w2 i (x)∆2 Y. Lemma 6. Suppose that for some x ∈X and h > 0, µn(B(x, h)) ̸= 0. We then have: P i w2 i (x) ≤maxi wi(x) ≤ K(0) K(1) · nµn(B(x, h)). Lemma 7 (Variance bound). Let 0 < δ < 1/2 and ϵ > 0. Define Cn,δ .= 2K(0) K(1) 9C04d0 + 4 ln (Nρ(ϵ/2)/δ) , With probability at least 1 −3δ over the choice of (X, Y), for all x ∈X and all h ∈ˆHx(ϵ), fn,h(x) −efn,h(x) 2 ≤ ∆2 YCn,δ nµn(B(x, h/2)). Proof. We prove the lemma statement for h ∈Hx(ϵ). The result then follows for h ∈ˆHx(ϵ) with the same probability since, by Lemma 4, ˆHx(ϵ) ⊂Hx(ϵ) under the same event of Lemma 2. Consider any ϵ/2-cover Z of X. Define γn as in Lemma 2 and assume statement (3). Let x ∈X and z ∈Z within distance ϵ/2 of x. Let h ∈Hx(ϵ). 
We have

µ(B(x, h/8)) ≤ µ(B(z, h/4)) ≤ 2µn(B(z, h/4)) + 2γn ≤ 2µn(B(x, h/2)) + 2γn,

and we therefore have µn(B(x, h/2)) ≥ (1/2)·µ(B(x, h/8)) − γn ≥ (1/2)·γn.

Now let Hz denote the union of the sets Hx(ϵ) over x ∈ B(z, ϵ/2). With probability at least 1 − δ, for all z ∈ Z, x ∈ B(z, ϵ/2), and h ∈ Hz, the sets B(z, h) ∩ X and B(x, h) ∩ X are all nonempty, since they all contain B(x′, h/2) ∩ X for some x′ such that h ∈ Hx′(ϵ). The corresponding kernel estimates are therefore well defined. Assume w.l.o.g. that Z is a minimal cover, i.e. every B(z, ϵ/2) contains some x ∈ X.

We first condition on X fixed and argue over the randomness in Y. For any x ∈ X and h > 0, let Yx,h denote the subset of Y corresponding to points from X falling in B(x, h). We define φ(Yx,h) := |fn,h(x) − f̃n,h(x)|. We note that changing any Yi value changes φ(Yz,h) by at most ∆Y·wi(z). Applying McDiarmid’s inequality and taking a union bound over z ∈ Z and h ∈ Hz, we get

P(∃z ∈ Z, ∃h ∈ Sϵ, φ(Yz,h) > Eφ(Yz,h) + t) ≤ Nρ²(ϵ/2)·exp(−2t²/(∆Y²·Σi wi²(z))).

We then have, with probability at least 1 − 2δ, for all z ∈ Z and h ∈ Hz,

|fn,h(z) − f̃n,h(z)|² ≤ 2·EY|X |fn,h(z) − f̃n,h(z)|² + 2 ln(Nρ(ϵ/2)/δ)·∆Y²·Σi wi²(z) ≤ 4 ln(Nρ(ϵ/2)/δ)·K(0)·∆Y²/(K(1)·n·µn(B(z, h))), (4)

where we apply Lemmas 5 and 6 for the last inequality.

Now fix any z ∈ Z, h ∈ Hz and x ∈ B(z, ϵ/2). We have |φ(Yx,h) − φ(Yz,h)| ≤ max{φ(Yx,h), φ(Yz,h)}, since both quantities are positive. Thus |φ(Yx,h) − φ(Yz,h)| changes by at most maxi,j{wi(z), wj(x)}·∆Y if we change any Yi value out of the contributing Y values. By Lemma 6,

maxi,j{wi(z), wj(x)} ≤ βn,h(x, z) := K(0)/(n·K(1)·min(µn(B(x, h)), µn(B(z, h)))).

Thus define

ψh(x, z) := (1/βn,h(x, z))·|φ(Yx,h) − φ(Yz,h)| and ψh(z) := sup_{x: ρ(x,z)≤ϵ/2} ψh(x, z).

By what we just argued, changing any Yi makes ψh(z) vary by at most ∆Y. We can therefore apply McDiarmid’s inequality to obtain that, with probability at least 1 − 3δ, for all z ∈ Z and h ∈ Hz,

ψh(z) ≤ EY|X ψh(z) + ∆Y·√(2 ln(Nρ(ϵ/2)/δ)/(2n)).
(5)

To bound the above expectation for any z and h ∈ Hz, consider a sequence {xi}₁^∞, xi ∈ B(z, ϵ/2), such that ψh(xi, z) → ψh(z) as i → ∞. Fix any such xi. Using Hölder’s inequality and invoking Lemmas 5 and 6, we have

EY|X ψh(xi, z) = (1/βn,h(xi, z))·EY|X |φ(Yxi,h) − φ(Yz,h)|
≤ √(EY|X (φ(Yxi,h) − φ(Yz,h))²)/βn,h(xi, z)
≤ √(2·EY|X φ(Yxi,h)² + 2·EY|X φ(Yz,h)²)/βn,h(xi, z)
≤ √(4∆Y²·βn,h(xi, z))/βn,h(xi, z) = 2∆Y/√(βn,h(xi, z)) ≤ 2∆Y·√(n·K(1)·µn(B(z, h))/K(0)).

Since ψh(xi, z) is bounded for all xi ∈ B(z, ϵ), the Dominated Convergence Theorem yields

EY|X ψh(z) = lim_{i→∞} EY|X ψh(xi, z) ≤ 2∆Y·√(n·K(1)·µn(B(z, h))/K(0)).

Therefore, using (5), we have for any z ∈ Z, any h ∈ Hz, and any x ∈ B(z, ϵ/2) that, with probability at least 1 − 3δ,

|φ(Yx,h) − φ(Yz,h)| ≤ ∆Y·βn,h(x, z)·( 2√(n·K(1)·µn(B(z, h))/K(0)) + √(2 ln(Nρ(ϵ/2)/δ)/(2n)) ). (6)

Figure 2: Illustration of the selection procedure. The intervals Dh are shown containing f(x). We will argue that fn,ĥ(x) cannot be too far from fn,h̄(x).

Now notice that βn,h(x, z) ≤ K(0)/(n·K(1)·µn(B(x, h/2))), so by Lemma 3,

µn(B(z, h)) ≤ µn(B(x, 2h)) ≤ C1·µ(B(x, 2h)) ≤ C1·C0·4^{d0}·µ(B(x, h/2)) ≤ C2·C1·C0·4^{d0}·µn(B(x, h/2)) ≤ C0·4^{d0}·µn(B(x, h/2)).

Hence, (6) becomes |φ(Yx,h) − φ(Yz,h)| ≤ 3∆Y·√(C0·4^{d0}·K(0)/(n·K(1)·µn(B(x, h/2)))). Combine with (4), using again the fact that µn(B(z, h)) ≥ µn(B(x, h/2)), to obtain

|fn,h(x) − f̃n,h(x)|² ≤ 2|fn,h(z) − f̃n,h(z)|² + 2|φ(Yx,h) − φ(Yz,h)|² ≤ (2∆Y²/(n·µn(B(x, h/2))))·(9C0·4^{d0} + 4 ln(Nρ(ϵ/2)/δ)).

5.3 Adaptivity

The proof of Theorem 1 is given in the appendix. As previously discussed, the main part of the argument consists of relating the error of fn,h̄(x) to that of fn,h̃(x), which is of the right form for B(x, r) appropriately defined as in the theorem statement. To relate the error of fn,ĥ(x) to that of fn,h̄(x), we employ a simple argument inspired by Lepski’s adaptivity work. Notice that, by definition of ĥ (see Figure 1 (Left)), for any h ≤ h̄,

σ̂h ≥ λ²h^{2α}.
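The interval-intersection rule invoked in this adaptivity argument can be sketched in code. This is a minimal sketch of one common variant of Lepski's rule under our own naming (`lepski_select`, the confidence-width factor 2, and the toy inputs are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def lepski_select(f_hat, sigma_hat):
    """Lepski-style bandwidth selection. f_hat[i] and sigma_hat[i] correspond to
    increasing bandwidths h_0 < h_1 < ...; the interval for bandwidth i is
    D_i = [f_hat[i] - 2*sqrt(sigma_hat[i]), f_hat[i] + 2*sqrt(sigma_hat[i])].
    Return the index of the largest bandwidth whose interval still intersects
    the intervals of all smaller bandwidths."""
    lo = f_hat - 2 * np.sqrt(sigma_hat)
    hi = f_hat + 2 * np.sqrt(sigma_hat)
    best = 0
    run_lo, run_hi = lo[0], hi[0]
    for i in range(1, len(f_hat)):
        run_lo, run_hi = max(run_lo, lo[i]), min(run_hi, hi[i])
        if run_lo > run_hi:   # the common intersection became empty: stop
            break
        best = i
    return best

# estimates at four growing bandwidths; the last one has drifted away (large bias)
f_hat = np.array([1.0, 1.05, 1.1, 3.0])
sigma_hat = np.array([1.0, 0.5, 0.2, 0.01])
```

Since the variance proxy σ̂h is decreasing in h, the selected bandwidth is the largest one whose interval has not yet drifted away from the small-bandwidth intervals, mirroring the argument illustrated in Figure 2.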
Therefore, by Lemmas 1 and 7, for any h < h̄, ∥fn,h − f∥² ≤ 2σ̂h, so the intervals Dh must all contain f(x), and therefore must intersect. By the same argument, ĥ ≥ h̄, and Dĥ and Dh̄ must intersect. Now, since σ̂h is decreasing, we can infer that fn,ĥ(x) cannot be too far from fn,h̄(x), so their errors must be similar. This is illustrated in Figure 2.

References

[1] C. J. Stone. Optimal rates of convergence for non-parametric estimators. Ann. Statist., 8:1348–1360, 1980.
[2] C. J. Stone. Optimal global rates of convergence for non-parametric estimators. Ann. Statist., 10:1340–1353, 1982.
[3] W. S. Cleveland and C. Loader. Smoothing by local regression: Principles and methods. Statistical Theory and Computational Aspects of Smoothing, 1049, 1996.
[4] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.
[5] J. Lafferty and L. Wasserman. Rodeo: Sparse nonparametric regression in high dimensions. Arxiv preprint math/0506342, 2005.
[6] O. V. Lepski, E. Mammen, and V. G. Spokoiny. Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. The Annals of Statistics, pages 929–947, 1997.
[7] O. V. Lepski and V. G. Spokoiny. Optimal pointwise adaptive methods in nonparametric estimation. The Annals of Statistics, 25(6):2512–2546, 1997.
[8] O. V. Lepski and B. Y. Levit. Adaptive minimax estimation of infinitely differentiable functions. Mathematical Methods of Statistics, 7(2):123–156, 1998.
[9] S. Kpotufe. k-NN regression adapts to local intrinsic dimension. NIPS, 2011.
[10] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2005.
Online Learning with Costly Features and Labels

Navid Zolghadr, Department of Computing Science, University of Alberta, zolghadr@ualberta.ca
Gábor Bartók, Department of Computer Science, ETH Zürich, bartok@inf.ethz.ch
Russell Greiner, András György, Csaba Szepesvári, Department of Computing Science, University of Alberta, {rgreiner,gyorgy,szepesva}@ualberta.ca

Abstract

This paper introduces the online probing problem: In each round, the learner is able to purchase the values of a subset of feature values. After the learner uses this information to come up with a prediction for the given round, he then has the option of paying to see the loss function that he is evaluated against. Either way, the learner pays for both the errors of his predictions and also whatever he chooses to observe, including the cost of observing the loss function for the given round and the cost of the observed features. We consider two variations of this problem, depending on whether the learner can observe the label for free or not. We provide algorithms and upper and lower bounds on the regret for both variants. We show that a positive cost for observing the label significantly increases the regret of the problem.

1 Introduction

In this paper, we study a variant of online learning, called online probing, which is motivated by practical problems where there is a cost to observing the features that may help one’s predictions. Online probing is a class of online learning problems. Just like in standard online learning problems, the learner’s goal is to produce a good predictor. In each time step t, the learner produces his prediction based on the values of some feature vector xt = (xt,1, . . . , xt,d)⊤ ∈ X ⊆ Rd.¹ However, unlike in the standard online learning settings, if the learner wants to use the value of feature i to produce a prediction, he has to purchase the value at some fixed, a priori known cost ci ≥ 0. Features whose value is not purchased in a given round remain unobserved by the learner.
Once a prediction ŷt ∈ Y is produced, it is evaluated against a loss function ℓt : Y → R. At the end of a round, the learner has the option of purchasing the full loss function, again at a fixed, prespecified cost cd+1 ≥ 0 (by default, the loss function is not revealed to the learner). The learner’s performance is measured by his regret as he competes against some prespecified set of predictors. Just like the learner, a competing predictor also needs to purchase the feature values needed in the prediction. If st ∈ {0, 1}^{d+1} is the indicator vector denoting what the learner purchased in round t (st,i = 1 if the learner purchased xt,i for 1 ≤ i ≤ d, and purchased the label for i = d + 1) and c ∈ [0, ∞)^{d+1} denotes the respective costs, then the regret with respect to a class of prediction functions F ⊆ {f | f : X → Y} is defined by

RT = Σ_{t=1}^T ( ℓt(ŷt) + ⟨st, c⟩ ) − inf_{f∈F} ( T·⟨s(f), c1:d⟩ + Σ_{t=1}^T ℓt(f(xt)) ),

where c1:d ∈ Rd is the vector obtained from c by dropping its last component and, for a given function f : Rd → Y, s(f) ∈ {0, 1}^d is an indicator vector whose ith component indicates whether f is sensitive to its ith input (in particular, si(f) = 0 by definition when f(x1, . . . , xi, . . . , xd) = f(x1, . . . , x′i, . . . , xd) holds for all (x1, . . . , xi, . . . , xd), (x1, . . . , x′i, . . . , xd) ∈ X; otherwise si(f) = 1).

Note that when defining the best competitor in hindsight, we did not include the cost of observing the loss function. This is because (i) the reference predictors do not need it; and (ii) if we did include the cost of observing the loss function for the reference predictors, then the loss of each predictor would just be increased by cd+1·T, and so the regret RT would just be reduced by cd+1·T, making it substantially easier for the learner to achieve sublinear regret.

¹We use ⊤ to denote the transpose of vectors. Throughout, all vectors x ∈ Rd denote column vectors.
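As a concrete reading of this regret definition, the following sketch computes RT for a toy instance with squared loss. All names and the toy data are ours (masks play the role of s(f)); this is an illustration of the definition, not code from the paper:

```python
import numpy as np

def probing_regret(X, Y, c, predictors, chosen, s_t):
    """X: T x d features; Y: length-T labels; c: costs of the d features plus the label;
    predictors: list of (weights, mask) pairs, where mask is the indicator s(f);
    chosen: index of the predictor used in each round;
    s_t: T x (d+1) binary purchase indicators. Squared loss plays the role of l_t."""
    T = len(Y)
    # learner: prediction loss plus everything purchased, summed over rounds
    learner = sum((X[t] @ (predictors[chosen[t]][0] * predictors[chosen[t]][1]) - Y[t]) ** 2
                  + s_t[t] @ c for t in range(T))
    # best competitor in hindsight pays for its features but never for the label
    best = min(T * (mask @ c[:-1]) + sum((X[t] @ (w * mask) - Y[t]) ** 2 for t in range(T))
               for w, mask in predictors)
    return learner - best

X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([1.0, 1.0])
c = np.array([0.1, 0.1, 0.0])                       # two feature costs, free label
predictors = [(np.array([1.0, 1.0]), np.array([1.0, 1.0])),
              (np.array([0.0, 0.0]), np.array([0.0, 0.0]))]
reg = probing_regret(X, Y, c, predictors, [0, 0], np.ones((2, 3)))
```

Playing the best predictor and buying exactly its features yields zero regret here; playing the constant-zero predictor without purchases leaves the prediction loss as regret.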
Thus, we prefer the current regret definition, as it promotes the study of regret when there is a price attached to observing the loss functions.

To motivate our framework, consider the problem of developing a computer-assisted diagnostic tool to determine what treatment to apply to a patient in a subpopulation of patients. When a patient arrives, the computer can order a number of tests that cost money, while other information (e.g., the medical record of the patient) is available for free. Based on the available information, the system chooses a treatment. Following up the patient may or may not incur additional cost. In this example, there is typically a delay in obtaining the information whether the treatment was effective. However, for simplicity, in this work we have decided not to study the effect of this delay. Several works in the literature show that delays usually increase the regret in a moderate fashion (Mesterharm, 2005; Weinberger and Ordentlich, 2006; Agarwal and Duchi, 2011; Joulani et al., 2013).

As another example, consider the problem of product testing in a manufacturing process (e.g., the production of electronic consumer devices). When the product arrives, it can be subjected to a large number of diagnostic tests that differ in terms of their costs and effectiveness. The goal is to predict whether the product is defect-free. Obtaining the ground truth can also be quite expensive, especially for complex products. The challenge is that the effectiveness of the various tests is often a priori unknown and that different tests may provide complementary information (meaning that many tests may be required). Hence, it might be challenging to decide what form the most cost-effective diagnostic procedure may take. Yet another example is the problem of developing a cost-effective way of instrument calibration. In this problem, the goal is to predict one or more real-valued parameters of some product.
Again, various tests with different costs and reliability can be used as the input to the predictor. Finally, although we pose the task as an online learning problem, it is easy to show that the procedures we develop can also be used to attack the batch learning problem, when the goal is to learn a predictor that will be cost-efficient on future data, given a database of examples.

Obviously, when observing the loss is costly, the problem is related to active learning. However, to the best of our knowledge, the case when observing the features is costly has not been studied before in the online learning literature. Section 1.1 discusses the relationship of our work to the existing literature in more detail.

This paper analyzes two versions of the online problem. In the first version, free-label online probing, there is no cost to seeing the loss function, that is, cd+1 = 0. (The loss function often compares the predicted value with some label in a known way, in which case learning the value of the label for the round means that the whole loss function becomes known; hence the choice of the name.) Thus, the learner naturally will choose to see the loss function after he provides his prediction; this provides feedback that the learner can use to improve the predictor he produces. In the second version, non-free-label online probing, the cost of seeing the loss function is positive: cd+1 > 0.

In Section 2 we study the case of free-label online probing. We give an algorithm that enjoys a regret of O(√(2^d·L·T·ln NT(1/(TL)))) when the losses are L-equi-Lipschitz (Theorem 2.2), where NT(ε) is the ε-covering number of F on sequences of length T. This leads to an Õ(√(2^d·L·T)) regret bound for typical function classes, such as the class of linear predictors with bounded weights and bounded inputs. We also show that, in the worst case, the exponential dependence on the dimension cannot be avoided in the bound.
For the special case of linear prediction with quadratic loss, we give an algorithm whose regret scales only as Õ(√(dT)), a vast improvement in the dependence on d. The case of non-free-label online probing is treated in Section 3. Here, in contrast to the free-label case, we prove that the minimax growth rate of the regret is of the order Θ̃(T^{2/3}). The increase of the regret rate stems from the fact that the “best competitor in hindsight” does not have to pay for the label. In contrast to the previous case, since the label is costly here, if the algorithm decides to see the label it does not even have to reason about which features to observe, as querying the label requires paying a cost that is a constant over the cost of the best predictor in hindsight, already resulting in the Θ̃(T^{2/3}) regret rate. However, in practice (for shorter horizons) it still makes sense to select the features that provide the best balance between the feature cost and the prediction loss. Although we do not study this, we note that by combining the algorithmic ideas developed for the free-label case with the ideas developed for the non-free-label case, it is possible to derive an algorithm that reasons actively about the cost of observing the features, too. In the part dealing with the free-label problem, we build heavily on the results of Mannor and Shamir (2011), while in the part dealing with the non-free-label problem we build on the ideas of Cesa-Bianchi et al. (2006). Due to space limitations, all of our proofs are relegated to the appendix.

1.1 Related Work

This paper analyzes online learning when features (and perhaps labels) have to be purchased. The standard “batch learning” framework has a pure explore phase, which gives the learner a set of labeled, completely specified examples, followed by a pure exploit phase, where the learned predictor is asked to predict the label for novel instances.
Notice the learner is not required (nor even allowed) to decide which information to gather. By contrast, “active (batch) learning” requires the learner to identify that information (Settles, 2009). Most such active learners begin with completely specified, but unlabeled instances; they then purchase labels for a subset of the instances. Our model, however, requires the learner to purchase feature values as well. This is similar to the “active feature-purchasing learning” framework (Lizotte et al., 2003). This is extended in Kapoor and Greiner (2005) to a version that requires the eventual predictor (as well as the learner) to pay to see feature values as well. However, these are still in the batch framework: after gathering the information, the learner produces a predictor, which is not changed afterwards. Our problem is an online problem over multiple rounds, where at each round the learner is required to predict the label for the current example. Standard online learning algorithms typically assume that each example is given with all the features. For example, Cesa-Bianchi et al. (2005) provided upper and lower bounds on the regret where the learner is given all the features for each example, but must pay for any labels he requests. In our problem, the learner must pay to see the values of the features of each example as well as the cost to obtain its true label at each round. This cost model means there is an advantage to finding a predictor that involves few features, as long as it is sufficiently accurate. The challenge, of course, is finding these relevant features, which happens during this online learning process. Other works, in particular Rostamizadeh et al. (2011) and Dekel et al. (2010), assume the features of different examples might be corrupted, missed, or partially observed due to various problems, such as failure in sensors gathering these features. Having such missing features is realistic in many applications. Rostamizadeh et al. 
(2011) provided an algorithm for this task in the online setting, with optimal O(√T) regret, where T is the number of rounds. Our model differs from this model, as in our case the learner has the option to obtain the values of only the subset of the features that he selects.

2 Free-Label Probing

In this section we consider the case when the cost of observing the loss function is zero. Thus, we can assume without loss of generality that the learner receives the loss function at the end of each round (i.e., st,d+1 = 1). We will first consider the general setting where the only restriction is that the losses are equi-Lipschitz and the function set F has a finite empirical worst-case covering number. Then we consider the special case where the set of competitors are the linear predictors and the losses are quadratic.

2.1 The Case of Lipschitz Losses

In this section we assume that the loss functions ℓt are Lipschitz with a known, common Lipschitz constant L w.r.t. some semi-metric dY on Y: for all t ≥ 1,

sup_{y,y′∈Y} |ℓt(y) − ℓt(y′)| ≤ L·dY(y, y′). (1)

Clearly, the problem is an instance of prediction with expert advice under partial information feedback (Auer et al., 2002), where each expert corresponds to an element of F. Note that, if the learner chooses to observe the values of some features, then he will also be able to evaluate the losses of all the predictors f ∈ F that use only these selected features. This can be formalized as follows: By a slight abuse of notation, let st ∈ {0, 1}^d be the indicator showing the features selected by the learner at time t (here we drop the last element of st, as st,d+1 is always 1); similarly, we will drop the last coordinate of the cost vector c throughout this section. Then, the learner can compute the loss of any predictor f ∈ F such that s(f) ≤ st, where ≤ denotes component-wise comparison. However, for some loss functions, it may be possible to estimate the losses of other predictors, too.
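The side-observation structure just described can be made concrete: purchasing feature set st reveals the loss of every predictor whose feature set is contained in st. A small sketch, with our own encoding of feature sets as frozensets (not code from the paper):

```python
from itertools import combinations

def revealed(purchased, expert_masks):
    """Indices of experts whose feature set is contained in the purchased set,
    i.e. the experts whose losses the learner can evaluate this round."""
    return [i for i, m in enumerate(expert_masks) if m <= purchased]

# one expert per subset of {0, 1}: 2^d experts for d = 2
d = 2
expert_masks = [frozenset(s) for r in range(d + 1)
                for s in combinations(range(d), r)]
obs = revealed(frozenset({0, 1}), expert_masks)   # buying all features reveals every loss
```

Experts sharing the same feature set all observe each other, so the experts can be covered by at most one clique per distinct subset, which is where the 2^d factor in the bounds of this section comes from.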
We will exploit this when we study some interesting special cases of the general problem. However, in general, it is not possible to infer the losses of functions such that st,i < s(f)i for some i (cf. Theorem 2.3).

The idea is to study first the case when F is finite and then reduce the general case to the finite case by considering appropriate finite coverings of the space F. The regret will then depend on how the covering numbers of the space F behave. Mannor and Shamir (2011) studied problems similar to this in a general framework, where in addition to the loss of the selected predictor (expert), the losses of some other predictors are also communicated to the learner in every round. The connections between the predictors are represented by a directed graph whose nodes are labeled by the elements of F (i.e., by the experts), and there is an edge from f ∈ F to g ∈ F if, when choosing f, the loss of g is also revealed to the learner. It is assumed that the graph of any round t, Gt = (F, Et), becomes known to the learner at the beginning of the round. Further, it is also assumed that (f, f) ∈ Et for every t ≥ 1 and f ∈ F. Mannor and Shamir (2011) gave an algorithm, called ELP (exponential weights with linear programming), to solve this problem, which calls the Exponential Weights algorithm, but modifies it to explore less, exploiting the information structure of the problem. The exploration distribution is found by solving a linear program, explaining the name of the algorithm. The regret of ELP is analyzed in the following theorem.

Theorem 2.1 (Mannor and Shamir 2011). Consider a prediction with expert advice problem over F where, in round t, Gt = (F, Et) is the directed graph that encodes which losses become available to the learner. Assume that, for any t ≥ 1, at most χ(Gt) cliques of Gt can cover all vertices of Gt. Let B be a bound on the non-negative losses ℓt: max_{t≥1, f∈F} ℓt(f(xt)) ≤ B.
Then, there exists a constant CELP > 0 such that for any T > 0, the regret of Algorithm 2 (shown in the Appendix) when competing against the best predictor using ELP satisfies

E[RT] ≤ CELP·B·√( ln|F| · Σ_{t=1}^T χ(Gt) ). (2)

The algorithm’s computational cost in any given round is poly(|F|).

For a finite F, define Et ≡ E := {(f, g) | s(g) ≤ s(f)}. Then clearly, χ(Gt) ≤ 2^d. Further, B = ∥c1:d∥1 + max_{t≥1, y∈Y} ℓt(y) =: C1 + ℓmax (i.e., C1 = ∥c1:d∥1). Plugging these into (2) gives

E[RT] ≤ CELP·(C1 + ℓmax)·√(2^d·T·ln|F|). (3)

To apply this algorithm in the case when F is infinite, we have to approximate F with a finite set F′ ⊆ {f | f : X → Y}. The worst-case maximum approximation error of F using F′ over sequences of length T can be defined as

AT(F′, F) = max_{x∈X^T} sup_{f∈F} inf_{f′∈F′} [ (1/T)·Σ_{t=1}^T dY(f(xt), f′(xt)) + ⟨(s(f′) − s(f))+, c1:d⟩ ],

where (s(f′) − s(f))+ denotes the coordinate-wise positive part of s(f′) − s(f), that is, the indicator vector of the features used by f′ and not used by f. The average error can also be viewed as a (normalized) dY-“distance” between the vectors (f(xt))_{1≤t≤T} and (f′(xt))_{1≤t≤T}, penalized with the extra feature costs. For a given positive number α, define the worst-case empirical covering number of F at level α and horizon T > 0 by

NT(F, α) = min{ |F′| : F′ ⊆ {f | f : X → Y}, AT(F′, F) ≤ α }.
Then, there exists an algorithm such that for any T > 0, knowing T, the regret satisfies E[RT ] CELP(C1 + `max) q 2dT ln NT (F, ↵) + TL↵. In particular, by choosing ↵= 1/(TL), we have E[RT ] CELP(C1 + `max) q 2dT ln NT (F, 1/(TL)) + 1 . We note in passing that the the dependence of the algorithm on the time horizon T can be alleviated, using, for example, the doubling trick. In order to turn the above bound into a concrete bound, one must investigate the behavior of the metric entropy, ln NT (F, ↵). In many cases, the metric entropy can be bounded independently of T. In fact, often, ln NT (F, ↵) = D ln(1 + c/↵) for some c, D > 0. When this holds, D is often called the “dimension” of F and we get that E [RT ] CELP(C1 + `max) q 2dTD ln(1 + cTL) + 1 . As a specific example, we will consider the case of real-valued linear functions over a ball in a Euclidean space with weights belonging to some other ball. For a normed vector space V with norm k · k and dual norm k · k⇤, x 2 V , r ≥0, let Bk·k(x, r) = {v 2 V | kvk r} denote the ball in V centered at x that has radius r. For X ⇢Rd, W ⇢Rd, let F ⇢Lin(X, W) .= {g : X ! R | g(·) = h w, · i , w 2 W} (4) be the space of linear mappings from X to reals with weights belonging to W. We have the following lemma: Lemma 2.1. Let X, W > 0, dY(y, y0) = |y −y0|, X ⇢Bk·k(0, X) and W ⇢Bk·k⇤(0, W). Consider a set of real-valued linear predictors F ⇢Lin(X, W). Then, for any ↵> 0, ln NT (F, ↵) d ln(1 + 2WX/↵). The previous lemma, together with Theorem 2.2 immediately gives the following result: Corollary 2.1. Assume that F ⇢Lin(X, W), X ⇢Bk·k(0, X), W ⇢Bk·k⇤(0, W) for some X, W > 0. Further, assume that the losses (`t)t≥1 are L-Lipschitz. Then, there exists an algorithm such that for any T > 0, the regret of the algorithm satisfies, E [RT ] CELP(C1 + `max) q d2dT ln(1 + 2TLWX) + 1 . 
Note that if one is given an a priori bound p on the maximum number of features that can be used in a single round (allowing the algorithm to use fewer than p, but not more, features), then 2^d in the above bound can be replaced by Σ_{1≤i≤p} (d choose i) ≈ d^p, where the approximation assumes that p < d/2. Such a bound on the number of features available per round may arise from strict budgetary considerations. When d^p is small, this makes the bound non-vacuous even for small horizons T. In addition, in such cases the algorithm also becomes computationally feasible. It remains an interesting open question to study the computational complexity when there is no restriction on the number of features used. In the next theorem, however, we show that the worst-case exponential dependence of the regret on the number of features cannot be improved (while keeping the root-T dependence on the horizon). The bound is based on the lower bound construction of Mannor and Shamir (2011), which reduces the problem to known lower bounds in the multi-armed bandit case.

Theorem 2.3. There exists an instance of free-label online probing such that the minimax regret of any algorithm is Ω(√( (d choose ⌊d/2⌋)·T )).

2.2 Linear Prediction with Quadratic Losses

In this section, we study the problem under the assumption that the predictors have a linear form and the loss functions are quadratic. That is, F ⊆ Lin(X, W), where W = {w ∈ Rd : ∥w∥* ≤ wlim} and X = {x ∈ Rd : ∥x∥ ≤ xlim} for some given constants wlim, xlim > 0, while ℓt(y) = (y − yt)², where |yt| ≤ xlim·wlim. Thus, choosing a predictor is akin to selecting a weight vector wt ∈ W, as well as a binary vector st ∈ G ⊆ {0, 1}^d that encodes the features to be used in round t. The prediction for round t is then ŷt = ⟨wt, st ⊙ xt⟩, where ⊙ denotes the coordinate-wise product, while the loss suffered is (ŷt − yt)².
The set G is an arbitrary non-empty, a priori specified subset of {0, 1}^d that allows the user of the algorithm to encode extra constraints on what subsets of features can be selected. In this section we show that in this case a regret bound of size Õ(√(poly(d)·T)) is possible. The key idea that permits the improvement of the regret bound is that a randomized choice of a weight vector Wt (and thus, of a subset) helps one construct unbiased estimates of the losses ℓt(⟨w, s ⊙ xt⟩) for all weight vectors w and all subsets s ∈ G, under some mild conditions on the distribution of Wt. That the construction of such unbiased estimates is possible, despite some feature values being unobserved, is due to the special algebraic structure of the prediction and loss functions. A similar construction has appeared in a different context, e.g., in the paper of Cesa-Bianchi et al. (2010).

The construction works as follows. Define the d×d matrix Xt by (Xt)i,j = xt,i·xt,j (1 ≤ i, j ≤ d). Expanding the loss of the prediction ŷt = ⟨w, xt⟩, we get that the loss of using w ∈ W is

ℓt(w) := ℓt(⟨w, xt⟩) = w⊤Xt·w − 2·w⊤xt·yt + yt²,

where, with a slight abuse of notation, we have introduced the loss function ℓt : W → R (we’ll keep abusing the use of ℓt by overloading it based on the type of its argument). Clearly, it suffices to construct unbiased estimates of ℓt(w) for any w ∈ W.

We will use a discretization approach. Therefore, assume that we are given a finite subset W′ of W, to be constructed later. In each step t, our algorithm chooses a random weight vector Wt from a probability distribution supported on W′. Let pt(w) be the probability of selecting the weight vector w ∈ W′. For 1 ≤ i ≤ d, let

qt(i) = Σ_{w∈W′: i∈s(w)} pt(w)

be the probability that s(Wt) contains i, while for 1 ≤ i, j ≤ d, let

qt(i, j) = Σ_{w∈W′: i,j∈s(w)} pt(w)

be the probability that both i, j ∈ s(Wt).² Assume that pt(·) is constructed such that qt(i, j) > 0 holds for any time t and all indices 1 ≤ i, j ≤ d.
This also implies that qt(i) > 0 for all 1 ≤ i ≤ d. Define the vector x̃t ∈ Rd and the matrix X̃t ∈ R^{d×d} by

x̃t,i = 1{i ∈ s(Wt)}·xt,i/qt(i), (X̃t)i,j = 1{i, j ∈ s(Wt)}·xt,i·xt,j/qt(i, j). (5)

It can readily be verified that E[x̃t | pt] = xt and E[X̃t | pt] = Xt. Further, notice that both x̃t and X̃t can be computed based on the information available at the end of round t, i.e., based on the feature values (xt,i)_{i∈s(Wt)}. Now, define the estimate of the prediction loss

ℓ̃t(w) = w⊤X̃t·w − 2·w⊤x̃t·yt + yt². (6)

Note that yt can readily be computed from ℓt(·), which is available to the algorithm (equivalently, we may assume that the algorithm observed yt). Due to the linearity of expectation, we have E[ℓ̃t(w) | pt] = ℓt(w). That is, ℓ̃t(w) provides an unbiased estimate of the loss ℓt(w) for any w ∈ W. Hence, by adding a feature-cost term, we get ℓ̃t(w) + ⟨s(w), c⟩ as an estimate of the loss that the learner would have suffered at round t had he chosen the weight vector w.

²Note that, following our earlier suggestion, we view the d-dimensional binary vectors as subsets of {1, . . . , d}.

Algorithm 1 The LQDEXP3 Algorithm
Parameters: Real numbers η > 0, 0 < γ ≤ 1, a finite set W′ ⊆ W, a distribution µ over W′, horizon T > 0.
Initialization: u1(w) = 1 (w ∈ W′).
for t = 1 to T do
  Draw Wt ∈ W′ from the probability mass function pt(w) = (1 − γ)·ut(w)/Ut + γ·µ(w), w ∈ W′.
  Obtain the feature values (xt,i)_{i∈s(Wt)}.
  Predict ŷt = Σ_{i∈s(Wt)} Wt,i·xt,i.
  for w ∈ W′ do
    Update the weights using (6) for the definition of ℓ̃t(w):
    ut+1(w) = ut(w)·e^{−η(ℓ̃t(w) + ⟨c, s(w)⟩)}, w ∈ W′.
  end for
end for

2.2.1 LQDEXP3 – A Discretization-based Algorithm

Next we show that the standard EXP3 algorithm, applied to a discretization of the weight space W, achieves O(√(dT)) regret. The algorithm, called LQDEXP3, is given as Algorithm 1. In the name of the algorithm, LQ stands for linear prediction with quadratic losses and D denotes discretization.
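A quick Monte Carlo sanity check of the unbiasedness claims for the estimates in (5): the importance-weighted vector and matrix average to xt and Xt, provided every pair (i, j) has qt(i, j) > 0. The subsets, probabilities, and sample size below are arbitrary choices of ours:

```python
import numpy as np

# Check E[x_tilde] = x and E[X_tilde] = x x^T by simulation.
rng = np.random.default_rng(0)
d = 3
x = np.array([0.5, -1.0, 2.0])
subsets = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})]
p = np.array([0.3, 0.3, 0.4])        # p_t over the candidate feature subsets
q1 = np.array([sum(p[k] for k, s in enumerate(subsets) if i in s) for i in range(d)])
q2 = np.array([[sum(p[k] for k, s in enumerate(subsets) if i in s and j in s)
                for j in range(d)] for i in range(d)])
n = 20000
est_x, est_X = np.zeros(d), np.zeros((d, d))
for _ in range(n):
    s = subsets[rng.choice(len(subsets), p=p)]
    ind = np.array([float(i in s) for i in range(d)])
    est_x += ind * x / q1                                   # x_tilde of eq. (5)
    est_X += np.outer(ind, ind) * np.outer(x, x) / q2       # X_tilde of eq. (5)
est_x /= n
est_X /= n
```

Plugging these averages into (6) then gives an unbiased estimate of the quadratic loss for every weight vector, observed subset or not.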
Note that if the exploration distribution µ in the algorithm is such that, for any 1 ≤ i, j ≤ d, Σ_{w∈W′: i,j∈s(w)} µ(w) > 0, then qt(i, j) > 0 is guaranteed for all time steps. Using the notation ylim = wlim·xlim and EG = max_{s∈G} sup_{w∈W: ∥w∥*=1} ∥w ⊙ s∥*, we can state the following regret bound for the algorithm.

Theorem 2.4. Let wlim, xlim > 0, c ∈ [0, ∞)^d be given, W ⊆ B∥·∥*(0, wlim) convex, X ⊆ B∥·∥(0, xlim), and fix T ≥ 1. Then, there exists a parameter setting for LQDEXP3 such that the following holds: Let RT denote the regret of LQDEXP3 against the best linear predictor from Lin(W, X) when LQDEXP3 is used in an online free-label probing problem defined by the sequence ((xt, yt))_{1≤t≤T} (∥xt∥ ≤ xlim, |yt| ≤ ylim, 1 ≤ t ≤ T), quadratic losses ℓt(y) = (y − yt)², and feature costs given by the vector c. Then,

E[RT] ≤ C·√( T·d·(4ylim² + ∥c∥1)·(wlim²·xlim² + 2·ylim·wlim·xlim + 4ylim² + ∥c∥1)·ln(EG·T) ),

where C > 0 is a universal constant (i.e., the value of C does not depend on the problem parameters). The actual parameter setting to be used with the algorithm is constructed in the proof.

The computational complexity of LQDEXP3 is exponential in the dimension d due to the discretization step, hence it quickly becomes impractical when the number of features is large. On the other hand, one can easily modify the algorithm to run without discretization by replacing EXP3 with its continuous version. The resulting algorithm enjoys essentially the same regret bound, and can be implemented efficiently whenever efficient sampling is possible from the resulting distribution. This approach seems appealing since, at first sight, it seems to involve sampling from truncated Gaussian distributions, which can be done efficiently. However, it is easy to see that when the sampling probabilities of some feature are small, the estimated loss will not be convex, as X̃t may not be positive semi-definite, and therefore the resulting distributions will not always be truncated Gaussians.
Finding an efficient sampling procedure for such situations is an interesting open problem. The optimality of LQDEXP3 follows from the next lower bound on the regret.

Theorem 2.5. Let $d > 0$, and consider the online free-label probing problem with linear predictors, where $\mathcal{W} = \{w \in \mathbb{R}^d \mid \|w\|_\infty \le w_{\lim}\}$ and $\mathcal{X} = \{x \in \mathbb{R}^d \mid \|x\|_\infty \le 1\}$. Assume, for all $t \ge 1$, that the loss functions are of the form $\ell_t(w) = (w^\top x_t - y_t)^2 + \langle s(w), c \rangle$, where $|y_t| \le 1$ and $c = \frac{1}{2}\,\mathbf{1} \in \mathbb{R}^d$. Then, for any prediction algorithm and any $T \ge \frac{4d}{8 \ln(4/3)}$, there exists a sequence $((x_t, y_t))_{1 \le t \le T} \in (\mathcal{X} \times [-1, 1])^T$ such that the regret of the algorithm is bounded from below as
$$\mathbb{E}[R_T] \ge \frac{\sqrt{2} - 1}{\sqrt{32 \ln(4/3)}}\, \sqrt{T d}\,.$$

3 Non-Free-Label Probing

If $c_{d+1} > 0$, the learner has to pay for observing the true label. This scenario is very similar to the well-known label-efficient prediction setting in online learning (Cesa-Bianchi et al., 2006). In fact, the latter problem is a special case of our problem, which immediately gives that the regret of any algorithm is at least of order $T^{2/3}$. It turns out that if one observes the (costly) label in a given round, then observing all the features in that round as well does not affect the regret rate. The resulting "revealing action algorithm", given as Algorithm 3 in the Appendix, achieves the following regret bound for finite expert classes.

Lemma 3.1. Given any non-free-label online probing problem with finitely many experts, Algorithm 3 with appropriately set parameters achieves
$$\mathbb{E}[R_T] \le C \max\!\left( T^{2/3} \big(\ell_{\max}^2 \|c\|_1 \ln |\mathcal{F}|\big)^{1/3},\; \ell_{\max} \sqrt{T \ln |\mathcal{F}|} \right)$$
for some constant $C > 0$.

Using the fact that, in the linear prediction case, approximately $(2 T L W X + 1)^d$ experts are needed to approximate each expert in $\mathcal{W}$ with precision $\alpha = \frac{1}{LT}$ in the worst-case empirical covering, we obtain the following theorem (note, however, that the complexity of the algorithm is again exponential in the dimension $d$, as we need to keep a weight for each expert): Theorem 3.1.
Given any non-free-label online probing problem with linear predictor experts and a Lipschitz prediction loss function with constant $L$, Algorithm 3 with appropriately set parameters, running on a sufficiently finely discretized predictor set, achieves
$$\mathbb{E}[R_T] \le C \max\!\left( T^{2/3} \big[\ell_{\max}^2 \|c\|_1\, d \ln(T L W X)\big]^{1/3},\; \ell_{\max} \sqrt{T d \ln(T L W X)} \right)$$
for some universal constant $C > 0$.

That Algorithm 3 is essentially optimal for linear prediction with quadratic losses is a consequence of the following almost matching lower bound.

Theorem 3.2. There exists a constant $C$ such that, for any non-free-label probing problem with linear predictors, quadratic loss, and $c_j > \frac{1}{d} \sum_{i=1}^d c_i - \frac{1}{2d}$ for every $j = 1, \ldots, d$, the expected regret of any algorithm can be lower bounded as
$$\mathbb{E}[R_T] \ge C\, (c_{d+1}\, d)^{1/3}\, T^{2/3}\,.$$

4 Conclusions

We introduced a new problem called online probing. In this problem, the learner has the option of choosing the subset of features he wants to observe, as well as the option of observing the true label, but has to pay for this information. This setup produced new challenges in solving the online problem. We showed that when the labels are free, it is possible to devise algorithms with the optimal regret rate $\Theta(\sqrt{T})$ (up to logarithmic factors), while in the non-free-label case only $\Theta(T^{2/3})$ is achievable. We gave algorithms that achieve the optimal regret rate (up to logarithmic factors) when the number of experts is finite or in the case of linear prediction. Unfortunately, either the bounds or the computational complexity of the corresponding algorithms are exponential in the problem dimension, and it is an open problem whether these disadvantages can be eliminated simultaneously.

Acknowledgements

The authors thank Yevgeny Seldin for finding a bug in an earlier version of the paper. This work was supported in part by DARPA grant MSEE FA8650-11-1-7156, the Alberta Innovates Technology Futures, AICML, and the Natural Sciences and Engineering Research Council (NSERC) of Canada.

References

Agarwal, A.
and Duchi, J. C. (2011). Distributed delayed stochastic optimization. In Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. C. N., and Weinberger, K. Q., editors, NIPS, pages 873–881.
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002). The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77.
Bartók, G. (2012). The role of information in online learning. PhD thesis, Department of Computing Science, University of Alberta.
Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press.
Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. (2005). Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51(6):2152–2162.
Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. (2006). Regret minimization under partial monitoring. Mathematics of Operations Research, 31(3):562–580.
Cesa-Bianchi, N., Shalev-Shwartz, S., and Shamir, O. (2010). Efficient learning with partially observed attributes. CoRR, abs/1004.4421.
Dekel, O., Shamir, O., and Xiao, L. (2010). Learning to classify with missing and corrupted features. Machine Learning, 81(2):149–178.
Joulani, P., György, A., and Szepesvári, C. (2013). Online learning under delayed feedback. In 30th International Conference on Machine Learning, Atlanta, GA, USA.
Kapoor, A. and Greiner, R. (2005). Learning and classifying under hard budgets. In European Conference on Machine Learning (ECML), pages 166–173.
Lizotte, D., Madani, O., and Greiner, R. (2003). Budgeted learning of naive-Bayes classifiers. In Conference on Uncertainty in Artificial Intelligence (UAI).
Mannor, S. and Shamir, O. (2011). From bandits to experts: On the value of side-observations. CoRR, abs/1106.2436.
Mesterharm, C. (2005). On-line learning with delayed label feedback. In Proceedings of the 16th International Conference on Algorithmic Learning Theory, ALT'05, pages 399–413, Berlin, Heidelberg. Springer-Verlag.
Rostamizadeh, A., Agarwal, A., and Bartlett, P. L. (2011). Learning with missing features. In UAI, pages 635–642.
Settles, B. (2009). Active learning literature survey. Technical report.
Weinberger, M. J. and Ordentlich, E. (2006). On delayed prediction of individual sequences. IEEE Transactions on Information Theory, 48(7):1959–1976.
An Approximate, Efficient Solver for LP Rounding

Srikrishna Sridhar¹, Victor Bittorf¹, Ji Liu¹, Ce Zhang¹, Christopher Ré², Stephen J. Wright¹
¹Computer Sciences Department, University of Wisconsin-Madison, Madison, WI 53706
²Computer Science Department, Stanford University, Stanford, CA 94305
{srikris,vbittorf,ji-liu,czhang,swright}@cs.wisc.edu, chrismre@cs.stanford.edu

Abstract

Many problems in machine learning can be solved by rounding the solution of an appropriate linear program (LP). This paper shows that we can recover solutions of comparable quality by rounding an approximate LP solution instead of the exact one. These approximate LP solutions can be computed efficiently by applying a parallel stochastic-coordinate-descent method to a quadratic-penalty formulation of the LP. We derive worst-case runtime and solution quality guarantees of this scheme using novel perturbation and convergence analysis. Our experiments demonstrate that on such combinatorial problems as vertex cover, independent set, and multiway-cut, our approximate rounding scheme is up to an order of magnitude faster than Cplex (a commercial LP solver) while producing solutions of similar quality.

1 Introduction

A host of machine-learning problems can be solved effectively as approximations of such NP-hard combinatorial problems as set cover, set packing, and multiway-cuts [8, 11, 16, 22]. A popular scheme for solving such problems is called LP rounding [22, chs. 12-26], which consists of the following three-step process: (1) construct an integer (binary) linear program (IP) formulation of a given problem; (2) relax the IP to an LP by replacing the constraints $x \in \{0, 1\}$ by $x \in [0, 1]$; and (3) round an optimal solution of the LP to create a feasible solution for the original IP problem. LP rounding is known to work well on a range of hard problems, and comes with theoretical guarantees for runtime and solution quality.
The Achilles' heel of LP rounding is that it requires solutions of LPs of possibly extreme scale. Despite decades of work on LP solvers, including impressive advances during the 1990s, commercial codes such as Cplex or Gurobi may not be capable of handling problems of the required scale. In this work, we propose an approximate LP solver suitable for use in the LP-rounding approach on very large problems. Our intuition is that since we ultimately round the LP solution to obtain an approximate solution of the combinatorial problem, a crude solution of the LP may suffice. Hence, an approach that can find approximate solutions of large LPs quickly may be suitable, even if it is inefficient for obtaining highly accurate solutions. This paper focuses on the theoretical and algorithmic aspects of finding approximate solutions of an LP for use in LP-rounding schemes. Our three main technical contributions are as follows. First, we show that one can approximately solve large LPs by forming convex quadratic programming (QP) approximations and then applying stochastic coordinate descent to these approximations. Second, we derive a novel convergence analysis of our method, based on Renegar's perturbation theory for linear programming [17]. Finally, we derive bounds on the runtime as well as the worst-case approximation ratio of our rounding schemes. Our experiments demonstrate that our approach, called Thetis, produces solutions of comparable quality to state-of-the-art approaches on such tasks as noun-phrase chunking and entity resolution. We also demonstrate, on three different classes of combinatorial problems, that Thetis can outperform Cplex (a state-of-the-art commercial LP and IP solver) by up to an order of magnitude in runtime, while achieving comparable solution quality.

Related Work. Recently, there has been some focus on the connection between LP relaxations and maximum a posteriori (MAP) estimation problems [16, 19]. Ravikumar et al. [16] proposed rounding schemes for iterative LP solvers to facilitate MAP inference in graphical models. In contrast, we propose to use stochastic descent methods to solve a QP relaxation; this allows us to take advantage of recent results on asynchronous parallel methods of this type [12, 14]. Recently, Makari et al. [13] proposed an intriguing parallel scheme for packing and covering problems. In contrast, our results apply to more general LP relaxations, including set-partitioning problems like multiway-cut. Additionally, the runtime of our algorithm is less sensitive to the approximation error: for an error $\varepsilon$, the bound on the runtime of the algorithm in [13] grows as $\varepsilon^{-5}$, while the bound on our algorithm's runtime grows as $\varepsilon^{-2}$.

2 Background: Approximating NP-hard Problems with LP Rounding

In this section, we review the theory of LP-rounding based approximation schemes for NP-hard combinatorial problems. We use the vertex cover problem as a running example, as it is the simplest nontrivial setting that exposes the main ideas of this approach.

Preliminaries. For a minimization problem $\Phi$, an algorithm ALG is an $\alpha$-factor approximation for $\Phi$, for some $\alpha > 1$, if any solution produced by ALG has an objective value at most $\alpha$ times the value of an optimal (lowest-cost) solution. For some problems, such as vertex cover, there is a constant-factor approximation scheme ($\alpha = 2$). For others, such as set cover, the value of $\alpha$ can be as large as $O(\log N)$, where $N$ is the number of sets. An LP-rounding based approximation scheme for the problem $\Phi$ first constructs an IP formulation of $\Phi$, which we denote by "P". This step is typically easy to perform, but the IP formulation P is, in theory, as hard to solve as the original problem $\Phi$. In this work, we consider applications in which the only integer variables in the IP formulation are binary variables $x \in \{0, 1\}$.
The second step of LP rounding is a relax/solve step: we relax the constraints in P to obtain a linear program LP(P), replacing the binary variables with continuous variables in $[0, 1]$, and then solve LP(P). The third step is to round the solution of LP(P) to an integer solution that is feasible for P, thus yielding a candidate solution to the original problem $\Phi$. The focus of this paper is on the relax/solve step, which is usually the computational bottleneck in an LP-rounding based approximation scheme.

Example: An Oblivious-Rounding Scheme for Vertex Cover. Let $G(V, E)$ denote a graph with vertex set $V$ and undirected edges $E \subseteq (V \times V)$. Let $c_v$ denote a nonnegative cost associated with each vertex $v \in V$. A vertex cover of a graph is a subset of $V$ such that each edge $e \in E$ is incident to at least one vertex in this set. The minimum-cost vertex cover is the one that minimizes the sum of the costs $c_v$ over the vertices $v$ belonging to the cover. Let us review the "construct," "relax/solve," and "round" phases of an LP-rounding based approximation scheme applied to vertex cover. In the "construct" phase, we introduce binary variables $x_v \in \{0, 1\}$ for all $v \in V$, where $x_v$ is set to 1 if the vertex $v \in V$ is selected in the vertex cover and 0 otherwise. The IP formulation is

$$\min_x \sum_{v \in V} c_v x_v \quad \text{s.t.} \quad x_u + x_v \ge 1 \text{ for } (u, v) \in E, \qquad x_v \in \{0, 1\} \text{ for } v \in V. \tag{1}$$

Relaxation yields the LP

$$\min_x \sum_{v \in V} c_v x_v \quad \text{s.t.} \quad x_u + x_v \ge 1 \text{ for } (u, v) \in E, \qquad x_v \in [0, 1] \text{ for } v \in V. \tag{2}$$

A feasible solution of the LP relaxation (2) is called a "fractional solution" of the original problem. In the "round" phase, we generate a valid vertex cover by simply choosing the vertices $v \in V$ whose fractional value satisfies $x_v \ge \frac{1}{2}$. It is easy to see that the vertex cover generated by such a rounding scheme costs no more than twice the cost of the fractional solution.
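The construct/relax/round pipeline for vertex cover can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the graph and costs are made up, and an off-the-shelf solver (`scipy.optimize.linprog`) stands in for the LP solver discussed later.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: a 4-cycle with unit vertex costs (hypothetical data).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
c = np.ones(n)

# LP relaxation (2): min c^T x  s.t.  x_u + x_v >= 1 per edge,  x in [0,1]^n.
# linprog expects A_ub @ x <= b_ub, so encode each constraint as -(x_u + x_v) <= -1.
A_ub = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    A_ub[k, u] = A_ub[k, v] = -1.0
res = linprog(c, A_ub=A_ub, b_ub=-np.ones(len(edges)), bounds=[(0, 1)] * n)
assert res.status == 0

# "Round" phase: keep every vertex whose fractional value is at least 1/2
# (with a small tolerance for solver noise).
cover = [v for v in range(n) if res.x[v] >= 0.5 - 1e-6]

assert all(u in cover or v in cover for u, v in edges)   # a valid vertex cover
assert c[cover].sum() <= 2 * res.fun + 1e-6              # at most 2x the LP cost
```

The final assertion is exactly the 2-factor guarantee: since $x_u + x_v \ge 1$ on every edge, at least one endpoint has $x_v \ge \frac12$, and rounding at most doubles each selected vertex's contribution to the objective.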
If the fractional solution chosen for rounding is an optimal solution of (2), then we arrive at a 2-factor approximation scheme for vertex cover. We note here an important property: the rounding algorithm can generate feasible integral solutions while being oblivious of whether the fractional solution is an optimal solution of (2). We formally define the notion of an oblivious rounding scheme as follows.

Definition 1. For a minimization problem $\Phi$ with an IP formulation P whose LP relaxation is denoted by LP(P), a $\gamma$-factor 'oblivious' rounding scheme converts any feasible point $x_f$ of LP(P) to an integral solution $x_I$ of P with cost at most $\gamma$ times the cost of LP(P) at $x_f$.

Problem Family | Approximation Factor | Machine Learning Applications
Set covering | $\log(N)$ [20] | Classification [3], multi-object tracking [24]
Set packing | $es + o(s)$ [1] | MAP inference [19], natural language [9]
Multiway-cut | $3/2 - 1/k$ [5] | Computer vision [4], entity resolution [10]
Graphical models | Heuristic | Semantic role labeling [18], clustering [21]

Figure 1: LP-rounding schemes considered in this paper. The parameter $N$ refers to the number of sets, $s$ refers to $s$-column-sparse matrices, and $k$ refers to the number of terminals; $e$ is Euler's constant.

Given a $\gamma$-factor oblivious algorithm ALG for the problem $\Phi$, one can construct a $\gamma$-factor approximation algorithm for $\Phi$ by using ALG to round an optimal fractional solution of LP(P). When we have an approximate solution of LP(P) that is feasible for this problem, rounding can produce an $\alpha$-factor approximation algorithm for $\Phi$ for a factor $\alpha$ slightly larger than $\gamma$, where the difference between $\alpha$ and $\gamma$ accounts for the inexactness of the approximate LP solution. Many LP-rounding schemes (including the scheme for vertex cover discussed in Section 2) are oblivious. We implemented the oblivious LP-rounding algorithms of Figure 1 and report experimental results in Section 4.
3 Main Results

In this section, we describe how we can solve LP relaxations approximately, in less time than traditional LP solvers, while still preserving the formal guarantees of rounding schemes. We first define a notion of approximate LP solution and discuss its consequences for oblivious rounding schemes. We show that one can use a regularized quadratic-penalty formulation to compute these approximate LP solutions. We then describe a stochastic-coordinate-descent (SCD) algorithm for obtaining approximate solutions of this QP, and mention enhancements of this approach, specifically an asynchronous parallel implementation and the use of an augmented Lagrangian framework. Our analysis yields worst-case bounds on the solution quality and runtime of the entire LP-rounding scheme.

3.1 Approximating LP Solutions

Consider the LP in standard form,
$$\min\; c^\top x \quad \text{s.t.} \quad Ax = b, \; x \ge 0, \tag{3}$$
where $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, and $A \in \mathbb{R}^{m \times n}$, and its corresponding dual,
$$\max\; b^\top u \quad \text{s.t.} \quad c - A^\top u \ge 0. \tag{4}$$
Let $x^*$ denote an optimal primal solution of (3). An approximate LP solution $\hat{x}$ that we use for LP rounding may be infeasible and may have an objective value different from the optimum $c^\top x^*$. We quantify the inexactness of an approximate LP solution as follows.

Definition 2. A point $\hat{x}$ is an $(\epsilon, \delta)$-approximate solution of the LP (3) if $\hat{x} \ge 0$ and there exist constants $\epsilon > 0$ and $\delta > 0$ such that $\|A\hat{x} - b\|_\infty \le \epsilon$ and $|c^\top \hat{x} - c^\top x^*| \le \delta\, |c^\top x^*|$.

Using Definitions 1 and 2, it is easy to see that a $\gamma$-factor oblivious rounding scheme can round a $(0, \delta)$-approximate solution to produce a feasible integral solution whose cost is at most $\gamma(1 + \delta)$ times the cost of an optimal solution of P. The factor $(1 + \delta)$ arises because the rounding algorithm does not have access to an optimal fractional solution. To cope with the infeasibility, we convert an $(\epsilon, \delta)$-approximate solution to a $(0, \hat{\delta})$-approximate solution, where $\hat{\delta}$ is not too large. For vertex cover (2), we prove the following result in Appendix C.
(Here, $\Pi_{[0,1]^n}(\cdot)$ denotes projection onto the unit hypercube in $\mathbb{R}^n$.)

Lemma 3. Let $\hat{x}$ be an $(\varepsilon, \delta)$-approximate solution of the linear program (2) with $\varepsilon \in [0, 1)$. Then $\tilde{x} = \Pi_{[0,1]^n}\!\big((1 - \varepsilon)^{-1} \hat{x}\big)$ is a $(0,\, \delta (1 - \varepsilon)^{-1})$-approximate solution.

Since $\tilde{x}$ is a feasible solution of (2), the oblivious rounding scheme of Section 2 yields a $2(1 + \delta(1 - \varepsilon)^{-1})$-factor approximation algorithm. In general, constructing $(0, \hat{\delta})$-approximate from $(\epsilon, \delta)$-approximate solutions requires reasoning about the structure of the particular LP. In Appendix C, we establish statements analogous to Lemma 3 for packing, covering, and multiway-cut problems.

3.2 Quadratic Programming Approximation to the LP

We consider the following regularized quadratic-penalty approximation to the LP (3), parameterized by a positive constant $\beta$, whose solution is denoted by $x(\beta)$:
$$x(\beta) := \arg\min_{x \ge 0}\; f_\beta(x) := c^\top x - \bar{u}^\top (Ax - b) + \frac{\beta}{2} \|Ax - b\|^2 + \frac{1}{2\beta} \|x - \bar{x}\|^2, \tag{5}$$
where $\bar{u} \in \mathbb{R}^m$ and $\bar{x} \in \mathbb{R}^n$ are arbitrary vectors. (In practice, $\bar{u}$ and $\bar{x}$ may be chosen as approximations of the dual and primal solutions of (3), or simply set to zero.) The quality of the approximation (5) depends on the conditioning of the underlying linear program (3), a concept studied by Renegar [17]. Denoting the data of problem (3) by $d := (A, b, c)$, we consider perturbations $\Delta d := (\Delta A, \Delta b, \Delta c)$ such that the linear program defined by $d + \Delta d$ is primal infeasible. The primal condition number $\delta_P$ is the infimum of the ratios $\|\Delta d\| / \|d\|$ over all such vectors $\Delta d$. The dual condition number $\delta_D$ is defined analogously. (Clearly both $\delta_P$ and $\delta_D$ lie in the range $[0, 1]$; smaller values indicate poorer conditioning.) We have the following result, which is proven in the supplementary material.

Theorem 4. Suppose that $\delta_P$ and $\delta_D$ are both positive, and let $(x^*, u^*)$ be any primal-dual solution pair for (3), (4). If we define $C_* := \max(\|x^* - \bar{x}\|, \|u^* - \bar{u}\|)$, then the unique solution $x(\beta)$ of (5) satisfies
$$\|A x(\beta) - b\| \le \frac{1}{\beta}\, (1 + \sqrt{2})\, C_*, \qquad \|x(\beta) - x^*\| \le \sqrt{6}\, C_*.$$
If, in addition, the parameter $\beta \ge \frac{10\, C_*}{\|d\| \min(\delta_P, \delta_D)}$, then we have
$$|c^\top x^* - c^\top x(\beta)| \le \frac{1}{\beta} \left[ \frac{25\, C_*}{2\, \delta_P \delta_D} + 6\, C_*^2 + \sqrt{6}\, \|\bar{x}\|\, C_* \right].$$

In practice, we solve (5) approximately, using an algorithm whose complexity depends on the threshold $\bar{\epsilon}$ to within which the objective is accurate. That is, we seek $\hat{x}$ such that
$$\beta^{-1} \|\hat{x} - x(\beta)\|^2 \le f_\beta(\hat{x}) - f_\beta(x(\beta)) \le \bar{\epsilon},$$
where the left-hand inequality follows from the fact that $f_\beta$ is strongly convex with modulus $\beta^{-1}$. If we define
$$\bar{\epsilon} := \frac{C_{20}^2}{\beta^3}, \qquad C_{20} := \frac{25\, C_*}{2\, \|d\|\, \delta_P \delta_D}, \tag{6}$$
then by combining some elementary inequalities with the results of Theorem 4, we obtain the bounds
$$|c^\top \hat{x} - c^\top x^*| \le \frac{1}{\beta} \left[ \frac{25\, C_*}{\delta_P \delta_D} + 6\, C_*^2 + \sqrt{6}\, \|\bar{x}\|\, C_* \right], \qquad \|A \hat{x} - b\| \le \frac{1}{\beta} \left[ (1 + \sqrt{2})\, C_* + \frac{25\, C_*}{2\, \delta_P \delta_D} \right].$$
The following result is almost an immediate consequence.

Theorem 5. Suppose that $\delta_P$ and $\delta_D$ are both positive, and let $(x^*, u^*)$ be any primal-dual optimal pair. Suppose that $C_*$ is defined as in Theorem 4. Then, for any given positive pair $(\epsilon, \delta)$, the point $\hat{x}$ satisfies the inequalities of Definition 2 provided that $\beta$ satisfies the following three lower bounds:
$$\beta \ge \frac{10\, C_*}{\|d\| \min(\delta_P, \delta_D)}, \qquad \beta \ge \frac{1}{\delta\, |c^\top x^*|} \left[ \frac{25\, C_*}{\delta_P \delta_D} + 6\, C_*^2 + \sqrt{6}\, \|\bar{x}\|\, C_* \right], \qquad \beta \ge \frac{1}{\epsilon} \left[ (1 + \sqrt{2})\, C_* + \frac{25\, C_*}{2\, \delta_P \delta_D} \right].$$

For an instance of vertex cover with $n$ nodes and $m$ edges, we can show that $\delta_P^{-1} = O(n^{1/2} (m + n)^{1/2})$ and $\delta_D^{-1} = O((m + n)^{1/2})$ (see Appendix D). The values $\bar{x} = \mathbf{1}$ and $\bar{u} = \mathbf{0}$ yield $C_* \le \sqrt{m}$. We therefore obtain $\beta = O\big(m^{1/2} n^{1/2} (m + n)\, (\min\{\epsilon, \delta |c^\top x^*|\})^{-1}\big)$.

Algorithm 1 SCD method for (5)
1: Choose $x_0 \in \mathbb{R}^n$; $j \leftarrow 0$
2: loop
3:   Choose $i(j) \in \{1, 2, \ldots, n\}$ randomly with equal probability;
4:   Define $x_{j+1}$ from $x_j$ by setting $[x_{j+1}]_{i(j)} \leftarrow \max\big(0,\; [x_j]_{i(j)} - (1/L_{\max})\, [\nabla f_\beta(x_j)]_{i(j)}\big)$, leaving the other components unchanged;
5:   $j \leftarrow j + 1$;
6: end loop

3.3 Solving the QP Approximation: Coordinate Descent

We propose the use of a stochastic coordinate descent (SCD) algorithm [12] to solve (5). Each step of SCD chooses a component $i \in \{1, 2, \ldots, n\}$ and takes a step in the $i$th component of $x$ along the partial gradient of (5) with respect to this component, projecting if necessary to retain nonnegativity. This simple procedure depends on the following constant $L_{\max}$, which bounds the diagonals of the Hessian of the objective of (5):
$$L_{\max} = \beta \Big( \max_{i = 1, 2, \ldots, n} A_{:i}^\top A_{:i} \Big) + \beta^{-1}, \tag{7}$$
where $A_{:i}$ denotes the $i$th column of $A$. Algorithm 1 describes the SCD method. Convergence results for Algorithm 1 can be obtained from [12]; in the result below, $\mathbb{E}(\cdot)$ denotes expectation over all the random variables $i(j)$ indicating the update indices chosen at each iteration. We need the following quantities:
$$l := \frac{1}{\beta}, \qquad R := \sup_{j = 1, 2, \ldots} \|x_j - x(\beta)\|, \tag{8}$$
where $x_j$ denotes the $j$th iterate of the SCD algorithm. (Note that $R$ bounds the maximum distance that the iterates travel from the solution $x(\beta)$ of (5).)

Theorem 6. For Algorithm 1 we have
$$\mathbb{E}\|x_j - x(\beta)\|^2 + \frac{2}{L_{\max}}\, \mathbb{E}\big(f_\beta(x_j) - f_\beta^*\big) \le \left( 1 - \frac{l}{n (l + L_{\max})} \right)^{\!j} \left( R^2 + \frac{2}{L_{\max}} \big(f_\beta(x_0) - f_\beta^*\big) \right),$$
where $f_\beta^* := f_\beta(x(\beta))$. We obtain high-probability convergence of $f_\beta(x_j)$ to $f_\beta^*$ in the following sense: for any $\eta \in (0, 1)$ and any small $\bar{\epsilon}$, we have $P\big(f_\beta(x_j) - f_\beta^* < \bar{\epsilon}\big) \ge 1 - \eta$ provided that
$$j \ge \frac{n (l + L_{\max})}{l}\, \log\!\left[ \frac{L_{\max}}{2 \eta \bar{\epsilon}} \left( R^2 + \frac{2}{L_{\max}} \big(f_\beta(x_0) - f_\beta^*\big) \right) \right].$$

Worst-Case Complexity Bounds. We now combine the analyses of Sections 3.2 and 3.3 to derive a worst-case complexity bound for our approximate LP solver. Supposing that the columns of $A$ have norm $O(1)$, we have from (7) and (8) that $l = \beta^{-1}$ and $L_{\max} = O(\beta)$. Theorem 6 indicates that we require $O(n \beta^2)$ iterations to solve (5) (modulo a log term). For the values of $\beta$ described in Section 3.2, this translates to a complexity estimate of $O(m^3 n^2 / \epsilon^2)$. In order to obtain the desired accuracy in terms of the feasibility and objective value of the LP (captured by $\epsilon$), we need to solve the QP to within the different, tighter tolerance $\bar{\epsilon}$ introduced in (6). Both tolerances are related to the choice of the penalty parameter $\beta$ in the QP.
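The quadratic-penalty objective (5) and the SCD iteration of Algorithm 1 can be sketched as follows. This is an illustrative toy, not the authors' C++ implementation: the standard-form LP data, the penalty parameter $\beta$, and the iteration count are all made up, with $\bar{u} = 0$ and $\bar{x} = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy standard-form LP (hypothetical data): min c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 3.0, 1.0])          # optimal solution x* = (1, 0, 1), value 2

beta = 20.0
u_bar = np.zeros(2)
x_bar = np.zeros(3)
# L_max from (7): beta * max_i ||A_:i||^2 + 1/beta.
Lmax = beta * np.max(np.sum(A * A, axis=0)) + 1.0 / beta

def grad(x):
    # Gradient of f_beta(x) = c^T x - u_bar^T(Ax - b)
    #                         + (beta/2)||Ax - b||^2 + ||x - x_bar||^2 / (2 beta).
    return c - A.T @ u_bar + beta * A.T @ (A @ x - b) + (x - x_bar) / beta

# Algorithm 1: randomized coordinate steps with projection onto x >= 0.
x = np.zeros(3)
for _ in range(20000):
    i = rng.integers(3)
    x[i] = max(0.0, x[i] - grad(x)[i] / Lmax)

# The QP minimizer x(beta) is only approximately LP-feasible and LP-optimal,
# with errors on the order of 1/beta (cf. Theorem 4).
assert np.linalg.norm(A @ x - b, np.inf) < 0.1
assert abs(c @ x - 2.0) < 0.3
```

Increasing $\beta$ tightens both the feasibility and objective gaps, at the price of a larger $L_{\max}$ and hence smaller coordinate steps, which is exactly the trade-off quantified by Theorems 4 to 6.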
Ignoring here the dependence on the dimensions $m$ and $n$, we note the relationships $\beta \sim \epsilon^{-1}$ (from Theorem 5) and $\bar{\epsilon} \sim \beta^{-3} \sim \epsilon^{3}$ (from (6)). Expressing all quantities in terms of $\epsilon$ and using Theorem 6, we see an iteration complexity of $\epsilon^{-2}$ for SCD (ignoring log terms). The linear convergence rate of SCD is instrumental to this favorable value. By contrast, standard variants of stochastic-gradient descent (SGD) applied to the QP yield poorer complexity: for diminishing-step or constant-step variants of SGD, we see a complexity of $\epsilon^{-7}$, while for robust SGD, we see $\epsilon^{-10}$. (Besides the inverse dependence on $\bar{\epsilon}$ or its square in the analysis of these methods, there is a contribution of order $\epsilon^{-2}$ from the conditioning of the QP.)

3.4 Enhancements

We mention two important enhancements that improve the efficiency of the approach outlined above: an asynchronous parallel implementation of Algorithm 1, and the use of an augmented Lagrangian framework rather than the "one-shot" approximation by the QP in (5).

Task | Formulation | PV | NNZ | Thetis (P / R / F1 / Rank) | Gibbs Sampling (P / R / F1 / Rank)
CoNLL | Skip-chain CRF | 25M | 51M | .87 / .90 / .89 / 10/13 | .86 / .90 / .88 / 10/13
TAC-KBP | Factor graph | 62K | 115K | .79 / .79 / .79 / 6/17 | .80 / .80 / .80 / 6/17

Figure 2: Solution quality of our LP-rounding approach on two tasks. PV is the number of primal variables and NNZ is the number of non-zeros in the constraint matrix of the LP in standard form. The rank indicates where we would have been placed had we participated in the competition.

Asynchronous Parallel SCD. An asynchronous parallel version of Algorithm 1, described in [12], is suitable for execution on multicore, shared-memory architectures. Each core, executing a single thread, has access to the complete vector $x$. Each thread essentially runs its own version of Algorithm 1 independently of the others, choosing and updating one component $i(j)$ of $x$ on each iteration.
Between the time a thread reads $x$ and performs its update, $x$ will usually have been updated by several other threads. Provided that the number of threads is not too large (according to a criterion that depends on $n$ and on the diagonal dominance properties of the Hessian matrix) and the step size is chosen appropriately, the convergence rate is similar to the serial case, and near-linear speedup is observed.

Augmented Lagrangian Framework. It is well known (see for example [2, 15]) that the quadratic-penalty approach can be extended to an augmented Lagrangian framework, in which a sequence of problems of the form (5) is solved, with the primal and dual solution estimates $\bar{x}$ and $\bar{u}$ (and possibly the penalty parameter $\beta$) updated between iterations. Such a "proximal method of multipliers" for LP was described in [23]. We omit a discussion of the convergence properties of this algorithm here, but note that the quality of the solution depends on the values of $\bar{x}$, $\bar{u}$, and $\beta$ at the last iteration before convergence is declared. By Theorem 5, the constant $C_*$ is smaller when $\bar{x}$ and $\bar{u}$ are close to the primal and dual solution sets, which improves the approximation and reduces the need to increase $\beta$ to a large value to obtain an approximate solution of acceptable accuracy.

4 Experiments

Our experiments address two main questions: (1) Is our approximate LP-rounding scheme useful in graph analysis tasks that arise in machine learning? and (2) How does our approach compare to a state-of-the-art commercial solver? We give favorable answers to both questions.

4.1 Is Our Approximate LP-Rounding Scheme Useful in Graph Analysis Tasks?

LP formulations have been used to solve MAP inference problems on graphical models [16], but general-purpose LP solvers have rarely been used, for reasons of scalability. We demonstrate that the rounded solutions obtained using Thetis are of comparable quality to those obtained with state-of-the-art systems.
We perform experiments on two different tasks: entity linking and text chunking. For each task, we produce a factor graph [9], which consists of a set of random variables and a set of factors describing the correlations between the random variables. We then run MAP inference on the factor graph using the LP formulation in [9] and compare the quality of the solutions obtained by Thetis with a Gibbs sampling-based approach [26]. We follow the LP-rounding algorithm in [16] to solve the MAP estimation problem. For entity linking, we use the TAC-KBP 2010 benchmark (http://nlp.cs.qc.cuny.edu/kbp/2010/); the input graphical model has 12K boolean random variables and 17K factors. For text chunking, we use the CoNLL 2000 shared task (http://www.cnts.ua.ac.be/conll2000/chunking/); the factor graph contains 47K categorical random variables (with domain size 23) and 100K factors. We use the training sets provided by TAC-KBP 2010 and CoNLL 2000, respectively. We evaluate the quality of both approaches using the official evaluation scripts and evaluation data sets provided by each challenge. Figure 2 reports the three relevant quality metrics: precision (P), recall (R), and F1 score. Figure 2 demonstrates that our algorithm produces solutions of quality comparable with state-of-the-art approaches on these graph analysis tasks.

4.2 How Does Our Proposed Approach Compare to a State-of-the-Art Commercial Solver?

We conducted numerical experiments on three different combinatorial problems that commonly arise in graph analysis tasks in machine learning: vertex cover, independent set, and multiway cut. For each problem, we compared the performance of our LP solver against the LP and IP solvers of Cplex (v12.5) (denoted Cplex-LP and Cplex-IP, respectively).
The two main goals of this experiment are: (1) to compare the quality of the integral solutions obtained using LP rounding with the integral solutions from Cplex-IP, and (2) to compare the wall-clock times required by Thetis and Cplex-LP to solve the LPs for the purpose of LP rounding.

Datasets. Our tasks are based on two families of graphs. The first family of instances (frb59-26-1 to frb59-26-5) was obtained from Bhoslib (Benchmark with Hidden Optimum Solutions); these are considered difficult problems [25]. The instances in this family are similar; the first is reported in the figures of this section, while the remainder appear in Appendix E. The second family of instances consists of social networking graphs obtained from the Stanford Network Analysis Platform (SNAP).

System Setup. Thetis was implemented using a combination of C++ (for Algorithm 1) and Matlab (for the augmented Lagrangian framework). Our implementation of the augmented Lagrangian framework was based on [6]. All experiments were run on a machine with four Intel Xeon E7-4450 processors (40 cores at 2 GHz) and 256 GB of RAM, running Linux 3.8.4 with a 15-disk RAID0. Cplex used 32 of the 40 cores available in the machine, and for consistency, our implementation was also restricted to 32 cores. Cplex implements presolve procedures that detect redundancy and substitute and eliminate variables to obtain equivalent, smaller LPs. Since the aim of this experiment is to compare the algorithms used to solve LPs, we ran both Cplex-LP and Thetis on the reduced LPs generated by the presolve procedure of Cplex-LP. Both Cplex-LP and Thetis were run to a tolerance of $\epsilon = 0.1$. Additional experiments with Cplex-LP run with its default tolerance options are reported in Appendix E. We used the barrier optimizer when running Cplex-LP. All codes were given a time limit of 3600 seconds, excluding the time taken for preprocessing as well as the runtime of the rounding algorithms that generate integral solutions from fractional solutions.

Tasks.
We solved the vertex cover problem using the approximation algorithm described in Section 2. We solved the maximum independent set problem using a variant of the $es + o(s)$-factor approximation in [1], where $s$ is the maximum degree of a node in the graph (see Appendix C for details). For the multiway-cut problem (with $k = 3$) we used the $(3/2 - 1/k)$-factor approximation algorithm described in [22]. The details of the transformation from approximate infeasible solutions to feasible solutions are provided in Appendix C. Since the rounding schemes for maximum independent set and multiway-cut are randomized, we chose the best feasible integral solution from 10 repetitions.

Instance | VC (min): PV, NNZ, S, Q | MC (min): PV, NNZ, S, Q | MIS (max): PV, NNZ, S, Q
frb59-26-1 | 0.12, 0.37, 2.8, 1.04 | 0.75, 3.02, 53.3, 1.01 | 0.12, 0.38, 5.3, 0.36
Amazon | 0.39, 1.17, 8.4, 1.23 | 5.89, 23.2, -, 0.42 | 0.39, 1.17, 7.4, 0.82
DBLP | 0.37, 1.13, 8.3, 1.25 | 6.61, 26.1, -, 0.33 | 0.37, 1.13, 8.5, 0.88
Google+ | 0.71, 2.14, 9.0, 1.21 | 9.24, 36.8, -, 0.83 | 0.71, 2.14, 10.2, 0.82

Figure 3: Summary of wall-clock speedup (relative to Cplex-LP) and solution quality (relative to Cplex-IP) of Thetis on three graph analysis problems: vertex cover (VC) and multiway-cut (MC) are minimization problems, maximum independent set (MIS) is a maximization problem. Each code is run with a time limit of one hour and parallelized over 32 cores, with '-' indicating that the code reached the time limit. PV is the number of primal variables and NNZ the number of nonzeros in the constraint matrix of the LP in standard form (both in millions). S is the speedup, defined as the time taken by Cplex-LP divided by the time taken by Thetis. Q is the ratio of the solution objective obtained by Thetis to that reported by Cplex-IP. For minimization problems (VC and MC), lower Q is better; for maximization problems (MIS), higher Q is better. For MC, a value of Q < 1 indicates that Thetis found a better solution than Cplex-IP found within the time limit.

Results. The results are summarized in Figure 3, with additional details in Figure 4.
We discuss the results for the vertex cover problem. On the Bhoslib instances, the integral solutions from Thetis were within 4% of the documented optimal solutions. In comparison, Cplex-IP produced integral solutions that were within 1% of the documented optimal solutions, but required an hour for each of the instances.

(Benchmark sources: Bhoslib: http://www.nlsde.buaa.edu.cn/~kexu/benchmarks/graph-benchmarks.htm; SNAP: http://snap.stanford.edu/)

VC (min)     Cplex-IP                    Cplex-LP                      Thetis
Instance     t (s)  BFS       Gap (%)   t (s)  LP        RSol        t (s)  LP        RSol
frb59-26-1   -      1475      0.67      2.48   767       1534        0.88   959.7     1532
Amazon       85.5   1.60x10^5 -         24.8   1.50x10^5 2.04x10^5   2.97   1.50x10^5 1.97x10^5
DBLP         22.1   1.65x10^5 -         22.3   1.42x10^5 2.08x10^5   2.70   1.42x10^5 2.06x10^5
Google+      -      1.06x10^5 0.01      40.1   1.00x10^5 1.31x10^5   4.47   1.00x10^5 1.27x10^5

MC (min)     Cplex-IP                    Cplex-LP                      Thetis
Instance     t (s)  BFS       Gap (%)   t (s)  LP        RSol        t (s)  LP        RSol
frb59-26-1   72.3   346       -         312.2  346       346         5.86   352.3     349
Amazon       -      12        NA        -      -         -           55.8   7.28      5
DBLP         -      15        NA        -      -         -           63.8   11.7      5
Google+      -      6         NA        -      -         -           109.9  5.84      5

MIS (max)    Cplex-IP                    Cplex-LP                      Thetis
Instance     t (s)  BFS       Gap (%)   t (s)  LP        RSol        t (s)  LP        RSol
frb59-26-1   -      50        18.0      4.65   767       15          0.88   447.7     18
Amazon       35.4   1.75x10^5 -         23.0   1.85x10^5 1.56x10^5   3.09   1.73x10^5 1.43x10^5
DBLP         17.3   1.52x10^5 -         23.2   1.75x10^5 1.41x10^5   2.72   1.66x10^5 1.34x10^5
Google+      -      1.06x10^5 -         44.5   1.11x10^5 9.39x10^4   4.37   1.00x10^5 8.67x10^4

Figure 4: Wall-clock time and quality of fractional and integral solutions for three graph analysis problems using Thetis, Cplex-IP and Cplex-LP. Each code was given a time limit of one hour, with '-' in the time column indicating a timeout. BFS is the objective value of the best integer feasible solution found by Cplex-IP. The gap is defined as (BFS − BB)/BFS, where BB is the best known solution bound found by Cplex-IP within the time limit. A gap of '-' indicates that the problem was solved to within 0.01% accuracy, and NA indicates that Cplex-IP was unable to find a valid solution bound. LP is the objective value of the LP solution, and RSol is the objective value of the rounded solution.
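The derived columns in Figures 3 and 4 are simple ratios of the reported raw numbers and can be reproduced directly. The helper names below are ours; the input values are taken from the tables (vertex cover on Amazon):

```python
def speedup(t_cplex_lp, t_thetis):
    # S in Figure 3: Cplex-LP wall-clock time divided by Thetis wall-clock time
    return t_cplex_lp / t_thetis

def quality(obj_thetis, obj_cplex_ip):
    # Q in Figure 3: Thetis' rounded objective over the Cplex-IP objective (BFS)
    return obj_thetis / obj_cplex_ip

def gap(bfs, bb):
    # Figure 4: relative gap between Cplex-IP's incumbent (BFS) and its best bound (BB)
    return (bfs - bb) / bfs

# Vertex cover on Amazon: Cplex-LP took 24.8 s, Thetis 2.97 s, and Thetis'
# rounded solution was 1.97e5 against a Cplex-IP BFS of 1.60e5.
print(round(speedup(24.8, 2.97), 1))      # the S = 8.4 entry in Figure 3
print(round(quality(1.97e5, 1.60e5), 2))  # the Q = 1.23 entry in Figure 3
```

Since VC is a minimization problem, Q = 1.23 means the rounded Thetis solution costs 23% more than the Cplex-IP incumbent, while being obtained roughly 8x faster than LP-rounding through Cplex-LP.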
Although the LP solutions obtained by Thetis were less accurate than those obtained by Cplex-LP, the rounded solutions from Thetis and Cplex-LP are almost exactly the same. In summary, the LP-rounding approaches using Thetis and Cplex-LP obtain integral solutions of quality comparable to Cplex-IP, while Thetis is about three times faster than Cplex-LP. We observed a similar trend on the large social networking graphs: we were able to recover integral solutions of quality comparable to Cplex-IP, but seven to eight times faster than using LP-rounding with Cplex-LP. We make two additional observations. First, the difference between the optimal fractional and integral solutions for these instances is much smaller than for frb59-26-1. Second, we observed unpredictable performance of Cplex-IP on large instances. Notably, Cplex-IP was able to find the optimal solution for the Amazon and DBLP instances, but timed out on Google+, which is of comparable size. On some instances, Cplex-IP outperformed even Cplex-LP in wall-clock time, due to specialized presolve strategies.

5 Conclusion

We described Thetis, an LP-rounding scheme based on an approximate solver for LP relaxations of combinatorial problems. We derived worst-case runtime and solution quality bounds for our scheme, and demonstrated that our approach was faster than an alternative based on a state-of-the-art LP solver, while producing rounded solutions of comparable quality.

Acknowledgements

SS is generously supported by ONR award N000141310129. JL is generously supported in part by NSF awards DMS-0914524 and DMS-1216318 and ONR award N000141310129. CR's work on this project is generously supported by an NSF CAREER award under IIS-1353606, NSF award CCF-1356918, ONR awards N000141210041 and N000141310129, a Sloan Research Fellowship, and gifts from Oracle and Google.
SJW is generously supported in part by NSF awards DMS-0914524 and DMS-1216318, ONR award N000141310129, DOE award DE-SC0002283, and Subcontract 3F-30222 from Argonne National Laboratory. Any recommendations, findings or opinions expressed in this work are those of the authors and do not necessarily reflect the views of any of the above sponsors.

References

[1] Nikhil Bansal, Nitish Korula, Viswanath Nagarajan, and Aravind Srinivasan. Solving packing integer programs via randomized rounding with alterations. Theory of Computing, 8(1):533–565, 2012.
[2] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[3] Jacob Bien and Robert Tibshirani. Classification by set cover: The prototype vector machine. arXiv preprint arXiv:0908.2284, 2009.
[4] Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26:1124–1137, 2004.
[5] Gruia Călinescu, Howard Karloff, and Yuval Rabani. An improved approximation algorithm for multiway cut. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 48–52. ACM, 1998.
[6] Jonathan Eckstein and Paulo J. S. Silva. A practical relative error criterion for augmented Lagrangians. Mathematical Programming, pages 1–30, 2010.
[7] Dorit S. Hochbaum. Approximation algorithms for the set covering and vertex cover problems. SIAM Journal on Computing, 11(3):555–556, 1982.
[8] V. K. Koval and M. I. Schlesinger. Two-dimensional programming in image analysis problems. USSR Academy of Science, Automatics and Telemechanics, 8:149–168, 1976.
[9] Frank R. Kschischang, Brendan J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[10] Taesung Lee, Zhongyuan Wang, Haixun Wang, and Seung-won Hwang. Web scale entity resolution using relational evidence. Technical report, Microsoft Research, 2011.
[11] Victor Lempitsky and Yuri Boykov. Global optimization for shape fitting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), pages 1–8. IEEE, 2007.
[12] Ji Liu, Stephen J. Wright, Christopher Ré, and Victor Bittorf. An asynchronous parallel stochastic coordinate descent algorithm. Technical report, University of Wisconsin-Madison, October 2013.
[13] F. Manshadi, Baruch Awerbuch, Rainer Gemulla, Rohit Khandekar, Julián Mestre, and Mauro Sozio. A distributed algorithm for large-scale generalized matching. Proceedings of the VLDB Endowment, 2013.
[14] Feng Niu, Benjamin Recht, Christopher Ré, and Stephen J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. arXiv preprint arXiv:1106.5730, 2011.
[15] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2006.
[16] Pradeep Ravikumar, Alekh Agarwal, and Martin J. Wainwright. Message-passing for graph-structured linear programs: Proximal methods and rounding schemes. The Journal of Machine Learning Research, 11:1043–1080, 2010.
[17] J. Renegar. Some perturbation theory for linear programming. Mathematical Programming, Series A, 65:73–92, 1994.
[18] Dan Roth and Wen-tau Yih. Integer linear programming inference for conditional random fields. In Proceedings of the 22nd International Conference on Machine Learning, pages 736–743. ACM, 2005.
[19] Sujay Sanghavi, Dmitry Malioutov, and Alan S. Willsky. Linear programming analysis of loopy belief propagation for weighted matching. In Advances in Neural Information Processing Systems, pages 1273–1280, 2007.
[20] Aravind Srinivasan. Improved approximation guarantees for packing and covering integer programs. SIAM Journal on Computing, 29(2):648–670, 1999.
[21] Jurgen Van Gael and Xiaojin Zhu. Correlation clustering for crosslingual link detection. In IJCAI, pages 1744–1749, 2007.
[22] Vijay V. Vazirani. Approximation Algorithms. Springer, 2004.
[23] Stephen J. Wright. Implementing proximal point methods for linear programming. Journal of Optimization Theory and Applications, 65(3):531–554, 1990.
[24] Zheng Wu, Ashwin Thangali, Stan Sclaroff, and Margrit Betke. Coupling detection and data association for multiple object tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1948–1955. IEEE, 2012.
[25] Ke Xu and Wei Li. Many hard examples in exact phase transitions. Theoretical Computer Science, 355(3):291–302, 2006.
[26] Ce Zhang and Christopher Ré. Towards high-throughput Gibbs sampling at scale: A study across storage managers. In SIGMOD Proceedings, 2013.
Regression-tree Tuning in a Streaming Setting

Samory Kpotufe∗
Toyota Technological Institute at Chicago†
firstname@ttic.edu

Francesco Orabona∗
Toyota Technological Institute at Chicago
francesco@orabona.com

Abstract

We consider the problem of maintaining the data-structures of a partition-based regression procedure in a setting where the training data arrives sequentially over time. We prove that it is possible to maintain such a structure in time O(log n) at any time step n while achieving a nearly-optimal regression rate of Õ(n^{-2/(2+d)}) in terms of the unknown metric dimension d. Finally we prove a new regression lower-bound which is independent of a given data size, and hence is more appropriate for the streaming setting.

1 Introduction

Traditional nonparametric regressors such as kernel or k-NN estimators can be expensive to evaluate given modern large training data sizes. It is therefore common to resort to cheaper methods such as tree-based regression, which precompute the regression estimates over a partition of the data space [7]. Given a future query x, computing the estimate f_n(x) simply consists of finding the closest cell of the partition by traversing an appropriate tree structure and returning the precomputed estimate. The partition and precomputed estimates depend on the training data and are usually maintained in batch mode. We are interested in maintaining such a partition and estimates in a real-world setting where the training data arrives sequentially over time. Our constraints are fast updates at every time step, while maintaining a near-minimax regression error rate at any point in time. The error rate of tree-based regression is well known to depend on the size of the partition's cells. We will call this size the binwidth. The minimax-optimal binwidth ε_n is known to be of the form O(n^{-1/(2+d)}), assuming training data of size n from a metric space of unknown dimension d, and an unknown Lipschitz target function f.
This setting of ε_n would then yield a minimax error rate of O(n^{-2/(2+d)}). Thus, the dimension d is the most important problem variable entering the rate (and the tuning of ε_n), while other problem variables, such as the Lipschitz properties of f, are less crucial in comparison. The main focus of this work is therefore that of adapting to the unknown d while maintaining fast partition estimates in a streaming setting. A first idea would be to start with an initial dimension-estimation phase (where the regression estimates are suboptimal), and to use the estimated dimension for subsequent data in a following phase, which leaves only the problem of maintaining partition estimates over time. However, while this sounds reasonable, it is generally unclear when to confidently stop such an initial phase, since this would depend on the unknown d and the distribution of the data. Our solution is to interleave dimension estimation with regression updates as the data arrives sequentially. However, the algorithm never relies on the estimated dimensions, and views them rather as guesses d_i. Even if d_i ≠ d, the guess is kept as long as it is not hurtful to regression performance. The guess d_i is discarded once we detect that it hurts the regression; a new d_{i+1} is then estimated and a new Phase i+1 is started. The decision to discard d_i relies on monitoring quantities that play into the tradeoff between regression variance and bias; more precisely, we monitor the size of the partition and the partition's binwidth ε_n. We note that the idea can be applied to other forms of regression where other quantities control the regression variance and bias (see the longer version of the paper).

∗ SK and FO contributed equally to this paper.
† Other affiliation: Max Planck Institute for Intelligent Systems, Germany.

1.1 Technical Overview of Results

We assume that training data (x_i, Y_i) is sampled sequentially over time, x_i belongs to a general metric space X of unknown dimension d, and Y_i is real.
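Before the formal setup, the dependence of these rates on d is easy to make concrete. The snippet below (our own, for intuition only) tabulates the minimax binwidth ε_n = n^{-1/(2+d)} and the error rate n^{-2/(2+d)} discussed above, showing how higher intrinsic dimension slows the rate; it also checks that the rate is exactly the squared binwidth, the usual bias-variance balance:

```python
n = 100_000
for d in (1, 2, 5, 10):
    eps_n = n ** (-1.0 / (2 + d))   # minimax-optimal binwidth
    rate = n ** (-2.0 / (2 + d))    # resulting minimax error rate
    # the rate equals the squared binwidth: squared bias ~ eps^2 balances variance
    assert abs(rate - eps_n ** 2) < 1e-12
    print(f"d={d:2d}  eps_n={eps_n:.4f}  rate={rate:.6f}")
```

For n = 100,000 the rate degrades from roughly n^{-2/3} at d = 1 to n^{-1/6} at d = 10, which is why adapting to the unknown d, rather than to the less influential Lipschitz constant, is the focus of the paper.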
The exact setup is given in Section 2. The algorithm (presented in Section 2.3) maintains regression estimates for all training samples x^n ≜ {x_t}_{t=1}^n arriving over time, while constantly updating a partition of the data and the partition binwidth. At any time t = n, all updates are provably of order log n, with constants depending on the unknown dimension d of X. At time t = n, the estimate for a query point x is given by the precomputed estimate for the closest point to x in x^n, which can be found fast using an off-the-shelf similarity search structure, such as those of [2, 10]. We prove that the L2 error of the algorithm is Õ(n^{-2/(2+d)}), nearly optimal in terms of the unknown dimension d of the metric X. Finally, we prove a new lower-bound for regression on a generic metric X, where the worst-case distribution is the same as n increases. Note that traditional lower-bounds for the offline setting derive a different worst-case distribution for each sample size n. Thus, our lower-bound is more appropriate to the streaming setting, where the data arrives over time from the same distribution. The results are discussed in more technical detail in Section 3.

1.2 Related Work

Although various interesting heuristics have been proposed for maintaining tree-based learners in streaming settings (see e.g. [1, 5, 11, 15]), the problem has not received much theoretical attention. This is however an important problem given the growing size of modern datasets, and given that in many modern applications, training data is actually acquired sequentially over time and incremental updates have to be efficient (see e.g. robotics [12, 16], finance [8]). The most closely related theoretical work is that of [6], which treats the problem of tuning a local polynomial regressor where the training data is acquired over time. Their setting however is that of a Euclidean space where d is known (ambient Euclidean dimension). [6] is thus concerned with maintaining a minimax error rate w.r.t.
the known dimension d, while efficiently tuning the regression bandwidth. A possible alternative to the method analyzed here is to employ some form of cross-validation, or even online solutions based on mixtures of experts [3], by keeping track of different partitions, each corresponding to some setting of the binwidth ε_n. This is however likely expensive to maintain in practice if good prediction performance is desired.

2 Preliminaries

2.1 Notions of metric dimension

We consider the following notion of dimension, which extends traditional notions of dimension (e.g. Euclidean dimension and manifold dimension) to general metric spaces [4]. We assume throughout, w.l.o.g., that the space X has diameter at most 1 under a metric ρ.

Definition 1. The metric measure space (X, µ, ρ) has metric-measure dimension d if there exist Č_µ, Ĉ_µ such that for all ε > 0 and x ∈ X, Č_µ ε^d ≤ µ(B(x, ε)) ≤ Ĉ_µ ε^d.

The assumption of finite metric-measure dimension ensures that the measure µ has mass everywhere on the space. This assumption is a generalization (to a metric space) of common assumptions where the measure has an upper- and lower-bounded density on a compact Euclidean space; it is however more general in that it does not require the measure µ to have a density (relative to any reference measure). The metric-measure dimension implies the following other notion of metric dimension.

Definition 2. The metric (X, ρ) has metric dimension d if there exists Ĉ_ρ such that, for all ε > 0, X has an ε-cover of size at most Ĉ_ρ ε^{-d}.

The relation between the two notions of dimension is stated in the following lemma of [9], which allows us to use either notion as needed.

Lemma 1 ([9]). If (X, µ, ρ) has metric-measure dimension d, then there exist Č_ρ, Ĉ_ρ such that, for all ε > 0, any ball B(x, r) centered on (X, ρ) has an εr-cover of size in [Č_ρ ε^{-d}, Ĉ_ρ ε^{-d}].

2.2 Problem Setup

We receive data pairs (x_1, Y_1), (x_2, Y_2), . . . sequentially, i.i.d.
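As a brief aside on Definition 1 above: the ball-mass scaling µ(B(x, ε)) ≍ ε^d means d can be read off as the slope of log-mass against log-radius. The toy check below is entirely ours and is not part of the paper's algorithm (which instead guesses d_i from the center count, as described in Section 2.3); it is shown only to make the definition concrete, recovering d ≈ 2 for the uniform measure on the unit square:

```python
import math
import random

random.seed(0)
# Uniform measure on the unit square: metric-measure dimension d = 2.
pts = [(random.random(), random.random()) for _ in range(20000)]
center = (0.5, 0.5)

# Empirical mass of balls B(center, r) for a few radii (all balls fit in the square).
samples = []
for r in (0.1, 0.15, 0.2, 0.3, 0.4):
    mass = sum(1 for p in pts if math.dist(p, center) <= r) / len(pts)
    samples.append((math.log(r), math.log(mass)))

# Least-squares slope of log mu(B(x, r)) versus log r estimates d.
mean_x = sum(a for a, _ in samples) / len(samples)
mean_y = sum(b for _, b in samples) / len(samples)
slope = (sum((a - mean_x) * (b - mean_y) for a, b in samples)
         / sum((a - mean_x) ** 2 for a, _ in samples))
print(f"estimated dimension: {slope:.2f}")  # should be close to 2
```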
The input x_t belongs to a metric measure space (X, ρ, µ) of diameter at most 1 and of metric-measure dimension d. The output Y_t belongs to a subset of R of bounded diameter ∆_Y, and satisfies Y_t = f(x_t) + η(x_t). The noise η(x_t) has mean 0. The unknown function f is assumed to be λ-Lipschitz w.r.t. ρ for an unknown parameter λ, that is, for all x, x′ ∈ X, |f(x) − f(x′)| ≤ λ ρ(x, x′).

L2 error: Our main performance result bounds the excess L2 risk

E_{x^n, Y^n} ||f_n − f||^2_{2,µ} ≜ E_{x^n, Y^n} E_X |f_n(X) − f(X)|^2.

We will often also be interested in the average error on the training sample: recall that at any time t, an estimate f_t(x_s) of f is produced for every x_s ∈ x^t. The average error on x^n at t = n is denoted

E_n E_{Y^n} |f_n(X) − f(X)|^2 ≜ (1/n) Σ_{t=1}^n E_{Y^n} |f_n(x_t) − f(x_t)|^2.

2.3 Algorithm

The procedure (Algorithm 1) works by partitioning the data into small regions of size roughly ε_t/2 at any time t, and computing the regression estimate at the center of each region. All points falling in the same region (identified by a center point) are assigned the same regression estimate: the average of the Y values of all points in the region. The procedure works in phases, where each Phase i corresponds to a guess d_i of the metric dimension d. Where ε_t might have been set to t^{-1/(2+d)} if we knew d, we set it to t_i^{-1/(2+d_i)}, where t_i is the current time step within Phase i. We ensure that in each phase our guess d_i does not hurt the variance-bias tradeoff of the estimates: this is done by monitoring the size of the partition (|X_i| in the algorithm), which controls the variance (see the analysis in Section 4), relative to the binwidth ε_t, which controls the bias. Whenever |X_i| is too large relative to ε_t, the variance of the procedure is likely too large, so we start a new phase with a new guess of d_i. Since the algorithm maintains at any time n an estimate f_n(x_t) for all x_t ∈ x^n, for any query point x ∈ X we simply return f_n(x) = f_n(x_t), where x_t is the closest point to x in x^n.
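The procedure just described can be sketched in plain Python. This is a simplified reading of Algorithm 1 with our own assumptions marked in comments: we use an exact linear-scan nearest-neighbor search in place of the 2-approximate O(log n) structure of [10], and we reset the in-phase counter t_i when a phase ends, which is one plausible reading of the pseudocode.

```python
import math

class IncrementalTreeRegressor:
    """Sketch of Algorithm 1 (incremental tree-based regressor)."""

    def __init__(self, c_hat=1.0, dist=lambda a, b: abs(a - b)):
        self.c_hat = c_hat      # upper bound on the cover constant C-hat
        self.dist = dist        # metric rho on the input space
        self.phase = 1          # i
        self.d_i = 1            # current dimension guess
        self.t_i = 0            # time steps within the current phase
        self.centers = []       # mutable records: [center, sum_y, count]

    def _nearest(self, x):
        # Assumption: exact nearest neighbor by linear scan; the paper uses a
        # 2-approximate online search structure to get O(log n) updates.
        return min(self.centers, key=lambda c: self.dist(x, c[0]), default=None)

    def update(self, x, y):
        self.t_i += 1
        eps_t = self.t_i ** (-1.0 / (2 + self.d_i))
        c = self._nearest(x)
        if c is not None and self.dist(x, c[0]) <= eps_t:
            c[1] += y           # running average of Y values at this center
            c[2] += 1
        else:
            if len(self.centers) + 1 > self.c_hat * 4 ** self.d_i * eps_t ** (-self.d_i):
                # too many centers for the current guess: start Phase i+1
                self.d_i = math.ceil(
                    math.log((len(self.centers) + 1) / self.c_hat) / math.log(4 / eps_t))
                self.phase += 1
                self.t_i = 1    # assumption: the in-phase clock restarts
            self.centers.append([x, y, 1])

    def predict(self, x):
        c = self._nearest(x)    # stands in for finding the closest stored point
        return c[1] / c[2]
```

On a one-dimensional stream this sketch keeps d_i = 1 and maintains an ε_t-net of centers; prediction at a query returns the running average stored at the nearest center, matching the paper's rule f_n(x) = f_n(x_t).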
Despite having to adaptively tune to the unknown d, the main computation at each time step consists of just a 2-approximate nearest neighbor search for the closest center. These searches can be done fast, in time O(log n), by employing the off-the-shelf online search procedure of [10]. This is emphasized in Lemma 2 below. Finally, the algorithm employs a constant Ĉ which is assumed to upper-bound the constant Ĉ_ρ in Definition 2. This is a minor assumption since Ĉ_ρ is generally taken to be small, e.g. 1, in the machine learning literature, and is exactly quantifiable for various metrics [4, 10].

3 Discussion of Results

3.1 Time complexity

The time complexity of updates is emphasized in the following lemma.

Lemma 2. Suppose (X, ρ, µ) has metric dimension d. Then there exists C depending on d such that all computations of the algorithm at any time t = n can be done in time C log n.

Algorithm 1 Incremental tree-based regressor.
 1: Initialize: i = 1, d_i = 1, t_i = 0, centers X_i = ∅
 2: for t = 1, 2, . . . , T do
 3:   Receive (x_t, y_t)
 4:   t_i ← t_i + 1   // counts the time steps within Phase i
 5:   ε_t ← t_i^{-1/(2+d_i)}
 6:   Set x_s ∈ X_i to the 2-approximate nearest neighbor of x_t
 7:   if ρ(x_t, x_s) ≤ ε_t then
 8:     Assign x_t to x_s
 9:     f_n(x_s) ← update average Y for center x_s with y_t
10:     For every r ≤ t assigned to x_s, f_n(x_r) = f_n(x_s)
11:   else
12:     if |X_i| + 1 > Ĉ 4^{d_i} ε_t^{-d_i} then
13:       // Start of Phase i + 1
14:       d_{i+1} ← ⌈log((|X_i| + 1)/Ĉ) / log(4/ε_t)⌉
15:       i ← i + 1
16:     end if
17:     Add x_t as a new center in X_i
18:   end if
19: end for

Figure 1: As ε_t varies over time, a ball around a center x_s can eventually contain both points assigned to x_s and points not assigned to it, and can even contain other centers. This results in a complex partitioning of the data.

Proof. The main computation at time n consists of finding the 2-approximate nearest neighbor of x_n in X_i and updating the data structure for the nearest neighbor search. These centers are all at least ε_n/2 far apart.
Using the results of [10], this can be done online in time O(log(1/ε_n) + log log(1/ε_n)).

3.2 Convergence rates

The main theorem below bounds the L2 error of the algorithm at any given point in time. The main difficulty lies in the fact that the data is partitioned in a complicated way due to the ever-changing binwidth ε_t: every ball around a center can eventually contain both points assigned to the center and points not assigned to the center, and in fact can contain other centers (see Figure 1). This makes it hard to get a handle on the number of points assigned to a single center x_t (contributing to the variance of f_n(x_t)) and the distance between points assigned to the same center (contributing to the bias). This is not the case in classical analyses of tree-based regression, since there the data partitioning is usually clearly defined. The problem is handled by first looking at the average error over points in x^n, which is less difficult.

Theorem 1. Suppose the space (X, µ, ρ) has metric-measure dimension d. For any x ∈ X, define f_n(x) = f_n(x_t), where x_t is the closest point to x in x^n. Then at any time t = n, we have for some C independent of n,

E_{x^n, Y^n} ||f_n − f||^2_{2,µ} ≤ C (d log n) sup_{x^n} E_n E_{Y^n} |f_n(X) − f(X)|^2 + C λ^2 (d log n / n)^{2/d} + ∆_Y^2 / n.

If the algorithm parameter Ĉ ≥ Ĉ_ρ, then for some constant C′ independent of n, we have at any time n that

sup_{x^n} E_n E_{Y^n} |f_n(X) − f(X)|^2 ≤ C′ (∆_Y^2 + λ^2) n^{-2/(2+d)}.

The convergence rate is therefore Õ(n^{-2/(2+d)}), nearly optimal in terms of the unknown d (up to a log n factor). In the simulation of Figure 2 (Left) we compare our procedure to tree-based regressors with a fixed setting of d and of ε_t = t^{-1/(2+d)}. We use the classic rotating-Teapot dataset, where the target output values are the cosines of the rotation angles. Our method attains the same performance as the one with the right fixed setting of d. As alluded to above, the proof of Theorem 1 proceeds by first bounding the average error E_n E_{Y^n} |f_n(X) − f(X)|^2 on the sample x^n.
Interestingly, the analysis of the average error is of a worst-case nature, where the data x_1, x_2, . . . is allowed to arrive adversarially (see the analysis of Section 4.1).

[Figure 2 plots: normalized RMSE on the test set vs. number of training samples; Left: Teapot dataset, comparing the incremental tree-based method against fixed d = 1, 4, 8; Right: Synthetic dataset (d = 5, D = 100, first 1000 samples with d = 1), comparing against fixed d = 1, 5, 10.]

Figure 2: Simulation results on the Teapot (Left) and Synthetic (Right) datasets. Ĉ is set to 1; the sizes of the test sets are 1800 and 12500, respectively.

This shows a sense in which the algorithm is robust to bad dimension estimates: the average error is of the optimal form in terms of d, even though the data could trick us into picking a bad guess d_i of d. Thus the insights behind the algorithm are perhaps of wider applicability to problems of a more adversarial nature. This is shown empirically in Figure 2 (Right), where we created a synthetic dataset with d = 5, while the first 1000 samples are from a line in X. An algorithm that estimates the dimension in a first phase would end up using the suboptimal setting d = 1, while our algorithm robustly updates its estimate over time. As mentioned in the introduction, the same insights can be applied to other forms of regression in a streaming setting. We show in the longer version of the paper a procedure more akin to kernel regression, which employs other quantities (appropriate to the method) to control the bias-variance tradeoff while deciding on keeping or rejecting the guess d_i.

3.3 Lower-bounds

We have to produce a distribution for which the problem is hard, and which matches our streaming setting as well as possible. With this in mind, our lower-bound result differs from existing nonparametric lower-bounds by combining two important aspects.
First, the lower-bound holds for any given metric measure space (X, ρ, µ) with finite measure-dimension: we constrain the worst-case distribution to have the marginal µ that nature happens to choose. In contrast, lower-bounds in the literature commonly pick a suitable marginal on the space X [13, 14]. Second, the worst-case distribution does not depend on the sample size, as is common in the literature. Instead, we show that the rate of n^{-2/(2+d)} holds for infinitely many n for a distribution fixed beforehand. This is more appropriate for the online setting, where the data is generated over time from a fixed distribution. The lower-bound result of [9] also holds for a given measure space (X, µ, ρ), but the worst-case distribution depends on the sample size. A lower-bound of [7] holds for infinitely many n, but is restricted to distributions on a Euclidean cube, and thus benefits from the regularity of the cube. Our result combines some technical intuition from these two results in a way described in Section 4.3. We need the following definition.

Definition 3. Given a metric-measure space (X, µ, ρ), we let D_{µ,λ} denote the set of distributions on X, Y, with X ∈ X and Y ∈ R, where the marginal on X is µ, and where the function f(x) = E[Y | X = x] is λ-Lipschitz w.r.t. ρ.

Theorem 2. Let (X, µ, ρ) be a metric space of diameter 1 and metric-measure dimension d. For any n ∈ N, define r_n^2 = (λ^2 n)^{-2/(2+d)}. Pick any positive sequence {β_n}_{n∈N} with β_n = o(λ^2 r_n^2). There exists an indexing subsequence {n_t}_{t∈N}, n_{t+1} > n_t, such that

inf_{{f_n}} sup_{D_{µ,λ}} lim_{t→∞} E_{X^{n_t}, Y^{n_t}} ||f_{n_t} − f||^2_{2,µ} / β_{n_t} = ∞,

where the infimum is taken over all sequences {f_n} of estimators f_n : X^n, Y^n ↦ L_{2,µ}.

By the statement of the theorem, if we pick any rate β_n faster than n^{-2/(2+d)}, then there exists a distribution with marginal µ for which E ||f_n − f||^2 / β_n either diverges or tends to ∞.

4 Analysis

We first analyze the average error of the algorithm over the data x^n in Section 4.1.
The proof of the main theorem follows in Section 4.2.

4.1 Bounds on Average Error

We start by bounding the average error on the sample x^n at time n, that is, we upper-bound E_n E_{Y^n} |f_n(X) − f(X)|^2. The proof idea of the upper bound is the following. We bound the error in a given phase (Lemma 4), then combine these errors over all phases to obtain the final bounds (Corollary 1). To bound the error in a phase, we decompose the error in terms of squared bias and variance. The main technical difficulty is that the bandwidth ε_t varies over time and thus points at varying distances are included in each estimate. Nevertheless, if n_i is the number of steps in Phase i, we will see that both the average squared bias and the variance can be bounded by roughly n_i^{-2/(2+d_i)}. Finally, the algorithm ensures that the guess d_i is always an under-estimate of the unknown dimension d, as proven in Lemma 3 (proof in the supplemental appendix), so integrating over all phases yields an adaptive bound on the average error. We assume throughout this section that the space (X, ρ) has dimension d for some Ĉ_ρ (see Def. 2).

Lemma 3. Suppose the algorithm parameter Ĉ ≥ Ĉ_ρ. The following invariants hold throughout the procedure for all phases i ≥ 1 of Algorithm 1:
• i ≤ d_i ≤ d.
• For any t ∈ Phase i we have |X_i| ≤ Ĉ 4^{d_i} ε_t^{-d_i}.

Lemma 4 (Bound on a single phase). Suppose the algorithm parameter Ĉ ≥ Ĉ_ρ. Consider Phase i ≥ 1, and suppose this phase lasts n_i steps. Let E_{n_i} denote expectation relative to the uniform choice of X out of {x_t : t ∈ Phase i}. We have the following bound:

E_{n_i} E_{Y^n} |f_n(X) − f(X)|^2 ≤ (Ĉ 4^d ∆_Y^2 + 12 λ^2) n_i^{-2/(2+d)}.

Proof. Let X_i(X) denote the center closest to X in X_i. If X_i(X) = x_s, s ∈ [n], we let n_{x_s} denote the number of points assigned to the center x_s. We use the notation x_t → x_s to say that x_t is assigned to center x_s. First fix X ∈ {x_t : t ∈ Phase i} and let x_s = X_i(X). Define f̃_n(X) ≡ E_{Y^n} f_n(X) = (1/n_{x_s}) Σ_{x_t → x_s} f(x_t).
We proceed with the following standard bias-variance decomposition:

E_{Y^n} |f_n(X) − f(X)|^2 = E_{Y^n} |f_n(X) − f̃_n(X)|^2 + |f̃_n(X) − f(X)|^2.   (1)

Let X = x_r, r ≥ s. We first bound the bias term. Using the Lipschitz property of f and Jensen's inequality, we have

|f̃_n(X) − f(X)|^2 ≤ ((1/n_{x_s}) Σ_{x_t → x_s} λ ρ(x_r, x_t))^2 ≤ (1/n_{x_s}) Σ_{x_t → x_s} λ^2 ρ(x_r, x_t)^2
  ≤ (2λ^2/n_{x_s}) Σ_{x_t → x_s} (ρ(x_r, x_s)^2 + ρ(x_s, x_t)^2) ≤ (2λ^2/n_{x_s}) Σ_{x_t → x_s} (ε_r^2 + ε_t^2).

The variance term is easily bounded as follows:

E_{Y^n} |f_n(X) − f̃_n(X)|^2 = Σ_{x_t → x_s} E_{Y^n} |Y_t − f(x_t)|^2 / n_{x_s}^2 ≤ ∆_Y^2 / n_{x_s}.

Now take the expectation over X ∼ U{x_t : t ∈ Phase i}. We have:

E_{n_i} E_{Y^n} |f_n(X) − f(X)|^2 = Σ_{x_s ∈ X_i} E_{n_i} [ E_{Y^n} |f_n(X) − f(X)|^2 · 1{X → x_s} ]
  ≤ (1/n_i) Σ_{x_s ∈ X_i} Σ_{x_r → x_s} ( ∆_Y^2/n_{x_s} + (2λ^2/n_{x_s}) Σ_{x_t → x_s} (ε_r^2 + ε_t^2) )
  = (1/n_i) Σ_{x_s ∈ X_i} ∆_Y^2 + (2λ^2/n_i) Σ_{x_s ∈ X_i} (1/n_{x_s}) Σ_{x_r → x_s} Σ_{x_t → x_s} (ε_r^2 + ε_t^2)
  = ∆_Y^2 · |X_i|/n_i + (4λ^2/n_i) Σ_{x_s ∈ X_i} Σ_{x_t → x_s} ε_t^2
  = ∆_Y^2 · |X_i|/n_i + (4λ^2/n_i) Σ_{t ∈ Phase i} ε_t^2.

To bound the last term, we have

Σ_{t ∈ Phase i} ε_t^2 = Σ_{t_i ∈ [n_i]} t_i^{-2/(2+d_i)} ≤ ∫_0^{n_i} τ^{-2/(2+d_i)} dτ ≤ 3 n_i^{1 − 2/(2+d_i)}.

Combining with the previous derivation and with both statements of Lemma 3, we get

E_{n_i} E_{Y^n} |f_n(X) − f(X)|^2 ≤ ∆_Y^2 · |X_i|/n_i + 12 λ^2 n_i^{-2/(2+d_i)} ≤ (Ĉ 4^d ∆_Y^2 + 12 λ^2) n_i^{-2/(2+d)}.

Corollary 1 (Combined phases). Suppose the algorithm parameter Ĉ ≥ Ĉ_ρ. Then we have

E_n E_{Y^n} |f_n(X) − f(X)|^2 ≤ 2 (Ĉ 4^d ∆_Y^2 + 12 λ^2) n^{-2/(2+d)}.

Proof. Let I denote the number of phases up to time n. We decompose the expectation E_n in terms of the various phases i ∈ [I] and apply Lemma 4. Let B ≜ Ĉ 4^d ∆_Y^2 + 12 λ^2. We have:

E_n E_{Y^n} |f_n(X) − f(X)|^2 ≤ B Σ_{i=1}^I (n_i/n) n_i^{-2/(2+d)} = (B I/n) Σ_{i=1}^I (1/I) n_i^{d/(2+d)}
  ≤ (B I/n) ( Σ_{i=1}^I n_i/I )^{d/(2+d)} = (B I/n) (n/I)^{d/(2+d)} = B · I^{2/(2+d)} n^{-2/(2+d)} ≤ B · d^{2/(2+d)} n^{-2/(2+d)},

where in the second inequality we use Jensen's inequality, and in the last inequality Lemma 3.

4.2 Bound on L2 Error

We need the following lemma, whose proof is in the supplemental appendix, which bounds the probability that a ρ-ball of a given radius contains a sample from x^n.
This will then allow us to bound the bias induced by transforming a solution for the adversarial setting into a solution for the stochastic setting.

Lemma 5. Suppose (X, ρ, µ) has metric-measure dimension d. Let µ be a distribution on X and let µ_n denote the empirical distribution of an i.i.d. sample x^n from µ. For ε > 1/n, let B_ε denote the class of ρ-balls centered on X of radius ε. There exists C depending on d such that the following holds. Let 0 < δ < 1, and define α_{n,δ} = C (d log n + log(1/δ)). Then, with probability at least 1 − δ, for all B ∈ B_ε satisfying µ(B) ≥ α_{n,δ}/n we have µ_n(B) > 1/n.

We are now ready to prove Theorem 1.

Proof of Theorem 1. Fix δ = 1/n and define α_{n,δ} as in Lemma 5. Pick ε = (α_{n,δ}/(C_1 n))^{1/d} ≥ 1/n, where C_1 is such that every B ∈ B_ε has mass at least C_1 ε^d. Since every B ∈ B_ε then satisfies µ(B) ≥ C_1 ε^d ≥ α_{n,δ}/n, we have by Lemma 5 that, with probability at least 1 − δ, every B ∈ B_ε contains a point from x^n. In other words, the event E that x^n forms an ε-cover of X is (1 − δ)-likely.

Suppose x_t is the closest point in x^n to x ∈ X. We write x → x_t. Then, under E, we have |f(x) − f(x_t)| ≤ λε. We therefore have by Fubini's theorem

E_{x^n, Y^n} ||f_n − f||^2_{2,µ} = E_{x^n} E_X E_{Y^n | x^n} |f_n(X) − f(X)|^2 · (1{E} + 1{Ē})
  ≤ E_{x^n} Σ_{t=1}^n 2 µ(x : x → x_t) E_{Y^n | x^n} |f_n(x_t) − f(x_t)|^2 + 2λ^2 ε^2 + δ ∆_Y^2
  ≤ E_{x^n} Σ_{t=1}^n 2 C_2 ε^d E_{Y^n | x^n} |f_n(x_t) − f(x_t)|^2 + 2λ^2 ε^2 + δ ∆_Y^2
  ≤ (2 C_2 α_{n,δ}/C_1) sup_{x^n} E_n E_{Y^n} |f_n(x_t) − f(x_t)|^2 + 2λ^2 ε^2 + δ ∆_Y^2,

where in the first inequality we break the integration over the Voronoi partition of X defined by the points in x^n, and introduce f(x_t); the second inequality uses {x : x → x_t} ⊂ B(x_t, ε) under E.

4.3 Lower-bound

Let us consider first the case of a fixed n. The idea behind the proof is as follows: for µ fixed, we have to come up with a class F of functions which vary considerably on the space X. To this end we discretize X into as many cells as possible, and let any f ∈ F potentially change sign from one cell to the other.
The larger the dimension d, the more we can discretize the space and the more complex F can be, subject to a Lipschitz constraint. The problem of picking the right f can thus be reduced to that of classification, since the learner has to discover the sign of f on sufficiently many cells. In order to handle many data sizes n simultaneously, we borrow from the idea above. Say we want to show that the lower-bound holds for a subsequence {n_i} simultaneously. Then we reserve a subset of the space X for each n_1, n_2, . . ., and discretize each subset according to n_i. The functions in F then have to vary sufficiently in each subset of the space X according to the corresponding n_i. This is illustrated in Figure 3.

Figure 3: Lower bound proof idea.

We can then apply the same idea of reduction to classification for each n_t separately. This sort of idea appears in [7], where µ is uniform on the Euclidean cube, and where the regularity of the cube is used to set up the right sequence of discretizations over subsets of the cube. The main technicality in our result is that we work with a general space without much regularity. The lack of regularity makes it unclear a priori how to divide such a space into subsets of the proper size for each n_i. Last, we have to ensure that the functions f ∈ F resulting from our discretization of a general metric space X are in fact Lipschitz. For this, we extend some of the ideas from [9], which handles the case of a fixed n. For lack of space, the complete proof is in the extended version of the paper.

5 Conclusions

We presented an efficient and nearly minimax-optimal approach to nonparametric regression in a streaming setting. The streaming setting is gaining more attention as modern data sizes are getting larger, and as data is being acquired online in many applications. The main insights behind the approach presented extend to other nonparametric methods, and are likely to extend to settings of a more adversarial nature.
We leave open the question of optimal adaptation to the smoothness of the unknown function, while we efficiently solve the equally or more important question of adapting to the unknown dimension of the data, which generally has a stronger effect on the convergence rate.

References

[1] Y. Ben-Haim and E. Tom-Tov. A streaming parallel decision tree algorithm. Journal of Machine Learning Research, 11:849–872, 2010.
[2] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbors. ICML, 2006.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[4] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2005.
[5] P. Domingos and G. Hulten. Mining high-speed data streams. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 71–80, 2000.
[6] H. Gu and J. Lafferty. Sequential nonparametric regression. ICML, 2012.
[7] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.
[8] A. Kalai and S. Vempala. Efficient algorithms for universal portfolios. Journal of Machine Learning Research, 3:423–440, 2002.
[9] S. Kpotufe. k-NN regression adapts to local intrinsic dimension. NIPS, 2011.
[10] R. Krauthgamer and J. R. Lee. Navigating nets: simple algorithms for proximity search. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '04, pages 798–807, Philadelphia, PA, USA, 2004. Society for Industrial and Applied Mathematics.
[11] B. Pfahringer, G. Holmes, and R. Kirkby. Handling numeric attributes in Hoeffding trees. In Advances in Knowledge Discovery and Data Mining: Proceedings of the 12th Pacific-Asia Conference (PAKDD), volume 5012, pages 296–307. Springer, 2008.
[12] S. Schaal and C. Atkeson. Robot juggling: an implementation of memory-based learning. Control Systems Magazine, IEEE, 1994.
[13] C. J. Stone. Optimal rates of convergence for nonparametric estimators. Ann. Statist., 8:1348–1360, 1980.
[14] C. J. Stone. Optimal global rates of convergence for nonparametric estimators. Ann. Statist., 10:1340–1353, 1982.
[15] M. A. Taddy, R. B. Gramacy, and N. G. Polson. Dynamic trees for learning and design. Journal of the American Statistical Association, 106(493), 2011.
[16] S. Vijayakumar and S. Schaal. Locally weighted projection regression: an O(n) algorithm for incremental real time learning in high dimensional space. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), pages 1079–1086, 2000.
Estimating LASSO Risk and Noise Level

Mohsen Bayati, Stanford University, bayati@stanford.edu
Murat A. Erdogdu, Stanford University, erdogdu@stanford.edu
Andrea Montanari, Stanford University, montanar@stanford.edu

Abstract

We study the fundamental problems of variance and risk estimation in high-dimensional statistical modeling. In particular, we consider the problem of learning a coefficient vector θ0 ∈ R^p from noisy linear observations y = Xθ0 + w ∈ R^n (p > n) and the popular estimation procedure of solving the ℓ1-penalized least squares objective known as the LASSO or Basis Pursuit DeNoising (BPDN). In this context, we develop new estimators for the ℓ2 estimation risk ∥bθ − θ0∥_2 and the variance of the noise when the distributions of θ0 and w are unknown. These can be used to select the regularization parameter optimally. Our approach combines Stein's unbiased risk estimate [Ste81] and the recent results of [BM12a, BM12b] on the analysis of approximate message passing and the risk of the LASSO. We establish high-dimensional consistency of our estimators for sequences of matrices X of increasing dimensions, with independent Gaussian entries. We establish validity for a broader class of Gaussian designs, conditional on a certain conjecture from statistical physics. To the best of our knowledge, this result is the first that provides an asymptotically consistent risk estimator for the LASSO solely based on data. In addition, we demonstrate through simulations that our variance estimation outperforms several existing methods in the literature.

1 Introduction

In the Gaussian random design model for linear regression, we seek to reconstruct an unknown coefficient vector θ0 ∈ R^p from a vector of noisy linear measurements y ∈ R^n:

y = Xθ0 + w ,   (1.1)

where X ∈ R^{n×p} is a measurement (or feature) matrix whose iid rows are generated by a multivariate normal density. The noise vector w has iid entries with mean 0 and variance σ^2.
While this problem is well understood in the low-dimensional regime p ≪ n, a growing body of research addresses the more challenging high-dimensional scenario in which p > n. The Basis Pursuit Denoising (BPDN) or LASSO [CD95, Tib96] is an extremely popular approach in this regime; it finds an estimate for θ0 by minimizing the cost function

C_{X,y}(λ, θ) ≡ (2n)^{−1} ∥y − Xθ∥_2^2 + λ∥θ∥_1 ,   (1.2)

with λ > 0. In particular, θ0 is estimated by bθ(λ; X, y) = argmin_θ C_{X,y}(λ, θ). This method is well suited to the ubiquitous case in which θ0 is sparse, i.e. a small number of features effectively predict the outcome. Since this optimization problem is convex, it can be solved efficiently, and fast specialized algorithms have been developed for this purpose [BT09]. Research has established a number of important properties of the LASSO estimator under suitable conditions on the design matrix X and for sufficiently sparse vectors θ0. Under irrepresentability conditions, the LASSO correctly recovers the support of θ0 [ZY06, MB06, Wai09]. Under weaker conditions, such as restricted isometry or compatibility properties, correct recovery of the support fails; however, the ℓ2 estimation error ∥bθ − θ0∥_2 is of the same order as the one achieved by an oracle estimator that knows the support [CRT06, CT07, BRT09, BdG11]. Finally, [DMM09, RFG09, BM12b] provided asymptotic formulas for the MSE and other operating characteristics of bθ for Gaussian design matrices X. While the aforementioned research provides solid justification for using the LASSO estimator, it is of limited guidance to the practitioner. For instance, a crucial question is how to set the regularization parameter λ. This question becomes even more urgent for high-dimensional methods with multiple regularization terms. The oracle bounds of [CRT06, CT07, BRT09, BdG11] suggest taking λ = cσ√(log p), with c a dimension-independent constant (say c = 1 or 2).
However, in practice a factor of two in λ can make a substantial difference in statistical applications. Related to this issue is the question of accurately estimating the ℓ2 error ∥bθ − θ0∥_2^2. The above oracle bounds have the form ∥bθ − θ0∥_2^2 ≤ C k λ^2, with k = ∥θ0∥_0 the number of nonzero entries in θ0, as long as λ ≥ cσ√(log p). As a consequence, minimizing the bound does not yield a recipe for setting λ. Finally, estimating the noise level is necessary for applying these formulae, and this is in itself a challenging question. The results of [DMM09, BM12b] provide exact asymptotic formulae for the risk and its dependence on the regularization parameter λ. This might appear promising for choosing the optimal value of λ, but it has one serious drawback. The formulae of [DMM09, BM12b] depend on the empirical distribution1 of the entries of θ0, which is of course unknown, as well as on the noise level2. A step towards the resolution of this problem was taken in [DMM11], which determined the least favorable noise level and distribution of entries, and hence suggested a prescription for λ and a predicted risk in this case. While this settles the question (in an asymptotic sense) from a minimax point of view, it would be preferable to have a prescription that is adaptive to the distribution of the entries of θ0 and to the noise level. Our starting point is the asymptotic results of [DMM09, DMM11, BM12a, BM12b]. These provide a construction of an unbiased pseudo-data vector bθ^u that is asymptotically Gaussian with mean θ0. The LASSO estimator bθ is obtained by applying a denoiser function to bθ^u. We then use Stein's Unbiased Risk Estimate (SURE) [Ste81] to derive an expression for the ℓ2 risk (mean squared error) of this operation. What results is an expression for the mean squared error of the LASSO that depends only on the observed data y and X. Finally, by modifying this formula we obtain an estimator for the noise level.
We prove that these estimators are asymptotically consistent for sequences of design matrices X with converging aspect ratio and iid Gaussian entries. We expect that the consistency holds far beyond this case. In particular, for the case of general Gaussian design matrices, consistency holds conditionally on a conjectured formula stated in [JM13] on the basis of the "replica method" from statistical physics. For the sake of concreteness, let us briefly describe our method in the case of standard Gaussian design, that is, when the design matrix X has iid Gaussian entries. We construct the unbiased pseudo-data vector by

bθ^u = bθ + X^T (y − Xbθ)/[n − ∥bθ∥_0] .   (1.3)

Our estimator of the mean squared error is derived by applying SURE to the unbiased pseudo-data. In particular, our estimator is bR(y, X, λ, bτ), where

bR(y, X, λ, τ) ≡ (τ^2/p)(2∥bθ∥_0 − p) + ∥X^T (y − Xbθ)∥_2^2 / [p(n − ∥bθ∥_0)^2] .   (1.4)

Here bθ(λ; X, y) is the LASSO estimator and bτ = ∥y − Xbθ∥_2/[n − ∥bθ∥_0]. Our estimator of the noise level is bσ^2/n = bτ^2 − bR(y, X, λ, bτ)/δ, where δ = n/p. Although our rigorous results are asymptotic in the problem dimensions, we show through numerical simulations that they are accurate already on problems with a few thousands of variables.

1 The probability distribution that puts a point mass 1/p at each of the p entries of the vector.
2 Note that our definition of noise level σ corresponds to σ√n in most of the compressed sensing literature.

Figure 1: Red represents the values produced by our estimators and green the true values. Left: MSE versus regularization parameter λ. Here, δ = 0.5, σ^2/n = 0.2, and X ∈ R^{n×p} has iid N_1(0, 1) entries with n = 4000. Right: ˆσ^2/n versus λ; comparison of different estimators of σ^2 (AMP.LASSO, N.LASSO, PMLE, RCV.LASSO, SCALED.LASSO, TRUE) under the same model parameters. Scaled Lasso's prescribed choice of (λ, ˆσ^2/n) is marked with a bold x.

To the best of our knowledge, this is the first method for estimating the LASSO mean squared error solely based on data. We compare our approach with earlier work on the estimation of the noise level. The authors of [NSvdG10] target this problem using an ℓ1-penalized maximum log-likelihood estimator (PMLE), and a related method called "Scaled Lasso" [SZ12] (also studied by [BC13]) considers an iterative algorithm to jointly estimate the noise level and θ0. Moreover, the authors of [FGH12] developed a refitted cross-validation (RCV) procedure for the same task. Under some conditions, the aforementioned studies provide consistency results for their noise level estimators. We compare our estimator with these methods through extensive numerical simulations.

The rest of the paper is organized as follows. In order to motivate our theoretical work, we start with numerical simulations in Section 2. The necessary background on SURE and the asymptotic distributional characterization of the LASSO is presented in Section 3. Finally, our main theoretical results can be found in Section 4.

2 Simulation Results

In this section, we validate the accuracy of our estimators through numerical simulations. We also analyze the behavior of our variance estimator as λ varies, along with four other methods. Two of these methods rely on the minimization problem

(bθ, bσ) = argmin_{θ,σ} ∥y − Xθ∥_2^2/(2n h_1(σ)) + h_2(σ) + λ∥θ∥_1/h_3(σ) ,

where for PMLE h_1(σ) = σ^2, h_2(σ) = log(σ), h_3(σ) = σ, and for the Scaled Lasso h_1(σ) = σ, h_2(σ) = σ/2, and h_3(σ) = 1.
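For concreteness, the proposed estimators (bR of Eq. (1.4) together with the noise estimate bσ^2/n = bτ^2 − bR/δ) can be sketched in a few lines of numpy. The ISTA solver, the iteration count, and all numerical choices below are our own illustrative stand-ins, not the l1_ls solver used in the paper.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, iters=500):
    """Minimize (2n)^{-1} ||y - X theta||_2^2 + lam ||theta||_1 (Eq. 1.2) by ISTA."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the smooth part
    theta = np.zeros(p)
    for _ in range(iters):
        theta = soft(theta + X.T @ (y - X @ theta) / (n * L), lam / L)
    return theta

def lasso_risk_and_noise(X, y, lam):
    """R-hat of Eq. (1.4) and the noise estimate tau^2 - R/delta (a sketch)."""
    n, p = X.shape
    theta = lasso_ista(X, y, lam)
    df = np.count_nonzero(theta)                       # ||theta-hat||_0
    resid = y - X @ theta
    tau2 = (np.linalg.norm(resid) / (n - df)) ** 2     # tau-hat squared
    R = tau2 * (2 * df - p) / p \
        + np.linalg.norm(X.T @ resid) ** 2 / (p * (n - df) ** 2)
    return R, tau2 - R / (n / p)                       # (MSE estimate, sigma^2/n estimate)
```

Note that both returned quantities depend only on the observed data (y, X) and λ, which is the point of the construction.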
The third method is a naïve procedure that estimates the variance in two steps: (i) use the LASSO to determine the relevant variables; (ii) apply ordinary least squares on the selected variables to estimate the variance. The fourth method is Refitted Cross-Validation (RCV) [FGH12], which also has two stages. RCV requires the sure screening property, i.e. that the model selected in its first stage includes all the relevant variables. Note that this requirement may not be satisfied for many values of λ. In our implementation of RCV, we used the LASSO for variable selection. In our simulation studies, we used the LASSO solver l1_ls [SJKG07]. We ran 50 replications; within each, we generated a new Gaussian design matrix X. We solved the LASSO over 20 equidistant λ's in the interval [0.1, 2]. For each λ, a new signal θ0 and noise independent of X were generated.

Figure 2: Red represents the values produced by our estimators and green the true values. Left: MSE versus regularization parameter λ. Here, δ = 0.5, σ^2/n = 0.2, and the rows of X ∈ R^{n×p} are iid from N_p(0, Σ), where n = 5000 and Σ has entries 1 on the main diagonal and 0.4 above and below the main diagonal. Right: comparison of different estimators of σ^2/n; parameter values are the same as in Figure 1. Scaled Lasso's prescribed choice of (λ, ˆσ^2/n) is marked with a bold x.

The results are shown in Figures 1 and 2. Figure 1 is obtained using n = 4000, δ = 0.5 and σ^2/n = 0.2. The coordinates of the true signal independently take the values 0, 1, −1 with probabilities 0.9, 0.05, 0.05, respectively.
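The simulated data just described can be re-created with a short numpy routine. The function name and its defaults are ours; the noise standard deviation is set so that each entry of w has variance (σ^2/n)·n, matching the paper's normalization of the noise level (its footnote on σ vs. σ√n).

```python
import numpy as np

def simulate_instance(n=4000, delta=0.5, sigma2_over_n=0.2, rho=0.0, seed=None):
    """One instance of the simulation setup: spike-and-slab signal, Gaussian
    design (banded covariance if rho > 0), and iid Gaussian noise."""
    rng = np.random.default_rng(seed)
    p = int(n / delta)
    # coordinates of theta0 are 0 / +1 / -1 with probabilities 0.9 / 0.05 / 0.05
    theta0 = rng.choice([0.0, 1.0, -1.0], size=p, p=[0.9, 0.05, 0.05])
    if rho == 0.0:
        X = rng.standard_normal((n, p))                      # standard Gaussian design
    else:
        # Sigma: 1 on the diagonal, rho just above and below it
        Sigma = np.eye(p) + rho * (np.eye(p, k=1) + np.eye(p, k=-1))
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    w = rng.normal(0.0, np.sqrt(sigma2_over_n * n), size=n)  # entrywise variance sigma^2
    return X, X @ theta0 + w, theta0
```

With these defaults the signal and noise contributions to each measurement are of comparable size, which is what makes the variance-estimation problem nontrivial.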
For each replication, we used a design matrix X with X_{i,j} iid ∼ N_1(0, 1). Figure 2 is obtained with n = 5000 and the same values of δ and σ^2 as in Figure 1. The coordinates of the true signal independently take the values 0, 1, −1 with probabilities 0.9, 0.05, 0.05, respectively. For each replication, we used a design matrix X whose rows are independently generated from N_p(0, Σ), where Σ has 1 on the main diagonal and 0.4 above and below the diagonal. As can be seen from the figures, the asymptotic theory applies quite well to finite-dimensional data. We refer the reader to [BEM13] for a more detailed simulation analysis.

3 Background and Notations

3.1 Preliminaries and Definitions

First, we provide a brief introduction to the approximate message passing (AMP) algorithm suggested by [DMM09] and its connection to the LASSO (see [DMM09, BM12b] for more details). For an appropriate sequence of non-linear denoisers {η_t}_{t≥0}, the AMP algorithm constructs a sequence of estimates {θ^t}_{t≥0}, pseudo-data {y^t}_{t≥0}, and residuals {ϵ^t}_{t≥0}, where θ^t, y^t ∈ R^p and ϵ^t ∈ R^n. These sequences are generated according to the iteration

θ^{t+1} = η_t(y^t) ,   y^t = θ^t + X^T ϵ^t/n ,   ϵ^t = y − Xθ^t + (1/δ) ϵ^{t−1} ⟨η'_{t−1}(y^{t−1})⟩ ,   (3.1)

where δ ≡ n/p and the algorithm is initialized with θ^0 = ϵ^0 = 0. Each denoiser η_t(·) is a separable function, and its derivative is denoted by η'_t(·). Given a scalar function f and a vector u ∈ R^m, we let f(u) denote the vector (f(u_1), . . . , f(u_m)) ∈ R^m obtained by applying f component-wise, and ⟨u⟩ ≡ m^{−1} Σ_{i=1}^m u_i is the average of the vector u ∈ R^m. Next, consider the state evolution for the AMP algorithm. For the random variable Θ_0 ∼ p_{θ0}, a positive constant σ^2, and a given sequence of non-linear denoisers {η_t}_{t≥0}, define the sequence {τ_t^2}_{t≥0} iteratively by

τ_{t+1}^2 = F_t(τ_t^2) ,   F_t(τ^2) ≡ σ^2 + (1/δ) E{[η_t(Θ_0 + τZ) − Θ_0]^2} ,   (3.2)

where τ_0^2 = σ^2 + E{Θ_0^2}/δ and Z ∼ N_1(0, 1) is independent of Θ_0. From Eq.
3.2, it is apparent that the function F_t depends on the distribution of Θ_0. It is shown in [BM12a] that the pseudo-data y^t has the same asymptotic distribution as Θ_0 + τ_t Z. Roughly, the pseudo-data generated by AMP is the sum of the true signal and a zero-mean, normally distributed noise whose variance is determined by the state evolution. In other words, each iteration produces pseudo-data that is distributed normally around the true signal, i.e. y_i^t ≈ θ_{0,i} + N_1(0, τ_t^2). The importance of this result will appear later, when we use Stein's method to obtain an estimator for the MSE and the variance of the noise. We will use state evolution to describe the behavior of a specific type of converging sequence, defined as follows:

Definition 1. The sequence of instances {θ_0(n), X(n), σ^2(n)}_{n∈N} indexed by n is said to be a converging sequence if θ_0(n) ∈ R^p, X(n) ∈ R^{n×p}, σ^2(n) ∈ R, and p = p(n) is such that n/p → δ ∈ (0, ∞), σ^2(n)/n → σ_0^2 for some σ_0 ∈ R, and in addition the following conditions hold:
(a) The empirical distribution of {θ_{0,i}(n)}_{i=1}^p converges in distribution to a probability measure p_{θ0} on R with bounded 2nd moment. Further, as n → ∞, p^{−1} Σ_{i=1}^p θ_{0,i}(n)^2 → E_{p_{θ0}}{Θ_0^2}.
(b) If {e_i}_{1≤i≤p} ⊂ R^p denotes the standard basis, then n^{−1/2} max_{i∈[p]} ∥X(n)e_i∥_2 → 1 and n^{−1/2} min_{i∈[p]} ∥X(n)e_i∥_2 → 1 as n → ∞, with [p] ≡ {1, . . . , p}.

We provide rigorous results for the special class of converging sequences in which the entries of X are iid N_1(0, 1) (the standard Gaussian design model). We also provide results (assuming Conjecture 4.4 is correct) when the rows of X are iid multivariate normal N_p(0, Σ) (the general Gaussian design model). In order to discuss the LASSO connection for the AMP algorithm, we need to use a specific class of denoisers and apply an appropriate calibration to the state evolution. Here we briefly describe how this can be done, and we refer the reader to [BEM13] for a detailed discussion.
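As an illustration, the recursion of Eq. (3.1) with a soft-thresholding denoiser (introduced next) and threshold ξ_t = ατ_t can be written in a few lines. The sketch below uses the standard compressed-sensing scaling, with X having entries of variance 1/n (roughly unit-norm columns), which differs from the paper's normalization by constant factors; α and the iteration count are illustrative choices of ours, and τ_t is estimated from the residual as ∥ϵ^t∥_2/√n.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(X, y, alpha=1.5, iters=30):
    """AMP recursion in the spirit of Eq. (3.1), with eta_t(.) = soft(. ; alpha*tau_t).
    Assumes X has entries of variance 1/n, so the pseudo-data is theta + X.T @ eps."""
    n, p = X.shape
    delta = n / p
    theta = np.zeros(p)                               # theta^0 = 0
    eps = y.copy()                                    # eps^0 = y - X theta^0
    tau = np.linalg.norm(eps) / np.sqrt(n)
    for _ in range(iters):
        pseudo = theta + X.T @ eps                    # pseudo-data y^t
        theta = soft(pseudo, alpha * tau)             # theta^{t+1} = eta_t(y^t)
        # Onsager term: <eta'_t(y^t)> is the fraction of coordinates above threshold
        eps = y - X @ theta + eps * np.mean(np.abs(pseudo) > alpha * tau) / delta
        tau = np.linalg.norm(eps) / np.sqrt(n)        # tau-hat_{t+1}
    return theta, tau
```

The memory term added to the residual (the Onsager correction) is what makes the effective noise in the pseudo-data Gaussian, which is the property the state evolution (3.2) describes.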
Denote by η : R × R_+ → R the soft-thresholding denoiser

η(x; ξ) = x − ξ if x > ξ ;  0 if −ξ ≤ x ≤ ξ ;  x + ξ if x < −ξ .

Also, denote by η'(·;·) the derivative of the soft-thresholding function with respect to its first argument. We will use the AMP algorithm with the soft-thresholding denoiser η_t(·) = η(·; ξ_t), along with a suitable sequence of thresholds {ξ_t}_{t≥0}, in order to obtain a connection to the LASSO. Let α > 0 be a constant and, at every iteration t, choose the threshold ξ_t = ατ_t. It was shown in [DMM09] and [BM12b] that the state evolution has a unique fixed point τ_* = lim_{t→∞} τ_t, and that there exists a mapping α ↦ τ_*(α) between those two parameters. Further, it was shown that the function α ↦ λ(α), with domain (α_min(δ), ∞) for some constant α_min, given by

λ(α) ≡ ατ_* [1 − (1/δ) E η'(Θ_0 + τ_*Z; ατ_*)] ,

admits a well-defined, continuous and non-decreasing inverse α : (0, ∞) → (α_min, ∞). In particular, the functions λ ↦ α(λ) and α ↦ τ_*(α) provide a calibration between the AMP algorithm and the LASSO, where λ is the regularization parameter.

3.2 Distributional Results for the LASSO

We proceed by stating a distributional result on the LASSO, established in [BM12b].

Theorem 3.1. Let {θ_0(n), X(n), σ^2(n)}_{n∈N} be a converging sequence of instances of the standard Gaussian design model. Denote the LASSO estimator of θ_0(n) by bθ(n, λ) and the unbiased pseudo-data generated by the LASSO by bθ^u(n, λ) ≡ bθ + X^T(y − Xbθ)/[n − ∥bθ∥_0]. Then, as n → ∞, the empirical distribution of {bθ_i^u, θ_{0,i}}_{i=1}^p converges weakly to the joint distribution of (Θ_0 + τ_*Z, Θ_0), where Θ_0 ∼ p_{θ0}, τ_* = τ_*(α(λ)), Z ∼ N_1(0, 1), and Θ_0 and Z are independent random variables.

The above theorem, combined with the stationarity condition of the LASSO, implies that the empirical distribution of {bθ_i, θ_{0,i}}_{i=1}^p converges weakly to the joint distribution of (η(Θ_0 + τ_*Z; ξ_*), Θ_0), where ξ_* = α(λ)τ_*(α(λ)). It is also important to emphasize a relation between the asymptotic MSE, τ_*^2, and the model variance.
By Theorem 3.1 and the state evolution recursion, almost surely,

lim_{p→∞} ∥bθ − θ_0∥_2^2/p = E[(η(Θ_0 + τ_*Z; ξ_*) − Θ_0)^2] = δ(τ_*^2 − σ_0^2) ,   (3.3)

which will be helpful for obtaining an estimator of the noise level.

3.3 Stein's Unbiased Risk Estimator

In [Ste81], Stein proposed a method to estimate the risk of an almost arbitrary estimator of the mean of a multivariate normal vector. A generalized form of his method can be stated as follows.

Proposition 3.2 ([Ste81], [Joh12]). Let x, µ ∈ R^n and V ∈ R^{n×n} be such that x ∼ N_n(µ, V). Suppose that ˆµ(x) ∈ R^n is an estimator of µ of the form ˆµ(x) = x + g(x), that g : R^n → R^n is weakly differentiable, and that for all i, j ∈ [n], E_ν[|x_i g_i(x)| + |x_j g_j(x)|] < ∞, where ν is the measure corresponding to the multivariate Gaussian distribution N_n(µ, V). Define the functional

S(x, ˆµ) ≡ Tr(V) + 2 Tr(V Dg(x)) + ∥g(x)∥_2^2 ,

where Dg is the vector derivative. S(x, ˆµ) is an unbiased estimator of the risk, i.e. E_ν∥ˆµ(x) − µ∥_2^2 = E_ν[S(x, ˆµ)].

In the statistics literature, the above estimator is called "Stein's Unbiased Risk Estimator" or SURE. The following remark will be helpful for building intuition about our approach.

Remark 1. Considering the risk of the soft-thresholding estimator η(x_i; ξ) for µ_i when x_i ∼ N_1(µ_i, σ^2) for i ∈ [m], the above formula suggests the functional

S(x, η(·; ξ))/m = σ^2 − (2σ^2/m) Σ_{i=1}^m 1{|x_i| ≤ ξ} + (1/m) Σ_{i=1}^m [min{|x_i|, ξ}]^2

as an estimator of the corresponding MSE.

4 Main Results

4.1 Standard Gaussian Design Model

We start by defining two estimators that are motivated by Proposition 3.2.

Definition 2. Define

bR_ψ(x, τ) ≡ −τ^2 + 2τ^2 ⟨ψ'(x)⟩ + ⟨(ψ(x) − x)^2⟩ ,

where x ∈ R^m, τ ∈ R_+, and ψ : R → R is a suitable non-linear function. Also, for y ∈ R^n and X ∈ R^{n×p}, denote by bR(y, X, λ, τ) the estimator of the mean squared error of the LASSO, where

bR(y, X, λ, τ) ≡ (τ^2/p)(2∥bθ∥_0 − p) + ∥X^T(y − Xbθ)∥_2^2 / [p(n − ∥bθ∥_0)^2] .

Remark 2. Note that bR(y, X, λ, τ) is just a special case of bR_ψ(x, τ) with x = bθ^u and ψ(·) = η(·; ξ) for ξ = λ/(1 − ∥bθ∥_0/p).
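Remark 1 is easy to check numerically: the SURE functional can be computed from the observations alone, yet it tracks the true MSE of soft thresholding. The following sketch (our own toy setup, with a spike-and-slab mean vector) compares the two on simulated data.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sure_soft(x, xi, sigma2):
    """The SURE functional of Remark 1, per coordinate: an unbiased estimate of
    the MSE of soft thresholding, computed without knowing the true means."""
    return (sigma2
            - 2.0 * sigma2 * np.mean(np.abs(x) <= xi)
            + np.mean(np.minimum(np.abs(x), xi) ** 2))
```

Because the estimator is unbiased, its value on a large sample should be close to the realized per-coordinate MSE, up to O(1/√m) fluctuations.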
We are now ready to state the following theorem on the asymptotic MSE of the AMP: Theorem 4.1. Let {θ0(n), X(n), σ2(n)}n∈N be a converging sequence of instances of the standard Gaussian design model. Denote the sequence of estimators of θ0(n) by {θt(n)}t≥0, the pseudodata by {yt(n)}t≥0, and residuals by {ϵt(n)}t≥0 produced by AMP algorithm using the sequence of Lipschitz continuous functions {ηt}t≥0 as in Eq. 3.1. Then, as n →∞, the mean squared error of the AMP algorithm at iteration t+1 has the same limit as bRηt(yt, bτ) where bτt = ∥ϵt∥2/√n. More precisely, with probability one, lim n→∞∥θt+1 −θ0∥2 2/p(n) = lim n→∞ bRηt(yt, bτt) . (4.1) In other words, bRηt(yt, bτt) is a consistent estimator of the asymptotic mean squared error of the AMP algorithm at iteration t + 1. 6 The above theorem allows us to accurately predict how far the AMP estimate is from the true signal at iteration t + 1 and this can be utilized as a stopping rule for the AMP algorithm. Note that it was shown in [BM12b] that the left hand side of Eq. (4.1) is E[(ηt(Θ0 + τtZ) −Θ0)2]. Combining this with the above theorem, we easily obtain, lim n→∞ bRηt(yt, bτt) = E[(ηt(Θ0 + τtZ) −Θ0)2] . We state the following version of Theorem 4.1 for the LASSO. Theorem 4.2. Let {θ0(n), X(n), σ2(n)}n∈N be a converging sequence of instances of the standard Gaussian design model. Denote the LASSO estimator of θ0(n) by bθ(n, λ). Then with probability one, lim n→∞∥bθ −θ0∥2 2/p(n) = lim n→∞ bR(y, X, λ, bτ) , where bτ = ∥y −Xbθ∥2/[n −∥bθ∥0]. In other words, bR(y, X, λ, bτ) is a consistent estimator of the asymptotic mean squared error of the LASSO. Note that Theorem 4.2 enables us to assess the quality of the LASSO estimation without knowing the true signal itself or the noise (or their distribution). The following corollary can be shown using the above theorem and Eq. 3.3. Corollary 4.3. 
In the standard Gaussian design model, the variance of the noise can be accurately estimated by bσ2/n ≡bτ 2 −bR(y, X, λ, bτ)/δ where δ = n/p and other variables are defined as in Theorem 4.2. In other words, we have lim n→∞ˆσ2/n = σ2 0 , (4.2) almost surely, providing us a consistent estimator for the variance of the noise in the LASSO. Remark 3. Theorems 4.1 and 4.2 provide a rigorous method for selecting the regularization parameter optimally. Also, note that obtaining the expression in Theorem 4.2 only requires solving one solution path to LASSO problem versus k solution paths required by k-fold cross-validation methods. Additionally, using the exponential convergence of AMP algorithm for the standard gaussian design model, proved by [BM12b], one can use O(log(1/ϵ)) iterations of AMP algorithm and Theorem 4.1 to obtain the solution path with an additional error up to O(ϵ). 4.2 General Gaussian Design Model In Section 4.1, we devised our estimators based on the standard Gaussian design model. Motivated by Theorem 4.2, we state the following conjecture of [JM13]. Let {Ω(n)}n∈N be a sequence of inverse covariance matrices. Define the general Gaussian design model by the converging sequence of instances {θ0(n), X(n), σ2(n)}n∈N where for each n, rows of design matrix X(n) are iid multivariate Gaussian, i.e. Np(0, Ω(n)−1). Conjecture 4.4 ([JM13]). Let {θ0(n), X(n), σ2(n)}n∈N be a converging sequence of instances under the general Gaussian design model with a sequence of proper inverse covariance matrices {Ω(n)}n∈N. Assume that the empirical distribution of {(θ0,i, Ωii}p i=1 converges weakly to the distribution of a random vector (Θ0, Υ). Denote the LASSO estimator of θ0(n) by bθ(n, λ) and the LASSO pseudo-data by bθu(n, λ) ≡bθ + ΩXT (y −Xbθ)/[n −∥bθ∥0]. Then, for some τ ∈R, the empirical distribution of {θ0,i, bθu i , Ωii} converges weakly to the joint distribution of (Θ0, Θ0 + τΥ1/2Z, Υ), where Z ∼N1(0, 1), and (Θ0, Υ) are independent random variables. 
Further, the empirical distribution of (y −Xbθ)/[n −∥bθ∥0] converges weakly to N(0, τ 2). A heuristic justification of this conjecture using the replica method from statistical physics is offered in [JM13]. Using the above conjecture, we define the following generalized estimator of the linearly transformed risk under the general Gaussian design model. The construction of the estimator is essentially the same as before i.e. apply SURE to unbiased pseudo-data. 7 Definition 3. For an inverse covariance matrix Ωand a suitable matrix V ∈Rp×p, let W = V ΩV T and define an estimator of ∥V (bθ −θ)∥2 2/p as bΓΩ(y, X, τ, λ, V ) =τ 2 p Tr (WSS) −Tr (W ˜S ˜S) −2Tr W ˜SSΩS ˜SΩ−1 ˜S ˜S + ∥V ΩXT (y −Xbθ)∥2 2 p(n −∥bθ∥0)2 where y ∈Rn and X ∈Rn×p denote the linear observations and the design matrix, respectively. Further, bθ(n, λ) is the LASSO solution for penalty level λ and τ is a real number. S ⊂[p] is the support of bθ and ˜S is [p] \ S. Finally, for a p × p matrix M and subsets D, E of [p] the notation MDE refers to the |D| × |E| sub-matrix of M obtained by intersection of rows with indices from D and columns with indices from E. Derivation of the above formula is rather complicated and we refer the reader to [BEM13] for a detailed argument. A notable case, when V = I, corresponds to the mean squared error of LASSO for the general Gaussian design and the estimator bR(y, X, λ, τ) is just a special case of the estimator bΓΩ(y, X, τ, λ, V ). That is, when V = Ω= I, we have bΓI(y, X, τ, λ, I) = bR(y, X, λ, τ). Now, we state the following analog of Theorem 4.2. Theorem 4.5. Let {θ0(n), X(n), σ2(n)}n∈N be a converging sequence of instances of the general Gaussian design model with the inverse covariance matrices {Ω(n)}n∈N. Denote the LASSO estimator of θ0(n) by bθ(n, λ). If Conjecture 4.4 holds, then, with probability one, lim n→∞∥bθ −θ0∥2 2/p(n) = lim n→∞ bΓΩ(y, X, bτ, λ, I) where bτ = ∥y −Xbθ∥2/[n −∥bθ∥0]. 
In other words, bΓΩ(y, X, bτ, λ, I) is a consistent estimator of the asymptotic MSE of the LASSO. We will assume that a similar state evolution holds for the general design. In fact, for the general case, replica method suggests the relation lim n→∞∥Ω−1 2 (bθ −θ)∥2 2/p(n) = δ(τ 2 −σ2 0). Hence motivated by the Corollary 4.3, we state the following result on the general Gaussian design model. Corollary 4.6. Assume that Conjecture 4.4 holds. In the general Gaussian design model, the variance of the noise can be accurately estimated by ˆσ2(n, Ω)/n ≡bτ 2 −bΓΩ(y, X, bτ, λ, Ω−1 2 )/δ , where δ = n/p and other variables are defined as in Theorem 4.5. Also, we have lim n→∞ˆσ2/n = σ2 0 , almost surely, providing us a consistent estimator for the noise level in LASSO. Corollary 4.6, extends the results stated in Corollary 4.3 to the general Gaussian design matrices. The derivation of formulas in Theorem 4.5 and Corollary 4.6 follows similar arguments as in the standard Gaussian design model. In particular, they are obtained by applying SURE to the distributional result of Conjecture 4.4 and using the stationary condition of the LASSO. Details of this derivation can be found in [BEM13]. 8 References [BC13] A. Belloni and V. Chernozhukov, Least Squares after Model Selection in High-Dimensional Sparse Models, Bernoulli (2013). [BdG11] P. B¨uhlmann and S. Van de Geer, Statistics for high-dimensional data, Springer-Verlag Berlin Heidelberg, 2011. [BEM13] M. Bayati, M. A. Erdogdu, and A. Montanari, Estimating LASSO Risk and Noise Level, long version (in preparation), 2013. [BM12a] M. Bayati and A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing, IEEE Trans. on Inform. Theory 57 (2012), 764–785. [BM12b] , The LASSO risk for gaussian matrices, IEEE Trans. on Inform. Theory 58 (2012). [BRT09] P. Bickel, Y. Ritov, and A. Tsybakov, Simultaneous Analysis of Lasso and Dantzig Selector, The Annals of Statistics 37 (2009), 1705–1732. 
[BS05] Z. Bai and J. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Springer, 2005. [BT09] A. Beck and M. Teboulle, A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, SIAM J. Imaging Sciences 2 (2009), 183–202. [BY93] Z. D. Bai and Y. Q. Yin, Limit of the Smallest Eigenvalue of a Large Dimensional Sample Covariance Matrix, The Annals of Probability 21 (1993), 1275–1294. [CD95] S.S. Chen and D.L. Donoho, Examples of basis pursuit, Proceedings of Wavelet Applications in Signal and Image Processing III (San Diego, CA), 1995. [CRT06] E. C`andes, J. K. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics 59 (2006), 1207–1223. [CT07] E. C`andes and T. Tao, The Dantzig selector: statistical estimation when p is much larger than n, Annals of Statistics 35 (2007), 2313–2351. [DMM09] D. L. Donoho, A. Maleki, and A. Montanari, Message Passing Algorithms for Compressed Sensing, Proceedings of the National Academy of Sciences 106 (2009), 18914–18919. [DMM11] , The noise-sensitivity phase transition in compressed sensing, Information Theory, IEEE Transactions on 57 (2011), no. 10, 6920–6941. [FGH12] J. Fan, S. Guo, and N. Hao, Variance estimation using refitted cross-validation in ultrahigh dimensional regression, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74 (2012), 1467–9868. [JM13] A. Javanmard and A. Montanari, Hypothesis testing in high-dimensional regression under the gaussian random design model: Asymptotic theory, preprint available in arxiv:1301.4240, 2013. [Joh12] I. Johnstone, Gaussian estimation: Sequence and wavelet models, Book draft, 2012. [MB06] N. Meinshausen and P. B¨uhlmann, High-dimensional graphs and variable selection with the lasso, The Annals of Statistics 34 (2006), no. 3, 1436–1462. [NSvdG10] P. B¨uhlmann N. St¨adler and S. 
van de Geer, ℓ1-penalization for Mixture Regression Models (with discussion), Test 19 (2010), 209–285. [RFG09] S. Rangan, A. K. Fletcher, and V. K. Goyal, Asymptotic analysis of map estimation via the replica method and applications to compressed sensing, 2009. [SJKG07] M. Lustig S. Boyd S. J. Kim, K. Koh and D. Gorinevsky, An Interior-Point Method for Large-Scale l1-Regularized Least Squares, IEEE Journal on Selected Topics in Signal Processing 4 (2007), 606–617. [Ste81] C. Stein, Estimation of the mean of a multivariate normal distribution, The Annals of Statistics 9 (1981), 1135–1151. [SZ12] T. Sun and C. H. Zhang, Scaled sparse linear regression, Biometrika (2012), 1–20. [Tib96] R. Tibshirani, Regression shrinkage and selection with the lasso, J. Royal. Statist. Soc B 58 (1996), 267–288. [Wai09] M. J. Wainwright, Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1 constrained quadratic programming, Information Theory, IEEE Transactions on 55 (2009), no. 5, 2183–2202. [ZY06] P. Zhao and B. Yu, On model selection consistency of Lasso, The Journal of Machine Learning Research 7 (2006), 2541–2563. 9
Demixing odors — fast inference in olfaction Agnieszka Grabska-Barwińska Gatsby Computational Neuroscience Unit UCL agnieszka@gatsby.ucl.ac.uk Jeff Beck Duke University jeff@gatsby.ucl.ac.uk Alexandre Pouget University of Geneva Alexandre.Pouget@unige.ch Peter E. Latham Gatsby Computational Neuroscience Unit UCL pel@gatsby.ucl.ac.uk Abstract The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. Here we derive neural implementations of two approximate inference algorithms that could be used by the brain. One is a variational algorithm (which builds on the work of Beck et al., 2012), the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions about connectivity and neural computations, and make different predictions about neural activity. Thus, they should be distinguishable experimentally. If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities. 1 Introduction The problem faced by the sensory system is to infer the underlying causes of a set of input spike trains. For the olfactory system, the input spikes come from a few hundred different types of olfactory receptor neurons, and the problem is to infer which odors caused them. As there are more than 10,000 possible odors, and more than one can be present at a time, the search space for mixtures of odors is combinatorially large.
Nevertheless, olfactory processing is fast: organisms can typically determine what odors are present in a few hundred ms. Here we ask how organisms could do this. Since our focus is on inference, not learning, we assume that the olfactory system has learned both the statistics of odors in the world and the mapping from those odors to olfactory receptor neuron activity. We then choose a particular model for both, and compute, via Bayes' rule, the full posterior distribution. This distribution is, however, highly complex: it tells us, for example, the probability of coffee at a concentration of 14 parts per million (ppm), and no bacon, and a rose at 27 ppm, and acetone at 3 ppm, and no apples, and so on, where the “so on” is a list of thousands more odors. It is unlikely that such detailed information is useful to an organism. It is far more likely that organisms are interested in marginal probabilities, such as whether or not coffee is present independent of all the other odors. Unfortunately, even though we can write down the full posterior, calculation of marginal probabilities is intractable due to the sum over all possible combinations of odors: the number of terms in the sum is exponential in the number of odors. We must, therefore, consider approximate algorithms. Here we consider two: a variational approximation, which naturally generates approximate posterior marginals, and sampling from the posterior, which directly gives us the marginals. Our main goal is to determine which, if either, is capable of performing inference on ecologically relevant timescales using biologically plausible circuits. We begin by introducing a generative model for spikes in a population of olfactory receptor neurons. We then describe the variational and sampling inference schemes. Both descriptions lead very naturally to network equations.
We simulate those equations, and find that both the variational and sampling approaches work well, and require less than 100 ms to converge to a reasonable solution. Therefore, from the point of view of speed and accuracy – things that can be measured from behavioral experiments – it is not possible to rule out either of them. However, they do make different predictions about activity, and so it should be possible to tell them apart from electrophysiological experiments. They also make different predictions about the neural representation of probability distributions. If one or the other could be corroborated experimentally, that would provide valuable insight into how the brain (or at least one part of the brain) codes for probabilities [1]. 2 The generative model for olfaction The generative model consists of a probabilistic mapping from odors (which for us are high-level percepts, such as coffee or bacon, each of which consists of a mixture of many different chemicals) to odorant receptor neurons, and a prior over the presence or absence of odors and their concentrations. It is known that each odor, by itself, activates a different subset of the olfactory receptor neurons; typically on the order of 10%–30% [2]. Here we assume, for simplicity, that activation is linear, so that the activity of odorant receptor neuron i, denoted ri, is linearly related to the concentrations, cj, of the various odors present in a given olfactory scene, plus some background rate, r0. Assuming Poisson noise, the response distribution has the form P(r|c) = ∏_i [(r0 + Σ_j w_ij c_j)^{r_i} / r_i!] exp[−(r0 + Σ_j w_ij c_j)] . (2.1) In a nutshell, ri is Poisson with mean r0 + Σ_j w_ij c_j. In contrast to previous work [3], which used a smooth prior on the concentrations, here we use a spike and slab prior. With this prior, there is a finite probability that the concentration of any particular odor is zero.
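The observation model of Eq. (2.1) is straightforward to evaluate numerically. Below is a minimal NumPy sketch (ours, not the authors' code; array shapes and names are illustrative) of the log-likelihood log P(r|c):

```python
import numpy as np
from scipy.special import gammaln

def poisson_loglik(r, c, W, r0):
    """Log P(r | c) under the linear-Poisson model of Eq. (2.1).

    r  : (Nrec,) observed spike counts
    c  : (Nodor,) odor concentrations
    W  : (Nrec, Nodor) weights (binary in the paper's simulations)
    r0 : scalar background rate
    """
    mu = r0 + W @ c  # mean input to each receptor neuron
    return np.sum(r * np.log(mu) - mu - gammaln(r + 1))
```

Each receptor contributes r_i log μ_i − μ_i − log r_i!, with μ_i = r0 + Σ_j w_ij c_j.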
This prior is much more realistic than a smooth one, as it allows only a small number of odors (out of ∼10,000) to be present in any given olfactory scene. It is modeled by introducing a binary variable, sj, which is 1 if odor j is present and 0 otherwise. For simplicity we assume that odors are independent and statistically homogeneous, P(c|s) = ∏_j [(1 − s_j) δ(c_j) + s_j Γ(c_j|α1, β1)] (2.2a) P(s) = ∏_j π^{s_j} (1 − π)^{1−s_j} (2.2b) where δ(c) is the Dirac delta function and Γ(c|α, β) is the Gamma distribution: Γ(c|α, β) = β^α c^{α−1} e^{−βc} / Γ(α), with Γ(α) the ordinary Gamma function, Γ(α) = ∫_0^∞ dx x^{α−1} e^{−x}. 3 Inference 3.1 Variational inference Because of the delta function in the prior, performing efficient variational inference in our model is difficult. Therefore, we smooth the delta function, and replace it with a Gamma distribution. With this manipulation, the approximate (with respect to the true model, Eq. (2.2a)) prior on c is Pvar(c|s) = ∏_j [(1 − s_j) Γ(c_j|α0, β0) + s_j Γ(c_j|α1, β1)] . (3.1) The approximate prior allows absent odors to have nonzero concentration. We can partially compensate for that by setting the background firing rate, r0, to zero, and choosing α0 and β0 such that the effective background firing rate (due to the small concentration when sj = 0) is equal to r0; see Sec. 4. As is typical in variational inference, we use a factorized approximate distribution. This distribution, denoted Q(c, s|r), was set to Q(c|s, r)Q(s|r) where Q(c|s, r) = ∏_j [(1 − s_j) Γ(c_j|α0j, β0j) + s_j Γ(c_j|α1j, β1j)] (3.2a) Q(s|r) = ∏_j λ_j^{s_j} (1 − λ_j)^{1−s_j} . (3.2b) Introducing auxiliary variables, as described in the Supplementary Material, and minimizing the Kullback-Leibler divergence between Q and the true posterior augmented by the auxiliary variables leads to a set of nonlinear equations for the parameters of Q.
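For concreteness, drawing an olfactory scene from the spike-and-slab prior of Eq. (2.2), with the parameter values used later in Section 4 (π = 3/400, α1 = 1.5, β1 = 1/40), might look as follows. This is our own illustrative sketch; note that NumPy's Gamma sampler takes a scale parameter, i.e. 1/β:

```python
import numpy as np

def sample_scene(n_odors=400, pi=3/400, alpha1=1.5, beta1=1/40, rng=None):
    """Draw (s, c) from the spike-and-slab prior, Eq. (2.2).

    Each odor is present (s_j = 1) with probability pi; present odors get a
    Gamma(alpha1, beta1) concentration, absent odors have concentration 0.
    """
    rng = rng or np.random.default_rng()
    s = rng.random(n_odors) < pi                           # presence indicators
    c = np.where(s, rng.gamma(alpha1, 1.0 / beta1, n_odors), 0.0)
    return s.astype(int), c
```

With these values, a scene contains 3 odors on average, each with mean concentration α1/β1 = 60.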
To simplify those equations, we set α1 to α0 + 1, resulting in α0j = α0 + Σ_i [ r_i w_ij F_j(λ_j, α0j) / Σ_k w_ik F_k(λ_k, α0k) ] (3.3a) L_j ≡ log[λ_j/(1 − λ_j)] = L0j + log(α0j/α0) + α0j log(β0j/β1j) (3.3b) where L0j ≡ log[π/(1 − π)] − α0 log(β0/β1) + log(β1/β1j) (3.3c) F_j(λ, α) ≡ exp[(1 − λ)(Ψ(α) − log β0j) + λ(Ψ(α + 1) − log β1j)] (3.3d) and Ψ(α) ≡ d log Γ(α)/dα is the digamma function. The remaining two parameters, β0j and β1j, are fixed by our choice of weights and priors: β0j = β0 + Σ_i w_ij and β1j = β1 + Σ_i w_ij. To solve Eqs. (3.3a–b) in a way that mimics the kinds of operations that could be performed by neuronal circuitry, we write down a set of differential equations that have fixed points satisfying Eq. (3.3), τ_ρ dρ_i/dt = r_i − ρ_i Σ_j w_ij F_j(λ_j, α0j) (3.4a) τ_α dα0j/dt = α0 + F_j(λ_j, α0j) Σ_i ρ_i w_ij − α0j (3.4b) τ_λ dL_j/dt = L0j + log(α0j/α0) + α0j log(β0j/β1j) − L_j (3.4c) Note that we have introduced an additional variable, ρ_i. This variable is proportional to r_i, but modulated by divisive inhibition: the fixed point of Eq. (3.4a) is ρ_i = r_i / Σ_k w_ik F_k(λ_k, α0k) . (3.5) Close scrutiny of Eqs. (3.4) and (3.3d) might raise some concerns: (i) ρ and α are reciprocally and symmetrically connected; (ii) there are multiplicative interactions between F(λj, α0j) and ρ; and (iii) the neurons need to compute nontrivial nonlinearities, such as the logarithm, the exponential, and a mixture of digamma functions. However: (i) reciprocal and symmetric connectivity exists in the early olfactory processing system [4, 5, 6]; (ii) although multiplicative interactions are in general not easy for neurons, divisive normalization (Eq. (3.5)) has been observed in the olfactory bulb [7]; and (iii) the nonlinearities in our algorithms are not extreme (the logarithm is applied only on the positive range (α0j > α0, Eq. (3.3a)), and F_j(λ, α) is a soft-thresholded linear function; see Fig. S1). Nevertheless, a realistic model would have to approximate Eqs. (3.4a–c), and thus degrade slightly the quality of the inference.
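A single Euler step of the mean-field dynamics, Eq. (3.4), can be sketched as below. This is our own illustration, not the authors' code: the prior-dependent constants (α0, β0j, β1j, and L0j from Eq. (3.3c)) are assumed to be precomputed and passed in, and all time constants are lumped into one tau.

```python
import numpy as np
from scipy.special import psi  # digamma function, Psi(alpha)

def variational_step(rho, a0, L, r, W, prior, dt=0.01, tau=10.0):
    """One Euler step of Eqs. (3.4a-c); all time constants set to tau (ms).

    rho : (Nrec,)  divisively normalized activity
    a0  : (Nodor,) alpha_{0j} parameters
    L   : (Nodor,) log-odds L_j
    """
    lam = 1.0 / (1.0 + np.exp(-L))  # lambda_j = sigmoid(L_j)
    # F_j(lambda_j, alpha_{0j}), Eq. (3.3d)
    F = np.exp((1 - lam) * (psi(a0) - np.log(prior['beta0j']))
               + lam * (psi(a0 + 1) - np.log(prior['beta1j'])))
    drho = (r - rho * (W @ F)) / tau                                    # Eq. (3.4a)
    da0 = (prior['alpha0'] + F * (W.T @ rho) - a0) / tau                # Eq. (3.4b)
    dL = (prior['L0j'] + np.log(a0 / prior['alpha0'])
          + a0 * np.log(prior['beta0j'] / prior['beta1j']) - L) / tau   # Eq. (3.4c)
    return rho + dt * drho, a0 + dt * da0, L + dt * dL
```

Iterating this step relaxes (ρ, α0, L) toward a fixed point of Eq. (3.3).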
3.2 Sampling The second approximate algorithm we consider is sampling. To sample efficiently from our model, we introduce a new set of variables, c̃_j, c_j = c̃_j s_j . (3.6) When written in terms of c̃_j rather than c_j, the likelihood becomes P(r|c̃, s) = ∏_i [(r0 + Σ_j w_ij c̃_j s_j)^{r_i} / r_i!] exp[−(r0 + Σ_j w_ij c̃_j s_j)] . (3.7) Because the value of c̃_j is unconstrained when s_j = 0, we have complete freedom in choosing P(c̃_j|s_j = 0), the piece of the prior corresponding to the absence of odor j. It is convenient to set it to the same prior we use when s_j = 1, which is Γ(c̃_j|α1, β1). With this choice, c̃ is independent of s, and the prior over c̃ is simply P(c̃) = ∏_j Γ(c̃_j|α1, β1) . (3.8) The prior over s, Eq. (2.2b), remains the same. Note that this set of manipulations does not change the model: the likelihood doesn't change, since by definition c̃_j s_j = c_j; when s_j = 1, c̃_j is drawn from the correct prior; and when s_j = 0, c̃_j does not appear in the likelihood. To sample from this distribution we use Langevin sampling on c̃ and Gibbs sampling on s. The former is standard, τ_c dc̃_j/dt = ∂ log P(c̃, s|r)/∂c̃_j + ξ(t) = (α1 − 1)/c̃_j − β1 + s_j Σ_i w_ij [r_i/(r0 + Σ_k w_ik c̃_k s_k) − 1] + ξ(t) (3.9) where ξ(t) is delta-correlated white noise with variance 2τ: ⟨ξ_j(t) ξ_{j′}(t′)⟩ = 2τ δ(t − t′) δ_{jj′}. Because the ultimate goal is to implement this algorithm in networks of neurons, we need a Gibbs sampler that runs asynchronously and in real time. This can be done by discretizing time into steps of length dt, and computing the update probability for each odor on each time step. This is a valid Gibbs sampler only in the limit dt → 0, where no more than one odor can be updated per time step; that is the limit of interest here. The update rule is T(s′_j|c̃, s, r) = ν0 dt P(s′_j|c̃, s, r) + (1 − ν0 dt) Δ(s′_j − s_j) (3.10) where s′_j ≡ s_j(t + dt), s and c̃ should be evaluated at time t, and Δ(s) is the Kronecker delta: Δ(s) = 1 if s = 0 and 0 otherwise.
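The update probability P(s′_j = 1|c̃, s, r) entering Eq. (3.10) follows in closed form from the Poisson likelihood, Eq. (3.7), and the prior over s: the log-odds compare the mean receptor input with and without odor j. A minimal NumPy sketch (ours; names illustrative) is:

```python
import numpy as np

def presence_flip_prob(j, r, W, c_tilde, s, r0, pi):
    """P(s'_j = 1 | c_tilde, s_{-j}, r) for the asynchronous Gibbs update.

    The log-odds sum, over receptors, the change in Poisson log-likelihood
    when odor j's contribution w_ij * c_tilde_j is added to the mean input.
    """
    # Mean receptor input excluding odor j's contribution
    mu_rest = r0 + W @ (c_tilde * s) - W[:, j] * c_tilde[j] * s[j]
    log_odds = (np.log(pi / (1 - pi))
                + np.sum(r * np.log((mu_rest + W[:, j] * c_tilde[j]) / mu_rest)
                         - W[:, j] * c_tilde[j]))
    return 1.0 / (1.0 + np.exp(-log_odds))
```

When odor j drives no receptors (its weight column is zero), the likelihood terms cancel and the flip probability reduces to the prior π.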
As is straightforward to show, this update rule has the correct equilibrium distribution in the limit dt → 0 (see Supplementary Material). Computing P(s′_j = 1|c̃, s, r) is straightforward, and we find that P(s′_j = 1|c̃, s, r) = 1/(1 + exp[−Φ_j]), Φ_j = log[π/(1 − π)] + Σ_i [ r_i log( (r0 + Σ_{k≠j} w_ik c̃_k s_k + w_ij c̃_j) / (r0 + Σ_{k≠j} w_ik c̃_k s_k) ) − c̃_j w_ij ] . (3.11) Equations (3.9) and (3.11) raise almost exactly the same concerns that we saw for the variational approach: (i) c̃ and s are reciprocally and symmetrically connected; (ii) there are multiplicative interactions between c̃ and s; and (iii) the neurons need to compute nontrivial nonlinearities, such as the logarithm and divisive normalization. Additionally, the noise in the Langevin sampler (ξ in Eq. (3.9)) has to be white and have exactly the right variance. Thus, as with the variational approach, we expect a biophysical model to introduce approximations, and, therefore — as with the variational algorithm — degrade slightly the quality of the inference. [Figure 1: Priors over concentration. The true priors – the ones used to generate the data – are shown in red and magenta; these correspond to δ(c) and Γ(c|α1, β1), respectively. The variational prior in the absence of an odor, Γ(c|α0, β0) with α0 = 0.5 and β0 = 20, is shown in blue.] 4 Simulations To determine how fast and accurate these two algorithms are, we performed a set of simulations using either Eq. (3.4) (variational inference) or Eqs. (3.9–3.11) (sampling). For both algorithms, the odors were generated from the true prior, Eq. (2.2). We modeled a small olfactory system, with 40 olfactory receptor types (compared to approximately 350 in humans and 1000 in mice [8]). To keep the ratio of identifiable odors to receptor types similar to the one in humans [8], we assumed 400 possible odors, with 3 odors expected to be present in the scene (π = 3/400).
If an odor was present, its concentration was drawn from a Gamma distribution with α1 = 1.5 and β1 = 1/40. The background spike count, r0, was set to 1. The connectivity matrix was binary and random, with a connection probability, pc (the probability that any particular element is 1), set to 0.1 [2]. All network time constants (τ_ρ, τ_α, τ_λ, τ_c and 1/ν0, from Eqs. (3.4), (3.9) and (3.10)) were set to 10 ms. The differential equations were solved using the Euler method with a time step of 0.01 ms. Because we used α1 = α0 + 1, the choice α1 = 1.5 forced α0 to be 0.5. Our remaining parameter, β0, was set to ensure that, for the variational algorithm, the absent odors (those with sj = 0) contributed a background firing rate of r0 on average. This average background rate is given by Σ_j ⟨w_ij⟩⟨c_j⟩ = pc Nodors α0/β0. Setting this to r0 yields β0 = pc Nodors α0/r0 = 0.1 × 400 × 0.5/1 = 20. The true (Eq. (2.2)) and approximate (Eq. (3.1)) prior distributions over concentration are shown in Fig. 1. Figure 2 shows how the inference process evolves over time for a typical set of odors and concentrations. The top panel shows concentration, with variational inference on the left (where we plot the mean of the posterior distribution over concentration, (1 − λ_j) α0j(t)/β0j(t) + λ_j α1j(t)/β1j(t); see Eq. (3.2)) and sampling on the right (where we plot c̃_j, the output of our Langevin sampler; see Eq. (3.9)) for a case with three odors present. The three colored lines correspond to the odors that were presented, with solid lines for the inferred concentrations and dashed lines for the true ones. Black lines are the odors that were not present. [Figure 2: Example run for the variational algorithm (left) and sampling (right); see text for details. In the bottom left panel the green, blue and red lines go to a probability of 1 (log probability of 0) within about 50 ms. In sampling, the initial value of the concentrations is set to the most likely value under the prior (c̃(0) = (α1 − 1)/β1). The dashed lines are the true concentrations.] At least in this example, both algorithms converge rapidly to the true concentration. In the bottom left panel of Fig. 2 we plot the log-probability that each of the odors is present, λ_j(t). The present odors quickly approach probabilities of 1; the absent odors all have probabilities below 10⁻⁴ within about 200 ms. The bottom right panel shows samples from s_j for all the odors, with dots denoting present odors (s_j(t) = 1) and blanks absent odors (s_j(t) = 0). Beyond about 500 ms, the true odors (the colored lines at the bottom) are on continuously, and for the odors that were not present, s_j is still occasionally 1, but relatively rarely. In Fig. 3 we show the time course of the probability of odors when between 1 and 5 odors were presented. We show only the first 100 ms, to emphasize the initial time course. Again, variational inference is on the left and sampling is on the right. The black lines are the average values of the probability of the correct odors; the gray regions mark the 25th–75th percentiles. Ideally, we would like to compare these numbers to those expected from a true posterior. However, due to its intractability, we must seek different means of comparison. Therefore, we plot the probability of the most likely non-presented odor (red); the average probability of the non-presented odors (green); and the probability of guessing the correct odors via simple template matching (dashed; see Fig. 3 legend for details). Although odors are inferred relatively rapidly (they exceed template matching within 20 ms), there were almost always false positives.
Even with just one odor present, both algorithms consistently report the existence of another odor (red). This problem diminishes with time if fewer odors are presented than the expected three, but it persists for more complex mixtures. The false positives are in fact consistent with human behavior: humans have difficulty correctly identifying more than one odor in a mixture, with the most common problem being false positives [9]. Finally, because the two algorithms encode probabilities differently (see Discussion below), we also look into the time courses of the neural activity. In Fig. 4, we show the log-probability, L (left), and the probability, λ (right), averaged across 400 scenes containing 3 odors (see Supplementary Fig. 2 for the other odor mixtures). The probability of absent odors drops from 3/400 ≈ e⁻⁵ (the prior) to e⁻¹² (the final inferred probability). For the variational approach, this represents a drop in activity of 7 log units, comparable to the increase of about 5 log units for the present odors (whose probability is inferred to be near 1). For the sampling-based approach, on the other hand, this represents a very small drop in activity. Thus, for the variational algorithm the average activity associated with the absent odors exhibits a large drop, whereas for the sampling-based approach the average activity associated with the absent odors starts small and stays small. 5 Discussion We introduced two algorithms for inferring odors from the activity of the odorant receptor neurons. One was a variational method; the other was sampling based. We mapped both algorithms onto dynamical systems, and, assuming time constants of 10 ms (plausible for biophysically realistic networks), tested the time course of the inference. The two algorithms performed with striking similarity: they both inferred odors within about 100 ms and they both had about the same accuracy.
However, since the two methods encode probabilities differently (linear vs. logarithmic encoding), they can be differentiated at the level of neural activity. As can be seen by examining Eqs. (3.4a) and (3.4c), for variational inference the log probability of concentration and presence/absence are related to the dynamical variables via log Q(c_j) ∼ α1j log c_j − β1j c_j (5.1a) log Q(s_j) ∼ L_j s_j (5.1b) where ∼ indicates equality within a constant. If we interpret α0j and L_j as firing rates, then these equations correspond to a linear probabilistic population code [10]: the log probability inferred by the approximate algorithm is linear in firing rate, with a parameter-dependent offset (the term −β1j c_j in Eq. (5.1a)). For the sampling-based algorithm, on the other hand, activity generates samples from the posterior; an average of those samples codes for the probability of an odor being present. Thus, if the olfactory system uses variational inference, activity should code for log probability, whereas if it uses sampling, activity should code for probability. [Figure 3: Inference by networks — initial 100 ms. Black: average value of the probability of correct odors; red: probability of the most likely non-presented odor; green: average probability of the non-presented odors. Shaded areas represent the 25th–75th percentile of values across 400 olfactory scenes. In the variational approach, values are often either 0 or 1, which makes it possible for the mean to land outside of the chosen percentile range; this happens whenever the odors are guessed correctly more than 75% of the time, in which case the 25th–75th percentile collapses to 1, or less than 25% of the time, in which case it collapses to 0. The left panel shows variational inference, where we plot λ_j(t); the right one shows sampling, where we plot s_k(t) averaged over 20 repetitions of the algorithm (to avoid arbitrariness in decoding/smoothing/averaging the samples). Both methods exceed template matching within 20 ms (dashed line). (Template matching finds the odors (the j's) that maximize the normalized dot product between the activity, r_i, and the weights, w_ij, associated with odor j; that is, it chooses the j's that maximize Σ_i r_i w_ij / (Σ_i r_i² Σ_i w_ij²)^{1/2}. The number of odors chosen by template matching was set to the number of odors presented.) For more complex mixtures, sampling is slightly more efficient at inferring the presented odors. See Supplementary Material for the time course out to 1 second and for mixtures of up to 10 odors.] [Figure 4: Average time course of log p(s) (left) and p(s) (right, same as in Fig. 3). For the variational algorithm, the activity of the neurons codes for log probability (relative to some background to keep firing rates non-negative). For this algorithm, the drop in probability of the non-presented odors from about e⁻⁵ to e⁻¹² corresponds to a large drop in firing rate. For the sampling-based algorithm, on the other hand, activity codes for probability, and there is almost no drop in activity.] There are two ways to determine which. One is to note that for the variational algorithm there is a large drop in the average activity of the neurons coding for the non-present odors (Fig.
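The template-matching baseline described in the Figure 3 caption (a normalized dot product between the response and each odor's weight vector) takes only a few lines of NumPy. The sketch below is ours, with illustrative names:

```python
import numpy as np

def template_match(r, W, n_report):
    """Rank odors by the score sum_i r_i w_ij / (sum_i r_i^2 sum_i w_ij^2)^(1/2)
    and return the indices of the top n_report odors."""
    scores = (r @ W) / (np.linalg.norm(r) * np.linalg.norm(W, axis=0))
    return np.argsort(scores)[::-1][:n_report]
```

Because the score is a cosine similarity, it is invariant to the overall scale of the response, but it ignores the explaining-away between odors that the two inference algorithms capture.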
4 and Supplementary Figure 2). This drop could be detected with electrophysiology. The other focuses on the present odors, and requires a comparison between the posterior probability inferred by an animal and neural activity. The inferred probability can be measured by so-called “opt-out” experiments [11]; the latter by sticking an electrode into an animal's head, which is by now standard. The two algorithms also make different predictions about the activity coding for concentration. For the variational approach, activity, α0j, codes for the parameters of a probability distribution. Importantly, in the variational scheme the mean and variance of the distribution are tied – both are proportional to activity. Sampling, on the other hand, can represent arbitrary concentration distributions. These two schemes could, therefore, be distinguished by separately manipulating average concentration and uncertainty – by, for example, showing either very similar or very different odors. Unfortunately, it is not clear where exactly one needs to stick the electrode to record the trace of the olfactory inference. A good place to start would be the olfactory bulb, where odor representations have been studied extensively [12, 13, 14]. For example, the dendro-dendritic connections observed in this structure [4] are particularly well suited to meet the symmetry requirements on w_ij. We note in passing that these connections have been the subject of many theoretical studies. Most, however, considered single odors [15, 6, 16], for which one does not need a complicated inference process. An early notable exception to the two-odor standard was Zhaoping [17], who proposed a model for serial analysis of complex mixtures, whereby higher cortical structures would actively adapt the already-recognized components and send a feedback signal to the lower structures. Exactly how her network relates to our inference algorithms remains unclear.
We should also point out that although the olfactory bulb is a likely location for at least part of our two inference algorithms, both are sufficiently complicated that they may need to be performed by higher cortical structures, such as the anterior piriform cortex [18, 19]. Future directions. We have made several unrealistic assumptions in this analysis. For instance, the generative model was very simple: we assumed that concentrations added linearly, that weights were binary (so that each odor activated a subset of the olfactory receptor neurons at a finite value, and did not activate the rest at all), and that noise was Poisson. None of these are likely to be exactly true. And we considered priors such that all odors were independent. This too is unlikely to be true – for instance, the set of odors one expects in a restaurant is very different from the ones one expects in a toxic waste dump, consistent with the fact that responses in the olfactory bulb are modulated by task-relevant behavior [20]. Taking these effects into account will require a more complicated, almost certainly hierarchical, model. We have also focused solely on inference: we assumed that the network knew perfectly both the mapping from odors to odorant receptor neurons and the priors. In fact, both have to be learned. Finally, the neurons in our network had to implement relatively complicated nonlinearities: logs, exponents, and digamma and quadratic functions, and the neurons had to be reciprocally connected. Building a network that can both exhibit the proper nonlinearities (at least approximately) and learn the reciprocal weights is challenging. While these issues are nontrivial, they do not appear to be insurmountable. We expect, therefore, that a more realistic model will retain many of the features of the simple model we presented here. References [1] J. Fiser, P. Berkes, G. Orban, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representations.
Trends Cogn. Sci. (Regul. Ed.), 14(3):119–130, Mar 2010. [2] R. Vincis, O. Gschwend, K. Bhaukaurally, J. Beroud, and A. Carleton. Dense representation of natural odorants in the mouse olfactory bulb. Nat. Neurosci., 15(4):537–539, Apr 2012. [3] Jeff Beck, Katherine Heller, and Alexandre Pouget. Complex inference in neural circuits with probabilistic population codes and topic models. In NIPS, 2012. [4] W. Rall and G. M. Shepherd. Theoretical reconstruction of field potentials and dendrodendritic synaptic interactions in olfactory bulb. J. Neurophysiol., 31(6):884–915, Nov 1968. [5] Shepherd GM, Chen WR, and Greer CA. The synaptic organization of the brain, volume 4, chapter Olfactory bulb, pages 165–216. Oxford University Press Oxford, 2004. [6] A. A. Koulakov and D. Rinberg. Sparse incomplete representations: a potential role of olfactory granule cells. Neuron, 72(1):124–136, Oct 2011. [7] Shawn Olsen, Vikas Bhandawat, and Rachel Wilson. Divisive normalization in olfactory population codes. Neuron, 66(2):287–299, 2010. [8] P. Mombaerts. Genes and ligands for odorant, vomeronasal and taste receptors. Nat. Rev. Neurosci., 5(4):263–278, Apr 2004. [9] D. G. Laing and G. W. Francis. The capacity of humans to identify odors in mixtures. Physiol. Behav., 46(5):809–814, Nov 1989. [10] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nat. Neurosci., 9(11):1432–1438, Nov 2006. [11] R. Kiani and M. N. Shadlen. Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324(5928):759–764, May 2009. [12] G. Laurent, M. Stopfer, R. W. Friedrich, M. I. Rabinovich, A. Volkovskii, and H. D. Abarbanel. Odor encoding as an active, dynamical process: experiments, computation, and theory. Annu. Rev. Neurosci., 24:263–297, 2001. [13] H. Spors and A. Grinvald. Spatio-temporal dynamics of odor representations in the mammalian olfactory bulb. Neuron, 34(2):301–315, Apr 2002. 
[14] Kevin Cury and Naoshige Uchida. Robust odor coding via inhalation-coupled transient activity in the mammalian olfactory bulb. Neuron, 68(3):570–585, 2010. [15] Z. Li and J. J. Hopfield. Modeling the olfactory bulb and its neural oscillatory processings. Biol Cybern, 61(5):379–392, 1989. [16] Y. Yu, T. S. McTavish, M. L. Hines, G. M. Shepherd, C. Valenti, and M. Migliore. Sparse distributed representation of odors in a large-scale olfactory bulb circuit. PLoS Comput. Biol., 9(3):e1003014, 2013. [17] Z. Li. A model of olfactory adaptation and sensitivity enhancement in the olfactory bulb. Biol Cybern, 62(4):349–361, 1990. [18] Julie Chapuis and Donald Wilson. Bidirectional plasticity of cortical pattern recognition and behavioral sensory acuity. Nature Neuroscience, 15(1):155–161, 2012. [19] Keiji Miura, Zachary Mainen, and Naoshige Uchida. Odor representations in olfactory cortex: distributed rate coding and decorrelated population activity. Neuron, 74(6):1087–1098, 2012. [20] R. A. Fuentes, M. I. Aguilar, M. L. Aylwin, and P. E. Maldonado. Neuronal activity of mitral-tufted cells in awake rats during passive and active odorant stimulation. J. Neurophysiol., 100(1):422–430, Jul 2008.
Zero-Shot Learning Through Cross-Modal Transfer Richard Socher, Milind Ganjoo, Christopher D. Manning, Andrew Y. Ng Computer Science Department, Stanford University, Stanford, CA 94305, USA richard@socher.org, {mganjoo, manning}@stanford.edu, ang@cs.stanford.edu Abstract This work introduces a model that can recognize objects in images even if no training data is available for the object class. The only necessary knowledge about unseen visual categories comes from unsupervised text corpora. Unlike previous zero-shot learning models, which can only differentiate between unseen classes, our model can operate on a mixture of seen and unseen classes, simultaneously obtaining state of the art performance on classes with thousands of training images and reasonable performance on unseen classes. This is achieved by seeing the distributions of words in texts as a semantic space for understanding what objects look like. Our deep learning model does not require any manually defined semantic or visual features for either words or images. Images are mapped to be close to semantic word vectors corresponding to their classes, and the resulting image embeddings can be used to distinguish whether an image is of a seen or unseen class. We then use novelty detection methods to differentiate unseen classes from seen classes. We demonstrate two novelty detection strategies; the first gives high accuracy on unseen classes, while the second is conservative in its prediction of novelty and keeps the seen classes’ accuracy high. 1 Introduction The ability to classify instances of an unseen visual class, called zero-shot learning, is useful in several situations. There are many species and products without labeled data and new visual categories, such as the latest gadgets or car models, that are introduced frequently. In this work, we show how to make use of the vast amount of knowledge about the visual world available in natural language to classify unseen objects. 
We attempt to model people’s ability to identify unseen objects even if the only knowledge about that object came from reading about it. For instance, after reading the description of a two-wheeled self-balancing electric vehicle, controlled by a stick, with which you can move around while standing on top of it, many would be able to identify a Segway, possibly after being briefly perplexed because the new object looks different from previously observed classes. We introduce a zero-shot model that can predict both seen and unseen classes. For instance, without ever seeing a cat image, it can determine whether an image shows a cat or a known category from the training set such as a dog or a horse. The model is based on two main ideas. Fig. 1 illustrates the model. First, images are mapped into a semantic space of words that is learned by a neural network model [15]. Word vectors capture distributional similarities from a large, unsupervised text corpus. By learning an image mapping into this space, the word vectors get implicitly grounded by the visual modality, allowing us to give prototypical instances for various words. Second, because classifiers prefer to assign test images into classes for which they have seen training examples, the model incorporates novelty detection which determines whether a new image is on the manifold of known categories. If the image is of a known category, a standard classifier can be used. Otherwise, images are assigned to a class based on the likelihood of being an unseen category. We explore two strategies for novelty detection, both of which are based on ideas from outlier detection methods. The first strategy prefers high accuracy for unseen classes, the second for seen classes. 
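To make the two-stage decision rule concrete, here is a deliberately simplified sketch of our own devising (the paper's model uses probabilistic outlier detection on the learned manifold, not a hard distance threshold): an image embedding, already mapped into word-vector space, is assigned to a seen class unless it lies far from every seen-class mean, in which case it is matched against unseen-class word vectors.

```python
import numpy as np

def classify_zero_shot(x, seen_means, seen_labels,
                       unseen_vectors, unseen_labels, threshold):
    """Toy cross-modal decision rule in word-vector space.

    x is treated as lying on the manifold of seen classes if it is within
    `threshold` of some seen-class mean; otherwise it is treated as novel
    and matched to the nearest unseen-class word vector.
    """
    d_seen = np.linalg.norm(seen_means - x, axis=1)
    if d_seen.min() < threshold:                 # not novel: use seen classes
        return seen_labels[int(d_seen.argmin())]
    d_unseen = np.linalg.norm(unseen_vectors - x, axis=1)
    return unseen_labels[int(d_unseen.argmin())]
```

The threshold trades off the two regimes discussed in the abstract: a loose threshold keeps seen-class accuracy high, while a tight one favors detecting unseen classes.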
Unlike previous work on zero-shot learning which can only predict intermediate features or differentiate between various zero-shot classes [21, 27], our joint model can achieve both state of the art accuracy on known classes as well as reasonable performance on unseen classes. Furthermore, compared to related work on knowledge transfer [21, 28] we do not require manually defined semantic or visual attributes for the zero-shot classes, allowing us to use state-of-the-art unsupervised and unaligned image features instead along with unsupervised and unaligned language corpora.

Figure 1: Overview of our cross-modal zero-shot model. We first map each new testing image into a lower dimensional semantic word vector space. Then, we determine whether it is on the manifold of seen images. If the image is ‘novel’, meaning not on the manifold, we classify it with the help of unsupervised semantic word vectors. In this example, the unseen classes are truck and cat.

2 Related Work We briefly outline connections and differences to five related lines of research. Due to space constraints, we cannot do justice to the complete literature. Zero-Shot Learning. The work most similar to ours is that by Palatucci et al. [27]. They map fMRI scans of people thinking about certain words into a space of manually designed features and then classify using these features. They are able to predict semantic features even for words for which they have not seen scans and experiment with differentiating between several zero-shot classes. However, they do not classify new test instances into both seen and unseen classes. We extend their approach to allow for this setup using novelty detection. Lampert et al. [21] construct a set of binary attributes for the image classes that convey various visual characteristics, such as “furry” and “paws” for bears and “wings” and “flies” for birds.
Later, in section 6.4, we compare our method to their method of performing Direct Attribute Prediction (DAP). One-Shot Learning One-shot learning [19, 20] seeks to learn a visual object class by using very few training examples. This is usually achieved by either sharing of feature representations [2], model parameters [12] or via similar context [14]. A recent related work on one-shot learning is that of Salakhutdinov et al. [29]. Similar to their work, our model is based on using deep learning techniques to learn low-level image features followed by a probabilistic model to transfer knowledge, with the added advantage of needing no training data due to the cross-modal knowledge transfer from natural language. Knowledge and Visual Attribute Transfer. Lampert et al. and Farhadi et al. [21, 10] were two of the first to use well-designed visual attributes of unseen classes to classify them. This is different to our setting since we only have distributional features of words learned from unsupervised, nonparallel corpora and can classify between categories that have thousands or zero training images. Qi et al. [28] learn when to transfer knowledge from one category to another for each instance. Domain Adaptation. Domain adaptation is useful in situations in which there is a lot of training data in one domain but little to none in another. For instance, in sentiment analysis one could train a classifier for movie reviews and then adapt from that domain to book reviews [4, 13]. While related, this line of work is different since there is data for each class but the features may differ between domains. Multimodal Embeddings. Multimodal embeddings relate information from multiple sources such as sound and video [25] or images and text. Socher et al. [31] project words and image regions into a common space using kernelized canonical correlation analysis to obtain state of the art performance in annotation and segmentation. 
Similar to our work, they use unsupervised large text corpora to learn semantic word representations. Their model does, however, require a small amount of training data for each class. Some work has been done on multimodal distributional methods [11, 23]. Most recently, Bruni et al. [5] worked on perceptually grounding word meaning and showed that joint models are better able to predict the color of concrete objects. 3 Word and Image Representations We begin the description of the full framework with the feature representations of words and images. Distributional approaches are very common for capturing semantic similarity between words. In these approaches, words are represented as vectors of distributional characteristics – most often their co-occurrences with words in context [26, 9, 1, 32]. These representations have proven very effective in natural language processing tasks such as sense disambiguation [30], thesaurus extraction [24, 8] and cognitive modeling [22]. Unless otherwise mentioned, all word vectors are initialized with pre-trained d = 50-dimensional word vectors from the unsupervised model of Huang et al. [15]. Using free Wikipedia text, their model learns word vectors by predicting how likely it is for each word to occur in its context. Their model uses both local context in the window around each word and global document context, thus capturing distributional syntactic and semantic information. For further details and evaluations of these embeddings, see [3, 7]. We use the unsupervised method of Coates et al. [6] to extract I image features from raw pixels. Each image is henceforth represented by a vector x ∈ R^I. 4 Projecting Images into Semantic Word Spaces In order to learn semantic relationships and class membership of images we project the image feature vectors into the d-dimensional, semantic word space F. During training and testing, we consider a set of classes Y.
Some of the classes y in this set will have available training data, others will be zero-shot classes without any training data. We define the former as the seen classes Ys and the latter as the unseen classes Yu. Let W = Ws ∪ Wu be the set of word vectors in R^d for both seen and unseen visual classes, respectively. All training images x^(i) ∈ Xy of a seen class y ∈ Ys are mapped to the word vector wy corresponding to the class name. To train this mapping, we train a neural network to minimize the following objective function:

J(Θ) = Σ_{y ∈ Ys} Σ_{x^(i) ∈ Xy} ||wy − θ^(2) f(θ^(1) x^(i))||²,  (1)

where θ^(1) ∈ R^{h×I}, θ^(2) ∈ R^{d×h} and the standard nonlinearity f = tanh. We define Θ = (θ^(1), θ^(2)). A two-layer neural network is shown to outperform a single linear mapping in the experiments section below. The cost function is trained with standard backpropagation and L-BFGS. By projecting images into the word vector space, we implicitly extend the semantics with a visual grounding, allowing us to query the space, for instance for prototypical visual instances of a word. Fig. 2 shows a visualization of the 50-dimensional semantic space with word vectors and images of both seen and unseen classes. The unseen classes are cat and truck. The mapping from 50 to 2 dimensions was done with t-SNE [33]. We can observe that most classes are tightly clustered around their corresponding word vector while the zero-shot classes (cat and truck for this mapping) do not have close-by vectors. However, the images of the two zero-shot classes are close to semantically similar classes (such as in the case of cat, which is close to dog and horse but is far away from car or ship). This observation motivated the idea for first detecting images of unseen classes and then classifying them to the zero-shot word vectors. 5 Zero-Shot Learning Model In this section we first give an overview of our model and then describe each of its components.
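As a concrete illustration of the Section 4 mapping, the objective in Eq. (1) can be sketched in NumPy as below. The dimensions, class names, and random parameters are toy assumptions for illustration only; the paper trains Θ with standard backpropagation and L-BFGS, which is omitted here.

```python
import numpy as np

def map_to_word_space(x, theta1, theta2):
    # Two-layer mapping theta^(2) tanh(theta^(1) x) from image features into word space.
    return theta2 @ np.tanh(theta1 @ x)

def objective(theta1, theta2, images_by_class, word_vectors):
    # Eq. (1): summed squared distance between each mapped training image
    # and the word vector of its class.
    total = 0.0
    for y, images in images_by_class.items():
        w = word_vectors[y]
        for x in images:
            diff = w - map_to_word_space(x, theta1, theta2)
            total += float(diff @ diff)
    return total

# Toy sizes: I = 4 image features, h = 3 hidden units, d = 2 word dimensions.
rng = np.random.default_rng(0)
theta1 = 0.1 * rng.standard_normal((3, 4))   # theta^(1) in R^{h x I}
theta2 = 0.1 * rng.standard_normal((2, 3))   # theta^(2) in R^{d x h}
images_by_class = {"dog": [rng.standard_normal(4) for _ in range(5)]}
word_vectors = {"dog": np.array([1.0, -1.0])}
J = objective(theta1, theta2, images_by_class, word_vectors)
```

An optimizer would then decrease J by adjusting θ^(1) and θ^(2), pulling the mapped images toward their class word vector.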
In general, we want to predict p(y|x), the conditional probability for both seen and unseen classes y ∈ Ys ∪ Yu given an image from the test set x ∈ Xt. To achieve this we will employ the semantic vectors f ∈ Ft to which these images have been mapped. Because standard classifiers will never predict a class that has no training examples, we introduce a binary novelty random variable V ∈ {s, u} which indicates whether an image is in a seen or unseen class. Let Xs be the set of all feature vectors for training images of seen classes and Fs their corresponding semantic vectors. We similarly define Fy to be the semantic vectors of class y. We predict a class y for a new input image x and its mapped semantic vector f via:

p(y|x, Xs, Fs, W, θ) = Σ_{V ∈ {s,u}} P(y|V, x, Xs, Fs, W, θ) P(V|x, Xs, Fs, W, θ).

Marginalizing out the novelty variable V allows us to first distinguish between seen and unseen classes. Each type of image can then be classified differently. The seen image classifier can be a state of the art softmax classifier while the unseen classifier can be a simple Gaussian discriminator.

Figure 2: T-SNE visualization of the semantic word space. Word vector locations are highlighted and mapped image locations are shown both for images for which this mapping has been trained and unseen images. The unseen classes are cat and truck.

5.1 Strategies for Novelty Detection We now consider two strategies for predicting whether an image is of a seen or unseen class. The term P(V = u|x, Xs, Fs, W, θ) is the probability of an image being in an unseen class. An image from an unseen class will not be very close to the existing training images but will still be roughly in the same semantic region. For instance, cat images are closest to dogs even though they are not as close to the dog word vector as most dog images are.
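The marginalization over the novelty variable V above can be sketched in a few lines. The class posteriors below are hypothetical numbers for one test image; a seen class has probability 0 under V = u and an unseen class has probability 0 under V = s, so each class score is its classifier output weighted by the novelty belief.

```python
def predict_class(p_novel, p_seen, p_unseen):
    # p_novel is P(V = u | x); the two dicts hold P(y | V = s, x) and P(y | V = u, x).
    scores = {y: (1.0 - p_novel) * p for y, p in p_seen.items()}
    for y, p in p_unseen.items():
        scores[y] = scores.get(y, 0.0) + p_novel * p
    return max(scores, key=scores.get)

# Hypothetical posteriors for one test image:
p_seen = {"dog": 0.7, "horse": 0.3}     # softmax over seen classes
p_unseen = {"cat": 0.9, "truck": 0.1}   # Gaussian scores over unseen classes
```

With a strong novelty belief (`p_novel = 0.8`) the unseen class "cat" wins; with a weak one (`p_novel = 0.1`) the seen class "dog" wins.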
Hence, at test time, we can use outlier detection methods to determine whether an image is in a seen or unseen class. We compare two strategies for outlier detection. Both are computed on the manifold of training images that were mapped to the semantic word space. The first method is relatively liberal in its assessment of novelty. It uses simple thresholds on the marginals assigned to each image under isometric, class-specific Gaussians. The mapped points of seen classes are used to obtain this marginal. For each seen class y ∈ Ys, we compute P(x|Xy, wy, Fy, θ) = P(f|Fy, wy) = N(f|wy, Σy). The Gaussian of each class is parameterized by the corresponding semantic word vector wy for its mean and a covariance matrix Σy that is estimated from all the mapped training points with that label. We restrict the Gaussians to be isometric to prevent overfitting. For a new image x, the outlier detector then becomes the indicator function that is 1 if the marginal probability is below a certain threshold Ty for all the classes:

P(V = u|f, Xs, W, θ) := 1{∀y ∈ Ys : P(f|Fy, wy) < Ty}

We provide an experimental analysis for various thresholds T below. The thresholds are selected to make at least some fraction of the vectors from training images above threshold, that is, to be classified as a seen class. Intuitively, smaller thresholds result in fewer images being labeled as unseen. The main drawback of this method is that it does not give a real probability for an outlier. An alternative would be to use the method of [17] to obtain an actual outlier probability in an unsupervised way. Then, we can obtain the conditional class probability using a weighted combination of classifiers for both seen and unseen classes (described below). Fig. 2 shows that many unseen images are not technically outliers of the complete data manifold. Hence this method is very conservative in its assignment of novelty and therefore preserves high accuracy for seen classes.
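A minimal sketch of the first, threshold-based detector described above follows. A single shared variance and a single log-density threshold are simplifying assumptions here; the paper estimates Σy per class from the mapped training points and selects per-class thresholds Ty.

```python
import math

def log_iso_gaussian(f, mean, sigma2):
    # Log density of N(f | mean, sigma2 * I) for a d-dimensional point f.
    d = len(mean)
    sq = sum((fi - mi) ** 2 for fi, mi in zip(f, mean))
    return -0.5 * (d * math.log(2.0 * math.pi * sigma2) + sq / sigma2)

def is_unseen(f, class_means, sigma2, log_threshold):
    # Indicator from Section 5.1: 1 iff f falls below the threshold
    # under the Gaussian of every seen class.
    return all(log_iso_gaussian(f, m, sigma2) < log_threshold
               for m in class_means.values())

# Hypothetical 2-D word-space means for two seen classes:
means = {"dog": (1.0, 0.0), "horse": (-1.0, 0.0)}
near_dog = (1.05, 0.02)   # mapped image that looks like a dog
far_away = (0.0, 3.0)     # candidate unseen-class image
```

Raising the threshold makes the detector more eager to flag images as unseen, which is exactly the trade-off swept over in the experiments.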
We need to slightly modify the original approach since we distinguish between training and test sets. We do not want to use the set of all test images since they would then not be considered outliers anymore. The modified version has the same two parameters: k = 20, the number of nearest neighbors that are considered to determine whether a point is an outlier and λ = 3, which can be roughly seen as a multiplier on the standard deviation. The larger it is, the more a point has to deviate from the mean in order to be considered an outlier. For each point f ∈ Ft, we define a context set C(f) ⊆ Fs of k nearest neighbors in the training set of seen categories. We can compute the probabilistic set distance pdist of each point f to the points in C(f):

pdist_λ(f, C(f)) = λ √( Σ_{q ∈ C(f)} d(f, q)² / |C(f)| ),

where d(f, q) defines some distance function in the word space. We use Euclidean distances. Next we define the local outlier factor:

lof_λ(f) = pdist_λ(f, C(f)) / E_{q ∼ C(f)}[pdist_λ(q, C(q))] − 1.

Large lof values indicate increasing outlierness. In order to obtain a probability, we next define a normalization factor Z that can be seen as a kind of standard deviation of lof values in the training set of seen classes:

Z_λ(Fs) = λ √( E_{q ∼ Fs}[(lof(q))²] ).

Now, we can define the Local Outlier Probability:

LoOP(f) = max{0, erf(lof_λ(f) / Z_λ(Fs))},  (2)

where erf is the Gauss Error function. This probability can now be used to weigh the seen and unseen classifiers by the appropriate amount given our belief about the outlierness of a new test image. 5.2 Classification In the case where V = s, i.e. the point is considered to be of a known class, we can use any probabilistic classifier for obtaining P(y|V = s, x, Xs). We use a softmax classifier on the original I-dimensional features. For the zero-shot case where V = u we assume an isometric Gaussian distribution around each of the novel class word vectors and assign classes based on their likelihood.
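The LoOP computation of Section 5.1 can be sketched as a toy implementation. The four-point training set and k = 2 are illustrative assumptions to keep the example tiny (the paper uses k = 20, λ = 3, and the mapped training points of the seen classes).

```python
import math

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def knn(p, pool, k):
    # k nearest neighbors of p among pool (p itself excluded): the context set C(p).
    return sorted((q for q in pool if q is not p), key=lambda q: sq_dist(p, q))[:k]

def pdist(f, context, lam):
    # Probabilistic set distance: lambda * root-mean-square distance to the context set.
    return lam * math.sqrt(sum(sq_dist(f, q) for q in context) / len(context))

def lof(f, train, k, lam):
    # Local outlier factor: pdist of f relative to the mean pdist of its neighbors.
    C = knn(f, train, k)
    denom = sum(pdist(q, knn(q, train, k), lam) for q in C) / len(C)
    return pdist(f, C, lam) / denom - 1.0

def loop(f, train, k=2, lam=3.0):
    # Local Outlier Probability (Eq. 2): erf-normalized lof, clipped at zero.
    Z = lam * math.sqrt(sum(lof(q, train, k, lam) ** 2 for q in train) / len(train))
    return max(0.0, math.erf(lof(f, train, k, lam) / Z))

# Four mapped training points; the offset fourth point keeps Z nonzero.
train = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.2, 0.2)]
```

A point far from the cluster receives a LoOP near 1, while a point inside it receives roughly 0, which is the weighting behavior used in the Bayesian pipeline.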
6 Experiments For most of our experiments we utilize the CIFAR-10 dataset [18]. The dataset has 10 classes, each with 5,000 32 × 32 × 3 RGB images. We use the unsupervised feature extraction method of Coates and Ng [6] to obtain a 12,800-dimensional feature vector for each image. For word vectors, we use a set of 50-dimensional word vectors from the Huang dataset [15] that correspond to each CIFAR category. During training, we omit two of the 10 classes and reserve them for zero-shot analysis. The remaining categories are used for training. In this section we first analyze the classification performance for seen classes and unseen classes separately. Then, we combine images from the two types of classes, and discuss the trade-offs involved in our two unseen class detection strategies. Next, the overall performance of the entire classification pipeline is summarized and compared to another popular approach by Lampert et al. [21]. Finally, we run a few additional experiments to assess quality and robustness of our model.

Figure 4: Comparison of accuracies for images from previously seen and unseen categories when unseen images are detected under the (a) Gaussian threshold model, (b) LoOP model. The average accuracy on all images is shown in (c) for both models. We also show a line corresponding to the single accuracy achieved in the Bayesian pipeline. In these examples, the zero-shot categories are “cat” and “truck”.

6.1 Seen and Unseen Classes Separately First, we evaluate the classification accuracy when presented only with images from classes that have been used in training. We train a softmax classifier to label one of 8 classes from CIFAR-10 (2 are reserved for zero-shot learning). In this case, we achieve an accuracy of 82.5% on the set of classes excluding cat and truck, which closely matches the SVM-based classification results in the original Coates and Ng paper [6] that used all 10 classes. We now focus on classification between only two zero-shot classes. In this case, the classification is based on isometric Gaussians which amounts to simply comparing distances between word vectors of unseen classes and an image mapped into semantic space. Here, the performance is good if there is at least one seen class similar to the zero-shot class. For instance, when cat and dog are taken out from training, the resulting zero-shot classification does not work well because none of the other 8 categories is similar enough to both images to learn a good semantic distinction. On the other hand, if cat and truck are taken out, then the cat vectors can be mapped to the word space thanks to similarities to dogs and trucks can be distinguished thanks to car, yielding better performance.

Figure 3: Visualization of classification accuracy achieved for unseen images, for different choices of zero-shot classes selected before training.

Fig. 3 shows the accuracy achieved in distinguishing images belonging to various combinations of zero-shot classes. We observe, as expected, that the maximum accuracy is achieved when choosing semantically distinct categories. For instance, frog-truck and cat-truck do very well. The worst accuracy is obtained when cat and dog are chosen instead. From the figure we see that for certain combinations of zero-shot classes, we can achieve accuracies up to 90%.
6.2 Influence of Novelty Detectors on Average Accuracy Our next area of investigation is to determine the average performance of the classifier for the overall dataset that includes both seen and unseen images. We compare the performance when each image is passed through either of the two novelty detectors which decide with a certain probability (in the second scenario) whether an image belongs to a class that was used in training. Depending on this choice, the image is either passed through the softmax classifier for seen category images, or assigned to the class of the nearest semantic word vector for unseen category images. Fig. 4 shows the accuracies for test images for different choices made by the two scenarios for novelty detection. The test set includes an equal number of images from each category, with 8 categories having been seen before, and 2 being new. We plot the accuracies of the two types of images separately for comparison. Firstly, at the left extreme of the curve, the Gaussian unseen image detector treats all of the images as unseen, and the LoOP model takes the probability threshold for an image being unseen to be 0. At this point, with all unseen images in the test set being treated as such, we achieve the highest accuracies, at 90% for this zero-shot pair. Similarly, at the other extreme of the curve, all images are classified as belonging to a seen category, and hence the softmax classifier for seen images gives the best possible accuracy for these images. Between the extremes, the curves for unseen image accuracies and seen image accuracies fall and rise at different rates. Since the Gaussian model is liberal in designating an image as belonging to an unseen category, it treats more of the images as unseen, and hence we continue to get high unseen class accuracies along the curve.
The LoOP model, which tries to detect whether an image could be regarded as an outlier for each class, does not assign very high outlier probabilities to zero-shot images because a large number of them are spread inside the manifold of seen images (see Fig. 2 for a 2-dimensional visualization of the originally 50-dimensional space). Thus, it continues to treat the majority of images as seen, leading to high seen class accuracies. Hence, the LoOP model can be used in scenarios where one does not want to degrade the high performance on classes from the training set but allow for the possibility of unseen classes. We also see from Fig. 4 (c) that since most images in the test set belong to previously seen categories, the LoOP model, which is conservative in assigning the unseen label, gives better overall accuracies than the Gaussian model. In general, we can choose an acceptable threshold for seen class accuracy and achieve a corresponding unseen class accuracy. For example, at 70% seen class accuracy in the Gaussian model, unseen classes can be classified with accuracies of between 30% and 15%, depending on the class. Random chance is 10%. 6.3 Combining predictions for seen and unseen classes The final step in our experiments is to perform the full Bayesian pipeline as defined by Equation 2. We obtain a prior probability of an image being an outlier. The LoOP model outputs a probability for the image instance being an outlier, which we use directly. For the Gaussian threshold model, we tune a cutoff fraction for log probabilities beyond which images are classified as outliers. We assign probabilities 0 and 1 to either side of this threshold. We show the horizontal lines corresponding to the overall accuracy for the Bayesian pipeline on Figure 4. 6.4 Comparison to attribute-based classification To establish a context for comparing our model performance, we also run the attribute-based classification approach outlined by Lampert et al. [21].
We construct an attribute set of 25 attributes highlighting different aspects of the CIFAR-10 dataset, with certain aspects dealing with animal-based attributes, and others dealing with vehicle-based attributes. We train each binary attribute classifier separately, and use the trained classifiers to construct attribute labels for unseen classes. Finally, we use MAP prediction to determine the final output class. The table below shows a summary of results. Our overall accuracies for both models outperform the attribute-based model.

Method                            Overall accuracy
Bayesian pipeline (Gaussian)      74.25%
Bayesian pipeline (LoOP)          65.31%
Attribute-based (Lampert et al.)  45.25%

In general, an advantage of our approach is the ability to adapt to a domain quickly, which is difficult in the case of the attribute-based model, since appropriate attribute types need to be carefully picked. 6.5 Novelty detection in original feature space

Figure 5: Comparison of accuracies for images from previously seen and unseen categories for the modified CIFAR-100 dataset, after training the semantic mapping with a one-layer network and two-layer network. The deeper mapping function performs better.

The analysis of novelty detectors in 6.2 involves calculation in the word space. As a comparison, we perform the same experiments with the Gaussian model in the original feature space. In the mapped space, we observe that of the 100 images assigned the highest probability of being an outlier, 12% of those images are false positives. On the other hand, in the original feature space, the false positive rate increases to 78%. This is intuitively explained by the fact that the mapping function gathers extra semantic information from the word vectors it is trained on, and images are able to cluster better around these assumed Gaussian centroids.
In the original space, there is no semantic information, and the Gaussian centroids need to be inferred from among the images themselves, which are not truly representative of the center of the image space for their classes. 6.6 Extension to CIFAR-100 and Analysis of Deep Semantic Mapping So far, our tests were on the CIFAR-10 dataset. We now describe results on the more challenging CIFAR-100 dataset [18], which consists of 100 classes, with 500 32 × 32 × 3 RGB images in each class. We remove 4 categories for which no vector representations were available in our vocabulary and then combine the remainder with the CIFAR-10 dataset to get a set of 106 classes. Six zero-shot classes were chosen: ‘forest’, ‘lobster’, ‘orange’, ‘boy’, ‘truck’, and ‘cat’. As before, we train a neural network to map the vectors into semantic space. With this setup, we get a peak non-zero-shot accuracy of 52.7%, which is close to the baseline on 100 classes [16]. When all images are labeled as zero-shot, the peak accuracy for the 6 unseen classes is 52.7%, where chance would be at 16.6%. Because of the large semantic space corresponding to 100 classes, the proximity of an image to its appropriate class vector is dependent on the quality of the mapping into semantic space. We hypothesize that in this scenario a two layer neural network as described in Sec. 4 will perform better than a single layer or linear mapping. Fig. 5 confirms this hypothesis. The zero-shot accuracy is 10% higher with a 2-layer neural net than with a single layer, which reaches 42.2%. 6.7 Zero-Shot Classes with Distractor Words

Figure 6: Visualization of the zero-shot classification accuracy when distractor words from the nearest neighbor set of a given category are also present.

We would like zero-shot images to be classified correctly when there are a large number of unseen categories to choose from.
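Since a zero-shot image is assigned to the closest word vector (Sec. 6.1), adding distractor words simply enlarges the candidate vocabulary. The sketch below illustrates why a semantically close distractor hurts while a distant random noun does not; the 2-D vectors are hypothetical.

```python
import math

def nearest_word(f, vocab):
    # Assign a mapped image f to the class whose word vector is closest in Euclidean distance.
    return min(vocab, key=lambda w: math.dist(f, vocab[w]))

# Hypothetical 2-D word vectors for the two zero-shot classes:
vocab = {"cat": (1.0, 1.0), "truck": (-1.0, -1.0)}
image = (0.9, 1.1)                      # a mapped cat image
before = nearest_word(image, vocab)     # classified among zero-shot classes only
vocab["kitten"] = (0.95, 1.1)           # add a semantically close distractor word
after = nearest_word(image, vocab)      # the nearby distractor now steals the image
```

A distant distractor such as a random noun at, say, (−3, 4) would leave the prediction unchanged, matching the robustness result reported below.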
To evaluate such a setting with many possible but incorrect unseen classes we create a set of distractor words. We compare two scenarios. In the first, we add random nouns to the semantic space. In the second, much harder, setting we add the k nearest neighbors of a word vector. We then evaluate classification accuracy with each new set. For the zero-shot classes cat and truck, the nearest neighbor distractors include rabbit, kitten and mouse, among others. The accuracy does not change much if random distractor nouns are added. This shows that the semantic space is spanned well and our zero-shot learning model is quite robust. Fig. 6 shows the classification accuracies for the second scenario. Here, accuracy drops as an increasing number of semantically related nearest neighbors are added to the distractor set. This is to be expected because there are not enough related categories to accurately distinguish very similar categories. After a certain number, the effect of a new distractor word is small. This is consistent with our expectation that a certain number of closely-related semantic neighbors would distract the classifier; however, beyond that limited set, other categories would be further away in semantic space and would not affect classification accuracy. 7 Conclusion We introduced a novel model for jointly doing standard and zero-shot classification based on deep learned word and image representations. The two key ideas are that (i) using semantic word vector representations can help to transfer knowledge between modalities even when these representations are learned in an unsupervised way and (ii) that our Bayesian framework that first differentiates novel unseen classes from points on the semantic manifold of trained classes can help to combine both zero-shot and seen classification into one framework. If the task was only to differentiate between various zero-shot classes we could obtain accuracies of up to 90% with a fully unsupervised model.
Acknowledgments Richard is partly supported by a Microsoft Research PhD fellowship. The authors gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-13-2-0040, the DARPA Deep Learning program under contract number FA8650-10-C-7020 and NSF IIS-1159679. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government. References [1] M. Baroni and A. Lenci. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721, 2010. [2] E. Bart and S. Ullman. Cross-generalization: learning novel classes from a single example by feature replacement. In CVPR, 2005. [3] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3, March 2003. [4] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In ACL, 2007. [5] E. Bruni, G. Boleda, M. Baroni, and N. Tran. Distributional semantics in technicolor. In ACL, 2012. [6] A. Coates and A. Ng. The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization. In ICML, 2011. [7] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, 2008. [8] J. Curran. From Distributional to Semantic Similarity. PhD thesis, University of Edinburgh, 2004. [9] K. Erk and S. Padó. A structured vector space model for word meaning in context. In EMNLP, 2008. [10] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009. [11] Y. Feng and M. Lapata. Visual information in semantic representation. In HLT-NAACL, 2010. [12] M. Fink.
Object classification from a single example utilizing class relevance pseudo-metrics. In NIPS, 2004. [13] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, 2011. [14] D. Hoiem, A. A. Efros, and M. Herbert. Geometric context from a single image. In ICCV, 2005. [15] E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. Improving Word Representations via Global Context and Multiple Word Prototypes. In ACL, 2012. [16] Y. Jia, C. Huang, and T. Darrell. Beyond spatial pyramids: Receptive field learning for pooled image features. In CVPR, pages 3370–3377, June 2012. [17] H. Kriegel, P. Kröger, E. Schubert, and A. Zimek. LoOP: local Outlier Probabilities. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM ’09, 2009. [18] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master’s thesis, Computer Science Department, University of Toronto, 2009. [19] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. TPAMI, 28, 2006. [20] B. M. Lake, R. Salakhutdinov, J. Gross, and J. B. Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011. [21] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to Detect Unseen Object Classes by Between-Class Attribute Transfer. In CVPR, 2009. [22] T. K. Landauer and S. T. Dumais. A solution to Plato’s problem: the Latent Semantic Analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2):211–240, 1997. [23] C. W. Leong and R. Mihalcea. Going beyond text: A hybrid image-text approach for measuring word relatedness. In IJCNLP, 2011. [24] D. Lin. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL, pages 768–774, 1998. [25] J. Ngiam, A. Khosla, M. Kim, J. Nam, H.
Lee, and A. Y. Ng. Multimodal deep learning. In ICML, 2011. [26] S. Padó and M. Lapata. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199, 2007. [27] M. Palatucci, D. Pomerleau, G. Hinton, and T. Mitchell. Zero-shot learning with semantic output codes. In NIPS, 2009. [28] G. Qi, C. Aggarwal, Y. Rui, Q. Tian, S. Chang, and T. Huang. Towards cross-category knowledge propagation for learning visual concepts. In CVPR, 2011. [29] R. Salakhutdinov, J. Tenenbaum, and A. Torralba. Learning to learn with compound hierarchical-deep models. In NIPS, 2012. [30] H. Schütze. Automatic word sense discrimination. Computational Linguistics, 24:97–124, 1998. [31] R. Socher and L. Fei-Fei. Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora. In CVPR, 2010. [32] P. D. Turney and P. Pantel. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188, 2010. [33] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008.
|
2013
|
58
|
5,134
|
Optimistic policy iteration and natural actor-critic: A unifying view and a non-optimality result Paul Wagner Department of Information and Computer Science Aalto University FI-00076 Aalto, Finland paul.wagner@aalto.fi Abstract Approximate dynamic programming approaches to the reinforcement learning problem are often categorized into greedy value function methods and value-based policy gradient methods. As our first main result, we show that an important subset of the latter methodology is, in fact, a limiting special case of a general formulation of the former methodology; optimistic policy iteration encompasses not only most of the greedy value function methods but also natural actor-critic methods, and permits one to directly interpolate between them. The resulting continuum adjusts the strength of the Markov assumption in policy improvement and, as such, can be seen as dual in spirit to the continuum in TD(λ)-style algorithms in policy evaluation. As our second main result, we show that a substantial subset of soft-greedy value function approaches, while having the potential to avoid policy oscillation and policy chattering, can never converge toward an optimal policy, except in a certain pathological case. Consequently, in the context of approximations (either in state estimation or in value function representation), the majority of greedy value function methods seem to be doomed to suffer either from the risk of oscillation/chattering or from the presence of systematic sub-optimality. 1 Introduction We consider the reinforcement learning problem in which one attempts to find an approximately optimal policy for controlling a stochastic nonlinear dynamical system. We focus on the setting in which the target system is actively sampled during the learning process. Here the sampling policy changes during the learning process in a manner that depends on the main policy being optimized. 
This learning setting is often called interactive learning [e.g., 23, §3]. Many approaches to the problem are value-based and build on the methodology of simulation-based approximate dynamic programming [23, 4, 9, 19, 8, 21]. The majority of these methods are often categorized into greedy value function methods (critic-only) and value-based policy gradient methods (actor-critic) [e.g., 23, 13]. Within this interactive setting, the policy gradient approach has better convergence guarantees, with the strongest case being for Monte Carlo evaluation with ‘compatible’ value function approximation. In this case, convergence with probability one (w.p.1) to a local optimum can be established for arbitrary differentiable policy classes under mild assumptions [22, 13, 19]. On the other hand, while the greedy value function approach is often considered to possess practical advantages in terms of convergence speed and representational flexibility, its behavior in the proximity of an optimum is currently not well understood. It is well known that interactively operated approximate hard-greedy value function methods can fail to converge to any single policy and instead become trapped in sustained policy oscillation or policy chattering, which is currently a poorly understood phenomenon [6, 7]. This applies to both non-optimistic and optimistic policy iteration (value iteration being a special case of the latter). In general, the best guarantees for this methodology exist in the form of sub-optimality bounds [6, 7]. The practical value of these bounds, however, is under question (e.g., [2; 7, §6.2.2]), as they can permit very bad solutions. Furthermore, it has been shown that these bounds are tight [7, §6.2.3; 12, §3.2]. (An extended version of this paper with full proofs and additional background material is available at http://books.nips.cc/ and http://users.ics.aalto.fi/pwagner/.) 
A hard-greedy policy is a discontinuous function of its parameters, which has been identified as a key source of problems [18, 10, 17, 22]. In addition to the observation that the class of stochastic policies may often permit much simpler solutions [cf. 20], it is known that continuously stochastic policies can also re-gain convergence: both non-optimistic and optimistic soft-greedy approximate policy iteration using, for example, the Gibbs/Boltzmann policy class, is known to converge with enough softness, ‘enough’ being problem-specific. This has been shown by Perkins & Precup [18] and Melo et al. [14], respectively, although with no consideration of the quality of the obtained solutions nor with an interpretation of how ‘enough’ relates to the problem at hand. Unfortunately, the aforementioned sub-optimality bounds are also lost in this case (consider temperature τ →∞); while convergence is re-gained, the properties of the obtained solutions are rather unknown. To summarize, there are considerable shortcomings in the current understanding of the learning dynamics at the very heart of the approximate dynamic programming methodology. We share the belief of Bertsekas [5, 6], expressed in the context of the policy oscillation phenomenon, that a better understanding of these issues “has the potential to alter in fundamental ways our thinking about approximate DP.” In this paper, we provide insight into the convergence behavior and optimality of the generalized optimistic form of the greedy value function methodology by reflecting it against the policy gradient approach. While these two approaches are considered in the literature mostly separately, we are motivated by the belief that it is eventually possible to fully unify them, so as to have the benefits and insights from both in a single framework with no artificial (or historical) boundaries, and that such a unification can eventually resolve the issues outlined above. 
These issues revolve mainly around the greedy methodology, while at the same time, solid convergence results exist for the policy gradient methodology; connecting these methodologies more firmly might well lead to a fuller understanding of both. After providing background in Section 2, we take the following steps in this direction. First, we show that natural actor-critic methods from the policy gradient side are, in fact, a limiting special case of optimistic policy iteration (Sec. 3). Second, we show that while having the potential to avoid policy oscillation and chattering, a substantial subset of soft-greedy value function approaches can never converge to an optimal policy, except in a certain pathological case (Sec. 4). We then conclude with a discussion in a broader context and use the results to complete a high-level convergence and optimality property map of the variants of the considered methodology (Sec. 5). 2 Background A Markov decision process (MDP) is defined by a tuple M = (S, A, P, r), where S and A denote the state and action spaces. St ∈ S and At ∈ A denote random variables at time t. s, s′ ∈ S and a, b ∈ A denote state and action instances. P(s, a, s′) = P(St+1 = s′ | St = s, At = a) defines the transition dynamics and r(s, a) ∈ R defines the expected immediate reward function. Non-Markovian aggregate states, i.e., subsets of S, are denoted by y. A policy π(a|s, θk) ∈ Π is a stochastic mapping from states to actions, parameterized by θk ∈ Θ. Improvement is performed with respect to the performance metric J(θ) = (1/H) Σ_{t=1}^{H} E[r(St, At) | π(θ)]. ∇θJ(θk) ∈ Θ denotes a parameter gradient at θk. ∇πJ(θk) ∈ Π denotes the corresponding policy gradient in the selected policy space. We define the policy distance ∥πu − πv∥ as some p-norm of the action probability differences, (Σ_s Σ_a |πu(a|s) − πv(a|s)|^p)^{1/p}. 
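As an illustration (not from the paper), the policy distance above can be computed directly once policies are represented as arrays of action probabilities; the array layout and function name below are assumptions made for this sketch:

```python
import numpy as np

def policy_distance(pi_u, pi_v, p=1):
    """p-norm policy distance (sum_s sum_a |pi_u(a|s) - pi_v(a|s)|^p)^(1/p),
    for policies given as |S| x |A| arrays of action probabilities."""
    return float((np.abs(pi_u - pi_v) ** p).sum() ** (1.0 / p))

pi_u = np.array([[0.50, 0.50], [1.0, 0.0]])
pi_v = np.array([[0.25, 0.75], [1.0, 0.0]])
policy_distance(pi_u, pi_v, p=1)   # 0.5
```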
Action value functions Q̄(s, a, ŵk) and Q(s, a, ŵk), parameterized by ŵk, are estimators of the γ-discounted cumulative reward Σ_t γ^t E[r(St, At) | S0 = s, A0 = a, π(θk)] for some (s, a) when following some policy π(θk). The state value function V(s, ŵk) is an estimator of such cumulative reward that follows some s. We use ϵ to denote a small positive infinitesimal quantity. We focus on the Gibbs (Boltzmann) policy class with a linear combination of basis functions φ: π(a|s, θk) = exp(θk⊤φ(s, a)) / Σ_b exp(θk⊤φ(s, b)) . (1) We shall use the term ‘semi-uniformly stochastic policy’ to refer to a policy for which, for every state s, there exists cs ∈ [0, 1] such that either π(a|s) = cs or π(a|s) = 0 for every action a. Note that both the uniformly stochastic policy and all deterministic policies are special cases of semi-uniformly stochastic policies. For the value function, we focus on least-squares linear-in-parameters approximation with the same basis φ as in (1). We consider both advantage values [see 22, 19] Q̄k(s, a, ŵk) = ŵk⊤ ( φ(s, a) − Σ_b π(b|s, θk)φ(s, b) ) (2) and absolute action values Qk(s, a, ŵk) = ŵk⊤φ(s, a) . (3) Evaluation can be based on either Monte Carlo or temporal difference estimation. We focus on optimistic policy iteration, which contains both non-optimistic policy iteration and value iteration as special cases, and on the policy gradient counterparts of these. In the general form of optimistic approximate policy iteration (e.g., [7, §6.4]; see also [6, §3.3]), a value function parameter vector w is gradually interpolated toward the most recent evaluation ŵ: wk+1 = wk + κk(ŵk − wk) , κk ∈ (0, 1] . (4) Non-optimistic policy iteration is obtained with κk = 1, ∀k and ‘complete’ evaluations ŵk (see below). The corresponding Gibbs soft-greedy policy is obtained by combining (1) and a temperature (softness) parameter τ with θk+1 = wk+1/τk , τk ∈ (0, ∞) . (5) Hard-greedy iteration is obtained in the limit as τ → 0. In optimistic policy iteration, policy improvement is based on an incomplete evaluation. 
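To make the pairing of (1) and (2) concrete, here is a small NumPy sketch (not from the paper; the |S| × |A| × d feature-array layout and the random test data are assumptions for illustration). The advantage estimate (2) is the inner product of ŵk with the centered features, and by construction those features average to zero under the current policy:

```python
import numpy as np

def gibbs_policy(theta, phi):
    """Gibbs/Boltzmann policy (1). phi is an |S| x |A| x d feature array."""
    logits = phi @ theta                          # theta_k^T phi(s, a), shape |S| x |A|
    logits -= logits.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def advantage_features(theta, phi):
    """Centered features phi(s, a) - sum_b pi(b|s, theta) phi(s, b); the advantage
    estimate (2) is their inner product with an evaluation result w_hat."""
    pi = gibbs_policy(theta, phi)
    mean_phi = (pi[..., None] * phi).sum(axis=1, keepdims=True)
    return phi - mean_phi

rng = np.random.default_rng(0)
phi = rng.normal(size=(3, 4, 5))    # 3 states, 4 actions, 5 features (arbitrary sizes)
theta = rng.normal(size=5)
pi = gibbs_policy(theta, phi)
centered = advantage_features(theta, phi)
# Under pi, the centered features are zero-mean in every state, so the
# resulting advantage values (2) are zero-mean under the current policy.
```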
We distinguish between two dimensions of completeness, which are evaluation depth and evaluation accuracy. By evaluation depth, we refer to the look-ahead depth after which truncation with the previous value function estimate occurs. For example, LSPE(0) and LSTD(0) [e.g., 15] implement shallow and deep evaluation, respectively. With shallow evaluation, the current value function parameter vector wk is required for look-ahead truncation when computing ŵk+1. Inaccurate (noisy) evaluation necessitates additional caution in the policy improvement process and is the usual motivation for using (4) with κ < 1. It is well known that greedy policy iteration can be non-convergent under approximations [4]. The widely used projected equation approach can manifest convergence behavior that is complex and not well understood, including bounded but potentially severe sustained policy oscillations [6, 7] (see the extended version for further details). Similar consequences arise in the context of partial observability for approximate or incomplete state estimation [e.g., 20, 16]. A novel explanation of the phenomenon in the non-optimistic case was recently proposed in [24, 25], where policy oscillation was re-cast as sustained overshooting over an attractive stochastic policy. Policy convergence can be established under various restrictions (see the extended version for further details). Most importantly to this paper, convergence can be established with continuously soft-greedy action selection [18, 14], in which case, however, the quality of the obtained solutions is unknown. In policy gradient reinforcement learning [22, 13, 19, 8], improvement is obtained via stochastic gradient ascent: θk+1 = θk + αk G(θk)⁻¹ ∂J(θk)/∂θ = θk + αkηk , (6) where αk ∈ (0, ∞), G is a Riemannian metric tensor that ideally encodes the curvature of the policy parameterization, and ηk is some estimate of the gradient. 
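The interplay of (4) and (5) amounts to a two-line update. A minimal sketch (the function name and the scalar test values are illustrative choices, not from the paper):

```python
def optimistic_soft_greedy_step(w, w_hat, kappa, tau):
    """One improvement step: (4) interpolates the value parameters toward the
    latest evaluation w_hat, then (5) divides by the temperature to obtain
    the Gibbs policy parameters."""
    w_next = w + kappa * (w_hat - w)
    return w_next, w_next / tau

# kappa = 1 recovers non-optimistic policy iteration; tau -> 0 approaches
# hard-greedy iteration.
w_next, theta_next = optimistic_soft_greedy_step(w=0.0, w_hat=1.0, kappa=0.5, tau=0.25)
# w_next = 0.5, theta_next = 2.0
```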
With value-based policy gradient methods, using (1) together with either (2) or (3) fulfills the ‘compatibility condition’ [22, 13]. With (2), the value function parameter vector ŵk becomes the natural gradient estimate for the evaluated policy π(θk), leading to natural actor-critic algorithms [11, 19], for which ηk = ŵk . (7) For policy gradient learning with a ‘compatible’ value function and Monte Carlo evaluation, convergence w.p.1 to a local optimum is established under standard assumptions [22, 13]. Temporal difference evaluation can lead to sub-optimal results with a known sub-optimality bound [13, 8]. 3 Forgetful natural actor-critic In this section, we show that an important subset of natural actor-critic algorithms is a limiting special case of optimistic policy iteration. A related connection was recently shown in [24, 25], where a modified form of the natural actor-critic algorithm by Peters & Schaal [19] was shown to correspond to non-optimistic policy iteration. In the following, we generalize and simplify this result: by starting from the more general setting of optimistic policy iteration, we arrive at a unifying view that both encompasses a broader range of greedy methods and permits interpolation between the approaches directly with existing (unmodified) methodology. We consider the Gibbs policy class from (1) and the linear-in-parameters advantage function from (2), which form a ‘compatible’ actor-critic setup. We assume deep policy evaluation (cf. Section 2). We begin with the natural actor-critic (NAC) algorithm by Peters & Schaal [19] (cf. (6) and (7)) and generalize it by adding a forgetting term: θk+1 = θk + αkηk − κkθk , (8) where αk ∈ (0, ∞), κk ∈ (0, 1]. We refer to this generalized algorithm as the forgetful natural actor-critic algorithm, or NAC(κ). 
In the following, we show that this algorithm is, within the discussed context, equivalent to the general form of optimistic policy iteration in (4) and (5), with the following translation of the parameterization: τk = κk/αk , or αk = κk/τk . (9) Taking the forgetting factor κ in (8) toward zero leads back toward the original natural actor-critic algorithm, with the implication that the original algorithm is a limiting special case of optimistic policy iteration. Theorem 1. For the case of deep policy evaluation (Section 2), the natural actor-critic algorithm for the Gibbs policy class ((6), (7), (1), (2)) is a limiting special case of Gibbs soft-greedy optimistic policy iteration ((4), (5), (1), (2)). Proof. The update rule for Gibbs soft-greedy optimistic policy iteration is given in (4) and (5). By moving the temperature to scale ŵ (assume w0 to be scaled accordingly), we obtain w′k+1 = w′k + κk(ŵk/τk − w′k) , θk+1 = w′k+1 , (10) again with κk ∈ (0, 1], τk ∈ (0, ∞). Such a re-formulation effectively re-scales w and is possible only with deep policy evaluation (cf. Section 2), with which the non-scaled w is not needed by the policy evaluation process. We can now remove the redundant second line and rename w′ to θ: θk+1 = θk + κk(ŵk/τk − θk) . (11) Finally, we open up the last term and encapsulate κ/τ into α: θk+1 = θk + κk(ŵk/τk) − κkθk (12) = θk + αkŵk − κkθk , (13) with αk = κk/τk. Based on (7), we observe that (13) is equivalent to (8). The original natural actor-critic algorithm is obtained in the limit as κk → 0, which causes the forgetting term κkθk to vanish (the effective step size α can still be controlled with τ). This result has some interesting implications. First, it becomes apparent that the implicit effective step size in optimistic policy iteration is, in fact, α = κ/τ, i.e., it is inversely related to the temperature τ. 
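The algebraic equivalence of (11) and (13) is easy to check numerically. A minimal sketch (the dimension, seed, and parameter values are arbitrary choices; the random vectors merely stand in for deep evaluation results ŵk):

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, tau = 0.3, 0.5
alpha = kappa / tau                    # the translation (9)

theta_pi = np.zeros(4)                 # optimistic soft-greedy PI, form (11)
theta_nac = np.zeros(4)                # forgetful natural actor-critic NAC(kappa), form (8)
for _ in range(20):
    w_hat = rng.normal(size=4)         # stand-in for a deep evaluation result
    theta_pi = theta_pi + kappa * (w_hat / tau - theta_pi)
    theta_nac = theta_nac + alpha * w_hat - kappa * theta_nac
    # the two parameter trajectories coincide at every iteration
    assert np.allclose(theta_pi, theta_nac)
```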
If the interpolation factor κ is held fixed, a low temperature, which can lead to policy oscillation, equals a long effective step size. This agrees with the interpretation of policy oscillation as overshooting in [24, 25]. Likewise, a high temperature equals a short effective step size. In [18], convergence is established for a high enough constant temperature. This result now translates into showing that convergence is established with a short enough constant effective step size,1 which creates an interesting and more direct connection to convergence results for (batch) steepest descent methods with a constant step size [e.g., 1, 3]. In addition, this connection might permit the application of the results in the aforementioned literature to establish, in the considered context, a constant step size convergence result for the natural actor-critic methodology. Second, we see that the interpolation scheme in optimistic policy iteration, while originally introduced for the sake of countering an inaccurate value function estimate, actually goes in the direction of the policy gradient methodology. Smooth interpolation between policy gradient and greedy value function learning turns out to be possible by simply adjusting the interpolation factor κ while treating the temperature τ as an inverse of the step size (we return to provide an interpretation of the role of κ at a later point). Contrary to the related result in [24], no modifications to existing algorithms are needed. This connection also allows the convergence results from the policy gradient literature to be brought in (see Section 2): convergence w.p.1, under standard assumptions from the referred literature, to an optimal solution is established in the limit for this class of approximate optimistic policy iteration as the interpolation factor κ is taken toward zero and the step size requirements are inversely enforced on the temperature τ. 
Third, we observe that in non-optimistic policy iteration (κ = 1), the forgetting term resets the parameter vector to the origin at the beginning of every iteration, with the implication that solutions that are not within the range of a single step from the origin in the direction of the natural gradient cannot be reached in any number of iterations. The choice of the effective step size, which is inversely controlled by the temperature, becomes again decisive: a step size that is too short (the temperature is too high) will cause the algorithm to permanently undershoot the desired optimum, thus trapping it in sustained sub-optimality, while a step size that is too long (the temperature is too low) will cause it to overshoot, which can additionally trap it in sustained oscillation. Unfortunately, even hitting the target exactly with a perfect step size will fail to lead to convergence and optimality at the same time. Our next section examines these issues more closely. 4 Systematic non-optimality of soft-greedy methods For greedy value function methods, using the hard-greedy policy class trivially prevents convergence to other than deterministic policies. Furthermore, the proximity of an attractive stochastic policy can prevent convergence altogether and trap the process in oscillation (cf. Section 2). The Gibbs soft-greedy policy class, on the other hand, can represent stochastic policies, fixed points do exist [10, 17], and convergence toward some policy is guaranteed with sufficient softness [18, 14]. While convergence toward deterministic optimal decisions is trivially lost as soon as any softness is introduced (τ ̸→0, and assuming a bounded value function), one might hope that convergence toward stochastic optimal decisions could still occur in some cases. 
Unfortunately, as we show in the following, this is not the case: in the presence of any softness, this approach can never converge toward any optimal policy (i.e., convergence and optimality become mutually exclusive), except in a certain pathological case. At this point, we wish to make clear that we are not arguing against the practical value of the greedy value function methodology in (interactively) approximated problems; the methodology has some clear merits, and the sub-optimality and oscillations could well be negligible in a given task. Instead, we take the following result, together with existing literature on policy oscillations, as an indication of a fundamental theoretical incompatibility of this methodology to this context: the way by which this methodology deals with stochastic optima seems to be fundamentally flawed, and we believe that a thorough understanding of this flaw will have, in addition to facilitating sound theoretical advances, also immediate practical value by permitting correctly informed trade-off decisions. Theorem 2. Assume an unbiased value function estimator (e.g., Monte Carlo evaluation). Now, for Gibbs soft-greedy policy iteration ((1), (4) and (5)) using a linear-in-parameters value function approximator ((2) or (3)), including optimistic and non-optimistic variants (any κ in (4)), there cannot exist a fixed point at an optimum, except for the uniformly stochastic policy. (Footnote 1: Note that the diminishing step size αt in [18, Fig. 1] concerns policy evaluation, not policy improvement.) Proof outline. A fixed point of the update rule (4) must satisfy ŵk = wk , (14) i.e., at a fixed point, the policy evaluation step ŵk := eval(π(wk/τk)) for the current parameter vector must yield the same parameter vector as its result: eval(π(wk/τk)) = wk . (15) By applying (14) and (7), we have wk = ŵk = ηk = G(θk)⁻¹∇θJ(θk) , (16) which shows that the fixed-point policy π(wk/τk) in (15) is defined solely by its own (scaled) performance gradient. 
For an optimal policy and an unbiased estimator, this parameter gradient must, by definition, map to the zero policy gradient, i.e., to ∇πJ(θk) = 0. Consequently, an optimal policy at a fixed point is defined solely by the zero policy gradient, making the policy equal to π(0), which is the uniformly stochastic policy. For the full proof, see the extended version. Theorem 3. Consider the family of methods from Theorem 2. Assume a smooth policy gradient field (∥∇πJ(πu) − ∇πJ(πv)∥ → 0 as ∥πu − πv∥ → 0) and τ ̸→ 0. First, the policy distance between a fixed point policy πf and an optimal policy π⋆ cannot be vanishingly small (∥πf − π⋆∥ ̸< ϵ), except if the optimal policy π⋆ is a semi-uniformly stochastic policy. Second, for bounded returns (γ ̸→ 1 and r(s, a) ̸→ ±∞, ∀s, a), the policy distance between a fixed point policy πf and an optimal policy π⋆ cannot be vanishingly small (∥πf − π⋆∥ ̸< ϵ), except if the optimal policy π⋆ is the uniformly stochastic policy. Proof outline. For a policy π̄ = π(wk/τk) that is vanishingly close to an optimum, an unbiased parameter gradient ηk must, assuming a smooth gradient field, map to a policy gradient that is vanishingly close to zero, i.e., ηk must have a vanishingly small effect on π̄ with any finite step size: ∥π(wk/τk + αηk) − π(wk/τk)∥ < ϵ , ∀α > 0, α ̸→ ∞ . (17) If π̄ is also a fixed point, then, by (16), we can substitute both wk and ηk in (17) with ŵk: ∥π(ŵk/τk + αŵk) − π(ŵk/τk)∥ < ϵ , ∀α > 0, α ̸→ ∞ ⇔ ∥π((1/τk + α)ŵk) − π((1/τk)ŵk)∥ < ϵ , ∀α > 0, α ̸→ ∞ . (18) We now see that π̄ is defined solely by a temperature-scaled version of a vanishingly small policy gradient, and that the condition in (17) is equivalent to stating that any finite decrease of the temperature must not have a non-vanishing effect on π̄. As only semi-uniformly stochastic policies are invariant to such temperature decreases, it follows that π̄ must be vanishingly close to such a policy. Furthermore, if assuming bounded returns, then no dimension of the term ŵ⊤φ(s, a) can approach positive or negative infinity when ŵ is estimated using (2) or (3). Consequently, for τ ̸→ 0, the uniformly stochastic policy π(0) becomes the only semi-uniformly stochastic policy that the Gibbs policy class in (1) can approach, with the implication that π̄ must be vanishingly close to the uniformly stochastic policy. For the full proof, see the extended version. To interpret the preceding theorems, we observe that the gist of them is that, assuming a well-behaved gradient field, the closer the evaluated policy is to an optimum, the closer the target point of the next greedy update will be to the origin (in policy parameter space). At a fixed point, the policy parameter vector must equal the target point of the next update, causing convergence to or toward a policy that is exactly optimal but not at the origin to be a contradiction (Theorem 2). 
Convergence to or toward a policy that is vanishingly close to an optimum is also impossible, except if the optimum is (semi-)uniformly stochastic (Theorem 3). In practical terms, Theorem 2 states that even if the task at hand and the chosen hyperparameters would allow convergence to some policy in a finite number of iterations, the resulting policy can never contain optimal decisions, except for uniformly stochastic ones. Theorem 3 generalizes this result to the case of asymptotic convergence toward some limiting policy: for unbounded returns and any τ ̸→ 0, it is impossible to have asymptotic convergence toward any optimal decision in any state, except for semi-uniformly stochastic decisions, and for bounded returns and any τ ̸→ 0, it is impossible to have asymptotic convergence toward any non-uniform optimal decision in any state. If convergence is to occur, then the limiting policy must reside “between” the origin and an optimum, i.e., the result must always undershoot the optimum that the learning process was influenced by. However, we can see in (15) that by decreasing the temperature τ, it is possible to shift this point of convergence further away from the origin and closer to the optimum: in the limit of τ → 0, (15) can permit the parameter vector ŵ to converge toward a point that approaches the origin while, at the same time, allowing the corresponding policy π(ŵ/τ) to converge toward a policy that is arbitrarily close to a distant optimum (one can also see that with τ → 0, the inequality in (18) becomes satisfied for any ŵk, due to α ̸→ ∞). Unfortunately, as we already know, such manipulation of the distance of the fixed point from an optimum by adjusting τ can ruin convergence altogether in non-Markovian problems. Perkins & Precup [18] report negative convergence results for non-optimistic iteration (κ = 1) with too low a τ, while for optimistic iteration (κ < 1), Melo et al. [14] report a lack of positive results. 
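The τ → 0 limit discussed above can be illustrated numerically: if the fixed-point estimate ŵ shrinks proportionally to τ, the policy π(ŵ/τ) is unchanged even as ŵ approaches the origin. A small sketch (the single-state, two-action logits are an arbitrary example, not from the paper):

```python
import numpy as np

def gibbs(theta):
    """Gibbs policy for a single state with tabular features, as in (1)."""
    e = np.exp(theta - theta.max())   # shift by the max for numerical stability
    return e / e.sum()

target = np.array([np.log(2.0), 0.0])   # a stochastic optimum, pi = (2/3, 1/3)
for tau in [1.0, 0.1, 0.001]:
    w_hat = tau * target                # the fixed-point estimate shrinks with tau...
    pi = gibbs(w_hat / tau)             # ...while the policy pi(w_hat / tau) is unchanged
    assert np.allclose(pi, [2 / 3, 1 / 3])
```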
Interestingly, this latter case is exactly what Theorem 1 addressed, showing that there actually is a way out and that it is by moving toward natural policy gradient iteration: decreasing the temperature τ toward zero causes the sub-optimality to vanish, while decreasing the interpolation factor κ at the same rate prevents the effective step size from exploding. Finally, we provide a brief discussion on some questions that may have occurred to the reader by now. First, how does the preceding fit with the well-known soundness of greedy value function methods in the Markovian case? The crucial difference between the Markovian case (fully observable and tabular) and the non-Markovian case (partially observable or non-tabular) follows from the standard result for MDPs that states that in the former, all optima must be deterministic (with the possibility of redundant stochastic optima) [e.g., 23, §A.2]. For the Gibbs policy class, deterministic policies reside at infinity in some direction in the parameter space, with two implications for the Markovian case. First, the distance to an optimum never decreases. Consequently, the value function, being a correction toward an optimum, never vanishes toward a ‘neutral’ state. Second, only the direction of an optimum is relevant, as the distance can be always assumed to be infinite. This implies that in, and only in Markovian problems, the value function never ceases to retain all necessary information about the current solution, while in non-Markovian problems, relying solely on the value function can lead to losing track of the current solution. Second, when moving toward an optimum at infinity, how can the value function / natural gradient (encoded by ˆw = η) stay non-zero and continue to properly represent action values while the corresponding policy gradient ∇πJ(θ) must approach zero at the same time? We note that the equivalence in (7) is between a value function and a natural gradient η. 
We then recall that the curvature of the Gibbs policy class turns into a plateau at infinity, onto which the policy becomes pushed when moving toward a deterministic optimum. The increasing discrepancy between η = G(θ)⁻¹∇θJ(θ) ̸→ 0 and ∇πJ(θ) → 0 can be consumed by G(θ)⁻¹ as it captures the curvature of this plateau. 5 Common ground Figure 1 shows a map of relevant variants of optimistic policy iteration, parameterized as in (4). As is well known, the hard-greedy variants of this methodology (seen on the left edge on the map) can become trapped in non-converging cycles over potentially non-optimal policies (see Section 2 for references and exceptions). For a continuously soft-greedy policy class (toward right on the map), convergence can be established with enough softness [18, 14]. The natural actor-critic algorithm, which is convergent and optimal, is placed to the lower left corner by Theorem 1, while the inevitable non-optimality of soft-greedy variants toward right follows from Theorems 2 and 3. The exact (problem-dependent) place and shape of the line separating non-convergent and convergent soft-greedy variants (dashed line on the map) remains an open problem.
[Figure 1: The hyperparameter space of the general form of (approximate) optimistic policy iteration in (4), with known convergence and optimality properties (see text for assumptions). The map plots the interpolation factor κ ∈ (0, 1] against the temperature τ ∈ (0, ∞), with labeled regions: non-optimistic hard-greedy (oscillation, Bertsekas et al.; non-optimality); optimistic hard-greedy (chattering, Bertsekas et al.; non-optimality); natural actor-critic (convergence and optimality, Theorem 1); non-optimistic soft-greedy with small τ (non-convergence, Perkins & Precup; non-optimality, Theorems 2–3); non-optimistic soft-greedy with large τ (convergence, Perkins & Precup; non-optimality, Theorems 2–3); optimistic soft-greedy with large τ (convergence, Melo et al.; non-optimality, Theorems 2–3).]
[Figure 2: Empirical illustration of the behavior of optimistic policy iteration ((1), (2), (4) and (5), with tabular φ) in the proximity of a stochastic optimum. (a) A non-Markovian problem (adapted from [24]); the incoming arrow indicates the start state, and arrows leading out indicate termination with the shown reward. (b) Non-optimality or oscillation with κ ̸→ 0 (κ = 0.2; τ ∈ {1, 0.2, 0.05}). (c) Interpolation toward NAC with κ → 0 and τ → 0 (κ = τ ∈ {0.2, 0.05, 0.01}; NAC with α = 1). In (b) and (c), the optimum at θ(left) − θ(right) = log(2) is denoted by a solid green line, and the uniformly stochastic policy by a dashed red line.]
The main value of Theorem 1 is in bringing the greedy value function and policy gradient methodologies closer to each other. In our context, the unifying NAC(κ) formulation in (8) permits interpolation between the methodologies using the κ parameter. As discussed at the end of Section 4, the policy-forgetting term requires a Markovian problem for being justified: a greedy update implicitly stands on a Markov assumption and the κ parameter in (8) can be interpreted as adjusting the strength of this assumption. In this respect, the policy improvement parameter κ in NAC(κ) can be seen (inversely) as a dual in spirit to the policy evaluation parameter λ in TD(λ)-style algorithms. 
On the policy evaluation side, having λ = 0 obtains variance reduction by assuming and exploiting Markovianity of the problem, while λ = 1 obtains unbiased estimates also for non-Markovian problems. On the policy improvement side, with κ = 1, we have strictly greedy updates that gain in speed as the policy can respond instantly to new opportunities appearing in the value function (for empirical observations of such a speed gain, see [11, 25]), and in representational flexibility due to the lack of continuity constraints between successive policies (for a canonical example, consider fitted Q iteration). This comes at the price of either oscillation or non-optimality if the Markov assumption fails to hold, which is illustrated in Figure 2b for the problem in 2a. With κ →0, we approach natural gradient updates that remain sound also in non-Markovian settings, which is illustrated in Figure 2c. The possibility to interpolate between the approaches might turn out useful in problems with partial Markovianity: a large κ in the NAC(κ) formulation can be used to quickly find the rough direction of the strongest attractors, after which gradually decreasing κ allows a convergent final ascent toward an optimum. Acknowledgments This work has been financially supported by the Academy of Finland through project no. 254104, and by the Foundation of Nokia Corporation. References [1] Armijo, L. (1966). Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1), 1–3. [2] Baxter, J., & Bartlett, P. L. (2000). Reinforcement learning in POMDP’s via direct gradient ascent. In Proceedings of the Seventeenth International Conference on Machine Learning, (pp. 41–48). [3] Bertsekas, D. P. (1997). A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4), 913–926. [4] Bertsekas, D. P. (2005). Dynamic Programming and Optimal Control. Athena Scientific. [5] Bertsekas, D. P. (2010). 
Pathologies of temporal difference methods in approximate dynamic programming. In 49th IEEE Conference on Decision and Control, (pp. 3034–3039). [6] Bertsekas, D. P. (2011). Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3), 310–335. [7] Bertsekas, D. P., & Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Athena Scientific. [8] Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., & Lee, M. (2009). Natural actor-critic algorithms. Automatica, 45(11), 2471–2482. [9] Buşoniu, L., Babuška, R., De Schutter, B., & Ernst, D. (2010). Reinforcement learning and dynamic programming using function approximators. CRC Press. [10] De Farias, D. P., & Van Roy, B. (2000). On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3), 589–608. [11] Kakade, S. M. (2002). A natural policy gradient. In Advances in Neural Information Processing Systems. [12] Kakade, S. M. (2003). On the Sample Complexity of Reinforcement Learning. Ph.D. thesis, University College London. [13] Konda, V. R., & Tsitsiklis, J. N. (2004). On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4), 1143–1166. [14] Melo, F. S., Meyn, S. P., & Ribeiro, M. I. (2008). An analysis of reinforcement learning with function approximation. In Proceedings of the 25th International Conference on Machine Learning, (pp. 664–671). [15] Nedić, A., & Bertsekas, D. P. (2003). Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems: Theory and Applications, 13(1–2), 79–110. [16] Pendrith, M. D., & McGarity, M. J. (1998). An analysis of direct reinforcement learning in non-Markovian domains. In Proceedings of the Fifteenth International Conference on Machine Learning. [17] Perkins, T. J., & Pendrith, M. D. (2002). On the existence of fixed points for Q-learning and sarsa in partially observable domains.
In Proceedings of the Nineteenth International Conference on Machine Learning, (pp. 490–497). [18] Perkins, T. J., & Precup, D. (2003). A convergent form of approximate policy iteration. In Advances in Neural Information Processing Systems. [19] Peters, J., & Schaal, S. (2008). Natural actor-critic. Neurocomputing, 71(7–9), 1180–1190. [20] Singh, S. P., Jaakkola, T., & Jordan, M. I. (1994). Learning without state-estimation in partially observable Markovian decision processes. In Proceedings of the Eleventh International Conference on Machine Learning. [21] Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press. [22] Sutton, R. S., McAllester, D., Singh, S., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems. [23] Szepesvári, C. (2010). Algorithms for Reinforcement Learning. Morgan & Claypool Publishers. [24] Wagner, P. (2011). A reinterpretation of the policy oscillation phenomenon in approximate policy iteration. In Advances in Neural Information Processing Systems 24, (pp. 2573–2581). [25] Wagner, P. (to appear). Policy oscillation is overshooting. Neural Networks. Author manuscript available at http://users.ics.aalto.fi/pwagner/.
Robust Bloom Filters for Large Multilabel Classification Tasks Moustapha Cissé LIP6, UPMC Sorbonne Université Paris, France first.last@lip6.fr Nicolas Usunier UT Compiègne, CNRS Heudiasyc UMR 7253 Compiègne, France nusunier@utc.fr Thierry Artières, Patrick Gallinari LIP6, UPMC Sorbonne Université Paris, France first.last@lip6.fr Abstract This paper presents an approach to multilabel classification (MLC) with a large number of labels. Our approach is a reduction to binary classification in which label sets are represented by low-dimensional binary vectors. This representation follows the principle of Bloom filters, a space-efficient data structure originally designed for approximate membership testing. We show that a naive application of Bloom filters in MLC is not robust to individual binary classifiers' errors. We then present an approach that exploits a specific feature of real-world datasets when the number of labels is large: many labels (almost) never appear together. Our approach is provably robust, has sublinear training and inference complexity with respect to the number of labels, and compares favorably to state-of-the-art algorithms on two large-scale multilabel datasets. 1 Introduction Multilabel classification (MLC) is a classification task where each input may be associated with several class labels, and the goal is to predict the label set given the input. This label set may, for instance, correspond to the different topics covered by a text document, or to the different objects that appear in an image. The standard approach to MLC is the one-vs-all reduction, also called Binary Relevance (BR) [16], in which one binary classifier is trained per label to predict whether that label belongs to the input's label set. While BR remains the standard baseline for MLC problems, a lot of attention has recently been given to improving on it.
The first main issue that has been addressed is improving prediction performance at the expense of computational complexity, either by learning correlations between labels [5, 8, 9] or by considering MLC as an unstructured classification problem over label sets in order to optimize the subset 0/1 loss (a loss of 1 is incurred as soon as the method gets one label wrong) [16]. The second issue is to design methods that scale to a large number of labels (e.g. thousands or more), potentially at the expense of prediction performance, by learning compressed representations of label sets with lossy compression schemes that are efficient when label sets have small cardinality [6]. We propose here a new approach to MLC in this latter line of work. An "MLC dataset" refers here to a dataset with a large number of labels (at least hundreds to thousands), in which the target label sets are smaller than the number of labels by one or several orders of magnitude, which is common in large-scale MLC datasets collected from the Web. The major difficulty in large-scale MLC problems is that the computational complexity of training and inference of standard methods is at least linear in the number of labels L. In order to scale better with L, our approach to MLC is to encode individual labels on K-sparse bit vectors of dimension B, where B ≪ L, and use a disjunctive encoding of label sets (i.e. bitwise-OR of the codes of the labels that appear in the label set). Then, we learn one binary classifier for each of the B bits of the coding vector, similarly to BR (where K = 1 and B = L). By setting K > 1, one can encode individual labels unambiguously on far fewer than L bits while keeping the disjunctive encoding unambiguous for a large number of label sets of small cardinality.
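In code, the K-sparse disjunctive encoding can be sketched in a few lines (a toy sketch: `make_codes` is a hypothetical helper that simply enumerates K-subsets of the B bits, whereas the paper assigns representative bits from label clusters in Section 3):

```python
import itertools

def make_codes(num_labels, B, K):
    # Hypothetical helper: give each label a distinct K-subset of {0, ..., B-1}.
    codes = dict(zip(range(num_labels), itertools.combinations(range(B), K)))
    if len(codes) < num_labels:
        raise ValueError("B choose K is too small for this many labels")
    return codes

def encode(label_set, codes, B):
    # Disjunctive encoding: bitwise OR of the K-sparse codes of the labels.
    vec = [0] * B
    for label in label_set:
        for bit in codes[label]:
            vec[bit] = 1
    return vec

codes = make_codes(num_labels=10, B=5, K=2)   # C(5, 2) = 10 distinct codes
print(encode({0, 1}, codes, B=5))             # labels 0 and 1 share bit 0
```

With K = 1 this degenerates to one-bit-per-label codes (B = L) as in BR; K > 1 is what allows B ≪ L.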
Compared to BR, our scheme learns only B binary classifiers instead of L, while conserving the desirable property that the classifiers can be trained independently and thus in parallel, making our approach suitable for large-scale problems. The critical point of our method is a simple scheme to select the K representative bits (i.e. those set to 1) of each label, with two desirable properties. First, the encodings of "relevant" label sets are unambiguous with the disjunctive encoding. Secondly, the decoding step, which recovers a label set from an encoding vector, is robust to prediction errors in the encoding vector: in particular, we prove that the number of incorrectly predicted labels is no more than twice the number of incorrectly predicted bits. Our (label) encoding scheme relies on the existence of mutually exclusive clusters of labels in real-life MLC datasets, where labels in different clusters (almost) never appear in the same label set, but labels from the same cluster can. With our encoding scheme, B becomes smaller as more clusters of similar size can be found. In practice, a strict partitioning of the labels into mutually exclusive clusters does not exist, but it can be fairly well approximated by removing a few of the most frequent labels, which are then dealt with using the standard BR approach, and clustering the remaining labels based on their co-occurrence matrix. That way, we can control the encoding dimension B and deal with the computational cost/prediction accuracy tradeoff. Our approach was inspired and motivated by Bloom filters [2], a well-known space-efficient randomized data structure designed for approximate membership testing. Bloom filters use exactly the principle of encoding objects (in our case, labels) by K-sparse vectors and encode a set with the disjunctive encoding of its members. The filter can be queried with one object and the answer is correct up to a small error probability.
The data structure is randomized because the representative bits of each object are obtained by random hash functions; under uniform probability assumptions for the encoded set and the queries, the encoding size B of the Bloom filter is close to the information-theoretic limit for the desired error rate. Such "random" Bloom filter encodings are our main baseline, and we consider our approach as a new design of the hash functions and of the decoding algorithm to make Bloom filters robust to errors in the encoding vector. Some background on (random) Bloom filters, as well as how to apply them to MLC, is given in the next section. The design of the hash functions and the decoding algorithm are then described in Section 3, where we also discuss the properties of our method compared to the related works of [12, 15, 4]. Finally, in Section 4, we present experimental results on two benchmark MLC datasets with a large number of classes, which show that our approach obtains promising performance compared to existing approaches. 2 Bloom Filters for Multilabel Classification Our approach is a reduction from MLC to binary classification, where the rules of the reduction follow a scheme inspired by the encoding/decoding of sets used in Bloom filters. We first describe the formal framework to fix the notation and the goal of our approach, and then give some background on Bloom filters. The main contribution of the paper is described in the next section. Framework Given a set of labels L of size L, MLC is the problem of learning a prediction function c that, for each possible input x, predicts a subset of L. Throughout the paper, the letter y is used for label sets, while the letter ℓ is used for individual labels. Learning is carried out on a training set ((x1, y1), ..., (xn, yn)) of inputs for which the desired label sets are known; we assume the examples are drawn i.i.d. from the data distribution D.
A reduction from MLC to binary classification relies on an encoding function e : y ⊆ L ↦ (e1(y), ..., eB(y)) ∈ {0, 1}^B, which maps subsets of L to bit vectors of size B. Then, each of the B bits is learnt independently by training a sequence of binary classifiers ê = (ê1, ..., êB), where each êj is trained on ((x1, ej(y1)), ..., (xn, ej(yn))). Given a new instance x, the encoding ê(x) is predicted, and the final multilabel classifier c is obtained by decoding ê(x), i.e. ∀x, c(x) = d(ê(x)). The goal of this paper is to design the encoding and decoding functions so that two conditions are met. First, the code size B should be small compared to L, in order to improve the computational cost of training and inference relatively to BR. Second, the reduction should be robust, in the sense that the final performance, measured by the expected Hamming loss HL(c) between the target label sets y and the predictions c(x), is not much larger than HB(ê), the average error of the classifiers we learn. Using Δ to denote the symmetric difference between sets, HL and HB are defined by:

  HL(c) = E_{(x,y)∼D}[ |c(x) Δ y| / L ]   and   HB(ê) = (1/B) Σ_{j=1}^{B} E_{(x,y)∼D}[ 1{ej(y) ≠ êj(y)} ] .   (1)

[Figure 1 data:]
  hash values:  ℓ1: 2, 3, 5   ℓ2: 2, 4, 5   ℓ3: 1, 2, 5   ℓ4: 1, 5, 3
                ℓ5: 1, 2, 6   ℓ6: 3, 5, 6   ℓ7: 3, 4, 5   ℓ8: 2, 5, 6
  encodings:    e({ℓ1}) = 011010,  e({ℓ4}) = 101010,  e({ℓ1, ℓ3, ℓ4}) = e({ℓ1, ℓ4}) = 111010;
                for the example (x, {ℓ1, ℓ4}), the predicted ê(x) = 110010 gives c(x) = d(ê(x)) = {ℓ3}.

Figure 1: Examples of a Bloom filter for a set L = {ℓ1, ..., ℓ8} with 8 elements, using 3 hash functions and 6 bits. (left) The table gives the hash values for each label. (middle-left) For each label, the hash functions give the index of the bits that are set to 1 in the 6-bit boolean vector. The examples of the encodings for {ℓ1} and {ℓ4} are given.
(middle-right) Example of a false positive: the representation of the subset {ℓ1, ℓ4} includes all the representative bits of label ℓ3, so that ℓ3 would be decoded erroneously. (right) Example of propagation of errors: a single erroneous bit in the label set encoding, together with a false positive, leads to three label errors in the final prediction. Bloom Filters Given the set of labels L, a Bloom filter (BF) of size B uses K hash functions from L to {1, ..., B}, which we denote hk : L → {1, ..., B} for k ∈ {1, ..., K} (in a standard approach, each value hk(ℓ) is chosen uniformly at random in {1, ..., B}). These hash functions define the representative bits (i.e. non-zero bits) of each label: each singleton {ℓ} for ℓ ∈ L is encoded by a bit vector of size B with at most K non-zero bits, and each hash function gives the index of one of these non-zero bits in the bit vector. Then, the Bloom filter encodes a subset y ⊆ L by a bit vector of size B, defined by the bitwise OR of the bit vectors of the elements of y. Given the encoding of a set, the Bloom filter can be queried to test the membership of any label ℓ; the filter answers positively if all the representative bits of ℓ are set to 1, and negatively otherwise. A negative answer of the Bloom filter is always correct; however, the bitwise OR of label set encodings leads to the possibility of false positives, because even though any two labels have different encodings, the representative bits of one label can be included in the union of the representative bits of two or more other labels. Figure 1 (left) to (middle-right) gives representative examples of the encoding/querying scheme of Bloom filters and an example of a false positive.
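The add/query cycle, including the false positive of Figure 1, can be reproduced directly (a minimal sketch; the hash values are hard-coded from the table in Figure 1 instead of being drawn from random hash functions):

```python
class BloomFilter:
    # Minimal Bloom filter over a fixed label universe; hash values are
    # supplied explicitly as label -> list of K bit indices (1-based).
    def __init__(self, B, hashes):
        self.bits = [0] * B
        self.hashes = hashes

    def add(self, label):
        for h in self.hashes[label]:
            self.bits[h - 1] = 1

    def query(self, label):
        # Positive iff every representative bit of the label is set.
        return all(self.bits[h - 1] for h in self.hashes[label])

# Hash values from Figure 1 (B = 6 bits, K = 3 hash functions).
hashes = {1: [2, 3, 5], 2: [2, 4, 5], 3: [1, 2, 5], 4: [1, 5, 3],
          5: [1, 2, 6], 6: [3, 5, 6], 7: [3, 4, 5], 8: [2, 5, 6]}
bf = BloomFilter(6, hashes)
bf.add(1)
bf.add(4)           # encode the set {l1, l4}: bits 1, 2, 3 and 5 are set
print(bf.query(3))  # True: l3 is a false positive (its bits 1, 2, 5 are all set)
print(bf.query(5))  # False: bit 6 of l5 is still 0
```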
Bloom Filters for MLC The encoding and decoding schemes of BFs are appealing to define the encoder e and the decoder d in a reduction of MLC to binary classification (decoding consists in querying each label), because they are extremely simple and computationally efficient, but also because, if we assume that B ≪ L and that the random hash functions are perfect, then, given a random subset of size C ≪ L, the false positive rate of a BF encoding this set is in O((1/2)^{(B/C) ln 2}) for the optimal number of hash functions. This rate is, up to a constant factor, the information-theoretic limit [3]. Indeed, as shown in Section 4, the use of Bloom filters with random hash functions for MLC (denoted S-BF, for Standard BF, hereafter) leads to rather good results in practice. Nonetheless, there is much room for improvement with respect to the standard approach above. First, the distribution of label sets in usual MLC datasets is far from uniform. On the one hand, this leads to a substantial increase in the error rate of the BF compared to the theoretical calculation, but, on the other hand, it is an opportunity to make sure that false positive answers only occur in cases that are detectable from the observed distribution of label sets: if y is a label set and ℓ ∉ y is a false positive given e(y), ℓ can be detected as a false positive if we know that ℓ never (or rarely) appears together with the labels in y. Second, and more importantly, the decoding approach of BFs is far from robust to errors in the predicted representation. Indeed, BFs are able to encode subsets on B ≪ L bits because each bit is representative for several labels. In the context of MLC, the consequence is that any single bit incorrectly predicted may include in (or exclude from) the predicted label set all the labels for which it is representative.
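Under this idealized analysis, the optimal number of hash functions and the resulting false-positive rate follow the classical Bloom-filter formulas (a sketch of the standard calculation for B bits and sets of cardinality C; not code from the paper):

```python
import math

def optimal_num_hashes(B, C):
    # Classical Bloom-filter analysis: K* = (B / C) * ln 2.
    return (B / C) * math.log(2)

def false_positive_rate(B, C):
    # At K*, the false-positive rate is about (1/2)^((B/C) ln 2).
    return 0.5 ** optimal_num_hashes(B, C)

# E.g. label sets of cardinality C = 2 encoded on B = 20 bits:
print(round(optimal_num_hashes(20, 2), 2))    # about 6.93 hash functions
print(round(false_positive_rate(20, 2), 5))   # under 1% false positives
```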
Figure 1 (right) gives an example of the situation, where a single error in the predicted encoding, combined with a false positive, results in 3 errors in the final prediction. Our main contribution, which we detail in the next section, is to use the non-uniform distribution of label sets to design the hash functions and a decoding algorithm that make sure that any incorrectly predicted bit has a limited impact on the predicted label set. 3 From Label Clustering to Hash Functions and Robust Decoding We present a new method that we call Robust Bloom Filters (R-BF). It improves over random hash functions by relying on a structural feature of the label sets in MLC datasets: many labels are never observed in the same target set, or co-occur with a probability that is small enough to be neglected. We first formalize the structural feature we use, which is a notion of mutually exclusive clusters of labels, then we describe the hash functions and the robust decoding algorithm that we propose. 3.1 Label Clustering The strict formal property on which our approach is based is the following: given P subsets L1, ..., LP of L, we say that (L1, ..., LP) are mutually exclusive clusters if no target set contains labels from more than one of the Lp, p = 1, ..., P, or, equivalently, if the following condition holds:

  ∀p ∈ {1, ..., P},   P_{y∼D_Y}( y ∩ Lp ≠ ∅  and  y ∩ ∪_{p′≠p} Lp′ ≠ ∅ ) = 0 ,   (2)

where D_Y is the marginal distribution over label sets. For the disjunctive encoding of Bloom filters, this assumption implies that if we design the hash functions such that the false positives for a label set y belong to a cluster that is mutually exclusive with (at least one) label in y, then the decoding step can detect and correct them. To that end, it is sufficient to ensure that, for each bit of the Bloom filter, all the labels for which this bit is representative belong to mutually exclusive clusters.
This will lead us to a simple two-step decoding algorithm: cluster identification, then label set prediction within the cluster. In terms of compression ratio B/L, we can directly see that the more mutually exclusive clusters there are, the more labels can share a single bit of the Bloom filter. Thus, more (balanced) mutually exclusive clusters will result in a smaller encoding size B, making our method more efficient overall. This notion of mutually exclusive clusters is much stronger than our basic observation that some pairs of labels rarely or never co-occur, and in practice it may be difficult to find a partition of L into mutually exclusive clusters because the co-occurrence graph of labels is connected. However, as we shall see in the experiments, after removing the few most central labels (which we call hubs, and which in practice roughly correspond to the most frequent labels), the labels can be clustered into (almost) mutually exclusive clusters using a standard clustering algorithm for weighted graphs. In our approach, the hubs are dealt with outside the Bloom filter, with a standard binary relevance scheme. The prediction for the remaining labels is then constrained to predict labels from at most one of the clusters. From the point of view of prediction performance, we lose the possibility of predicting arbitrary label sets, but gain the possibility of correcting a non-negligible part of the incorrectly predicted bits. As we shall see in the experiments, the trade-off is very favorable. We would like to note at this point that dealing with the hubs or the most frequent labels with binary relevance may not particularly be a drawback of our approach: the occurrence probabilities of the labels are long-tailed, and the first few labels may be sufficiently important to deserve a special treatment. What really needs to be compressed is the large set of labels that occur rarely.
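Before committing to a clustering, condition (2) can be estimated empirically: the sketch below (a hypothetical helper, not from the paper) counts the fraction of training label sets that span more than one cluster, i.e. the label sets that the constrained decoder could never predict exactly:

```python
def violation_rate(label_sets, cluster_of):
    # cluster_of maps each non-hub label to its cluster id; hub labels
    # (handled by binary relevance) are simply skipped here.
    bad = 0
    for y in label_sets:
        clusters = {l_cluster for l in y
                    if (l_cluster := cluster_of.get(l)) is not None}
        if len(clusters) > 1:
            bad += 1
    return bad / max(len(label_sets), 1)

cluster_of = {"a": 0, "b": 0, "c": 1, "d": 1}
sets = [{"a", "b"}, {"c"}, {"a", "d"}]      # the last set spans two clusters
print(violation_rate(sets, cluster_of))     # 1/3 of the sets are unrecoverable
```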
To find the label clustering, we first build the co-occurrence graph and remove the hubs using the degree centrality measure. The remaining labels are then clustered using the Louvain algorithm [1]; to control the number of clusters, a maximum size is fixed and larger clusters are recursively clustered until they reach the desired size. Finally, to obtain (almost) balanced clusters, the smallest clusters are merged. Both the number of hubs and the cluster size are parameters of the algorithm, and, in Section 4, we show how to choose them before training at negligible computational cost. 3.2 Hash functions and decoding From now on, we assume that we have access to a partition of L into mutually exclusive clusters (in practice, this corresponds to the labels that remain after removal of the hubs). Hash functions Given the parameter K, constructing K-sparse encodings follows two conditions: 1. two labels from the same cluster cannot share any representative bit; 2. two labels from different clusters can share at most K − 1 representative bits.

[Figure 2 data:]
  bit : labels it represents        bit : labels it represents
   1 : {1, 2, 3, 4, 5}               7 : {16, 17, 18, 19, 20}
   2 : {1, 6, 7, 8, 9}               8 : {16, 21, 22, 23, 24}
   3 : {2, 6, 10, 11, 12}            9 : {17, 21, 25, 26, 27}
   4 : {3, 7, 10, 13, 14}           10 : {18, 22, 25, 28, 29}
   5 : {4, 8, 11, 13, 15}           11 : {19, 23, 26, 28, 30}
   6 : {5, 9, 12, 14, 15}           12 : {20, 24, 27, 29, 30}

  cluster : labels                  cluster : labels
   1 : {1, 15}                       9 : {9, 23}
   2 : {2, 16}                      10 : {10, 24}
   3 : {3, 17}                      11 : {11, 25}
   4 : {4, 18}                      12 : {12, 26}
   5 : {5, 19}                      13 : {13, 27}
   6 : {6, 20}                      14 : {14, 28}
   7 : {7, 21}                      15 : {15, 29}
   8 : {8, 22}

Figure 2: Representative bits for 30 labels partitioned into P = 15 mutually exclusive label clusters of size R = 2, using K = 2 representative bits per label and batches of Q = 6 bits. The table on the right gives the label clustering.
The injective mapping between labels and subsets of bits is defined by g : ℓ ↦ {g1(ℓ), g2(ℓ)} for ℓ ∈ {1, ..., 15} and, for ℓ ∈ {16, ..., 30}, by ℓ ↦ {6 + g1(ℓ − 15), 6 + g2(ℓ − 15)}. Finding an encoding that satisfies the conditions above is not difficult if we consider, for each label, the set of its representative bits. In the rest of the paragraph, we say that a bit of the Bloom filter "is used for the encoding of a label" when this bit may be a representative bit of the label. If the bit "is not used for the encoding of a label", then it cannot be a representative bit of the label. Let us consider the P mutually exclusive label clusters, and denote by R the size of the largest cluster. To satisfy Condition 1., we find an encoding on B = R·Q bits, for Q ≥ K and P ≤ \binom{Q}{K}, as follows. For a given r ∈ {1, ..., R}, the r-th batch of Q successive bits (i.e. the bits of index (r − 1)Q + 1, (r − 1)Q + 2, ..., rQ) is used only for the encoding of the r-th label of each cluster. That way, each batch of Q bits is used for the encoding of a single label per cluster (enforcing the first condition) but can be used for the encoding of P labels overall. For Condition 2., we notice that given a batch of Q bits, there are \binom{Q}{K} different subsets of K bits. We then injectively map the (at most) P labels to the subsets of size K to define the K representative bits of these labels. In the end, with a Bloom filter of size B = R·Q, we have K-sparse encodings that satisfy the two conditions above for L ≤ R·\binom{Q}{K} labels partitioned into P ≤ \binom{Q}{K} mutually exclusive clusters of size at most R. Figure 2 gives an example of such an encoding. In the end, the scheme is most efficient (in terms of the compression ratio B/L) when the clusters are perfectly balanced and when P is exactly equal to \binom{Q}{K} for some Q. For instance, for K = 2, which we use in our experiments, if P = Q(Q + 1)/2 for some integer Q and the clusters are almost perfectly balanced, then B/L ≈ √(2/P).
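The batch construction can be written down directly (a toy sketch with hypothetical names: the r-th label of cluster p gets the p-th K-subset of the bits in the r-th batch of Q bits, so B = R·Q):

```python
from itertools import combinations

def build_hashes(clusters, Q, K):
    # Condition 1: the r-th label of each cluster only uses the r-th batch
    # of Q bits, so same-cluster labels never share a bit.
    # Condition 2: within a batch, cluster p gets the p-th K-subset of the
    # Q bits, so labels from different clusters share at most K - 1 bits.
    subsets = list(combinations(range(Q), K))
    assert len(clusters) <= len(subsets), "need P <= C(Q, K)"
    hashes = {}
    for p, cluster in enumerate(clusters):
        for r, label in enumerate(cluster):
            hashes[label] = [r * Q + b for b in subsets[p]]
    return hashes

# 15 clusters of size R = 2 (30 labels), Q = 6, K = 2: B = R*Q = 12 bits,
# mirroring the scale of Figure 2 (with 0-based labels and bits).
clusters = [(2 * p, 2 * p + 1) for p in range(15)]
h = build_hashes(clusters, Q=6, K=2)
print(h[0], h[1])   # same cluster, different batches: no shared bit
```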
The ratio becomes more and more favorable as Q increases and as K increases up to Q/2, but the number of different clusters P must also be large. Thus, the method should be most efficient on datasets with a very large number of labels, assuming that P increases with L in practice. Decoding and Robustness We now present the decoding algorithm, followed by a theoretical guarantee that each incorrectly predicted bit in the Bloom filter cannot imply more than 2 incorrectly predicted labels. Given an example x and its predicted encoding ê(x), the predicted label set d(ê(x)) is computed with the following two-step process, in which we say that a bit is "representative of one cluster" if it is a representative bit of one label in the cluster: a. (Cluster Identification) For each cluster Lp, compute its cluster score sp, defined as the number of its representative bits that are set to 1 in ê(x). Choose Lp̂ for p̂ ∈ argmax_{p ∈ {1, ..., P}} sp; b. (Label Set Prediction) For each label ℓ ∈ Lp̂, let s′ℓ be the number of representative bits of ℓ set to 1 in ê(x); add ℓ to d(ê(x)) with probability s′ℓ/K. In case of ties in the cluster identification, the tie-breaking rule can be arbitrary. For instance, in our experiments, we use logistic regression as base learner for the binary classifiers, so we have access to posterior probabilities of being 1 for each bit of the Bloom filter. In case of ties in the cluster identification, we restrict our attention to the clusters that maximize the cluster score, and we recompute their cluster scores using the posterior probabilities instead of the binary decisions. The cluster which maximizes the new cluster score is chosen. The choice of a randomized prediction for the labels prevents a single incorrectly predicted bit from resulting in too many incorrectly predicted labels. The robustness of the encoding/decoding scheme is proved below: Theorem 1 Let L be the label set, and let (L1, ..., LP) be a partition of L satisfying (2).
Assume that the encoding function satisfies Conditions 1. and 2., and that decoding is performed with the two-step process a.–b. Then, using the definitions of HL and HB in (1), we have:

  HL(d ∘ ê) ≤ (2B/L) HB(ê)

for a K-sparse encoding, where the expectation in HL is also taken over the randomized predictions. Sketch of proof Let (x, y) be an example. We compare the expected number of incorrectly predicted labels HL(y, d(ê(x))) = E[|d(ê(x)) Δ y|] (expectation taken over the randomized prediction) and the number of incorrectly predicted bits HB(ê(x), e(y)) = Σ_{j=1}^{B} 1{êj(x) ≠ ej(y)}. Let us denote by p* the index of the cluster in which y is included, and by p̂ the index of the cluster chosen in step a. We consider the two following cases. p̂ = p*: if the cluster is correctly identified, then each incorrectly predicted bit that is representative for the cluster costs 1/K in HL(y, d(ê(x))). All other bits do not matter. We thus have HL(y, d(ê(x))) ≤ (1/K) HB(ê(x), e(y)). p̂ ≠ p*: if the cluster is not correctly identified, then HL(y, d(ê(x))) is the sum of (1) the number of labels that should be predicted but are not (|y|), and (2) the number of labels that are in the predicted label set but should not be. To bound the ratio HL(y, d(ê(x))) / HB(ê(x), e(y)), we first notice that at least as many representative bits are predicted as 1 for Lp̂ as for Lp*. Since each label of Lp̂ shares at most K − 1 representative bits with a label of Lp*, there are at least |y| incorrect bits. Moreover, the maximum contribution to labels predicted in the incorrect cluster by correctly predicted bits is at most ((K − 1)/K)|y|. Each additional contribution of 1/K in HL(y, d(ê(x))) comes from a bit that is incorrectly predicted as 1 instead of 0 (and is representative for Lp̂). Let us denote by k the number of such contributions. Then, the most unfavorable ratio HL(y, d(ê(x))) / HB(ê(x), e(y)) is smaller than

  max_{k ≥ 0} [ k/K + |y|(1 + (K − 1)/K) ] / max(|y|, k) = [ |y|/K + |y|(1 + (K − 1)/K) ] / |y| = 2.
Taking the expectation over (x, y) completes the proof (the factor B/L comes from the normalization factors in (1)). □ 3.3 Comparison to Related Works The use of correlations between labels has a long history in MLC [11, 8, 14], but correlations are most often used to improve prediction performance at the expense of computational complexity, through increasingly complex models, rather than to improve computational complexity using strong negative correlations as we do here. The work most closely related to ours is that of Hsu et al. [12], where the authors propose an approach based on compressed sensing to obtain low-dimension encodings of label sets. Their approach has the advantage of a theoretical guarantee in terms of regret (rather than error, as we provide), without strong structural assumptions on the label sets; the complexity of learning scales in O(C ln(L)), where C is the number of labels in label sets. For our approach, since \binom{Q}{Q/2} = Θ(4^{Q/2}/√Q) as Q → ∞, it could be possible to obtain a logarithmic rate under the rather strong assumption that the number of clusters P increases linearly with L. As we shall see in our experiments, however, even with a rather large number of labels (e.g. 1,000), the asymptotic logarithmic rate is far from being achieved for all methods. In practice, the main drawback of their method is that it needs to know the size of the label set to predict. This is an extremely strong requirement when classification decisions are needed (less strong when only a ranking of the labels is needed), in contrast to our method, which is inherently designed for classification. Another related work is that of [4], which is based on SVD for dimensionality reduction rather than compressed sensing. Their method can exploit correlations between labels and take classification decisions. However, their approach is purely heuristic, and no theoretical guarantee is given.
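Putting Section 3.2 together, the two-step decoding a.–b. can be sketched as follows (hypothetical names; ties in step a. are broken by cluster order here, whereas the paper recomputes tied scores from the classifiers' posterior probabilities):

```python
import random

def decode(bits, hashes, clusters, K, rng=random.random):
    # a. Cluster identification: pick the cluster whose representative
    #    bits contain the most ones in the predicted encoding.
    def score(label):
        return sum(bits[b] for b in hashes[label])
    best = max(clusters, key=lambda c: sum(score(l) for l in c))
    # b. Label set prediction: keep each label of the chosen cluster
    #    with probability s'_l / K.
    return {l for l in best if rng() < score(l) / K}

hashes = {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]}
clusters = [(0, 1), (2, 3)]
print(decode([1, 1, 0, 0, 0, 0, 0, 0], hashes, clusters, K=2))   # {0}
```

In this toy call the outcome is deterministic, since every s′ℓ/K is 0 or 1; with partially wrong bits, the randomization in step b. is what caps the expected damage at the factor 2 of Theorem 1.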
Figure 3: (left) Unrecoverable Hamming loss (UHL) due to label clustering of the R-BF, as a function of the code size B on RCV-Industries (similar behavior on the Wikipedia1k dataset), for 0 hubs, 20 hubs, and the optimal setting; the optimal curve represents the best UHL over different settings (number of hubs, max cluster size) for a given code size. (right) Test Hamming loss vs. code size on RCV-Industries for S-BF, PLST, CS-OMP and R-BF. 4 Experiments We performed experiments on two large-scale real-world datasets: RCV-Industries, which is a subset of the RCV1 dataset [13] that considers the industry categories only (we used the first testing set file from the RCV1 site instead of the original training set since it is larger), and Wikipedia1k, which is a subsample of the wikipedia dataset release of the 2012 large scale hierarchical text classification challenge [17]. On both datasets, the labels are originally organized in a hierarchy, but we transformed them into plain MLC datasets by keeping only leaf labels. For RCV-Industries, we obtain 303 labels for 72,334 examples. The average cardinality of label sets is 1.73, with a maximum of 30; 20% of the examples have label sets of cardinality ≥ 2. For Wikipedia1k, we kept the 1,000 most represented leaf labels, which leads to 110,530 examples with an average label set cardinality of 1.11 (max. 5). 10% of the examples have label sets of cardinality ≥ 2. We compared our methods, the standard (i.e. with random hash functions) BF (S-BF) and the Robust BF (R-BF) presented in Section 3, to binary relevance (BR) and to three MLC algorithms designed for MLC problems with a large number of labels: a pruned version of BR proposed in [7] (called BR-Dekel from now on), the compressed sensing approach (CS) of [12], and the principal label space transformation (PLST) [4].
BR-Dekel consists in removing from the prediction all the labels whose probability of a true positive (PTP) on the validation set is smaller than their probability of a false positive (PFP). To control the code size B in BR-Dekel, we rank the labels based on the ratio PTP/PFP and keep the top B labels. In that case, the inference complexity is similar to that of the BF models, but the training complexity is still linear in L. For CS, following [4], we used orthogonal matching pursuit (CS-OMP) for decoding and selected the number of labels to predict, in the range {1, 2, ..., 30}, on the validation set. For S-BF, the number of (random) hash functions K is also chosen on the validation set, among {1, 2, ..., 10}. For R-BF, we use K = 2 hash functions. The code size B can be freely set for all methods except for Robust BF, where different settings of the maximum cluster size and the number of hubs may lead to the same code size. Since the use of a label clustering in R-BF leads to unrecoverable errors even if the classifiers perform perfectly well (because labels of different clusters cannot be predicted together), we chose the max cluster size (among {10, 20, ..., 50}) and the number of hubs (among {0, 10, 20, 30, ..., 100} for RCV-Industries and {0, 50, 100, ..., 300} for Wikipedia1k) that minimize the resulting unrecoverable Hamming loss (UHL), computed on the train set. Figure 3 (left) shows how the UHL naturally decreases when the number of hubs increases, since the method then becomes closer to BR; but at the same time the overall code size B increases, because it is the sum of the filter's size and the number of hubs. Nonetheless, we can observe on the figure that the UHL rapidly reaches a very low value, confirming that the label clustering assumption is reasonable in practice. All the methods involve training binary classifiers or regression functions.
On both datasets, we used linear functions with L2 regularization (the global regularization factor in PLST and CS-OMP, as well as the regularization factor of each binary classifier in the BF and BR approaches, was chosen on the validation set among {1, 0.1, . . . , 10^-5}), and unit-norm normalized TF-IDF features. We used the Liblinear [10] implementation of logistic regression as the base binary classifier.

Table 1: Test Hamming loss (HL, in %), micro (m-F1) and macro (M-F1) F1-scores. B is the code size. The results of the significance test for a p-value less than 5% are denoted † to indicate the best performing method using the same B and ∗ to indicate the best performing method overall.

                      RCV-Industries                      Wikipedia1K
Classifier    B      HL       m-F1     M-F1       B      HL        m-F1     M-F1
BR            303    0.200∗   72.43∗   47.82∗     1000   0.0711    55.96    34.70
BR-Dekel      150    0.308    46.98    30.14      250    0.0984    22.18    12.16
              200    0.233    65.78    40.09      500    0.0868    38.33    24.52
S-BF          150    0.223    67.45    40.29      250    0.0742    53.02    31.41
              200    0.217    68.32    40.95      500    0.0734    53.90    32.57
R-BF          150    0.210†   71.31†   43.44      240    0.0728†   55.85    34.65
              200    0.205†   71.86†   44.57      500    0.0705†∗  57.31    36.85
CS-OMP        150    0.246    67.59    45.22†     250    0.0886    57.96†   41.84†
              200    0.245    67.71    45.82†     500    0.0875    58.46†∗  42.52†∗
PLST          150    0.226    68.87    32.36      250    0.0854    42.45    09.53
              200    0.221    70.35    40.78      500    0.0828    45.95    16.73

Results. Table 1 gives the test performance of all the methods on both datasets for different code sizes. We are mostly interested in the Hamming loss, but we also provide the micro and macro F-measures. The results are averaged over 10 random train/validation/test splits of the datasets, containing respectively 50%/25%/25% of the data. The standard deviations of the values are negligible (smaller than 10^-3 times the value of the performance measure). Our BF methods clearly outperform all other methods, and R-BF yields significant improvements over S-BF. On Wikipedia1k, with 500 classifiers, the Hamming loss (in %) of S-BF is 0.0734 while it is only 0.0705 for R-BF.
This performance is similar to BR's (0.0711), which uses twice as many classifiers. The simple pruning strategy BR-Dekel is the worst baseline on both datasets, confirming that considering all classes is necessary on these datasets. CS-OMP reaches a much higher Hamming loss (about 23% worse than BR on both datasets when using 50% fewer classifiers). CS-OMP achieves the best performance on the macro-F measure though. This is because the size of the predicted label sets is fixed for CS, which increases recall but leads to poor precision. We used OMP as the decoding procedure for CS since it performed better than Lasso and Correlation decoding (CD) [12] (for instance, on Wikipedia1k with a code size of 500, OMP achieves a Hamming loss of 0.0875 while the Hamming loss is 0.0894 for Lasso and 0.1005 for CD). PLST improves over CS-OMP, but its performance is lower than that of S-BF (by about 3.5% on RCV-Industries and 13% on Wikipedia1k when using 50% fewer classifiers than BR). The macro F-measure indicates that PLST likely suffers from class imbalance (only the most frequent labels are predicted), probably because the label set matrix on which the SVD is performed is dominated by the most frequent labels. Figure 3 (right) gives the general picture of the Hamming loss of the methods on a larger range of code sizes. Overall, R-BF has the best performance except for very small code sizes, where the UHL becomes too high.

Runtime analysis. Experiments were performed on a computer with 24 Intel Xeon 2.6 GHz CPUs. For all methods, the overall training time is dominated by the time to train the binary classifiers or regressors, which depends linearly on the code size. At test time, the cost is also dominated by the classifiers' predictions, and the decoding algorithm of R-BF is the fastest. For instance, on Wikipedia1k, training one binary classifier takes 12.35s on average, and inference with one classifier (for the whole test dataset) takes 3.18s.
Thus, BR requires about 206 minutes (1000 × 12.35s) for training and 53 minutes for testing on the whole test set. With B = 500, R-BF requires about half that time, including the selection of the number of hubs and the max cluster size at training time, which is small (computing the UHL of an R-BF configuration takes 9.85s, including the label clustering step, and we try fewer than 50 of them). For the same B, encoding for CS takes 6.24s and the SVD in PLST takes 81.03s, while decoding takes 24.39s at test time for CS and 7.86s for PLST.

Acknowledgments

This work was partially supported by the French ANR as part of the project Class-Y (ANR-10BLAN-02) and carried out in the framework of the Labex MS2T (ANR-11-IDEX-0004-02).

References
[1] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 10, 2008.
[2] B. H. Bloom. Space/time trade-offs in hash coding with allowable errors. Commun. ACM, 13(7):422–426, 1970.
[3] L. Carter, R. Floyd, J. Gill, G. Markowsky, and M. Wegman. Exact and approximate membership testers. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC '78, pages 59–65, New York, NY, USA, 1978. ACM.
[4] Y.-N. Chen and H.-T. Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, pages 1538–1546, 2012.
[5] W. Cheng and E. Hüllermeier. Combining instance-based learning and logistic regression for multilabel classification. Machine Learning, 76(2-3):211–225, 2009.
[6] K. Christensen, A. Roginsky, and M. Jimeno. A new analysis of the false positive rate of a Bloom filter. Inf. Process. Lett., 110(21):944–949, Oct. 2010.
[7] O. Dekel and O. Shamir. Multiclass-multilabel classification with more classes than examples. Volume 9, pages 137–144, 2010.
[8] K. Dembczynski, W. Cheng, and E. Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML, pages 279–286, 2010.
[9] K. Dembczynski, W. Waegeman, W. Cheng, and E. Hüllermeier. On label dependence and loss minimization in multi-label classification. Machine Learning, 88(1-2):5–45, 2012.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. Liblinear: A library for large linear classification. J. Mach. Learn. Res., 9:1871–1874, June 2008.
[11] B. Hariharan, S. V. N. Vishwanathan, and M. Varma. Large scale max-margin multi-label classification with prior knowledge about densely correlated labels. In Proceedings of the International Conference on Machine Learning, 2010.
[12] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, pages 772–780, 2009.
[13] RCV1 Dataset, http://www.daviddlewis.com/resources/testcollections/rcv1/.
[14] J. Read, B. Pfahringer, G. Holmes, and E. Frank. Classifier chains for multi-label classification. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, ECML PKDD '09, pages 254–269, Berlin, Heidelberg, 2009. Springer-Verlag.
[15] F. Tai and H.-T. Lin. Multilabel classification with principal label space transformation. Neural Computation, 24(9):2508–2542, 2012.
[16] G. Tsoumakas, I. Katakis, and I. Vlahavas. A review of multi-label classification methods. In Proceedings of the 2nd ADBIS Workshop on Data Mining and Knowledge Discovery (ADMKD 2006), pages 99–109, 2006.
[17] Wikipedia Dataset, http://lshtc.iit.demokritos.gr/.
Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC

Roger Frigola1, Fredrik Lindsten2, Thomas B. Schön2,3 and Carl E. Rasmussen1
1. Dept. of Engineering, University of Cambridge, UK, {rf342,cer54}@cam.ac.uk
2. Div. of Automatic Control, Linköping University, Sweden, lindsten@isy.liu.se
3. Dept. of Information Technology, Uppsala University, Sweden, thomas.schon@it.uu.se

Abstract

State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and, instead, infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.

1 Introduction

State-space models (SSMs) constitute a popular and general class of models in the context of time series and dynamical systems. Their main feature is the presence of a latent variable, the state x_t ∈ X ≜ R^{n_x}, which condenses all aspects of the system that can have an impact on its future. A discrete-time SSM with nonlinear dynamics can be represented as

x_{t+1} = f(x_t, u_t) + v_t, (1a)
y_t = g(x_t, u_t) + e_t, (1b)

where u_t denotes a known external input, y_t denotes the measurements, and v_t and e_t denote i.i.d. noises acting on the dynamics and the measurements, respectively.
The function f encodes the dynamics and g describes the relationship between the observations and the unobserved states. We are primarily concerned with the problem of learning general nonlinear SSMs. The aim is to find a model that can adaptively increase its complexity when more data is available. To this effect, we employ a Bayesian nonparametric model for the dynamics (1a). This provides a flexible model that is not constrained by any limiting assumptions caused by postulating a particular functional form. More specifically, we place a Gaussian process (GP) prior [1] over the unknown function f. The resulting model is a generalization of the standard parametric SSM. The functional form of the observation model g is assumed to be known, possibly parameterized by a finite dimensional parameter. This is often a natural assumption, for instance in engineering applications where g corresponds to a sensor model – we typically know what the sensors are measuring, at least up to some unknown parameters. Furthermore, using too flexible models for both f and g can result in problems of non-identifiability. We adopt a fully Bayesian approach whereby we find a posterior distribution over all the latent entities of interest, namely the state transition function f, the hidden state trajectory x_{0:T} ≜ {x_i}_{i=0}^{T} and any hyper-parameter θ of the model. This is in contrast with existing approaches for using GPs to model SSMs, which tend to model the GP using a finite set of target points, in effect making the model parametric [2]. Inferring the distribution over the state trajectory p(x_{0:T} | y_{0:T}, u_{0:T}) is an important problem in itself known as smoothing. We use a tailored particle Markov Chain Monte Carlo (PMCMC) algorithm [3] to efficiently sample from the smoothing distribution whilst marginalizing over the state transition function. This contrasts with conventional approaches to smoothing, which require a fixed model of the transition dynamics.
Once we have obtained an approximation of the smoothing distribution, with the dynamics of the model marginalized out, learning the function f is straightforward since its posterior is available in closed form given the state trajectory. Our only approximation is that of the sampling algorithm. We report very good mixing enabled by the use of recently developed PMCMC samplers [4] and the exact marginalization of the transition dynamics. There is by now a rich literature on GP-based SSMs. For instance, Deisenroth et al. [5, 6] presented refined approximation methods for filtering and smoothing for already learned GP dynamics and measurement functions. In fact, the method proposed in the present paper provides a vital component needed for these inference methods, namely that of learning the GP model in the first place. Turner et al. [2] applied the EM algorithm to obtain a maximum likelihood estimate of parametric models which had the form of GPs where both inputs and outputs were parameters to be optimized. This type of approach can be traced back to [7], where Ghahramani and Roweis applied EM to learn models based on radial basis functions. Wang et al. [8] learn an SSM with GPs by finding a MAP estimate of the latent variables and hyper-parameters. They apply the learning in cases where the dimension of the observation vector is much higher than that of the latent state, in what becomes a form of dynamic dimensionality reduction. This procedure would have the risk of overfitting in the common situation where the state-space is high-dimensional and there is significant uncertainty in the smoothing distribution.

2 Gaussian Process State-Space Model

We describe the generative probabilistic model of the Gaussian process SSM (GP-SSM) represented in Figure 1b by

f(x_t) ∼ GP(m_{θx}(x_t), k_{θx}(x_t, x'_t)), (2a)
x_{t+1} | f_t ∼ N(x_{t+1} | f_t, Q), (2b)
y_t | x_t ∼ p(y_t | x_t, θ_y), (2c)

and x_0 ∼ p(x_0), where we avoid notational clutter by omitting the conditioning on the known inputs u_t.
In addition, we put a prior p(θ) over the various hyper-parameters θ = {θ_x, θ_y, Q}. Also, note that the measurement model (2c) and the prior on x_0 can take any form since we do not rely on their properties for efficient inference. The GP is fully described by its mean function and its covariance function. An interesting property of the GP-SSM is that any a priori insight into the dynamics of the system can be readily encoded in the mean function. This is useful, since it is often possible to capture the main properties of the dynamics, e.g. by using a simple parametric model or a model based on first principles. Such simple models may be insufficient on their own, but useful together with the GP-SSM, as the GP is flexible enough to model complex departures from the mean function. If no specific prior model is available, the linear mean function m(x_t) = x_t is a good generic choice. Interestingly, the prior information encoded in this model will normally be more vague than the prior information encoded in parametric models. The measurement model (2c) implicitly contains the observation function g and the distribution of the i.i.d. measurement noise e_t.

Figure 1: Graphical models for (a) standard GP regression and (b) the GP-SSM model. The thick horizontal bars represent sets of fully connected nodes.

3 Inference over States and Hyper-parameters

Direct learning of the function f in (2a) from input/output data {u_{0:T−1}, y_{0:T}} is challenging since the states x_{0:T} are not observed. Most (if not all) previous approaches attack this problem by reverting to a parametric representation of f which is learned alongside the states. We address this problem in a fundamentally different way by marginalizing out f, allowing us to respect the nonparametric nature of the model.
A challenge with this approach is that the marginalization of f introduces dependencies across time for the state variables, which leads to the loss of the Markovian structure of the state process. However, recently developed inference methods combining sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) allow us to tackle this problem. We discuss the marginalization of f in Section 3.1 and present the inference algorithms in Sections 3.2 and 3.3.

3.1 Marginalizing out the State Transition Function

Targeting the joint posterior distribution of the hyper-parameters, the latent states and the latent function f is problematic due to the strong dependencies between x_{0:T} and f. We therefore marginalize the dynamical function from the model, and instead target the distribution p(θ, x_{0:T} | y_{1:T}) (recall that conditioning on u_{0:T−1} is implicit). In the MCMC literature, this is referred to as collapsing [9]. Hence, we first need to find an expression for the marginal prior p(θ, x_{0:T}) = p(x_{0:T} | θ) p(θ). Focusing on p(x_{0:T} | θ) we note that, although this distribution is not Gaussian, it can be represented as a product of Gaussians. Omitting the dependence on θ in the notation, we obtain

p(x_{1:T} | θ, x_0) = ∏_{t=1}^{T} p(x_t | θ, x_{0:t−1}) = ∏_{t=1}^{T} N(x_t | μ_t(x_{0:t−1}), Σ_t(x_{0:t−1})), (3a)

with

μ_t(x_{0:t−1}) = m_{t−1} + K_{t−1,0:t−2} K̃_{0:t−2}^{−1} (x_{1:t−1} − m_{0:t−2}), (3b)
Σ_t(x_{0:t−1}) = K̃_{t−1} − K_{t−1,0:t−2} K̃_{0:t−2}^{−1} K_{t−1,0:t−2}^⊤ (3c)

for t ≥ 2, and μ_1(x_0) = m_0, Σ_1(x_0) = K̃_0. Equation (3) follows from the fact that, once conditioned on x_{0:t−1}, a one-step prediction for the state variable is a standard GP prediction. Here, we have defined the mean vector m_{0:t−1} ≜ [m(x_0)^⊤ . . . m(x_{t−1})^⊤]^⊤ and the (n_x t) × (n_x t) positive definite matrix K_{0:t−1} with block entries [K_{0:t−1}]_{i,j} = k(x_{i−1}, x_{j−1}). We use two sets of indices, as in K_{t−1,0:t−2}, to refer to the off-diagonal blocks of K_{0:t−1}. We also define K̃_{0:t−1} = K_{0:t−1} + I_t ⊗ Q.
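The one-step predictions (3b)-(3c) are standard GP conditionals. A minimal NumPy sketch for a scalar state with a zero mean function and a squared-exponential covariance (the kernel choice and hyper-parameter values are arbitrary assumptions, not the paper's) is:

```python
import numpy as np

def sq_exp(a, b, ell=1.0, sf=1.0):
    # squared-exponential covariance (illustrative hyper-parameters)
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def one_step_predict(x, Q=0.1):
    # Predict the next state given the trajectory so far, as in (3b)-(3c):
    # a GP conditional with inputs x_{0:t-2}, outputs x_{1:t-1}, noise Q.
    inputs, outputs = x[:-1], x[1:]
    Ktilde = sq_exp(inputs, inputs) + Q * np.eye(len(inputs))
    kstar = sq_exp(x[-1:], inputs)            # covariance of new input vs history
    mu = kstar @ np.linalg.solve(Ktilde, outputs)
    var = sq_exp(x[-1:], x[-1:]) + Q - kstar @ np.linalg.solve(Ktilde, kstar.T)
    return mu.item(), var.item()

x = np.array([0.0, 0.5, 0.9, 1.2])            # hypothetical scalar trajectory
mu, var = one_step_predict(x)                 # predictive mean and variance
assert var > 0.0
```

The predictive variance always includes the process noise Q, matching the K̃ notation of (3c).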
We can also express (3a) more succinctly as

p(x_{1:t} | θ, x_0) = |2π K̃_{0:t−1}|^{−1/2} exp(−(1/2) (x_{1:t} − m_{0:t−1})^⊤ K̃_{0:t−1}^{−1} (x_{1:t} − m_{0:t−1})). (4)

This expression looks very much like a multivariate Gaussian density function. However, we emphasize that this is not the case, since both m_{0:t−1} and K̃_{0:t−1} depend (nonlinearly) on the argument x_{1:t}. In fact, (4) will typically be very far from Gaussian.

3.2 Sequential Monte Carlo

With the prior (4) in place, we now turn to posterior inference and we start by considering the joint smoothing distribution p(x_{0:T} | θ, y_{0:T}). The sequential nature of the proposed model suggests the use of SMC. Though most well known for filtering in Markovian SSMs – see [10, 11] for an introduction – SMC is applicable also to non-Markovian latent variable models. We seek to approximate the sequence of distributions p(x_{0:t} | θ, y_{0:t}), for t = 0, . . . , T. Let {x_{0:t−1}^i, w_{t−1}^i}_{i=1}^N be a collection of weighted particles approximating p(x_{0:t−1} | θ, y_{0:t−1}) by the empirical distribution p̂(x_{0:t−1} | θ, y_{0:t−1}) ≜ ∑_{i=1}^N w_{t−1}^i δ_{x_{0:t−1}^i}(x_{0:t−1}). Here, δ_z(x) is a point-mass located at z. To propagate this sample to time t, we introduce the auxiliary variables {a_t^i}_{i=1}^N, referred to as ancestor indices. The variable a_t^i is the index of the ancestor particle at time t − 1 of particle x_t^i. Hence, x_t^i is generated by first sampling a_t^i with P(a_t^i = j) = w_{t−1}^j. Then, x_t^i is generated as

x_t^i ∼ p(x_t | θ, x_{0:t−1}^{a_t^i}, y_{0:t}), (5)

for i = 1, . . . , N. The particle trajectories are then augmented according to x_{0:t}^i = {x_{0:t−1}^{a_t^i}, x_t^i}. Sampling from the one-step predictive density is a simple (and sensible) choice, but we may also consider other proposal distributions. In the above formulation the resampling step is implicit and corresponds to sampling the ancestor indices (cf. the auxiliary particle filter [12]). Finally, the particles are weighted according to the measurement model, w_t^i ∝ p(y_t | θ, x_t^i) for i = 1, . . . , N, where the weights are normalized to sum to 1.

3.3 Particle Markov Chain Monte Carlo

There are two shortcomings of SMC: (i) it does not handle inference over the hyper-parameters; (ii) despite the fact that the sampler targets the joint smoothing distribution, in general it does not provide an accurate approximation of the full joint distribution due to path degeneracy. That is, the successive resampling steps cause the particle diversity to be very low for time points t far from the final time instant T. To address these issues, we propose to use a particle Markov chain Monte Carlo (PMCMC, [3, 13]) sampler. PMCMC relies on SMC to generate samples of the highly correlated state trajectory within an MCMC sampler. We employ a specific PMCMC sampler referred to as particle Gibbs with ancestor sampling (PGAS, [4]), given in Algorithm 1. PGAS uses Gibbs-like steps for the state trajectory x_{0:T} and the hyper-parameters θ, respectively. That is, we sample first x_{0:T} given θ, then θ given x_{0:T}, etc. However, the full conditionals are not explicitly available. Instead, we draw samples from specially tailored Markov kernels, leaving these conditionals invariant. We address these steps in the subsequent sections.

Algorithm 1 Particle Gibbs with ancestor sampling (PGAS)
1. Set θ[0] and x_{1:T}[0] arbitrarily.
2. For ℓ ≥ 1 do
   (a) Draw θ[ℓ] conditionally on x_{0:T}[ℓ−1] and y_{0:T} as discussed in Section 3.3.2.
   (b) Run CPF-AS (see [4]) targeting p(x_{0:T} | θ[ℓ], y_{0:T}), conditionally on x_{0:T}[ℓ−1].
   (c) Sample k with P(k = i) = w_T^i and set x_{1:T}[ℓ] = x_{1:T}^k.
3. end

3.3.1 Sampling the State Trajectories

To sample the state trajectory, PGAS makes use of an SMC-like procedure referred to as a conditional particle filter with ancestor sampling (CPF-AS). This approach is particularly suitable for non-Markovian latent variable models, as it relies only on a forward recursion (see [4]).
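For contrast with the conditional filter described next, the plain propagate/weight/resample cycle of Section 3.2 can be sketched on a toy Markovian model. The linear-Gaussian dynamics, particle count, and noise levels below are illustrative stand-ins, not the GP-SSM:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, N=200, q=0.1, r=0.1):
    # Minimal bootstrap particle filter on a toy Markovian SSM:
    # x_{t+1} = 0.9 x_t + v_t, y_t = x_t + e_t (illustrative model)
    x = rng.normal(0.0, 1.0, N)                  # initial particles
    means = []
    for yt in y:
        w = np.exp(-0.5 * (yt - x) ** 2 / r)     # weight by the measurement model
        w /= w.sum()
        means.append(np.sum(w * x))              # filtered mean estimate
        a = rng.choice(N, size=N, p=w)           # ancestor indices (resampling)
        x = 0.9 * x[a] + rng.normal(0.0, np.sqrt(q), N)  # propagate
    return np.array(means)

est = bootstrap_pf(np.array([0.5, 0.4, 0.3, 0.35]))
```

Sampling the ancestor indices `a` implements the implicit resampling step; CPF-AS differs by pinning one particle trajectory a priori and sampling its ancestors from the weights in (6).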
The difference between a standard particle filter (PF) and the CPF-AS is that, for the latter, one particle at each time step is specified a priori. Let these particles be denoted x̃_{0:T} = {x̃_0, . . . , x̃_T}. We then sample according to (5) only for i = 1, . . . , N − 1. The Nth particle is set deterministically: x_t^N = x̃_t. To be able to construct the Nth particle trajectory, x_t^N has to be associated with an ancestor particle at time t − 1. This is done by sampling a value for the corresponding ancestor index a_t^N. Following [4], the ancestor sampling probabilities are computed as

w̃_{t−1|T}^i ∝ w_{t−1}^i p({x_{0:t−1}^i, x̃_{t:T}}, y_{0:T}) / p(x_{0:t−1}^i, y_{0:t−1}) ∝ w_{t−1}^i p({x_{0:t−1}^i, x̃_{t:T}}) / p(x_{0:t−1}^i) = w_{t−1}^i p(x̃_{t:T} | x_{0:t−1}^i), (6)

where the ratio is between the unnormalized target densities up to time T and up to time t − 1, respectively. The second proportionality follows from the mutual conditional independence of the observations, given the states. Here, {x_{0:t−1}^i, x̃_{t:T}} refers to a path in X^{T+1} formed by concatenating the two partial trajectories. The above expression can be computed by using the prior over state trajectories given by (4). The ancestor sampling weights {w̃_{t−1|T}^i}_{i=1}^N are then normalized to sum to 1, and the ancestor index a_t^N is sampled with P(a_t^N = j) = w̃_{t−1|T}^j. The conditioning on a prespecified collection of particles implies an invariance property of CPF-AS, which is key to our development. More precisely, given x̃_{0:T}, let x̃'_{0:T} be generated as follows:
1. Run CPF-AS from time t = 0 to time t = T, conditionally on x̃_{0:T}.
2. Set x̃'_{0:T} to one of the resulting particle trajectories according to P(x̃'_{0:T} = x_{0:T}^i) = w_T^i.
For any N ≥ 2, this procedure defines an ergodic Markov kernel M_θ^N(x̃'_{0:T} | x̃_{0:T}) on X^{T+1}, leaving the exact smoothing distribution p(x_{0:T} | θ, y_{0:T}) invariant [4]. Note that this invariance holds for any N ≥ 2, i.e. the number of particles used affects only the mixing rate of the kernel M_θ^N.
However, it has been observed in practice that the autocorrelation drops sharply as N increases [4, 14], and for many models a moderate N is enough to obtain a rapidly mixing kernel.

3.3.2 Sampling the Hyper-parameters

Next, we consider sampling the hyper-parameters given a state trajectory and sequence of observations, i.e. from p(θ | x_{0:T}, y_{0:T}). In the following, we consider the common situation where there are distinct hyper-parameters for the likelihood p(y_{0:T} | x_{0:T}, θ_y) and for the prior over trajectories p(x_{0:T} | θ_x). If the prior over the hyper-parameters factorizes between those two groups, we obtain p(θ | x_{0:T}, y_{0:T}) ∝ p(θ_y | x_{0:T}, y_{0:T}) p(θ_x | x_{0:T}). We can thus proceed to sample the two groups of hyper-parameters independently. Sampling θ_y will be straightforward in most cases, in particular if conjugate priors for the likelihood are used. Sampling θ_x will, nevertheless, be harder since the covariance function hyper-parameters enter the expression in a non-trivial way. However, we note that once the state trajectory is fixed, we are left with a problem analogous to Gaussian process regression where x_{0:T−1} are the inputs, x_{1:T} are the outputs and Q is the likelihood covariance matrix. Given that the latent dynamics can be marginalized out analytically, sampling the hyper-parameters with slice sampling is straightforward [15].

4 A Sparse GP-SSM Construction and Implementation Details

A naive implementation of the CPF-AS algorithm will give rise to O(T^4) computational complexity, since at each time step t = 1, . . . , T, a matrix of size T × T needs to be factorized. However, it is possible to update and reuse the factors from the previous time step, bringing the total computational complexity down to the familiar O(T^3). Furthermore, by introducing a sparse GP model, we can reduce the complexity to O(M^2 T), where M ≪ T.
In Section 4.1 we introduce the sparse GP model and in Section 4.2 we provide insight into the efficient implementation of both the vanilla GP and the sparse GP.

4.1 FIC Prior over the State Trajectory

An important alternative to the GP-SSM is given by exchanging the vanilla GP prior over f for a sparse counterpart. We do not consider the resulting model to be an approximation to the GP-SSM: it is still a GP-SSM, but with a different prior over functions. As a result, we expect it to sometimes outperform its non-sparse version, in the same way as happens with their regression siblings [16]. Most sparse GP methods can be formulated in terms of a set of so-called inducing variables [17]. These variables live in the space of the latent function and have a set I of corresponding inducing inputs. The assumption is that, conditionally on the inducing variables, the latent function values are mutually independent. Although the inducing variables are marginalized analytically – this is key for the model to remain nonparametric – the inducing inputs have to be chosen in such a way that they, informally speaking, cover the same region of the input space covered by the data. Crucially, in order to achieve computational gains, the number M of inducing variables is selected to be smaller than the original number of data points. In the following, we will use the fully independent conditional (FIC) sparse GP prior as defined in [17], due to its very good empirical performance [16]. As shown in [17], the FIC prior can be obtained by replacing the covariance function k(·, ·) by

k_FIC(x_i, x_j) = s(x_i, x_j) + δ_{ij} (k(x_i, x_j) − s(x_i, x_j)), (7)

where s(x_i, x_j) ≜ k(x_i, I) k(I, I)^{−1} k(I, x_j), δ_{ij} is Kronecker's delta, and we use the convention whereby when k takes a set as one of its arguments it generates a matrix of covariances.
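The FIC covariance in (7) can be sketched in a few lines of NumPy for scalar inputs. The base kernel, inducing inputs, and jitter below are illustrative assumptions:

```python
import numpy as np

def sq_exp(A, B, ell=1.0, sf=1.0):
    # squared-exponential base covariance (illustrative hyper-parameters)
    return sf**2 * np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)

def k_fic(X, Z, jitter=1e-8):
    # FIC covariance of eq. (7): low-rank Nystrom part s(.,.) everywhere,
    # with the exact variance k(x_i, x_i) restored on the diagonal
    Kzz = sq_exp(Z, Z) + jitter * np.eye(len(Z))
    Kxz = sq_exp(X, Z)
    S = Kxz @ np.linalg.solve(Kzz, Kxz.T)      # s(x_i, x_j)
    K = sq_exp(X, X)
    return S + np.diag(np.diag(K - S))

X = np.linspace(-2.0, 2.0, 6)
Z = np.array([-1.0, 0.0, 1.0])                 # M = 3 inducing inputs
K = k_fic(X, Z)
assert np.allclose(np.diag(K), 1.0)            # exact prior variances are kept
```

The off-diagonal entries are rank-M, which is what yields the O(M^2 T) cost, while the corrected diagonal keeps the exact prior marginal variances.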
Using the Woodbury matrix identity, we can express the one-step predictive density as in (3), with

μ_t^FIC(x_{0:t−1}) = m_{t−1} + K_{t−1,I} P K_{I,0:t−2} Λ_{0:t−2}^{−1} (x_{1:t−1} − m_{0:t−2}), (8a)
Σ_t^FIC(x_{0:t−1}) = K̃_{t−1} − S_{t−1} + K_{t−1,I} P K_{I,t−1}, (8b)

where P ≜ (K_{I,I} + K_{I,0:t−2} Λ_{0:t−2}^{−1} K_{0:t−2,I})^{−1}, Λ_{0:t−2} ≜ diag[K̃_{0:t−2} − S_{0:t−2}] and S_{A,B} ≜ K_{A,I} K_{I,I}^{−1} K_{I,B}. Despite its apparent cumbersomeness, the computational complexity involved in computing the above mean and covariance is O(M^2 t), as opposed to O(t^3) for (3). The same idea can be used to express (4) in a form which allows for efficient computation. Here, diag refers to a block diagonalization if Q is not diagonal. We do not address the problem of choosing the inducing inputs, but note that one option is to use greedy methods (e.g. [18]). The fast forward selection algorithm is appealing due to its very low computational complexity [18]. Moreover, its potential drawback of interference between hyper-parameter learning and active set selection is not an issue in our case, since the hyper-parameters are fixed for a given run of the particle filter.

4.2 Implementation Details

As pointed out above, it is crucial to reuse computations across time to attain the O(T^3) or O(M^2 T) computational complexity for the vanilla GP and the FIC prior, respectively. We start by discussing the vanilla GP and then briefly comment on the implementation aspects of FIC. There are two costly operations in the CPF-AS algorithm: (i) sampling from the prior (5), requiring the computation of (3b) and (3c), and (ii) evaluating the ancestor sampling probabilities according to (6). Both of these operations can be carried out efficiently by keeping track of a Cholesky factorization of the matrix K̃({x_{0:t−1}^i, x̃_{t:T−1}}) = L_t^i L_t^{i⊤}, for each particle i = 1, . . . , N. Here, K̃({x_{0:t−1}^i, x̃_{t:T−1}}) is a matrix defined analogously to K̃_{0:T−1}, but where the covariance function is evaluated on the concatenated state trajectory {x_{0:t−1}^i, x̃_{t:T−1}}.
From L_t^i, it is possible to identify sub-matrices corresponding to the Cholesky factors for the covariance matrix Σ_t(x_{0:t−1}^i), as well as for the matrices needed to efficiently evaluate the ancestor sampling probabilities (6). It remains to find an efficient update of the Cholesky factor to obtain L_{t+1}^i. As we move from time t to t + 1 in the algorithm, x̃_t will be replaced by x_t^i in the concatenated trajectory. Hence, the matrix K̃({x_{0:t}^i, x̃_{t+1:T−1}}) can be obtained from K̃({x_{0:t−1}^i, x̃_{t:T−1}}) by replacing n_x rows and columns, corresponding to a rank 2n_x update. It follows that we can compute L_{t+1}^i by making n_x successive rank-one updates and downdates on L_t^i. In summary, all the operations at a specific time step can be done in O(T^2) computations, leading to a total computational complexity of O(T^3). For the FIC prior, a naive implementation will give rise to O(M^2 T^2) computational complexity. This can be reduced to O(M^2 T) by keeping track of a factorization of the matrix P. However, to reach the O(M^2 T) cost, all intermediate operations scaling with T have to be avoided, requiring us to reuse not only the matrix factorizations, but also intermediate matrix-vector multiplications.

5 Learning the Dynamics

Algorithm 1 gives us a tool to compute p(x_{0:T}, θ | y_{1:T}). We now discuss how this can be used to find an explicit model for f. The goal of learning the state transition dynamics is equivalent to that of obtaining a predictive distribution over f^∗ = f(x^∗), evaluated at an arbitrary test point x^∗,

p(f^∗ | x^∗, y_{1:T}) = ∫ p(f^∗ | x^∗, x_{0:T}, θ) p(x_{0:T}, θ | y_{1:T}) dx_{0:T} dθ. (9)

Using a sample-based approximation of p(x_{0:T}, θ | y_{1:T}), this integral can be approximated by

p(f^∗ | x^∗, y_{1:T}) ≈ (1/L) ∑_{ℓ=1}^{L} p(f^∗ | x^∗, x_{0:T}[ℓ], θ[ℓ]) = (1/L) ∑_{ℓ=1}^{L} N(f^∗ | μ_ℓ(x^∗), Σ_ℓ(x^∗)), (10)

where L is the number of samples, and μ_ℓ(x^∗) and Σ_ℓ(x^∗) follow the expressions for the predictive distribution in standard GP regression if x_{0:T−1}[ℓ] are treated as inputs, x_{1:T}[ℓ] are treated as outputs and Q is the likelihood covariance matrix. This mixture of Gaussians is an expressive representation of the predictive density which can, for instance, correctly take into account multimodality arising from ambiguity in the measurements. Although factorized covariance matrices can be pre-computed, the overall computational cost will increase linearly with L. The computational cost can be reduced by thinning the Markov chain using e.g. random sub-sampling or kernel herding [19]. In some situations it could be useful to obtain, from the mixture of Gaussians, an approximation consisting of a single GP representation. This is the case in applications such as control or real-time filtering, where the cost of evaluating the mixture of Gaussians can be prohibitive. In those cases one could opt for a pragmatic approach and learn the mapping x^∗ ↦ f^∗ from a cloud of points {x_{0:T}[ℓ], f_{0:T}[ℓ]}_{ℓ=1}^{L} using sparse GP regression. The latent function values f_{0:T}[ℓ] can easily be sampled from the normally distributed p(f_{0:T}[ℓ] | x_{0:T}[ℓ], θ[ℓ]).

6 Experiments

6.1 Learning a Nonlinear System Benchmark

Consider a system with dynamics given by x_{t+1} = a x_t + b x_t/(1 + x_t^2) + c u_t + v_t, v_t ∼ N(0, q), and observations given by y_t = d x_t^2 + e_t, e_t ∼ N(0, r), with parameters (a, b, c, d, q, r) = (0.5, 25, 8, 0.05, 10, 1) and a known input u_t = cos(1.2(t + 1)). One of the difficulties of this system is that the smoothing density p(x_{0:T} | y_{0:T}) is multimodal, since no information about the sign of x_t is available in the observations.
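Under the stated parameters, a minimal simulation of this benchmark system might look as follows (the initial state, random seed, and noise handling are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=200, a=0.5, b=25.0, c=8.0, d=0.05, q=10.0, r=1.0, x0=0.0):
    # Simulate the Section 6.1 benchmark SSM with the stated parameters
    x = np.zeros(T + 1)
    y = np.zeros(T + 1)
    x[0] = x0
    for t in range(T):
        u = np.cos(1.2 * (t + 1))                         # known input
        x[t + 1] = (a * x[t] + b * x[t] / (1 + x[t]**2)
                    + c * u + rng.normal(0.0, np.sqrt(q)))
        y[t + 1] = d * x[t + 1]**2 + rng.normal(0.0, np.sqrt(r))
    return x, y

x, y = simulate()
# y depends on x only through x**2, so the sign of x cannot be identified from y
```

The squared observation is what produces the bimodal smoothing distribution discussed next: trajectories of opposite sign explain the data equally well.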
The system is simulated for T = 200 time steps, using log-normal priors for the hyper-parameters, and the PGAS sampler is then run for 50 iterations using N = 20 particles. To illustrate the capability of the GP-SSM to make use of a parametric model as baseline, we use a mean function with the same parametric form as the true system, but parameters (a, b, c) = (0.3, 7.5, 0). This function, denoted model B, is manifestly different to the actual state transition (green vs. black surfaces in Figure 2), also demonstrating the flexibility of the GP-SSM. Figure 2 (left) shows the samples of x0:T (red). It is apparent that the distribution covers two alternative state trajectories at particular times (e.g. t = 10). In fact, it is always the case that this bi-modal distribution covers the two states of opposite signs that could have led to the same observation (cyan). In Figure 2 (right) we plot samples from the smoothing distribution, where each circle corresponds to (xt, ut, E[ft]). Although the parametric model used in the mean function of the GP (green) is clearly not representative of the true dynamics (black), the samples from the smoothing distribution accurately portray the underlying system. The smoothness prior embodied by the GP allows for accurate sampling from the smoothing distribution even when the parametric model of the dynamics fails to capture important features. To measure the predictive capability of the learned transition dynamics, we generate a new dataset consisting of 10 000 time steps and present the RMSE between the predicted value of f(xt, ut) and the actual one. We compare the results from GP-SSM with the predictions obtained from two parametric models (one with the true model structure and one linear model) and two known models (the ground truth model and model B). We also report results for the sparse GP-SSM using an FIC prior with 40 inducing points. Table 1 summarizes the results, averaged over 10 independent training and test datasets. 
We also report the RMSE from the joint smoothing sample to the ground truth trajectory.

Table 1: RMSE to ground truth values over 10 independent runs.

RMSE                                              | prediction of f*|x*_t, u*_t, data | smoothing x_{0:T}|data
Ground truth model (known parameters)             | –                                 | 2.7 ± 0.5
GP-SSM (proposed, model B mean function)          | 1.7 ± 0.2                         | 3.2 ± 0.5
Sparse GP-SSM (proposed, model B mean function)   | 1.8 ± 0.2                         | 2.7 ± 0.4
Model B (fixed parameters)                        | 7.1 ± 0.0                         | 13.6 ± 1.1
Ground truth model, learned parameters            | 0.5 ± 0.2                         | 3.0 ± 0.4
Linear model, learned parameters                  | 5.5 ± 0.1                         | 6.0 ± 0.5

Figure 2: Left: Smoothing distribution (samples, ground truth, and ±(max(y_t, 0)/d)^{1/2}). Right: State transition function (black: actual transition function, green: mean function (model B), red: smoothing samples).

Figure 3: One step ahead predictive distribution for each of the states of the cart and pole system (x, ẋ, θ̇, θ). Black: ground truth. Colored band: one standard deviation from the mixture of Gaussians predictive.

6.2 Learning a Cart and Pole System

We apply our approach to learn a model of a cart and pole system used in reinforcement learning. The system consists of a cart, with a free-spinning pendulum, rolling on a horizontal track. An external force is applied to the cart. The system's dynamics can be described by four states and a set of nonlinear ordinary differential equations [20]. We learn a GP-SSM based on 100 observations of the state corrupted with Gaussian noise. Although the training set only explores a small region of the 4-dimensional state space, we can learn a model of the dynamics which can produce one-step-ahead predictions such as the ones in Figure 3. We obtain a predictive distribution in the form of a mixture of Gaussians, from which we display the first and second moments.
Crucially, the learned model reports different amounts of uncertainty in different regions of the state-space. For instance, note the narrower error-bars on some states between t = 320 and t = 350. This is due to the model being more confident in its predictions in areas that are closer to the training data.

7 Conclusions

We have shown an efficient way to perform fully Bayesian inference and learning in the GP-SSM. A key contribution is that our approach retains the full nonparametric expressivity of the model. This is made possible by marginalizing out the state transition function, which results in a nontrivial inference problem that we solve using a tailored PGAS sampler. A particular characteristic of our approach is that the latent states can be sampled from the smoothing distribution even when the state transition function is unknown. Assumptions about smoothness and parsimony of this function embodied by the GP prior suffice to obtain high-quality smoothing distributions. Once samples from the smoothing distribution are available, they can be used to describe a posterior over the state transition function. This contrasts with the conventional approach to inference in dynamical systems, where smoothing is performed conditioned on a model of the state transition dynamics.

References

[1] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
[2] R. Turner, M. P. Deisenroth, and C. E. Rasmussen, "State-space inference and learning with Gaussian processes," in 13th International Conference on Artificial Intelligence and Statistics, ser. W&CP, Y. W. Teh and M. Titterington, Eds., vol. 9, Chia Laguna, Sardinia, Italy, May 13–15, 2010, pp. 868–875.
[3] C. Andrieu, A. Doucet, and R. Holenstein, "Particle Markov chain Monte Carlo methods," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 3, pp. 269–342, 2010.
[4] F. Lindsten, M. Jordan, and T. B. Schön, "Ancestor sampling for particle Gibbs," in Advances in Neural Information Processing Systems 25, P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, Eds., 2012, pp. 2600–2608.
[5] M. Deisenroth, R. Turner, M. Huber, U. Hanebeck, and C. Rasmussen, "Robust filtering and smoothing with Gaussian processes," IEEE Transactions on Automatic Control, vol. 57, no. 7, pp. 1865–1871, July 2012.
[6] M. Deisenroth and S. Mohamed, "Expectation Propagation in Gaussian process dynamical systems," in Advances in Neural Information Processing Systems 25, P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, Eds., 2012, pp. 2618–2626.
[7] Z. Ghahramani and S. Roweis, "Learning nonlinear dynamical systems using an EM algorithm," in Advances in Neural Information Processing Systems 11, M. J. Kearns, S. A. Solla, and D. A. Cohn, Eds. MIT Press, 1999.
[8] J. Wang, D. Fleet, and A. Hertzmann, "Gaussian process dynamical models," in Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. Platt, Eds. Cambridge, MA: MIT Press, 2006, pp. 1441–1448.
[9] J. S. Liu, Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[10] A. Doucet and A. Johansen, "A tutorial on particle filtering and smoothing: Fifteen years later," in The Oxford Handbook of Nonlinear Filtering, D. Crisan and B. Rozovsky, Eds. Oxford University Press, 2011.
[11] F. Gustafsson, "Particle filter theory and practice with positioning applications," IEEE Aerospace and Electronic Systems Magazine, vol. 25, no. 7, pp. 53–82, 2010.
[12] M. K. Pitt and N. Shephard, "Filtering via simulation: Auxiliary particle filters," Journal of the American Statistical Association, vol. 94, no. 446, pp. 590–599, 1999.
[13] F. Lindsten and T. B. Schön, "Backward simulation methods for Monte Carlo statistical inference," Foundations and Trends in Machine Learning, vol. 6, no. 1, pp. 1–143, 2013.
[14] F. Lindsten and T. B. Schön, "On the use of backward simulation in the particle Gibbs sampler," in Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, Mar. 2012.
[15] D. K. Agarwal and A. E. Gelfand, "Slice sampling for simulation based fitting of spatial data models," Statistics and Computing, vol. 15, no. 1, pp. 61–69, 2005.
[16] E. Snelson and Z. Ghahramani, "Sparse Gaussian processes using pseudo-inputs," in Advances in Neural Information Processing Systems (NIPS), Y. Weiss, B. Schölkopf, and J. Platt, Eds., Cambridge, MA, 2006, pp. 1257–1264.
[17] J. Quiñonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," Journal of Machine Learning Research, vol. 6, pp. 1939–1959, 2005.
[18] M. Seeger, C. Williams, and N. Lawrence, "Fast forward selection to speed up sparse Gaussian process regression," in Artificial Intelligence and Statistics 9, 2003.
[19] Y. Chen, M. Welling, and A. Smola, "Super-samples from kernel herding," in Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), P. Grünwald and P. Spirtes, Eds. AUAI Press, 2010.
[20] M. Deisenroth, "Efficient reinforcement learning using Gaussian processes," Ph.D. dissertation, Karlsruher Institut für Technologie, 2010.
Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex

Sam Patterson, Gatsby Computational Neuroscience Unit, University College London, spatterson@gatsby.ucl.ac.uk
Yee Whye Teh, Department of Statistics, University of Oxford, y.w.teh@stats.ox.ac.uk

Abstract

In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, stochastic gradient Riemannian Langevin dynamics, which is simple to implement and can be applied to large scale data. We apply this method to latent Dirichlet allocation in an online mini-batch setting, and demonstrate that it achieves substantial performance improvements over the state of the art online variational Bayesian methods.

1 Introduction

In recent years there has been increasing interest in probabilistic models where the latent variables or parameters of interest are discrete probability distributions over K items, i.e. vectors lying in the probability simplex

∆_K = {(π_1, . . . , π_K) : π_k ≥ 0, Σ_k π_k = 1} ⊂ R^K.  (1)

Important examples include topic models like latent Dirichlet allocation (LDA) [BNJ03], admixture models in genetics like Structure [PSD00], and discrete directed graphical models with a Bayesian prior over the conditional probability tables [Hec99]. Standard approaches to inference over the probability simplex include variational inference [Bea03, WJ08] and Markov chain Monte Carlo (MCMC) methods like Gibbs sampling [GRS96]. In the context of LDA, many methods have been developed, e.g. variational inference [BNJ03], collapsed variational inference [TNW07, AWST09] and collapsed Gibbs sampling [GS04]. With the increasingly large scale document corpora to which LDA and other topic models are applied, there have also been developments of specialised and highly scalable algorithms [NASW09]. Most proposed algorithms are based on a batch learning framework, where the whole document corpus needs to be stored and accessed for every iteration.
For very large corpora, this framework can be impractical. Most recently, [Sat01, HBB10, MHB12] proposed online Bayesian variational inference algorithms (OVB), where on each iteration only a small subset (a mini-batch) of the documents is processed to give a noisy estimate of the gradient, and a stochastic gradient descent algorithm [RM51] is employed to update the parameters of interest. These algorithms have shown impressive results on very large corpora like Wikipedia articles, where it is not even feasible to store the whole dataset in memory. This is achieved by simply fetching the mini-batch articles in an online manner, processing them, and then discarding them after the mini-batch. In this paper, we are interested in developing scalable MCMC algorithms for models defined over the probability simplex. In some scenarios, and particularly in LDA, MCMC algorithms have been shown to work extremely well, and in fact achieve better results faster than variational inference on small to medium corpora [GS04, TNW07, AWST09]. However, current MCMC methodology has mostly been developed in the batch framework which, as argued above, cannot scale to the very large corpora of interest. We will make use of a recently developed MCMC method called stochastic gradient Langevin dynamics (SGLD) [WT11, ABW12], which operates in a similar online mini-batch framework to OVB. Unlike OVB and other stochastic gradient descent algorithms, SGLD is not a gradient descent algorithm. Rather, it is a Hamiltonian MCMC [Nea10] algorithm which will asymptotically produce samples from the posterior distribution. It achieves this by updating parameters according to both the stochastic gradients as well as additional noise which forces it to explore the full posterior instead of simply converging to a MAP configuration. There are three difficulties that have to be addressed, however, to successfully apply SGLD to LDA and other models defined on probability simplices.
Firstly, the probability simplex (1) is compact and has boundaries that have to be accounted for when an update proposes a step that brings the vector outside the simplex. Secondly, typical Dirichlet priors over the probability simplex place most of their mass close to the boundaries and corners of the simplex. This is particularly the case for LDA and other linguistic models, where probability vectors parameterise distributions over a large number of words, and it is often desirable to use distributions that place significant mass on only a few words; i.e. we want distributions over ∆_K which place most of their mass near the boundaries and corners. This also causes a problem, as depending on the parameterisation used, the gradient required for Langevin dynamics is inversely proportional to entries in π and hence can blow up when components of π are close to zero. Finally, again for LDA and other linguistic models, we would like algorithms that work well in high-dimensional simplices. These considerations lead us to the first contribution of this paper in Section 3, which is an investigation into different ways to parameterise the probability simplex. This section shows that the choice of a good parameterisation is not obvious, and that the use of the Riemannian geometry of the simplex [Ama95, GC11] is important in designing Langevin MCMC algorithms. In particular, we show that an unnormalised parameterisation, using a mirroring trick to remove boundaries, coupled with a natural gradient update, achieves the best mixing performance. In Section 4, we then show that the SGLD algorithm, using this parameterisation and natural gradient updates, performs significantly better than OVB algorithms [HBB10, MHB12]. Section 2 reviews Langevin dynamics, natural gradients and SGLD to set up the framework used in the paper, and Section 6 concludes.

2 Review

2.1 Langevin dynamics

Suppose we model a data set x = x1, . . .
, x_N, with a generative model p(x | θ) = Π_{i=1}^N p(x_i | θ) parameterized by θ ∈ R^D with prior p(θ), and that our aim is to compute the posterior p(θ | x). Langevin dynamics [Ken90, Nea10] is an MCMC scheme which produces samples from the posterior by means of gradient updates plus Gaussian noise, resulting in a proposal distribution q(θ* | θ) as described by Equation 2:

θ* = θ + (ε/2) (∇_θ log p(θ) + Σ_{i=1}^N ∇_θ log p(x_i | θ)) + ζ,  ζ ~ N(0, εI).  (2)

The mean of the proposal distribution is in the direction of increasing log posterior due to the gradient, while the added noise prevents the samples from collapsing to a single (local) maximum. A Metropolis-Hastings correction step is required to correct for discretisation error, with proposals accepted with probability min(1, [p(θ*|x) q(θ|θ*)] / [p(θ|x) q(θ*|θ)]) [RS02]. As ε tends to zero, the acceptance ratio tends to one, as the Markov chain tends to a stochastic differential equation which has p(θ | x) as its stationary distribution [Ken78].

2.2 Riemannian Langevin dynamics

Langevin dynamics has an isotropic proposal distribution, leading to slow mixing if the components of θ have very different scales or if they are highly correlated. Preconditioning can help with this. A recent approach, the Riemann manifold Metropolis adjusted Langevin algorithm [GC11], uses a user-chosen matrix G(θ) to precondition in a locally adaptive manner. We will refer to their algorithm as Riemannian Langevin dynamics (RLD) in this paper. The Riemannian manifold in question is the family of probability distributions p(x | θ) parameterised by θ, for which the expected Fisher information matrix I_θ defines a natural Riemannian metric tensor. In fact any positive definite matrix G(θ) defines a valid Riemannian manifold, and hence we are not restricted to using G(θ) = I_θ. This is important in practice, as for many models of interest the expected Fisher information is intractable.
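The Langevin proposal of Equation 2 together with its Metropolis-Hastings correction can be sketched generically as follows (a minimal sketch for an arbitrary log posterior; function names are our own, not from the paper):

```python
import numpy as np

def mala_step(theta, log_post, grad_log_post, eps, rng):
    """One Metropolis-adjusted Langevin step targeting exp(log_post)."""
    mean_fwd = theta + 0.5 * eps * grad_log_post(theta)
    prop = mean_fwd + np.sqrt(eps) * rng.standard_normal(theta.shape)
    mean_rev = prop + 0.5 * eps * grad_log_post(prop)
    # log q(prop | theta) and log q(theta | prop), dropping shared constants
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * eps)
    log_q_rev = -np.sum((theta - mean_rev) ** 2) / (2.0 * eps)
    log_alpha = log_post(prop) - log_post(theta) + log_q_rev - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return prop
    return theta
```

Note that the full-data gradient appears in both the proposal and the acceptance ratio; this is precisely the cost that the stochastic gradient variant of Section 2.3 avoids.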
As in Langevin dynamics, RLD consists of a Gaussian proposal q(θ* | θ), along with a Metropolis-Hastings correction step. The proposal distribution can be written as

θ* = θ + (ε/2) µ(θ) + G^{−1/2}(θ) ζ,  ζ ~ N(0, εI),  (3)

where the j-th component of µ(θ) is given by

µ(θ)_j = (G^{−1}(θ) (∇_θ log p(θ) + Σ_{i=1}^N ∇_θ log p(x_i | θ)))_j − 2 Σ_{k=1}^D (G^{−1}(θ) (∂G(θ)/∂θ_k) G^{−1}(θ))_{jk} + Σ_{k=1}^D (G^{−1}(θ))_{jk} Tr(G^{−1}(θ) ∂G(θ)/∂θ_k).  (4)

The first term in Equation 4 is the natural gradient of the log posterior. Whereas the standard gradient gives the direction of steepest ascent in Euclidean space, the natural gradient gives the direction of steepest ascent taking into account the geometry implied by G(θ). The remaining terms in Equation 4 describe how the curvature of the manifold defined by G(θ) changes for small changes in θ. The Gaussian noise in Equation 3 also takes the geometry of the manifold into account, having scale defined by G^{−1/2}(θ).

2.3 Stochastic gradient Riemannian Langevin dynamics

In the Langevin dynamics and RLD algorithms, the proposal distribution requires calculation of the gradient of the log likelihood w.r.t. θ, which means processing all N items in the data set. For large data sets this is infeasible, and even for small data sets it may not be the most efficient use of computation. The stochastic gradient Langevin dynamics (SGLD) algorithm [WT11] replaces the calculation of the gradient over the full data set with a stochastic approximation based on a subset of data. Specifically, at iteration t we sample n data items indexed by D_t uniformly from the full data set, and replace the exact gradient in Equation 2 with the approximation

∇_θ log p(x | θ) ≈ (N / |D_t|) Σ_{i∈D_t} ∇_θ log p(x_i | θ).  (5)

Also, SGLD does not use a Metropolis-Hastings correction step, as calculating the acceptance probability would require use of the full data set, hence defeating the purpose of the stochastic gradient approximation.
Convergence to the posterior is still guaranteed as long as decaying step sizes satisfying Σ_{t=1}^∞ ε_t = ∞ and Σ_{t=1}^∞ ε_t^2 < ∞ are used. In this paper we combine the use of a preconditioning matrix G(θ) as in RLD with this stochastic gradient approximation, by replacing the exact gradient in Equation 4 with the approximation from Equation 5. The resulting algorithm, stochastic gradient Riemannian Langevin dynamics (SGRLD), avoids the slow mixing problems of Langevin dynamics, while still being applicable in a large scale online setting due to its use of stochastic gradients and lack of Metropolis-Hastings correction steps.

3 Riemannian Langevin dynamics on the probability simplex

In this section, we investigate the issues which arise when applying Langevin Monte Carlo methods, specifically the Langevin dynamics and Riemannian Langevin dynamics algorithms, to models whose parameters lie on the probability simplex. In these experiments, a Metropolis-Hastings correction step was used. Consider the simplest possible model: a K dimensional probability vector π with Dirichlet prior p(π) ∝ Π_{k=1}^K π_k^{α_k−1}, and data x = x_1, . . . , x_N with p(x_i = k | π) = π_k. This results in a Dirichlet posterior p(π | x) ∝ Π_{k=1}^K π_k^{n_k+α_k−1}, where n_k = Σ_{i=1}^N δ(x_i = k).

Table 1: Parameterisation details.

Reduced-Mean:
  θ_k = π_k;
  ∇_θ log p(θ|x) = (n + α − 1)/θ − (n_K + α − 1)/π_K;
  G(θ) = n_· (diag(θ)^{−1} + 11^T/(1 − Σ_k θ_k));
  G^{−1}(θ) = (1/n_·)(diag(θ) − θθ^T);
  Σ_{k=1}^D (G^{−1} ∂G/∂θ_k G^{−1})_{jk} = Σ_{k=1}^D (G^{−1})_{jk} Tr(G^{−1} ∂G/∂θ_k) = Kθ_j − 1.

Reduced-Natural:
  θ_k = log(π_k / (1 − Σ_{k'=1}^{K−1} π_{k'}));
  ∇_θ log p(θ|x) = n + α − (n_· + Kα)π;
  G(θ) = n_· (diag(π) − ππ^T);
  G^{−1}(θ) = (1/n_·)(diag(π)^{−1} + 11^T/(1 − Σ_k π_k));
  curvature sums = 1/π_j^2 − (K − 1)/(1 − Σ_k π_k)^2.

Expanded-Mean:
  π_k = |θ_k| / Σ_{k'} |θ_{k'}|;
  ∇_θ log p(θ|x) = (n + α − 1)/θ − n_·/θ_· − 1;
  G(θ) = diag(θ)^{−1};  G^{−1}(θ) = diag(θ);
  curvature sums = −1.

Expanded-Natural:
  π_k = e^{θ_k} / Σ_{k'} e^{θ_{k'}};
  ∇_θ log p(θ|x) = n + α − n_· π − e^θ;
  G(θ) = diag(e^θ);  G^{−1}(θ) = diag(e^{−θ});
  curvature sums = e^{−θ_j}.

In our experiments we use a sparse, symmetric prior with α_k = 0.1 for all k, and sparse count data, setting K = 10, n_1 = 90, n_2 = n_3 = 5 and the remaining n_k to zero.
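Combining the minibatch gradient estimate of Equation 5 with injected Gaussian noise, and dropping the accept/reject step, gives SGLD; the toy sketch below (our own illustration on a conjugate Gaussian model, not the paper's code) shows the shape of the update:

```python
import numpy as np

def sgld_run(data, n_iter=5000, batch=10, eps=1e-3, seed=0):
    """SGLD for x_i ~ N(theta, 1) with prior theta ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    N = len(data)
    theta, samples = 0.0, []
    for t in range(n_iter):
        mb = rng.choice(data, size=batch, replace=False)
        # stochastic gradient: prior term plus rescaled minibatch likelihood term
        grad = -theta + (N / batch) * np.sum(mb - theta)
        theta += 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal()
        samples.append(theta)
    return np.array(samples)
```

A fixed small ε is used here for brevity; the convergence guarantee quoted above assumes a decaying schedule satisfying the two summability conditions.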
This is to replicate the sparse nature of the posterior in many models of interest. The qualitative conclusions we draw are not sensitive to the precise choice of hyperparameters and data here. There are various possible ways to parameterise the probability simplex, and the performance of Langevin Monte Carlo depends strongly on the choice of parameterisation. We consider both the mean and natural parameter spaces, and in each of these we try both a reduced (K − 1 dimensional) and expanded (K dimensional) parameterisation, with details as follows.

Reduced-Mean: in the mean parameter space, the most obvious approach is to set θ = π directly, but there are two problems with this. Though π has K components, it must lie on the simplex, a K − 1 dimensional space. Running Langevin dynamics or RLD on the full K dimensional parameterisation will result in proposals that are off the simplex with probability one. We can incorporate the constraint that Σ_{k=1}^K π_k = 1 by using the first K − 1 components as the parameter θ, and setting π_K = 1 − Σ_{k=1}^{K−1} π_k. Note however that the proposals can still violate the boundary constraint 0 < π_k < 1, and this is particularly problematic when the posterior has mass close to the boundaries.

Expanded-Mean: we can simplify boundary considerations using a redundant parameterisation. We take as our parameter θ ∈ R_+^K with prior a product of independent Gamma(α_k, 1) distributions, p(θ) ∝ Π_{k=1}^K θ_k^{α_k−1} e^{−θ_k}. π is then given by π_k = θ_k / Σ_{k'} θ_{k'}, and so the prior on π is still Dirichlet(α). The boundary conditions 0 < θ_k can be handled by simply taking the absolute value of the proposed θ*. This is equivalent to letting θ take values in the whole of R^K, with prior given by Gammas mirrored at 0, p(θ) ∝ Π_{k=1}^K |θ_k|^{α_k−1} e^{−|θ_k|}, and π_k = |θ_k| / Σ_{k'} |θ_{k'}|, which again results in a Dirichlet(α) prior on π. This approach allows us to bypass boundary issues altogether.
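The expanded-mean construction is easy to check numerically. The helpers below (our own sketch) map a mirrored vector θ ∈ R^K onto the simplex and confirm that Gamma(α_k, 1) draws pushed through the mapping are Dirichlet(α) distributed:

```python
import numpy as np

def to_simplex(theta):
    # pi_k = |theta_k| / sum_j |theta_j|: the mirrored expanded-mean map
    a = np.abs(theta)
    return a / a.sum()

def sample_dirichlet_via_gamma(alpha, n, rng):
    # theta_k ~ Gamma(alpha_k, 1), mirrored at 0; normalising gives Dirichlet(alpha)
    theta = rng.gamma(alpha, 1.0, size=(n, len(alpha)))
    theta *= rng.choice([-1.0, 1.0], size=theta.shape)  # mirroring changes nothing
    return np.abs(theta) / np.abs(theta).sum(axis=1, keepdims=True)
```

The random sign flip is deliberately redundant: it illustrates why reflecting negative proposals back through |·| leaves the implied prior on π untouched.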
Reduced-Natural: in the natural parameter space, the reduced parameterisation takes the form π_k = e^{θ_k} / (1 + Σ_{k'=1}^{K−1} e^{θ_{k'}}) for k = 1, . . . , K − 1. The prior on θ can be obtained from the Dirichlet(α) prior on π using a change of variables. There are no boundary constraints, as the range of θ_k is R.

Expanded-Natural: finally, the expanded-natural parameterisation takes the form π_k = e^{θ_k} / Σ_{k'=1}^K e^{θ_{k'}} for k = 1, . . . , K. As in the expanded-mean parameterisation, we use a product of Gamma priors, in this case for e^{θ_k}, so that the prior for π remains Dirichlet(α).

For all parameterisations, we run both Langevin dynamics and RLD. When applying RLD, we must choose a metric G(θ). For the reduced parameterisations, we can use the expected Fisher information matrix, but the redundancy in the full parameterisations means that this matrix has rank K − 1 and hence is not invertible. For these parameterisations we use the expected Fisher information matrix for a Gamma/Poisson model, which is equivalent to the Dirichlet/Multinomial apart from the fact that the total number of data items is considered to be random as well. The details for each parameterisation are summarised in Table 1. In all cases we are interested in sampling from the posterior distribution on π, while θ is the specific parameterisation being used. For the mean parameterisations, the θ^{−1} term in the gradient of the log posterior means that for components of θ which are close to zero, the proposal distribution for Langevin dynamics (Equation 2) has a large mean, resulting in unstable proposals with a small acceptance probability. Due to the form of G(θ)^{−1}, the same argument holds for the RLD proposal distribution for the natural parameterisations. This leaves us with three possible combinations: RLD on the expanded-mean parameterisation, and Langevin dynamics on each of the natural parameterisations.
Figure 1: Effective sample size and samples. (a) Effective sample size against step size (median and mean over components of π, for Expanded-Mean RLD, Reduced-Natural LD and Expanded-Natural LD); (b) thinned samples from each sampler. Burn-in is 10,000 iterations; thinning factor 100.

To investigate their relative performances, we run a small experiment, producing 110,000 samples from each of the three remaining parameterisations, discarding 10,000 burn-in samples and thinning the remaining samples by a factor of 100. For the resulting 1000 thinned samples of θ, we calculate the corresponding samples of π, and compute the effective sample size for each component of π. This was done for a range of step sizes ε, and the mean and median effective sample sizes for the components of π are shown in Figure 1(a). Figure 1(b) shows the samples from each sampler at their optimal step size of 0.1. The samples from Langevin dynamics on both natural parameterisations display higher auto-correlation than the RLD samples produced using the expanded-mean parameterisation, as would be expected from their lower effective sample sizes. In addition to the increased effective sample size, the expanded-mean parameterisation RLD sampler has the advantage that it is computationally efficient, as G(θ) is a diagonal matrix. Hence it is this algorithm that we use when applying these techniques to latent Dirichlet allocation in Section 4.

4 Applying Riemannian Langevin dynamics to latent Dirichlet allocation

Latent Dirichlet allocation (LDA) [BNJ03] is a hierarchical Bayesian model, most frequently used to model topics arising in collections of text documents.
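The effective sample size used in Figure 1(a) can be estimated from the chain's autocorrelations. Below is a simple initial-sequence style estimator (a sketch with an arbitrary 0.05 cutoff; not necessarily the estimator used in the paper):

```python
import numpy as np

def effective_sample_size(chain, cutoff=0.05):
    """ESS = n / (1 + 2 * sum of leading autocorrelations above `cutoff`)."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    # autocorrelation at lags 0..n-1, normalised so acf[0] == 1
    acf = np.correlate(x, x, mode="full")[n - 1:]
    acf = acf / acf[0]
    tau = 1.0
    for r in acf[1:]:
        if r < cutoff:
            break
        tau += 2.0 * r
    return n / tau
```

An i.i.d. chain gives ESS ≈ n, while a strongly autocorrelated chain, like the natural-parameterisation Langevin samplers in Figure 1(b), gives a much smaller value.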
The model consists of K topics π_k, which are distributions over the words in the collection, drawn from a symmetric Dirichlet prior with hyper-parameter β. A document d is then modelled by a mixture of topics, with mixing proportion η_d, drawn from a symmetric Dirichlet prior with hyper-parameter α. The model corresponds to a generative process where documents are produced by drawing a topic assignment z_di i.i.d. from η_d for each word w_di in document d, and then drawing the word w_di from the corresponding topic π_{z_di}. We integrate out η analytically, resulting in the semi-collapsed distribution

p(w, z, π | α, β) = Π_{d=1}^D [ Γ(Kα)/Γ(Kα + n_{d··}) Π_{k=1}^K Γ(α + n_{dk·})/Γ(α) ] Π_{k=1}^K [ Γ(Wβ)/Γ(β)^W Π_{w=1}^W π_{kw}^{β + n_{·kw} − 1} ],  (6)

where, as in [TNW07], n_{dkw} = Σ_{i=1}^{N_d} δ(w_di = w, z_di = k) and · denotes summation over the corresponding index. Conditional on π, the documents are i.i.d., and we can factorise Equation 6 as

p(w, z, π | α, β) = p(π | β) Π_{d=1}^D p(w_d, z_d | α, π),  (7)

where

p(w_d, z_d | α, π) = Π_{k=1}^K Γ(α + n_{dk·})/Γ(α) Π_{w=1}^W π_{kw}^{n_{dkw}}.  (8)

4.1 Stochastic gradient Riemannian Langevin dynamics for LDA

As we would like to apply these techniques to large document collections, we use the stochastic gradient version of the Riemannian Langevin dynamics algorithm, as detailed in Section 2.3. Following the investigation in Section 3, we use the expanded-mean parameterisation. For each of the K topics π_k, we introduce a W-dimensional unnormalised parameter θ_k with an independent Gamma prior p(θ_k) ∝ Π_{w=1}^W θ_{kw}^{β−1} e^{−θ_{kw}}, and set π_{kw} = θ_{kw} / Σ_{w'} θ_{kw'} for w = 1, . . . , W. We use the mirroring idea as well. The metric G(θ) is then the diagonal matrix G(θ) = diag(θ_{11}, . . . , θ_{1W}, . . . , θ_{K1}, . . . , θ_{KW})^{−1}. The algorithm runs on mini-batches of documents: at time t it receives a mini-batch of documents indexed by D_t, drawn at random from the full corpus D. The stochastic gradient of the log posterior of θ on D_t is shown in Equation 9.
∂ log p(θ | w, β, α) / ∂θ_{kw} ≈ (β − 1)/θ_{kw} − 1 + (|D|/|D_t|) Σ_{d∈D_t} E_{z_d|w_d,θ,α}[ n_{dkw}/θ_{kw} − n_{dk·}/θ_{k·} ].  (9)

For this choice of θ and G(θ), we use Equations 3 and 4 to give the SGRLD update for θ,

θ*_{kw} = θ_{kw} + (ε/2) ( β − θ_{kw} + (|D|/|D_t|) Σ_{d∈D_t} E_{z_d|w_d,θ,α}[ n_{dkw} − π_{kw} n_{dk·} ] ) + (θ_{kw})^{1/2} ζ_{kw},  (10)

where ζ_{kw} ~ N(0, ε). Note that the β − 1 term in Equation 9 has been replaced with β in Equation 10, as the −1 cancels with the curvature terms, as detailed in Table 1. As discussed in Section 3, we reflect moves across the boundary 0 < θ_{kw} by taking the absolute value of the proposed update. Comparing Equation 9 to the gradient for the simple model from Section 3, the observed counts n_k for the simple model have been replaced with the expectation of the latent topic assignment counts n_{dkw}. To calculate this expectation we use Gibbs sampling on the topic assignments in each document separately, using the conditional distributions

p(z_di = k | w_d, θ, α) = (α + n^{\i}_{dk·}) θ_{kw_di} / Σ_{k'} (α + n^{\i}_{dk'·}) θ_{k'w_di},  (11)

where \i represents a count excluding the topic assignment variable we are updating.

5 Experiments

We investigate the performance of SGRLD, with no Metropolis-Hastings correction step, on two real-world data sets. We compare it to two online variational Bayesian algorithms developed for latent Dirichlet allocation: online variational Bayes (OVB) [HBB10] and hybrid stochastic variational-Gibbs (HSVG) [MHB12]. The difference between these two methods is the form of variational assumption made. OVB assumes a mean-field variational posterior, q(η_{1:D}, z_{1:D}, π_{1:K}) = Π_d q(η_d) Π_{d,i} q(z_di) Π_k q(π_k); in particular this means topic assignment variables within the same document are assumed to be independent, when in reality they will be strongly coupled. In contrast, HSVG collapses η_d analytically and uses a variational posterior of the form q(z_{1:D}, π_{1:K}) = Π_d q(z_d) Π_k q(π_k), which allows dependence within the components of z_d.
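One SGRLD topic-parameter update (Equation 10) is only a few lines when vectorised. The sketch below is our own illustration, taking the minibatch expectations E[n_dkw] and E[n_dk·] as given (in practice they come from the inner Gibbs sweep of Equation 11):

```python
import numpy as np

def sgrld_topic_update(theta, e_ndkw, e_ndk, beta, corpus_scale, eps, rng):
    """theta: (K, W) positive topic parameters; e_ndkw: (K, W) expected counts
    summed over the minibatch; e_ndk: (K, 1) their row sums; corpus_scale is
    |D| / |D_t|."""
    pi = theta / theta.sum(axis=1, keepdims=True)
    drift = beta - theta + corpus_scale * (e_ndkw - pi * e_ndk)
    noise = np.sqrt(theta) * np.sqrt(eps) * rng.standard_normal(theta.shape)
    # mirror negative proposals back onto the positive orthant
    return np.abs(theta + 0.5 * eps * drift + noise)
```

The preconditioner never needs to be formed explicitly: because G(θ) is diagonal, it appears only through the θ-scaled drift and the θ^{1/2}-scaled noise.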
This more complicated posterior requires Gibbs sampling in the variational update step for z_d, and we combined the code for OVB [HBB10] with the Gibbs sampling routine from our SGRLD code to implement HSVG.

5.1 Evaluation Method

The predictive performance of the algorithms can be measured by looking at the probability they assign to unseen data. A metric frequently used for this purpose is perplexity, the exponentiated cross entropy between the trained model probability distribution and the empirical distribution of the test data. For a held-out document w_d and a training set W, the perplexity is given by

perp(w_d | W, α, β) = exp( − (1/n_{d··}) Σ_{i=1}^{n_{d··}} log p(w_di | W, α, β) ).  (12)

This requires calculating p(w_di | W, α, β), which is done by marginalising out the parameters η_d, π_1, . . . , π_K and topic assignments z_d, to give

p(w_di | W, α, β) = E_{η_d, π}[ Σ_k η_{dk} π_{kw_di} ].  (13)

We use a document completion approach [WMSM09], partitioning the test document w_d into two sets of words, w_d^{train} and w_d^{test}, using w_d^{train} to estimate η_d for the test document, and then calculating the perplexity on w_d^{test} using this estimate. To calculate the perplexity for SGRLD, we integrate η analytically, so Equation 13 is replaced by

p(w_di | w_d^{train}, W, α, β) = E_{π|W,β}[ E_{z_d^{train}|π,α}[ Σ_k η̂_{dk} π_{kw_di} ] ],  (14)

where

η̂_{dk} := p(z_di^{test} = k | z_d^{train}, α) = (n_{dk·}^{train} + α) / (n_{d··}^{train} + Kα).  (15)

We estimate these expectations using the samples we obtain for π from the Markov chain produced by SGRLD, and samples for z_d^{train} produced by Gibbs sampling the topic assignments on w_d^{train}. For OVB and HSVG, we estimate Equation 13 by replacing the true posterior p(η_d, π) with q(η_d, π):

p(w_di | W, α, β) = E_{p(η_d,π|W,α,β)}[ Σ_k η_{dk} π_{kw_di} ] ≈ Σ_k E_{q(η_d)}[η_{dk}] E_{q(π_k)}[π_{kw_di}].  (16)

We estimate the perplexity directly rather than use a variational bound [HBB10], so that we can compare results of the variational algorithms to those of SGRLD.
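Given per-word predictive probabilities, Equation 12 reduces to a one-liner (a hypothetical helper; the probabilities themselves come from Equations 14 or 16):

```python
import numpy as np

def perplexity(word_probs):
    """exp of the negative mean log-probability of the test words (Eq. 12)."""
    p = np.asarray(word_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(p))))
```

A model assigning uniform probability 1/V to every word scores perplexity V, which is the usual sanity check for implementations.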
5.2 Results on NIPS corpus

The first experiment was carried out on the collection of NIPS papers from 1988-2003 [GCPT07]. This corpus contains 2483 documents, which is small enough to run all three algorithms in batch mode and compare their performance to that of collapsed Gibbs sampling on the full collection. Each document was split 80/20 into training and test sets; the training portions of all 2483 documents were used in each update step, and the perplexity was calculated on the test portions of all documents. Hyper-parameters α and β were both fixed to 0.01, and 50 topics were used. A step-size schedule of the form ε_t = a(1 + t/b)^{−c} was used. Perplexities were estimated for a range of step-size parameters, and for 1, 5 and 10 document updates per topic parameter update. For OVB the document updates are fixed point iterations of q(z_d), while for HSVG and SGRLD they are Gibbs updates of z_d, the first half of which were discarded as burn-in. These numbers of document updates were chosen because a previous investigation of the performance of HSVG for varying numbers of Gibbs updates has shown that 6-10 updates are sufficient [MHB12] to achieve good performance. Figure 2(a) shows the lowest perplexities achieved, along with the corresponding parameter settings. As expected, CGS achieves the lowest perplexities. It is surprising that HSVG performs slightly worse than OVB on this data set; since it uses a less restricted variational distribution, it should perform at least as well. SGRLD improves on the performance of OVB and HSVG, but does not match the performance of Gibbs sampling.

5.3 Results on Wikipedia corpus

The algorithms' performance in an online scenario was assessed on a set of articles downloaded at random from Wikipedia, as in [HBB10]. The vocabulary used is again as per [HBB10]: it is not created from the Wikipedia data set, but is instead taken from the top 10,000 words in Project Gutenberg texts, excluding all words of fewer than three characters.
This results in a vocabulary size W of approximately 8000 words. 150,000 documents from Wikipedia were used in total, in minibatches of 50 documents each. The perplexities were estimated using the methods discussed in Section 5.1 on a separate holdout set of 1000 documents, split 90/10 training/test.

Figure 2: Test-set perplexities on the NIPS corpus (a), comparing HSVG, OVB, SGRLD and collapsed Gibbs, and on the Wikipedia corpus (b), comparing HSVG, OVB and SGRLD.

As the corpus size is large, collapsed Gibbs sampling was not run on this data set. For each algorithm a grid-search was run over the hyper-parameters, step-size parameters, and number of Gibbs sampling sweeps / variational fixed point iterations per π update. The lowest three perplexities attained for each algorithm are shown in Figure 2(b); the corresponding parameters are given in the supplementary material. HSVG achieves better performance than OVB, as expected. The performance of SGRLD is a substantial improvement over both variational algorithms.

6 Discussion

We have explored the issues involved in applying Langevin Monte Carlo techniques to a constrained parameter space such as the probability simplex, and developed a novel online sampling algorithm which addresses those issues. Using an expanded parametrisation with a reflection trick for negative proposals removed the need to deal with boundary constraints, and using the Riemannian geometry of the parameter space dealt with the problem of parameters with differing scales. Applying the method to latent Dirichlet allocation on two data sets produced state-of-the-art predictive performance for the same computational budget as competing methods, demonstrating that full Bayesian inference using MCMC can be practically applied to models of interest even when the data set is large. Python code for our method is available at http://www.stats.ox.ac.uk/~teh/sgrld.html.
Due to the widespread use of models defined on the probability simplex, we believe the methods developed here for Langevin dynamics on the probability simplex will find further uses beyond latent Dirichlet allocation and stochastic gradient Monte Carlo methods. A drawback of SGLD algorithms is the need for decreasing step sizes; it would be interesting to investigate adaptive step sizes and the approximation entailed when using fixed step sizes (but see [AKW12] for a recent development).

Acknowledgements

We thank the Gatsby Charitable Foundation and EPSRC (grant EP/K009362/1) for generous funding, the reviewers and area chair for feedback and support, and [HBB10] for use of their excellent publicly available source code.

References

[ABW12] Sungjin Ahn, Anoop Korattikara Balan, and Max Welling, Bayesian posterior sampling via stochastic gradient Fisher scoring, ICML, 2012.
[AKW12] S. Ahn, A. Korattikara, and M. Welling, Bayesian posterior sampling via stochastic gradient Fisher scoring, Proceedings of the International Conference on Machine Learning, 2012.
[Ama95] S. Amari, Information geometry of the EM and em algorithms for neural networks, Neural Networks 8 (1995), no. 9, 1379–1408.
[AWST09] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh, On smoothing and inference for topic models, Proceedings of the International Conference on Uncertainty in Artificial Intelligence, vol. 25, 2009.
[Bea03] M. J. Beal, Variational algorithms for approximate Bayesian inference, Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[BNJ03] D. M. Blei, A. Y. Ng, and M. I. Jordan, Latent Dirichlet allocation, Journal of Machine Learning Research 3 (2003), 993–1022.
[GC11] M. Girolami and B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, Journal of the Royal Statistical Society B 73 (2011), 1–37.
[GCPT07] A. Globerson, G. Chechik, F. Pereira, and N.
Tishby, Euclidean embedding of co-occurrence data, Journal of Machine Learning Research 8 (2007), 2265–2295.
[GRS96] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Markov chain Monte Carlo in practice, Chapman and Hall, 1996.
[GS04] T. L. Griffiths and M. Steyvers, Finding scientific topics, Proceedings of the National Academy of Sciences, 2004.
[HBB10] M. D. Hoffman, D. M. Blei, and F. Bach, Online learning for latent Dirichlet allocation, Advances in Neural Information Processing Systems, 2010.
[Hec99] D. Heckerman, A tutorial on learning with Bayesian networks, Learning in Graphical Models (M. I. Jordan, ed.), Kluwer Academic Publishers, 1999.
[Ken78] J. Kent, Time-reversible diffusions, Advances in Applied Probability 10 (1978), 819–835.
[Ken90] A. D. Kennedy, The theory of hybrid stochastic algorithms, Probabilistic Methods in Quantum Field Theory and Quantum Gravity, Plenum Press, 1990.
[MHB12] D. Mimno, M. Hoffman, and D. Blei, Sparse stochastic inference for latent Dirichlet allocation, Proceedings of the International Conference on Machine Learning, 2012.
[NASW09] D. Newman, A. Asuncion, P. Smyth, and M. Welling, Distributed algorithms for topic models, Journal of Machine Learning Research (2009).
[Nea10] R. M. Neal, MCMC using Hamiltonian dynamics, Handbook of Markov Chain Monte Carlo (S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, eds.), Chapman & Hall / CRC Press, 2010.
[PSD00] J. K. Pritchard, M. Stephens, and P. Donnelly, Inference of population structure using multilocus genotype data, Genetics 155 (2000), 945–959.
[RM51] H. Robbins and S. Monro, A stochastic approximation method, Annals of Mathematical Statistics 22 (1951), no. 3, 400–407.
[RS02] G. O. Roberts and O. Stramer, Langevin diffusions and Metropolis-Hastings algorithms, Methodology and Computing in Applied Probability 4 (2002), 337–357.
[Sat01] M. Sato, Online model selection based on the variational Bayes, Neural Computation 13 (2001), 1649–1681.
[TNW07] Y. W. Teh, D. Newman, and M. Welling, A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation, Advances in Neural Information Processing Systems, vol. 19, 2007, pp. 1353–1360.
[WJ08] M. J. Wainwright and M. I. Jordan, Graphical models, exponential families, and variational inference, Foundations and Trends in Machine Learning 1 (2008), no. 1-2, 1–305.
[WMSM09] Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno, Evaluation methods for topic models, Proceedings of the 26th International Conference on Machine Learning (ICML) (Montreal) (Léon Bottou and Michael Littman, eds.), Omnipress, June 2009, pp. 1105–1112.
[WT11] M. Welling and Y. W. Teh, Bayesian learning via stochastic gradient Langevin dynamics, Proceedings of the International Conference on Machine Learning, 2011.
When Are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

Animashree Anandkumar, University of California, Irvine, CA, a.anandkumar@uci.edu
Daniel Hsu, Columbia University, New York, NY, djhsu@cs.columbia.edu
Majid Janzamin, University of California, Irvine, CA, mjanzami@uci.edu
Sham Kakade, Microsoft Research, Cambridge, MA, skakade@microsoft.com

Abstract

Overcomplete latent representations have been very popular for unsupervised feature learning in recent years. In this paper, we specify which overcomplete models can be identified given observable moments of a certain order. We consider probabilistic admixture or topic models in the overcomplete regime, where the number of latent topics can greatly exceed the size of the observed word vocabulary. While general overcomplete topic models are not identifiable, we establish generic identifiability under a constraint, referred to as topic persistence. Our sufficient conditions for identifiability involve a novel set of "higher order" expansion conditions on the topic-word matrix or the population structure of the model. This set of higher-order expansion conditions allows for overcomplete models, and requires the existence of a perfect matching from latent topics to higher order observed words. We establish that random structured topic models are identifiable w.h.p. in the overcomplete regime. Our identifiability results allow for general (non-degenerate) distributions for modeling the topic proportions, and thus we can handle arbitrarily correlated topics in our framework. Our identifiability results imply uniqueness of a class of tensor decompositions with structured sparsity which is contained in the class of Tucker decompositions, but is more general than the Candecomp/Parafac (CP) decomposition.

Keywords: Overcomplete representation, admixture models, generic identifiability, tensor decomposition.
1 Introduction

A probabilistic framework for incorporating features posits latent or hidden variables that can provide a good explanation of the observed data. Overcomplete probabilistic models can incorporate a much larger number of latent variables compared to the observed dimensionality. In this paper, we characterize the conditions under which overcomplete latent variable models can be identified from their observed moments. For any parametric statistical model, identifiability is a fundamental question of whether the model parameters can be uniquely recovered given the observed statistics. Identifiability is crucial in a number of applications where the latent variables are the quantities of interest, e.g. inferring diseases (latent variables) through symptoms (observations), inferring communities (latent variables) via the interactions among the actors in a social network (observations), and so on. Moreover, identifiability can be relevant even in predictive settings, where feature learning is employed for some higher-level task such as classification. For instance, non-identifiability can lead to the presence of non-isolated local optima for optimization-based learning methods, and this can affect their convergence properties, e.g. see [1]. In this paper, we characterize identifiability for a popular class of latent variable models, known as admixture or topic models [2, 3]. These are hierarchical mixture models, which incorporate the presence of multiple latent states (i.e. topics) in documents consisting of a tuple of observed variables (i.e. words). In this paper, we characterize conditions under which topic models are identified through their observed moments in the overcomplete regime. To this end, we introduce an additional constraint on the model, referred to as topic persistence. Intuitively, this captures the "locality" effect among the observed words, and goes beyond the usual "bag-of-words" or exchangeable topic models.
Such local dependencies among observations abound in applications such as text, images and speech, and can lead to a more faithful representation. In addition, we establish that the presence of topic persistence is central to obtaining model identifiability in the overcomplete regime, and we provide an in-depth analysis of this phenomenon in this paper.

1.1 Summary of Results

In this paper, we provide conditions for generic¹ model identifiability of overcomplete topic models given observable moments of a certain order (i.e., a certain number of words in each document). We introduce a novel constraint, referred to as topic persistence, and analyze its effect on identifiability. We establish identifiability in the presence of a novel combinatorial object, termed a perfect n-gram matching, in the bipartite graph from topics to words (observed variables). Finally, we prove that random models satisfy these criteria, and are thus identifiable in the overcomplete regime.

Persistent Topic Model: We first introduce the n-persistent topic model, where the parameter n determines the so-called persistence level of a common topic in a sequence of n successive words, as seen in Figure 1. The n-persistent model reduces to the popular "bag-of-words" model when n = 1, and to the single topic model (i.e. only one topic in each document) when n → ∞. Intuitively, topic persistence aids identifiability since we have multiple views of the common hidden topic generating a sequence of successive words. We establish that the bag-of-words model (with n = 1) is too non-informative about the topics to be identifiable in the overcomplete regime. On the other hand, n-persistent overcomplete topic models with n ≥ 2 are generically identifiable, and we provide a set of transparent conditions for identifiability.

Figure 1: Hierarchical structure of the n-persistent topic model; 2rn words (views) are shown for some integer r ≥ 1.
A single topic y_j, j ∈ [2r], is chosen for each group of n successive views {x_{(j−1)n+1}, . . . , x_{(j−1)n+n}}.

Deterministic Conditions for Identifiability: Our sufficient conditions for identifiability are in the form of expansion conditions from the latent topic space to the observed word space. In the overcomplete regime, there are more topics than words, and thus it is impossible to have expansion from topics to words. Instead, we impose a novel expansion constraint from topics to "higher order" words, which allows us to handle overcomplete models. We establish that this condition translates to the presence of a novel combinatorial object, referred to as a perfect n-gram matching, on the bipartite graph from topics to words, which encodes the sparsity pattern of the topic-word matrix. Intuitively, this condition implies "diversity" of the word support for different topics, which leads to identifiability. In addition, we present tradeoffs between the topic and word space dimensionality, the topic persistence level, the order of the observed moments at hand, the maximum degree of any topic in the bipartite graph, and the Kruskal rank [4] of the topic-word matrix, for identifiability to hold. We also discuss how an ℓ1-based optimization program can recover the model under additional constraints.

¹A model is generically identifiable if all the parameters in the parameter space are identifiable, almost surely. Refer to Definition 1 for more discussion.

Identifiability of Random Structured Topic Models: We explicitly characterize the regime of identifiability for the random setting, where each topic i is randomly supported on a set of d_i words, i.e. the bipartite graph is a random graph. For this random model with q topics, a p-dimensional word vocabulary, and topic persistence level n, when q = O(p^n) and Θ(log p) ≤ d_i ≤ Θ(p^{1/n}) for all topics i, the topic-word matrix is identifiable from (2n)-th order observed moments with high probability.
Furthermore, we establish that the size condition q = O(p^n) for identifiability is tight.

Implications on Uniqueness of Overcomplete Tucker and CP Tensor Decompositions: We establish that identifiability of an overcomplete topic model is equivalent to uniqueness of the decomposition of the observed moment tensor (of a certain order). Our identifiability results for persistent topic models imply uniqueness of a structured class of tensor decompositions, which is contained in the class of Tucker decompositions, but is more general than the Candecomp/Parafac (CP) decomposition [5]. This sub-class of Tucker decompositions involves structured sparsity and symmetry constraints on the core tensor, and sparsity constraints on the inverse factors of the decomposition. A detailed overview of techniques and related work is provided in the long version [12].

2 Model

Notations: The set {1, 2, . . . , n} is denoted by [n] := {1, 2, . . . , n}. The cardinality of a set S is denoted by |S|. For any vector u (or matrix U), the support, denoted by Supp(u), corresponds to the location of its non-zero entries. For a vector u ∈ R^q, Diag(u) ∈ R^{q×q} is the diagonal matrix with u on its main diagonal. The column space of a matrix A is denoted by Col(A). The operators "⊗" and "⊙" refer to the Kronecker and Khatri-Rao products [6], respectively.

2.1 Persistent topic model

An admixture model specifies a q-dimensional vector of topic proportions h ∈ Δ^{q−1} := {u ∈ R^q : u_i ≥ 0, Σ_{i=1}^q u_i = 1} which generates the observed variables x_l ∈ R^p through vectors a_1, . . . , a_q ∈ R^p. This collection of vectors a_i, i ∈ [q], is referred to as the population structure or topic-word matrix [7]. For instance, a_i represents the conditional distribution of words given topic i. The latent variable h is a q-dimensional random vector h := [h_1, . . . , h_q]^⊤ known as the proportion vector.
A prior distribution P(h) over the probability simplex Δ^{q−1} characterizes the prior joint distribution over the latent variables (topics) h_i, i ∈ [q]. The n-persistent topic model has the three-level multi-view hierarchy shown in Figure 1. In this model, a common hidden topic is persistent for a sequence of n words {x_{(j−1)n+1}, . . . , x_{(j−1)n+n}}, j ∈ [2r]. Note that the random observed variables (words) are exchangeable within groups of size n, where n is the persistence level, but are not globally exchangeable. We now describe a linear representation for the n-persistent topic model, along the lines of [9], but with extensions to incorporate persistence. Each random variable y_j, j ∈ [2r], is a discrete-valued q-dimensional random variable encoded by the basis vectors e_i, i ∈ [q], where e_i is the i-th basis vector in R^q with the i-th entry equal to 1 and all the others equal to zero. When y_j = e_i ∈ R^q, the topic of the j-th group of words is i. Given the proportion vector h ∈ R^q, the topics y_j, j ∈ [2r], are independently drawn according to the conditional expectation

E[y_j | h] = h, j ∈ [2r],

or equivalently Pr[y_j = e_i | h] = h_i, j ∈ [2r], i ∈ [q]. Each observed variable x_l, l ∈ [2rn], is a discrete-valued p-dimensional random variable (word), where p is the size of the vocabulary. Again, we assume that the variables x_l are encoded by the basis vectors e_k, k ∈ [p], such that x_l = e_k ∈ R^p when the l-th word in the document is k. Given the corresponding topic y_j, j ∈ [2r], the words x_l, l ∈ [2rn], are independently drawn according to the conditional expectation

E[x_{(j−1)n+k} | y_j = e_i] = a_i, i ∈ [q], j ∈ [2r], k ∈ [n],

where the vectors a_i ∈ R^p, i ∈ [q], are the conditional probability distribution vectors. The matrix A = [a_1 | a_2 | · · · | a_q] ∈ R^{p×q} collecting these vectors is called the population structure or topic-word matrix. The (2rn)-th order moment of the observed variables x_l, l ∈ [2rn], for some integer r ≥ 1, is defined (in matrix form) as

M_{2rn}(x) := E[(x_1 ⊗ x_2 ⊗ · · · ⊗ x_{rn})(x_{rn+1} ⊗ x_{rn+2} ⊗ · · · ⊗ x_{2rn})^⊤] ∈ R^{p^{rn} × p^{rn}}.
(1)

For the n-persistent topic model with 2rn observations (words) x_l, l ∈ [2rn], the corresponding moment is denoted by M^{(n)}_{2rn}(x). In this paper, we consider the problem of identifiability when exact moments are available. Given M^{(n)}_{2rn}(x), what are the sufficient conditions under which the population structure A = [a_1 | a_2 | · · · | a_q] ∈ R^{p×q} is identifiable? This is answered in Section 3.

3 Sufficient Conditions for Generic Identifiability

In this section, the identifiability result for the n-persistent topic model with access to the (2n)-th order observed moment is provided. First, sufficient deterministic conditions on the population structure A are provided for identifiability in Theorem 1. Next, the deterministic analysis is specialized to a random structured model in Theorem 2. We now make the notion of identifiability precise. As defined in the literature, (strict) identifiability means that the population structure A can be uniquely recovered up to permutation and scaling for all A ∈ R^{p×q}. Instead, we consider a more relaxed notion of identifiability, known as generic identifiability.

Definition 1 (Generic identifiability). We refer to a matrix A ∈ R^{p×q} with a fixed sparsity pattern as generic when the nonzero entries of A are drawn from a distribution which is absolutely continuous with respect to Lebesgue measure². For a given sparsity pattern, the class of population structure matrices is said to be generically identifiable [10] if all the non-identifiable matrices form a set of Lebesgue measure zero.

The (2r)-th order moment of the hidden variables h ∈ R^q, denoted by M_{2r}(h), is defined as

M_{2r}(h) := E[(h ⊗ · · · ⊗ h)(h ⊗ · · · ⊗ h)^⊤] ∈ R^{q^r × q^r},  (2)

with r copies of h in each factor.

Condition 1 (Non-degeneracy). The (2r)-th order moment of the hidden variables h ∈ R^q, defined in equation (2), is full rank (non-degeneracy of hidden nodes).

Note that there is no hope of distinguishing distinct hidden nodes without this non-degeneracy assumption.
We do not impose any other assumptions on the hidden variables, and can incorporate arbitrarily correlated topics. Furthermore, we can only hope to identify the population structure A up to scaling and permutation. Therefore, we identify A up to a canonical form, defined as:

Definition 2 (Canonical form). The population structure A is said to be in canonical form if all of its columns have unit norm.

3.1 Deterministic Conditions for Generic Identifiability

Before providing the main result, a generalized notion of (perfect) matching for bipartite graphs is defined. We subsequently impose these conditions on the bipartite graph from topics to words, which encodes the sparsity pattern of the population structure A.

²As an equivalent definition, if the non-zero entries of an arbitrary sparse matrix are independently perturbed with noise drawn from a continuous distribution to generate A, then A is called generic.

Generalized matching for bipartite graphs: A bipartite graph with two disjoint vertex sets Y and X and an edge set E between them is denoted by G(Y, X; E). Given the bi-adjacency matrix A, the notation G(Y, X; A) is also used to denote a bipartite graph. Here, the rows and columns of the matrix A ∈ R^{|X|×|Y|} are respectively indexed by the X and Y vertex sets. Furthermore, for any subset S ⊆ Y, the set of neighbors of vertices in S with respect to A is denoted by N_A(S).

Definition 3 ((Perfect) n-gram matching). An n-gram matching M for a bipartite graph G(Y, X; E) is a subset of edges M ⊆ E which satisfies the following conditions. First, for any j ∈ Y, we have |N_M(j)| ≤ n. Second, for any j_1, j_2 ∈ Y, j_1 ≠ j_2, we have min{|N_M(j_1)|, |N_M(j_2)|} > |N_M(j_1) ∩ N_M(j_2)|. A perfect n-gram matching or Y-saturating n-gram matching for the bipartite graph G(Y, X; E) is an n-gram matching M in which each vertex in Y is the end-point of exactly n edges in M.
In words, in an n-gram matching M, each vertex j ∈ Y is the end-point of at most n edges in M, and for any pair of vertices in Y (j_1, j_2 ∈ Y, j_1 ≠ j_2), there exists at least one non-common neighbor in the set X for each of them (j_1 and j_2). Note that a 1-gram matching is the same as a regular matching for bipartite graphs.

Remark 1 (Necessary size bound). Consider a bipartite graph G(Y, X; E) with |Y| = q and |X| = p which has a perfect n-gram matching. Note that there are (p choose n) n-combinations on the X side, and each combination can have at most one neighbor (a node in Y which is connected to all nodes in the combination) through the matching; therefore we necessarily have q ≤ (p choose n).

Identifiability conditions based on the existence of a perfect n-gram matching in the topic-word graph: Now, we are ready to propose the identifiability conditions and result.

Condition 2 (Perfect n-gram matching on A). The bipartite graph G(V_h, V_o; A) between hidden and observed variables has a perfect n-gram matching.

The above condition implies that the sparsity pattern of the matrix A is appropriately scattered in the mapping from hidden to observed variables to be identifiable. Intuitively, it means that every hidden node can be distinguished from another hidden node by its unique set of neighbors under the corresponding n-gram matching. Furthermore, Condition 2 is the key to being able to establish identifiability in the overcomplete regime. As stated in the size bound in Remark 1, for n ≥ 2, the number of hidden variables can be more than the number of observed variables and we can still have a perfect n-gram matching.

Definition 4 (Kruskal rank, [11]). The Kruskal rank or krank of a matrix A is defined as the maximum number k such that every subset of k columns of A is linearly independent.

Condition 3 (Krank condition on A). The Kruskal rank of the matrix A satisfies the bound krank(A) ≥ d_max(A) · n, where d_max(A) is the maximum node degree of any column of A.
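Both Definition 3 and Definition 4 can be verified directly on small instances. The brute-force checkers below are our own illustrative sketches (feasible only for tiny graphs and matrices), not algorithms from the paper.

```python
import numpy as np
from itertools import combinations

def is_perfect_ngram_matching(M, Y, n):
    """Check Definition 3: M is a set of (y, x) edges. Perfect n-gram matching
    requires each y in Y to have exactly n matched neighbours, and every pair
    of y's to satisfy min(|N(j1)|, |N(j2)|) > |N(j1) & N(j2)|."""
    nbrs = {j: set() for j in Y}
    for (j, x) in M:
        nbrs[j].add(x)
    if any(len(nbrs[j]) != n for j in Y):     # perfect: exactly n edges each
        return False
    for j1, j2 in combinations(Y, 2):         # distinguishability condition
        s1, s2 = nbrs[j1], nbrs[j2]
        if min(len(s1), len(s2)) <= len(s1 & s2):
            return False
    return True

def kruskal_rank(A, tol=1e-10):
    """Definition 4: the largest k such that every set of k columns of A is
    linearly independent. Exhaustive search over column subsets."""
    q = A.shape[1]
    krank = 0
    for k in range(1, q + 1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == k
               for c in combinations(range(q), k)):
            krank = k
        else:
            break
    return krank
```

For example, with Y = {0, 1} and n = 2, the edge set {(0, a), (0, b), (1, b), (1, c)} is a perfect 2-gram matching, while {(0, a), (0, b), (1, a), (1, b)} is not, since the two topics share both matched words.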
In the overcomplete regime, it is not possible for A to be full column rank, and krank(A) < |V_h| = q. However, note that a large enough krank ensures that appropriately sized subsets of columns of A are linearly independent. For instance, when krank(A) > 1, no two columns can be collinear, and the above condition rules out the collinear case for identifiability. In the above condition, we see that a larger krank can incorporate denser connections between topics and words. The main identifiability result under a fixed graph structure is stated in the following theorem for n ≥ 2, where n is the topic persistence level.

Theorem 1 (Generic identifiability under deterministic topic-word graph structure). Let M^{(n)}_{2rn}(x) in equation (1) be the (2rn)-th order observed moment of the n-persistent topic model, for some integer r ≥ 1. If the model satisfies Conditions 1, 2 and 3, then, for any n ≥ 2, all the columns of the population structure A are generically identifiable from M^{(n)}_{2rn}(x). Furthermore, the (2r)-th order moment of the hidden variables, denoted by M_{2r}(h), is also generically identifiable.

The theorem is proved in Appendix A of the long version [12]. It is seen that the population structure A is identifiable given any observed moment of order at least 2n. Increasing the order of the observed moment results in identifying higher order moments of the hidden variables. The above theorem does not cover the case n = 1. This is the usual bag-of-words admixture model. Identifiability of this model has been studied earlier [13], and we recall it below.

Remark 2 (Bag-of-words admixture model, [13]). Given (2r)-th order observed moments with r ≥ 1, the structure of the popular bag-of-words admixture model and the (2r)-th order moment of the hidden variables are identifiable when A is full column rank and the following expansion condition holds [13]:

|N_A(S)| ≥ |S| + d_max(A), ∀S ⊆ V_h, |S| ≥ 2.
(3)

Our result for n ≥ 2 in Theorem 1 provides identifiability in the overcomplete regime with the weaker matching condition 2 and krank condition 3. The matching condition 2 is weaker than the above expansion condition, which is based on a perfect matching and hence does not allow overcomplete models. Furthermore, the above result for the bag-of-words admixture model requires full column rank of A, which is more stringent than our krank condition 3.

Remark 3 (Recovery using ℓ1 optimization). It turns out that our conditions for identifiability imply that the columns of the n-gram matrix³ A^{⊙n} are the sparsest vectors in Col(M^{(n)}_{2n}(x)) having a tensor rank of one. See Appendix A in the long version [12]. This implies recovery of the columns of A through exhaustive search, which is not efficient. Efficient ℓ1-based recovery algorithms have been analyzed in [13, 14] for the undercomplete case (n = 1). They can be employed here for recovery from higher order moments as well. Exploiting additional structure present in A^{⊙n} for n > 1, such as the rank-1 test devices proposed in [15], is an interesting avenue for future investigation.

3.2 Analysis Under Random Topic-Word Graph Structures

In this section, we specialize the identifiability result to the random case. This result is based on more transparent conditions on the size and degree of the random bipartite graph G(V_h, V_o; A). We consider the random model where, in the bipartite graph G(V_h, V_o; A), each node i ∈ V_h is randomly connected to d_i different nodes in the set V_o. Note that this is a heterogeneous degree model.

Condition 4 (Size condition). The random bipartite graph G(V_h, V_o; A) with |V_h| = q, |V_o| = p, and A ∈ R^{p×q} satisfies the size condition q ≤ c (p/n)^n for some constant 0 < c < 1.

This size condition is required to establish that the random bipartite graph has a perfect n-gram matching (and hence satisfies the deterministic condition 2). It is shown that the necessary size constraint q = O(p^n) stated in Remark 1 is achieved in the random case.
It is shown that the necessary size constraint q = O(pn) stated in Remark 1, is achieved in the random case. Thus, the above constraint allows for the overcomplete regime, where q ≫p for n ≥2, and is tight. Condition 5 (Degree condition). In the random bipartite graph G(Vh, Vo; A) with |Vh| = q, |Vo| = p, and A ∈Rp×q, the degree di of nodes i ∈Vh satisfies the lower and upper bounds dmin ≥ max{1 + β log p, α log p} for some constants β > n−1 log 1/c, α > max 2n2 β log 1 c + 1 , 2βn , and dmax ≤(cp) 1 n . Intuitively, the lower bound on the degree is required to show that the corresponding bipartite graph G(Vh, Vo; A) has sufficient number of random edges to ensure that it has perfect n-gram matching with high probability. The upper bound on the degree is mainly required to satisfy the krank condition 3, where dmax(A)n ≤krank(A). It is important to see that, for n ≥2, the above condition on degree covers a range of models from sparse to intermediate regimes and it is reasonable in a number of applications that each topic does not generate a very large number of words. Probability rate constants: The probability rate of success in the following random identifiability result is specified by constants β′ > 0 and γ = γ1 + γ2 > 0 as β′ = −β log c −n + 1, (4) γ1 = en−1 c nn−1 + e2 1 −δ1 nβ′+1 , (5) γ2 = cn−1e2 nn(1 −δ2), (6) where δ1 and δ2 are some constants satisfying e2 p n −β log 1/c < δ1 < 1 and cn−1e2 nn p−β′ < δ2 < 1. 3A⊙n := A ⊙· · · ⊙A | {z } n times . 6 h y x1 xm xm+1 x2m (a) Single topic model (infinite-persistent topic model) h y1 ym ym+1 y2m x1 xm xm+1 x2m (b) Bag-of-words admixture model (1-persistent topic model) Figure 2: Hierarchical structure of the single topic model and bag-of-words admixture model shown for 2m number of words (views). Theorem 2 (Random identifiability). Let M (n) 2rn(x) in equation (1) be the (2rn)-th order observed moment of the n-persistent topic model for some integer r ≥1. 
If the model with random population structure A satisfies conditions 1, 4 and 5, then whp (with probability at least 1−γp−β′ for constants β′ > 0 and γ > 0, specified in (4)-(6)), for any n ≥2, all the columns of population structure A are identifiable from M (n) 2rn(x). Furthermore, the (2r)-th order moment of hidden variables, denoted by M2r(h), is also identifiable, whp. The theorem is proved in Appendix B of the long version [12]. Similar to the deterministic analysis, it is seen that the population structure A is identifiable given any observed moment with order at least 2n. Increasing the order of observed moment results in identifying higher order moments of the hidden variables. The above identifiability theorem only covers for n ≥2 and the n = 1 case is addressed in the following remark. Remark 4 (Bag-of-words admixture model). The identifiability result for the random bag-of-words admixture model is comparable to the result in [14], which considers exact recovery of sparsely-used dictionaries. They assume that Y = DX is given for some unknown arbitrary dictionary D ∈Rq×q and unknown random sparse coefficient matrix X ∈Rq×p. They establish that if D ∈Rq×q is full rank and the random sparse coefficient matrix X ∈Rq×p follows the Bernoulli-subgaussian model with size constraint p > Cq log q and degree constraint O(log q) < E[d] < O(q log q), then the model is identifiable, whp. Comparing the size and degree constraints, our identifiability result for n ≥2 requires more stringent upper bound on the degree (d = O(p1/n)), while more relaxed condition on the size (q = O(pn)) which allows to identifiability in the overcomplete regime. Remark 5 (The size condition is tight). The size bound q = O(pn) in the above theorem achieves the necessary condition that q ≤ p n = O(pn) (see Remark 1), and is therefore tight. 
The sufficiency is argued in Theorem 3 of the long version [12], where we show that the matching condition 2 holds under the above size and degree conditions 4 and 5.

4 Why Does Persistence Help in the Identifiability of Overcomplete Models?

In this section, we provide the moment characterization of the 2-persistent topic model. Then, we discuss and compare why persistence helps in providing identifiability in the overcomplete regime. The moment characterization and detailed tensor analysis are provided in the long version [12]. The single topic model ($n \to \infty$) is shown in Figure 2a and the bag-of-words admixture model ($n = 1$) is shown in Figure 2b. In order to have a fair comparison among these different models, we fix the number of observed variables to 4 (case $m = 2$) and vary the persistence level. Consider three different models: the 2-persistent topic model, the single topic model, and the bag-of-words admixture model (1-persistent topic model). From the moment characterization results provided in the long version [12], we have the following moment forms for each of these models. For the 2-persistent topic model with 4 words ($r = 1$, $n = 2$), we have

$$M_4^{(2)}(x) = (A \odot A)\, E[hh^\top]\, (A \odot A)^\top. \quad (7)$$

For the single topic model with 4 words, we have

$$M_4^{(\infty)}(x) = (A \odot A)\, \mathrm{Diag}(E[h])\, (A \odot A)^\top, \quad (8)$$

and for the bag-of-words admixture model with 4 words ($r = 2$, $n = 1$), we have

$$M_4^{(1)}(x) = (A \otimes A)\, E\big[(h \otimes h)(h \otimes h)^\top\big]\, (A \otimes A)^\top. \quad (9)$$

Note that for the single topic model in (8), the Khatri-Rao product matrix $A \odot A \in \mathbb{R}^{p^2 \times q}$ has the same number of columns (i.e., the latent dimensionality) as the original matrix $A$, while the number of rows (i.e., the observed dimensionality) is increased. Thus, the Khatri-Rao product "expands" the effect of hidden variables to higher order observed variables, which is the key towards identifying overcomplete models.
In other words, the original overcomplete representation becomes determined due to the ‘expansion effect’ of the Khatri-Rao product structure of the higher order observed moments. On the other hand, in the bag-of-words admixture model in (9), this interesting ‘expansion property’ does not occur, and we have the Kronecker product A ⊗A ∈Rp2×q2, in place of the Khatri-Rao products. The Kronecker product operation increases both the number of the columns (i.e. latent dimensionality) and the number of rows (i.e. observed dimensionality), which implies that higher order moments do not help in identifying overcomplete models. Finally, Contrasting equation (7) with (8) and (9), we find that the 2-persistent model retains the desirable property of possessing Khatri-Rao products, while being more general than the form for single topic model in (8). This key property enables us to establish identifiability of topic models with finite persistence levels. Remark 6 (Relationship to tensor decompositions). In the long version of this work [12], we establish that the tensor representation of our model is a special case of the Tucker representation, but more general than the symmetric CP representation [6]. Therefore, our identifiability results also imply uniqueness of a class of tensor decompositions with structured sparsity which is contained in the class of Tucker decompositions, but is more general than the Candecomp/Parafac (CP) decomposition. 5 Proof sketch The moment of n-persistent topic model with 2n words is derived as M (n) 2n (x) = (A⊙n) E hh⊤ (A⊙n)⊤; see [12]. When hidden variables are non-degenerate and A⊙n is full column rank, we have Col M (n) 2n (x) = Col A⊙n . Therefore, the problem of recovering A from M (n) 2n (x) reduces to finding A⊙n in Col A⊙n . Then, under the expansion condition 4 NA⊙n Rest.(S) ≥|S| + dmax A⊙n , ∀S ⊆Vh, |S| > krank(A), we establish that, matrix A is identifiable from Col A⊙n . 
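The dimension claims above are easy to check numerically. The sketch below (a toy illustration in numpy, with arbitrary sizes $p$ and $q$ chosen for the example) contrasts the Khatri-Rao product $A \odot A \in \mathbb{R}^{p^2 \times q}$ with the Kronecker product $A \otimes A \in \mathbb{R}^{p^2 \times q^2}$, and confirms that each column of $A \odot A$ is a rank-1 matrix when reshaped to $p \times p$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 5  # toy overcomplete example: q > p

A = rng.random((p, q))

# Khatri-Rao product: column-wise Kronecker product.
# Expands the rows (p -> p^2) but keeps the number of columns at q.
khatri_rao = np.column_stack([np.kron(A[:, j], A[:, j]) for j in range(q)])

# Kronecker product: expands both rows (p -> p^2) and columns (q -> q^2).
kronecker = np.kron(A, A)

print(khatri_rao.shape)  # (9, 5): latent dimensionality unchanged
print(kronecker.shape)   # (9, 25): latent dimensionality also blown up

# Each Khatri-Rao column is an outer product a_j a_j^T, hence rank 1 as a tensor.
for j in range(q):
    assert np.linalg.matrix_rank(khatri_rao[:, j].reshape(p, p)) == 1
```

This matches the discussion: only the Khatri-Rao structure "expands" the observed dimensionality without inflating the latent one.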
This identifiability claim is proved by showing that the columns of A⊙n are the sparsest and rank-1 (in the tensor form) vectors in Col A⊙n under the sufficient expansion and genericity conditions. Then, it is established that, sufficient combinatorial conditions on matrix A (conditions 2 and 3) ensure that the expansion and rank conditions on A⊙n are satisfied. This is shown by proving that the existence of perfect n-gram matching on A results in the existence of perfect matching on A⊙n. For further discussion on proof techniques, see the long version [12]. Acknowledgments The authors acknowledge useful discussions with Sina Jafarpour, Adel Javanmard, Alex Dimakis, Moses Charikar, Sanjeev Arora, Ankur Moitra and Kamalika Chaudhuri. A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF Career award CCF-1254106, NSF Award CCF-1219234, ARO Award W911NF-12-1-0404, and ARO YIP Award W911NF-13-1-0084. M. Janzamin is supported by NSF Award CCF-1219234, ARO Award W911NF-12-1-0404 and ARO YIP Award W911NF-13-1-0084. 4A⊙n Rest. is the restricted version of n-gram matrix A⊙n, in which the redundant rows of A⊙n are removed. 8 References [1] Andr´e Uschmajew. Local convergence of the alternating least squares algorithm for canonical tensor approximation. SIAM Journal on Matrix Analysis and Applications, 33(2):639–652, 2012. [2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, 2003. [3] J. K. Pritchard, M. Stephens, and P. Donnelly. Inference of population structure using multilocus genotype data. Genetics, 155:945–959, 2000. [4] J.B. Kruskal. More factors than subjects, tests and treatments: an indeterminacy theorem for canonical decomposition and individual differences scaling. Psychometrika, 41(3):281–293, 1976. [5] Tamara Kolda and Brett Bader. Tensor decompositions and applications. SIREV, 51(3):455– 500, 2009. [6] G.H. Golub and C.F. Van Loan. Matrix Computations. 
The Johns Hopkins University Press, Baltimore, Maryland, 2012. [7] XuanLong Nguyen. Posterior contraction of the population polytope in finite admixture models. arXiv preprint arXiv:1206.0068, 2012. [8] T. Austin et al. On exchangeable random variables and the statistics of large graphs and hypergraphs. Probab. Surv, 5:80–145, 2008. [9] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor Methods for Learning Latent Variable Models. Under Review. J. of Machine Learning. Available at arXiv:1210.7559, Oct. 2012. [10] Elizabeth S. Allman, John A. Rhodes, and Amelia Taylor. A semialgebraic description of the general markov model on phylogenetic trees. Arxiv preprint arXiv:1212.1200, Dec. 2012. [11] J.B. Kruskal. Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear algebra and its applications, 18(2):95– 138, 1977. [12] A. Anandkumar, D. Hsu, M. Janzamin, and S. Kakade. When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity. Preprint available on arXiv:1308.2853, Aug. 2013. [13] A. Anandkumar, D. Hsu, A. Javanmard, and S. M. Kakade. Learning Linear Bayesian Networks with Latent Variables. ArXiv e-prints, September 2012. [14] Daniel A. Spielman, Huan Wang, and John Wright. Exact recovery of sparsely-used dictionaries. ArxXiv preprint, abs/1206.5882, 2012. [15] L. De Lathauwer, J. Castaing, and J.-F Cardoso. Fourth-order cumulant-based blind identification of underdetermined mixtures. IEEE Tran. on Signal Processing, 55:2965–2973, June 2007. 9
Sign Cauchy Projections and Chi-Square Kernel Ping Li Dept of Statistics & Biostat. Dept of Computer Science Rutgers University pingli@stat.rutgers.edu Gennady Samorodnitsky ORIE and Dept of Stat. Science Cornell University Ithaca, NY 14853 gs18@cornell.edu John Hopcroft Dept of Computer Science Cornell University Ithaca, NY 14853 jeh@cs.cornell.edu Abstract The method of stable random projections is useful for efficiently approximating the lα distance (0 < α ≤2) in high dimension and it is naturally suitable for data streams. In this paper, we propose to use only the signs of the projected data and we analyze the probability of collision (i.e., when the two signs differ). Interestingly, when α = 1 (i.e., Cauchy random projections), we show that the probability of collision can be accurately approximated as functions of the chi-square (χ2) similarity. In text and vision applications, the χ2 similarity is a popular measure when the features are generated from histograms (which are a typical example of data streams). Experiments confirm that the proposed method is promising for large-scale learning applications. The full paper is available at arXiv:1308.1009. There are many future research problems. For example, when α →0, the collision probability is a function of the resemblance (of the binary-quantized data). This provides an effective mechanism for resemblance estimation in data streams. 1 Introduction High-dimensional representations have become very popular in modern applications of machine learning, computer vision, and information retrieval. For example, Winner of 2009 PASCAL image classification challenge used millions of features [29]. [1, 30] described applications with billion or trillion features. The use of high-dimensional data often achieves good accuracies at the cost of a significant increase in computations, storage, and energy consumptions. Consider two data vectors (e.g., two images) u, v ∈RD. 
A basic task is to compute their distance or similarity. For example, the correlation ($\rho_2$) and the $l_\alpha$ distance ($d_\alpha$) are commonly used:

$$\rho_2(u, v) = \frac{\sum_{i=1}^D u_i v_i}{\sqrt{\sum_{i=1}^D u_i^2 \sum_{i=1}^D v_i^2}}, \qquad d_\alpha(u, v) = \sum_{i=1}^D |u_i - v_i|^\alpha \quad (1)$$

In this study, we are particularly interested in the $\chi^2$ similarity, denoted by $\rho_{\chi^2}$:

$$\rho_{\chi^2} = \sum_{i=1}^D \frac{2 u_i v_i}{u_i + v_i}, \quad \text{where } u_i \ge 0,\ v_i \ge 0,\ \sum_{i=1}^D u_i = \sum_{i=1}^D v_i = 1 \quad (2)$$

The chi-square similarity is closely related to the chi-square distance $d_{\chi^2}$:

$$d_{\chi^2} = \sum_{i=1}^D \frac{(u_i - v_i)^2}{u_i + v_i} = \sum_{i=1}^D (u_i + v_i) - \sum_{i=1}^D \frac{4 u_i v_i}{u_i + v_i} = 2 - 2\rho_{\chi^2} \quad (3)$$

The chi-square similarity is an instance of the Hilbertian metrics, which are defined over probability spaces [10] and are suitable for data generated from histograms. Histogram-based features (e.g., bag-of-words or bag-of-visual-words models) are extremely popular in computer vision, natural language processing (NLP), and information retrieval. Empirical studies have demonstrated the superiority of the $\chi^2$ distance over the $l_2$ or $l_1$ distances for image and text classification tasks [4, 10, 13, 2, 28, 27, 26]. The method of normal random projections (i.e., $\alpha$-stable projections with $\alpha = 2$) has become popular in machine learning (e.g., [7]) for reducing the data dimensions and data sizes, to facilitate efficient computation of the $l_2$ distances and correlations. More generally, the method of stable random projections [11, 17] provides an efficient algorithm to compute the $l_\alpha$ distances ($0 < \alpha \le 2$). In this paper, we propose to use only the signs of the projected data after applying stable projections.

1.1 Stable Random Projections and Sign (1-Bit) Stable Random Projections

Consider two high-dimensional data vectors $u, v \in \mathbb{R}^D$. The basic idea of stable random projections is to multiply $u$ and $v$ by a random matrix $R \in \mathbb{R}^{D \times k}$: $x = uR \in \mathbb{R}^k$, $y = vR \in \mathbb{R}^k$, where the entries of $R$ are i.i.d. samples from a symmetric $\alpha$-stable distribution with unit scale. By properties of stable distributions, $x_j - y_j$ follows a symmetric $\alpha$-stable distribution with scale $d_\alpha$.
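The identity $d_{\chi^2} = 2 - 2\rho_{\chi^2}$ in (3) is easy to check numerically. Below is a minimal sketch (the histograms and function names are our own made-up example), using the convention $0/0 = 0$:

```python
import numpy as np

def chi2_similarity(u, v):
    # rho_chi2 = sum_i 2 u_i v_i / (u_i + v_i), with the convention 0/0 = 0
    num, den = 2.0 * u * v, u + v
    return np.sum(np.divide(num, den, out=np.zeros_like(num), where=den > 0))

def chi2_distance(u, v):
    num, den = (u - v) ** 2, u + v
    return np.sum(np.divide(num, den, out=np.zeros_like(num), where=den > 0))

u = np.array([0.5, 0.3, 0.2, 0.0])
v = np.array([0.4, 0.1, 0.3, 0.2])  # both histograms sum to 1

rho = chi2_similarity(u, v)
# identity (3): d_chi2 = 2 - 2 * rho_chi2 (holds because both vectors sum to 1)
assert abs(chi2_distance(u, v) - (2 - 2 * rho)) < 1e-12
```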
Hence, the task of computing dα boils down to estimating the scale dα from k i.i.d. samples. In this paper, we propose to store only the signs of projected data and we study the probability of collision: Pα = Pr (sign(xj) ̸= sign(yj)) (4) Using only the signs (i.e., 1 bit) has significant advantages for applications in search and learning. When α = 2, this probability can be analytically evaluated [9] (or via a simple geometric argument): P2 = Pr (sign(xj) ̸= sign(yj)) = 1 π cos−1 ρ2 (5) which is an important result known as sim-hash [5]. For α < 2, the collision probability is an open problem. When the data are nonnegative, this paper (Theorem 1) will prove a bound of Pα for general 0 < α ≤2. The bound is exact at α = 2 and becomes less sharp as α moves away from 2. Furthermore, for α = 1 and nonnegative data, we have the interesting observation that the probability P1 can be well approximated as functions of the χ2 similarity ρχ2. 1.2 The Advantages of Sign Stable Random Projections 1. There is a significant saving in storage space by using only 1 bit instead of (e.g.,) 64 bits. 2. This scheme leads to an efficient linear algorithm (e.g., linear SVM). For example, a negative sign can be coded as “01” and a positive sign as “10” (i.e., a vector of length 2). With k projections, we concatenate k short vectors to form a vector of length 2k. This idea is inspired by b-bit minwise hashing [20], which was designed for binary sparse data. 3. This scheme also leads to an efficient near neighbor search algorithm [8, 12]. We can code a negative sign by “0” and positive sign by “1” and concatenate k such bits to form a hash table of 2k buckets. In the query phase, one only searches for similar vectors in one bucket. 1.3 Data Stream Computations Stable random projections are naturally suitable for data streams. 
In modern applications, massive datasets are often generated in a streaming fashion, which are difficult to transmit and store [22], as the processing is done on the fly in one-pass of the data. In the standard turnstile model [22], a data stream can be viewed as high-dimensional vector with the entry values changing over time. Here, we denote a stream at time t by u(t) i , i = 1 to D. At time t, a stream element (it, It) arrives and updates the it-th coordinate as u(t) it = u(t−1) it + It. Clearly, the turnstile data stream model is particularly suitable for describing histograms and it is also a standard model for network traffic summarization and monitoring [31]. Because this stream model is linear, methods based on linear projections (i.e., matrix-vector multiplications) can naturally handle streaming data of this sort. Basically, entries of the projection matrix R ∈RD×k are (re)generated as needed using pseudo-random number techniques [23]. As (it, It) arrives, only the entries in the it-th row, i.e., rit,j, j = 1 to k, are (re)generated and the projected data are updated as x(t) j = x(t−1) j + It × ritj. Recall that, in the definition of χ2 similarity, the data are assumed to be normalized (summing to 1). For nonnegative streams, the sum can be computed error-free by using merely one counter: ∑D i=1 u(t) i = ∑t s=1 Is. Thus we can still use, without loss of generality, the sum-to-one assumption, even in the streaming environment. This fact was recently exploited by another data stream algorithm named Compressed Counting (CC) [18] for estimating the Shannon entropy of streams. Because the use of the χ2 similarity is popular in (e.g.,) computer vision, recently there are other proposals for estimating the χ2 similarity. For example, [15] proposed a nice technique to approximate ρχ2 by first expanding the data from D dimensions to (e.g.,) 5 ∼10 × D dimensions through a nonlinear transformation and then applying normal random projections on the expanded data. 
The nonlinear transformation makes their method not applicable to data streams, unlike our proposal. 2 For notational simplicity, we will drop the superscript (t) for the rest of the paper. 2 An Experimental Study of Chi-Square Kernels We provide an experimental study to validate the use of χ2 similarity. Here, the “χ2-kernel” is defined as K(u, v) = ρχ2 and the “acos-χ2-kernel” as K(u, v) = 1 −1 π cos−1 ρχ2. With a slight abuse of terminology, we call both “χ2 kernel” when it is clear in the context. We use the “precomputed kernel” functionality in LIBSVM on two datasets: (i) UCI-PEMS, with 267 training examples and 173 testing examples in 138,672 dimensions; (ii) MNIST-small, a subset of the popular MNIST dataset, with 10,000 training examples and 10,000 testing examples. The results are shown in Figure 1. To compare these two types of χ2 kernels with “linear” kernel, we also test the same data using LIBLINEAR [6] after normalizing the data to have unit Euclidian norm, i.e., we basically use ρ2. For both LIBSVM and LIBLINEAR, we use l2-regularization with a regularization parameter C and we report the test errors for a wide range of C values. 10 −2 10 −1 10 0 10 1 10 2 10 3 0 20 40 60 80 100 C Classification Acc (%) PEMS linear χ2 acos χ2 10 −2 10 −1 10 0 10 1 10 2 60 70 80 90 100 C Classification Acc (%) MNIST−Small linear χ2 acos χ2 Figure 1: Classification accuracies. C is the l2-regularization parameter. We use LIBLINEAR for “linear” (i.e., ρ2) kernel and LIBSVM “precomputed kernel” for two types of χ2 kernels (“χ2kernel” and “acos-χ2-kernel”). For UCI-PEMS, the χ2-kernel has better performance than the linear kernel and acos-χ2-kernel. For MNIST-Small, both χ2 kernels noticeably outperform linear kernel. Note that MNIST-small used the original MNIST test set and merely 1/6 of the original training set. Here, we should state that it is not the intention of this paper to use these two small examples to conclude the advantage of χ2 kernels over linear kernel. 
We simply use them to validate our proposed method, which is general-purpose and is not limited to data generated from histograms.

3 Sign Stable Random Projections and the Collision Probability Bound

We apply stable random projections to two vectors $u, v \in \mathbb{R}^D$: $x = \sum_{i=1}^D u_i r_i$, $y = \sum_{i=1}^D v_i r_i$, with $r_i \sim S(\alpha, 1)$, i.i.d. Here $Z \sim S(\alpha, \gamma)$ denotes a symmetric $\alpha$-stable distribution with scale $\gamma$, whose characteristic function [24] is $E\big(e^{\sqrt{-1} Z t}\big) = e^{-\gamma |t|^\alpha}$. By properties of stable distributions, we know $x - y \sim S\big(\alpha, \sum_{i=1}^D |u_i - v_i|^\alpha\big)$. Applications including linear learning and near neighbor search will benefit from sign $\alpha$-stable random projections. When $\alpha = 2$ (i.e., normal), the collision probability $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y))$ is known [5, 9]. For $\alpha < 2$, it is a difficult probability problem. This section provides a bound on $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y))$, which is fairly accurate for $\alpha$ close to 2.

3.1 Collision Probability Bound

In this paper, we focus on nonnegative data (as is common in practice). We present our first theorem.

Theorem 1 When the data are nonnegative, i.e., $u_i \ge 0$, $v_i \ge 0$, we have

$$\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) \le \frac{1}{\pi}\cos^{-1}\rho_\alpha, \quad \text{where } \rho_\alpha = \left(\frac{\sum_{i=1}^D u_i^{\alpha/2} v_i^{\alpha/2}}{\sqrt{\sum_{i=1}^D u_i^\alpha \sum_{i=1}^D v_i^\alpha}}\right)^{2/\alpha} \qquad \square \quad (6)$$

For $\alpha = 2$, this bound is exact [5, 9]. In fact, the result for $\alpha = 2$ leads to the following lemma:

Lemma 1 The kernel defined as $K(u, v) = 1 - \frac{1}{\pi}\cos^{-1}\rho_2$ is positive definite (PD).

Proof: The indicator function $1\{\mathrm{sign}(x) = \mathrm{sign}(y)\}$ can be written as an inner product (hence PD) and $\Pr(\mathrm{sign}(x) = \mathrm{sign}(y)) = E\big(1\{\mathrm{sign}(x) = \mathrm{sign}(y)\}\big) = 1 - \frac{1}{\pi}\cos^{-1}\rho_2$. $\square$

3.2 A Simulation Study to Verify the Bound of the Collision Probability

We generate the original data $u$ and $v$ by sampling from a bivariate t-distribution, which has two parameters: the correlation and the number of degrees of freedom (taken to be 1 in our experiments). We use a full range of the correlation parameter from 0 to 1 (spaced at 0.01). To generate positive data, we simply take the absolute values of the generated data.
Then we fix the data as our original data (like $u$ and $v$), apply sign stable random projections, and report the empirical collision probabilities (after $10^5$ repetitions). Figure 2 presents the simulated collision probability $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y))$ for $D = 100$ and $\alpha \in \{1.5, 1.2, 1.0, 0.5\}$. In each panel, the dashed curve is the theoretical upper bound $\frac{1}{\pi}\cos^{-1}\rho_\alpha$, and the solid curve is the simulated collision probability. Note that, as expected, the simulated data cannot cover the entire range of $\rho_\alpha$ values, especially as $\alpha \to 0$.

Figure 2: Dense Data and $D = 100$. Simulated collision probability $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y))$ for sign stable random projections. In each panel, the dashed curve is the upper bound $\frac{1}{\pi}\cos^{-1}\rho_\alpha$.

Figure 2 verifies the theoretical upper bound $\frac{1}{\pi}\cos^{-1}\rho_\alpha$. When $\alpha \ge 1.5$, this upper bound is fairly sharp. However, when $\alpha \le 1$, the bound is not tight, especially for small $\alpha$. Also, the curves of the empirical collision probabilities are not smooth (in terms of $\rho_\alpha$). Real-world high-dimensional datasets are often sparse. To verify the theoretical upper bound of the collision probability on sparse data, we also simulate sparse data by randomly setting 50% of the data used in Figure 2 to zero. With sparse data, it is even more obvious that the theoretical upper bound $\frac{1}{\pi}\cos^{-1}\rho_\alpha$ is not sharp when $\alpha \le 1$, as shown in Figure 3.
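Such a simulation is easy to reproduce in miniature. The sketch below (our own illustration, not the paper's code) checks both regimes of Theorem 1: for $\alpha = 2$ the empirical collision rate should match $\frac{1}{\pi}\cos^{-1}\rho_2$ (where the bound is exact), while for $\alpha = 1$ (Cauchy) the bound should hold but not tightly:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 100, 100_000  # data dimension and number of projections

u, v = rng.random(D), rng.random(D)  # nonnegative data

def rho_alpha(u, v, alpha):
    # rho_alpha as defined in Theorem 1, equation (6)
    ua, va = u ** alpha, v ** alpha
    return (np.sum(np.sqrt(ua * va)) / np.sqrt(ua.sum() * va.sum())) ** (2 / alpha)

def collision_rate(u, v, R):
    # fraction of projections on which the two signs differ
    x, y = u @ R, v @ R
    return np.mean(np.sign(x) != np.sign(y))

# alpha = 2 (normal projections): the bound is exact, so empirical ~ theoretical
emp2 = collision_rate(u, v, rng.standard_normal((D, k)))
assert abs(emp2 - np.arccos(rho_alpha(u, v, 2)) / np.pi) < 0.01

# alpha = 1 (Cauchy projections): the bound holds but is not tight
emp1 = collision_rate(u, v, rng.standard_cauchy((D, k)))
assert emp1 <= np.arccos(rho_alpha(u, v, 1)) / np.pi + 0.01
```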
Figure 3: Sparse Data and $D = 100$. Simulated collision probability $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y))$ for sign stable random projections. The upper bound is not tight, especially when $\alpha \le 1$.

In summary, the collision probability bound $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) \le \frac{1}{\pi}\cos^{-1}\rho_\alpha$ is fairly sharp when $\alpha$ is close to 2 (e.g., $\alpha \ge 1.5$). However, for $\alpha \le 1$, a better approximation is needed.

4 $\alpha = 1$ and Chi-Square ($\chi^2$) Similarity

In this section, we focus on nonnegative data ($u_i \ge 0$, $v_i \ge 0$) and $\alpha = 1$. This case is important in practice. For example, we can view the data $(u_i, v_i)$ as empirical probabilities, which are common when data are generated from histograms (as popular in NLP and vision) [4, 10, 13, 2, 28, 27, 26]. In this context, we always normalize the data, i.e., $\sum_{i=1}^D u_i = \sum_{i=1}^D v_i = 1$. Theorem 1 implies

$$\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) \le \frac{1}{\pi}\cos^{-1}\rho_1, \quad \text{where } \rho_1 = \left(\sum_{i=1}^D u_i^{1/2} v_i^{1/2}\right)^2 \quad (7)$$

While the bound is not tight, interestingly, the collision probability can be related to the $\chi^2$ similarity. Recall the definitions of the chi-square distance $d_{\chi^2} = \sum_{i=1}^D \frac{(u_i - v_i)^2}{u_i + v_i}$ and the chi-square similarity $\rho_{\chi^2} = 1 - \frac{1}{2} d_{\chi^2} = \sum_{i=1}^D \frac{2 u_i v_i}{u_i + v_i}$. In this context, we should view $\frac{0}{0} = 0$.

Lemma 2 Assume $u_i \ge 0$, $v_i \ge 0$, $\sum_{i=1}^D u_i = 1$, $\sum_{i=1}^D v_i = 1$. Then

$$\rho_{\chi^2} = \sum_{i=1}^D \frac{2 u_i v_i}{u_i + v_i} \ge \rho_1 = \left(\sum_{i=1}^D u_i^{1/2} v_i^{1/2}\right)^2 \qquad \square \quad (8)$$

It is known that the $\chi^2$-kernel is PD [10]. Consequently, we know the acos-$\chi^2$-kernel is also PD.

Lemma 3 The kernel defined as $K(u, v) = 1 - \frac{1}{\pi}\cos^{-1}\rho_{\chi^2}$ is positive definite (PD). $\square$

The remaining question is how to connect Cauchy random projections with the $\chi^2$ similarity.
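Lemma 2 can be spot-checked numerically. The sketch below (an illustrative check on random normalized histograms, not part of the paper) verifies $\rho_{\chi^2} \ge \rho_1$ over many random sparse pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 50

for _ in range(1000):
    # random sparse nonnegative histograms, normalized to sum to 1
    u = rng.random(D) * (rng.random(D) < 0.5)
    v = rng.random(D) * (rng.random(D) < 0.5)
    u, v = u / u.sum(), v / v.sum()

    den = u + v
    # chi-square similarity, with the convention 0/0 = 0
    rho_chi2 = np.sum(np.divide(2 * u * v, den, out=np.zeros(D), where=den > 0))
    # rho_1 from (7); the denominator of (6) is 1 here since both sums are 1
    rho_1 = np.sum(np.sqrt(u * v)) ** 2

    # Lemma 2: the chi-square similarity dominates rho_1
    assert rho_chi2 >= rho_1 - 1e-9
```

Equality is attained, for example, when $u = v$; for generic pairs $\rho_{\chi^2}$ is strictly larger.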
5 Two Approximations of the Collision Probability for Sign Cauchy Projections

It is a difficult problem to derive the collision probability of sign Cauchy projections if we would like to express the probability only in terms of certain summary statistics (e.g., some distance). Our first observation is that the collision probability can be well approximated using the $\chi^2$ similarity:

$$\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) \approx P_{\chi^2(1)} = \frac{1}{\pi}\cos^{-1}\big(\rho_{\chi^2}\big) \quad (9)$$

Figure 4 shows that this approximation is better than $\frac{1}{\pi}\cos^{-1}(\rho_1)$. In particular, for sparse data, the approximation $\frac{1}{\pi}\cos^{-1}\big(\rho_{\chi^2}\big)$ is very accurate (except when $\rho_{\chi^2}$ is close to 1), while the bound $\frac{1}{\pi}\cos^{-1}(\rho_1)$ is not sharp (and the curve is not smooth in $\rho_1$).

Figure 4: The dashed curve is $\frac{1}{\pi}\cos^{-1}(\rho)$, where $\rho$ can be $\rho_1$ or $\rho_{\chi^2}$ depending on the context. In each panel, the two solid curves are the empirical collision probabilities in terms of $\rho_1$ (labeled "1") or $\rho_{\chi^2}$ (labeled "$\chi^2$"). It is clear that the proposed approximation $\frac{1}{\pi}\cos^{-1}\rho_{\chi^2}$ in (9) is tighter than the upper bound $\frac{1}{\pi}\cos^{-1}\rho_1$, especially so for sparse data.

Our second (and less obvious) approximation is the following integral:

$$\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) \approx P_{\chi^2(2)} = \frac{1}{2} - \frac{2}{\pi^2}\int_0^{\pi/2} \tan^{-1}\left(\frac{\rho_{\chi^2}}{2 - 2\rho_{\chi^2}}\tan t\right) dt \quad (10)$$

Figure 5 illustrates that, for dense data, the second approximation (10) is more accurate than the first (9). The second approximation (10) is also accurate for sparse data. Both approximations, $P_{\chi^2(1)}$ and $P_{\chi^2(2)}$, are monotone functions of $\rho_{\chi^2}$. In practice, we often do not need the $\rho_{\chi^2}$ values explicitly, because it often suffices if the collision probability is a monotone function of the similarity.
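Both approximations (9) and (10) are cheap to evaluate. The sketch below (our own illustration, using a simple midpoint rule for the integral in (10)) computes them as functions of $\rho_{\chi^2}$ and checks that they agree at $\rho_{\chi^2} = 0$ and are both monotone:

```python
import numpy as np

def p_chi2_1(rho):
    # First approximation (9): (1/pi) * arccos(rho)
    return np.arccos(rho) / np.pi

def p_chi2_2(rho, n=20000):
    # Second approximation (10): midpoint-rule integration over t in (0, pi/2)
    if rho >= 1.0:
        return 0.0  # the integrand tends to pi/2, so the expression tends to 0
    t = (np.arange(n) + 0.5) * (np.pi / 2) / n
    integrand = np.arctan(rho / (2 - 2 * rho) * np.tan(t))
    return 0.5 - (2 / np.pi**2) * np.sum(integrand) * (np.pi / 2) / n

rhos = np.linspace(0.0, 0.99, 100)
p1 = np.array([p_chi2_1(r) for r in rhos])
p2 = np.array([p_chi2_2(r) for r in rhos])

assert abs(p_chi2_1(0.0) - 0.5) < 1e-12 and abs(p_chi2_2(0.0) - 0.5) < 1e-9
assert np.all(np.diff(p1) < 0) and np.all(np.diff(p2) < 0)  # monotone in rho
```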
5.1 Binary Data

Interestingly, when the data are binary (before normalization), we can compute the collision probability exactly, which allows us to analytically assess the accuracy of the approximations. In fact, this case inspired us to propose the second approximation (10), which is otherwise not intuitive. For convenience, we define $a = |I_a|$, $b = |I_b|$, $c = |I_c|$, where

$$I_a = \{i \mid u_i > 0, v_i = 0\}, \quad I_b = \{i \mid v_i > 0, u_i = 0\}, \quad I_c = \{i \mid u_i > 0, v_i > 0\} \quad (11)$$

Assume binary data (before normalization, i.e., before summing to one). That is,

$$u_i = \frac{1}{|I_a| + |I_c|} = \frac{1}{a + c}, \ \forall i \in I_a \cup I_c, \qquad v_i = \frac{1}{|I_b| + |I_c|} = \frac{1}{b + c}, \ \forall i \in I_b \cup I_c \quad (12)$$

The chi-square similarity $\rho_{\chi^2}$ becomes $\rho_{\chi^2} = \sum_{i=1}^D \frac{2 u_i v_i}{u_i + v_i} = \frac{2c}{a + b + 2c}$, and hence $\frac{\rho_{\chi^2}}{2 - 2\rho_{\chi^2}} = \frac{c}{a + b}$.

Figure 5: Comparison of the two approximations: $\chi^2(1)$ based on (9) and $\chi^2(2)$ based on (10). The solid curves (empirical probabilities expressed in terms of $\rho_{\chi^2}$) are the same solid curves labeled "$\chi^2$" in Figure 4. The left panel shows that the second approximation (10) is more accurate for dense data. The right panel illustrates that both approximations are accurate for sparse data: (9) is slightly more accurate at small $\rho_{\chi^2}$ and (10) is more accurate for $\rho_{\chi^2}$ close to 1.

Theorem 2 Assume binary data. When $\alpha = 1$, the exact collision probability is

$$\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) = \frac{1}{2} - \frac{2}{\pi^2}\, E\left\{\tan^{-1}\left(\frac{c}{a}|R|\right)\tan^{-1}\left(\frac{c}{b}|R|\right)\right\} \quad (13)$$

where $R$ is a standard Cauchy random variable. $\square$

When $a = 0$ or $b = 0$, we have $E\left\{\tan^{-1}\left(\frac{c}{a}|R|\right)\tan^{-1}\left(\frac{c}{b}|R|\right)\right\} = \frac{\pi}{2}\, E\left\{\tan^{-1}\left(\frac{c}{a+b}|R|\right)\right\}$.
This observation inspires us to propose the approximation (10):

$$P_{\chi^2(2)} = \frac{1}{2} - \frac{1}{\pi}\, E\left\{\tan^{-1}\left(\frac{c}{a + b}|R|\right)\right\} = \frac{1}{2} - \frac{2}{\pi^2}\int_0^{\pi/2} \tan^{-1}\left(\frac{c}{a + b}\tan t\right) dt$$

To validate this approximation for binary data, we study the difference between (13) and (10), i.e.,

$$Z(a/c, b/c) = \mathrm{Err} = \Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) - P_{\chi^2(2)} = -\frac{2}{\pi^2}\, E\left\{\tan^{-1}\left(\frac{|R|}{a/c}\right)\tan^{-1}\left(\frac{|R|}{b/c}\right)\right\} + \frac{1}{\pi}\, E\left\{\tan^{-1}\left(\frac{|R|}{a/c + b/c}\right)\right\} \quad (14)$$

(14) can be easily computed by simulations. Figure 6 confirms that the errors are larger than zero and very small. The maximum error is smaller than 0.0192, as proved in Lemma 4.

Figure 6: Left panel: contour plot of the error $Z(a/c, b/c)$ in (14). The maximum error (which is < 0.0192) occurs along the diagonal line. Right panel: the diagonal curve of $Z(a/c, b/c)$.

Lemma 4 The error defined in (14) ranges between 0 and $Z(t^*)$:

$$0 \le Z(a/c, b/c) \le Z(t^*) = \int_0^\infty \left\{-\frac{2}{\pi^2}\left(\tan^{-1}\left(\frac{r}{t^*}\right)\right)^2 + \frac{1}{\pi}\tan^{-1}\left(\frac{r}{2t^*}\right)\right\} \frac{2}{\pi}\frac{1}{1 + r^2}\, dr \quad (15)$$

where $t^* = 2.77935$ is the solution to $\frac{1}{t^2 - 1}\log\frac{2t}{1 + t} = \frac{\log(2t)}{(2t)^2 - 1}$. Numerically, $Z(t^*) = 0.01919$. $\square$

5.2 An Experiment Based on 3.6 Million English Word Pairs

To further validate the two $\chi^2$ approximations (on non-binary data), we experiment with a word occurrence dataset (which is an example of histogram data) from a chunk of $D = 2^{16}$ web crawl documents. There are in total 2,702 words, i.e., 2,702 vectors and 3,649,051 word pairs. The entries of a vector are the occurrences of the word. This is a typical sparse, non-binary dataset. Interestingly, the errors of the collision probabilities based on the two $\chi^2$ approximations are still very small. To report the results, we apply sign Cauchy random projections $10^7$ times to evaluate the approximation errors of (9) and (10).
The results, as presented in Figure 7, again confirm that the upper bound $\frac{1}{\pi}\cos^{-1}\rho_1$ is not tight and that both $\chi^2$ approximations, $P_{\chi^2(1)}$ and $P_{\chi^2(2)}$, are accurate.

Figure 7: Empirical collision probabilities for 3.6 million English word pairs. In the left panel, we plot the empirical collision probabilities against $\rho_1$ (lower, green if color is available) and $\rho_{\chi^2}$ (higher, red). The curves confirm that the bound $\frac{1}{\pi}\cos^{-1}\rho_1$ is not tight (and the curve is not smooth). We plot the two $\chi^2$ approximations as dashed curves, which largely match the empirical probabilities plotted against $\rho_{\chi^2}$, confirming that the $\chi^2$ approximations are good. For smaller $\rho_{\chi^2}$ values, the first approximation $P_{\chi^2(1)}$ is slightly more accurate. For larger $\rho_{\chi^2}$ values, the second approximation $P_{\chi^2(2)}$ is more accurate. In the right panel, we plot the errors for both $P_{\chi^2(1)}$ and $P_{\chi^2(2)}$.

6 Sign Cauchy Random Projections for Classification

Our method provides an effective strategy for classification. For each (high-dimensional) data vector, using $k$ sign Cauchy projections, we encode a negative sign as "01" and a positive sign as "10" (i.e., a vector of length 2) and concatenate the $k$ short vectors to form a new feature vector of length $2k$. We then feed the new data into a linear classifier (e.g., LIBLINEAR). Interestingly, this linear classifier approximates a nonlinear kernel classifier based on the acos-$\chi^2$-kernel: $K(u, v) = 1 - \frac{1}{\pi}\cos^{-1}\rho_{\chi^2}$. See Figure 8 for the experiments on the same two datasets as in Figure 1: UCI-PEMS and MNIST-Small.
Figure 8: The two dashed (red if color is available) curves are the classification results obtained using the "acos-$\chi^2$-kernel" via the "precomputed kernel" functionality in LIBSVM. The solid (black) curves are the accuracies using $k$ sign Cauchy projections and LIBLINEAR, for $k$ ranging from 32 up to 8192. The results confirm that the linear kernel from sign Cauchy projections can approximate the nonlinear acos-$\chi^2$-kernel.

Figure 1 has already shown that, for the UCI-PEMS dataset, the $\chi^2$-kernel ($\rho_{\chi^2}$) can produce noticeably better classification results than the acos-$\chi^2$-kernel ($1 - \frac{1}{\pi}\cos^{-1}\rho_{\chi^2}$). Although our method does not directly approximate $\rho_{\chi^2}$, we can still estimate $\rho_{\chi^2}$ by assuming the collision probability is exactly $\Pr(\mathrm{sign}(x) \neq \mathrm{sign}(y)) = \frac{1}{\pi}\cos^{-1}\rho_{\chi^2}$, and then we can feed the estimated $\rho_{\chi^2}$ values into the LIBSVM "precomputed kernel" functionality for classification. Figure 9 verifies that this method can also approximate the $\chi^2$ kernel with enough projections.

Figure 9: Nonlinear kernels. The dashed curves are the classification results obtained using the $\chi^2$-kernel and the LIBSVM "precomputed kernel" functionality. We apply $k$ sign Cauchy projections and estimate $\rho_{\chi^2}$ assuming the collision probability is exactly $\frac{1}{\pi}\cos^{-1}\rho_{\chi^2}$, and then feed the estimated $\rho_{\chi^2}$ into LIBSVM, again using the "precomputed kernel" functionality.
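The 2-bit encoding described in Section 6 can be sketched as follows (a minimal illustration, not the paper's code): each sign becomes the pair "10" (positive) or "01" (negative), so the inner product of two encoded vectors counts the projections on which the signs agree, i.e., $k$ times one minus the empirical collision rate, which in turn approximates $k\big(1 - \frac{1}{\pi}\cos^{-1}\rho_{\chi^2}\big)$, the acos-$\chi^2$-kernel up to scaling:

```python
import numpy as np

def encode_signs(x):
    # Map each projected value to 2 bits: positive -> (1, 0), negative -> (0, 1).
    pos = (x > 0).astype(np.float64)
    return np.column_stack([pos, 1.0 - pos]).ravel()  # length 2k

rng = np.random.default_rng(0)
D, k = 100, 5000

u, v = rng.random(D), rng.random(D)

R = rng.standard_cauchy((D, k))  # Cauchy projections (alpha = 1)
fu, fv = encode_signs(u @ R), encode_signs(v @ R)

agreements = fu @ fv  # number of projections on which the two signs agree
collision_rate = 1.0 - agreements / k
print(collision_rate)  # empirical collision rate; a monotone proxy for rho_chi2
```

A linear classifier on the vectors `fu`, `fv` therefore operates (implicitly) on this sign-agreement kernel.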
7 Conclusion

The use of the $\chi^2$ similarity is widespread in machine learning, especially when features are generated from histograms, as is common in natural language processing and computer vision. Many prior studies [4, 10, 13, 2, 28, 27, 26] have shown the advantage of using the $\chi^2$ similarity compared to other measures such as the $l_2$ distance. However, for large-scale applications with ultra-high-dimensional datasets, using the $\chi^2$ similarity becomes challenging for practical reasons. Simply storing (and maneuvering) all the high-dimensional features would be difficult if there are a large number of observations. Computing all pairwise $\chi^2$ similarities can be time-consuming, and in fact we usually cannot materialize an all-pairwise similarity matrix even if there are merely $10^6$ data points. Furthermore, the $\chi^2$ similarity is nonlinear, making it difficult to take advantage of modern linear algorithms, which are known to be very efficient, e.g., [14, 25, 6, 3]. When data are generated in a streaming fashion, computing $\chi^2$ similarities without storing the original data is even more challenging. The method of $\alpha$-stable random projections ($0 < \alpha \le 2$) [11, 17] is popular for efficiently computing the $l_\alpha$ distances in massive (streaming) data. We propose sign stable random projections, storing only the signs (i.e., 1 bit) of the projected data. Obviously, the saving in storage is a significant advantage. Also, these bits offer an indexing capability which allows efficient search. For example, we can build hash tables using the bits to achieve sublinear-time near-neighbor search (although this paper does not focus on near-neighbor search). We can also build efficient linear classifiers using these bits, for large-scale high-dimensional machine learning applications. A crucial task in analyzing sign stable random projections is to study the probability of collision (i.e., when the two signs differ). We derive a theoretical bound on the collision probability which is exact when $\alpha = 2$.
The bound is fairly sharp for $\alpha$ close to 2. For $\alpha = 1$ (i.e., Cauchy random projections), we find the $\chi^2$ approximation to be significantly more accurate. In addition, for binary data, we analytically show that the errors from using the $\chi^2$ approximation are less than 0.0192. Experiments on real and simulated data confirm that our proposed $\chi^2$ approximations are very accurate. We are enthusiastic about the practicality of sign stable projections in learning and search applications. The previous idea of using the signs from normal random projections has been widely adopted in practice for approximating correlations. Given the widespread use of the $\chi^2$ similarity and the simplicity of our method, we expect the proposed method to be adopted by practitioners.

Future research Many interesting future research topics can be studied. (i) The processing cost of conducting stable random projections can be dramatically reduced by very sparse stable random projections [16]. This will make our proposed method even more practical. (ii) We can try to utilize more than just 1 bit of the projected data, i.e., we can study the general coding problem [19]. (iii) Another interesting direction is to study the use of sign stable projections for sparse signal recovery (compressed sensing) with stable distributions [21]. (iv) When $\alpha \to 0$, the collision probability becomes $\Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) = \frac{1}{2} - \frac{1}{2}\,\mathrm{Resemblance}$, which provides an elegant mechanism for computing the resemblance (of the binary-quantized data) in sparse data streams.

Acknowledgement The work of Ping Li is supported by NSF-III-1360971, NSF-Bigdata-1419210, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137. The work of Gennady Samorodnitsky is supported by ARO-W911NF-12-10385.

References

[1] http://googleresearch.blogspot.com/2010/04/lessons-learned-developing-practical.html. [2] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. What is an object? In CVPR, pages 73–80, 2010. [3] Leon Bottou.
http://leon.bottou.org/projects/sgd. [4] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055–1064, 1999. [5] Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002. [6] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008. [7] Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. Learning the structure of manifolds using random projections. In NIPS, Vancouver, BC, Canada, 2008. [8] Jerome H. Friedman, F. Baskett, and L. Shustek. An algorithm for finding nearest neighbors. IEEE Transactions on Computers, 24:1000–1006, 1975. [9] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of ACM, 42(6):1115–1145, 1995. [10] Matthias Hein and Olivier Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136–143, Barbados, 2005. [11] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of ACM, 53(3):307–323, 2006. [12] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, Dallas, TX, 1998. [13] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494–501, Amsterdam, Netherlands, 2007. [14] Thorsten Joachims. Training linear svms in linear time. In KDD, pages 217–226, Pittsburgh, PA, 2006. [15] Fuxin Li, Guy Lebanon, and Cristian Sminchisescu. A linear approximation to the χ2 kernel with geometric convergence. Technical report, arXiv:1206.4074, 2013. [16] Ping Li. 
Very sparse stable random projections for dimension reduction in lα (0 < α ≤2) norm. In KDD, San Jose, CA, 2007. [17] Ping Li. Estimators and tail bounds for dimension reduction in lα (0 < α ≤2) using stable random projections. In SODA, pages 10 – 19, San Francisco, CA, 2008. [18] Ping Li. Improving compressed counting. In UAI, Montreal, CA, 2009. [19] Ping Li, Michael Mitzenmacher, and Anshumali Shrivastava. Coding for random projections. 2013. [20] Ping Li, Art B Owen, and Cun-Hui Zhang. One permutation hashing. In NIPS, Lake Tahoe, NV, 2012. [21] Ping Li, Cun-Hui Zhang, and Tong Zhang. Compressed counting meets compressed sensing. 2013. [22] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1:117–236, 2 2005. [23] Noam Nisan. Pseudorandom generators for space-bounded computations. In STOC, 1990. [24] Gennady Samorodnitsky and Murad S. Taqqu. Stable Non-Gaussian Random Processes. Chapman & Hall, New York, 1994. [25] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for svm. In ICML, pages 807–814, Corvalis, Oregon, 2007. [26] Andrea Vedaldi and Andrew Zisserman. Efficient additive kernels via explicit feature maps. IEEE Trans. Pattern Anal. Mach. Intell., 34(3):480–492, 2012. [27] Sreekanth Vempati, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Generalized rbf feature maps for efficient detection. In BMVC, pages 1–11, Aberystwyth, UK, 2010. [28] Gang Wang, Derek Hoiem, and David A. Forsyth. Building text features for object image classification. In CVPR, pages 1367–1374, Miami, Florida, 2009. [29] Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas S. Huang, and Yihong Gong. Localityconstrained linear coding for image classification. In CVPR, pages 3360–3367, San Francisco, CA, 2010. [30] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. 
In ICML, pages 1113–1120, 2009. [31] Haiquan (Chuck) Zhao, Nan Hua, Ashwin Lall, Ping Li, Jia Wang, and Jun Xu. Towards a universal sketch for origin-destination network measurements. In Network and Parallel Computing, pages 201–213, 2011.
Transfer Learning in a Transductive Setting

Marcus Rohrbach, Sandra Ebert, Bernt Schiele
Max Planck Institute for Informatics, Saarbrücken, Germany
{rohrbach,ebert,schiele}@mpi-inf.mpg.de

Abstract

Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach, Propagated Semantic Transfer, combines three techniques. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically, we adapt a graph-based learning algorithm, so far only used for semi-supervised learning, to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.

1 Introduction

While supervised training is an integral part of building visual, textual, or multi-modal category models, more recently, knowledge transfer between categories has been recognized as an important ingredient to scale to a large number of categories as well as to enable fine-grained categorization.
This development reflects the psychological point of view that humans are able to generalize to novel¹ categories with only a few training samples [17, 1]. This has recently gained increased interest in the computer vision and machine learning literature, which looks at zero-shot recognition (with no training instances for a class) [11, 19, 9, 22, 16] and one- or few-shot recognition [29, 1, 21]. Knowledge transfer is particularly beneficial when scaling to large numbers of classes [23, 16], distinguishing fine-grained categories [6], or analyzing compositional activities in videos [9, 22]. Recognizing categories with no or only few labeled training instances is challenging. To improve existing transfer learning approaches, we exploit several sources of information. Our approach allows using (1) trained category models, (2) external knowledge, (3) instance similarity, and (4) labeled instances of the novel classes if available. More specifically, we learn category or attribute models based on labeled training data for known categories y (see also Figure 1) using supervised training. These trained models are then associated with the novel categories z using, e.g., expert-provided or automatically mined semantic relatedness (cyan lines in Figure 1). Similar to unsupervised learning [32, 28], our approach exploits similarities in the data space via a graph structure to discover dense regions that are associated with coherent categories or concepts (orange graph structure in Figure 1). However, rather than using the raw input space, we map our data into a semantic output space with the

¹We use "novel" throughout the paper to denote categories with no or few labeled training instances.
models trained on the known classes (pink arrow) to benefit from their discriminative knowledge. Given the uncertain predictions and the graph structure, we adapt semi-supervised label propagation [34, 33] to generate more reliable predictions. If labeled instances are available, they can be seamlessly added. Note that attribute or category models do not have to be retrained if novel classes are added, which is an important aspect, e.g., in a robotic scenario. The main contribution of this work is threefold. First, we propose a novel approach that extends semantic knowledge transfer to the transductive setting, exploiting similarities in the unlabeled data distribution. The approach allows zero-shot recognition but also smoothly integrates labels for novel classes (Section 3). Second, we improve the local neighborhood structure in the raw feature space by mapping the data into a low-dimensional semantic output space using the trained attribute and category models. Third, we validate our approach on three challenging datasets for two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition (Section 4).

Figure 1: Conceptual visualisation of our approach Propagated Semantic Transfer. Known categories y, novel categories z, instances x (colors denote predicted category affiliation). Qualitative results can be found in the supplemental material and on our website.
We also provide a discussion of related work (Section 2) and conclusions for future work (Section 5). The implementation of our Propagated Semantic Transfer and code to easily reproduce the results in this paper are available on our website.

2 Related work

Knowledge transfer or transfer learning aims to transfer information from learned models to changing or unknown data distributions while reducing the need and effort to collect new training labels. It refers to a variety of tasks, including domain adaptation [25] or sharing of knowledge and representations [30, 3] (a recent categorization can be found in [20]). In this work we focus on transferring knowledge from known categories with sufficient training instances to novel categories with limited training data. In the computer vision and machine learning literature this setting is normally referred to as zero-shot learning [11, 19, 24, 9, 16] if no instances of the test classes are available, and one- or few-shot learning [16, 9, 8] if one or a few instances are available for the novel classes. To recognize novel categories, zero-shot recognition uses additional information, typically in the form of an intermediate attribute representation [11, 9], direct similarity [24] between categories, or hierarchical structures of categories [35]. The information can either be manually specified [11, 9] or mined automatically from knowledge bases [24, 22]. Our approach builds on these works by using a semantic knowledge transfer approach as the first step. If one or a few training examples are available, these are typically used to select or adapt known models [1, 9, 26]. In contrast to related work, our approach uses the above-mentioned semantic knowledge transfer also when few training examples are available, to reduce the dependency on the quality of the samples. Also, we still use the labeled examples to propagate information.
Additionally, we exploit the neighborhood structure of the unlabeled instances to improve recognition for zero- and few-shot recognition. This is in contrast to previous works, with the exception of the zero-shot approach of [9], which learns a discriminative, latent attribute representation and applies self-training on the unseen categories. While conceptually similar, that approach differs from ours, as we explicitly use the local neighborhood structure of the unlabeled instances. A popular choice to integrate the local neighborhood structure of the data are graph-based methods. These have been used to discover a grouping by spectral clustering [18, 14] and to enable semi-supervised learning [34, 33]. Our setting is similar to the semi-supervised setting. To transfer labels from labeled to unlabeled data, label propagation is widely used [34, 33] and has been shown to work successfully in several applications [13, 7]. In this work, we extend transfer learning by considering the neighborhood structure of the novel classes. For this we adapt the well-known label propagation approach of [33]. We build a k-nearest-neighbor graph to capture the underlying manifold structure, as it has been shown to provide the most robust structure [15]. Nevertheless, the quality of the graph structure is key to the success of graph-based methods and strongly depends on the feature representation [5]. We thus improve the graph structure by replacing the noisy raw input space with the more compact semantic output space, which has been shown to improve recognition [26, 22]. To improve image classification with reduced training data, [4, 27] use attributes as an intermediate layer and incorporate unlabeled data; however, both works are in a classical semi-supervised learning setting similar to [5], while our setting is transfer learning. More specifically, [27] propose to bootstrap classifiers by adding unlabeled data. The bootstrapping is constrained by attributes shared across classes.
In contrast, we use attributes for transfer and exploit the similarity between instances of the novel classes. [4] automatically discover a discriminative attribute representation while incorporating unlabeled data. This notion of attributes is different from ours, as we want to use semantic attributes to enable transfer from other classes. Other directions to improve the quality of the intermediate representation include integrating metric learning [31, 16] or online methods [10], which we defer to future work.

3 Propagated Semantic Transfer (PST)

Our main objective is to robustly recognize novel categories by transferring knowledge from known classes and exploiting the similarity of the test instances. More specifically, our novel approach, called Propagated Semantic Transfer, consists of the following four components: we employ semantic knowledge transfer from known classes to novel classes (Sec. 3.1); we combine the transferred predictions with labels for the novel classes (Sec. 3.2); a similarity metric is defined to achieve a robust graph structure (Sec. 3.3); and we propagate this information within the novel classes (Sec. 3.4).

3.1 Semantic knowledge transfer

We first transfer knowledge using a semantic representation. This allows us to include external knowledge sources. We model the relation between a set of $K$ known classes $y_1, \dots, y_K$ and the set of $N$ novel classes $z_1, \dots, z_N$. Both sets are disjoint, i.e., $\{y_1, \dots, y_K\} \cap \{z_1, \dots, z_N\} = \emptyset$. We use two strategies to achieve this transfer: i) an attribute representation that employs an intermediate representation of attributes $a_1, \dots, a_M$, or ii) direct similarities calculated among the known object classes. Both work without any training examples for $z_n$, i.e., also for zero-shot recognition [11, 24].

i) Attribute representation. We use the Direct-Attribute-Prediction (DAP) model [11], using our formulation [24].
An intermediate level of $M$ attribute classifiers $p(a_m|x)$ is trained on the known classes $y_k$ to estimate the presence of attribute $a_m$ in the instance $x$. The subsequent knowledge transfer requires an external knowledge source that provides class-attribute associations $a_m^{z_n} \in \{0, 1\}$ indicating if attribute $a_m$ is associated with class $z_n$. Options for such association information are discussed in Section 4.2. Given this information, the probability of the novel class $z_n$ being present in the instance $x$ can then be estimated [24]:

$$p(z_n|x) \propto \prod_{m=1}^{M} \left(2\,p(a_m|x)\right)^{a_m^{z_n}}. \quad (1)$$

ii) Direct similarity. As an alternative to attributes, we can use the $U$ most similar training classes $y_1, \dots, y_U$ as a predictor for novel class $z_n$ given an instance $x$ [24]:

$$p(z_n|x) \propto \prod_{u=1}^{U} \left(2\,p(y_u|x)\right)^{y_u^{z_n}}, \quad (2)$$

where $y_u^{z_n}$ provides continuous normalized weights for the strength of the similarity between the novel class $z_n$ and the known class $y_u$ [24]. To comply with [23, 22], we slightly diverge from these models for the ImageNet and MPII Composites datasets by using a sum formulation instead of the probabilistic expression, i.e., for attributes $p(z_n|x) \propto \frac{\sum_{m=1}^{M} a_m^{z_n}\, p(a_m|x)}{\sum_{m=1}^{M} a_m^{z_n}}$, and for direct similarity $p(z_n|x) \propto \frac{\sum_{u=1}^{U} p(y_u|x)}{U}$. Note that in this case we do not obtain probability estimates; however, for label propagation the resulting scores are sufficient.

3.2 Combining transferred and ground truth labels

In the following we treat the multi-class problem as $N$ binary problems, where $N$ is the number of novel classes. For class $z_n$ the semantic knowledge transfer provides $p(z_n|x) \in [0, 1]$ for all instances $x$. We combine the best predictions per class, scaled to $[-1, 1]$, with labels $\hat{l}(z_n|x) \in \{-1, 1\}$ provided for some instances $x$ in the following way:

$$l(z_n|x) = \begin{cases} \gamma\,\hat{l}(z_n|x) & \text{if there is a label for } x \\ (1-\gamma)\left(2\,p(z_n|x) - 1\right) & \text{if } p(z_n|x) \text{ is among the top-}\delta \text{ fraction of predictions for } z_n \\ 0 & \text{otherwise.} \end{cases} \quad (3)$$

$\gamma$ provides a weighting between the true labels and the predicted labels.
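Eqs. (1) and (3) are straightforward to implement; below is a minimal NumPy sketch under our own naming, with Eq. (1) evaluated in log space for numerical stability (our choice, not specified in the paper):

```python
import numpy as np

def dap_scores(attr_probs, assoc):
    """Eq. (1) in log space: log p(z_n|x) = sum_m a_m^{z_n} log(2 p(a_m|x)) + const.

    attr_probs: (instances, M) attribute probabilities p(a_m|x).
    assoc:      (N, M) binary class-attribute matrix a_m^{z_n}.
    Returns per-class scores that are monotone in p(z_n|x).
    """
    return np.log(2.0 * attr_probs + 1e-12) @ assoc.T

def combine_labels(p, labels, gamma=0.98, delta=0.15):
    """Eq. (3): mix ground-truth labels (in {-1, +1}, np.nan where absent)
    with the top-delta fraction of transferred predictions scaled to [-1, 1].

    p: (instances, N) class probabilities p(z_n|x) in [0, 1].
    """
    out = np.zeros_like(p)
    thresh = np.quantile(p, 1.0 - delta, axis=0)   # per-class top-delta cutoff
    top = p >= thresh
    out[top] = (1.0 - gamma) * (2.0 * p[top] - 1.0)
    has_label = ~np.isnan(labels)
    out[has_label] = gamma * labels[has_label]     # true labels take precedence
    return out
```

With `gamma=0` and no labels this reduces to the zero-shot case described in the text.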
In the zero-shot case we only use predictions, i.e., $\gamma = 0$. The parameters $\delta, \gamma \in [0, 1]$ are chosen, like the remaining parameters, using cross-validation on the training set.

3.3 Similarity metric based on discriminative models for graph construction

We enhance transfer learning by also exploiting the neighborhood structure within novel classes, i.e., we assume a transductive setting. Graph-based semi-supervised learning incorporates this information by employing a graph structure over all instances. In this section we describe how to improve the graph structure, as it has a strong influence on the final results [5]. The k-NN graph is usually built on the raw feature descriptors of the data. Distances are computed for each pair $(x_i, x_j)$ by

$$d(x_i, x_j) = \sum_{d=1}^{D} |x_{i,d} - x_{j,d}|, \quad (4)$$

where $D$ is the dimensionality of the raw feature space. We note that the visual representation used for label propagation can be independent of the visual representation used for transfer. While the visual representation for transfer is required to provide good generalization abilities in conjunction with the employed supervised learning strategy, the visual representation for label propagation should induce a good neighborhood structure. Therefore we propose to use the more compact output space trained on the known classes, which we found to provide a much better structure; see Figure 5b. We thus compute the distances either on the $M$-dimensional vector of the attribute classifiers $p(a_m|x)$ with $M \ll D$, i.e.,

$$d(x_i, x_j) = \sum_{m=1}^{M} |p(a_m|x_i) - p(a_m|x_j)|, \quad (5)$$

or on the $K$-dimensional vector of object classifiers $p(y_k|x)$ with $K \ll D$, i.e.,

$$d(x_i, x_j) = \sum_{\kappa=1}^{K} |p(y_\kappa|x_i) - p(y_\kappa|x_j)|. \quad (6)$$

These distances are transformed into similarities with an RBF kernel: $s(x_i, x_j) = \exp\left(-\frac{d(x_i, x_j)}{2\sigma^2}\right)$. Finally, we construct a k-NN graph that is known for its good performance [15, 5], i.e.,

$$W_{ij} = \begin{cases} s(x_i, x_j) & \text{if } s(x_i, x_j) \text{ is among the } k \text{ largest similarities of } x_i \\ 0 & \text{otherwise.} \end{cases} \quad (7)$$
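A dense NumPy sketch of the graph construction of Eqs. (4)-(7); the names and the dense $O(n^2)$ formulation are ours, the paper does not prescribe an implementation:

```python
import numpy as np

def knn_graph(feats, k=50, sigma=1.0):
    """Build the k-NN similarity graph of Eqs. (4)-(7): pairwise L1 distances
    (here on classifier-score features, as in Eq. (5)/(6)), RBF similarities,
    then keep only each node's k largest similarities. Dense sketch, fine for
    a few thousand instances but not for truly large n.
    """
    n = feats.shape[0]
    d = np.abs(feats[:, None, :] - feats[None, :, :]).sum(axis=2)  # (n, n) L1
    s = np.exp(-d / (2.0 * sigma ** 2))
    np.fill_diagonal(s, 0.0)                     # no self-edges
    k = min(k, n - 1)
    W = np.zeros_like(s)
    idx = np.argsort(-s, axis=1)[:, :k]          # k most similar neighbors per row
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = s[rows, idx.ravel()]
    return W
```

As written, Eq. (7) yields a directed graph (each row keeps its own k neighbors); a symmetric variant can be obtained by averaging W with its transpose.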
Figure 2: AwA (left), ImageNet (middle), and MPII Composite Activities (right).

3.4 Label propagation with certain and uncertain labels

In this work, we build upon the label propagation of [33]. The k-NN graph with RBF kernel gives the weighted graph $W$ (see Section 3.3). Based on this graph we compute a normalized graph Laplacian, i.e., $S = D^{-1/2} W D^{-1/2}$, with the diagonal matrix $D$ summing up the weights in each row of $W$. Traditional semi-supervised label propagation uses sparse ground truth labels. In contrast, we have dense labels $l(z_n|x)$, which are a combination of uncertain predictions and certain labels (see Eq. 3), for all instances $\{x_1, \dots, x_i\}$ of the novel classes $z_n$. Therefore, we modify the initialization by setting

$$L^{(0)}_n = [l(z_n|x_1), \dots, l(z_n|x_i)] \quad (8)$$

for the $N$ novel classes. For each class, labels are propagated through this graph structure, converging to the following closed-form solution:

$$L^{*}_n = (I - \alpha S)^{-1} L^{(0)}_n \quad \text{for } 1 \le n \le N, \quad (9)$$

with the regularization parameter $\alpha \in (0, 1]$. The resulting framework makes use of the manifold structure underlying the novel classes to regulate the predictions from transfer learning. In general, the algorithm converges after a few iterations.

4 Evaluation

4.1 Datasets

We briefly outline the most important properties of the examined datasets in the following paragraphs and show example images/frames in Figure 2.

AwA The Animals with Attributes dataset (AwA) [11] is one of the first and most widely used datasets for semantic knowledge transfer and zero-shot recognition. It consists of 50 mammal classes, 40 training classes (24,395 images) and 10 disjoint test classes (6,180 images). We use the 6 provided pre-computed image descriptors, which are concatenated.

ImageNet The ImageNet 2010 challenge [2] requires large-scale and fine-grained recognition. It consists of 1000 image categories, which are split into 800 training and 200 test categories according to [23].
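The closed-form propagation of Eq. (9) amounts to a single linear solve shared across all label columns; a minimal sketch (our naming):

```python
import numpy as np

def propagate(W, L0, alpha=0.8):
    """Eq. (9): closed-form label propagation L* = (I - alpha*S)^(-1) L0,
    with S = D^(-1/2) W D^(-1/2) as in Section 3.4. L0 stacks the combined
    labels l(z_n|x) of Eq. (3), one column per novel class (Eq. (8)).
    """
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))  # guard isolated nodes
    S = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, L0)
```

For large graphs one would replace the direct solve by the iterative update the paper alludes to ("converges after a few iterations"), which only needs sparse matrix-vector products.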
We use the LLC and Fisher-Vector encoded SIFT descriptors provided by [23].

MPII Composite Activities The MPII Composite Cooking Activities dataset [22] distinguishes 41 basic cooking activities, such as prepare scrambled egg or prepare carrots, with video recordings of varying length from 1 to 41 minutes. It consists of a total of 256 videos; 44 are used for training the attribute representation and 170 as test data. We use the provided dense-trajectory representation and train/test split.

4.2 External knowledge sources and similarity measures

Our approach incorporates external knowledge to enable semantic knowledge transfer from known classes $y$ to unseen classes $z$. We use the class-attribute associations $a_m^{z_n}$ for attribute-based transfer (Equation 1) or the inter-class similarities $y_u^{z_n}$ for direct-similarity-based transfer (Equation 2) provided with the datasets. In the following we briefly outline the knowledge sources and measures.

Manual (AwA) AwA is accompanied by a set of 85 attributes and associations to all 40 training and all 10 test classes. The associations are provided by human judgments [11].

Hierarchy (ImageNet) For ImageNet, the manually constructed WordNet/ImageNet hierarchy is used to find the most similar of the 800 known classes (leaf nodes in the hierarchy). Furthermore, the 370 inner nodes can group several classes into attributes [23].

Figure 3: Results on the AwA dataset, see Sec. 4.3.1. (a) Zero-shot predictions with attributes and manually defined associations, in %:

Approach                            AUC    Acc.
DAP [11]                            81.4   41.4
IAP [11]                            80.0   42.2
Zero-Shot Learning [9]              n/a    41.3
PST (ours), on image descriptors    81.2   40.5
PST (ours), on attributes           83.7   42.7

(b) Few-shot: mean accuracy vs. number of training samples per class for PST (manually defined and Yahoo Image associations), LP with attribute classifiers (manual and Yahoo Image associations), and LP [5].
Linguistic knowledge bases (AwA, ImageNet) An alternative to manual associations is automatically mined associations. We use the provided similarity matrices, which are extracted using different linguistic similarity measures. They are either based on linguistic corpora, namely Wikipedia and WordNet, or on hit-count statistics of web search. One can distinguish basic web search (Yahoo Web), web search refined to part associations (Yahoo Holonyms), image search (Yahoo Image and Flickr Image), or the use of the summary snippets returned by web search (Yahoo Snippets). As ImageNet does not provide attributes, we mined 811 part-attributes from the associated WordNet hierarchy [23].

Script data (MPII Composites) To associate composite cooking activities such as preparing carrots with attributes of fine-grained activities (e.g., wash, peel), ingredients (e.g., carrots), and tools (e.g., knife, peeler), textual descriptions (Script data) of these activities were collected with AMT. The provided associations are computed based either on frequency statistics or, more discriminatively, on term frequency times inverse document frequency (tf*idf). Words in the text can be matched to labels either literally or by using WordNet expansion [22].

4.3 Results

To enable a direct comparison, we closely follow the experimental setups of the respective datasets [11, 23, 22]. On all datasets we train attribute or object classifiers (for direct similarity) with one-vs-all SVMs using Mean Stochastic Gradient Descent [23] and, for AwA and MPII Composites, with a $\chi^2$ kernel approximation as in [22]. To get more distinctive representations for label propagation, we train sigmoid functions [12] to estimate probabilities (on the training set for AwA/MPII Composites and on the validation set for ImageNet).
The hyper-parameters of our new Propagated Semantic Transfer algorithm are estimated using 5-fold cross-validation on the respective training set, splitting it into 80% known and 20% novel classes: we determine the parameters of our approach on the AwA training set and then fix them for all datasets to $\alpha = 0.8$, $\gamma = 0.98$, the number of neighbors $k = 50$, and 10 iterations for propagation, and use the L1 distance. Due to the different recognition precision of the datasets, we determine $\delta = 0.15/0.04$ separately for AwA/ImageNet. For MPII Composites we only do zero-shot recognition and use all samples, due to the limited number of samples of $\le 7$ per class. For few-shot recognition we report the mean over 10 runs where we pick examples randomly. The labeled examples are included in the evaluation to make it comparable to the zero-shot case. We validate our claim that the classifier output space induces a better neighborhood structure than the raw features by examining the k-nearest-neighbor (kNN) quality for both. In Figure 5b we compare the kNN quality on two datasets (see Sec. 4.1) for both feature representations. We observe that the attribute (Eq. 5) and object (Eq. 6) classifier-based representations (green and magenta dashed lines) achieve a significantly higher accuracy than the respective raw feature-based representations (Eq. 4, Fig. 5b, solid lines). We note that a good kNN quality is required but not sufficient for good propagation, as propagation also depends on the distribution and quality of the initial predictions. In the following, we compare the performance of the raw features with the attribute classifier representation.
Figure 4: Results on ImageNet, see Sec. 4.3.2. (a) Zero-shot top-5 accuracy (in %) of PST (ours) vs. [23] for transfer via the hierarchy (leaf and inner nodes), attributes (Wikipedia, Yahoo Holonyms, Yahoo Image, Yahoo Snippets), and direct similarity (Wikipedia, Yahoo Web, Yahoo Image, Yahoo Snippets). (b) Few-shot top-5 accuracy vs. number of training samples per class for PST with the hierarchy (inner nodes), PST with Yahoo Image direct similarity, and LP with object classifiers.

4.3.1 AwA - image classification

We start by comparing the performance of related work to our approach on AwA (see Sec. 4.1) in Figure 3. We first examine the zero-shot results in Figure 3a, where no training examples are available for the novel, or in this case unseen, classes. The best results on this dataset known to us are reported by [11]. On this 10-class zero-shot task they achieve 81.4% area under the ROC curve (AUC) and 41.4% multi-class accuracy (Acc) with DAP, averaged over the 10 test classes. Additionally we report results for Zero-Shot Learning [9], which achieves 41.3% Acc. Our Propagated Semantic Transfer, using the raw image descriptors to build a neighborhood structure, achieves 81.2% AUC and 40.5% Acc. However, when propagating on the 85-dimensional attribute space, we improve over [11] and [9] to 83.7% AUC and 42.7% Acc. To understand the difference in performance between the attribute and the image descriptor space, we examine the neighborhood quality used for propagating labels, shown in Figure 5b. The k-NN accuracy, measured on the ground truth labels, is significantly higher for the attribute space (green dashed curve) compared to the raw features (solid green). The information is more likely to be propagated to neighbors of the correct class in the attribute space, leading to a better final prediction. Another advantage is the significantly reduced computation and storage cost of building the k-NN graph, which scales linearly with the dimensionality. We believe that such an intermediate space, in this case represented by attributes, might provide a better neighborhood structure and could be used in other label-propagation tasks. Next we compare our approach in the few-shot setting, i.e.
we add labeled examples per class. In Figure 3b we compare our approach (PST) to two label propagation (LP) baselines. We first note that PST (red curves) moves seamlessly from zero-shot to few-shot, while traditional LP (blue and black curves) needs at least one training example. We first examine the three solid lines. The black curve is our best LP variant from [5], evaluated on the 10 test classes of AwA rather than all 50 as in [5]. We also compute LP in combination with the similarity metric based on the attribute classifier scores (blue curves). This transfer of the knowledge residing in the classifiers trained on the known classes already gives a significant improvement in performance. Our approach (red curve) additionally transfers labels from the known classes and improves further. Especially for few labels our approach benefits from the transfer, e.g. for 5 labeled samples per class PST achieves 43.9% accuracy, compared to 38.1% for LP with attribute classifiers and 32.2% for [5]. For fewer samples LP drops significantly, while our approach has nearly stable performance. For large amounts of training data, PST approaches LP, as expected (red vs. blue in Figure 3b). The dashed lines in Figure 3b provide results for automatically mined associations azn m between attributes and classes. It is interesting to note that these automatically mined associations achieve performance very close to the manually defined associations (dashed vs. solid). In this plot we use Yahoo Image as the basis for the semantic relatedness, but we also provide the improvements of PST for the other linguistic language sources in the supplemental material. 4.3.2 ImageNet - large scale image classification In this section we evaluate our Propagated Semantic Transfer approach on a large image classification task with 200 unseen image categories, using the setup proposed by [23]. We report the top-5 accuracy2 [2], which requires one of the best five predictions for an image to be correct.
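The top-5 accuracy measure just described (a prediction counts as correct if the ground-truth class is among the five highest-scoring classes, i.e. 1 − top-5 error [2]) can be sketched as follows; the names are illustrative:

```python
import numpy as np

def top5_accuracy(scores, labels):
    """scores: (n_samples, n_classes) array of class scores;
    labels: (n_samples,) ground-truth class indices."""
    # indices of the five highest-scoring classes per sample
    top5 = np.argsort(-scores, axis=1)[:, :5]
    hits = (top5 == labels[:, None]).any(axis=1)
    return hits.mean()
```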
Footnote 2: top-5 accuracy = 1 − top-5 error as defined in [2]. [Figure 5: (a) MPII Composite Activities, see Sec. 4.3.3 — mean AP (in %) of [22] vs. PST (ours) for the four script-data variants (freqs-literal, freqs-WN, tf*idf-literal, tf*idf-WN). (b) Accuracy of the majority vote from kNN (kNN-Classifier) on the test sets' ground truth, as a function of the number of nearest neighbours k, for AwA (attribute classifiers vs. raw features) and ImageNet (object classifiers vs. raw features).] Results are reported in Figure 4. For zero-shot recognition our PST (red bars) improves performance over [23] (black bars), as shown in Figure 4a. The largest improvement in top-5 accuracy is achieved for Yahoo Image with Attributes, which increases by 6.7% to 25.3%. The best absolute performance of 34.0% top-5 accuracy is achieved by using the inner nodes of the WordNet hierarchy for transfer, closely followed by Yahoo Web with direct similarity, achieving 33.1% top-5 accuracy. As on the AwA dataset, PST improves over the LP baseline for few-shot recognition (Figure 4b). 4.3.3 MPII composite - activity recognition In the last two subsections, we showed the benefit of Propagated Semantic Transfer on two image classification challenges. We now evaluate our approach on the video-activity recognition dataset MPII Composite Cooking Activities [22]. We compute mean AP using the provided features and follow the setup of [22]. In Figure 5a we compare our performance (red bars) to the results of zero-shot recognition without propagation [22] (black bars) for four variants of script-data based transfer. Our approach achieves significant performance improvements in all four cases, increasing mean AP by 11.1%, 10.7%, 12.0%, and 7.7% to 34.0%, 32.8%, 34.4%, and 29.2%, respectively.
This is especially impressive as it reaches the level of supervised training: for the same set of attributes (and very few, ≤7, training examples per class) [22] achieve 32.2% for an SVM, 34.6% for NN-classification, and up to 36.2% for a combination of NN with script data. We find these results encouraging, as it is much more difficult to collect and label training examples for this domain than for image classification, and the complexity and compositional nature of activities frequently requires recognizing unseen categories [9]. 5 Conclusion In this work we address a frequently occurring setting where there is a large amount of training data for some classes, but other, e.g. novel, classes have no or only few labeled training samples. We propose a novel approach named Propagated Semantic Transfer, which integrates semantic knowledge transfer with the visual similarities of unlabeled instances within the novel classes. We adapt a semi-supervised label-propagation approach by building the neighborhood graph on an expressive, low-dimensional semantic output space and by initializing it with predictions from knowledge transfer. We evaluated this approach on three diverse datasets for image and video-activity recognition, consistently improving performance over the state-of-the-art for zero-shot and few-shot prediction. Most notably, we achieve 83.7% AUC / 42.7% multi-class accuracy on the Animals with Attributes dataset for zero-shot recognition, scale to 200 unseen classes on ImageNet, and achieve up to 34.4% (+12.0%) mean AP on MPII Composite Activities, which is on the level of supervised training on this dataset. We show that our approach consistently improves performance independent of factors such as (1) the specific datasets and descriptors, (2) different transfer approaches: direct vs.
attributes, (3) types of transfer association: manually defined, linguistic knowledge bases, or script data, (4) domain: image and video-activity recognition, or (5) model: probabilistic vs. sum formulation. Acknowledgements. This work was partially funded by the DFG project SCHI989/2-2.
References
[1] E. Bart & S. Ullman. Single-example learning of novel classes using representation by similarity. In BMVC, 2005.
[2] A. Berg, J. Deng, & L. Fei-Fei. ILSVRC 2010. www.image-net.org/challenges/LSVRC/2010/, 2010.
[3] U. Blanke & B. Schiele. Remember and transfer what you have learned - recognizing composite activities based on activity spotting. In ISWC, 2010.
[4] J. Choi, M. Rastegari, A. Farhadi, & L. S. Davis. Adding unlabeled samples to categories by learned attributes. In CVPR, 2013.
[5] S. Ebert, D. Larlus, & B. Schiele. Extracting structures in image collections for object recognition. In ECCV, 2010.
[6] R. Farrell, O. Oza, V. Morariu, T. Darrell, & L. S. Davis. Birdlets: Subordinate categorization using volumetric primitives and pose-normalized appearance. In ICCV, 2011.
[7] R. Fergus, Y. Weiss, & A. Torralba. Semi-supervised learning in gigantic image collections. In NIPS, 2009.
[8] M. Fink. Object classification from a single example utilizing class relevance pseudo-metrics. In NIPS, 2004.
[9] Y. Fu, T. M. Hospedales, T. Xiang, & S. Gong. Learning multi-modal latent attributes. TPAMI, PP(99), 2013.
[10] P. Kankuekul, A. Kawewong, S. Tangruamsub, & O. Hasegawa. Online incremental attribute-based zero-shot learning. In CVPR, 2012.
[11] C. Lampert, H. Nickisch, & S. Harmeling. Attribute-based classification for zero-shot learning of object categories. TPAMI, PP(99), 2013.
[12] H.-T. Lin, C.-J. Lin, & R. C. Weng. A note on Platt's probabilistic outputs for support vector machines. Machine Learning, 2007.
[13] J. Liu, B. Kuipers, & S. Savarese. Recognizing human actions by attributes. In CVPR, 2011.
[14] U. Luxburg. A tutorial on spectral clustering. Stat Comput, 17(4):395–416, 2007.
[15] M. Maier, U. V. Luxburg, & M. Hein. Influence of graph construction on graph-based clustering measures. In NIPS, 2008.
[16] T. Mensink, J. Verbeek, F. Perronnin, & G. Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In ECCV, 2012.
[17] Y. Moses, S. Ullman, & S. Edelman. Generalization to novel images in upright and inverted faces. Perception, 25:443–461, 1996.
[18] A. Y. Ng, M. I. Jordan, & Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2002.
[19] M. Palatucci, D. Pomerleau, G. Hinton, & T. Mitchell. Zero-shot learning with semantic output codes. In NIPS, 2009.
[20] S. J. Pan & Q. Yang. A survey on transfer learning. TKDE, 22:1345–59, 2010.
[21] R. Raina, A. Battle, H. Lee, B. Packer, & A. Ng. Self-taught learning: Transfer learning from unlabeled data. In ICML, 2007.
[22] M. Rohrbach, M. Regneri, M. Andriluka, S. Amin, M. Pinkal, & B. Schiele. Script data for attribute-based recognition of composite activities. In ECCV, 2012.
[23] M. Rohrbach, M. Stark, & B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In CVPR, 2011.
[24] M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, & B. Schiele. What helps where - and why? Semantic relatedness for knowledge transfer. In CVPR, 2010.
[25] K. Saenko, B. Kulis, M. Fritz, & T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
[26] V. Sharmanska, N. Quadrianto, & C. H. Lampert. Augmented attribute representations. In ECCV, 2012.
[27] A. Shrivastava, S. Singh, & A. Gupta. Constrained semi-supervised learning using attributes and comparative attributes. In ECCV, 2012.
[28] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, & W. T. Freeman. Discovering object categories in image collections. In ICCV, 2005.
[29] S. Thrun. Is learning the n-th thing any easier than learning the first? In NIPS, 1996.
[30] A. Torralba, K. Murphy, & W. Freeman. Sharing visual features for multiclass and multiview object detection. In CVPR, 2004.
[31] D. Tran & A. Sorokin. Human activity recognition with metric learning. In ECCV, 2008.
[32] M. Weber, M. Welling, & P. Perona. Towards automatic discovery of object categories. In CVPR, 2000.
[33] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, & B. Schölkopf. Learning with local and global consistency. In NIPS, 2004.
[34] X. Zhu, Z. Ghahramani, & J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
[35] A. Zweig & D. Weinshall. Exploiting object hierarchy: Combining models from different category levels. In ICCV, 2007.
Solving inverse problem of Markov chain with partial observations Tetsuro Morimura IBM Research - Tokyo tetsuro@jp.ibm.com Takayuki Osogami IBM Research - Tokyo osogami@jp.ibm.com Tsuyoshi Idé IBM T.J. Watson Research Center tide@us.ibm.com Abstract The Markov chain is a convenient tool to represent the dynamics of complex systems such as traffic and social systems, where probabilistic transitions take place between internal states. A Markov chain is characterized by initial-state probabilities and a state-transition probability matrix. In the traditional setting, a major goal is to study properties of a Markov chain when those probabilities are known. This paper tackles an inverse version of the problem: we find those probabilities from partial observations at a limited number of states. The observations include the frequency of visiting a state and the rate of reaching a state from another. Practical examples of this task include traffic monitoring systems in cities, where we need to infer the traffic volume on a single link of a road network from a limited number of observation points. We formulate this task as a regularized optimization problem, which is efficiently solved using the notion of natural gradient. Using synthetic and real-world data sets, including city traffic monitoring data, we demonstrate the effectiveness of our method. 1 Introduction The Markov chain is a standard model for analyzing the dynamics of stochastic systems, including economic systems [29], traffic systems [11], social systems [12], and ecosystems [6]. There is a large body of literature on the problem of analyzing the properties of a Markov chain given its initial distribution and a matrix of transition probabilities [21, 26]. For example, there exist established methods for analyzing the stationary distribution and the mixing time of a Markov chain [23, 16].
In these traditional settings, the initial distribution and the transition-probability matrix are given a priori or directly estimated. Unfortunately, it is often impractical to directly measure or estimate the parameters (i.e., the initial distribution and the transition-probability matrix) of the Markov chain that models a particular system under consideration. For example, one can analyze a traffic system [27, 24], including how the vehicles are distributed across a city, by modeling the dynamics of vehicles as a Markov chain [11]. It is, however, difficult to directly measure the fraction of the vehicles that turn right or left at every intersection. The inverse problem of a Markov chain that we address in this paper is an inverse version of the traditional problem of analyzing a Markov chain with given input parameters. Namely, our goal is to estimate the parameters of a Markov chain from partial observations of the corresponding system. In the context of the traffic system, for example, we seek to find the parameters of a Markov chain, given the traffic volumes at stationary observation points and/or the rate of vehicles moving between
these points. [Figure 1: An inverse Markov chain problem. The traffic volume on every road is inferred from traffic volumes at limited observation points and/or the rates of vehicles transitioning between these points.] Such statistics can be reliably estimated from observations with web-cameras [27], automatic number plate recognition devices [10], or radio-frequency identification (RFID) [25], whose availability is, however, limited to a small number of observation points in general (see Figure 1). By estimating the parameters of a Markov chain and analyzing its stationary probability, one can infer the traffic volumes at unobserved points. The primary contribution of this paper is the first methodology for solving the inverse problem of a Markov chain when only observations at a limited number of stationary observation points are given. Specifically, we assume that the frequency of visiting a state and/or the rate of reaching a state from another are given for a small number of states. We formulate the inverse problem of a Markov chain as a regularized optimization problem. Then we can efficiently find a solution to the inverse problem of a Markov chain based on the notion of natural gradient [3]. The inverse problem of a Markov chain has been addressed in the literature [9, 28, 31], but the existing methods assume that sample paths of the Markov chain are available. Related work on inverse reinforcement learning [20, 1, 32] also assumes that sample paths are available. In the context of the traffic system, the sample paths correspond to probe-car data (i.e., sequences of GPS points). However, probe-car data is expensive and rarely available in public. Even when it is available, it is often limited to vehicles of a particular type, such as taxis, or to a particular region. On the other hand, stationary observation data is often less expensive and more obtainable. For instance, web-camera images are available even in developing countries such as Kenya [2].
The rest of this paper is organized as follows. In Section 2, preliminaries are introduced. In Section 3, we formulate the inverse problem of a Markov chain as a regularized optimization problem. A method for efficiently solving the inverse problem of a Markov chain is proposed in Section 4. An example of an implementation is provided in Section 5. Section 6 evaluates the proposed method with both artificial and real-world data sets, including one from traffic monitoring in a city. 2 Preliminaries A discrete-time Markov chain [26, 21] is a stochastic process, X = (X0, X1, . . . ), where Xt is a random variable representing the state at time t ∈ Z≥0. A Markov chain is defined by the triplet {X, pI, pT}, where X = {1, . . . , |X|} is a finite set of states and |X| ≥ 2 is the number of states. The function pI : X → [0, 1] specifies the initial-state probability, i.e., pI(x) ≜ Pr(X0 = x), and pT : X × X → [0, 1] specifies the state-transition probability from x to x′, i.e., pT(x′ | x) ≜ Pr(Xt+1 = x′ | Xt = x), ∀t ∈ Z≥0. Note that the state transition is conditionally independent of the past states given the current state, which is called the Markov property. Any Markov chain can be converted into another Markov chain, called a Markov chain with restart, by modifying the transition probability. There, the initial-state probability stays unchanged, but the state-transition probability is modified into p such that p(x′ | x) ≜ β pT(x′ | x) + (1 − β) pI(x′), (1) where β ∈ [0, 1) is a continuation rate of the Markov chain (Footnote 1: The rate β can depend on the current state x, so that β can be replaced with β(x) throughout the paper. For readability, we assume β is a constant.). In the limit of β → 1, this Markov chain with restart is equivalent to the original Markov chain. In the following, we refer to p as the (total) transition probability and to pT as a partial transition (or p-transition) probability. Our main targeted applications are (massive) multi-agent systems such as traffic systems.
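The restart construction of Eq. (1) is simply a convex combination of the original transition matrix and a rank-one matrix that restarts from pI; each row stays a probability distribution. A minimal numpy sketch (names illustrative):

```python
import numpy as np

def restart_transition_matrix(P_T, p_I, beta):
    """Total transition matrix of Eq. (1):
    p(x'|x) = beta * p_T(x'|x) + (1 - beta) * p_I(x')."""
    # every row gets the same restart distribution p_I
    return beta * P_T + (1.0 - beta) * np.outer(np.ones(len(p_I)), p_I)
```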
So, restarting a chain means that an agent's origin of a trip is decided by the initial distribution, and the trip ends at each time-step with probability 1 − β. We model the initial probability and the p-transition probability with parameters ν ∈ R^d1 and ω ∈ R^d2, respectively, where d1 and d2 are the numbers of those parameters. So we will denote those as pIν and pTω, respectively, and the total transition probability as pθ, where θ is the total model parameter, θ ≜ [ν⊤, ω⊤, β̃]⊤ ∈ R^d, where d = d1 + d2 + 1 and β̃ ≜ ς−1(β) with the inverse of the sigmoid function, ς−1. That is, Eq. (1) is rewritten as pθ(x′ | x) ≜ β pTω(x′ | x) + (1 − β) pIν(x′). (2) The Markov chain with restart can be represented as M(θ) ≜ {X, pIν, pTω, β}. Also, we make the following assumptions that are standard for the study of Markov chains and their variants [26, 7]. Assumption 1 The Markov chain M(θ) for any θ ∈ R^d is ergodic (irreducible and aperiodic). Assumption 2 The initial probability pIν and the p-transition probability pTω are differentiable everywhere with respect to θ ∈ R^d (Footnote 2). Under Assumption 1, there exists a unique stationary probability, πθ(·), which satisfies the balance equation: πθ(x′) = ∑_{x∈X} pθ(x′ | x) πθ(x), ∀x′ ∈ X. (3) This stationary probability is equal to the limiting distribution and independent of the initial state: πθ(x′) = lim_{t→∞} Pr(Xt = x′ | X0 = x, M(θ)), ∀x ∈ X. Assumption 2 indicates that the transition probability pθ is also differentiable for any state pair (x, x′) ∈ X × X with respect to any θ ∈ R^d. Finally, we define hitting probabilities for a Markov chain of indefinite horizon. The Markov chain is represented as M̃(θ) = {X, pTω, β}, which evolves according to the p-transition probability pTω, not to pθ, and terminates with probability 1 − β at every step. The hitting probability of a state x′ given x is defined as hθ(x′ | x) ≜ Pr(x′ ∈ X̃ | X0 = x, M̃(θ)), (4) where X̃ = (X̃0, . . . , X̃T) is a sample path of M̃(θ) until the stopping time, T.
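The stationary probability of Eq. (3) can be obtained numerically as the fixed point of π ← P⊤π, e.g. by power iteration. A minimal sketch (function name illustrative):

```python
import numpy as np

def stationary_distribution(P, n_iter=10_000, tol=1e-12):
    """Solve the balance equation pi = P^T pi by power iteration;
    P[x, x'] = p(x' | x) must be row-stochastic and ergodic."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform start
    for _ in range(n_iter):
        new = P.T @ pi
        if np.abs(new - pi).max() < tol:
            break
        pi = new
    return pi / pi.sum()
```

Under Assumption 1 (ergodicity), the iteration converges to the unique limiting distribution regardless of the starting point.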
3 Inverse Markov Chain Problem Here we formulate an inverse problem of the Markov chain M(θ). In the inverse problem, the model family M ∈ {M(θ) | θ ∈ R^d}, which may be subject to a transition structure as in the road network, is known or given a priori, but the model parameter θ is unknown. In Section 3.1, we define the inputs of the problem, which are associated with functions of the Markov chain. Objective functions for the inverse problem are discussed in Section 3.2. 3.1 Problem setting The inputs and output of our inverse problem of the Markov chain are as follows. • Inputs are the values measured at a portion of the states x ∈ Xo, where Xo ⊂ X and usually |Xo| ≪ |X|. The measured values include the frequency of visiting a state, f(x), x ∈ Xo. In addition, the rate of reaching a state from another, g(x, x′), might also be given for (x, x′) ∈ Xo × Xo, where g(x, x) is equal to 1. In the context of traffic monitoring, f(x) denotes the number of vehicles that went through an observation point, x; g(x, x′) denotes the number of vehicles that went through x and x′ in this order, divided by f(x). • Output is the estimated parameter θ of the Markov chain M(θ), which specifies the total transition probability function pθ in Eq. (2). (Footnote 2: We assume ∂/∂θi log pIν(x) = 0 when pIν(x) = 0, and an analogous assumption applies to pTω.) The first step of our formulation is to relate f and g to the Markov chain. Specifically, we assume that the observed f is proportional to the true stationary probability of the Markov chain: π∗(x) = c f(x), x ∈ Xo, (5) where c is an unknown constant that satisfies the normalization condition. We further assume that the observed reaching rate is equal to the true hitting probability of the Markov chain: h∗(x′ | x) = g(x, x′), (x, x′) ∈ Xo × Xo. (6) 3.2 Objective function Our objective is to find the parameter θ∗ such that πθ∗ and hθ∗ well approximate π∗ and h∗ in Eqs. (5) and (6).
We use the following objective function to be minimized, L(θ) ≜ γ Ld(θ) + (1 − γ) Lh(θ) + λ R(θ), (7) where Ld and Lh are cost functions measuring the quality of the approximations of π∗ and h∗, respectively. These are specified in the following subsections. The function R(θ) is the regularization term of θ, such as ||θ||₂² or ||θ||₁. The parameters γ ∈ [0, 1] and λ ≥ 0 balance the cost functions and the regularization term, and will be optimized by cross-validation. Altogether, our problem is to find the parameter θ∗ = arg min_{θ∈R^d} L(θ). 3.2.1 Cost function for the stationary probability function Because the constant c in Eq. (5) is unknown, we cannot, for example, minimize a squared error such as ∑_{x∈Xo} (π∗(x) − πθ(x))². Thus, we need to derive an alternative cost function of πθ that is independent of c. For Ld(θ), one natural choice might be a Kullback-Leibler (KL) divergence, L^KL_d(θ) ≜ ∑_{x∈Xo} π∗(x) log(π∗(x)/πθ(x)) = −c ∑_{x∈Xo} f(x) log πθ(x) + o, where o is a term independent of θ. The minimizer of L^KL_d(θ) is independent of c. However, minimizing L^KL_d will lead to a biased estimate. This is because L^KL_d is decreased by increasing ∑_{x∈Xo} πθ(x) while the ratios πθ(x)/πθ(x′), ∀x, x′ ∈ Xo, are unchanged. This implies that, because ∑_{x∈Xo} πθ(x) + ∑_{x∈(X\Xo)} πθ(x) = 1, minimizing L^KL_d has an unwanted side-effect of overvaluing ∑_{x∈Xo} πθ(x) and undervaluing ∑_{x∈(X\Xo)} πθ(x). Here we propose an alternative form of Ld that avoids this side-effect. It uses a logarithmic ratio of the stationary probabilities: Ld(θ) ≜ (1/2) ∑_{i∈Xo} ∑_{j∈Xo} ( log(π∗(i)/π∗(j)) − log(πθ(i)/πθ(j)) )² = (1/2) ∑_{i∈Xo} ∑_{j∈Xo} ( log(f(i)/f(j)) − log(πθ(i)/πθ(j)) )². (8) The log-ratio of two probabilities represents the difference of their information contents in the sense of information theory [17]. Thus this function can be regarded as a sum of squared errors between π∗(x) and πθ(x) over x ∈ Xo with respect to relative information contents. From a different point of view, Eq.
(8) follows from maximizing the likelihood of θ under the assumption that the observation “log f(i) − log f(j)” has Gaussian white noise N(0, ϵ²). This assumption is satisfied when f(i) has a log-normal distribution, LN(µi, (ϵ/√2)²), independently for each i, where µi is the true location parameter and the median of f(i) is equal to e^µi. 3.2.2 Cost function for the hitting probability function Unlike for Ld(θ), there are several options for Lh(θ). Examples of this cost function include the mean squared error and the mean absolute error. Here we use the following standard squared errors in log space, based on Eq. (6): Lh(θ) ≜ (1/2) ∑_{i∈Xo} ∑_{j∈Xo} ( log g(i, j) − log hθ(j | i) )². (9) Eq. (9) follows from maximizing the likelihood of θ under the assumption that the observation log g(i, j) has Gaussian white noise, as in the case of Ld(θ). 4 Gradient-based Approach Let us consider (local) minimization of the objective function L(θ) in Eq. (7). We adopt a gradient-descent approach for the problem, where the parameter θ is optimized by the following iteration, with the notation ∇θ L(θ) ≜ [∂L(θ)/∂θ1, . . . , ∂L(θ)/∂θd]⊤: θ_{t+1} = θ_t − η_t G^{-1}_{θt} { γ ∇θ Ld(θ_t) + (1 − γ) ∇θ Lh(θ_t) + λ ∇θ R(θ_t) }, (10) where η_t > 0 is an updating rate. The matrix G_{θt} ∈ R^{d×d}, called the metric of the parameter θ, is an arbitrary bounded positive definite matrix. When G_{θt} is set to the identity matrix of size d, I_d, the update formula in Eq. (10) becomes ordinary gradient descent. However, since the tangent space at a point of the manifold representing M(θ) is generally different from an orthonormal space with respect to θ [4], one can apply the idea of the natural gradient [3] to the metric Gθ, expecting to make the procedure more efficient. This is described in Section 4.1. The gradients of Ld and Lh in Eq. (10) are given as ∇θ Ld(θ) = ∑_{i∈Xo} ∑_{j∈Xo} ( log(f(i)/f(j)) − log(πθ(i)/πθ(j)) ) ( ∇θ log πθ(j) − ∇θ log πθ(i) ), ∇θ Lh(θ) = ∑_{i∈Xo} ∑_{j∈Xo} ( log g(i, j) − log hθ(j | i) ) ∇θ log hθ(j | i).
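A useful property of the log-ratio cost of Eq. (8) is that it is, by construction, independent of the unknown scale c in Eq. (5); pairwise log-ratios cancel any common factor. This is easy to verify numerically (a minimal sketch, with illustrative names):

```python
import numpy as np

def L_d(f_obs, pi_model):
    """Log-ratio cost of Eq. (8) over the observed states."""
    lf, lp = np.log(f_obs), np.log(pi_model)
    # pairwise differences of log-frequencies vs. log-probabilities
    diff = (lf[:, None] - lf[None, :]) - (lp[:, None] - lp[None, :])
    return 0.5 * np.sum(diff ** 2)
```

Scaling `f_obs` by any positive constant leaves the cost unchanged, and the cost is zero when the model's stationary probabilities are exactly proportional to the observed frequencies.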
In order to implement the update rule of Eq. (10), we need to compute the gradient of the logarithmic stationary probability, ∇θ log πθ, the hitting probability hθ, and its gradient ∇θ hθ. In Section 4.2, we describe how to compute them, which turns out to be quite non-trivial. 4.1 Natural gradient Usually, a parametric family of Markov chains, Mθ ≜ {M(θ) | θ ∈ R^d}, forms a manifold structure with respect to the parameter θ under information divergences such as the KL divergence, instead of the Euclidean structure. Thus the ordinary gradient, Eq. (10) with Gθ = I_d, does not properly reflect the differences in the sensitivities and the correlations between the elements of θ. Accordingly, the ordinary gradient is generally different from the steepest direction on the manifold, and the optimization process with the ordinary gradient often becomes unstable or falls into a learning plateau [5]. For efficient learning, we consider an appropriate Gθ based on the notion of the natural gradient (NG) [5]. The NG represents the steepest descent direction of a function b(θ) in a Riemannian space (Footnote 3) by −R^{-1}_θ ∇θ b(θ) when the Riemannian space is defined by the metric matrix Rθ. An appropriate Riemannian metric on a statistical model Y with parameters θ is known to be its Fisher information matrix (FIM, Footnote 4): ∑_y Pr(Y = y | θ) ∇θ log Pr(Y = y | θ) ∇θ log Pr(Y = y | θ)⊤. In our case, the joint probability pθ(x′|x) πθ(x) for x, x′ ∈ X fully specifies M(θ) at the steady state, due to the Markovian property. Thus we propose to use the following Gθ in the update rule of Eq. (10): Gθ = Fθ + σ I_d, (11) where Fθ is the FIM of pθ(x′|x) πθ(x), Fθ ≜ ∑_{x∈X} πθ(x) ( ∇θ log πθ(x) ∇θ log πθ(x)⊤ + ∑_{x′∈X} pθ(x′|x) ∇θ log pθ(x′|x) ∇θ log pθ(x′|x)⊤ ). The second term, with σ ≥ 0, in Eq. (11) is needed to make Gθ positive definite. Footnote 3: A parameter space is a Riemannian space if the parameter θ ∈ R^d is on a Riemannian manifold defined by a positive definite matrix called a Riemannian metric matrix, Rθ ∈ R^{d×d}.
The squared length of a small incremental vector ∆θ connecting θ to θ + ∆θ in a Riemannian space is given by ∥∆θ∥²_{Rθ} = ∆θ⊤ Rθ ∆θ. Footnote 4: The FIM is the unique metric matrix of the second-order Taylor expansion of the KL divergence, that is, ∑_y Pr(Y = y | θ) log( Pr(Y = y | θ) / Pr(Y = y | θ + ∆θ) ) ≃ (1/2) ∥∆θ∥²_{Fθ}. 4.2 Computing the gradient To derive an expression for computing ∇θ log πθ, we use the following notation for a vector and a matrix: πθ ≜ [πθ(1), . . . , πθ(|X|)]⊤ and (Pθ)_{x,x′} ≜ pθ(x′|x). Then the gradient of the logarithmic stationary probability with respect to θi is given by ∂/∂θi log πθ ≜ ∇θi log πθ = Diag(πθ)^{-1} (I_d − P⊤_θ + πθ 1⊤_d)^{-1} (∇θi P⊤_θ) πθ, (12) where Diag(a) is a diagonal matrix whose diagonal elements consist of the vector a, log a is the element-wise logarithm of a, and 1_d denotes a column vector of size d whose elements are all 1. In the remainder of this section, we prove Eq. (12) by using the following proposition. Proposition 1 ([7]) If A ∈ R^{d×d} satisfies lim_{K→∞} A^K = 0, then the inverse of (I − A) exists, and (I − A)^{-1} = lim_{K→∞} ∑_{k=0}^{K} A^k. Equation (3) is rewritten as πθ = P⊤_θ πθ. Note that πθ is equal to a normalized eigenvector of P⊤_θ whose eigenvalue is 1. By taking the partial derivative of Eq. (3) with respect to θi, we obtain Diag(πθ) ∇θi log πθ = (∇θi P⊤_θ) πθ + P⊤_θ Diag(πθ) ∇θi log πθ. Although this gives the following linear simultaneous equation for ∇θi log πθ, (I_d − P⊤_θ) Diag(πθ) ∇θi log πθ = (∇θi P⊤_θ) πθ, (13) the inverse of (I_d − P⊤_θ) Diag(πθ) does not exist. This comes from the fact that (I_d − P⊤_θ) Diag(πθ) 1_d = 0. So we add a term including 1⊤_d Diag(πθ) ∇θi log πθ = 1⊤_d ∇θi πθ = ∇θi{1⊤_d πθ} = 0 to Eq. (13), such that (I_d − P⊤_θ + πθ 1⊤_d) Diag(πθ) ∇θi log πθ = (∇θi P⊤_θ) πθ. The inverse of (I_d − P⊤_θ + πθ 1⊤_d) exists because of Proposition 1 and the fact that lim_{k→∞} (P⊤_θ − πθ 1⊤_d)^k = lim_{k→∞} (P⊤_θ)^k − πθ 1⊤_d = 0. The inverse of Diag(πθ) also exists, because πθ(x) is positive for every x ∈ X under Assumption 1. Hence we get Eq. (12).
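Eq. (12) can be checked numerically against finite differences on a small parameterized chain. In the sketch below (all names illustrative: a toy 2-state chain whose first row is parameterized through a sigmoid, with dP/dθ itself obtained by a central difference), the closed form of Eq. (12) matches a finite-difference derivative of log πθ:

```python
import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalized to sum 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def P_of(theta):
    # toy 2-state chain; theta parameterizes the first row
    s = 1.0 / (1.0 + np.exp(-theta))          # sigmoid
    return np.array([[s, 1.0 - s], [0.3, 0.7]])

def grad_log_pi(theta, eps=1e-6):
    # closed form of Eq. (12): Diag(pi)^-1 (I - P^T + pi 1^T)^-1 (dP^T) pi
    P = P_of(theta)
    pi = stationary(P)
    dP = (P_of(theta + eps) - P_of(theta - eps)) / (2 * eps)  # dP/dtheta
    n = len(pi)
    A = np.eye(n) - P.T + np.outer(pi, np.ones(n))
    return np.diag(1.0 / pi) @ np.linalg.solve(A, dP.T @ pi)
```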
To derive expressions for computing hθ and ∇θ log hθ, we use the following notation: hθ(x) ≜ [hθ(x | 1), . . . , hθ(x | |X|)]⊤ for the hitting probabilities in Eq. (4), and (P_{Tθ})_{x,x′} ≜ pTω(x′ | x) for the p-transition probabilities in Eq. (1). The hitting probabilities and their gradients with respect to θi can be computed in the following closed forms: hθ(x) = (I_{|X|} − β P^{\x}_{Tθ})^{-1} e^x_{|X|}, (14) ∇θi log hθ(x) = β Diag(hθ(x))^{-1} (I_{|X|} − β P^{\x}_{Tθ})^{-1} (∇θi P^{\x}_{Tθ}) hθ(x), (15) where e^x_{|X|} denotes a column vector of size |X| whose x'th element is 1 and all of whose other elements are zero. The matrix P^{\x}_{Tθ} is defined as (I_{|X|} − e^x_{|X|} e^{x⊤}_{|X|}) P_{Tθ}. We derive Eqs. (14) and (15) as follows. The hitting probabilities in Eq. (4) can be represented in the following recursive form: hθ(x′ | x) = 1 if x′ = x, and hθ(x′ | x) = β ∑_{y∈X} pTω(y | x) hθ(x′ | y) otherwise. This equation can be represented in matrix notation as hθ(x) = e^x_{|X|} + β P^{\x}_{Tθ} hθ(x). Because the inverse of (I_{|X|} − β P^{\x}_{Tθ}) exists by Proposition 1 and lim_{k→∞} (β P^{\x}_{Tθ})^k = 0, we get Eq. (14). In a similar way, one can prove Eq. (15). 5 Implementation To implement the proposed method, parametric models of the initial probability pIν and the p-transition probability pTω in Eq. (1) need to be specified. We provide intuitive models based on the logit function [8]. The initial probability is modeled as pIν(x) ≜ exp(sI(x; ν)) / ∑_{y∈X} exp(sI(y; ν)), (16) where sI(x; ν) is a state score function with parameter ν ≜ [νloc⊤, νglo⊤]⊤ ∈ R^{d1}, consisting of a local parameter νloc ∈ R^{|X|} and a global parameter νglo ∈ R^{d1−|X|}. It is defined as sI(x; ν) ≜ νloc_x + ϕI(x)⊤ νglo, (17) where ϕI(x) ∈ R^{d1−|X|} is a feature vector of a state x. In the case of the road network, a state corresponds to a road segment. Then ϕI(x) may, for example [18], be defined with indicators of whether there are particular types of buildings near the road segment x. We refer to the first term and the second term of the right-hand side in Eq.
(17) as a local preference and a global preference, respectively. If a simpler model is preferred, either of them can be omitted. Similarly, a p-transition probability model with the parameter ω ≜ [ωloc⊤, ωglo⊤_1, ωglo⊤_2]⊤ is given as pTω(x′|x) ≜ exp(sT(x, x′; ω)) / ∑_{y∈Xx} exp(sT(x, y; ω)) if (x, x′) ∈ X × Xx, and 0 otherwise, (18) where Xx is the set of states connected from x, and sT(x, x′; ω) is a state-to-state score function. It is defined as sT(x, x′; ω) ≜ ωloc_{(x,x′)} + ϕT(x′)⊤ ωglo_1 + ψ(x, x′)⊤ ωglo_2, (x, x′) ∈ X × Xx, where ωloc_{(x,x′)} is the element of ωloc (∈ R^{∑_{x∈X} |Xx|}) corresponding to the transition from x to x′, and ϕT(x) and ψ(x, x′) are feature vectors. For the road network, ϕT(x) may be defined based on the type of the road segment x, and ψ(x, x′) may be defined based on the angle between x and x′. These linear combinations with the global parameters ωglo_1 and ωglo_2 can represent drivers' preferences, such as how much the drivers prefer major roads or straight routes to others. Note that the pIν(x) and pTω(x′|x) presented in this section can be differentiated analytically. Hence, Fθ in Eq. (11), ∇θi log πθ in Eq. (12), and ∇θi hθ in Eq. (15) can be computed efficiently. 6 Experiments 6.1 Experiment on synthetic data To study the sensitivity of the performance of our algorithm to the ratio of observable states, we applied it to randomly synthesized inverse problems of 100-state Markov chains with a varying number of observable states, |Xo| ∈ {5, 10, 20, 35, 50, 70, 90}. The linkages between states were randomly generated in the same way as in [19]. The values of pI and pT were determined in two stages. First, the basic initial probabilities, pIν, and the basic transition probabilities, pTω, were determined based on Eqs. (16) and (18), where every element of ν, ω, ϕI(x), ϕT(x), and ψ(x, x′) was drawn independently from the normal distribution N(0, 1²).
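The logit-style models of Eqs. (16)–(18) are softmax distributions over state scores. A minimal sketch of the transition model in the style of Eq. (18), restricted to the states connected from x (the function name and the toy adjacency structure are illustrative):

```python
import numpy as np

def transition_probs(x, scores, adjacency):
    """p_T(x'|x) as a softmax of state-to-state scores over the
    states reachable from x (Eq. (18) style); zero elsewhere."""
    n = len(adjacency)
    p = np.zeros(n)
    nbrs = adjacency[x]                                   # states connected from x
    z = np.exp(scores[x, nbrs] - scores[x, nbrs].max())   # numerically stable softmax
    p[nbrs] = z / z.sum()
    return p
```

In the full model, `scores[x, x']` would be the linear score sT(x, x′; ω) built from the local and global parameters.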
Then we added noise to pIν and pTω (which would otherwise be ideal for our algorithm) using the Dirichlet distribution, such that $p_I = 0.7\, p_{I\nu} + 0.3\,\sigma$ with $\sigma \sim \mathrm{Dir}(0.3 \times \mathbf{1}_{|X|})$. Then we sampled the visiting frequencies f(x) and the hitting rates g(x, x′) for every x, x′ ∈ Xo from this synthesized Markov chain. We used Eqs. (16) and (18) for the models and Eq. (7) for the objective of our method. In Eq. (7), we set γ = 0.1 and $R(\theta) = \|\theta\|_2^2$, and λ was determined by cross-validation. We evaluated the quality of our solution with the relative mean absolute error (RMAE),

$\mathrm{RMAE} = \dfrac{1}{|X \setminus X_o|} \sum_{x \in X \setminus X_o} \dfrac{|f(x) - \hat{c}\,\pi_\theta(x)|}{\max\{f(x),\, 1\}},$

where ĉ is a scaling value given by $\hat{c} = \frac{1}{|X_o|} \sum_{x \in X_o} f(x)$. As a baseline method, we use Nadaraya-Watson kernel regression (NWKR) [8], whose kernel is computed based on the number of hops in the shortest path between two states. Note that the NWKR cannot use g(x, x′) as an input, because this is a regression problem on f(x). Hence, for a fair comparison, we also applied a variant of our method that does not use g(x, x′). Figure 2 (A) shows the mean and standard deviation of the RMAEs. The proposed method clearly outperforms the NWKR. This is mainly because the NWKR assumes that all propagations of an observation from a link to a connected link are equally weighted, whereas our method incorporates such weights in the transition probabilities.

6.2 Experiment on real-world traffic data

We tested our method on a city-wide traffic-monitoring task, as shown in Fig. 1. The goal is to estimate the traffic volume along an arbitrary road segment (or link of a network), given observed traffic volumes on a limited number of links, where a link corresponds to the state x of M(θ) and the traffic volume along x corresponds to f(x) in Eq. (5).
The traffic volumes along the observable links were reliably estimated from real-world web-camera images captured in Nairobi, Kenya [2, 15], while we did not use the hitting rates g(x, x′) here because they were unavailable.

Figure 2: (A) Comparison of RMAE on the synthetic task between our methods (with and without use of g) and the NWKR (baseline method). (B) Traffic volumes for a city-center map of Nairobi, Kenya; I: web-camera observations (colored), II: traffic volumes estimated by our method. (C) Comparison between the NWKR (RMAE: 1.01 ± 0.917) and our method (RMAE: 0.517 ± 0.669) on the real traffic-volume prediction problem.

Note that this task is similar to network tomography [27, 30] or link-cost prediction [32, 14]. However, unlike network tomography, we need to infer the traffic on all links instead of source-destination demands. Unlike link-cost prediction, our inputs are stationary observations instead of trajectories. Again, we use the NWKR as the baseline method. The road network and the web-camera observations are shown in Fig. 2 (B)-I. While the total number of links was 1,497, the number of links with observations was only 52 (about 3.5%). We used the parametric models of Section 5, where ϕT(x) ∈ [−1, 1] was set based on the road category of x such that primary roads have a higher value than secondary roads [22], and ψ(x, x′) ∈ [−1, 1] was the cosine of the angle between x and x′. However, we omitted the ϕI(x) term in Eq. (17).
Figure 2 (B)-II shows an example of our results, where the red and yellow roads are the most congested while traffic on the blue roads is flowing smoothly. The congested roads identified by our analysis are consistent with those in a local traffic survey report [13]. Figure 2 (C) shows a comparison between predicted and observed traffic volumes; the 45° line corresponds to perfect agreement between actual and predicted values. To evaluate accuracy, we employed leave-one-out cross-validation. We can see that the proposed method performs well. This is rather surprising, because only about 3.5 percent of the links are observed.

7 Conclusion

We have defined a novel inverse problem of a Markov chain, in which we infer the initial-state and transition probabilities from the limited information obtained by observing the Markov chain at a small number of states. We have proposed an effective objective function for this problem as well as an algorithm based on the natural gradient. Using real-world data, we have demonstrated that our approach is useful for a traffic-monitoring system that observes the traffic volume at a limited number of locations. From these observations the Markov chain model is inferred, which in turn can be used to deduce the traffic volume at any location. Surprisingly, even when only several percent of the locations are observed, the proposed method can successfully infer the traffic volume at unobserved locations. Further analysis of the proposed method is necessary to better understand its properties and effectiveness. In particular, our future work includes an analysis of model identifiability and empirical studies with other applications, such as logistics and economic system modeling.

Acknowledgments

The authors thank Dr. R. Morris, Dr. R. Raymond, and Mr. T. Katsuki for fruitful discussion.

References

[1] P. Abbeel and A. Y. Ng.
Apprenticeship learning via inverse reinforcement learning. In Proc. of International Conference on Machine Learning, 2004. [2] AccessKenya.com. http://traffic.accesskenya.com/. [3] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998. [4] S. Amari and H. Nagaoka. Methods of Information Geometry. Oxford University Press, 2000. [5] S. Amari, H. Park, and K. Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399–1409, 2000. [6] H. Balzter. Markov chain models for vegetation dynamics. Ecological Modelling, 126(2-3):139–154, 2000. [7] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001. [8] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. [9] H. H. Bui, S. Venkatesh, and G. West. On the recognition of abstract Markov policies. In Proc. of AAAI Conference on Artificial Intelligence, pages 524–530, 2000. [10] S. L. Chang, L. S. Chen, Y. C. Chung, and S. W. Chen. Automatic license plate recognition. IEEE Transactions on Intelligent Transportation Systems, pages 42–53, 2004. [11] E. Crisostomi, S. Kirkland, and R. Shorten. A Google-like model of road network dynamics and its application to regulation and control. International Journal of Control, 84(3):633–651, 2011. [12] M. Gamon and A. C. König. Navigation patterns from and to social media. In Proc. of AAAI Conference on Weblogs and Social Media, 2009. [13] J. E. Gonzales, C. C. Chavis, Y. Li, and C. F. Daganzo. Multimodal transport in Nairobi, Kenya: Insights and recommendations with a macroscopic evidence-based model. In Proc. of Transportation Research Board 90th Annual Meeting, 2011. [14] T. Idé and M. Sugiyama. Trajectory regression on road networks. In Proc. of AAAI Conference on Artificial Intelligence, pages 203–208, 2011. [15] T. Katsuki, T. Morimura, and T. Idé.
Bayesian unsupervised vehicle counting. In Technical Report. IBM Research, RT0951, 2013. [16] D. Levin, Y. Peres, and E. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008. [17] D. MacKay. Information theory, inference, and learning algorithms. Cambridge University Press, 2003. [18] T. Morimura and S. Kato. Statistical origin-destination generation with multiple sources. In Proc. of International Conference on Pattern Recognition, pages 283–290, 2012. [19] T. Morimura, E. Uchibe, J. Yoshimoto, and K. Doya. A generalized natural actor-critic algorithm. In Proc. of Advances in Neural Information Processing Systems, volume 22, 2009. [20] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. of International Conference on Machine Learning, 2000. [21] J. R. Norris. Markov Chains. Cambridge University Press, 1998. [22] OpenStreetMap. http://wiki.openstreetmap.org/. [23] C. C. Pegels and A. E. Jelmert. An evaluation of blood-inventory policies: A Markov chain application. Operations Research, 18(6):1087–1098, 1970. [24] J.A. Quinn and R. Nakibuule. Traffic flow monitoring in crowded cities. In Proc. of AAAI Spring Symposium on Artificial Intelligence for Development, 2010. [25] C. M. Roberts. Radio frequency identification (RFID). Computers & Security, 25(1):18–26, 2006. [26] S. M. Ross. Stochastic processes. John Wiley & Sons Inc, 1996. [27] S. Santini. Analysis of traffic flow in urban areas using web cameras. In Proc. of IEEE Workshop on Applications of Computer Vision, pages 140–145, 2000. [28] R. R. Sarukkai. Link prediction and path analysis using Markov chains. Computer Networks, 33(16):377–386, 2000. [29] G. Tauchen. Finite state Markov-chain approximations to univariate and vector autoregressions. Economics Letters, 20(2):177–181, 1986. [30] Y. Zhang, M. Roughan, C. Lund, and D. Donoho. An information-theoretic approach to traffic matrix estimation. In Proc. 
of Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 301–312. ACM, 2003. [31] J. Zhu, J. Hong, and J. G. Hughes. Using Markov chains for link prediction in adaptive Web sites. In Proc. of Soft-Ware 2002: Computing in an Imperfect World, volume 2311, pages 60–73. Springer, 2002. [32] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In Proc. of AAAI Conference on Artificial Intelligence, pages 1433–1438, 2008.
Wavelets on Graphs via Deep Learning Raif M. Rustamov & Leonidas Guibas Computer Science Department, Stanford University {rustamov,guibas}@stanford.edu Abstract An increasing number of applications require processing of signals defined on weighted graphs. While wavelets provide a flexible tool for signal processing in the classical setting of regular domains, the existing graph wavelet constructions are less flexible – they are guided solely by the structure of the underlying graph and do not take directly into consideration the particular class of signals to be processed. This paper introduces a machine learning framework for constructing graph wavelets that can sparsely represent a given class of signals. Our construction uses the lifting scheme, and is based on the observation that the recurrent nature of the lifting scheme gives rise to a structure resembling a deep auto-encoder network. Particular properties that the resulting wavelets must satisfy determine the training objective and the structure of the involved neural networks. The training is unsupervised, and is conducted similarly to the greedy pre-training of a stack of auto-encoders. After training is completed, we obtain a linear wavelet transform that can be applied to any graph signal in time and memory linear in the size of the graph. Improved sparsity of our wavelet transform for the test signals is confirmed via experiments both on synthetic and real data. 1 Introduction Processing of signals on graphs is emerging as a fundamental problem in an increasing number of applications [22]. Indeed, in addition to providing a direct representation of a variety of networks arising in practice, graphs serve as an overarching abstraction for many other types of data. 
High-dimensional data clouds such as a collection of handwritten digit images, volumetric and connectivity data in medical imaging, laser scanner acquired point clouds and triangle meshes in computer graphics – all can be abstracted using weighted graphs. Given this generality, it is desirable to extend the flexibility of classical tools such as wavelets to the processing of signals defined on weighted graphs. A number of approaches for constructing wavelets on graphs have been proposed, including, but not limited to, the CKWT [7], Haar-like wavelets [24, 10], diffusion wavelets [6], spectral wavelets [12], tree-based wavelets [19], average-interpolating wavelets [21], and separable filterbank wavelets [17]. However, all of these constructions are guided solely by the structure of the underlying graph, and do not directly take into consideration the particular class of signals to be processed. While this information can be incorporated indirectly when building the underlying graph (e.g. [19, 17]), such an approach does not fully exploit the degrees of freedom inherent in wavelet design. In contrast, a variety of signal-class-specific and adaptive wavelet constructions exist on images and multidimensional regular domains; see [9] and references therein. Bridging this gap is challenging because obtaining graph wavelets, let alone adaptive ones, is complicated by the irregularity of the underlying space. In addition, theoretical guidance for such adaptive constructions is lacking, as it remains largely unknown how the properties of graph wavelet transforms, such as sparsity, relate to the structural properties of graph signals and their underlying graphs [22]. The goal of our work is to provide a machine learning framework for constructing wavelets on weighted graphs that can sparsely represent a given class of signals.
Our construction uses the lifting scheme as applied to the Haar wavelets, and is based on the observation that the update and predict steps of the lifting scheme are similar to the encode and decode steps of an auto-encoder. From this point of view, the recurrent nature of the lifting scheme gives rise to a structure resembling a deep auto-encoder network. Particular properties that the resulting wavelets must satisfy, such as sparse representation of signals, local support, and vanishing moments, determine the training objective and the structure of the involved neural networks. The goal of achieving sparsity translates into minimizing a sparsity surrogate of the auto-encoder reconstruction error. Vanishing moments and locality can be satisfied by tying the weights of the auto-encoder in a special way and by restricting the receptive fields of neurons in a manner that incorporates the structure of the underlying graph. The training is unsupervised, and is conducted similarly to the greedy (pre-)training [13, 14, 2, 20] of a stack of auto-encoders. The advantages of our construction are three-fold. First, when no training functions are specified by the application, we can impose a smoothness prior and obtain a novel general-purpose wavelet construction on graphs. Second, our wavelets are adaptive to a class of signals, and after training we obtain a linear transform; this is in contrast to adapting to the input signal (e.g. by modifying the underlying graph [19, 17]), which effectively renders those transforms non-linear. Third, our construction provides efficient and exact analysis and synthesis operators and results in a critically sampled basis that respects the multiscale structure imposed on the underlying graph. The paper is organized as follows: in §2 we briefly overview the lifting scheme. Next, in §3 we provide a general overview of our approach, and fill in the details in §4. Finally, we present a number of experiments in §5.
2 Lifting scheme

The goal of wavelet design is to obtain a multiresolution [16] of L²(G) – the set of all functions/signals on graph G. Namely, a nested sequence of approximation spaces from coarse to fine of the form $V_1 \subset V_2 \subset \dots \subset V_{\ell_{\max}} = L^2(G)$ is constructed. Projecting a signal onto the spaces Vℓ provides better and better approximations with increasing level ℓ. Associated wavelet/detail spaces Wℓ satisfying $V_{\ell+1} = V_\ell \oplus W_\ell$ are also obtained. Scaling functions {φℓ,k} provide a basis for the approximation space Vℓ, and similarly wavelet functions {ψℓ,k} for Wℓ. As a result, for any signal f ∈ L²(G) on the graph and any level ℓ0 < ℓmax, we have the wavelet decomposition

$f = \sum_k a_{\ell_0,k}\,\varphi_{\ell_0,k} + \sum_{\ell=\ell_0}^{\ell_{\max}-1} \sum_k d_{\ell,k}\,\psi_{\ell,k}.$  (1)

The coefficients aℓ,k and dℓ,k appearing in this decomposition are called approximation (also, scaling) and detail (also, wavelet) coefficients, respectively. For simplicity, we use aℓ and dℓ to denote the vectors of all approximation and detail coefficients at level ℓ. Our construction of wavelets is based on the lifting scheme [23]. Starting with a given wavelet transform, which in our case is the Haar transform (HT), one can obtain lifted wavelets by applying the process illustrated in Figure 1 (left), starting with ℓ = ℓmax − 1 and aℓmax = f, and iterating down until ℓ = 1.

Figure 1: Lifting scheme: one step of the forward (left) and backward (right) transform. Here, aℓ and dℓ denote the vectors of all approximation and detail coefficients of the lifted transform at level ℓ. U and P are linear update and predict operators. HT and IHT are the Haar transform and its inverse.

At every level, the lifted coefficients aℓ and dℓ are computed by augmenting the Haar coefficients $\bar{a}_\ell$ and $\bar{d}_\ell$ (of the lifted approximation coefficients aℓ+1) as follows:

$a_\ell \leftarrow \bar{a}_\ell + U \bar{d}_\ell, \qquad d_\ell \leftarrow \bar{d}_\ell - P a_\ell,$

where update (U) and predict (P) are linear operators (matrices).
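One level of the forward and backward passes of Figure 1 can be sketched directly; the Haar transform itself is abstracted away here, and U and P are arbitrary matrices of compatible shape (toy values), which suffices to check invertibility:

```python
import numpy as np

rng = np.random.default_rng(0)
na, nd = 4, 3                      # numbers of approximation / detail coefficients
U = rng.normal(size=(na, nd))      # update operator
P = rng.normal(size=(nd, na))      # predict operator
a_bar = rng.normal(size=na)        # Haar approximation coefficients
d_bar = rng.normal(size=nd)        # Haar detail coefficients

# Forward lifting (Figure 1, left): a <- a_bar + U d_bar,  d <- d_bar - P a.
a = a_bar + U @ d_bar
d = d_bar - P @ a

# Backward lifting (Figure 1, right) undoes the two steps in reverse order,
# recovering the Haar coefficients exactly for ANY choice of U and P.
d_rec = d + P @ a
a_rec = a - U @ d_rec
assert np.allclose(a_rec, a_bar) and np.allclose(d_rec, d_bar)
```

The point of the sketch is that invertibility holds structurally, not because of any special property of U or P.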
Note that in adaptive wavelet designs the update and predict operators will vary from level to level, but for simplicity of notation we do not indicate this explicitly. This process is always invertible – the backward transform, with IHT being the inverse Haar transform, is depicted in Figure 1 (right) and allows perfect reconstruction of the original signal. While the wavelets and scaling functions are not explicitly computed during either the forward or backward transform, it is possible to recover them using the expansion of Eq. (1). For example, to obtain a specific scaling function φℓ,k, one simply sets all approximation and detail coefficients to zero except for aℓ,k = 1, and runs the backward transform.

3 Approach

For a given class of signals, our objective is to design wavelets that yield approximately sparse expansions in Eq. (1) – i.e. the detail coefficients are mostly small, with a tiny fraction of large coefficients. Therefore, we learn the update and predict operators that minimize some sparsity surrogate of the detail (wavelet) coefficients of given training functions $\{f^n\}_{n=1}^{n_{\max}}$. For a fixed multiresolution level ℓ and a training function $f^n$, let $\bar{a}^n_\ell$ and $\bar{d}^n_\ell$ be the Haar approximation and detail coefficient vectors of $f^n$ obtained at level ℓ (i.e. applied to $a^n_{\ell+1}$ as in Figure 1 (left)). Consider the minimization problem

$\{U, P\} = \arg\min_{U,P} \sum_n s(d^n_\ell) = \arg\min_{U,P} \sum_n s\bigl(\bar{d}^n_\ell - P(\bar{a}^n_\ell + U \bar{d}^n_\ell)\bigr),$  (2)

where s is some sparse penalty function. This can be seen as optimizing a linear auto-encoder with the encoding step given by $\bar{a}^n_\ell + U \bar{d}^n_\ell$ and the decoding step given by multiplication with the matrix P. Since we would like to obtain a linear wavelet transform, the linearity of the encode and decode steps is of crucial importance.
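A sketch of evaluating the objective of Eq. (2) for candidate operators U and P on toy coefficient vectors; for simplicity, the penalty s here is the plain absolute value rather than the smooth surrogate used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
na, nd, n_train = 4, 3, 5
a_bar = rng.normal(size=(n_train, na))   # Haar approx. coeffs, one row per training fn
d_bar = rng.normal(size=(n_train, nd))   # Haar detail coeffs, one row per training fn

def objective(U, P, a_bar, d_bar):
    """Sum over training functions of s(d^n), where
    d^n = d_bar^n - P (a_bar^n + U d_bar^n) and s = |.| (cf. Eq. 2)."""
    a = a_bar + d_bar @ U.T              # encode: a_bar + U d_bar
    d = d_bar - a @ P.T                  # residual = lifted detail coefficients
    return np.abs(d).sum()

# Zero operators correspond to no lifting at all: the details stay the Haar details.
U0, P0 = np.zeros((na, nd)), np.zeros((nd, na))
assert np.isclose(objective(U0, P0, a_bar, d_bar), np.abs(d_bar).sum())
```

The zero-initialization check mirrors the paper's observation that all-zero weights reproduce the original Haar wavelets.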
In addition to linearity and the special form of bias terms, our auto-encoders differ from commonly used ones in that we enforce sparsity on the reconstruction error rather than on the hidden representation – in our setting, the reconstruction errors correspond to detail coefficients. The optimization problem of Eq. (2) suffers from a trivial solution: by choosing an update matrix of large norm (e.g. a large coefficient times the identity matrix) and a predict operator equal to the inverse of the update, one can practically cancel the contribution of the bias terms, obtaining almost perfect reconstruction. Trivial solutions are a well-known problem in the context of auto-encoders, and an effective remedy is to tie the weights of the encode and decode steps by setting $U = P^\top$. This also has the benefit of decreasing the number of parameters to learn. We also follow a similar strategy and tie the weights of the update and predict steps, but the specific form of tying is dictated by the wavelet properties and will be discussed in §4.2. The training is conducted in a manner similar to the greedy pre-training of a stack of auto-encoders [13, 14, 2, 20]. Namely, we first train the update and predict operators at the finest level: here the inputs to the lifting step are the original training functions – this corresponds to ℓ = ℓmax − 1 and $a^n_{\ell+1} = f^n$ for all n in Figure 1 (left). After training of this finest level is completed, we obtain new approximation coefficients $a^n_\ell$, which are passed to the next level as the training functions, and this process is repeated until one reaches the coarsest level. The use of tied auto-encoders is motivated by their success in deep learning, revealing their capability to learn useful features from data under a variety of circumstances. The choice of the lifting scheme as the backbone of our construction is motivated by several observations.
First, every invertible 1D discrete wavelet transform can be factored into lifting steps [8], which makes lifting a universal tool for constructing multiresolutions. Second, the lifting scheme is always invertible and provides exact reconstruction of signals. Third, it affords a fast (linear time) and memory efficient (in-place) implementation once the update and predict operators are specified. We choose to apply lifting to Haar wavelets specifically because Haar wavelets are easy to define on any underlying space that can be hierarchically partitioned [24, 10]. Our use of the update-first scheme mirrors its common use for adaptive wavelet constructions in the image processing literature, which is motivated by its stability; see [4] for a thorough discussion.

4 Construction details

We consider a simple connected weighted graph G with vertex set V of size N. A signal on the graph is represented by a vector $f \in \mathbb{R}^N$. Let W be the N × N edge weight matrix (since there are no self-loops, $W_{ii} = 0$), and let S be the diagonal N × N matrix of vertex weights; if no vertex weights are given, we set $S_{ii} = \sum_j W_{ij}$. For a graph signal f, we define its integral over the graph as a weighted sum, $\int_G f = \sum_i S_{ii} f(i)$. We define the volume of a subset R of the graph's vertices by $Vol(R) = \int_R 1 = \sum_{i \in R} S_{ii}$. We assume that a hierarchical partitioning (not necessarily dyadic) of the underlying graph into connected regions is provided. We denote the regions at level ℓ = 1, ..., ℓmax by Rℓ,k; see the inset, where the three coarsest partition levels of a dataset are shown. For each region at levels ℓ = 1, ..., ℓmax − 1, we arbitrarily designate all except one of its children (i.e. regions at level ℓ + 1) as active regions. As will become clear, our wavelet construction yields one approximation coefficient aℓ,k for each region Rℓ,k, and one detail coefficient dℓ,k for each active region Rℓ+1,k at level ℓ + 1.
Note that if the partition is not dyadic, then at a given level ℓ the number of scaling coefficients (equal to the number of regions at level ℓ) will not be the same as the number of detail coefficients (equal to the number of active regions at level ℓ + 1). We collect all of the coefficients at the same level into vectors denoted aℓ and dℓ; to keep our notation lightweight, we refrain from using boldface for vectors.

4.1 Haar wavelets

Usually, the (unnormalized) Haar approximation and detail coefficients of a signal f are computed as follows. The coefficient $\bar{a}_{\ell,k}$ corresponding to region Rℓ,k equals the average of the function f on that region: $\bar{a}_{\ell,k} = Vol(R_{\ell,k})^{-1} \int_{R_{\ell,k}} f$. The detail coefficient $\bar{d}_{\ell,k}$ corresponding to an active region Rℓ+1,k is the difference between the averages at the region Rℓ+1,k and its parent region Rℓ,par(k), namely $\bar{d}_{\ell,k} = \bar{a}_{\ell+1,k} - \bar{a}_{\ell,\mathrm{par}(k)}$. For perfect reconstruction there is no need to keep detail coefficients for inactive regions, because these can be recovered from the scaling coefficient of the parent region and the detail coefficients of the sibling regions. In our setting, Haar wavelets are part of the lifting scheme, and so the coefficient vectors $\bar{a}_\ell$ and $\bar{d}_\ell$ at level ℓ need to be computed from the augmented coefficient vector aℓ+1 at level ℓ + 1 (cf. Figure 1 (left)). This is equivalent to computing a function's average at a given region from its averages at the children regions. As a result, we obtain the following formula:

$\bar{a}_{\ell,k} = Vol(R_{\ell,k})^{-1} \sum_{j:\,\mathrm{par}(j)=k} a_{\ell+1,j}\, Vol(R_{\ell+1,j}),$

where the summation is over all children regions of Rℓ,k. As before, the detail coefficient corresponding to an active region Rℓ+1,k is given by $\bar{d}_{\ell,k} = a_{\ell+1,k} - \bar{a}_{\ell,\mathrm{par}(k)}$. The resulting Haar wavelets are not normalized; when sorting wavelet/scaling coefficients we will multiply coefficients coming from level ℓ by $2^{-\ell/2}$.
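As a toy sketch of this level-ℓ Haar step — one parent region with three children, child 0 designated inactive (all volumes and coefficient values below are made up):

```python
import numpy as np

vol_children = np.array([2.0, 1.0, 1.0])   # Vol(R_{l+1,j}) for the three children
a_child = np.array([3.0, 5.0, 1.0])        # approximation coefficients a_{l+1,j}

# Parent average: volume-weighted mean of the children averages.
vol_parent = vol_children.sum()
a_parent = (a_child * vol_children).sum() / vol_parent

# Detail coefficients are kept only for the ACTIVE children (here, children 1 and 2).
d_active = a_child[1:] - a_parent

# Perfect reconstruction: the inactive child's coefficient is recoverable from
# the parent average and the active siblings' coefficients.
a0 = (vol_parent * a_parent - (vol_children[1:] * a_child[1:]).sum()) / vol_children[0]
assert np.isclose(a0, a_child[0])
```

This illustrates why dropping the inactive child's detail loses no information, which is what makes the basis critically sampled.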
4.2 Auto-encoder setup

The choice of the update and predict operators and their tying scheme is guided by a number of properties that wavelets need to satisfy. We discuss these requirements under separate headings.

Vanishing moments: The wavelets should have vanishing dual and primal moments – two independent conditions, due to the biorthogonality of our wavelets. In terms of the approximation and detail coefficients these can be expressed as follows: a) all detail coefficients of a constant function should be zero, and b) the integral of the approximation at any level of the multiresolution should be the same as the integral of the original function. Since these conditions are already satisfied by the Haar wavelets, we need to ensure that the update and predict operators preserve them. To be more precise, if aℓ+1 is a constant vector, then the Haar coefficients satisfy $\bar{a}_\ell = c\vec{1}$ and $\bar{d}_\ell = \vec{0}$; here c is some constant and $\vec{1}$ is a column vector of all ones. To satisfy a) after lifting, we need $d_\ell = \bar{d}_\ell - P(\bar{a}_\ell + U\bar{d}_\ell) = -P\bar{a}_\ell = -cP\vec{1} = \vec{0}$. Therefore, the rows of the predict operator should sum to zero: $P\vec{1} = \vec{0}$. To satisfy b), we need to preserve the first-order moment at every level ℓ by requiring $\sum_k a_{\ell+1,k} Vol(R_{\ell+1,k}) = \sum_k \bar{a}_{\ell,k} Vol(R_{\ell,k}) = \sum_k a_{\ell,k} Vol(R_{\ell,k})$. The first equality is already satisfied (due to the use of Haar wavelets), so we need to constrain our update operator. Introducing the diagonal matrix $A_c$ of the region volumes at level ℓ, we can write $0 = \sum_k a_{\ell,k} Vol(R_{\ell,k}) - \sum_k \bar{a}_{\ell,k} Vol(R_{\ell,k}) = \sum_k (U\bar{d}_\ell)_k\, Vol(R_{\ell,k}) = \vec{1}^\top A_c U \bar{d}_\ell$. Since this should hold for all $\bar{d}_\ell$, we must have $\vec{1}^\top A_c U = \vec{0}^\top$. Taking these two requirements into consideration, we impose the following constraints on the predict and update weights: $P\vec{1} = \vec{0}$ and $U = A_c^{-1} P^\top A_f$, where $A_f$ is the diagonal matrix of the active region volumes at level ℓ + 1. It is easy to check that $\vec{1}^\top A_c U = \vec{1}^\top A_c A_c^{-1} P^\top A_f = \vec{1}^\top P^\top A_f = (P\vec{1})^\top A_f = \vec{0}^\top A_f = \vec{0}^\top$ as required.
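Both constraints are easy to verify numerically; a sketch with arbitrary (toy) volume matrices and a predict matrix whose rows are centered so that they sum to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
nd, na = 3, 4                               # active regions at level l+1, regions at level l
P = rng.normal(size=(nd, na))
P -= P.mean(axis=1, keepdims=True)          # enforce P @ 1 = 0 (vanishing dual moment)
A_c = np.diag(rng.uniform(1, 2, size=na))   # toy region volumes at level l
A_f = np.diag(rng.uniform(1, 2, size=nd))   # toy active-region volumes at level l+1
U = np.linalg.inv(A_c) @ P.T @ A_f          # tied update operator U = A_c^{-1} P^T A_f

ones_c = np.ones(na)
assert np.allclose(P @ ones_c, 0)           # (a): details of constants vanish
assert np.allclose(ones_c @ A_c @ U, 0)     # (b): first moment preserved, 1^T A_c U = 0^T
```

The second assertion holds for any P with zero row sums, exactly as the chain of equalities above shows.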
We have introduced the volume matrix $A_f$ of regions at the finer level to make the update/predict matrices dimensionless (i.e. insensitive to the particular units in which volume is measured).

Locality: To make our wavelets and scaling functions localized on the graph, we need to constrain the update and predict operators in a way that disallows distant regions from updating or predicting each other's approximation/detail coefficients. Since the update is tied to the predict operator, we can limit ourselves to the latter. For a detail coefficient dℓ,k corresponding to the active region Rℓ+1,k, we only allow predictions that come from the parent region Rℓ,par(k) and the immediate neighbors of this parent region. Two regions of the graph are considered neighboring if their union is a connected graph. This can be seen as enforcing a sparsity structure on the matrix P, or as limiting the interconnections between the layers of neurons. As a result of this choice, it is not difficult to see that the resulting scaling functions φℓ,k and wavelets ψℓ,k will be supported in the vicinity of the region Rℓ,k. Larger supports can be obtained by allowing the use of second- and higher-order neighbors of the parent for prediction.

4.3 Optimization

A variety of ways of optimizing auto-encoders are available; we refer the reader to the recent paper [15] and references therein. In our setting, due to the relatively small size of the training set and the sparse inter-connectivity between the layers, an off-the-shelf L-BFGS¹ unconstrained smooth optimization package works very well. To make our problem unconstrained, we avoid imposing the equation $P\vec{1} = \vec{0}$ as a hard constraint; instead, in each row of P (which corresponds to some active region), the weight corresponding to the parent is eliminated. To obtain a smooth objective, we use the L1 norm with a soft absolute value $s(x) = \sqrt{\epsilon + x^2} \approx |x|$, where we set ϵ = 10⁻⁴.
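A minimal sketch of this smooth L1 surrogate and its derivative (the derivative is what a gradient-based optimizer such as L-BFGS consumes):

```python
import numpy as np

EPS = 1e-4

def soft_abs(x):
    """Smooth surrogate of |x|: s(x) = sqrt(eps + x^2)."""
    return np.sqrt(EPS + x ** 2)

def soft_abs_grad(x):
    """Derivative s'(x) = x / sqrt(eps + x^2); smooth at 0 and
    approaching sign(x) for |x| >> sqrt(eps)."""
    return x / np.sqrt(EPS + x ** 2)

assert abs(soft_abs(1.0) - 1.0) < 1e-4   # close to |x| away from the origin
assert soft_abs_grad(0.0) == 0.0         # no kink at the origin, unlike |x|
```

The small ϵ keeps the penalty close to the true L1 norm while removing the non-differentiability at zero that would break smooth optimization.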
The initialization is done by setting all of the weights equal to zero. This is meaningful because it corresponds to no lifting at all, and would reproduce the original Haar wavelets.

4.4 Training functions

When training functions are available, we use them directly. However, our construction can be applied even if training functions are not specified. In this case we choose smoothness as our prior and train the wavelets with a set of smooth functions on the graph – namely, scaled eigenvectors of the graph Laplacian corresponding to the smallest eigenvalues. More precisely, let D be the diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$. The graph Laplacian L is defined as $L = S^{-1}(D - W)$. We solve the symmetric generalized eigenvalue problem $(D - W)\xi = \lambda S \xi$ to compute the smallest eigen-pairs $\{\lambda_n, \xi_n\}_{n=0}^{n_{\max}}$. We discard the 0-th eigen-pair, which corresponds to the constant eigenvector, and use the functions $\{\xi_n/\lambda_n\}_{n=1}^{n_{\max}}$ as our training set. The inverse scaling by the eigenvalue is included because eigenvectors corresponding to larger eigenvalues are less smooth (cf. [1]), and so should be assigned smaller weights to achieve a smooth prior.

4.5 Partitioning

Since our construction is based on improving upon the Haar wavelets, their quality will have an effect on the final wavelets. As proved in [10], the quality of the Haar wavelets depends on the quality (balance) of the graph partitioning. From a practical standpoint, it is hard to achieve high-quality partitions on all types of graphs using a single algorithm. However, for the datasets presented in this paper, we find that the following approach, based on the spectral clustering algorithm of [18], works well.

¹Mark Schmidt, http://www.di.ens.fr/~mschmidt/Software/minFunc.html
Namely, we first embed the graph vertices into $\mathbb{R}^{n_{\max}}$ as follows: $i \mapsto (\xi_1(i)/\lambda_1,\, \xi_2(i)/\lambda_2,\, \dots,\, \xi_{n_{\max}}(i)/\lambda_{n_{\max}})$ for all i ∈ V, where $\{\lambda_n, \xi_n\}_{n=0}^{n_{\max}}$ are the eigen-pairs of the Laplacian as in §4.4, and $\xi_\cdot(i)$ is the value of the eigenvector at the i-th vertex of the graph. To obtain a hierarchical tree of partitions, we start with the graph itself as the root. At every step, a given region (a subset of the vertex set) of graph G is split into two children partitions by running the 2-means clustering algorithm (k-means with k = 2) on the above embedding restricted to the vertices of the given partition [24]. This process continues recursively in every obtained region, resulting in a dyadic partitioning except at the finest level ℓmax.

4.6 Graph construction for point clouds

Our problem setup started with a weighted graph and arrived at the Laplacian matrix L in §4.4. It is also possible to reverse this process, whereby one starts with the Laplacian matrix L and infers from it the weighted graph. This is a natural way of dealing with point clouds sampled from low-dimensional manifolds, a setting common in manifold learning. There are a number of ways of computing Laplacians on point clouds, see [5]; almost all of them fit into the above form $L = S^{-1}(D - W)$, and so they can be used to infer a weighted graph that can be plugged into our construction.

5 Experiments

Our goal is to experimentally investigate the constructed wavelets for multiscale behavior, meaningful adaptation to training signals, and sparse representation that generalizes to test signals.

Figure 2: Scaling (left) and wavelet (right) functions on the periodic interval.

For the first two objectives we visualize the scaling functions at different levels ℓ, because they provide insight into the signal approximation spaces Vℓ.
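The spectral embedding and recursive 2-means partitioning of §4.4–4.5 can be sketched as follows; this uses a plain Lloyd-style 2-means as a stand-in (a simplification; the exact algorithm of [18] is not reproduced here):

```python
import numpy as np

def spectral_embedding(W, S, n_eig):
    """Solve (D - W) xi = lambda S xi via the symmetric form S^{-1/2}(D - W)S^{-1/2};
    embed vertex i as (xi_n(i)/lambda_n)_{n>=1}, dropping the constant 0-th pair."""
    D = np.diag(W.sum(axis=1))
    s_inv_sqrt = np.diag(1.0 / np.sqrt(S))
    lam, u = np.linalg.eigh(s_inv_sqrt @ (D - W) @ s_inv_sqrt)
    xi = s_inv_sqrt @ u                     # generalized eigenvectors
    return xi[:, 1:n_eig + 1] / lam[1:n_eig + 1]

def two_means(points, n_iter=20):
    """Plain Lloyd's algorithm with k = 2, seeded by the two most distant points."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    centers = points[[i, j]]
    for _ in range(n_iter):
        label = np.linalg.norm(points[:, None] - centers[None], axis=-1).argmin(axis=1)
        for k in (0, 1):
            if (label == k).any():
                centers[k] = points[label == k].mean(axis=0)
    return label

def partition(emb, vertices, depth):
    """Recursively split a vertex set with 2-means; returns the leaf regions."""
    if depth == 0 or len(vertices) < 2:
        return [vertices]
    label = two_means(emb[vertices])
    if (label == 0).all() or (label == 1).all():   # degenerate split: stop early
        return [vertices]
    return (partition(emb, vertices[label == 0], depth - 1)
            + partition(emb, vertices[label == 1], depth - 1))

# Toy usage: a 12-vertex cycle graph with unit edge weights.
N = 12
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
S = W.sum(axis=1)
emb = spectral_embedding(W, S, 4)
leaves = partition(emb, np.arange(N), depth=2)
assert sorted(int(v) for leaf in leaves for v in leaf) == list(range(N))
```

Whatever the clustering outcome, the leaves always form a partition of the vertex set, which is the property the Haar construction needs.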
The generalization performance can be deduced from the comparison to Haar wavelets, because during training we modify the Haar wavelets so as to achieve a sparser representation of the training signals. We start with the case of a periodic interval, which is discretized as a cycle graph; 32 scaled eigenvectors (sines and cosines) are used for training. Figure 2 shows the resulting scaling and wavelet functions at level $\ell = 4$. Up to discretization errors, the wavelets and scaling functions at the same level are shifts of each other – showing that our construction is able to learn shift invariance from the training functions. Figure 3(a) depicts a graph representing the road network of Minnesota, with edges showing the major roads and vertices being their intersections. In our construction we employ unit weights on the edges and use 32 scaled eigenvectors of the graph Laplacian as training functions. The resulting scaling functions for regions containing the red vertex in Figure 3(a) are shown at different levels in Figure 3(b,c,d,e,f). The function values at the graph vertices are color-coded from smallest (dark blue) to largest (dark red). Note that the scaling functions are continuous and show multiscale spatial behavior. To test whether the learned wavelets provide a sparse representation of smooth signals, we synthetically generated 100 continuous functions using the xy-coordinates of the vertices (the coordinates have not been seen by the algorithm so far); Figure 3(g) shows one such function.

Figure 3: Our construction trained with the smooth prior on the network (a) yields the scaling functions at levels $\ell = 2, 4, 6, 8, 10$ (b–f). A sample continuous function (g) out of 100 total test functions. Better average reconstruction results (h) for our wavelets (Wav-smooth) indicate good generalization performance.
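The sparse-reconstruction evaluation used throughout the experiments (keep a fraction of the largest detail coefficients, always retain the coarse scaling coefficient, and measure the relative error) can be sketched as follows; the function name and the generic orthonormal-basis interface are our own:

```python
import numpy as np

def reconstruction_error(signal, basis, fraction):
    """Relative L2 error after keeping only the largest detail coefficients.

    basis: (n, n) orthonormal matrix whose columns are the scaling/wavelet
    functions; column 0 plays the role of the coarsest scaling function,
    which is always kept, as in the expansion with l0 = 1.
    """
    coeffs = basis.T @ signal                    # analysis transform
    detail = coeffs[1:]
    k = max(1, int(round(fraction * len(detail))))
    keep = np.argsort(np.abs(detail))[::-1][:k]  # largest-magnitude details
    sparse = np.zeros_like(coeffs)
    sparse[0] = coeffs[0]
    sparse[1 + keep] = detail[keep]
    recon = basis @ sparse                       # synthesis transform
    return np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

Averaging this quantity over the held-out test signals, for a range of fractions, produces curves like those in Figure 3(h).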
Figure 3(h) shows the average reconstruction error from the expansion of Eq. (1) with $\ell_0 = 1$, keeping a specified fraction of the largest detail coefficients. The improvement over the Haar wavelets shows that our model generalizes well to unseen signals. Next, we apply our approach to real-world graph signals. We use a dataset of average daily temperature measurements2 from meteorological stations located on the mainland US. The longitudes and latitudes of the stations are treated as coordinates of a point cloud, from which a weighted Laplacian is constructed using [5] with 5 nearest neighbors; the resulting graph is shown in Figure 4(a). The daily temperature data for the year 2012 gives us 366 signals on the graph; Figure 4(b) depicts one such signal. We use the signals from the first half of the year to train the wavelets, and test for sparse reconstruction quality on the second half of the year (and vice versa). Figure 4(c,d,e,f) depicts some of the scaling functions at a number of levels; note that the depicted scaling function at level $\ell = 2$ captures the rough temperature distribution pattern of the US. The average reconstruction error from a specified fraction of the largest detail coefficients is shown in Figure 4(g).

Figure 4: Our construction on the station network (a), trained with daily temperature data (e.g., (b)), yields the scaling functions (c,d,e,f). Reconstruction results (g) using our wavelets trained on data (Wav-data) and with the smooth prior (Wav-smooth). Results of semi-supervised learning (h).

As an application, we employ our wavelets for semi-supervised learning of the temperature distribution for a day from the temperatures at a subset of labeled graph vertices. The sought temperature
distribution is expanded as in Eq. (1) with $\ell_0 = 1$, and the coefficients are found by solving a least-squares problem using the temperature values at the labeled vertices. Since we expect the detail coefficients to be sparse, we impose a lasso penalty on them; to make the problem smaller, all detail coefficients at levels $\ell \ge 7$ are set to zero. We compare to Laplacian-regularized least squares [1] and the harmonic interpolation approach [26]. A hold-out set of 25 random vertices is used to assign all the regularization parameters. The experiment is repeated for each of the days (not used to learn the wavelets) with the number of labeled vertices ranging from 10 to 200. Figure 4(h) shows the errors averaged over all days; our approach achieves lower error rates than the competitors.

2 National Climatic Data Center, ftp://ftp.ncdc.noaa.gov/pub/data/gsod/2012/

Our final example serves two purposes – showing the benefits of our construction in a standard image-processing application and better demonstrating the nature of the learned scaling functions. Images can be seen as signals on a graph – pixels are the vertices and each pixel is connected to its 8 nearest neighbors. We consider all of the Extended Yale Face Database B [11] images (cropped and down-sampled to 32 × 32) as a collection of signals on a single underlying graph. We randomly split the collection into half for training our wavelets, and test their reconstruction quality on the remaining half. Figure 5(a) depicts a number of the obtained scaling functions at different levels (the rows correspond to levels $\ell = 4, 5, 6, 7, 8$) in various locations (columns). The scaling functions have a face-like appearance at coarser levels, and capture more detailed facial features at finer levels.

Figure 5: The scaling functions (a) resulting from training on a face images dataset. These wavelets (Wav-data) provide better sparse reconstruction quality than the CDF 9/7 wavelet filterbanks (b,c).
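The lasso-penalized least-squares fit used in the semi-supervised temperature experiment can be sketched with plain ISTA (iterative soft-thresholding); the function name, the oracle interface, and the choice of ISTA are ours, not the paper's:

```python
import numpy as np

def fit_coefficients(Phi, labeled, values, penalty, n_iter=500):
    """Lasso-penalized least squares for semi-supervised interpolation.

    Phi: (n, K) matrix whose columns are scaling/wavelet functions evaluated
    at all n vertices; labeled: indices of vertices with known values.
    Column 0 (the coarse scaling coefficient) is left unpenalized; the
    detail coefficients receive an l1 penalty.
    """
    A = Phi[labeled]                        # rows of observed vertices
    y = np.asarray(values, dtype=float)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the quadratic part
    c = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)            # gradient of 0.5*||Ac - y||^2
        c = c - step * grad
        # soft-threshold the detail coefficients only
        d = c[1:]
        c[1:] = np.sign(d) * np.maximum(np.abs(d) - step * penalty, 0.0)
    return Phi @ c                          # predicted signal at all vertices
```

The prediction at unlabeled vertices comes for free from the synthesis step `Phi @ c`.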
Note that the scaling functions show controllable multiscale spatial behavior. The quality of reconstruction from a sparse set of detail coefficients is plotted in Figure 5(b,c). Here again we consider the expansion of Eq. (1) with $\ell_0 = 1$, and reconstruct using a specified proportion of the largest detail coefficients. We also compare to reconstruction using the standard separable CDF 9/7 wavelet filterbanks from the bottom-most level; for both quality metrics, our wavelets trained on data perform better than CDF 9/7. The smoothly trained wavelets do not improve over the Haar wavelets, because the smoothness assumption does not hold for face images.

6 Conclusion

We have introduced an approach to constructing wavelets that takes into consideration structural properties of both graph signals and their underlying graphs. An interesting direction for future research would be to randomize the graph partitioning process or to use bagging over training functions in order to obtain a family of wavelet constructions on the same graph – leading to overcomplete dictionaries as in [25]. One can also introduce multiple lifting steps at each level, or even add non-linearities as is common with neural networks. Our wavelets are obtained by training a structure similar to a deep neural network; interestingly, the recent work of Mallat and collaborators (e.g. [3]) goes in the other direction and provides a wavelet interpretation of deep neural networks. Therefore, we believe that there are ample opportunities for future work at the interface between wavelets and deep neural networks.

Acknowledgments: We thank Jonathan Huang for discussions and especially for his advice regarding the experimental section. The authors acknowledge the support of NSF grants FODAVA 808515 and DMS 1228304, AFOSR grant FA9550-12-1-0372, ONR grant N00014-13-1-0341, a Google research award, and the Max Planck Center for Visual Computing and Communications.

References

[1] M. Belkin and P. Niyogi.
Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1-3):209–239, 2004.

[2] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, Cambridge, MA, 2007.

[3] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872–1886, 2013.

[4] R. L. Claypoole, G. Davis, W. Sweldens, and R. G. Baraniuk. Nonlinear wavelet transforms for image coding via lifting. IEEE Transactions on Image Processing, 12(12):1449–1459, 2003.

[5] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.

[6] R. R. Coifman and M. Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21(1):53–94, 2006.

[7] M. Crovella and E. D. Kolaczyk. Graph wavelets for spatial traffic analysis. In INFOCOM, 2003.

[8] I. Daubechies and W. Sweldens. Factoring wavelet transforms into lifting steps. Journal of Fourier Analysis and Applications, 4(3):245–267, 1998.

[9] M. N. Do and Y. M. Lu. Multidimensional filter banks and multiscale geometric representations. Foundations and Trends in Signal Processing, 5(3):157–264, 2012.

[10] M. Gavish, B. Nadler, and R. R. Coifman. Multiscale wavelets on trees, graphs and high dimensional data: Theory and applications to semi supervised learning. In ICML, pages 367–374, 2010.

[11] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.

[12] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.

[13] G. E. Hinton, S. Osindero, and Y.-W. Teh.
A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

[14] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.

[15] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep learning. In ICML, pages 265–272, 2011.

[16] S. Mallat. A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way. Academic Press, 3rd edition, 2008.

[17] S. K. Narang and A. Ortega. Multi-dimensional separable critically sampled wavelet filterbanks on arbitrary graphs. In ICASSP, pages 3501–3504, 2012.

[18] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.

[19] I. Ram, M. Elad, and I. Cohen. Generalized tree-based wavelet transform. IEEE Transactions on Signal Processing, 59(9):4199–4209, 2011.

[20] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1137–1144. MIT Press, Cambridge, MA, 2007.

[21] R. M. Rustamov. Average interpolating wavelets on point clouds and graphs. CoRR, abs/1110.2227, 2011.

[22] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.

[23] W. Sweldens. The lifting scheme: A construction of second generation wavelets. SIAM Journal on Mathematical Analysis, 29(2):511–546, 1998.

[24] A. D. Szlam, M. Maggioni, R. R. Coifman, and J. C. Bremer. Diffusion-driven multiscale analysis on manifolds and graphs: top-down and bottom-up constructions. In SPIE, volume 5914, 2005.

[25] X. Zhang, X. Dong, and P. Frossard.
Learning of structured graph dictionaries. In ICASSP, pages 3373–3376, 2012.

[26] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.
Stochastic Convex Optimization with Multiple Objectives

Mehrdad Mahdavi, Michigan State University, mahdavim@cse.msu.edu
Tianbao Yang, NEC Labs America, Inc., tyang@nec-labs.com
Rong Jin, Michigan State University, rongjin@cse.msu.edu

Abstract

In this paper, we are interested in the development of efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple objective optimization problem into a constrained optimization problem by choosing one function as the objective and trying to bound the other objectives by appropriate thresholds. We first examine a two-stage exploration-exploitation algorithm which first approximates the stochastic objectives by sampling and then solves a constrained stochastic optimization problem by the projected gradient method. This method attains a suboptimal convergence rate even under strong assumptions on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages the theory of the Lagrangian method in constrained optimization and attains the optimal convergence rate of $O(1/\sqrt{T})$ in high probability for general Lipschitz-continuous objectives.

1 Introduction

Although both stochastic optimization [17, 4, 18, 10, 26, 20, 22] and multiple objective optimization [9] are well-studied subjects in Operations Research and Machine Learning [11, 12, 24], much less is developed for stochastic multiple objective optimization, which is the focus of this work. Unlike multiple objective optimization, where we have access to the complete objective functions, in stochastic multiple objective optimization only stochastic samples of the objective functions are available for optimization.
Compared to the standard setup of stochastic optimization, the fundamental challenge of stochastic multiple objective optimization is how to make an appropriate tradeoff between the different objectives, given that we only have access to stochastic oracles for them. In particular, an algorithm for this setting has to balance conflicting objective functions and accommodate the uncertainty in the objectives. A simple approach toward stochastic multiple objective optimization is to linearly combine the multiple objectives with a fixed weight assigned to each objective. It converts stochastic multiple objective optimization into a standard stochastic optimization problem, and is guaranteed to produce Pareto-efficient solutions. The main difficulty with this approach is how to decide an appropriate weight for each objective, which is particularly challenging when the complete objective functions are unavailable. In this work, we consider an alternative formulation that casts multiple objective optimization into a constrained optimization problem. More specifically, we choose one of the objectives as the target to be optimized, and use the rest of the objectives as constraints in order to ensure that each of them stays below a specified level. Our assumption is that although the full objective functions are unknown, their desirable levels can be provided due to prior knowledge of the domain. Below, we provide a few examples that demonstrate the application of stochastic multiple objective optimization in the form of stochastic constrained optimization.

Robust Investment. Let $r \in \mathbb{R}^n$ denote the random returns of the $n$ risky assets, and $w \in \mathcal{W} \equiv \{w \in \mathbb{R}^n_+ : \sum_{i=1}^n w_i = 1\}$ denote the distribution of an investor's wealth over all assets. The return for an investment distribution is defined as $\langle w, r \rangle$. The investor needs to consider conflicting objectives such as rate of return, liquidity, and risk in maximizing his wealth [2].
Suppose that $r$ has an unknown probability distribution with mean vector $\mu$ and covariance matrix $\Sigma$. Then the target of the investor is to choose an optimal portfolio $w$ that lies on the mean-risk efficient frontier. In mean-variance theory [15], which trades off between the expected return (mean) and the risk (variance) of a portfolio, one is interested in minimizing the variance subject to budget constraints, which leads to a formulation like:

$$\min_{w \in \mathbb{R}^n_+,\ \sum_{i=1}^n w_i = 1} \ \langle w, \mathbb{E}[r r^\top] w \rangle \quad \text{subject to} \quad \mathbb{E}[\langle r, w \rangle] \ge \gamma.$$

Neyman-Pearson Classification. In the Neyman-Pearson (NP) classification paradigm (see, e.g., [19]), the goal is to learn a classifier from labeled training data such that the probability of a false negative is minimized while the probability of a false positive is kept below a user-specified level $\gamma \in (0, 1)$. Let the hypothesis class be a parametrized convex set $\mathcal{W} = \{w \mapsto \langle w, x \rangle : w \in \mathbb{R}^d, \|w\| \le R\}$, and for all $(x, y) \in \Xi \equiv \mathbb{R}^d \times \{-1, +1\}$ let the loss function $\ell : \mathcal{W} \times \Xi \mapsto \mathbb{R}_+$ be a non-negative convex function. While the goal of the classical binary classification problem is to minimize the risk $\min_{w \in \mathcal{W}} [L(w) = \mathbb{E}[\ell(w; (x, y))]]$, Neyman-Pearson classification targets

$$\min_{w \in \mathcal{W}} L^+(w) \quad \text{subject to} \quad L^-(w) \le \gamma,$$

where $L^+(w) = \mathbb{E}[\ell(w; (x, y)) \mid y = +1]$ and $L^-(w) = \mathbb{E}[\ell(w; (x, y)) \mid y = -1]$.

Linear Optimization with Stochastic Constraints. In many applications in economics, most notably in welfare and utility theory and in management, parameters are known only stochastically, and it is unreasonable to assume that the objective functions and the solution domain are deterministically fixed. These situations involve the challenging task of balancing both conflicting goals and random data concerning the uncertain parameters of the problem.
Mathematically, the goal in multi-objective linear programming with stochastic information is to solve:

$$\min_{w} \ [\langle c_1(\xi), w \rangle, \cdots, \langle c_K(\xi), w \rangle] \quad \text{subject to} \quad w \in \mathcal{W} = \{w \in \mathbb{R}^d_+ : A(\xi) w \le b(\xi)\},$$

where $\xi$ is the randomness in the parameters, $c_i,\ i \in [K]$ are the objective functions, and $A$ and $b$ formulate the stochastic constraints on the solution, with the randomness captured by $\xi$.

In this paper, we first examine two methods that try to eliminate either the multi-objective aspect or the stochastic nature of stochastic multiple objective optimization and reduce the problem to a standard convex optimization problem. We show that both methods fail to tackle the problem of stochastic multiple objective optimization in general and require strong assumptions on the stochastic objectives, which limits their applications to real-world problems. Having discussed these negative results, we propose an algorithm that can solve the problem optimally and efficiently. We achieve this by an efficient primal-dual stochastic gradient descent method that is able to attain an $O(1/\sqrt{T})$ convergence rate for all the objectives under the standard assumption of Lipschitz continuity of the objectives, which is known to be optimal (see, for instance, [3]). We note that there is a flurry of research on heuristics-based methods to address the multi-objective stochastic optimization problem (see, e.g., [8] and [1] for a recent survey of existing methods). However, in contrast to this study, most of these approaches do not have theoretical guarantees. Finally, we would like to distinguish our work from robust optimization [5] and online learning with long-term constraints [13]. Robust optimization was designed to deal with uncertainty within optimization systems. Although it provides a principled framework for dealing with stochastic constraints, it often ends up with non-convex optimization problems that are not computationally tractable. Online learning with long-term constraints generalizes online learning.
Instead of requiring the constraints to be satisfied by every solution generated by online learning, it allows the constraints to be satisfied by the entire sequence of solutions. However, unlike stochastic multiple objective optimization, in online learning with long-term constraints the constraint functions are fixed and known before the start of online learning.

Outline. The remainder of the paper is organized as follows. In Section 2 we establish the necessary notation and introduce the problem under consideration. Section 3 introduces the problem reduction methods and elaborates on their disadvantages. Section 4 presents our efficient primal-dual stochastic optimization algorithm. Finally, we conclude the paper with open questions in Section 5.

2 Preliminaries

Notation. Throughout this paper, we use the following notation. We use bold-face letters to denote vectors. We denote the inner product between two vectors $w, w' \in \mathcal{W}$ by $\langle w, w' \rangle$, where $\mathcal{W} \subseteq \mathbb{R}^d$ is a compact closed domain. For $m \in \mathbb{N}$, we denote by $[m]$ the set $\{1, 2, \cdots, m\}$. We only consider the $\ell_2$ norm throughout the paper. The ball with radius $R$ is denoted by $\mathcal{B} = \{w \in \mathbb{R}^d : \|w\| \le R\}$.

Statement of the Problem. In this work, we generalize online stochastic convex optimization to the case of multiple objectives. In particular, at each iteration, the learner is asked to present a solution $w_t$, which will be evaluated by multiple loss functions $f_t^0(w), f_t^1(w), \ldots, f_t^m(w)$. A fundamental difference between single- and multi-objective optimization is that for the latter it is not obvious how to evaluate the optimization quality. Since it is impossible to simultaneously minimize multiple loss functions, and in order to avoid complications caused by handling more than one objective, we choose one function as the objective and try to bound the other objectives by appropriate thresholds.
Specifically, the goal of OCO with multiple objectives becomes to minimize $\sum_{t=1}^T f_t^0(w_t)$ and at the same time keep the other objective functions below given thresholds, i.e.,

$$\frac{1}{T} \sum_{t=1}^T f_t^i(w_t) \le \gamma_i, \quad i \in [m],$$

where $w_1, \ldots, w_T$ are the solutions generated by the online learner and $\gamma_i$ specifies the level of loss that is acceptable for the $i$-th objective function. Since the general (i.e., fully adversarial) setup is challenging for online convex optimization even with two objectives [14], in this work we consider a simple scenario where all the loss functions $f_t^i(w),\ i \in [m]$ are i.i.d. samples from an unknown distribution [21]. We also note that our goal is NOT to find a Pareto-efficient solution (a solution is Pareto efficient if it is not dominated by any solution in the decision space). Instead, we aim to find a solution that (i) optimizes one selected objective, and (ii) satisfies all the other objectives with respect to the specified thresholds. We denote by $\bar{f}^i(w) = \mathbb{E}_t[f_t^i(w)],\ i = 0, 1, \ldots, m$ the expected loss function of the sampled function $f_t^i(w)$. In stochastic multiple objective optimization, we assume that we do not have direct access to the expected loss functions, and the only information available to the solver is through a stochastic oracle that returns a stochastic realization of the expected loss function at each call. We assume that there exists a solution $w$ strictly satisfying all the constraints, i.e., $\bar{f}^i(w) < \gamma_i,\ i \in [m]$. We denote by $w_*$ the optimal solution to the multiple objective optimization problem, i.e.,

$$w_* = \arg\min \left\{ \bar{f}^0(w) : \bar{f}^i(w) \le \gamma_i,\ i \in [m] \right\}. \tag{1}$$

Our goal is to efficiently compute a solution $\widehat{w}_T$ after $T$ trials that (i) obeys all the constraints, i.e., $\bar{f}^i(\widehat{w}_T) \le \gamma_i,\ i \in [m]$, and (ii) minimizes the objective $\bar{f}^0$ with respect to the optimal solution $w_*$, i.e., $\bar{f}^0(\widehat{w}_T) - \bar{f}^0(w_*)$.
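As a concrete instance of such stochastic objectives, the two conditional losses in the Neyman-Pearson example of Section 1 can be estimated empirically from a sample. A minimal sketch (hinge loss and all names are our own choices for illustration):

```python
import numpy as np

def conditional_hinge_losses(w, X, y):
    """Empirical estimates of L+(w) and L-(w) with the hinge loss.

    X: (n, d) features; y: (n,) labels in {-1, +1}. L+ averages the loss
    over positive examples only, L- over negatives, matching the
    Neyman-Pearson objective: minimize L+ subject to L- <= gamma.
    """
    margins = y * (X @ w)
    loss = np.maximum(0.0, 1.0 - margins)
    return loss[y == 1].mean(), loss[y == -1].mean()
```

Each evaluation on a fresh sample plays the role of one call to the stochastic oracle for the corresponding expected loss.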
For convenience of discussion, we refer to $f_t^0(w)$ and $\bar{f}^0(w)$ as the objective function, and to $f_t^i(w)$ and $\bar{f}^i(w)$ for all $i \in [m]$ as the constraint functions. Before discussing the algorithms, we first mention a few assumptions made in our analysis. We assume that the optimal solution $w_*$ belongs to $\mathcal{B}$. We also make the standard assumption that all the loss functions, including both the objective function and the constraint functions, are Lipschitz continuous, i.e., $|f_t^i(w) - f_t^i(w')| \le L \|w - w'\|$ for any $w, w' \in \mathcal{B}$.

3 Problem Reduction and its Limitations

Here we examine two algorithms that cope with the complexity of stochastic optimization with multiple objectives, and discuss some negative results which motivate the primal-dual algorithm presented in Section 4. The first method transforms a stochastic multi-objective problem into a stochastic single-objective optimization problem and then solves the latter by any stochastic programming approach. Alternatively, one can eliminate the randomness of the problem by estimating the stochastic objectives and transform the problem into a deterministic multi-objective problem.

3.1 Linear Scalarization with Stochastic Optimization

A simple approach to solving a stochastic optimization problem with multiple objectives is to eliminate the multi-objective aspect of the problem by aggregating the $m+1$ objectives into a single objective $\sum_{i=0}^m \alpha_i f_t^i(w_t)$, where $\alpha_i,\ i \in \{0, 1, \cdots, m\}$ is the weight of the $i$-th objective, and then solving the resulting single-objective stochastic problem by stochastic optimization methods. This approach is generally known as the weighted-sum or scalarization method [1]. Although this naive idea considerably simplifies the computational aspect of the problem, it is unfortunately difficult to decide the weight for each objective such that the specified levels for the different objectives are obeyed.
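A minimal sketch of this scalarization baseline, reduced to projected stochastic gradient descent on the weighted sum (the oracle interface `sample_grads` is hypothetical, introduced only for illustration):

```python
import numpy as np

def scalarized_sgd(sample_grads, alphas, w0, eta, T, radius):
    """Projected SGD on the weighted sum of m+1 stochastic objectives.

    sample_grads(w, t) returns a list of m+1 stochastic gradients, one per
    objective, at iterate w; alphas are the fixed scalarization weights.
    Iterates are projected onto the Euclidean ball of the given radius,
    and the averaged iterate is returned.
    """
    w = w0.copy()
    avg = np.zeros_like(w0)
    for t in range(T):
        g = sum(a * gi for a, gi in zip(alphas, sample_grads(w, t)))
        w = w - eta * g
        nrm = np.linalg.norm(w)
        if nrm > radius:            # projection onto the ball B
            w = w * (radius / nrm)
        avg += w
    return avg / T
```

Note what this sketch cannot do: there is no mechanism for steering the individual constraint levels, which is exactly the difficulty discussed above.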
Beyond the hardness of optimally determining the weights of the individual functions, it is also unclear how to bound the sub-optimality of the final solution for the individual objective functions.

3.2 Projected Gradient Descent with Estimated Objective Functions

The main challenge of the proposed problem is that the expected constraint functions $\bar{f}^i(w)$ are not given. Instead, only a sampled function is provided at each trial $t$. Our naive approach is to replace the expected constraint function $\bar{f}^i(w)$ with its empirical estimate based on the sampled objective functions. This approach converts the problem of stochastically optimizing multiple objectives into ordinary online convex optimization with complex projections, and therefore it can be solved by projected gradient descent. More specifically, at trial $t$, given the current solution $w_t$ and the received loss functions $f_t^i(w),\ i = 0, 1, \ldots, m$, we first estimate the constraint functions as

$$\widehat{f}_t^i(w) = \frac{1}{t} \sum_{k=1}^{t} f_k^i(w), \quad i \in [m],$$

and then update the solution by $w_{t+1} = \Pi_{\mathcal{W}_t}(w_t - \eta \nabla f_t^0(w_t))$, where $\eta > 0$ is the step size, $\Pi_{\mathcal{W}}(w) = \arg\min_{z \in \mathcal{W}} \|z - w\|$ projects a solution $w$ onto the domain $\mathcal{W}$, and $\mathcal{W}_t$ is an approximate domain given by $\mathcal{W}_t = \{w : \widehat{f}_t^i(w) \le \gamma_i,\ i \in [m]\}$. One problem with the above approach is that although it is feasible to satisfy all the constraints based on the true expected constraint functions, there is no guarantee that the approximate domain $\mathcal{W}_t$ is non-empty. One way to address this issue is to estimate the expected constraint functions by burning in the first $bT$ trials, where $b \in (0, 1)$ is a constant that needs to be adjusted to obtain the optimal performance, and to keep the estimated constraint functions unchanged afterwards. Given the sampled functions $f_1^i, \ldots, f_{bT}^i$ received in the first $bT$ trials, we compute $\widehat{f}^i(w) = \frac{1}{bT} \sum_{t=1}^{bT} f_t^i(w),\ i \in [m]$, and the approximate domain $\mathcal{W}' = \{w : \widehat{f}^i(w) \le \gamma_i + \hat{\gamma}_i,\ i = 1, \ldots$
$, m\}$, where $\hat{\gamma}_i > 0$ is a relaxed constant introduced to ensure that, with high probability, the approximate domain is non-empty provided that the original domain $\mathcal{W}$ is non-empty. To ensure the correctness of the above approach, we need to establish some kind of uniform (strong) convergence assumption to make sure that the solutions obtained by projection onto the estimated domain $\mathcal{W}'$ will be close to the true domain $\mathcal{W}$ with high probability. It turns out that the following assumption ensures the desired property.

Assumption 1 (Uniform Convergence). Let $\widehat{f}^i(w),\ i = 0, 1, \cdots, m$ be the estimated functions obtained by averaging over $bT$ i.i.d. samples of $\bar{f}^i(w),\ i \in [m]$. We assume that, with high probability,

$$\sup_{w \in \mathcal{W}} \left| \widehat{f}^i(w) - \bar{f}^i(w) \right| \le O([bT]^{-q}), \quad i = 0, 1, \cdots, m,$$

where $q > 0$ decides the convergence rate.

It is straightforward to show that, under Assumption 1, with high probability, for any $w \in \mathcal{W}$ we have $w \in \mathcal{W}'$ with appropriately chosen relaxation constants $\hat{\gamma}_i,\ i \in [m]$. Using the estimated domain $\mathcal{W}'$, for trial $t \in [bT + 1, T]$ we update the solution by $w_{t+1} = \Pi_{\mathcal{W}'}(w_t - \eta \nabla f_t^0(w_t))$. There are, however, several drawbacks to this naive approach. Since the first $bT$ trials are used for estimating the constraint functions, only the last $(1-b)T$ trials are used for searching for the optimal solution. The total violation of an individual constraint function over the last $(1-b)T$ trials, given by $\sum_{t=bT+1}^{T} \bar{f}^i(w_t)$, is $O((1-b) b^{-q} T^{1-q})$, where each of the $(1-b)T$ trials receives a violation of $O([bT]^{-q})$. Similarly, following the conventional analysis of online learning [26], we have $\sum_{t=bT+1}^{T} (f_t^0(w_t) - f_t^0(w_*)) \le O(\sqrt{(1-b)T})$. Using the same trick as in [13], to obtain a solution with zero violation of the constraints we will have a regret bound of $O((1-b) b^{-q} T^{1-q} + \sqrt{(1-b)T})$, which yields a convergence rate of $O(T^{-1/2} + T^{-q})$; this could be worse than the optimal rate $O(T^{-1/2})$ when $q < 1/2$.
Additionally, this approach requires memorizing the constraint functions of the first $bT$ trials. This is in contrast to the typical assumption of online learning, where only the solution is memorized.

Remark 1. We finally remark on the uniform convergence assumption, which holds when the constraint functions are linear [25], but unfortunately does not hold for general convex Lipschitz functions. In particular, one can easily exhibit examples where there is no uniform convergence for stochastic convex Lipschitz functions in infinite-dimensional spaces [21]. Without the uniform convergence assumption, the approximate domain $\mathcal{W}'$ may depart from the true $\mathcal{W}$ significantly at some unknown point, which makes the above approach fail for general convex objectives.

To address these limitations, and in particular the dependence on the uniform convergence assumption, we present an algorithm that does not require a projection when updating the solution and does not impose any additional assumption on the stochastic functions beyond the standard Lipschitz continuity assumption. We note that our result is closely related to recent studies of learning from the viewpoint of optimization [23], which state that solutions found by stochastic gradient descent can be statistically consistent even when the uniform convergence theorem does not hold.

Algorithm 1 Stochastic Primal-Dual Optimization with Multiple Objectives
1: INPUT: step size $\eta$, $\lambda_0 = (\lambda_0^1, \cdots, \lambda_0^m)$ with $\lambda_0^i > 0,\ i \in [m]$, and total iterations $T$
2: $w_1 = \lambda_1 = 0$
3: for $t = 1, \ldots, T$ do
4:   Submit the solution $w_t$
5:   Receive loss functions $f_t^i,\ i = 0, 1, \ldots, m$
6:   Compute the gradients $\nabla f_t^i(w_t),\ i = 0, 1, \ldots, m$
7:   Update the solution $w$ and $\lambda$ by
$$w_{t+1} = \Pi_{\mathcal{B}}\left(w_t - \eta \nabla_w \mathcal{L}_t(w_t, \lambda_t)\right) = \Pi_{\mathcal{B}}\left(w_t - \eta \left[\nabla f_t^0(w_t) + \sum_{i=1}^m \lambda_t^i \nabla f_t^i(w_t)\right]\right),$$
$$\lambda_{t+1}^i = \Pi_{[0, \lambda_0^i]}\left(\lambda_t^i + \eta \nabla_{\lambda^i} \mathcal{L}_t(w_t, \lambda_t)\right) = \Pi_{[0, \lambda_0^i]}\left(\lambda_t^i + \eta \left[f_t^i(w_t) - \gamma_i\right]\right).$$
8: end for
9: Return $\widehat{w}_T = \sum_{t=1}^T w_t / T$

4 An Efficient Stochastic Primal-Dual Algorithm

We now turn to devising a tractable formulation of the problem, followed by an efficient primal-dual optimization algorithm and the statements of our main results. We show that, with high probability, the solution found by the proposed algorithm exactly satisfies the expected constraints and achieves a regret bound of $O(\sqrt{T})$. The main idea of the proposed algorithm is to design an appropriate objective that combines the loss function $\bar{f}^0(w)$ with $\bar{f}^i(w),\ i \in [m]$. As mentioned before, owing to the presence of conflicting goals and the random nature of the objective functions, we seek a solution that satisfies all the objectives rather than an optimal one. To this end, we define the following objective function

$$\bar{\mathcal{L}}(w, \lambda) = \bar{f}^0(w) + \sum_{i=1}^m \lambda^i \left(\bar{f}^i(w) - \gamma_i\right).$$

Note that the objective function involves both the primal variable $w \in \mathcal{W}$ and the dual variable $\lambda = (\lambda^1, \ldots, \lambda^m)^\top \in \Lambda$, where $\Lambda \subseteq \mathbb{R}^m_+$ is a compact convex set that bounds the set of dual variables and will be discussed later. In the proposed algorithm, we simultaneously update solutions for both $w$ and $\lambda$. By exploiting convex-concave optimization theory [16], we will show that, with high probability, the solution achieves regret $O(\sqrt{T})$ and exactly obeys the constraints. As a first step, we consider a simple scenario where the obtained solution is allowed to violate the constraints. The detailed steps of our primal-dual algorithm are presented in Algorithm 1. It follows the same procedure as convex-concave optimization. Since at each iteration we only observe randomly sampled loss functions $f_t^i(w),\ i = 0, 1, \ldots, m$, the objective function given by

$$\mathcal{L}_t(w, \lambda) = f_t^0(w) + \sum_{i=1}^m \lambda^i \left(f_t^i(w) - \gamma_i\right)$$

provides an unbiased estimate of $\bar{\mathcal{L}}(w, \lambda)$.
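The primal-dual updates of Algorithm 1 can be sketched as follows. The oracle interfaces `grad_oracle` and `viol_oracle` are hypothetical stand-ins for receiving the sampled losses and their gradients at each trial; they are not part of the paper's setup:

```python
import numpy as np

def primal_dual(grad_oracle, viol_oracle, d, lambda0, eta, T, radius):
    """Sketch of the stochastic primal-dual updates in Algorithm 1.

    grad_oracle(w, t) -> [g0, g1, ..., gm]: stochastic gradients of
    f_t^0, ..., f_t^m at w. viol_oracle(w, t) -> array of f_t^i(w) - gamma_i.
    lambda0: per-coordinate caps on the dual variables.
    Returns the averaged primal iterate hat{w}_T.
    """
    w = np.zeros(d)
    lam = np.zeros(len(lambda0))
    w_sum = np.zeros(d)
    for t in range(T):
        g = grad_oracle(w, t)
        v = viol_oracle(w, t)
        # primal step: gradient of the Lagrangian, then project onto ball B
        w = w - eta * (g[0] + sum(l * gi for l, gi in zip(lam, g[1:])))
        nrm = np.linalg.norm(w)
        if nrm > radius:
            w *= radius / nrm
        # dual step: gradient ascent on lambda, clipped to [0, lambda0]
        lam = np.clip(lam + eta * v, 0.0, lambda0)
        w_sum += w
    return w_sum / T
```

On the toy problem of minimizing $w^2$ subject to $w \ge 1/2$, the averaged iterate settles near the constrained optimum $w = 1/2$ with dual variable near its saddle-point value.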
Given the approximate objective L_t(w, λ), the proposed algorithm tries to minimize L_t(w, λ) with respect to the primal variable w and maximize it with respect to the dual variable λ. To facilitate the analysis, we first rewrite the constrained optimization problem min_{w ∈ B ∩ W} f̄^0(w), where W = { w : f̄^i(w) ≤ γ_i, i = 1, ..., m }, in the following equivalent form:

min_{w ∈ B} max_{λ ∈ R^m_+} f̄^0(w) + ∑_{i=1}^m λ^i ( f̄^i(w) − γ_i ). (2)

We denote by w_* and λ_* = (λ^1_*, ..., λ^m_*)⊤ the optimal primal and dual solutions to the above convex-concave optimization problem, respectively, i.e.,

w_* = arg min_{w ∈ B} f̄^0(w) + ∑_{i=1}^m λ^i_* ( f̄^i(w) − γ_i ), (3)
λ_* = arg max_{λ ∈ R^m_+} f̄^0(w_*) + ∑_{i=1}^m λ^i ( f̄^i(w_*) − γ_i ). (4)

The following assumption establishes an upper bound on the gradients of L(w, λ) with respect to w and λ. We later show that this assumption holds under a mild condition on the objective functions.

Assumption 2 (Gradient Boundedness). The gradients ∇_w L(w, λ) and ∇_λ L(w, λ) are uniformly bounded, i.e., there exists a constant G > 0 such that max( ‖∇_w L(w, λ)‖, ‖∇_λ L(w, λ)‖ ) ≤ G for any w ∈ B and λ ∈ Λ.

Under the preceding assumption, the following theorem shows that, under appropriate conditions, the average solution ŵ_T generated by Algorithm 1 attains a convergence rate of O(1/√T) for both the regret and the violation of the constraints.

Theorem 1. Set λ^i_0 ≥ λ^i_* + θ, i ∈ [m], where θ > 0 is a constant. Let ŵ_T be the solution obtained by Algorithm 1 after T iterations. Then, with probability 1 − (2m + 1)δ, we have

f̄^0(ŵ_T) − f̄^0(w_*) ≤ µ(δ)/√T and f̄^i(ŵ_T) − γ_i ≤ µ(δ)/(θ√T), i ∈ [m],

where D² = ∑_{i=1}^m [λ^i_0]², η = √((R² + D²)/(2T)) / G, and

µ(δ) = √2 G √(R² + D²) + 2G(R + D) √(2 ln(1/δ)). (5)

Remark 2. The parameter θ ∈ R_+ may be chosen freely and can be set to obtain a sharper upper bound on the violation of the constraints.
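As a quick sanity check on the rates in Theorem 1, the following sketch plugs illustrative constants into the stated choices of η and µ(δ); the function name and the numeric values are ours, not the paper's.

```python
import math

def theorem1_bounds(G, R, D, T, delta, theta):
    """Step size eta from Theorem 1, plus the two bounds:
    mu(delta)/sqrt(T) on the regret and mu(delta)/(theta*sqrt(T)) on the violation."""
    eta = math.sqrt((R**2 + D**2) / (2 * T)) / G
    mu = math.sqrt(2) * G * math.sqrt(R**2 + D**2) \
         + 2 * G * (R + D) * math.sqrt(2 * math.log(1 / delta))
    return eta, mu / math.sqrt(T), mu / (theta * math.sqrt(T))

eta, regret, violation = theorem1_bounds(G=1.0, R=1.0, D=1.0, T=10_000, delta=0.05, theta=2.0)
```

Quadrupling T halves both bounds (the 1/√T rate), while increasing θ shrinks the violation bound only, matching Remark 2.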
In particular, a larger value of θ imposes a larger penalty on the violation of the constraints and results in a smaller violation of the objectives.

We can also develop an algorithm that allows the solution to exactly satisfy all the constraints. To this end, we define γ̂_i = γ_i − µ(δ)/(θ√T) and run Algorithm 1 with γ_i replaced by γ̂_i. Let G′ denote the upper bound in Assumption 2 on ∇_λ L(w, λ) with γ_i replaced by γ̂_i, i ∈ [m]. The following theorem gives the properties of the obtained average solution ŵ_T.

Theorem 2. Let ŵ_T be the solution obtained by Algorithm 1 with γ_i replaced by γ̂_i and λ^i_0 = λ^i_* + θ, i ∈ [m]. Then, with probability 1 − (2m + 1)δ, we have

f̄^0(ŵ_T) − f̄^0(w_*) ≤ (1 + ∑_{i=1}^m λ^i_0) µ′(δ)/√T and f̄^i(ŵ_T) ≤ γ_i, i ∈ [m],

where µ′(δ) is the same as (5) with G replaced by G′, and η = √((R² + D²)/(2T)) / G′.

4.1 Convergence Analysis

Here we provide the proofs of the main theorems stated above. We start by proving Theorem 1 and then extend the argument to prove Theorem 2.

Proof. (of Theorem 1) Using the standard analysis of convex-concave optimization, from the convexity of L̄(·, λ) in w and the concavity of L̄(w, ·) in λ, for any w ∈ B and λ^i ∈ [0, λ^i_0], i ∈ [m], we have

L̄(w_t, λ) − L̄(w, λ_t)
  ≤ ⟨w_t − w, ∇_w L̄(w_t, λ_t)⟩ − ⟨λ_t − λ, ∇_λ L̄(w_t, λ_t)⟩
  = ⟨w_t − w, ∇_w L_t(w_t, λ_t)⟩ − ⟨λ_t − λ, ∇_λ L_t(w_t, λ_t)⟩
    + ⟨w_t − w, ∇_w L̄(w_t, λ_t) − ∇_w L_t(w_t, λ_t)⟩ − ⟨λ_t − λ, ∇_λ L̄(w_t, λ_t) − ∇_λ L_t(w_t, λ_t)⟩
  ≤ ( ‖w_t − w‖² − ‖w_{t+1} − w‖² ) / (2η) + ( ‖λ_t − λ‖² − ‖λ_{t+1} − λ‖² ) / (2η)
    + (η/2) ( ‖∇_w L_t(w_t, λ_t)‖² + ‖∇_λ L_t(w_t, λ_t)‖² )
    + ⟨w_t − w, ∇_w L̄(w_t, λ_t) − ∇_w L_t(w_t, λ_t)⟩ − ⟨λ_t − λ, ∇_λ L̄(w_t, λ_t) − ∇_λ L_t(w_t, λ_t)⟩,

where in the equality we have added and subtracted the stochastic gradients used for updating the solutions, and the last inequality follows from the updating rules for w_{t+1} and λ_{t+1} together with the non-expansiveness of the orthogonal projection onto a convex domain.
By adding all the inequalities together, we get

∑_{t=1}^T L̄(w_t, λ) − L̄(w, λ_t)
  ≤ ( ‖w − w_1‖² + ‖λ − λ_1‖² ) / (2η) + (η/2) ∑_{t=1}^T ( ‖∇_w L_t(w_t, λ_t)‖² + ‖∇_λ L_t(w_t, λ_t)‖² )
    + ∑_{t=1}^T ⟨w_t − w, ∇_w L̄(w_t, λ_t) − ∇_w L_t(w_t, λ_t)⟩ − ⟨λ_t − λ, ∇_λ L̄(w_t, λ_t) − ∇_λ L_t(w_t, λ_t)⟩
  ≤ (R² + D²)/(2η) + ηG²T
    + ∑_{t=1}^T ⟨w_t − w, ∇_w L̄(w_t, λ_t) − ∇_w L_t(w_t, λ_t)⟩ − ⟨λ_t − λ, ∇_λ L̄(w_t, λ_t) − ∇_λ L_t(w_t, λ_t)⟩
  ≤ (R² + D²)/(2η) + ηG²T + 2G(R + D)√(2T ln(1/δ))   (w.p. 1 − δ),

where the last inequality follows from the Hoeffding inequality for martingales [6]. By expanding the left-hand side, substituting the stated value of η, and applying Jensen's inequality to the average solutions ŵ_T = ∑_{t=1}^T w_t / T and λ̂_T = ∑_{t=1}^T λ_t / T, for any fixed λ^i ∈ [0, λ^i_0], i ∈ [m], and w ∈ B, with probability 1 − δ we have

f̄^0(ŵ_T) + ∑_{i=1}^m λ^i ( f̄^i(ŵ_T) − γ_i ) − f̄^0(w) − ∑_{i=1}^m λ̂^i_T ( f̄^i(w) − γ_i )
  ≤ √2 G √((R² + D²)/T) + 2G(R + D) √((2/T) ln(1/δ)). (6)

By fixing w = w_* and λ = 0 in (6), and using f̄^i(w_*) ≤ γ_i, i ∈ [m], we have with probability 1 − δ

f̄^0(ŵ_T) ≤ f̄^0(w_*) + √2 G √((R² + D²)/T) + 2G(R + D) √((2/T) ln(1/δ)).

To bound the violation of the constraints, we set w = w_*, λ^i = λ^i_0, and λ^j = λ^j_*, j ≠ i, in (6). We have

f̄^0(ŵ_T) + λ^i_0 ( f̄^i(ŵ_T) − γ_i ) + ∑_{j≠i} λ^j_* ( f̄^j(ŵ_T) − γ_j ) − f̄^0(w_*) − ∑_{i=1}^m λ̂^i_T ( f̄^i(w_*) − γ_i )
  ≥ f̄^0(ŵ_T) + λ^i_0 ( f̄^i(ŵ_T) − γ_i ) + ∑_{j≠i} λ^j_* ( f̄^j(ŵ_T) − γ_j ) − f̄^0(w_*) − ∑_{i=1}^m λ^i_* ( f̄^i(w_*) − γ_i )
  ≥ θ ( f̄^i(ŵ_T) − γ_i ),

where the first inequality utilizes (4) and the second utilizes (3). We thus have, with probability 1 − δ,

f̄^i(ŵ_T) − γ_i ≤ (√2 G / θ) √((R² + D²)/T) + (2G(R + D)/θ) √((2/T) ln(1/δ)), i ∈ [m].

We complete the proof by taking the union bound over all the random events.

We now turn to the proof of Theorem 2, which gives a high-probability bound on the convergence of the modified algorithm that obeys all the constraints.

Proof.
(of Theorem 2) Following the proof of Theorem 1, with probability 1 − δ, we have

f̄^0(ŵ_T) + ∑_{i=1}^m λ^i ( f̄^i(ŵ_T) − γ̂_i ) − f̄^0(w) − ∑_{i=1}^m λ̂^i_T ( f̄^i(w) − γ̂_i )
  ≤ √2 G′ √((R² + D²)/T) + 2G′(R + D) √((2/T) ln(1/δ)).

Let w̃_* and λ̃_* be the saddle point of the following minimax optimization problem:

min_{w ∈ B} max_{λ ∈ R^m_+} f̄^0(w) + ∑_{i=1}^m λ^i ( f̄^i(w) − γ̂_i ).

Following the same analysis as in Theorem 1, for each i ∈ [m], by setting w = w̃_*, λ^i = λ^i_0, and λ^j = λ̃^j_*, and using the fact that λ̃^j_* ≤ λ^j_*, we have, with probability 1 − δ,

θ ( f̄^i(ŵ_T) − γ_i ) ≤ √2 G′ √((R² + D²)/T) + 2G′(R + D) √((2/T) ln(1/δ)) − µ(δ)/√T ≤ 0,

which completes the proof.

4.2 Implementation Issues

In order to run Algorithm 1, we need to estimate the parameters λ^i_0, i ∈ [m], which requires deciding the set Λ by estimating an upper bound on the optimal dual variables λ^i_*, i ∈ [m]. To this end, we consider an alternative to the convex-concave optimization problem in (2), namely

min_{w ∈ B} max_{λ ≥ 0} f̄^0(w) + λ max_{1 ≤ i ≤ m} ( f̄^i(w) − γ_i ). (7)

Evidently w_* is the optimal primal solution to (7). Let λ_a be the optimal dual solution to (7). The following proposition links λ^i_*, i ∈ [m], the optimal dual solution to (2), with λ_a, the optimal dual solution to (7).

Proposition 1. Let λ_a be the optimal dual solution to (7) and λ^i_*, i ∈ [m], the optimal dual solution to (2). Then λ_a = ∑_{i=1}^m λ^i_*.

Proof. We can rewrite (7) as min_{w ∈ B} max_{λ ≥ 0, p ∈ ∆_m} f̄^0(w) + ∑_{i=1}^m p_i λ ( f̄^i(w) − γ_i ), where the domain ∆_m is defined as ∆_m = { α ∈ R^m_+ : ∑_{i=1}^m α_i = 1 }. By redefining λ^i = p_i λ, the problem in (7) becomes equivalent to (2), and consequently λ = ∑_{i=1}^m λ^i, as claimed.

Given the result of Proposition 1, it suffices to bound λ_a. In order to bound λ_a, we need to make a certain assumption about f̄^i(w), i ∈ [m], whose purpose is to ensure that the optimal dual variable is well bounded from above.

Assumption 3. We assume min_{α ∈ ∆_m}
‖ ∑_{i=1}^m α_i ∇f̄^i(w) ‖ ≥ τ, where τ > 0 is a constant.

Equipped with Assumption 3, we can bound λ_a in terms of τ. Using the first-order optimality condition of (2) [7], we have λ_a = ‖∇f̄^0(w_*)‖ / ‖∂g(w)‖, where g(w) = max_{1 ≤ i ≤ m} f̄^i(w). Since ∂g(w) ∈ { ∑_{i=1}^m α_i ∇f̄^i(w) : α ∈ ∆_m }, under Assumption 3 we have λ_a ≤ L/τ. Combining Proposition 1 with this upper bound on λ_a, we obtain λ^i_* ≤ L/τ, i ∈ [m], as desired. Finally, we note that once λ_* is bounded, Assumption 2 is guaranteed by setting

G² = max( L² (1 + ∑_{i=1}^m λ^i_0)², max_{w ∈ B} ∑_{i=1}^m ( f̄^i(w) − γ_i )² ),

which follows from the Lipschitz continuity of the objective functions. In a similar way we can set G′ in Theorem 2 by replacing γ_i with γ̂_i.

5 Conclusions and Open Questions

In this paper we have addressed the problem of stochastic convex optimization with multiple objectives, which underlies many applications in machine learning. We first examined a simple problem reduction technique that eliminates the stochastic aspect of the constraint functions by approximating them using the functions sampled at each iteration. We showed that this simple idea fails to attain the optimal convergence rate and requires imposing a strong assumption, namely uniform convergence, on the objective functions. We then presented a novel and efficient primal-dual algorithm which attains the optimal convergence rate O(1/√T) for all the objectives, relying only on the Lipschitz continuity of the objective functions. This work leaves a few directions for further elaboration. In particular, it would be interesting to see whether stronger assumptions on the analytical properties of the objective functions, such as smoothness or strong convexity, may yield improved convergence rates.

Acknowledgments. The authors would like to thank the anonymous reviewers for their helpful and insightful comments. The work of M. Mahdavi and R. Jin was supported in part by ONR Award N000141210431 and NSF (IIS-1251031).

References

[1] F. B. Abdelaziz.
Solution approaches for the multiobjective stochastic programming. European Journal of Operational Research, 216(1):1–16, 2012. [2] F. B. Abdelaziz, B. Aouni, and R. E. Fayedh. Multi-objective stochastic programming for portfolio selection. European Journal of Operational Research, 177(3):1811–1823, 2007. [3] A. Agarwal, P. L. Bartlett, P. D. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012. [4] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In NIPS, pages 451–459, 2011. [5] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust optimization. Princeton University Press, 2009. [6] S. Boucheron, G. Lugosi, and O. Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning, pages 208–240, 2003. [7] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004. [8] R. Caballero, E. Cerdá, M. del Mar Muñoz, and L. Rey. Stochastic approach versus multiobjective approach for obtaining efficient solutions in stochastic multiobjective programming problems. European Journal of Operational Research, 158(3):633–648, 2004. [9] M. Ehrgott. Multicriteria optimization. Springer, 2005. [10] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. Journal of Machine Learning Research - Proceedings Track, 19:421–436, 2011. [11] K.-J. Hsiao, K. S. Xu, J. Calder, and A. O. Hero III. Multi-criteria anomaly detection using Pareto depth analysis. In NIPS, pages 854–862, 2012. [12] Y. Jin and B. Sendhoff. Pareto-based multiobjective machine learning: An overview and case studies. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 38(3):397–415, 2008. [13] M. Mahdavi, R. Jin, and T. Yang. Trading regret for efficiency: online convex optimization with long term constraints.
Journal of Machine Learning Research, 13:2465–2490, 2012. [14] S. Mannor, J. N. Tsitsiklis, and J. Y. Yu. Online learning with sample path constraints. Journal of Machine Learning Research, 10:569–590, 2009. [15] H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952. [16] A. Nemirovski. Efficient methods in convex programming. Lecture notes, available at http://www2.isye.gatech.edu/~nemirovs, 1994. [17] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19:1574–1609, 2009. [18] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012. [19] P. Rigollet and X. Tong. Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12:2831–2855, 2011. [20] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012. [21] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009. [22] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807–814, 2007. [23] K. Sridharan. Learning from an optimization viewpoint. PhD thesis, 2012. [24] K. M. Svore, M. N. Volkovs, and C. J. Burges. Learning to rank with multiple objective functions. In WWW, pages 367–376. ACM, 2011. [25] H. Xu and F. Meng. Convergence analysis of sample average approximation methods for a class of stochastic mathematical programs with equality constraints. Mathematics of Operations Research, 32(3):648–668, 2007. [26] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.
Bayesian Hierarchical Community Discovery Charles Blundell∗ DeepMind Technologies charles@deepmind.com Yee Whye Teh Department of Statistics, University of Oxford y.w.teh@stats.ox.ac.uk

Abstract

We propose an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks. Our model is a tree-structured mixture of potentially exponentially many stochastic blockmodels. We describe a family of greedy agglomerative model selection algorithms that take just one pass through the data to learn a fully probabilistic, hierarchical community model. In the worst case, our algorithms scale quadratically in the number of vertices of the network, but independently of the number of nested communities. In practice, the run time of our algorithms is two orders of magnitude faster than the Infinite Relational Model, achieving comparable or better accuracy.

1 Introduction

People often organise themselves into groups or communities. For example, friends may form cliques, scientists may have recurring collaborations, and politicians may form factions. Consequently, the structure found in social networks is often studied by inferring these groups. Using community membership, one may then make predictions about the presence or absence of unobserved connectivity in the social network. Sometimes these communities possess hierarchical structure. For example, within science, the community of physicists may be split into those working on various branches of physics, and each branch refined repeatedly until finally reaching the particular specialisation of an individual physicist. Much previous work on social networks has focused on discovering flat community structure.
The stochastic blockmodel [1] places each individual in a community according to the block structure of the social network's adjacency matrix, whilst the mixed membership stochastic blockmodel [2] extends the stochastic blockmodel to allow individuals to belong to several flat communities simultaneously. Both models are parametric and require the number of flat communities to be known. Greedy hierarchical clustering has previously been applied directly to discovering hierarchical community structure [3]. These methods do not require the community structure to be flat or the number of communities to be known. Such schemes are often computationally efficient, scaling quadratically in the number of individuals for a dense network, or linearly in the number of edges for a sparse network [4]. However, these methods do not yield a full probabilistic account of the data, in terms of parameters and the discovered structure. Several authors have also proposed Bayesian approaches to inferring community structure. The Infinite Relational Model (IRM; [5, 6, 7]) infers a flat community structure. The IRM has been extended to infer hierarchies [8] by augmenting it with a tree, but at considerable computational cost. [9] and [10] propose methods limited to hierarchies of depth two, whilst [11] propose methods limited to hierarchies of known depth. The Mondrian process [12] provides a flexible prior on trees and a likelihood model for relational data. Current Bayesian nonparametric methods do not scale well to larger networks because the inference algorithms used make many small changes to the model. ∗Part of the work was done whilst at the Gatsby Unit, University College London. Such schemes can take a large number of iterations to converge on an adequate solution, whilst each iteration often scales unfavourably in the number of communities or vertices.
We shall describe a greedy Bayesian hierarchical clustering method for discovering community structure in social networks. Our work builds upon Bayesian approaches to greedy hierarchical clustering [13, 14], extending these approaches to relational data. We call our method Bayesian Hierarchical Community Discovery (BHCD). BHCD produces good results two orders of magnitude faster than a single iteration of the IRM, and its worst-case run time is quadratic in the number of vertices of the graph and independent of the number of communities. The remainder of the paper is organised as follows. Section 2 describes the stochastic blockmodel. In Section 3 we introduce our model as a hierarchical mixture of stochastic blockmodels. In Section 4 we describe an efficient scheme for inferring hierarchical community structure with our model. Section 5 demonstrates BHCD on several data sets. We conclude with a brief discussion in Section 6.

2 Stochastic Blockmodels

A stochastic blockmodel [1] consists of a partition, φ, of vertices V and, for each pair of clusters p and q in φ, a parameter, θ_pq, giving the probability of the presence or absence of an edge between nodes of the clusters. Suppose V = {a, b, c, d}; then one way to partition the vertices would be to form clusters ab, c and d, which we shall write as φ = ab|c|d, where | denotes a split between clusters. The probability of an adjacency matrix, D, given a stochastic blockmodel, is

P(D | φ, {θ_pq}_{p,q∈φ}) = ∏_{p,q∈φ} θ_pq^{n¹_pq} (1 − θ_pq)^{n⁰_pq}, (1)

where n¹_pq is the total number of edges in D between the vertices in clusters p and q, and n⁰_pq is the total number of observed absent edges in D between the vertices in clusters p and q. When modelling communities, we expect the edge appearance probabilities within a cluster to be different from those between different clusters. Hence we place a different prior on each of these cases.
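Eq. (1) is a product of Bernoulli likelihoods, one per pair of clusters; the following sketch computes its log for a small network. For simplicity it counts every entry of each block, so it does not exclude self-edges (a simplification relative to the paper's later convention), and the function name is ours.

```python
import numpy as np

def blockmodel_loglik(D, phi, theta):
    """Log of Eq. (1): sum over cluster pairs (p, q) of
    n1_pq * log(theta_pq) + n0_pq * log(1 - theta_pq).

    D: 0/1 adjacency matrix; phi: a partition as a list of vertex-index lists;
    theta[p][q]: edge probability between clusters p and q.
    """
    ll = 0.0
    for p, vp in enumerate(phi):
        for q, vq in enumerate(phi):
            block = D[np.ix_(vp, vq)]      # the (p, q)-block of the adjacency matrix
            n1 = int(block.sum())          # observed edges in the block
            n0 = block.size - n1           # observed absent edges in the block
            ll += n1 * np.log(theta[p][q]) + n0 * np.log(1.0 - theta[p][q])
    return ll
```

For example, with two singleton clusters and all θ_pq = 0.5, every one of the four block entries contributes log(0.5).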
Similar approaches have been taken to adapt the IRM to community detection [7], where non-conjugate priors were used at increased computational cost. In the interest of computational efficiency, our model will instead use conjugate priors but with differing hyperparameters: θ_pp will have a Beta(α, β) prior and θ_pq, p ≠ q, will have a Beta(δ, λ) prior. The hyperparameters are picked such that α > β and δ < λ, which corresponds to a prior belief in a higher density of edges within a community than across communities. Integrating out the edge appearance parameters, we obtain the following likelihood of a particular partition φ:

P(D | φ) = ∏_{p∈φ} B(α + n¹_pp, β + n⁰_pp) / B(α, β) × ∏_{p,q∈φ, p≠q} B(δ + n¹_pq, λ + n⁰_pq) / B(δ, λ), (2)

where B(·, ·) is the Beta function. We may generalise this to use any exponential family:

p(D | φ) = ∏_{p∈φ} f(σ_pp) ∏_{p,q∈φ, p≠q} g(σ_pq), (3)

where f(·) is the marginal likelihood of the on-diagonal blocks and g(·) is the marginal likelihood of the off-diagonal blocks. We use σ_pq to denote the sufficient statistics of a (p, q)-block of the adjacency matrix: all of the elements whose row indices are in cluster p and whose column indices are in cluster q. For the remainder of the paper, we shall focus on the beta-Bernoulli case given in (2) for concreteness, i.e., σ_pq = (n¹_pq, n⁰_pq), with f(x, y) = B(α + x, β + y) / B(α, β) and g(x, y) = B(δ + x, λ + y) / B(δ, λ). For clarity of exposition, we shall focus on modelling undirected or symmetric networks with no self-edges, so σ_pq = σ_qp and σ_{{x},{x}} = 0 for each vertex x, but in general this restriction is not necessary. One approach to inferring φ is to fix the number of communities K and then use maximum likelihood estimation or Bayesian inference to assign vertices to each of the communities [1, 15]. Another approach is to use variational Bayes, combined with an upper bound on the number of communities, to determine the number of communities and community assignments [16].
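The beta-Bernoulli marginals f and g of Eq. (2) are ratios of Beta functions, which are easy to compute in log space via `lgamma`. The hyperparameter values below are illustrative choices of ours satisfying α > β and δ < λ:

```python
from math import lgamma

ALPHA, BETA = 1.0, 0.2   # on-diagonal prior, alpha > beta: dense within communities
DELTA, LAM = 0.2, 1.0    # off-diagonal prior, delta < lam: sparse across communities

def log_beta(a, b):
    """log B(a, b) = lgamma(a) + lgamma(b) - lgamma(a + b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_f(n1, n0):
    """On-diagonal marginal: log [B(alpha + n1, beta + n0) / B(alpha, beta)]."""
    return log_beta(ALPHA + n1, BETA + n0) - log_beta(ALPHA, BETA)

def log_g(n1, n0):
    """Off-diagonal marginal: log [B(delta + n1, lam + n0) / B(delta, lam)]."""
    return log_beta(DELTA + n1, LAM + n0) - log_beta(DELTA, LAM)
```

With these priors, a dense block (many present edges) scores higher under f than under g, and a sparse block the reverse, which is exactly what drives the community structure discovered later.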
Figure 1: Hierarchical decomposition of the adjacency matrix into tree-consistent partitions. Black squares indicate edge presence, white squares indicate edge absence, grey squares are unobserved.

3 Bayesian Hierarchical Communities

In this section we develop a Bayesian nonparametric approach to community discovery. Our model organises the communities into a nested hierarchy T, with all vertices in one community at the root and singleton vertices at the leaves. Each vertex belongs to all communities along the path from the root to the leaf containing it. We describe the probabilistic model relating the hierarchy of communities to the observed network connectivity data here, whilst in the next section we develop a greedy model selection procedure for learning the hierarchy T from data. We begin with the marginal probability of the adjacency matrix D under a stochastic blockmodel:

p(D) = ∑_φ p(φ) p(D | φ). (4)

If the Chinese restaurant process (CRP) is used as the prior on partitions p(φ), then (4) corresponds to the marginal likelihood of the IRM. Computing (4) typically requires an approximation: the space of partitions φ is large, and so the number of partitions in the above sum grows at least exponentially in the number of vertices. We take a different approach: we use a tree to define a prior on partitions, where only partitions that are consistent with the tree are included in the sum. This allows us to evaluate (4) exactly. The tree represents the hierarchical community structure discovered in the network. Each internal node of the tree corresponds to a community, and the leaves of the tree are the vertices of the adjacency matrix, D. Figure 1 shows how a tree defines a collection of partitions for inclusion in the sum in (4). The adjacency matrix on the left is explained by our model, conditioned upon the tree on the upper left, by its five tree-consistent partitions.
Various blocks within the adjacency matrix are explained either by the on-diagonal model f or the off-diagonal model g, according to each partition. Note that the block structure of the off-diagonal model depends on the structure of the tree T, not just on the partition φ. The model always includes the trivial partition of all vertices in a single community and also the singleton partition of all vertices in separate communities. More precisely, we shall denote trees as a nested collection of sets of vertices. For example, the tree in Figure 1 is T = {{a, b}, {c, d, e}, f}. The set of partitions consistent with a tree T may be expressed formally as in [14]:

Φ(T) = {leaves(T)} ∪ { φ_1 | ... | φ_{n_T} : φ_i ∈ Φ(T_i), T_i ∈ ch(T) }, (5)

where leaves(T) are the leaves of the tree T, ch(T) are its children, and T_i is the i-th subtree of tree T. The marginal likelihood of the tree T can be written as

p(D | T) = p(D_TT | T) = ∑_φ p(φ | T) p(D_TT | φ, T), (6)

where the notation D_TT is short for D_{leaves(T), leaves(T)}, the block of D whose rows and columns correspond to the leaves of T. Our prior on partitions p(φ | T) is motivated by the following generative process. Begin at the root of the tree, S = T. With probability π_S, stop and generate D_SS according to the on-diagonal model f. Otherwise, with probability 1 − π_S, generate all inter-cluster edges between the children of the current node according to g, and recurse on each child of the current tree S. The resulting prior on tree-consistent partitions p(φ | T) factorises as

p(φ | T) = ∏_{S ∈ ancestor_T(φ)} (1 − π_S) ∏_{S ∈ subtree_T(φ)} π_S, (7)

where subtree_T(φ) are the subtrees in T corresponding to the clusters in partition φ, and ancestor_T(φ) are the ancestors of the trees in subtree_T(φ). The prior probability of partitions not consistent with T is zero. Following [14], we define π_S = 1 − (1 − γ)^{|ch(S)|}, where γ is a parameter of the model.
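Eq. (5) can be read as a small recursion: a tree contributes its one-cluster partition plus every combination of consistent partitions of its children. A sketch of ours (trees as nested Python lists, leaves as strings) that reproduces the five partitions of the Figure 1 tree:

```python
from itertools import product

def flatten(tree):
    """All leaves of a nested-list tree."""
    return [tree] if not isinstance(tree, list) else [x for s in tree for x in flatten(s)]

def tree_partitions(tree):
    """Eq. (5): Phi(T) = {leaves(T)} union all products of the children's partitions."""
    if not isinstance(tree, list):            # a leaf: only the singleton partition
        return [[[tree]]]
    parts = [[flatten(tree)]]                 # the trivial one-cluster partition
    child_parts = [tree_partitions(sub) for sub in tree]
    for combo in product(*child_parts):       # pick one consistent partition per child
        parts.append([cluster for phi in combo for cluster in phi])
    return parts

# The tree of Figure 1: T = {{a, b}, {c, d, e}, f}
partitions = tree_partitions([['a', 'b'], ['c', 'd', 'e'], 'f'])
```

The result contains both the trivial partition abcdef and the singleton partition a|b|c|d|e|f, as the text requires, and five partitions in total for this tree.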
This choice of π_S gives higher likelihood to non-binary trees over cascading binary trees when the data has no hierarchical structure [14]. Similarly, the likelihood of each partition p(D | φ, T) factorises as

p(D_TT | φ, T) = ∏_{S ∈ ancestor_T(φ)} g(σ^{¬ch}_SS) ∏_{S ∈ subtree_T(φ)} f(σ_SS), (8)

where σ_SS are the sufficient statistics of the adjacency matrix D among the leaves of tree S, and σ^{¬ch}_SS are the sufficient statistics of the edges between different children of S:

σ^{¬ch}_SS = σ_SS − ∑_{C ∈ ch(S)} σ_CC. (9)

The set of tree-consistent partitions given in (5) has at most O(2^n) partitions for n vertices. However, due to the structure of the prior on partitions (7) and the block model (8), the marginal likelihood (6) may be calculated by dynamic programming in O(n + m) time, where n is the number of vertices and m is the number of edges. Combining (7) and (8) and expanding (6) by breadth-first traversal of the tree yields the following recursion for the marginal likelihood of the generative process given at the beginning of the section:

p(D_TT | T) = π_T f(σ_TT) + (1 − π_T) g(σ^{¬ch}_TT) ∏_{C ∈ ch(T)} p(D_CC | C). (10)

4 Agglomerative Model Selection

In this section we describe how to learn the hierarchy of communities T. The problem is treated as one of greedy model selection: each tree T is a different model, and we wish to find the model that best explains the data. The tree is built in a bottom-up greedy agglomerative fashion, starting from a forest consisting of n trivial trees, each corresponding to exactly one vertex. Each iteration then merges two of the trees in the forest. At each iteration, each vertex in the network is a leaf of exactly one tree in the forest. The algorithm finishes when just one tree remains.
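The recursion in Eq. (10) is a short tree traversal once σ_SS and σ_SS^¬ch are available. In this sketch of ours, f, g and the two statistics functions are supplied by the caller (a simplification: the paper caches the statistics incrementally rather than recomputing them):

```python
GAMMA = 0.4  # illustrative value of the model parameter gamma

def pi_s(num_children):
    """pi_S = 1 - (1 - gamma)^{|ch(S)|}."""
    return 1.0 - (1.0 - GAMMA) ** num_children

def tree_marginal(tree, f, g, stats, stats_nc):
    """Eq. (10): p(D_TT|T) = pi_T f(sigma_TT)
                            + (1 - pi_T) g(sigma_TT^{nc}) prod_C p(D_CC|C).

    Trees are nested lists with string leaves; stats(tree) and stats_nc(tree)
    supply sigma_TT and sigma_TT^{nc} for the corresponding block of D.
    """
    if not isinstance(tree, list):    # a leaf: a single vertex, no within-block data
        return 1.0
    prod_children = 1.0
    for child in tree:
        prod_children *= tree_marginal(child, f, g, stats, stats_nc)
    p = pi_s(len(tree))
    return p * f(stats(tree)) + (1.0 - p) * g(stats_nc(tree)) * prod_children
```

A convenient consistency check: when f and g both return probability 1, the mixture in (10) returns 1 for any tree, since the stop/recurse probabilities sum to one at every node.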
We define the likelihood of the forest F as the probability of the data described by each tree in the forest times that of the data corresponding to edges between different trees:

p(D | F) = g(σ^{¬ch}_FF) ∏_{T ∈ F} p(D_TT | T), (11)

where σ^{¬ch}_FF are the sufficient statistics of the edges between different trees in the forest. The initial forest, F^(0), consists of a singleton tree for each vertex of the network. At each iteration a pair of trees in the forest F is chosen to be merged, resulting in forest F*. Which pair of trees to merge, and how to merge them, is determined by considering which pair and type of merger yields the largest Bayes factor improvement over the current model. If the trees I and J are merged to form the tree M, then the Bayes factor score is

SCORE(M; I, J) = p(D_MM | F*) / p(D_MM | F) = p(D_MM | M) / ( p(D_II | I) p(D_JJ | J) g(σ_IJ) ), (12)

where p(D_MM | M), p(D_II | I) and p(D_JJ | J) are given by (10), and σ_IJ are the sufficient statistics of the edges connecting leaves(I) and leaves(J). Note that the Bayes factor score is based on data local to the merge, i.e., it considers the probability of the connectivity data only among the leaves of the newly merged tree. This permits efficient local computations and embodies the assumption that local community structure should depend only on local connectivity structure. We consider three possible mergers of two trees I and J into M. See Figure 2, where for concreteness we take I = {T_a, T_b, T_c} and J = {T_d, T_e}, where T_a, T_b, T_c, T_d, T_e are subtrees. M may be

Figure 2: Different merge operations.

1: Initialise F, {p_I, σ^{¬ch}_II}_{I∈F}, {σ_IJ}_{I,J∈F}.
2: for each unique pair I, J ∈ F do
3:   Let M := MERGE(I; J), compute p_M and SCORE(M; I, J), and add M to the heap.
4: end for
5: while the heap is not empty do
6:   Pop I = MERGE(X; Y) off the top of the heap.
7:   if X ∈ F and Y ∈ F then
8:     F ← (F \ {X, Y}) ∪ {I}.
9:     for each tree J ∈ F \ {I} do
10:      Compute σ_IJ, σ_MM, and σ^{¬ch}_MM using (13).
11:      Let M := MERGE(I; J), compute p_M and SCORE(M; I, J), and add M to the heap.
12:    end for
13:  end if
14: end while
15: return the only tree in F

Algorithm 1: Bayesian hierarchical community discovery.

formed by joining I and J together using a new node, giving M = {I, J}. Alternatively, M may be formed by absorbing J as a child of I, yielding M = {J} ∪ ch(I), or vice versa, M = {I} ∪ ch(J). The algorithm for finding T is described in Algorithm 1. The algorithm maintains a forest F of trees, the likelihood p_I = p(D_II | I) of each tree I ∈ F according to (10), and the sufficient statistics {σ^{¬ch}_II}_{I∈F}, {σ_IJ}_{I,J∈F} needed for efficient computation. It also maintains a heap of potential merges ordered by the SCORE (12), used for determining the order of merges. At each iteration, the best potential merge, say of trees X and Y resulting in tree I, is picked off the heap. If either X or Y is not in F, the tree has been used in a previous merge, so the potential merge is discarded and the next potential merge is considered. After a successful merge, the sufficient statistics associated with the new tree I are computed from the previously computed ones: σ_IJ = σ_XJ + σ_YJ for J ∈ F, J ≠ I, and

σ_MM = σ_II + σ_JJ + σ_IJ,
σ^{¬ch}_MM = σ_IJ if M is formed by joining I and J,
σ^{¬ch}_MM = σ^{¬ch}_II + σ_IJ if M is formed by absorbing J into I,
σ^{¬ch}_MM = σ^{¬ch}_JJ + σ_IJ if M is formed by absorbing I into J. (13)

These sufficient statistics are computed from previously cached values, allowing each inner loop of the algorithm to take O(1) time. Finally, potential mergers of I with other trees J in the forest are considered and added onto the heap. In the algorithm, MERGE(I; J) denotes the best of the three possible merges of I and J. Algorithm 1 is structurally the same as that in [14], and so has time complexity O(n² log n). The difference is that additional care is needed to cache the sufficient statistics, allowing for O(1) computation per inner loop of the algorithm.
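The cached-statistics bookkeeping of Eq. (13) amounts to additive updates, one case per merge type. A sketch of ours, with statistics stored as NumPy arrays of (edge-present, edge-absent) counts:

```python
import numpy as np

def merged_stats(kind, sig_II, sig_JJ, sig_IJ, sig_nc_II, sig_nc_JJ):
    """Eq. (13): sufficient statistics of M after merging trees I and J.

    kind is one of 'join', 'absorb_J_into_I', 'absorb_I_into_J'.
    Returns (sigma_MM, sigma_MM^{nc}).
    """
    sig_MM = sig_II + sig_JJ + sig_IJ
    if kind == 'join':
        sig_nc_MM = sig_IJ
    elif kind == 'absorb_J_into_I':
        sig_nc_MM = sig_nc_II + sig_IJ
    elif kind == 'absorb_I_into_J':
        sig_nc_MM = sig_nc_JJ + sig_IJ
    else:
        raise ValueError(kind)
    return sig_MM, sig_nc_MM
```

Each call is O(1) in the number of vertices, which is what gives the algorithm's inner loop its constant cost.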
We shall refer to Algorithm 1 as BHCD.

4.1 Variations

BHCD will consider merging trees that have no edges between them if the merge score (12) is high enough. This does not seem a reasonable behaviour, as communities that are completely disconnected should not be merged. We can alter BHCD by simply prohibiting merges between trees that have no edges between them. We call the resulting algorithm BHCD sparse, as it only needs to perform computations on the parts of the network with edges present. Empirically, we have found that it produces better results than BHCD and runs faster for sparse networks, although in the worst case it has the same time complexity O(n² log n) as BHCD. As BHCD runs, several merges can have the same score. In particular, at the first iteration all pairs of vertices connected by an edge have the same score. In such situations, we break the ties at random. Different tie breaks can yield different results, and so different runs on the same data may yield different trees. Where we want a single tree, we use R (R = 50 in experiments) restarts and pick the tree with the highest likelihood according to (10). Where we wish to make predictions, we construct predictive probabilities (see next section) by averaging over all R trees.

4.2 Predictions

For link prediction, we wish to obtain the predictive distribution of a previously unobserved element of the adjacency matrix. This is easily achieved by traversing one path of the tree from the root towards the leaves, hence the computational complexity is linear in the depth of the tree. In particular, suppose we wish to predict the edge between x and y, D_xy, given the observed edges D. Then the predictive distribution can be computed recursively, starting with S = T:

p(D_xy | D_SS, S) = r_S f(D_xy | σ_SS) + (1 − r_S) × { p(D_xy | D_CC, C) if ∃C ∈ ch(S) : x, y ∈ leaves(C); g(D_xy | σ^{¬ch}_SS) if ∀C ∈ ch(S) : x, y ∉ leaves(C) },
rS = πSf(σSS) p(DSS|S) (14) where rS is the probability that the cluster consisting of leaves(S) is present if the cluster corresponding to its parent is not present, given the data D and the tree T. The predictive distribution is a mixture of a number of on-diagonal posterior f terms (weighted by the responsibility rT ), and finally an off-diagonal posterior g term. Hence the computational complexity is Θ(n). 5 Experiments We now demonstrate BHCD on three data sets. Firstly we examine qualitative performance on Sampson’s monastery network. Then we demonstrate the speed and accuracy of our method on a subset of the NIPS 1–17 co-authorship network compared to IRM—one of the fastest Bayesian nonparametric models for these data. Finally we show hierarchical structure found examining the full NIPS 1–17 co-authorship network. In our experiments we set the model hyperparameters α = δ = 1.0, β = λ = 0.2, and γ = 0.4 which we found to work well. In the first two experiments we shall compare four variations of BHCD: BHCD, BHCD sparse, BHCD restricted to binary trees, and BHCD sparse restricted to binary trees. Binary-only variations of BHCD only consider joins, not absorptions, and so may run faster. They also tend to produce better predictive results as they average over a larger number of partitions. But, as we shall see below, the hierarchies found can be more difficult to interpret than the non-binary hierarchies found. Sampson’s Monastery Network Figure 3 shows the result of running six variants of BHCD on time four of Sampson’s monastery network [17]. Sampson observed the monastery at five times—time four is the most interesting time as it was before four of the monks were expelled. We treated positive affiliations as edges, and negative affiliations as observed absent edges, and unknown affiliation as missing data. 
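The recursive traversal in the predictive distribution above can be sketched as follows. This is an illustrative toy with hand-set responsibilities and edge posteriors standing in for the Beta-Bernoulli predictive terms f and g; the class and field names are ours, not BHCD's:

```python
# Toy sketch of the recursive link-prediction traversal in (14).
# r is the responsibility r_S; f_edge and g_edge stand in for the
# on-diagonal f and off-diagonal g posterior predictives of an edge.

class Node:
    def __init__(self, r, f_edge, g_edge, children=()):
        self.r = r
        self.f_edge = f_edge
        self.g_edge = g_edge
        self.children = children
        self.leaves = set().union(*(c.leaves for c in children)) if children else set()

class Leaf(Node):
    def __init__(self, name):
        super().__init__(1.0, 1.0, 1.0)
        self.leaves = {name}

def predict_edge(S, x, y):
    """P(D_xy = 1 | D, T), following a single root-to-leaf path."""
    for C in S.children:
        if x in C.leaves and y in C.leaves:
            return S.r * S.f_edge + (1.0 - S.r) * predict_edge(C, x, y)
    return S.g_edge  # no child contains both x and y

tree = Node(0.6, 0.9, 0.1,
            (Node(0.8, 0.7, 0.2, (Leaf('a'), Leaf('b'))),
             Leaf('c')))
p_ab = predict_edge(tree, 'a', 'b')  # both endpoints under the left child
p_ac = predict_edge(tree, 'a', 'c')  # endpoints split across children
```

Only one child can contain both endpoints, so the recursion visits at most one node per tree level, matching the linear-in-depth cost described above.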
Sampson [17], using a variety of methods, found four flat groups, shown at the top of Figure 3: the Young Turks (Albert, Boniface, Gregory, Hugh, John Bosco, Mark, Winfrid), the Loyal Opposition (Ambrose, Berthold, Bonaventure, Louis, Peter), the Outcasts (Basil, Elias, Simplicius), and the Interstitial group (Amand, Ramuald, Victor). As can be seen in Figure 3, most BHCD variants find clear block-diagonal structure in the adjacency matrix. The binary versions find similar structure to the non-binary versions, up to permutations of the children of the non-binary trees. BHCD global is led astray by out-of-date scores on its heap and so finds less coherent structure. The log likelihoods of the trees in Figure 3 are −6.62 (BHCD) and −22.80 (BHCD sparse), whilst those of the binary trees are −8.32 (BHCD binary) and −24.68 (BHCD sparse binary). BHCD finds the most likely tree, and rose trees typically explain the data better than binary trees. BHCD finds the Young Turks and Loyal Opposition groups, and chooses to merge some members of the Interstitial group into the Loyal Opposition and Amand into the Outcasts. Mark, however, is placed in a separate community: although Mark has a positive affiliation with Gregory, Mark also has a negative affiliation with John Bosco, and so BHCD elects to create a new community to account for this discrepancy.

NIPS-234 Next we applied BHCD to a subset of the NIPS co-authorship dataset [19].
Figure 3: Sampson's monastery network. White indicates a positive affiliation, black negative, whilst grey indicates unknown. From top to bottom: Sampson's clustering, BHCD, BHCD-sparse, BHCD with binary trees, BHCD-sparse-binary.

Method         Time complexity
IRM (naïve)    O(n^2 K^2 I R)
IRM (sparse)   O(m K^2 I R)
LFRM [18]      O(n^2 F^2 I R)
IMMM [9]       O(n^2 K^2 I R)
ILA [10]       O(n^2 (F + K^2) I R)
[8]            O(n^2 K^2 I R)
BHCD           O(n^2 log(n) R)

Table 1: Time complexities of different methods. n = # vertices, m = # edges, K = # communities, F = # latent factors, I = # iterations per restart, R = # restarts.

Figure 4: NIPS-234 comparison using average log predictive, accuracy and area under the curve (AUC) against run time, averaged across 10 cross-validation folds. Methods shown: IRM (MCMC and greedy search) and BHCD/BHCD-sparse (binary and rose trees).

We compared its predictive performance to both the IRM using MCMC and inference in the IRM using greedy
search, using the publicly available C implementation [20]. Our implementation of BHCD is also in C.

Figure 5: Clusters of authors found in NIPS 1–17. Top 10 most collaborating authors shown for all clusters with more than 15 vertices.

As can be seen from Table 1, BHCD has significantly lower computational complexity than other Bayesian nonparametric methods, even those inferring flat (non-hierarchical) structure. This is because it is a simpler model and uses a simpler inference method; thus we do not expect it to yield better predictive results, but instead to get good results quickly. Unlike the other listed methods, BHCD's worst-case complexity does not depend upon the number of communities, and BHCD always terminates after a fixed number of steps, so it has no I factor. This factor I, which corresponds to the number of MCMC steps or the number of greedy search steps, may be large and may need to scale as the number of vertices increases. Following [18, 10] we restricted the network to the 234 most connected individuals. Figure 4 shows the average log predictive probability of held-out data, accuracy, and area under the receiver operating characteristic curve (AUC) over time for both BHCD and the IRM. For the IRM, each point represents a single Gibbs step (for MCMC) or a search step (for greedy search). For BHCD, each point represents a complete run of the inference algorithm. BHCD is able to make reasonable predictions before the IRM has completed a single Gibbs scan. We used the same 10 cross-validation folds as [10], and so our results are quantitatively comparable to their results for the Latent Factor Relational Model (LFRM [18]) and their model, the Infinite Latent Attributes model (ILA). BHCD performs similarly to LFRM, worse than ILA, and better than the IRM. After about 10 seconds, the sparse variants of BHCD make as good predictions on NIPS-234 as the IRM after about 1000 seconds.
Notably the sparse variants are faster than the non-sparse variants of BHCD, as the NIPS co-authorship network is sparse.

Full NIPS The full NIPS dataset has 2864 vertices and 9466 edges. Figure 5 shows part of the hierarchy discovered by BHCD. The full inferred hierarchy is large, having 646 internal nodes. We cut the tree and retained the top portion of the hierarchy, shown above the clusters. We merged all the leaves of a subtree T into a flat cluster when r_T ∏_{A ∈ ancestors(T)} (1 − r_A) > 0.5, where r_T is given in (14). This quantity corresponds to the probability of picking that particular subtree in the predictive distribution. Amongst these clusters, we included in Figure 5 only those with at least 15 members. We include hierarchies with a lower cut-off in the supplementary material.

6 Discussion and Future Work

We proposed an efficient Bayesian procedure for discovering hierarchical communities in social networks. Experimentally, our procedure discovers reasonable hierarchies and is able to make predictions about two orders of magnitude faster than one of the fastest existing Bayesian nonparametric schemes, whilst attaining comparable performance. Our inference procedure scales as O(n^2 log n) through a novel caching scheme, where n is the number of vertices, making the procedure suitable for large dense networks. However, our likelihood can be computed in O(n + m) time, where m is the number of edges. This disparity between inference and likelihood suggests that in future it may be possible to improve the scalability of inference on sparse networks, where m ≪ n^2. Another way to scale up the model would be to investigate parameterising the network using sufficient statistics of triangles instead of edges, as in [21]. Others [7] have found that non-conjugate likelihoods can yield improved predictions; thus adapting our scheme to work with non-conjugate likelihoods and performing hyperparameter inference could also be fruitful next steps.
Acknowledgements We thank the Gatsby Charitable Foundation for generous funding.

References
[1] P. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: Some first steps. Social Networks, 5:109–137, 1983.
[2] Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
[3] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. PNAS, 99:7821–7826, 2002.
[4] A. Clauset, M. E. J. Newman, and C. Moore. Finding community structure in very large networks. Physical Review E, 70, 2004.
[5] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[6] Zhao Xu, Volker Tresp, Kai Yu, and Hans-Peter Kriegel. Infinite hidden relational models. In Uncertainty in Artificial Intelligence (UAI), 2006.
[7] Morten Mørup and Mikkel N. Schmidt. Bayesian community detection. Neural Computation, 24:2434–2456, 2012.
[8] T. Herlau, M. Mørup, M. N. Schmidt, and L. K. Hansen. Detecting hierarchical structure in networks. In Cognitive Information Processing, 2012.
[9] Phaedon-Stelios Koutsourelakis and Tina Eliassi-Rad. Finding mixed-memberships in social networks. In 2008 AAAI Spring Symposium on Social Information Processing (AAAI-SS'08), 2008.
[10] Konstantina Palla, David A. Knowles, and Zoubin Ghahramani. An infinite latent attribute model for network data. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[11] Qirong Ho, Ankur P. Parikh, Le Song, and Eric P. Xing. Multiscale community blockmodel for network exploration. In Proceedings of the Fourteenth International Workshop on Artificial Intelligence and Statistics (AISTATS), 2011.
[12] D. M. Roy and Y. W. Teh. The Mondrian process. In Advances in Neural Information Processing Systems, volume 21, 2009.
[13] K. A. Heller and Z. Ghahramani.
Bayesian hierarchical clustering. In Proceedings of the International Conference on Machine Learning, volume 22, 2005.
[14] C. Blundell, Y. W. Teh, and K. A. Heller. Bayesian rose trees. In Uncertainty in Artificial Intelligence (UAI), 2010.
[15] T. Snijders and K. Nowicki. Estimation and prediction for stochastic blockmodels for graphs with latent block structure. Journal of Classification, 14:75–100, 1997.
[16] Jake M. Hofman and Chris H. Wiggins. Bayesian approach to network modularity. Physical Review Letters, 100(25):258701, 2008.
[17] S. F. Sampson. A novitiate in a period of change: An experimental and case study of social relationships. 1968.
[18] Kurt T. Miller, Thomas L. Griffiths, and Michael I. Jordan. Nonparametric latent feature models for link prediction. In Neural Information Processing Systems (NIPS), 2009.
[19] A. Globerson, G. Chechik, F. Pereira, and N. Tishby. Euclidean embedding of co-occurrence data. Journal of Machine Learning Research, 8:2265–2295, 2007.
[20] Charles Kemp. Infinite relational model implementation. http://www.psy.cmu.edu/~ckemp/code/irm.html. Accessed: 2013-04-08.
[21] Q. Ho, J. Yin, and E. P. Xing. On triangular versus edge representations: towards scalable modeling of networks. In Neural Information Processing Systems (NIPS), 2012.
Contrastive Learning Using Spectral Methods James Zou Harvard University Daniel Hsu Columbia University David Parkes Harvard University Ryan Adams Harvard University Abstract In many natural settings, the analysis goal is not to characterize a single data set in isolation, but rather to understand the difference between one set of observations and another. For example, given a background corpus of news articles together with writings of a particular author, one may want a topic model that explains word patterns and themes specific to the author. Another example comes from genomics, in which biological signals may be collected from different regions of a genome, and one wants a model that captures the differential statistics observed in these regions. This paper formalizes this notion of contrastive learning for mixture models, and develops spectral algorithms for inferring mixture components specific to a foreground data set when contrasted with a background data set. The method builds on recent moment-based estimators and tensor decompositions for latent variable models, and has the intuitive feature of using background data statistics to appropriately modify moments estimated from foreground data. A key advantage of the method is that the background data need only be coarsely modeled, which is important when the background is too complex, noisy, or not of interest. The method is demonstrated on applications in contrastive topic modeling and genomic sequence analysis. 1 Introduction Generative latent variable models offer an intuitive way to explain data in terms of hidden structure, and are a cornerstone of exploratory data analysis. Popular examples of generative latent variable models include Latent Dirichlet Allocation (LDA) [1] and Hidden Markov Models (HMMs) [2], although the modularity of the generative approach has led to a wide range of variations. 
One of the challenges of using latent variable models for exploratory data analysis, however, is developing models and learning techniques that accurately reflect the intuitions of the modeler. In particular, when analyzing multiple specialized data sets, it is often the case that the most salient statistical structure—that most easily found by fitting latent variable models—is shared across all the data and does not reflect interesting specific local structure. For example, if we apply a topic model to a set of English-language scientific papers on computer science, we might hope to identify different co-occurring words within subfields such as theory, systems, graphics, etc. Instead, such a model will simply learn about English syntactic structure and invent topics that reflect uninteresting statistical correlations between stop words [3]. Intuitively, what we would like from such an exploratory analysis is an answer to the question: what makes these data different from other sets of data in the same broad category? To answer this question, we develop a new set of techniques that we refer to as contrastive learning methods. These methods differentiate between foreground and background data and seek to learn a latent variable model that captures statistical relationships that appear in the foreground but do not appear in the background. Revisiting the previous scientific-topics example, contrastive learning could treat computer science papers as a foreground corpus and (say) English-language news articles as a background corpus. As both corpora share the same broad syntactic structure, a contrastive foreground topic model would be more likely to discover semantic relationships between words that are specific to computer science. This intuition has broad applicability in other models and domains as well.

Figure 1: These figures show foreground and background data from Gaussian distributions. The foreground data has greater variance in its minor direction, but the same variance in its major direction. The means are slightly different. Different projection lines are shown for different methods, to illustrate the difference between (a) the purely unsupervised variance-preserving linear projection of principal component analysis, and (b) the contrastive foreground projection that captures variance that is not present in the background.

For example, in genomics one might use a contrastive hidden Markov model to amplify the signal of a particular class of sequences, relative to the broader genome. Note that the objective of contrastive learning is not to discriminate between foreground and background data, but to learn an interpretable generative model that captures the differential statistics between the two data sets. To clarify this difference, consider the difference between principal component analysis and contrastive analysis. Principal component analysis finds the linear projection that maximally preserves variance, without regard to foreground versus background. A contrastive approach, however, would try to find a linear projection that maximally preserves the foreground variance that is not explained by the background. Figure 1 illustrates the difference between these approaches. Novelty detection [4] is also related, but it does not directly learn a generative model of the novelty.

Our contributions. We formalize the concept of contrastive learning for mixture models and present new spectral contrast algorithms. We prove that by appropriately “subtracting” background moments from the foreground moments, our algorithms recover the model for the foreground-specific data. To achieve this, we extend recent developments in learning latent variable models with moment matching and tensor decompositions. We demonstrate the effectiveness, robustness, and scalability of our method in contrastive topic modeling and contrastive genomics.
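The contrast in Figure 1 can be reproduced in a few lines. This is our toy illustration, not an algorithm from the paper: we score directions by foreground-minus-background covariance, whereas the paper's method works on higher-order moments of mixture models:

```python
# Toy linear contrastive analysis: PCA on the foreground picks the
# shared high-variance direction, while the top eigenvector of the
# covariance difference picks the direction of foreground-only variance.
import numpy as np

rng = np.random.default_rng(0)
# Background: large variance along x, small along y.
bg = rng.normal(size=(4000, 2)) * np.array([3.0, 0.5])
# Foreground: same variance along x, but extra variance along y.
fg = rng.normal(size=(4000, 2)) * np.array([3.0, 2.0])

def top_eigvec(C):
    """Eigenvector of the largest eigenvalue of a symmetric matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs[:, -1]

Cf = np.cov(fg.T)
Cb = np.cov(bg.T)

pca_dir = top_eigvec(Cf)            # plain PCA on the foreground: x axis
contrast_dir = top_eigvec(Cf - Cb)  # contrastive direction: y axis
```

The subtraction cancels the variance that the two data sets share, leaving only the foreground-specific direction, which is the geometric analogue of the moment subtraction developed below.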
2 Contrastive learning in mixture models

Many data sets can be naturally described by a mixture model. The general mixture model has the form

p({x_n}_{n=1}^N ; {(μ_j, w_j)}_{j=1}^J) = ∏_{n=1}^N Σ_{j=1}^J w_j f(x_n | μ_j)    (1)

where {μ_j} are the parameters of the mixture components, {w_j} are the mixture weights, and f(·|μ_j) is the density of the j-th mixture component. Each μ_j is a vector in some parameter space, and a common estimation task is to infer the component parameters {(μ_j, w_j)} given the observed data {x_n}. In many applications, we have two sets of observations {x_n^f} and {x_n^b}, which we call the foreground data and the background data, respectively. The foreground and background are generated by two possibly overlapping sets of mixture components. More concretely, let {μ_j}_{j∈A}, {μ_j}_{j∈B}, and {μ_j}_{j∈C} be three disjoint sets of parameters, with A, B, and C being three disjoint index sets. The foreground {x_n^f} is generated from the mixture model {(μ_j, w_j^f)}_{j∈A∪B}, and the background {x_n^b} is generated from {(μ_j, w_j^b)}_{j∈B∪C}. The goal of contrastive learning is to infer the parameters {(μ_j, w_j^f)}_{j∈A}, which we call the foreground-specific model. The direct approach would be to infer {(μ_j, w_j^f)}_{j∈A∪B} just from {x_n^f}, in parallel infer {(μ_j, w_j^b)}_{j∈B∪C} just from {x_n^b}, and then pick out the components specific to the foreground. However, this involves explicitly learning a model for the background data, which is undesirable if the background is too complex, if {x_n^b} is too noisy, or if we do not want to devote computational power to learning the background. In many applications, we are only interested in learning a generative model for the difference between the foreground and background, because that contrast is the interesting signal. In this paper, we introduce an efficient and general approach to learning the foreground-specific model without having to learn an accurate model of the background.
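The foreground/background component structure above can be illustrated with a toy one-dimensional example; all means, weights, and sample sizes below are invented for illustration:

```python
# Toy contrastive setup: foreground data drawn from components A and B,
# background data from components B and C. The foreground-specific
# signal is the probability mass the foreground has near component A
# that the background lacks.
import random

random.seed(0)
mu = {'A': -5.0, 'B': 0.0, 'C': 5.0}   # 1-D unit-variance component means

def sample(components, weights, n):
    """Draw n points from a mixture of unit-variance Gaussians."""
    draws = random.choices(components, weights=weights, k=n)
    return [random.gauss(mu[c], 1.0) for c in draws]

x_fg = sample(['A', 'B'], [0.7, 0.3], 5000)   # foreground: A and B
x_bg = sample(['B', 'C'], [0.5, 0.5], 5000)   # background: B and C

frac_fg_near_A = sum(abs(x - mu['A']) < 2.0 for x in x_fg) / len(x_fg)
frac_bg_near_A = sum(abs(x - mu['A']) < 2.0 for x in x_bg) / len(x_bg)
```

Here component A generates a large share of foreground points and essentially no background points, which is exactly the asymmetry the contrastive estimators below are designed to pick out.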
Our approach is based on a method-of-moments that uses higher-order tensor decompositions for estimation [5]; we generalize the tensor decomposition technique to deal with our task of contrastive learning. Many other recent spectral learning algorithms for latent variable models are also based on the method-of-moments (e.g., [6–13]), but their parameter estimation cannot account for the asymmetry between foreground and background. We demonstrate spectral contrastive learning through two concrete applications: contrastive topic modeling and contrastive genomics. In contrastive topic modeling we are given a foreground corpus of documents and a background corpus. We want to learn a fully generative topic model that explains the foreground-specific documents (the contrast). We show that even when the background is extremely sparse—too noisy to learn a good background topic model—our spectral contrast algorithm still recovers foreground-specific topics. In contrastive genomics, sequence data are modeled by HMMs. The foreground data are generated by a mixture of two HMMs; one is foreground-specific, and the other captures some background process. The background data are generated by this second HMM. Contrastive learning amplifies the foreground-specific signal, which has meaningful biological interpretations.

3 Contrastive topic modeling

To illustrate contrastive analysis and introduce tensor methods, we consider a simple topic model where each document is generated by exactly one topic. In LDA [1], this corresponds to setting the Dirichlet prior hyperparameter α → 0. The techniques here can be extended to the general α > 0 case using the moment transformations given in [10]. The generative topic model for a document is as follows.

• A word x is represented by an indicator vector e_x ∈ R^D which is 1 in its x-th entry and 0 elsewhere. D is the size of the vocabulary. A document is a bag-of-words and is represented by a vector c ∈ R^D with non-negative integer word counts.
• A topic is first chosen according to the distribution on [K] := {1, 2, ..., K} specified by the probability vector w ∈ R^K.
• Given that the chosen topic is t, the words in the document are drawn independently from the distribution specified by the probability vector μ_t ∈ R^D.

Following previous work (e.g., [10]) we assume that μ_1, μ_2, ..., μ_K are linearly independent probability vectors in R^D. Let the foreground corpus of documents be generated by the mixture of |A| + |B| topics {(μ_t, w_t^f)}_{t∈A} ∪ {(μ_t, w_t^f)}_{t∈B}, and the background corpus by the mixture of |B| + |C| topics {(μ_t, w_t^b)}_{t∈B} ∪ {(μ_t, w_t^b)}_{t∈C} (here, we assume (A, B, C) is a nontrivial partition of [K], and that w_t^f, w_t^b > 0 for all t). Our goal is to learn {(μ_t, w_t^f)}_{t∈A}.

3.1 Moment decompositions

We use the symbol ⊗ to denote the tensor product of vectors, so a ⊗ b is the matrix whose (i, j)-th entry is a_i b_j, and a ⊗ b ⊗ c is the third-order tensor whose (i, j, k)-th entry is a_i b_j c_k. Given a third-order tensor T ∈ R^{d1×d2×d3} and vectors a ∈ R^{d1}, b ∈ R^{d2}, and c ∈ R^{d3}, we let T(I, b, c) ∈ R^{d1} denote the vector whose i-th entry is Σ_{j,k} T_{i,j,k} b_j c_k, and T(a, b, c) denote the scalar Σ_{i,j,k} T_{i,j,k} a_i b_j c_k.

We review the moments of the word observations in this model (see, e.g., [10]). Let x_1, x_2, x_3 ∈ [D] be three random words sampled from a random document generated by the foreground model (the discussion here also applies to the background model). The second-order (cross) moment matrix M_2^f := E[e_{x1} ⊗ e_{x2}] is the matrix whose (i, j)-th entry is the probability that x_1 = i and x_2 = j. Similarly, the third-order (cross) moment tensor M_3^f := E[e_{x1} ⊗ e_{x2} ⊗ e_{x3}] is the third-order tensor whose (i, j, k)-th entry is the probability that x_1 = i, x_2 = j, x_3 = k.

Observe that for any t ∈ A ∪ B, the i-th entry of E[e_{x1} | topic = t] is precisely the probability that x_1 = i given topic = t, which is the i-th entry of μ_t. Therefore, E[e_{x1} | topic = t] = μ_t. Since the words are independent given the topic, the (i, j)-th entry of E[e_{x1} ⊗ e_{x2} | topic = t] is the product of the i-th and j-th entries of μ_t, i.e., E[e_{x1} ⊗ e_{x2} | topic = t] = μ_t ⊗ μ_t. Similarly, E[e_{x1} ⊗ e_{x2} ⊗ e_{x3} | topic = t] = μ_t ⊗ μ_t ⊗ μ_t. Averaging over the choices of t ∈ A ∪ B with the weights w_t^f implies that the second- and third-order moments are

M_2^f = E[e_{x1} ⊗ e_{x2}] = Σ_{t∈A∪B} w_t^f μ_t ⊗ μ_t   and   M_3^f = E[e_{x1} ⊗ e_{x2} ⊗ e_{x3}] = Σ_{t∈A∪B} w_t^f μ_t ⊗ μ_t ⊗ μ_t.

(We discuss how to efficiently use documents of length > 3 in Section 5.2.) We can similarly decompose the background moments M_2^b and M_3^b in terms of tensor products of {μ_t}_{t∈B∪C}. These equations imply the following proposition (proved in Appendix A).

Proposition 1. Let M_2^f, M_3^f and M_2^b, M_3^b be the second- and third-order moments from the foreground and background data, respectively. Define M_2 := M_2^f − γ M_2^b and M_3 := M_3^f − γ M_3^b. If γ ≥ max_{j∈B} w_j^f / w_j^b, then

M_2 = Σ_{t=1}^K ω_t μ_t ⊗ μ_t   and   M_3 = Σ_{t=1}^K ω_t μ_t ⊗ μ_t ⊗ μ_t    (2)

where ω_t = w_t^f > 0 for t ∈ A (foreground-specific topic), and ω_t ≤ 0 for t ∈ B ∪ C.

Using tensor decompositions. Proposition 1 implies that the modified moments M_2 and M_3 have low-rank decompositions in which the components t with positive multipliers ω_t correspond to the foreground-specific topics {(μ_t, w_t^f)}_{t∈A}.

Algorithm 1 Contrastive Topic Model estimator
input: Foreground and background documents {c_n^f}, {c_n^b}; parameter γ > 0; number of topics K.
output: Foreground-specific topics Topics^f.
1: Let M̂_2^f and M̂_3^f (M̂_2^b and M̂_3^b) be the foreground (background) second- and third-order moment estimates based on {c_n^f} ({c_n^b}), and let M̂_2 := M̂_2^f − γ M̂_2^b and M̂_3 := M̂_3^f − γ M̂_3^b.
2: Run Algorithm 2 with input M̂_2, M̂_3, K, and N to obtain {(â_t, λ̂_t) : t ∈ [K]}.
3: Topics^f := {(â_t / ∥â_t∥_1, 1/λ̂_t^2) : t ∈ [K], λ̂_t > 0}.
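Step 1 of Algorithm 1 (forming empirical moments and subtracting the scaled background) can be sketched on a toy vocabulary as follows; the documents and γ are invented for illustration, and a real implementation would pass the results on to the generalized tensor power method:

```python
# Sketch of the contrastive moments M_2 = M_2^f - gamma * M_2^b and
# M_3 = M_3^f - gamma * M_3^b, estimated from the first three words of
# each document as in the text. Not the authors' code.
import numpy as np

def moments(docs, D):
    """Empirical E[e_x1 ⊗ e_x2] and E[e_x1 ⊗ e_x2 ⊗ e_x3] over documents."""
    M2 = np.zeros((D, D))
    M3 = np.zeros((D, D, D))
    for doc in docs:
        x1, x2, x3 = doc[:3]
        M2[x1, x2] += 1.0
        M3[x1, x2, x3] += 1.0
    return M2 / len(docs), M3 / len(docs)

D, gamma = 4, 1.0
fg_docs = [[0, 1, 0], [0, 0, 1], [2, 3, 2], [3, 2, 3]]  # topics A and B
bg_docs = [[2, 3, 3], [3, 2, 2]]                        # topic B only

M2f, M3f = moments(fg_docs, D)
M2b, M3b = moments(bg_docs, D)
M2 = M2f - gamma * M2b   # contrastive moments fed to the power method
M3 = M3f - gamma * M3b
```

After the subtraction, the block of word co-occurrences shared with the background carries non-positive weight, while the foreground-specific block keeps positive weight, as in Proposition 1.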
A key technical innovation of this paper is a generalized tensor power method, described in Section 5, which takes as input (estimates of) second- and third-order tensors of the form in (2), and approximately recovers the individual components. We argue that under some natural conditions, the generalized power method is robust to large perturbations in M_2^b and M_3^b, which suggests that foreground-specific topics can be learned even when it is not possible to accurately model the background. We use the generalized tensor power method to estimate the foreground-specific topics in our Contrastive Topic Model estimator (Algorithm 1). Proposition 1 gives the lower bound on γ; we empirically find that γ ≈ max_{j∈B} w_j^f / w_j^b gives good results. When γ is too large, the convergence of the tensor power method worsens. Where possible in practice, we recommend using prior beliefs about the foreground and background compositions to estimate max_{j∈B} w_j^f / w_j^b, and then varying γ as part of the exploratory analysis.

3.2 Experiments with contrastive topic modeling

We test our contrastive topic models on the RCV1 dataset, which consists of ≈800,000 news articles. Each document comes with multiple category labels (e.g., economics, entertainment) and region labels (e.g., USA, Europe, China). The corpus spans a large set of complex and overlapping categories, making this a good dataset for validating our contrastive learning algorithm. In one set of experiments, we take documents associated with one region as the foreground corpus, and documents associated with a general theme, such as economics, as the background. The goal of the contrast is to find the region-specific topics which are not relevant to the background theme.
The top half of Table 1 shows the example where we take USA-related documents as the foreground and Economics as the background theme.

Table 1: Top words from representative topics: foreground alone (left); foreground/background contrast (right). Each column corresponds to one topic. (Panels: USA foreground; USA foreground with Economics background; China foreground; China foreground with Economics background.)

Figure 2: (a) Relative AUC (classification score) as a function of γ (Sec. 3.2), for background corpora of size N = 10000, 1000, 100, and 50. (b) Emission probabilities of HMM states (Sec. 4).

We first set the contrast parameter γ = 0 in Algorithm 1; this learns the topics from the foreground dataset alone. Due to the composition of the corpus, the foreground topics for USA are dominated by topics relevant to stock markets and trade; representative topics and keywords are shown on the left of Table 1. Then we increase γ to observe the effects of contrast. In the right half of Table 1, we show the most heavily weighted topics and keywords for γ = 2. The topics involving markets and trade are also present in the background corpus, so their weights are reduced through contrast. Topics which are very USA-specific and distinct from economics rise to the top: basketball, baseball, scientific research, etc. A similar experiment with China-related articles as the foreground and the same economics-themed background is shown in the bottom of Table 1. These examples illustrate that Algorithm 1 learns topics which are unique to the foreground. To quantify this effect, we devised a specificity test. Using the RCV1 labels, we partition the foreground USA documents into two disjoint groups: documents with any economics-related labels (group 0) and the rest (group 1). Because Algorithm 1 learns the full probabilistic model, we use the inferred topic parameters to compute the marginal likelihood of each foreground document given the model. We then use the likelihood value to classify each foreground document as belonging to group 0 or 1. The performance of the classifier is summarized by the AUC score. We first set γ = 0 and compute the AUC score, which corresponds to how well a topic model learned from only the foreground can distinguish between the two groups. We use this score as the baseline and normalize it to equal 1. The hope is that by using the background data, the contrastive model can better identify the documents that are generated by foreground-specific topics.
Indeed, as γ increases, the AUC score improves significantly over the benchmark (dark blue bars in Figure 2(a)). For γ > 2 we find that the foreground-specific topics do not change qualitatively. A major advantage of our approach is that we do not need to learn a very accurate background model to learn the contrast. To validate this, we downsample the background corpus to 1,000, 100, and 50 documents. This simulates settings where the background is very sparsely sampled, so it is not possible to learn a background model very accurately. Qualitatively, we observe that even with only 50 randomly sampled background documents, Algorithm 1 still recovers topics specific to the USA and not related to Economics. At γ = 2, it learns sports and NASA/space as the most prominent foreground-specific topics. This is supported by the specificity test, where contrastive topic models with a sparse background better identify foreground-specific documents relative to the γ = 0 (foreground-data-only) model.

4 Contrastive Hidden Markov Models

Hidden Markov Models (HMMs) are commonly used to model sequence and time-series data. For example, a biologist may collect several sequences from an experiment; some of the sequences are generated by a biological process of interest (modeled by an HMM), while others are generated by a different "background" process, e.g., noise or a process that is not of primary interest. Consider a simple generative process where foreground data are generated by a mixture of two HMMs, (1 − γ) HMM_A + γ HMM_B, and background data are generated by HMM_B. The goal is to learn the parameters of HMM_A, which models the biological process of interest. As we did for topic models, we can estimate a contrastive HMM by taking appropriate combinations of observable moments. Let $x^f_1, x^f_2, x^f_3, \ldots$ be a random emission sequence in $\mathbb{R}^D$ generated by the foreground model (1 − γ) HMM_A + γ HMM_B, and $x^b_1, x^b_2, x^b_3, \ldots$ be the sequence generated by the background model HMM_B.
Following [5], we estimate the following cross-moment matrices and tensors: $M^f_{1,2} := \mathbb{E}[x^f_1 \otimes x^f_2]$, $M^f_{1,3} := \mathbb{E}[x^f_1 \otimes x^f_3]$, $M^f_{2,3} := \mathbb{E}[x^f_2 \otimes x^f_3]$, and $M^f_{1,2,3} := \mathbb{E}[x^f_1 \otimes x^f_2 \otimes x^f_3]$, as well as the corresponding moments for the background model. This is similar to the estimation of the word-pair and word-triple frequencies in LDA. Here we only use the first three observations in each sequence, but it is also justifiable to average over all consecutive observation triplets [14]. Then, analogous to Proposition 1, we define the contrastive moments as $M_{u,v} := M^f_{u,v} - \gamma M^b_{u,v}$ (for $\{u, v\} \subset \{1, 2, 3\}$) and $M_{1,2,3} := M^f_{1,2,3} - \gamma M^b_{1,2,3}$. In the Appendix (Sec. D and Algorithm 3), we describe how to recover the foreground-specific model HMM_A. The key technical difference from contrastive LDA lies in the asymmetric generalization of the Tensor Power Method of Algorithm 2.

Application to contrastive genomics. For many biological problems, it is important to understand how signals in certain data are enriched relative to some related background data. For instance, we may want to contrast foreground data composed of gene expressions (or mutation rates, protein levels, etc.) from one population against background data taken from (say) a control experiment, a different cell type, or a different time point. The contrastive analysis methods developed here can be a powerful exploratory tool for biology. As a concrete illustration, we use spectral contrast to refine the characterization of chromatin states. The human genome consists of ≈3 billion DNA bases, and it has recently been shown that these bases can be naturally segmented into a handful of chromatin states [15, 16]. Each state describes a set of genomic properties: several states describe different active and regulatory features, while other states describe repressive features. The chromatin state varies across the genome, remaining constant for relatively short regions (say, several thousand bases).
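As a concrete sketch (our code, not the authors'), the contrastive cross-moment construction above can be written directly in NumPy; the function names and the dense-array representation of one-hot emission sequences are illustrative assumptions:

```python
import numpy as np

def cross_moments(seqs):
    """Empirical cross moments from the first three emissions of each
    sequence: M_{1,2}, M_{1,3}, M_{2,3} (D x D) and M_{1,2,3} (D x D x D)."""
    X = np.asarray([s[:3] for s in seqs], dtype=float)   # shape (N, 3, D)
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    N = len(seqs)
    M12 = x1.T @ x2 / N
    M13 = x1.T @ x3 / N
    M23 = x2.T @ x3 / N
    M123 = np.einsum('ni,nj,nk->ijk', x1, x2, x3) / N
    return M12, M13, M23, M123

def contrastive_moments(fg_seqs, bg_seqs, gamma):
    """M_{u,v} := M^f_{u,v} - gamma * M^b_{u,v}, and likewise for M_{1,2,3}."""
    return tuple(Mf - gamma * Mb
                 for Mf, Mb in zip(cross_moments(fg_seqs), cross_moments(bg_seqs)))
```

With γ = 0 the contrastive moments reduce to the foreground moments, and with γ = 1 and identical foreground/background data they vanish, matching the definitions above.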
Algorithm 2: Generalized Tensor Power Method
Input: $\hat M_2 \in \mathbb{R}^{D \times D}$; $\hat M_3 \in \mathbb{R}^{D \times D \times D}$; target rank $K$; number of iterations $N$.
Output: Estimates $\{(\hat a_t, \hat\lambda_t) : t \in [K]\}$.
1: Let $\hat M_2^\dagger$ := Moore–Penrose pseudoinverse of the rank-$K$ approximation to $\hat M_2$; initialize $T := \hat M_3$.
2: for $t = 1$ to $K$ do
3:   Randomly draw $u^{(0)} \in \mathbb{R}^D$ from any distribution with full support in the range of $\hat M_2$.
4:   Repeat the power iteration update $N$ times: $u^{(i+1)} := T(I, \hat M_2^\dagger u^{(i)}, \hat M_2^\dagger u^{(i)})$.
5:   $\hat a_t := u^{(N)} / |\langle u^{(N)}, \hat M_2^\dagger u^{(N)} \rangle|^{1/2}$; $\hat\lambda_t := T(\hat M_2^\dagger \hat a_t, \hat M_2^\dagger \hat a_t, \hat M_2^\dagger \hat a_t)$; $T := T - |\hat\lambda_t|\, \hat a_t \otimes \hat a_t \otimes \hat a_t$.
6: end for

Learning the nature of the chromatin states is of great interest in genomics. The state-of-the-art approach for modeling chromatin states uses an HMM [16]. The observable data are, at every 200 bases, a binary feature vector in {0, 1}^10. Each feature indicates the presence/absence of a specific chemical feature at that site (assumed independent given the chromatin state). This corresponds to ≈15 million observations across the genome, which are used to learn the parameters of an HMM. Each chromatin state corresponds to a latent state, characterized by a vector of 10 emission probabilities. We take as foreground data the observations from exons, introns, and promoters, which account for about 30% of the genome; as background data, we take observations from intergenic regions. Because exons and introns are transcribed, we expect the foreground to be a mixture of functional chromatin states and spurious states due to noise, and expect more of the background observations to be due to non-functional processes. The contrastive HMM should capture biologically meaningful signals in the foreground data. In Figure 2(b), we show the emission matrix for the foreground HMM and for the contrastive HMM. We learn K = 7 latent states, corresponding to 7 chromatin states. Each row is a chemical feature of the genome. The foreground states recover the known biological chromatin states from the literature [16].
For example, state 6, with high emission for K36me3, corresponds to transcribed genes; state 5 to active enhancers; and state 4 to poised enhancers. In the contrastive HMM, most of the states are the same as before. Interestingly, state 7, which is associated with the feature K20me1, drops from the largest component of the foreground to a very small component of the contrast. This finding suggests that state 7 and K20me1 are less specific to gene bodies than previously thought [17], and raises more questions regarding its function, which is relatively unknown.

5 Generalized tensor power method

We now describe our general approach for tensor decomposition used in Algorithm 1. Let $a_1, a_2, \ldots, a_K \in \mathbb{R}^D$ be linearly independent vectors, and set $A := [a_1 | a_2 | \cdots | a_K]$. Let $M_2 := \sum_{i=1}^K \sigma_i\, a_i \otimes a_i$ and $M_3 := \sum_{i=1}^K \lambda_i\, a_i \otimes a_i \otimes a_i$, where $\sigma_i = \mathrm{sign}(\lambda_i) \in \{\pm 1\}$. The goal is to recover $\{(a_t, \lambda_t) : t \in [K]\}$ from (estimates of) $M_2$ and $M_3$. The following proposition shows that one of the vectors $a_i$ (and its associated $\lambda_i$) can be obtained from $M_2$ and $M_3$ using a simple power method similar to that of [5, 18] (note that which of the $K$ components is obtained depends on the initialization of the procedure). Note that the error $\varepsilon$ is exponentially small in $2^t$ after $t$ iterations, so the number of iterations required to converge is very small. Below, we use $(\cdot)^\dagger$ to denote the Moore–Penrose pseudoinverse.

Proposition 2 (Informal statement). Consider the sequence $u^{(0)}, u^{(1)}, \ldots$ in $\mathbb{R}^D$ determined by $u^{(i+1)} := M_3(I, M_2^\dagger u^{(i)}, M_2^\dagger u^{(i)})$. Then for any $\varepsilon \in (0, 1)$ and almost all $u^{(0)} \in \mathrm{range}(A)$, there exist $t^* \in [K]$ and $c_1, c_2 > 0$ (all depending on $u^{(0)}$ and $\{(a_t, \lambda_t) : t \in [K]\}$) such that $\|\tilde u^{(i)} - a_{t^*}\|_2 \le \varepsilon$ and $\big|\tilde\lambda - |\lambda_{t^*}|\big| \le |\lambda_{t^*}|\varepsilon + \max_{t \ne t^*} |\lambda_t|\, \varepsilon^{3/2}$ for $\varepsilon := c_1 \exp(-c_2 2^i)$, where $\tilde u^{(i)} := \sigma_{t^*} u^{(i)} / \|A^\dagger u^{(i)}\|$ and $\tilde\lambda := M_3(M_2^\dagger \tilde u^{(i)}, M_2^\dagger \tilde u^{(i)}, M_2^\dagger \tilde u^{(i)})$. See Appendix B for the formal statement and proof, which give explicit dependencies.
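The iteration of Proposition 2 together with the deflation of Algorithm 2 can be sketched in NumPy as follows. This is an illustrative reimplementation, not the authors' code; it adds a per-iteration renormalization of $u$ (which does not change the direction of the iterates) to keep the raw iterates from overflowing:

```python
import numpy as np

def t_apply(T, u, v):
    """T(I, u, v): contract the last two modes of a D x D x D tensor."""
    return np.einsum('ijk,j,k->i', T, u, v)

def generalized_tensor_power(M2, M3, K, n_iter=30, seed=0):
    """Recover {(a_t, lambda_t)} from M2 = sum_i sigma_i a_i (x) a_i and
    M3 = sum_i lambda_i a_i (x) a_i (x) a_i, following Algorithm 2."""
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(M2)
    # pseudoinverse of the rank-K approximation to M2
    M2p = Vt[:K].T @ np.diag(1.0 / s[:K]) @ U[:, :K].T
    T = M3.copy()
    est = []
    for _ in range(K):
        u = U[:, :K] @ rng.standard_normal(K)   # random start in range(M2)
        for _ in range(n_iter):
            u = t_apply(T, M2p @ u, M2p @ u)
            u /= np.linalg.norm(u)              # keep iterates bounded
        a = u / np.sqrt(abs(u @ (M2p @ u)))
        w = M2p @ a
        lam = np.einsum('ijk,i,j,k->', T, w, w, w)
        T = T - abs(lam) * np.einsum('i,j,k->ijk', a, a, a)  # deflate
        est.append((a, lam))
    return est
```

On exact moments with well-separated components, the quadratic convergence noted above means a few dozen iterations suffice to reach machine precision.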
We use the iterations from Proposition 2 in our main decomposition algorithm (Algorithm 2), which is a variant of the main algorithm from [5]. The main difference is that we do not require $M_2$ to be positive semidefinite; this is essential for our application, but requires subtle modifications. For simplicity, we assume we run Algorithm 2 with exact moments $M_2$ and $M_3$; a detailed perturbation analysis would be similar to that in [5] but is beyond the scope of this paper. Proposition 2 shows that a single component can be accurately recovered, and we use deflation to recover subsequent components (normalization and deflation are further discussed in Appendix B). As noted in [5], errors introduced in this deflation step have only a lower-order effect, and therefore it can be used reliably to recover all $K$ components. For increased robustness, we actually repeat steps 3–5 in Algorithm 2 several times, and use the results of the trial in which $|\hat\lambda_t|$ takes the median value.

5.1 Robustness to sparse background sampling

Algorithm 1 can recover the foreground-specific $\{\mu_t\}_{t \in A}$ even with relatively small numbers of background data. We can illustrate this robustness under the assumption that the support of the foreground-specific topics $S_0 := \cup_{t \in A}\, \mathrm{supp}(\mu_t)$ is disjoint from that of the other topics $S_1 := \cup_{t \in B \cup C}\, \mathrm{supp}(\mu_t)$ (similar to Brown clusters [19]). Suppose that $M^f_2$ is estimated accurately using a large sample of foreground documents. Then because $S_0$ and $S_1$ are disjoint, Algorithm 1 (using sufficiently large γ) will accurately recover the topics $\{(\mu_t, w^f_t) : t \in A\}$ in $\mathrm{Topics}^f$. The remaining concern is that sampling errors will cause Algorithm 1 to mistakenly return additional topics in $\mathrm{Topics}^f$, namely the topics $t \in B \cup C$. It thus suffices to guarantee that the signs of the $\hat\lambda_t$ returned by Algorithm 2 are correct.
The sample size requirement for this is independent of the desired accuracy level for the foreground-specific topics; it depends only on γ and fixed properties of the background model.¹ As reported in Section 3.2, this robustness is borne out in our experiments.

5.2 Scalability

Our algorithms are scalable to large datasets when implemented to exploit sparsity and low-rank structure (each experiment we report runs on a standard laptop in a few minutes). Two important details are (i) how the moments $M_2$ and $M_3$ are represented, and (ii) how to execute the power iteration update in Algorithm 2. These issues are only briefly mentioned in [5], without proof, so in this section we address them in detail.

Efficient moment estimates for topic models. We first discuss how to represent empirical estimates of the second- and third-order moments $M^f_2$ and $M^f_3$ for the foreground documents (the same will hold for the background documents). Let document $n \in [N]$ have length $\ell_n$, and let $c_n \in \mathbb{N}^D$ be its word-count vector (its $i$-th entry $c_n(i)$ is the number of times word $i$ appears in document $n$).

Proposition 3 (Estimator for $M^f_2$). Assume $\ell_n \ge 2$. For any distinct $i, j \in [D]$, $\mathbb{E}\big[(c_n(i)^2 - c_n(i))/(\ell_n(\ell_n - 1))\big] = [M^f_2]_{i,i}$ and $\mathbb{E}\big[c_n(i)\, c_n(j)/(\ell_n(\ell_n - 1))\big] = [M^f_2]_{i,j}$.

By Proposition 3, an unbiased estimator of $M^f_2$ is $\hat M^f_2 := N^{-1} \sum_{n=1}^N (\ell_n(\ell_n - 1))^{-1} (c_n \otimes c_n - \mathrm{diag}(c_n))$. Since $\hat M^f_2$ is a sum of sparse matrices, it can be represented efficiently, and we may use sparsity-aware methods for computing its low-rank spectral decompositions. It is similarly easy to obtain such a decomposition for $\hat M^f_2 - \gamma \hat M^b_2$, from which one can compute its pseudoinverse and represent it in factored form as $P Q^\top$ for some $P, Q \in \mathbb{R}^{D \times K}$.

Proposition 4 (Estimator for $M^f_3$). Assume $\ell_n \ge 3$. For any distinct $i, j, k \in [D]$, $\mathbb{E}\big[(c_n(i)^3 - 3 c_n(i)^2 + 2 c_n(i))/(\ell_n(\ell_n - 1)(\ell_n - 2))\big] = [M^f_3]_{i,i,i}$, $\mathbb{E}\big[(c_n(i)^2 c_n(j) - c_n(i) c_n(j))/(\ell_n(\ell_n - 1)(\ell_n - 2))\big] = [M^f_3]_{i,i,j}$, and $\mathbb{E}\big[c_n(i) c_n(j) c_n(k)/(\ell_n(\ell_n - 1)(\ell_n - 2))\big] = [M^f_3]_{i,j,k}$.
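A minimal sketch of the estimators in Propositions 3 and 4 (dense NumPy for readability; a production version would exploit sparsity as discussed above, and the function names are ours):

```python
import numpy as np

def estimate_M2(counts):
    """Unbiased estimate of M2^f from word-count vectors (Proposition 3):
    average over documents of (c (x) c - diag(c)) / (l (l - 1))."""
    N, D = counts.shape
    M2 = np.zeros((D, D))
    for c in counts:
        l = c.sum()
        M2 += (np.outer(c, c) - np.diag(c)) / (l * (l - 1))
    return M2 / N

def estimate_M3_Ivv(counts, v):
    """One-pass estimate of M3^f(I, v, v) for a direction v (from Proposition 4):
    per document, <c,v>^2 c - 2<c,v>(c*v) - <c, v*v> c + 2 c*v*v."""
    N, D = counts.shape
    out = np.zeros(D)
    for c in counts:
        l = c.sum()
        cv = c @ v
        out += (cv**2 * c - 2*cv*(c*v) - (c @ (v*v))*c + 2*c*v*v) / (l*(l-1)*(l-2))
    return out / N
```

Each per-document term touches only the non-zero entries of $c_n$, which is what gives the single-pass, sparsity-proportional running time claimed in the text.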
By Proposition 4, an unbiased estimator of $M^f_3(I, v, v)$ for any vector $v \in \mathbb{R}^D$ is $\hat M^f_3(I, v, v) := N^{-1} \sum_{n=1}^N (\ell_n(\ell_n - 1)(\ell_n - 2))^{-1} \big( \langle c_n, v \rangle^2 c_n - 2 \langle c_n, v \rangle (c_n \circ v) - \langle c_n, v \circ v \rangle c_n + 2\, c_n \circ v \circ v \big)$ (where $\circ$ denotes the component-wise product of vectors). Let $\mathrm{nnz}(c_n)$ be the number of non-zero entries in $c_n$; then each term in the sum takes only $O(\mathrm{nnz}(c_n))$ operations to compute. So the time to compute $\hat M^f_3(I, v, v)$ is proportional to the number of non-zero entries of the term-document matrix, using just a single pass over the document corpus.

Power iteration computation. Each power iteration update in Algorithm 2 just requires evaluating $\hat M^f_3(I, v, v) - \gamma \hat M^b_3(I, v, v)$ (one-pass linear time, as shown above) for $v := \hat M_2^\dagger u^{(i)}$, and computing the deflation $\sum_{\tau < t} \hat\lambda_\tau \langle \hat a_\tau, v \rangle^2 \hat a_\tau$ ($O(DK)$ time). Since $\hat M_2^\dagger$ is kept in rank-$K$ factored form, $v$ can also be computed in $O(DK)$ time.

6 Discussion

In this paper, we formalize a model of contrastive learning and introduce efficient spectral methods to learn the model parameters specific to the foreground. Experiments with contrastive topic modeling show that Algorithm 1 can learn foreground-specific topics even when the background data is noisy. Our application in contrastive genomics illustrates the utility of this method in exploratory analysis of biological data. The contrast identifies an intriguing change associated with K20me1, which can be followed up with biological experiments. While we have focused in this work on a natural contrast model for mixture models, we also discuss an alternative approach in Appendix E.

Acknowledgement. This work was partially supported by DARPA Young Faculty Award DARPA N66001-12-1-4219.

¹For instance, if the background model consists only of one topic µ, then the analyses from [5, 10] can be adapted to bound the sample size requirement by O(1/µ6).

References
[1] David M. Blei, Andrew Ng, and Michael Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[2] Leonard E. Baum and J. A. Eagon.
An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology. Bull. Amer. Math. Soc., 73(3):360–363, 1967.
[3] J. Zou and R. Adams. Priors for diversity in generative latent variable models. In Advances in Neural Information Processing Systems 25, 2012.
[4] B. Schölkopf, R. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt. Support vector method for novelty detection. In Advances in Neural Information Processing Systems 12, 2000.
[5] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models, 2012. arXiv:1210.7559.
[6] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, 78(5):1460–1480, 2012.
[7] S. Siddiqi, B. Boots, and G. Gordon. Reduced-rank hidden Markov models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[8] B. Balle, A. Quattoni, and X. Carreras. Local loss optimization in operator models: A new insight into spectral learning. In Twenty-Ninth International Conference on Machine Learning, 2012.
[9] S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. Spectral learning of latent-variable PCFGs. In Proceedings of the Association for Computational Linguistics, 2012.
[10] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y. K. Liu. A spectral algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 25, 2012.
[11] D. Hsu, S. M. Kakade, and P. Liang. Identifiability and unmixing of latent parse trees. In Advances in Neural Information Processing Systems 25, 2012.
[12] S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. Experiments with spectral learning of latent-variable PCFGs. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 2013.
[13] A. T. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. In Thirtieth International Conference on Machine Learning, 2013.
[14] A. Kontorovich, B. Nadler, and R. Weiss. On learning parametric-output HMMs. In Thirtieth International Conference on Machine Learning, 2013.
[15] J. Zhu et al. Genome-wide chromatin state transitions associated with developmental and environmental cues. Cell, 152(3):642–654, 2013.
[16] J. Ernst et al. Mapping and analysis of chromatin state dynamics in nine human cell types. Nature, 473(7345):43–49, 2011.
[17] D. Beck et al. Signal analysis for genome-wide maps of histone modifications measured by ChIP-seq. Bioinformatics, 28(8):1062–1069, 2012.
[18] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl., 21(4):1324–1342, 2000.
[19] Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. Class-based n-gram models of natural language. Comput. Linguist., 18(4):467–479, 1992.
On the Relationship Between Binary Classification, Bipartite Ranking, and Binary Class Probability Estimation Harikrishna Narasimhan Shivani Agarwal Department of Computer Science and Automation Indian Institute of Science, Bangalore 560012, India {harikrishna,shivani}@csa.iisc.ernet.in Abstract We investigate the relationship between three fundamental problems in machine learning: binary classification, bipartite ranking, and binary class probability estimation (CPE). It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model. However, not much is known about other directions. Formally, these relationships involve regret transfer bounds. In this paper, we introduce the notion of weak regret transfer bounds, where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution (and in practice, must be estimated from data). We then show that, in this weaker sense, a good bipartite ranking model can be used to construct a good classification model (by thresholding at a suitable point), and more surprisingly, also to construct a good binary CPE model (by calibrating the scores of the ranking model). 1 Introduction Learning problems with binary labels, where one is given training examples consisting of objects with binary labels (such as emails labeled spam/non-spam or documents labeled relevant/irrelevant), are widespread in machine learning. 
These include for example the three fundamental problems of binary classification, where the goal is to learn a classification model which, when given a new object, can predict its label; bipartite ranking, where the goal is to learn a ranking model that can rank new objects such that those in one category are ranked higher than those in the other; and binary class probability estimation (CPE), where the goal is to learn a CPE model which, when given a new object, can estimate the probability of its belonging to each of the two classes. Of these, binary classification is classical, although several fundamental questions related to binary classification have been understood only relatively recently [1–4]; bipartite ranking is more recent and has received much attention in recent years [5–8], and binary CPE, while a classical problem, also continues to be actively investigated [9,10]. All three problems abound in applications, ranging from email classification to document retrieval and computer vision to medical diagnosis. It is well known that a good binary CPE model can be used to obtain a good binary classification model (in a formal sense that we will detail below; specifically, in terms of regret transfer bounds) [4, 11]; more recently, it was shown that a good binary CPE model can also be used to obtain a good bipartite ranking model (again, in terms of regret transfer bounds, to be detailed below) [12]. It is also known that a binary classification model cannot necessarily be converted to a CPE model.1 However, beyond this, not much is understood about the exact relationship between these problems.2 1Note that we start from a single classification model, which rules out the probing reduction of [13]. 2There are some results suggesting equivalence between specific boosting-style classification and ranking algorithms [14,15], but this does not say anything about relationships between the problems per se. 
Figure 1: (a) Current state of knowledge; (b) state of knowledge after the results of this paper. Here 'S' denotes a 'strong' regret transfer relationship; 'W' denotes a 'weak' regret transfer relationship.

In this paper, we introduce the notion of weak regret transfer bounds, where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution (and in practice, must be estimated from data). We then show such weak regret transfer bounds (under mild technical conditions) from bipartite ranking to binary classification, and from bipartite ranking to binary CPE. Specifically, we show that, given a good bipartite ranking model and access to either the distribution or a sample from it, one can estimate a suitable threshold and convert the ranking model into a good binary classification model; similarly, given a good bipartite ranking model and access to the distribution or a sample, one can 'calibrate' the ranking model to construct a good binary CPE model. Though weak, these regret bounds are non-trivial in the sense that the sample size required for constructing a good classification or CPE model from an existing ranking model is smaller than what might be required to learn such models from scratch. The main idea in transforming a ranking model into a classifier is to find a threshold that minimizes the expected classification error on the distribution, or the empirical classification error on the sample. We derive these results for cost-sensitive classification with any cost parameter c. The main idea in transforming a ranking model into a CPE model is to find a monotonically increasing function from $\mathbb{R}$ to [0, 1] which, when applied to the ranking model, minimizes the expected CPE error on the distribution, or the empirical CPE error on the sample; this is similar to the idea of isotonic regression [16–19].
The proof here makes use of a recent result of [20] which relates the squared error of a calibrated CPE model to classification errors over uniformly drawn costs, and a result on the Rademacher complexity of a class of bounded monotonically increasing functions on R [21]. As a by-product of our analysis, we also obtain a weak regret transfer bound from bipartite ranking to problems involving the area under the cost curve [22] as a performance measure. The relationships between the three problems – both those previously known and those established in this paper – are summarized in Figure 1. As noted above, in a weak regret transfer relationship, given a model for one type of problem, one needs access to a data sample in order to transform this to a model for another problem. This is in contrast to the previous ‘strong’ relationships, where a binary CPE model can simply be thresholded at 0.5 (or cost c) to yield a classification model, or can simply be used directly as a ranking model. Nevertheless, even with the weak relationships, one still gets that a statistically consistent algorithm for bipartite ranking can be converted into a statistically consistent algorithm for binary classification or for binary CPE. Moreover, as we demonstrate in our experiments, if one has access to a good ranking model and only a small additional sample, then one is better off using this sample to transform the ranking model into a classification or CPE model rather than using the limited sample to learn a classification or CPE model from scratch. The paper is structured as follows. We start with some preliminaries and background in Section 2. Sections 3 and 4 give our main results, namely weak regret transfer bounds from bipartite ranking to binary classification, and from bipartite ranking to binary CPE, respectively. Section 5 gives experimental results on both synthetic and real data. All proofs are included in the appendix. 
2 Preliminaries and Background

Let X be an instance space and let D be a probability distribution on X × {±1}. For (x, y) ∼ D, we denote η(x) = P(y = 1 | x) and p = P(y = 1). In the settings we are interested in, given a training sample $S = ((x_1, y_1), \ldots, (x_n, y_n)) \in (X \times \{\pm 1\})^n$ with examples drawn iid from D, the goal is to learn a binary classification model, a bipartite ranking model, or a binary CPE model. In what follows, for $u \in [-\infty, \infty]$, we will denote $\mathrm{sign}(u) = 1$ if $u > 0$ and $-1$ otherwise, and $\overline{\mathrm{sign}}(u) = 1$ if $u \ge 0$ and $-1$ otherwise.

(Cost-Sensitive) Binary Classification. Here the goal is to learn a model $h : X \to \{\pm 1\}$. Typically, one is interested in models h with small expected 0-1 classification error: $\mathrm{er}^{0\text{-}1}_D[h] = \mathbb{E}_{(x,y)\sim D}\big[\mathbf{1}(h(x) \ne y)\big]$, where $\mathbf{1}(\cdot)$ is 1 if its argument is true and 0 otherwise; this is simply the probability that h misclassifies an instance drawn randomly from D. The optimal 0-1 error (Bayes error) is $\mathrm{er}^{0\text{-}1,*}_D = \inf_{h: X \to \{\pm 1\}} \mathrm{er}^{0\text{-}1}_D[h] = \mathbb{E}_x\big[\min(\eta(x), 1 - \eta(x))\big]$; this is achieved by the Bayes classifier $h^*(x) = \mathrm{sign}(\eta(x) - \tfrac{1}{2})$. The 0-1 classification regret of a classifier h is then $\mathrm{regret}^{0\text{-}1}_D[h] = \mathrm{er}^{0\text{-}1}_D[h] - \mathrm{er}^{0\text{-}1,*}_D$. More generally, in a cost-sensitive binary classification problem with cost parameter $c \in (0, 1)$, where the cost of a false positive is c and that of a false negative is (1 − c), one is interested in models h with small cost-sensitive 0-1 error: $\mathrm{er}^{0\text{-}1,c}_D[h] = \mathbb{E}_{(x,y)\sim D}\big[(1-c)\,\mathbf{1}(y = 1, h(x) = -1) + c\,\mathbf{1}(y = -1, h(x) = 1)\big]$. Note that for $c = \tfrac{1}{2}$, we get $\mathrm{er}^{0\text{-}1,\frac{1}{2}}_D[h] = \tfrac{1}{2}\,\mathrm{er}^{0\text{-}1}_D[h]$. The optimal cost-sensitive 0-1 error for cost parameter c can then be seen to be $\mathrm{er}^{0\text{-}1,c,*}_D = \inf_{h: X \to \{\pm 1\}} \mathrm{er}^{0\text{-}1,c}_D[h] = \mathbb{E}_x\big[\min((1-c)\eta(x),\, c(1-\eta(x)))\big]$; this is achieved by the classifier $h^*_c(x) = \mathrm{sign}(\eta(x) - c)$. The c-cost-sensitive regret of a classifier h is then $\mathrm{regret}^{0\text{-}1,c}_D[h] = \mathrm{er}^{0\text{-}1,c}_D[h] - \mathrm{er}^{0\text{-}1,c,*}_D$.

Bipartite Ranking.
Here one wants to learn a ranking model $f : X \to \mathbb{R}$ that assigns higher scores to positive instances than to negative ones. Specifically, the goal is to learn a ranking function f with small bipartite ranking error: $\mathrm{er}^{\mathrm{rank}}_D[f] = \mathbb{E}\big[\mathbf{1}\big((y - y')(f(x) - f(x')) < 0\big) + \tfrac{1}{2}\,\mathbf{1}\big(f(x) = f(x')\big) \,\big|\, y \ne y'\big]$, where (x, y), (x′, y′) are assumed to be drawn iid from D; this is the probability that a randomly drawn pair of instances with different labels is mis-ranked by f, with ties broken uniformly at random. It is known that the ranking error of f is equivalent to one minus the area under the ROC curve (AUC) of f [5–7]. The optimal ranking error can be seen to be $\mathrm{er}^{\mathrm{rank},*}_D = \inf_{f: X \to \mathbb{R}} \mathrm{er}^{\mathrm{rank}}_D[f] = \frac{1}{2p(1-p)}\,\mathbb{E}_{x,x'}\big[\min\big(\eta(x)(1-\eta(x')),\, \eta(x')(1-\eta(x))\big)\big]$; this is achieved by any function $f^*$ that is a strictly monotonically increasing transformation of η. The ranking regret of a ranking function f is given by $\mathrm{regret}^{\mathrm{rank}}_D[f] = \mathrm{er}^{\mathrm{rank}}_D[f] - \mathrm{er}^{\mathrm{rank},*}_D$.

Binary Class Probability Estimation (CPE). The goal here is to learn a class probability estimator or CPE model $\hat\eta : X \to [0, 1]$ with small squared error (relative to labels converted to {0, 1}): $\mathrm{er}^{\mathrm{sq}}_D[\hat\eta] = \mathbb{E}_{(x,y)\sim D}\big[\big(\hat\eta(x) - \tfrac{y+1}{2}\big)^2\big]$. The optimal squared error can be seen to be $\mathrm{er}^{\mathrm{sq},*}_D = \inf_{\hat\eta: X \to [0,1]} \mathrm{er}^{\mathrm{sq}}_D[\hat\eta] = \mathrm{er}^{\mathrm{sq}}_D[\eta] = \mathbb{E}_x\big[\eta(x)(1-\eta(x))\big]$. The squared-error regret of a CPE model $\hat\eta$ can be seen to be $\mathrm{regret}^{\mathrm{sq}}_D[\hat\eta] = \mathrm{er}^{\mathrm{sq}}_D[\hat\eta] - \mathrm{er}^{\mathrm{sq},*}_D = \mathbb{E}_x\big[(\hat\eta(x) - \eta(x))^2\big]$.

Regret Transfer Bounds. The following (strong) regret transfer results from binary CPE to binary classification and from binary CPE to bipartite ranking are known:

Theorem 1 ( [4, 11]). Let $\hat\eta : X \to [0, 1]$ and let $c \in (0, 1)$. Then the classifier $h(x) = \mathrm{sign}(\hat\eta(x) - c)$ obtained by thresholding $\hat\eta$ at c satisfies $\mathrm{regret}^{0\text{-}1,c}_D[\mathrm{sign} \circ (\hat\eta - c)] \le \mathbb{E}_x|\hat\eta(x) - \eta(x)| \le \sqrt{\mathrm{regret}^{\mathrm{sq}}_D[\hat\eta]}$.

Theorem 2 ( [12]). Let $\hat\eta : X \to [0, 1]$. Then using $\hat\eta$ as a ranking model yields $\mathrm{regret}^{\mathrm{rank}}_D[\hat\eta] \le \frac{1}{p(1-p)}\,\mathbb{E}_x|\hat\eta(x) - \eta(x)| \le \frac{1}{p(1-p)}\sqrt{\mathrm{regret}^{\mathrm{sq}}_D[\hat\eta]}$.
Note that as a consequence of these results, one gets that any learning algorithm that is statistically consistent for binary CPE, i.e. whose squared-error regret converges in probability to zero as the training sample size $n \to \infty$, can easily be converted into an algorithm that is statistically consistent for binary classification (with any cost parameter c, by thresholding the CPE models learned by the algorithm at c), or into an algorithm that is statistically consistent for bipartite ranking (by using the learned CPE models directly for ranking).

3 Regret Transfer Bounds from Bipartite Ranking to Binary Classification

In this section, we derive weak regret transfer bounds from bipartite ranking to binary classification. We derive two bounds. The first holds in an idealized setting where one is given a ranking model f as well as access to the distribution D for finding a suitable threshold to construct the classifier. The second bound holds in a setting where one is given a ranking model f and a data sample S drawn iid from D for finding a suitable threshold; this bound holds with high probability over the draw of S. Our results will require the following assumption on the distribution D and ranking model f:

Assumption A. Let D be a probability distribution on X × {±1} with marginal distribution µ on X. Let $f : X \to \mathbb{R}$ be a ranking model, and let $\mu_f$ denote the induced distribution of scores $f(x) \in \mathbb{R}$ when x ∼ µ. We say (D, f) satisfies Assumption A if $\mu_f$ is either discrete, continuous, or mixed with at most finitely many point masses.

We will find it convenient to define the following set of all increasing functions from $\mathbb{R}$ to {±1}: $\mathcal{T}_{\mathrm{inc}} = \big\{\theta : \mathbb{R} \to \{\pm 1\} \;:\; \theta(u) = \mathrm{sign}(u - t) \text{ or } \theta(u) = \overline{\mathrm{sign}}(u - t) \text{ for some } t \in [-\infty, \infty]\big\}$.

Definition 3 (Optimal classification transform).
For any ranking model $f : X \to \mathbb{R}$, cost parameter $c \in (0, 1)$, and probability distribution D over X × {±1} such that (D, f) satisfies Assumption A, define an optimal classification transform $\mathrm{Thresh}_{D,f,c}$ as any increasing function from $\mathbb{R}$ to {±1} such that the classifier $h(x) = \mathrm{Thresh}_{D,f,c}(f(x))$ resulting from composing f with $\mathrm{Thresh}_{D,f,c}$ yields minimum cost-sensitive 0-1 error on D: $\mathrm{Thresh}_{D,f,c} \in \mathrm{argmin}_{\theta \in \mathcal{T}_{\mathrm{inc}}} \mathrm{er}^{0\text{-}1,c}_D[\theta \circ f]$. (OP1)

We note that when f is the class probability function η, we have $\mathrm{Thresh}_{D,\eta,c}(u) = \mathrm{sign}(u - c)$.

Theorem 4 (Idealized weak regret transfer bound from bipartite ranking to binary classification based on distribution). Let (D, f) satisfy Assumption A. Let $c \in (0, 1)$. Then the classifier $h(x) = \mathrm{Thresh}_{D,f,c}(f(x))$ satisfies $\mathrm{regret}^{0\text{-}1,c}_D[\mathrm{Thresh}_{D,f,c} \circ f] \le \sqrt{2p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]}$.

In practice, one does not have access to the distribution D, and the optimal threshold must be estimated from a data sample. To this end, we define the following:

Definition 5 (Optimal sample-based threshold). For any ranking model $f : X \to \mathbb{R}$, cost parameter $c \in (0, 1)$, and sample $S \in \cup_{n=1}^\infty (X \times \{\pm 1\})^n$, define an optimal sample-based threshold $\hat t_{S,f,c}$ as any threshold on f such that the resulting classifier $h(x) = \mathrm{sign}(f(x) - \hat t_{S,f,c})$ yields minimum cost-sensitive 0-1 error on S: $\hat t_{S,f,c} \in \mathrm{argmin}_{t \in \mathbb{R}} \mathrm{er}^{0\text{-}1,c}_S[\mathrm{sign} \circ (f - t)]$, (OP2) where $\mathrm{er}^{0\text{-}1,c}_S[h]$ denotes the c-cost-sensitive 0-1 error of a classifier h on the empirical distribution associated with S (i.e. the uniform distribution over examples in S).

Note that given a ranking function f, cost parameter c, and a sample S of size n, the optimal sample-based threshold $\hat t_{S,f,c}$ can be computed in O(n ln n) time by sorting the examples $(x_i, y_i)$ in S based on the scores $f(x_i)$ and evaluating at most n + 1 distinct thresholds lying between adjacent score values (and above/below all score values) in this sorted order.

Theorem 6 (Sample-based weak regret transfer bound from bipartite ranking to binary classification).
Let D be any probability distribution on X × {±1} and $f : X \to \mathbb{R}$ be any fixed ranking model such that (D, f) satisfies Assumption A. Let $S \in (X \times \{\pm 1\})^n$ be drawn randomly according to $D^n$. Let $c \in (0, 1)$ and $0 < \delta \le 1$. Then with probability at least 1 − δ (over the draw of $S \sim D^n$), the classifier $h(x) = \mathrm{sign}(f(x) - \hat t_{S,f,c})$ obtained by thresholding f at $\hat t_{S,f,c}$ satisfies $\mathrm{regret}^{0\text{-}1,c}_D[\mathrm{sign} \circ (f - \hat t_{S,f,c})] \le \sqrt{2p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]} + \sqrt{\frac{32\big(2\ln(2n) + 1 + \ln\frac{4}{\delta}\big)}{n}}$.

The proof of Theorem 6 involves an application of the result in Theorem 4 together with a standard VC-dimension based uniform convergence result; specifically, the proof makes use of the fact that selecting the sample-based threshold in (OP2) is equivalent to empirical risk minimization over $\mathcal{T}_{\mathrm{inc}}$. Note in particular that the above regret transfer bound, though 'weak', is non-trivial in that it suggests a good classifier can be constructed from a good ranking model using far fewer examples than might be required for learning a classifier from scratch based on standard VC-dimension bounds.

Remark 7. We note that, as a consequence of Theorem 6, one can use any learning algorithm that is statistically consistent for bipartite ranking to construct an algorithm that is consistent for (cost-sensitive) binary classification as follows: divide the training data into two (say equal) parts, use one part for learning a ranking model using the consistent ranking algorithm, and the other part for selecting a threshold on the learned ranking model; both terms in Theorem 6 will then go to zero as the training sample size increases, yielding consistency for (cost-sensitive) binary classification.

Remark 8. Another implication of the above result is a justification for the use of the AUC as a surrogate performance measure when learning in cost-sensitive classification settings where the misclassification costs are unknown during training time [23].
Here, instead of learning a classifier that minimizes the cost-sensitive classification error for a fixed cost parameter that may turn out to be incorrect, one can learn a ranking function with good ranking performance (in terms of AUC), and then later use a small additional sample to select a suitable threshold once the misclassification costs are known; the above result provides guarantees on the resulting classification performance in terms of the ranking (AUC) performance of the learned model.

4 Regret Transfer Bounds from Bipartite Ranking to Binary CPE

We now derive weak regret transfer bounds from bipartite ranking to binary CPE. Again, we derive two bounds: the first holds in an idealized setting where one is given a ranking model f as well as access to the distribution D for finding a suitable conversion to a CPE model; the second, which is a high-probability bound, holds in a setting where one is given a ranking model f and a data sample S drawn iid from D for finding a suitable conversion. We will need the following definition:

Definition 9 (Calibrated CPE model). A binary CPE model η̂ : X → [0, 1] is said to be calibrated w.r.t. a probability distribution D on X × {±1} if P(y = 1 | η̂(x) = u) = u, ∀u ∈ range(η̂), where range(η̂) denotes the range of η̂.

We will make use of the following result, which follows from results in [20] and shows that the squared error of a calibrated CPE model is related to the expected cost-sensitive error of a classifier constructed using the optimal threshold in Definition 3, over uniform costs in (0, 1):

Theorem 10 ([20]). Let η̂ : X → [0, 1] be a binary CPE model that is calibrated w.r.t. D. Then

$$\mathrm{er}^{\mathrm{sq}}_D[\hat\eta] \;=\; 2\,\mathbb{E}_{c \sim U(0,1)}\Big[\mathrm{er}^{0\text{-}1,c}_D\big[\mathrm{Thresh}_{D,\hat\eta,c} \circ \hat\eta\big]\Big],$$

where U(0, 1) is the uniform distribution over (0, 1) and Thresh_{D,η̂,c} is as defined in Definition 3. The proof of Theorem 10 follows from the fact that for any CPE model η̂ that is calibrated w.r.t.
D, the optimal classification transform is given by Thresh_{D,η̂,c}(u) = sign(u − c), thus generalizing a similar result noted earlier for the true class probability function η. We then have the following result, which shows that for a calibrated CPE model η̂ : X → [0, 1], one can upper bound the squared-error regret in terms of the bipartite ranking regret; this result follows directly from Theorem 10 and Theorem 4:

Lemma 11 (Regret transfer bound for calibrated CPE models). Let η̂ : X → [0, 1] be a binary CPE model that is calibrated w.r.t. D. Then

$$\mathrm{regret}^{\mathrm{sq}}_D[\hat\eta] \;\le\; \sqrt{8p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[\hat\eta]}\,.$$

We are now ready to describe the construction of the optimal CPE transform in the idealized setting. We will find it convenient to define the following set:

$$\mathcal{G}_{\mathrm{inc}} = \big\{\, g : \mathbb{R} \to [0,1] \;:\; g \text{ is a monotonically increasing function} \,\big\}.$$

Definition 12 (Optimal CPE transform). Let f : X → [a, b] (where a, b ∈ ℝ, a < b) be any bounded-range ranking model and D be any probability distribution over X × {±1} such that (D, f) satisfies Assumption A. Moreover assume that μ_f (see Assumption A), if mixed, does not have a point mass at the end-points a, b, and that the function η_f : [a, b] → [0, 1] defined as η_f(t) = P(y = 1 | f(x) = t) is square-integrable w.r.t. the density of the continuous part of μ_f. Define an optimal CPE transform Cal_{D,f} as any monotonically increasing function from ℝ to [0, 1] such that the CPE model η̂(x) = Cal_{D,f}(f(x)) resulting from composing f with Cal_{D,f} yields minimum squared error on D (see appendix for existence of Cal_{D,f} under these conditions):

$$\mathrm{Cal}_{D,f} \in \operatorname*{argmin}_{g \in \mathcal{G}_{\mathrm{inc}}} \; \mathrm{er}^{\mathrm{sq}}_D\big[g \circ f\big]. \qquad \text{(OP3)}$$

Lemma 13 (Properties of Cal_{D,f}). Let (D, f) satisfy the conditions of Definition 12. Then

1. (Cal_{D,f} ∘ f) is calibrated w.r.t. D.
2. $\mathrm{er}^{\mathrm{rank}}_D\big[\mathrm{Cal}_{D,f} \circ f\big] \le \mathrm{er}^{\mathrm{rank}}_D[f]$.

The proof of Lemma 13 is based on equivalent results for the minimizer of a sample version of (OP3) [24, 25].
Combining this with Lemma 11 immediately gives the following result:

Theorem 14 (Idealized weak regret transfer bound from bipartite ranking to binary CPE based on distribution). Let (D, f) satisfy the conditions of Definition 12. Then the CPE model η̂(x) = Cal_{D,f}(f(x)) obtained by composing f with Cal_{D,f} satisfies

$$\mathrm{regret}^{\mathrm{sq}}_D\big[\mathrm{Cal}_{D,f} \circ f\big] \;\le\; \sqrt{8p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]}\,.$$

We now derive a sample version of the above result.

Definition 15 (Optimal sample-based CPE transform). For any ranking model f : X → ℝ and sample S ∈ ∪_{n=1}^∞ (X × {±1})^n, define an optimal sample-based transform $\widehat{\mathrm{Cal}}_{S,f}$ as any monotonically increasing function from ℝ to [0, 1] such that the CPE model η̂(x) = $\widehat{\mathrm{Cal}}_{S,f}$(f(x)) resulting from composing f with $\widehat{\mathrm{Cal}}_{S,f}$ yields minimum squared error on S:

$$\widehat{\mathrm{Cal}}_{S,f} \in \operatorname*{argmin}_{g \in \mathcal{G}_{\mathrm{inc}}} \; \mathrm{er}^{\mathrm{sq}}_S\big[g \circ f\big], \qquad \text{(OP4)}$$

where er^sq_S[η̂] denotes the squared error of a CPE model η̂ on the empirical distribution associated with S (i.e. the uniform distribution over examples in S).

The above optimization problem corresponds to the well-known isotonic regression problem and can be solved in O(n ln n) time using the pool adjacent violators (PAV) algorithm [16] (the PAV algorithm outputs a score in [0, 1] for each instance in S such that these scores preserve the ordering of f; a straightforward interpolation of the scores then yields a monotonically increasing function of f). We then have the following sample-based weak regret transfer result:

Theorem 16 (Sample-based weak regret transfer bound from bipartite ranking to binary CPE). Let D be any probability distribution on X × {±1} and f : X → [a, b] be any fixed ranking model such that (D, f) satisfies the conditions of Definition 12. Let S ∈ (X × {±1})^n be drawn randomly according to D^n. Let 0 < δ ≤ 1. Then with probability at least 1 − δ (over the draw of S ∼ D^n), the CPE model η̂(x) = $\widehat{\mathrm{Cal}}_{S,f}$(f(x)) obtained by composing f with $\widehat{\mathrm{Cal}}_{S,f}$ satisfies

$$\mathrm{regret}^{\mathrm{sq}}_D\big[\widehat{\mathrm{Cal}}_{S,f} \circ f\big] \;\le\; \sqrt{8p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]} \;+\; 96\sqrt{\frac{2\ln(n)}{n}} \;+\; 2\sqrt{\frac{2\ln\frac{8}{\delta}}{n}}\,.$$
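Since (OP4) is the classical isotonic regression problem, the PAV algorithm cited above can be sketched as follows. This is a minimal illustration (the function name is ours, not from the paper), assuming the labels have been mapped to {0, 1} and sorted by the ranking score f; the fitted values are then the calibrated probability estimates.

```python
def pav(values, weights=None):
    """Pool Adjacent Violators: weighted least-squares fit of a monotonically
    increasing sequence to `values` (assumed already sorted by ranking score).

    Maintains a stack of blocks [mean, weight, count]; whenever a new value
    violates monotonicity, adjacent blocks are merged into their weighted mean.
    """
    if weights is None:
        weights = [1.0] * len(values)
    blocks = []  # each block: [mean, total weight, number of pooled points]
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        # Merge while the increasing constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

With 0/1 labels the pooled block means lie in [0, 1], matching the statement above that PAV "outputs a score in [0, 1] for each instance"; interpolating these scores as a function of f yields the monotone transform.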
The proof of Theorem 16 involves an application of the idealized result in Theorem 14, together with a standard uniform convergence argument based on Rademacher averages applied to the function class G_inc; for this, we make use of a result on the Rademacher complexity of this class [21].

Remark 17. As in the case of binary classification, we note that, as a consequence of Theorem 16, one can use any learning algorithm that is statistically consistent for bipartite ranking to construct an algorithm that is consistent for binary CPE as follows: divide the training data into two (say equal) parts, use one part for learning a ranking model using the consistent ranking algorithm, and the other part for selecting a CPE transform on the learned ranking model; both terms in Theorem 16 will then go to zero as the training sample size increases, yielding consistency for binary CPE.

Remark 18. We note a recent result in [19] giving a bound on the empirical squared error of a CPE model constructed from a ranking model using isotonic regression in terms of the empirical ranking error of the ranking model. However, this does not amount to a regret transfer bound.

Remark 19. Finally, we note that the quantity $\mathbb{E}_{c \sim U(0,1)}\big[\mathrm{er}^{0\text{-}1,c}_D[\mathrm{Thresh}_{D,\hat\eta,c} \circ \hat\eta]\big]$ that appears in Theorem 10 is also the area under the cost curve [20, 22]; since this quantity is upper bounded in terms of $\mathrm{regret}^{\mathrm{rank}}_D[f]$ by virtue of Theorem 4, we also get a weak regret transfer bound from bipartite ranking to problems where the area under the cost curve is a performance measure of interest. In particular, this implies that algorithms that are statistically consistent with respect to AUC can also be used to construct algorithms that are statistically consistent w.r.t. the area under the cost curve.

Figure 2: Results on synthetic data (regret vs. number of training examples; panels (a) and (b)). A ranking model was learned using a pairwise linear logistic regression ranking algorithm (which is a consistent ranking algorithm for the distribution used in these experiments); this was followed by an optimal choice of classification threshold (with c = 1/2) or optimal CPE transform based on the distribution as outlined in Sections 3 and 4. The plots show (a) 0-1 classification regret of the resulting classification model together with the corresponding upper bound from Theorem 4; and (b) squared-error regret of the resulting CPE model together with the corresponding upper bound from Theorem 14. As can be seen, in both cases, the classification/CPE regret converges to zero as the training sample size increases.

5 Experiments

We conducted two types of experiments to evaluate the results described in this paper: the first involved synthetic data drawn from a known distribution for which the classification and ranking regrets could be calculated exactly; the second involved real data from the UCI Machine Learning Repository. In the first experiment, we learned ranking models using a consistent ranking algorithm on increasing training sample sizes, converted the learned models using the optimal threshold or CPE transforms described in Sections 3 and 4 based on the distribution, and verified that this yielded classification and CPE models with 0-1 classification regret and squared-error regret converging to zero. In the second experiment, we simulated a setting where a ranking model has been learned from some data, the original training data is no longer available, and a classification/CPE model is needed; we investigated whether in such a setting the ranking model could be used in conjunction with a small additional data sample to produce a useful classification or CPE model.
5.1 Synthetic Data

Our first goal was to verify that using ranking models learned by a statistically consistent ranking algorithm and applying the distribution-based transformations described in Sections 3 and 4 yields classification/CPE models with classification/CPE regret converging to zero. For these experiments, we generated examples in (X = ℝ^d) × {±1} (with d = 100) as follows: each example was assigned a positive/negative label with equal probability, with the positive instances drawn from a multivariate Gaussian distribution with mean μ ∈ ℝ^d and covariance matrix Σ ∈ ℝ^{d×d}, and negative instances drawn from a multivariate Gaussian distribution with mean −μ and the same covariance matrix Σ; here μ was drawn uniformly at random from {−1, 1}^d, and Σ was drawn from a Wishart distribution with 200 degrees of freedom and a randomly drawn invertible PSD scale matrix. For this distribution, the optimal ranking and classification models are linear. Training samples of various sizes n were generated from this distribution; in each case, a linear ranking model was learned using a pairwise linear logistic regression algorithm (with regularization parameter set to 1/√n), and an optimal threshold (with c = 1/2) or CPE transform was then applied to construct a binary classification or CPE model. In this case the ranking regret and 0-1 classification regret of a linear model can be computed exactly; the squared-error regret for the CPE model was computed approximately by sampling instances from the distribution. The results are shown in Figure 2. As can be seen, the classification and squared-error regrets of the classification and CPE models constructed both satisfy the bounds from Theorems 4 and 14, and converge to zero as the bounds suggest.

5.2 Real Data

Our second goal was to investigate whether good classification and CPE models can be constructed in practice by applying the data-based transformations described in Sections 3 and 4 to an existing ranking model.
For this purpose, we conducted experiments on several data sets drawn from the UCI Machine Learning Repository³. We present representative results on two data sets: Spambase (4601 instances, 57 features) and Internet Ads (3279 instances, 1554 features⁴). Here we divided each data set into three equal parts.

³http://archive.ics.uci.edu/ml/

Figure 3: Results on real data from the UCI repository: (a) Spambase (0-1), (b) Internet-ads (0-1), (c) Spambase (CPE), (d) Internet-ads (CPE); each panel plots test error against the percentage of training examples used, with the (fixed) ranking error of the underlying ranking model reported in the legend (0.0317 on Spambase, 0.0179 on Internet-ads). A ranking model was learned using a pairwise linear logistic regression ranking algorithm from a part of the data set that was then discarded. The remaining data was divided into training and test sets. The training data was then used to estimate an empirical (sample-based) classification threshold and CPE transform (calibration) for this ranking model as outlined in Sections 3 and 4. Using the same training data, a binary classifier and CPE model were also learned from scratch using a standard linear logistic regression algorithm. The plots show the resulting test error for both approaches. As can be seen, if only a small amount of additional data is available, then using this data to convert an existing ranking model into a classification/CPE model is more beneficial than learning a classification/CPE model from scratch.
One part was used to learn a ranking model using a pairwise linear logistic regression algorithm, and was then discarded. This allowed us to simulate a situation where a (reasonably good) ranking model is available, but the original training data used to learn the model is no longer accessible. Various subsets of the second part of the data (of increasing size) were then used to estimate a data-based threshold or CPE transform on this ranking model using the optimal sample-based methods described in Sections 3 and 4. The performance of the constructed classification and CPE models on the third part of the data, which was held out for testing purposes, is shown in Figure 3. For comparison, we also show the performance of binary classification and CPE models learned directly from the same subsets of the second part of the data using a standard linear logistic regression algorithm. In each case, the regularization parameter for both standard logistic regression and pairwise logistic regression was chosen from {10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1, 10, 10²} using 5-fold cross-validation on the corresponding training data. As can be seen, when one has access to a previously learned (or otherwise available) ranking model with good ranking performance, and only a small amount of additional data, then one is better off using this data to estimate a threshold/CPE transform and converting the ranking model into a classification/CPE model, than learning a classification/CPE model from this data from scratch. However, as can also be seen, the eventual performance of the classification/CPE model thus constructed is limited by the ranking performance of the original ranking model; therefore, once there is sufficient additional data available, it is advisable to use this data to learn a new model from scratch.
6 Conclusion

We have investigated the relationship between three fundamental problems in machine learning: binary classification, bipartite ranking, and binary class probability estimation (CPE). While formal regret transfer bounds from binary CPE to binary classification and to bipartite ranking are known, little has been known about other directions. We have introduced the notion of weak regret transfer bounds that require access to a distribution or data sample, and have established the existence of such bounds from bipartite ranking to binary classification and to binary CPE. The latter result makes use of ideas related to calibration and isotonic regression; while these ideas have been used to calibrate scores from real-valued classifiers to construct probability estimates in practice, to our knowledge, this is the first use of such ideas in deriving formal regret bounds in relation to ranking. Our experimental results demonstrate possible uses of the theory developed here.

Acknowledgments

Thanks to Karthik Sridharan for pointing us to a result on monotonically increasing functions. Thanks to the anonymous reviewers for many helpful suggestions. HN gratefully acknowledges support from a Google India PhD Fellowship. SA thanks the Department of Science & Technology (DST), the Indo-US Science & Technology Forum (IUSSTF), and Yahoo! for their support.

⁴The original data set contains 1558 features; we discarded 4 features with missing entries.

References

[1] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56–134, 2004.
[2] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[3] M. D. Reid and R. C. Williamson. Surrogate regret bounds for proper losses. In ICML, 2009.
[4] C. Scott. Calibrated asymmetric surrogate losses.
Electronic Journal of Statistics, 6:958–992, 2012.
[5] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[6] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In Advances in Neural Information Processing Systems 16. MIT Press, 2004.
[7] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research, 6:393–425, 2005.
[8] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. Annals of Statistics, 36:844–874, 2008.
[9] A. Buja, W. Stuetzle, and Y. Shen. Loss functions for binary class probability estimation: Structure and applications. Technical report, University of Pennsylvania, November 2005.
[10] M. D. Reid and R. C. Williamson. Composite binary losses. Journal of Machine Learning Research, 11:2387–2422, 2010.
[11] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[12] S. Clémençon and S. Robbiano. Minimax learning rates for bipartite ranking and plug-in rules. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[13] J. Langford and B. Zadrozny. Estimating class membership probabilities using classifier learners. In AISTATS, 2005.
[14] C. Rudin and R. E. Schapire. Margin-based ranking and an equivalence between AdaBoost and RankBoost. Journal of Machine Learning Research, 10:2193–2232, 2009.
[15] Ş. Ertekin and C. Rudin. On equivalence relationships between classification and ranking algorithms. Journal of Machine Learning Research, 12:2905–2929, 2011.
[16] M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman. An empirical distribution function for sampling with incomplete information. The Annals of Mathematical Statistics, 26(4):641–647, 1955.
[17] H. D. Brunk.
On the estimation of parameters restricted by inequalities. The Annals of Mathematical Statistics, 29(2):437–454, 1958.
[18] B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates. In KDD, 2002.
[19] A. K. Menon, X. Jiang, S. Vembu, C. Elkan, and L. Ohno-Machado. Predicting accurate probabilities with a ranking loss. In ICML, 2012.
[20] J. Hernández-Orallo, P. Flach, and C. Ferri. A unified view of performance metrics: Translating threshold choice into expected classification loss. Journal of Machine Learning Research, 13:2813–2869, 2012.
[21] P. Bartlett. CS281B/Stat241B (Spring 2008) Statistical Learning Theory [Lecture 19 notes], University of California, Berkeley. http://www.cs.berkeley.edu/~bartlett/courses/281b-sp08/19.pdf, 2008.
[22] C. Drummond and R. C. Holte. Cost curves: An improved method for visualizing classifier performance. Machine Learning, 65(1):95–130, 2006.
[23] M. A. Maloof. Learning when data sets are imbalanced and when costs are unequal and unknown. In ICML-2003 Workshop on Learning from Imbalanced Data Sets II, volume 2, 2003.
[24] A. T. Kalai and R. Sastry. The Isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
[25] T. Fawcett and A. Niculescu-Mizil. PAV and the ROC convex hull. Machine Learning, 68(1):97–106, 2007.
[26] S. Agarwal. Surrogate regret bounds for the area under the ROC curve via strongly proper losses. In COLT, 2013.
[27] D. Anevski and P. Soulier. Monotone spectral density estimation. Annals of Statistics, 39(1):418–438, 2011.
[28] P. Groeneboom and G. Jongbloed. Generalized continuous isotonic regression. Statistics & Probability Letters, 80(34):248–253, 2010.
Deep Fisher Networks for Large-Scale Image Classification

Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
Visual Geometry Group, University of Oxford
{karen,vedaldi,az}@robots.ox.ac.uk

Abstract

As massively parallel computations have become broadly available with modern GPUs, deep architectures trained on very large datasets have risen in popularity. Discriminatively trained convolutional neural networks, in particular, were recently shown to yield state-of-the-art performance in challenging image classification benchmarks such as ImageNet. However, elements of these architectures are similar to standard hand-crafted representations used in computer vision. In this paper, we explore the extent of this analogy, proposing a version of the state-of-the-art Fisher vector image encoding that can be stacked in multiple layers. This architecture significantly improves on standard Fisher vectors, and obtains competitive results with deep convolutional networks at a smaller computational learning cost. Our hybrid architecture allows us to assess how the performance of a conventional hand-crafted image classification pipeline changes with increased depth. We also show that convolutional networks and Fisher vector encodings are complementary in the sense that their combination further improves the accuracy.

1 Introduction

Discriminatively trained deep convolutional neural networks (CNN) [18] have recently achieved impressive state-of-the-art results over a number of areas, including, in particular, the visual recognition of categories in the ImageNet Large-Scale Visual Recognition Challenge [4]. This success is built on many years of tuning and incorporating ideas into CNNs in order to improve their performance. Many of the key ideas in CNN have now been absorbed into features proposed in the computer vision literature – some have been discovered independently and others have been overtly borrowed.
For example: the importance of whitening [11]; max pooling and sparse coding [26, 33]; non-linearity and normalization [20]. Indeed, several standard features and pipelines in computer vision, such as SIFT [19] and a spatial pyramid on Bag of visual Words (BoW) [16] can be seen as corresponding to layers of a standard CNN. However, image classification pipelines used in the computer vision literature are still generally quite shallow: either a global feature vector is computed over an image, and used directly for classification; or, in a few cases, a two layer hierarchy is used, where the outputs of a number of classifiers form the global feature vector for the image (e.g. attributes and classemes [15, 30]). The question we address in this paper is whether it is possible to improve the performance of off-the-shelf computer vision features by organising them into a deeper architecture. To this end we make the following contributions: (i) we introduce a Fisher Vector Layer, which is a generalization of the standard FV to a layer architecture suitable for stacking; (ii) we demonstrate that by discriminatively training several such layers and stacking them into a Fisher Vector Network, an accuracy competitive with the deep CNN can be achieved, whilst staying in the realms of conventional SIFT and colour features and FV encodings; and (iii) we show that class posteriors, computed by the deep CNN and FV, are complementary and can be combined to significantly improve the accuracy. The rest of the paper is organised as follows. After a discussion of the related work, we begin with a brief description of the conventional FV encoding [20] (Sect. 2). We then show how this representation can be modified to be used as a layer in a deeper architecture (Sect. 3) and how the latter can be discriminatively learnt to yield a deep Fisher network (Sect. 4). After discussing important details of the implementation (Sect.
5), we evaluate our architecture on the ImageNet image classification benchmark (Sect. 6). Related work. There is a vast literature on large-scale image classification, which we briefly review here. One widely used approach is to extract local features such as SIFT [19] densely from each image, aggregate and encode them as high-dimensional vectors, and feed the latter to a classifier, e.g. an SVM. There exists a large variety of different encodings that can be used for this purpose, including the BoW [9, 29] encoding, sparse coding [33], and the FV encoding [20]. Since FV was shown to outperform other encodings [6] and achieve very good performance on various image recognition benchmarks [21, 28], we use it as the basis of our framework. We note that other recently proposed encodings (e.g. [5]) can be readily employed in the place of FV. Most encodings are designed to disregard the spatial location of features in order to be invariant to image transformations; in practice, however, retaining weak spatial information yields an improved classification performance. This can be incorporated by dividing the image into regions, encoding each of them individually, and stacking the result in a composite higher-dimensional code, known as a spatial pyramid [16]. The alternative, which does not increase the encoding dimensionality, is to augment the local features with their spatial coordinates [24]. Another vast family of image classification techniques is based on Deep Neural Networks (DNN), which are inspired by the layered structure of the visual cortex in mammals [22]. DNNs can be trained greedily, in a layer-by-layer manner, as in Restricted Boltzmann Machines [12] and (sparse) auto-encoders [3, 17], or by learning all layers simultaneously, which is relatively efficient if the layers are convolutional [18]. 
In particular, the advent of massively-parallel GPUs has recently made it possible to train deep convolutional networks on a large scale with excellent performance [7, 14]. It was also shown that techniques such as training and test data augmentation, as well as averaging the outputs of independently trained DNNs, can significantly improve the accuracy. There have been attempts to bridge these two families, exploring the trade-offs between network depth and width, as well as the complexity of the layers. For instance, dense feature encoding using the bag of visual words was considered as a single layer of a deep network in [1, 8, 32].

2 Fisher vector encoding for image classification

The Fisher vector encoding φ of a set of features {x_p} (e.g. densely computed SIFT features) is based on fitting a parametric generative model, e.g. the Gaussian Mixture Model (GMM), to the features, and then encoding the derivatives of the log-likelihood of the model with respect to its parameters [13]. In the particular case of GMMs with diagonal covariances, used here, this leads to the representation which captures the average first and second order differences between the features and each of the GMM centres [20]:

$$\Phi^{(1)}_k = \frac{1}{N\sqrt{\pi_k}} \sum_{p=1}^{N} \alpha_k(x_p)\,\frac{x_p - \mu_k}{\sigma_k}, \qquad \Phi^{(2)}_k = \frac{1}{N\sqrt{2\pi_k}} \sum_{p=1}^{N} \alpha_k(x_p)\left(\frac{(x_p - \mu_k)^2}{\sigma_k^2} - 1\right) \tag{1}$$

Here, {π_k, μ_k, σ_k}_k are the mixture weights, means, and diagonal covariances of the GMM, which is computed on the training set and used for the description of all images; α_k(x_p) is the soft assignment weight of the p-th feature x_p to the k-th Gaussian. An FV is obtained by stacking the differences: $\phi = \big[\Phi^{(1)}_1, \Phi^{(2)}_1, \ldots, \Phi^{(1)}_K, \Phi^{(2)}_K\big]$. The encoding describes how the distribution of features of a particular image differs from the distribution fitted to the features of all training images. To make the features amenable to the FV description based on the diagonal-covariance GMM, they are first decorrelated by PCA.
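Eq. (1) can be sketched in NumPy as follows. This is an illustrative re-implementation, not the paper's pipeline (which uses PCA-reduced SIFT and a GMM fitted on the training set); the soft assignments α_k(x_p) are computed here as GMM posteriors, which is the standard choice.

```python
import numpy as np

def fisher_vector(X, pi, mu, sigma):
    """Unnormalised Fisher vector of Eq. (1).

    X:     (N, d) local features; pi: (K,) mixture weights;
    mu:    (K, d) component means; sigma: (K, d) per-dimension std devs.
    Returns phi of dimension 2*K*d, stacked as [Phi1_1, Phi2_1, ..., Phi1_K, Phi2_K].
    """
    N, d = X.shape
    K = len(pi)
    # Log densities of each feature under each diagonal Gaussian (np.pi is the
    # math constant; the argument `pi` holds the mixture weights pi_k).
    log_p = (np.log(pi)[None, :]
             - 0.5 * np.sum(np.log(2 * np.pi * sigma ** 2), axis=1)[None, :]
             - 0.5 * np.sum(((X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]) ** 2, axis=2))
    # Soft assignments alpha_k(x_p): posteriors, normalised over components.
    alpha = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]          # (N, K, d)
    Phi1 = (alpha[:, :, None] * diff).sum(0) / (N * np.sqrt(pi)[:, None])
    Phi2 = (alpha[:, :, None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * pi)[:, None])
    return np.concatenate([np.concatenate([Phi1[k], Phi2[k]]) for k in range(K)])
```

For d = 128 SIFT and K = 256 this yields the 65.5K-dimensional encoding mentioned below; the SSR and L2 normalisation steps of [20] would be applied on top.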
The FV dimensionality is 2Kd, where K is the codebook size (the number of Gaussians in the GMM), and d is the dimensionality of the encoded feature vector. For instance, FV encoding of a SIFT feature (d = 128) using a small GMM codebook (K = 256) is 65.5K-dimensional. This means that high-dimensional feature encodings can be quickly computed using small codebooks. Using the same codebook size, BoW and sparse coding are only K-dimensional and less discriminative, as demonstrated in [6]. From another point of view, given the desired encoding dimensionality, these methods would require 2d-times larger codebooks than needed for FV, which would lead to impractical computation times.

Figure 1: Left: Fisher network (Sect. 4) with two Fisher layers: dense feature extraction (SIFT, raw patches, …) forms the 0-th layer; the 1-st Fisher layer (FV encoder, spatial stacking, L2 normalisation and PCA) has an optional global pooling branched out; the 2-nd Fisher layer performs global pooling followed by SSR and L2 normalisation; one-vs-rest linear SVMs form the classifier layer. Right: conventional pipeline using a shallow Fisher vector encoding (dense feature extraction, FV encoder, SSR and L2 normalisation, one-vs-rest linear SVMs). As shown in Sect. 6, making the conventional pipeline slightly deeper by injecting a single Fisher layer substantially improves the classification accuracy.

As can be seen from (1), the (unnormalised) FV encoding is additive with respect to image features, i.e. the encoding of an image is an average of the individual encodings of its features. Following [20], FV performance is further improved by passing it through Signed Square-Rooting (SSR) and L2 normalisation. Finally, the high-dimensional FV is usually coupled with a one-vs-rest linear SVM classifier, and together they form a conventional image classification pipeline [21] (see Fig. 1), which serves as a baseline for our classification framework.

3 Fisher layer

The conventional FV representation of an image (Sect.
2), effectively encodes each local feature (e.g. SIFT) into a high-dimensional representation, and then aggregates these encodings into a single vector by global sum-pooling over the whole image (followed by normalisation). This means that the representation describes the image in terms of the local patch features, and can not capture more complex image structures. Deep neural networks are able to model the feature hierarchies by passing an output of one feature computation layer as the input to the next one. We adopt a similar approach here, and devise a feed-forward feature encoding layer (which we term a Fisher layer), which is based on off-the-shelf Fisher vector encoding. The layers can then be stacked into a deep network, which we call a Fisher network. The architecture of the l-th Fisher layer is depicted in Fig. 2. On the input, it receives d_l-dimensional features (d_l ∼ 10²), densely computed over multiple scales on a regular image grid. The features are assumed to be decorrelated using PCA. The layer then performs feed-forward feature transformation in three sub-layers. The first one computes semi-local FV encodings by pooling the input features not from the whole image, but from a dense set of semi-local regions. The resulting FVs form a new set of densely sampled features that are more discriminative than the input ones and less local, as they integrate information from larger image areas. The FV encoder (Sect. 2) uses a layer-specific GMM with K_l components, so the dimensionality of each FV is 2K_l d_l, which, considering that FVs are computed densely, might be too large for practical applications. Therefore, we decrease FV dimensionality by projection onto an h_l-dimensional subspace using a discriminatively trained linear projection W_l ∈ ℝ^{h_l × 2K_l d_l}. In practice, this is carried out using an efficient variant of the FV encoder (Sect. 5).
In the second sub-layer, the spatially adjacent features are stacked in a 2 × 2 window, which produces a 4h_l-dimensional dense feature representation. Finally, the features are L2-normalised and PCA-projected to a d_{l+1}-dimensional subspace using the linear projection U_l ∈ R^{d_{l+1} × 4h_l}, and passed as the input to the (l + 1)-th layer. Each sub-layer is explained in more detail below.

Figure 2: The architecture of a single Fisher layer. Left: the arrows illustrate the data flow through the layer; the dimensionality of densely computed features is shown next to the arrows. Right: spatial pooling (the blue squares) and stacking (the red square) in sub-layers 1 and 2 respectively.

Fisher vector pooling (sub-layer 1). The key idea behind the first sub-layer is to aggregate the FVs of individual features over a family of semi-local spatial neighbourhoods. These neighbourhoods are overlapping square regions of size q_l × q_l, sampled every δ_l pixels (see Fig. 2); compared to the regions used in global or spatial pyramid pooling [20], these are smaller and sampled much more densely. As a result, instead of a single FV describing the whole image, the image is represented by a large number of densely computed semi-local FVs, each of which describes a spatially adjacent set of local features computed by the previous layer. Thus, the new feature representation can capture more complex image statistics with larger spatial support. We note that due to additivity, computing the FV of a spatial neighbourhood corresponds to sum-pooling over the neighbourhood, a stage widely used in DNNs.
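Since the unnormalised FV is additive, pooling over a q × q window reduces to summing the per-feature encodings inside it. A toy sketch of this dense semi-local pooling (the grid of encodings is hypothetical):

```python
def semi_local_pool(enc_grid, q, delta):
    # enc_grid: 2-D grid of per-feature encodings; enc_grid[y][x] is a list.
    # Sum-pool encodings over every q-by-q window sampled with stride delta;
    # by additivity of the (unnormalised) FV this equals the FV of the window.
    H, W = len(enc_grid), len(enc_grid[0])
    D = len(enc_grid[0][0])
    pooled = []
    for y0 in range(0, H - q + 1, delta):
        row = []
        for x0 in range(0, W - q + 1, delta):
            acc = [0.0] * D
            for y in range(y0, y0 + q):
                for x in range(x0, x0 + q):
                    for i in range(D):
                        acc[i] += enc_grid[y][x][i]
            row.append(acc)
        pooled.append(row)
    return pooled

# 4x4 grid of 2-D encodings, pooled with q = 2, delta = 2 -> 2x2 grid of sums.
grid = [[[1.0, 0.0] for _ in range(4)] for _ in range(4)]
out = semi_local_pool(grid, q=2, delta=2)
```

Smaller q and stride give denser, more local descriptors; larger q gives more spatial support, which is why the layer uses several window sizes.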
The high dimensionality of Fisher vectors, however, raises a computational complexity issue, as storing and processing thousands of dense FVs per image (each of which is 2K_l d_l-dimensional) is prohibitive at large scale. We tackle this problem by employing discriminative dimensionality reduction for the high-dimensional FVs, which makes the layer learning procedure supervised. The dimensionality reduction is carried out using a linear projection W_l onto an h_l-dimensional subspace. As will be shown in Sect. 5, compressed FVs can be computed very efficiently, without the need to compute the full-dimensional FVs first and then project them down. A similar approach (passing the output of a feature encoder to another encoder) has been previously employed by [1, 8, 32], but in their case using bag-of-words or sparse coding representations. As noted in [8], such encodings require large codebooks to produce discriminative feature representations. This, in turn, makes these approaches hardly applicable to datasets of ImageNet scale [4]. As explained in Sect. 2, FV encoders do not require large codebooks, and by employing supervised dimensionality reduction, we can preserve the discriminative ability of the FV even after the projection onto a low-dimensional space, similarly to [10].

Spatial stacking (sub-layer 2). After the dimensionality-reduced FV pooling (Sect. 3), an image is represented as a spatially dense set of low-dimensional multi-scale discriminative features. It should be noted that local sum-pooling, while making the representation invariant to small translations, is agnostic to the relative location of the aggregated features. To capture the spatial structure within each feature's neighbourhood, we incorporate the stacking sub-layer, which concatenates the spatially adjacent features in a 2 × 2 window (Fig. 2). This step is similar to the 4 × 4 stacking employed in SIFT.

Normalisation and PCA projection (sub-layer 3).
After stacking, the features are L2-normalised, which improves their invariance properties. This procedure is closely related to Local Contrast Normalisation, widely used in DNNs. Finally, before passing the features to the FV encoder of the next layer, PCA dimensionality reduction is carried out, which serves two purposes: (i) the features are decorrelated, so that they can be modelled using the diagonal-covariance GMMs of the next layer; (ii) the dimensionality is reduced from 4h_l to d_{l+1} to keep the image representation compact and the computational complexity limited.

Multi-scale computation. In practice, the Fisher layer computation is repeated at multiple scales by changing the pooling window size q_l (the PCA projection in sub-layer 3 is the same for all scales). This allows a single layer to capture multi-scale statistics, unlike typical DNN architectures, which use a single pooling window size per layer. The resulting dense multi-scale features, computed by the layer, form the input of the next layer (similarly to the dense multi-scale SIFT features). In Sect. 6 we show that a multi-scale Fisher layer indeed brings an improvement compared to a fixed pooling window size.

4 Fisher network

Our image classification pipeline, which we coin the Fisher network (shown in Fig. 1), is constructed by stacking several (at least one) Fisher layers (Sect. 3) on top of dense features, such as SIFT or raw image patches. The penultimate layer, which computes a single-vector image representation, is a special case of the Fisher layer, where sum-pooling is only performed globally over the whole image. We call this layer the global Fisher layer, and it effectively computes a full-dimensional normalised Fisher vector encoding (the dimensionality reduction stage is omitted, since the computed FV is directly used for classification). The final layer is an off-the-shelf ensemble of one-vs-rest binary linear SVMs.
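Sub-layers 2 and 3 (minus the PCA step) can be sketched as follows: the 2 × 2 stacking quadruples the feature dimensionality, and each stacked feature is then L2-normalised. The toy feature grid is hypothetical:

```python
import math

def stack_2x2(grid):
    # Concatenate the four spatially adjacent h-D features in each
    # 2x2 window into a single 4h-D feature (sub-layer 2).
    H, W = len(grid), len(grid[0])
    out = []
    for y in range(H - 1):
        row = []
        for x in range(W - 1):
            row.append(grid[y][x] + grid[y][x + 1]
                       + grid[y + 1][x] + grid[y + 1][x + 1])
        out.append(row)
    return out

def l2(v):
    # L2 normalisation (sub-layer 3, before the PCA projection).
    n = math.sqrt(sum(t * t for t in v)) or 1.0
    return [t / n for t in v]

# 3x3 grid of 2-D features -> 2x2 grid of 8-D stacked features.
grid = [[[float(y), float(x)] for x in range(3)] for y in range(3)]
stacked = stack_2x2(grid)
normed = [[l2(f) for f in row] for row in stacked]
```

The stacked feature keeps the four constituents in a fixed spatial order, which is what restores the relative-location information lost by sum-pooling.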
As can be seen, a Fisher network generalises the standard FV pipeline of [20], as the latter corresponds to the network with a single global Fisher layer. Multi-layer image descriptor. Each subsequent Fisher layer is designed to capture more complex, higher-level image statistics, but the competitive performance of shallow FV-based frameworks [21] suggests that low-level SIFT features are already discriminative enough to distinguish between a number of image classes. To fully exploit the hierarchy of Fisher layers, we branch out a globally pooled, normalised FV from each of the Fisher layers, not just the last one. These image representations are then concatenated to produce a rich, multi-layer image descriptor. A similar approach has previously been applied to convolutional networks in [25]. 4.1 Learning The Fisher network is trained in a supervised manner, since each Fisher layer (apart from the global layer) depends on discriminative dimensionality reduction. The network is trained greedily, layer by layer. Here we discuss how the (non-global) Fisher layer can be efficiently trained in the large-scale scenario, and introduce two options for the projection learning objective. Projection learning proxy. As explained in Sect. 3, we need to learn a discriminative projection W to significantly reduce the dimensionality of the densely-computed semi-local FVs. At the same time, the only annotation available for discriminative learning in our case is the class label of the whole image. We exploit this information by requiring that projected semi-local FVs are good predictors of the image class. 
Taking into account that (i) it may be unreasonable to require all local feature occurrences to predict the object class (the support of some features may not even cover the object), and (ii) there are too many features to use all of them in learning (∼10⁴ semi-local FVs for each of the ∼10⁶ training images), we optimise the average class prediction of all the features in a layer, rather than the predictions of individual feature occurrences. In particular, we construct a learning proxy by computing the average ψ of all unnormalised, unprojected semi-local FVs φ_s of an image, ψ = (1/S) Σ_{s=1}^{S} φ_s, and defining the learning constraints on ψ using the image label. Considering that Wψ = (1/S) Σ_{s=1}^{S} Wφ_s, the projection W, learnt for ψ, is also applicable to the individual semi-local FVs φ_s. The advantages of the proxy are that the image-level class annotation can now be utilised, and that during projection learning we only need to store a single vector ψ per image. In the sequel, we define two options for the projection learning objective, which are then compared in Sect. 6.

Bi-convex max-margin projection learning. One approach to discriminative dimensionality reduction learning consists in finding the projection onto a subspace where the image classes are as linearly separable as possible [10, 31]. This corresponds to the bilinear class scoring function v_c^T Wψ, where W is the linear projection which we seek to optimise, and v_c is the linear model (e.g. an SVM) of the class c in the projected space. The max-margin optimisation problem for W and the ensemble {v_c} takes the following form:

Σ_i Σ_{c′ ≠ c(i)} max[ (v_{c′} − v_{c(i)})^T W ψ_i + 1, 0 ] + (λ/2) Σ_c ∥v_c∥₂² + (µ/2) ∥W∥_F²,  (2)

where c(i) is the ground-truth class of an image i, and λ and µ are the regularisation constants. The learning objective is bi-convex in W and {v_c}, and a local optimum can be found by alternating between the convex problems for W and {v_c}, both of which can be solved in the primal using a stochastic sub-gradient method [27].
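The proxy rests on the linearity of the projection: W applied to the average ψ equals the average of the projected FVs. A quick numeric check with a hypothetical projection W and features φ_s (toy values, not real FVs):

```python
def matvec(W, v):
    # Matrix-vector product with W given as a list of rows.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def average(vs):
    # Element-wise mean of a list of equal-length vectors.
    S = len(vs)
    return [sum(col) / S for col in zip(*vs)]

# Hypothetical 2x3 projection and three unprojected semi-local FVs phi_s.
W = [[1.0, 0.0, 2.0],
     [0.0, 1.0, -1.0]]
phis = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0], [2.0, 0.0, 1.0]]

psi = average(phis)                           # proxy: average of the FVs
lhs = matvec(W, psi)                          # W @ psi
rhs = average([matvec(W, p) for p in phis])   # mean of projected FVs
```

Because lhs equals rhs, constraints imposed on the single vector ψ transfer to the individual semi-local FVs, which is what makes the proxy cheap to store and train on.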
We initialise the alternation by setting W to the PCA-whitening matrix W₀. Once the optimisation has converged, the classifiers v_c are discarded, and we keep the projection W.

Projection onto the space of classifier scores. Another dimensionality reduction technique, which we consider in this work, is to train one-vs-rest SVM classifiers {u_c}, c = 1, …, C, on the full-dimensional FVs ψ, and then use the C-dimensional vector of SVM outputs as the compressed representation of ψ. This corresponds to setting the c-th row of the projection matrix W to the SVM model u_c. This approach is closely related to attribute-based representations and classemes [15, 30], but in our case we do not use any additional data annotated with a different set of (attribute) classes to train the models; instead, the C = 1000 classifiers trained directly on the ILSVRC dataset are used. If a specific target dimensionality is required, PCA dimensionality reduction can be further applied to the classifier scores [10], but in our case we applied PCA after spatial stacking (Sect. 3). The advantage of using SVM models for dimensionality reduction is mostly computational. As we will show in Sect. 6, both formulations exhibit a similar level of performance, but training C one-vs-rest classifiers is much faster than alternating between SVM learning and projection learning in (2). The reason is that one-vs-rest SVM training can be easily parallelised, while projection learning is significantly slower, even when using a parallel gradient descent implementation.

5 Implementation details

Efficient computation of hard-assignment Fisher vectors. In the original FV encoding formulation (1), each feature is soft-assigned to all K Gaussians of the GMM by computing the assignment weights α_k(x_p) as the responsibilities of GMM component k for feature p: α_k(x_p) = π_k N_k(x_p) / Σ_j π_j N_j(x_p), where N_k(x_p) is the likelihood of the k-th Gaussian.
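The soft-assignment weights just defined can be computed directly; a toy sketch with a hypothetical 2-component, 1-D GMM (placeholder parameters, not a trained codebook):

```python
import math

def gauss(x, mu, var):
    # Diagonal-covariance Gaussian density, evaluated in log space
    # for numerical stability.
    logp = -0.5 * sum(((xi - mi) ** 2) / vi + math.log(2 * math.pi * vi)
                      for xi, mi, vi in zip(x, mu, var))
    return math.exp(logp)

def responsibilities(x, pis, mus, vars_):
    # Soft assignment: alpha_k(x) = pi_k N_k(x) / sum_j pi_j N_j(x).
    num = [p * gauss(x, m, v) for p, m, v in zip(pis, mus, vars_)]
    Z = sum(num)
    return [n / Z for n in num]

# A point at the first component's mean gets almost all of the weight.
alphas = responsibilities([0.0], pis=[0.5, 0.5],
                          mus=[[0.0], [5.0]], vars_=[[1.0], [1.0]])
```

The responsibilities always sum to one; the hard-assignment variant introduced next simply keeps the argmax component and zeroes the rest.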
To facilitate the efficient computation of a large number of dense FVs per image, we introduce and utilise a fast variant of FV (which we term hard-FV), which uses hard assignments of features to Gaussians, computed as

α_k(x_p) = 1 if k = argmax_j π_j N_j(x_p), and 0 otherwise.  (3)

Hard-FVs are inherently sparse; this allows for the fast computation of projected FVs W_l φ. Indeed, it is easy to show that W_l φ = Σ_{k=1}^{K} Σ_{p ∈ Ω_k} ( W_l^{(k,1)} Φ_k^{(1)}(p) + W_l^{(k,2)} Φ_k^{(2)}(p) ), where Ω_k is the set of input vectors hard-assigned to the GMM component k, and W_l^{(k,1)}, W_l^{(k,2)} are the sub-matrices of W_l which correspond to the 1st- and 2nd-order differences Φ_k^{(1)}(p), Φ_k^{(2)}(p) between the feature x_p and the k-th GMM mean (1). This suggests the following fast computation procedure: each d_l-dimensional input feature x_p is first hard-assigned to a Gaussian k based on (3). Then, the corresponding d_l-dimensional differences Φ_k^{(1)}(p), Φ_k^{(2)}(p) are computed and projected using the small h_l × d_l sub-matrices W_l^{(k,1)}, W_l^{(k,2)}, which is fast. The algorithm avoids computing the high-dimensional FVs, followed by projection using a large matrix W_l ∈ R^{h_l × 2K_l d_l}, which is prohibitive since the number of dense FVs is high.

Implementation. Our SIFT feature extraction follows that of [21]. Images are rescaled so that the number of pixels is 100K. Dense RootSIFT [2] is computed on 24 × 24 patches over 5 scales (scale factor ∛2) with a 3 pixel step. We also employ SIFT augmentation with the patch spatial coordinates [24]. During training, the high-dimensional FVs computed by the 2nd Fisher layer are compressed using product quantisation [23]. The learning framework is implemented in Matlab, sped up with C++ MEX. The computation is carried out on CPU, without the use of a GPU. Training the Fisher network on top of SIFT descriptors on the 1.2M images of the ILSVRC-2010 [4] dataset takes about one day on a 200-core cluster. Image classification time is ∼2s on a single core.
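The sparsity argument behind the hard-FV can be illustrated as follows: with hard assignments, only one per-component sub-matrix touches each feature, so the full FV is never materialised. A simplified sketch that keeps only the first-order block and uses nearest-mean assignment as a stand-in for (3); all parameters are hypothetical:

```python
def nearest(x, mus):
    # Hard assignment: index of the closest GMM mean (a simplified
    # stand-in for argmax_j pi_j N_j(x) with shared isotropic covariances).
    return min(range(len(mus)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(x, mus[k])))

def project_hard_fv(features, mus, W_blocks):
    # W_blocks[k] is the h x d sub-matrix acting on the k-th first-order
    # block; only that block is touched per feature, so the full
    # K*d-dimensional FV is never materialised.
    h = len(W_blocks[0])
    out = [0.0] * h
    for x in features:
        k = nearest(x, mus)
        diff = [a - b for a, b in zip(x, mus[k])]  # first-order difference
        for i in range(h):
            out[i] += sum(w * t for w, t in zip(W_blocks[k][i], diff))
    return out

mus = [[0.0, 0.0], [10.0, 10.0]]            # K = 2 means, d = 2
W_blocks = [[[1.0, 0.0]], [[1.0, 1.0]]]     # h = 1 per-component sub-matrices
feats = [[1.0, 2.0], [12.0, 9.0]]
proj = project_hard_fv(feats, mus, W_blocks)
```

The cost per feature is O(h·d) instead of O(h·K·d), which is what makes dense semi-local FVs affordable.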
6 Evaluation

In this section, we evaluate the proposed Fisher network on the dataset introduced for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2010 [4]. It contains images of 1000 categories, with 1.2M images available for training, 50K for validation, and 150K for testing. Following the standard evaluation protocol for the dataset, we report both top-1 and top-5 accuracy (%) computed on the test set. Sect. 6.1 evaluates variants of the Fisher network on a subset of ILSVRC to identify the best one. Then, Sect. 6.2 evaluates the complete framework.

6.1 Fisher network variants

We begin by comparing the performance of the Fisher network under different settings. The comparison is carried out on a subset of ILSVRC, which was obtained by randomly sampling 200 classes out of 1000. To avoid over-fitting indirectly on the test set, comparisons in this section are carried out on the validation set.

Table 1: Evaluation of dimensionality reduction, stacking, and normalisation sub-layers on a 200-class subset of ILSVRC-2010. The following configuration of Fisher layers was used: d_1 = 128, K_1 = 256, q_1 = 5, δ_1 = 1, h_1 = 200 (number of classes), d_2 = 200, K_2 = 256. The baseline performance of a shallow FV encoding is 57.03% and 78.9% (top-1 and top-5 accuracy).

dim-ty reduction  | stacking | L2 norm-n | top-1 | top-5
classifier scores |    ✓     |           | 59.69 | 80.29
classifier scores |          |     ✓     | 59.42 | 80.44
classifier scores |    ✓     |     ✓     | 60.22 | 80.93
bi-convex         |    ✓     |     ✓     | 59.49 | 81.11

Table 2: Evaluation of multi-scale pooling and multi-layer image description on the subset of ILSVRC-2010. The following configuration of Fisher layers was used: d_1 = 128, K_1 = 256, h_1 = 200, d_2 = 200, K_2 = 256. Both Fisher layers used spatial coordinate augmentation. The baseline performance of a shallow FV encoding is 59.51% and 80.50% (top-1 and top-5 accuracy).

pooling window size q_1 | pooling stride δ_1 | multi-layer | top-1 | top-5
5                       | 1                  |             | 61.56 | 82.21
{5, 7, 9, 11}           | 2                  |             | 62.16 | 82.43
{5, 7, 9, 11}           | 2                  |      ✓      | 63.79 | 83.73
In our experiments, we used SIFT as the first layer of the network, followed by two Fisher layers (the second one is global, as explained in Sect. 4).

Dimensionality reduction, stacking, and normalisation. Here we quantitatively assess the three sub-layers of a Fisher layer (Sect. 3). We compare the two proposed dimensionality reduction learning schemes (bi-convex learning and classifier scores), and also demonstrate the importance of spatial stacking and L2 normalisation. The results are shown in Table 1. As can be seen, both spatial stacking and L2 normalisation improve the performance, and dimensionality reduction via projection onto the space of SVM classifier scores performs on par with the projection learnt using the bi-convex formulation (2). In the following experiments we used the classifier scores for dimensionality reduction, since their training can be parallelised and is significantly faster.

Multi-scale pooling and multi-layer image representation. In this experiment, we compare the performance of semi-local FV pooling using single and multiple window sizes (Sect. 3), as well as single- and multi-layer image representations (Sect. 4). From Table 2 it is clear that using multiple pooling window sizes is beneficial compared to a single window size. When using multi-scale pooling, the pooling stride was increased to keep the number of pooled semi-local FVs roughly the same. Also, the multi-layer image descriptor, obtained by stacking globally pooled and normalised FVs computed by the two Fisher layers, outperforms each of these FVs taken separately. We also note that in this experiment, unlike the previous one, both Fisher layers utilised spatial coordinate augmentation of the input features, which leads to a noticeable boost in the shallow baseline performance (from 78.9% to 80.50% top-5 accuracy). Apart from our Fisher network, multi-scale pooling can be readily employed in convolutional networks.
6.2 Evaluation on ILSVRC-2010

Now that we have evaluated various Fisher layer configurations on a subset of ILSVRC, we assess the performance of our framework on the full ILSVRC-2010 dataset. We use off-the-shelf SIFT and colour features [20] in the feature extraction layer, and demonstrate that significant improvements can be achieved by injecting a single Fisher layer into the conventional FV-based pipeline [23]. The following configuration of Fisher layers was used: d_1 = 80, K_1 = 512, q_1 = {5, 7, 9, 11}, δ_1 = 2, h_1 = 1000, d_2 = 256, K_2 = 256. On both Fisher layers, we used spatial coordinate augmentation of the input features. The first Fisher layer uses a large number of GMM components K_l, since this was found to be beneficial for the shallow FV encodings [23], used here as a baseline. The one-vs-rest SVM scores were Platt-calibrated on the validation set (we did not use calibration for semi-local FV dimensionality reduction). The results are shown in Table 3. First, we note that the globally pooled Fisher vector branched out of the first Fisher layer (which effectively corresponds to the conventional FV encoding [23]) results in better accuracy than reported in [23], which validates our implementation. Using the 2nd Fisher layer on top of the 1st one leads to a significant performance improvement. Finally, stacking the FVs produced by the 1st and 2nd Fisher layers pushes the accuracy even further.

Table 3: Performance on ILSVRC-2010 using dense SIFT and colour features. We also specify the dimensionality of the SIFT-based image representations.

pipeline setting                        | dimension | SIFT only (top-1 / top-5) | SIFT & colour (top-1 / top-5)
1st Fisher layer                        | 82K       | 46.52 / 68.45             | 55.35 / 76.35
2nd Fisher layer                        | 131K      | 48.54 / 71.35             | 56.20 / 77.68
multi-layer (1st and 2nd Fisher layers) | 213K      | 52.57 / 73.68             | 59.47 / 79.20
Sánchez et al. [23]                     | 524K      | N/A / 67.9                | 54.3 / 74.3

The state of the art on the ILSVRC-2010 dataset was obtained using an 8-layer convolutional network [14], i.e.
twice as deep as the Fisher network considered here. Using training and test set augmentation based on jittering (not employed here), they achieved a top-1 / top-5 accuracy of 62.5% / 83.0%. Without test set augmentation (i.e. using only the original images for class scoring), their result is 61% / 81.7%. In our case, we augmented neither the training nor the test set, and achieved 59.5% / 79.2%. For reference, our baseline shallow FV accuracy is 55.4% / 76.4%. We conclude that injecting a single intermediate layer leads to a significant performance boost (+4.1% top-1 accuracy), but deep CNNs are still somewhat better (+1.5% top-1 accuracy). These results are nevertheless quite encouraging, since they were obtained by using off-the-shelf features and encodings, reconfigured to add a single intermediate layer. Notably, our model did not require an optimised GPU implementation, nor was it necessary to control over-fitting by techniques such as dropout [14] or training set augmentation. Finally, we demonstrate that the Fisher network and deep CNN representations are complementary, by combining the class posteriors obtained from the CNN with those of a Fisher network. To this end, we re-implemented the deep CNN of [14] using their publicly available cuda-convnet toolbox. Our implementation performs slightly better, giving 62.91% / 83.19% (with test set augmentation). The multiplication of the CNN and Fisher network posteriors leads to a significantly improved accuracy: 66.75% / 85.64%. It should be noted that another way of improving the CNN accuracy, used in [14] on the ImageNet-2012 dataset, is to train several CNNs and average their posteriors. Further study of the complementarity of various deep and shallow representations is beyond the scope of this paper, and will be addressed in future research.
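The posterior combination used above is an element-wise product followed by renormalisation; a toy sketch with hypothetical 3-class posteriors:

```python
def combine_posteriors(p_a, p_b):
    # Multiply per-class posteriors from two models and renormalise;
    # the predicted class is the argmax of the product.
    prod = [a * b for a, b in zip(p_a, p_b)]
    Z = sum(prod)
    return [p / Z for p in prod]

# Hypothetical posteriors from the two models over 3 classes.
p_cnn = [0.5, 0.3, 0.2]
p_fn  = [0.2, 0.5, 0.3]
p = combine_posteriors(p_cnn, p_fn)
pred = max(range(len(p)), key=p.__getitem__)
```

The product favours classes on which both models agree, which is why it benefits from complementary representations.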
7 Conclusion

We have shown that Fisher vectors, a standard image encoding method, can be stacked in multiple layers, in analogy to the state-of-the-art deep neural network architectures. Adding a single layer is in fact sufficient to significantly boost the performance of these shallow image encodings, bringing their performance closer to the state of the art in the large-scale classification scenario [14]. The fact that off-the-shelf image representations can be simply and successfully stacked indicates that deep schemes may extend well beyond neural networks.

Acknowledgements This work was supported by ERC grant VisRec no. 228180.

References

[1] A. Agarwal and B. Triggs. Hyperfeatures - multilevel local coding for visual recognition. In Proc. ECCV, pages 30–43, 2006.
[2] R. Arandjelović and A. Zisserman. Three things everyone should know to improve object retrieval. In Proc. CVPR, 2012.
[3] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, pages 153–160, 2006.
[4] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge (ILSVRC), 2010. URL http://www.image-net.org/challenges/LSVRC/2010/.
[5] J. Carreira, R. Caseiro, J. Batista, and C. Sminchisescu. Semantic segmentation with second-order pooling. In Proc. ECCV, pages 430–443, 2012.
[6] K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In Proc. BMVC., 2011.
[7] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Proc. CVPR, pages 3642–3649, 2012.
[8] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011.
[9] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, pages 1–22, 2004.
[10] A. Gordo, J. A.
Rodríguez-Serrano, F. Perronnin, and E. Valveny. Leveraging category-level labels for instance-level image retrieval. In Proc. CVPR, pages 3045–3052, 2012.
[11] B. Hariharan, J. Malik, and D. Ramanan. Discriminative decorrelation for clustering and classification. In Proc. ECCV, 2012.
[12] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[13] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In NIPS, pages 487–493, 1998.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[15] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In Proc. CVPR, pages 951–958, 2009.
[16] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. In Proc. CVPR, 2006.
[17] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proc. ICML, 2012.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[19] D. Lowe. Object recognition from local scale-invariant features. In Proc. ICCV, pages 1150–1157, Sep 1999.
[20] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
[21] F. Perronnin, Z. Akata, Z. Harchaoui, and C. Schmid. Towards good practice in large-scale learning for image classification. In Proc. CVPR, pages 3482–3489, 2012.
[22] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[23] J. Sánchez and F. Perronnin.
High-dimensional signature compression for large-scale image classification. In Proc. CVPR, 2011.
[24] J. Sánchez, F. Perronnin, and T. Emídio de Campos. Modeling the spatial layout of images beyond spatial pyramids. Pattern Recognition Letters, 33(16):2216–2223, 2012.
[25] P. Sermanet and Y. LeCun. Traffic sign recognition with multi-scale convolutional networks. In International Joint Conference on Neural Networks, pages 2809–2813, 2011.
[26] T. Serre, L. Wolf, and T. Poggio. A new biologically motivated framework for robust object recognition. In Proc. CVPR, 2005.
[27] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proc. ICML, volume 227, 2007.
[28] K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman. Fisher Vector Faces in the Wild. In Proc. BMVC., 2013.
[29] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In Proc. ICCV, volume 2, pages 1470–1477, 2003.
[30] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient object category recognition using classemes. In Proc. ECCV, pages 776–789, Sep 2010.
[31] J. Weston, S. Bengio, and N. Usunier. WSABIE: Scaling up to large vocabulary image annotation. In Proc. IJCAI, pages 2764–2770, 2011.
[32] S. Yan, X. Xu, D. Xu, S. Lin, and X. Li. Beyond spatial pyramids: A new feature extraction framework with dense spatial sampling for image classification. In Proc. ECCV, pages 473–487, 2012.
[33] J. Yang, K. Yu, Y. Gong, and T. S. Huang. Linear spatial pyramid matching using sparse coding for image classification. In Proc. CVPR, pages 1794–1801, 2009.
Linear Convergence with Condition Number Independent Access of Full Gradients

Lijun Zhang, Mehrdad Mahdavi, Rong Jin
Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA
{zhanglij,mahdavim,rongjin}@msu.edu

Abstract

For smooth and strongly convex optimizations, the optimal iteration complexity of the gradient-based algorithm is O(√κ log(1/ǫ)), where κ is the condition number. In the case that the optimization problem is ill-conditioned, we need to evaluate a large number of full gradients, which could be computationally expensive. In this paper, we propose to remove the dependence on the condition number by allowing the algorithm to access stochastic gradients of the objective function. To this end, we present a novel algorithm named Epoch Mixed Gradient Descent (EMGD) that is able to utilize two kinds of gradients. A distinctive step in EMGD is the mixed gradient descent, where we use a combination of the full and stochastic gradients to update the intermediate solution. Theoretical analysis shows that EMGD is able to find an ǫ-optimal solution by computing O(log(1/ǫ)) full gradients and O(κ² log(1/ǫ)) stochastic gradients.

1 Introduction

Convex optimization has become a tool central to many areas of engineering and applied sciences, such as signal processing [20] and machine learning [24]. The problem of convex optimization is typically given as min_{w∈W} F(w), where W is a convex domain and F(·) is a convex function. In most cases, the optimization algorithm for solving the above problem is an iterative process, and the convergence rate is characterized by the iteration complexity, i.e., the number of iterations needed to find an ǫ-optimal solution [3, 17]. In this study, we focus on first-order methods, where we only have access to the (stochastic) gradient of the objective function. For most convex optimization problems, the iteration complexity of an optimization algorithm depends on the following two factors:

1.
The analytical properties of the objective function. For example, is F(·) smooth or strongly convex?

2. The information that can be elicited about the objective function. For example, do we have access to the full gradient or the stochastic gradient of F(·)?

The optimal iteration complexities for some popular combinations of the above two factors are summarized in Table 1 and elaborated in the related work section. We observe that when the objective function is smooth (and strongly convex), the convergence rate for full gradients is much faster than that for stochastic gradients. On the other hand, the evaluation of a stochastic gradient is usually significantly more efficient than that of a full gradient. Thus, replacing full gradients with stochastic gradients essentially trades the number of iterations for a low computational cost per iteration.

Table 1: The optimal iteration complexity of convex optimization. L and λ are the moduli of smoothness and strong convexity, respectively. κ = L/λ is the condition number.

                    | Lipschitz continuous | Smooth   | Smooth & Strongly Convex
Full Gradient       | O(1/ǫ²)              | O(L/√ǫ)  | O(√κ log(1/ǫ))
Stochastic Gradient | O(1/ǫ²)              | O(1/ǫ²)  | O(1/(λǫ))

In this work, we consider the case when the objective function is both smooth and strongly convex, where the optimal iteration complexity is O(√κ log(1/ǫ)) if the optimization method is first order and has access to the full gradients [17]. For optimization problems that are ill-conditioned, the condition number κ can be very large, leading to many evaluations of full gradients, an operation that is computationally expensive for large data sets. To reduce the computational cost, we are interested in the possibility of making the number of full gradients required independent of κ. Although the O(√κ log(1/ǫ)) rate is in general not improvable for any first-order method, we bypass this difficulty by allowing the algorithm to have access to both full and stochastic gradients.
Our objective is to reduce the iteration complexity from O(√κ log(1/ǫ)) to O(log(1/ǫ)) by replacing most of the evaluations of full gradients with evaluations of stochastic gradients. Under the assumption that stochastic gradients can be computed efficiently, this tradeoff could lead to a significant improvement in computational efficiency. To this end, we developed a novel optimization algorithm named Epoch Mixed Gradient Descent (EMGD). It divides the optimization process into a sequence of epochs, an idea that is borrowed from the epoch gradient descent [9]. At each epoch, the proposed algorithm performs mixed gradient descent by evaluating one full gradient and O(κ²) stochastic gradients. It achieves a constant reduction in the optimization error for every epoch, leading to a linear convergence rate. Our analysis shows that EMGD is able to find an ǫ-optimal solution by computing O(log(1/ǫ)) full gradients and O(κ² log(1/ǫ)) stochastic gradients. In other words, with the help of stochastic gradients, the number of full gradients required is reduced from O(√κ log(1/ǫ)) to O(log(1/ǫ)), independent of the condition number.

2 Related Work

During the last three decades, there have been significant advances in convex optimization [3, 15, 17]. In this section, we provide a brief review of first-order optimization methods. We first discuss deterministic optimization, where the gradient of the objective function is available. For the general convex and Lipschitz continuous optimization problem, the iteration complexity of gradient (subgradient) descent is O(1/ǫ²), which is optimal up to constant factors [15]. When the objective function is convex and smooth, the optimal optimization scheme is the accelerated gradient descent developed by Nesterov, whose iteration complexity is O(L/√ǫ) [16, 18].
With slight modifications, the accelerated gradient descent algorithm can also be applied to optimize a smooth and strongly convex objective function, whose iteration complexity is O(√κ log(1/ε)) and is in general not improvable [17, 19]. The objective of our work is to reduce the number of accesses to the full gradients by exploiting the availability of stochastic gradients. In stochastic optimization, we have access to the stochastic gradient, which is an unbiased estimate of the full gradient [14]. Similar to the case in deterministic optimization, if the objective function is convex and Lipschitz continuous, stochastic gradient (subgradient) descent is the optimal algorithm and the iteration complexity is also O(1/ε²) [14, 15]. When the objective function is λ-strongly convex, the algorithms proposed in very recent works [9, 10, 21, 26] achieve the optimal O(1/(λε)) iteration complexity [1]. Since the convergence rate of stochastic optimization is dominated by the randomness in the gradient [6, 11], smoothness usually does not lead to a faster convergence rate for stochastic optimization. A variant of stochastic optimization is the "semi-stochastic" approximation, which interleaves stochastic gradient descent and full gradient descent [12]. In the strongly convex case, if the stochastic gradients are taken at a decreasing rate, the convergence rate can be improved to approach O(1/(λ√ε)) [13].

From the above discussion, we observe that the iteration complexity in stochastic optimization is polynomial in 1/ε, making it difficult to find high-precision solutions. However, when the objective function is strongly convex and can be written as a sum of a finite number of functions, i.e.,

F(w) = (1/n) Σ_{i=1}^n f_i(w),   (1)

where each f_i(·) is smooth, the iteration complexity of some specific algorithms may exhibit a logarithmic dependence on 1/ε, i.e., a linear convergence rate.
Two very recent examples are the stochastic average gradient (SAG) method [22], whose iteration complexity is O(n log(1/ε)) provided n ≥ 8κ, and stochastic dual coordinate ascent (SDCA) [23], whose iteration complexity is O((n + κ) log(1/ε)).¹ Under appropriate conditions, the incremental gradient method [2] and the hybrid method [5] can also minimize the function in (1) with a linear convergence rate. But those algorithms usually treat one pass over all the f_i's (or a subset of the f_i's) as one iteration, and thus have a high computational cost per iteration.

3 Epoch Mixed Gradient Descent

3.1 Preliminaries

In this paper, we assume there exist two oracles.

1. The first one is a gradient oracle O_g, which for a given input point w ∈ W returns the gradient ∇F(w), that is, O_g(w) = ∇F(w).

2. The second one is a function oracle O_f, each call of which returns a random function f(·) such that F(w) = E_f[f(w)] for all w ∈ W, and f(·) is L-smooth, that is,

∥∇f(w) − ∇f(w′)∥ ≤ L∥w − w′∥, ∀w, w′ ∈ W.   (2)

Although we do not define a stochastic gradient oracle directly, the function oracle O_f allows us to evaluate the stochastic gradient of F(·) at any point w ∈ W. Notice that the assumption about the function oracle O_f implies that the objective function F(·) is also L-smooth. Since ∇F(w) = E_f[∇f(w)], by Jensen's inequality we have

∥∇F(w) − ∇F(w′)∥ ≤ E_f∥∇f(w) − ∇f(w′)∥ ≤ L∥w − w′∥, ∀w, w′ ∈ W,   (3)

where the second inequality follows from (2). Besides, we further assume F(·) is λ-strongly convex, that is,

∥∇F(w) − ∇F(w′)∥ ≥ λ∥w − w′∥, ∀w, w′ ∈ W.   (4)

From (3) and (4), it is obvious that L ≥ λ. The condition number κ is defined as the ratio between them, i.e., κ = L/λ ≥ 1.

3.2 The Algorithm

The detailed steps of the proposed Epoch Mixed Gradient Descent (EMGD) are shown in Algorithm 1, where we use the superscript for the index of epochs and the subscript for the index of iterations within each epoch. We denote by B(x; r) the ℓ2 ball of radius r around the point x.
Similar to epoch gradient descent (EGD) [9], we divide the optimization process into a sequence of epochs (steps 3 to 10). While the number of accesses to the gradient oracle in EGD increases exponentially over the epochs, the number of accesses to the two oracles in EMGD is fixed.

¹In order to apply SDCA, we need to assume each function f_i is λ-strongly convex, so that we can rewrite f_i(w) as g_i(w) + (λ/2)∥w∥², where g_i(w) = f_i(w) − (λ/2)∥w∥² is convex.

Algorithm 1 Epoch Mixed Gradient Descent (EMGD)
Input: step size η, the initial domain size ∆₁, the number of iterations T per epoch, and the number of epochs m
 1: Initialize w̄¹ = 0
 2: for k = 1, . . . , m do
 3:   Set w^k_1 = w̄^k
 4:   Call the gradient oracle O_g to obtain ∇F(w̄^k)
 5:   for t = 1, . . . , T do
 6:     Call the function oracle O_f to obtain a random function f^k_t(·)
 7:     Compute the mixed gradient as g̃^k_t = ∇F(w̄^k) + ∇f^k_t(w^k_t) − ∇f^k_t(w̄^k)
 8:     Update the solution by w^k_{t+1} = argmin_{w ∈ W ∩ B(w̄^k; ∆_k)} η⟨w − w^k_t, g̃^k_t⟩ + (1/2)∥w − w^k_t∥²
 9:   end for
10:   Set w̄^{k+1} = (1/(T+1)) Σ_{t=1}^{T+1} w^k_t and ∆_{k+1} = ∆_k/√2
11: end for
Return w̄^{m+1}

At the beginning of each epoch, we initialize the solution w^k_1 to be the average solution w̄^k obtained from the previous epoch, and then call the gradient oracle O_g to obtain ∇F(w̄^k). At each iteration t of epoch k, we call the function oracle O_f to obtain a random function f^k_t(·) and define the mixed gradient at the current solution w^k_t as

g̃^k_t = ∇F(w̄^k) + ∇f^k_t(w^k_t) − ∇f^k_t(w̄^k),

which involves both the full gradient and the stochastic gradient. The mixed gradient can be divided into two parts: the deterministic part ∇F(w̄^k) and the stochastic part ∇f^k_t(w^k_t) − ∇f^k_t(w̄^k). Due to the smoothness of f^k_t(·) and the shrinkage of the domain size, the norm of the stochastic part is well bounded, which is the reason why our algorithm can achieve linear convergence.
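To make the epoch structure concrete, here is a minimal sketch of Algorithm 1 on a toy least-squares instance. It assumes W = R^d, so the step-8 update reduces to a Euclidean projection of w^k_t − η g̃^k_t onto the ball B(w̄^k; ∆_k); the problem instance, step size, and schedule are illustrative choices, not the paper's prescription.

```python
import numpy as np

# Toy finite-sum problem: F(w) = (1/2n)||Aw - b||^2, with component
# functions f_i(w) = (1/2)(a_i^T w - b_i)^2 played by the oracle O_f.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.01 * rng.standard_normal(n)

def full_grad(w):                 # gradient oracle O_g
    return A.T @ (A @ w - b) / n

def comp_grad(w, i):              # gradient of the sampled component f_i
    return A[i] * (A[i] @ w - b[i])

def project_ball(w, center, radius):
    # Euclidean projection onto B(center; radius); with W = R^d the
    # step-8 argmin equals this projection of the gradient step.
    diff = w - center
    nrm = np.linalg.norm(diff)
    return w if nrm <= radius else center + diff * (radius / nrm)

def emgd(eta, delta1, T, m):
    w_bar, delta = np.zeros(d), delta1
    for _ in range(m):
        g_full = full_grad(w_bar)            # one full gradient per epoch
        w = w_bar.copy()
        iterates = [w.copy()]
        for _ in range(T):                   # T mixed-gradient steps
            i = rng.integers(n)
            g_mixed = g_full + comp_grad(w, i) - comp_grad(w_bar, i)
            w = project_ball(w - eta * g_mixed, w_bar, delta)
            iterates.append(w.copy())
        w_bar = np.mean(iterates, axis=0)    # average of the T + 1 iterates
        delta /= np.sqrt(2)                  # shrink the search ball
    return w_bar

w_star = np.linalg.lstsq(A, b, rcond=None)[0]
w_hat = emgd(eta=0.02, delta1=5.0, T=500, m=10)
```

Because the stochastic part ∇f_i(w) − ∇f_i(w̄) vanishes as w approaches w̄, the noise shrinks together with the ball, which is what the shrinking-domain schedule exploits.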
Based on the mixed gradient, we update w^k_t by a gradient mapping over a shrinking domain (i.e., W ∩ B(w̄^k; ∆_k)) in step 8. Since the update is similar to standard gradient descent except for the domain constraint, we refer to it as mixed gradient descent for short. At the end of epoch k, we compute the average of the T + 1 iterates, instead of T iterates, and shrink the domain size by a factor of √2.

3.3 The Convergence Rate

The following theorem shows the convergence rate of the proposed algorithm.

Theorem 1. Assume δ ≤ e^{−1/2}, T ≥ (1152 L²/λ²) ln(1/δ), and

∆₁ ≥ √((2/λ)(F(0) − F(w*))).   (5)

Set η = 1/(L√T). Let w̄^{m+1} be the solution returned by Algorithm 1 after m epochs, which makes m accesses to oracle O_g and mT accesses to oracle O_f. Then, with probability at least 1 − mδ, we have

F(w̄^{m+1}) − F(w*) ≤ λ∆₁²/2^{m+1}, and ∥w̄^{m+1} − w*∥² ≤ ∆₁²/2^m.

Theorem 1 immediately implies that EMGD is able to achieve an ε optimization error by computing O(log(1/ε)) full gradients and O(κ² log(1/ε)) stochastic gradients.

Table 2: The computational complexity for minimizing (1/n) Σ_{i=1}^n f_i(w)

Nesterov's algorithm [17]   EMGD                  SAG (n ≥ 8κ) [22]   SDCA [23]
O(√κ n log(1/ε))            O((n + κ²) log(1/ε))  O(n log(1/ε))       O((n + κ) log(1/ε))

3.4 Comparisons

Compared to the optimization algorithms that rely only on full gradients [17], the number of full gradients needed in EMGD is O(log(1/ε)) instead of O(√κ log(1/ε)). Compared to the optimization algorithms that rely only on stochastic gradients [9, 10, 21], EMGD is more efficient since it achieves a linear convergence rate. The proposed EMGD algorithm can also be applied to the special optimization problem considered in [22, 23], where F(w) = (1/n) Σ_{i=1}^n f_i(w). To make quantitative comparisons, let us assume the full gradient is n times more expensive to compute than the stochastic gradient. Table 2 lists the computational complexities of the algorithms that enjoy linear convergence.
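Taking the entries of Table 2 at face value (one full gradient costing n stochastic gradients, and all constants and the common log(1/ε) factor set to 1), the crossover regimes can be sanity-checked numerically; the helper names below are purely illustrative.

```python
import math

# Computational cost per log(1/eps) factor, in units of one stochastic gradient.
def cost_nesterov(n, kappa):   # O(sqrt(kappa) * n)
    return math.sqrt(kappa) * n

def cost_emgd(n, kappa):       # O(n + kappa^2)
    return n + kappa ** 2

def cost_sag(n, kappa):        # O(n), valid only when n >= 8 * kappa
    return n

def cost_sdca(n, kappa):       # O(n + kappa)
    return n + kappa

n = 10 ** 6
k_small, k_large = 100, n      # kappa = n^{1/3} vs. kappa = n
# Well-conditioned (kappa well below n^{2/3}): EMGD beats the full gradient method.
emgd_wins = cost_emgd(n, k_small) < cost_nesterov(n, k_small)
# Badly conditioned (kappa ~ n): the kappa^2 term dominates and EMGD loses to SDCA.
emgd_loses = cost_emgd(n, k_large) > cost_sdca(n, k_large)
```

This mirrors the text: EMGD is competitive when κ ≤ n^{2/3}, and on the same order as SAG and SDCA only when κ ≤ n^{1/2}.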
As can be seen, the computational complexity of EMGD is lower than Nesterov's algorithm [17] as long as the condition number κ ≤ n^{2/3}, the complexity of SAG [22] is lower than Nesterov's algorithm if κ ≤ n/8, and the complexity of SDCA [23] is lower than Nesterov's algorithm if κ ≤ n².² The complexity of EMGD is on the same order as SAG and SDCA when κ ≤ n^{1/2}, but higher in the other cases. Thus, in terms of computational cost, EMGD may not be the best one, but it has advantages in other aspects.

1. Unlike SAG and SDCA, which only work for unconstrained optimization problems, the proposed algorithm works for both constrained and unconstrained optimization problems, provided that the constrained problem in step 8 can be solved efficiently.

2. Unlike SAG and SDCA, which require Ω(n) storage space, the proposed algorithm only requires Ω(d) storage space, where d is the dimension of w.

3. The only step in Algorithm 1 that depends on n is step 4, which computes the gradient ∇F(w̄^k). By utilizing distributed computing, the running time of this step can be reduced to O(n/k), where k is the number of computers, and the convergence rate remains the same. For SAG and SDCA, it is unclear whether they can reduce the running time without affecting the convergence rate.

4. The linear convergence of SAG and SDCA only holds in expectation, whereas the linear convergence of EMGD holds with high probability, which is much stronger.

4 The Analysis

In the proof, we frequently use the following property of strongly convex functions [9].

Lemma 1. Let f(x) be a λ-strongly convex function over the domain X, and x* = argmin_{x∈X} f(x). Then, for any x ∈ X, we have

f(x) − f(x*) ≥ (λ/2)∥x − x*∥².   (6)

4.1 The Main Idea

The proof of Theorem 1 is by induction.
From the assumption about ∆₁ in (5), we have

F(w̄¹) − F(w*) ≤ λ∆₁²/2, and ∥w̄¹ − w*∥² ≤ ∆₁²,

where the first inequality follows from (5) and the second from (5) and (6), which means Theorem 1 is true for m = 0. Suppose Theorem 1 is true for m = k. That is, with probability at least 1 − kδ, we have

F(w̄^{k+1}) − F(w*) ≤ λ∆₁²/2^{k+1}, and ∥w̄^{k+1} − w*∥² ≤ ∆₁²/2^k.

Our goal is to show that after running the (k+1)-th epoch, with probability at least 1 − (k+1)δ, we have

F(w̄^{k+2}) − F(w*) ≤ λ∆₁²/2^{k+2}, and ∥w̄^{k+2} − w*∥² ≤ ∆₁²/2^{k+1}.

²In machine learning, we usually face a regularized optimization problem min_{w∈W} (1/n) Σ_{i=1}^n ℓ(y_i; x_i⊤w) + (τ/2)∥w∥², where ℓ(·; ·) is some loss function. When the norm of the data is bounded, the smoothness parameter L can be treated as a constant. The strong convexity parameter λ is lower bounded by τ. Thus, as long as τ > Ω(n^{−2/3}), which is a reasonable scenario [25], we have κ < O(n^{2/3}), indicating our proposed EMGD can be applied.

4.2 The Details

For simplicity of presentation, we drop the epoch index k. Let w̄ be the solution obtained from epoch k. Given the condition

F(w̄) − F(w*) ≤ (λ/2)∆², and ∥w̄ − w*∥² ≤ ∆²,   (7)

we will show that after running the T iterations in one epoch, the new solution, denoted by ŵ, satisfies

F(ŵ) − F(w*) ≤ (λ/4)∆², and ∥ŵ − w*∥² ≤ (1/2)∆²,   (8)

with probability at least 1 − δ. Define

g = ∇F(w̄), F̂(w) = F(w) − ⟨w, g⟩, and g_t(w) = f_t(w) − ⟨w, ∇f_t(w̄)⟩.   (9)

The objective function can be rewritten as

F(w) = ⟨w, g⟩ + F̂(w),   (10)

and the mixed gradient can be rewritten as g̃_t = g + ∇g_t(w_t). Then, the updating rule given in Algorithm 1 becomes

w_{t+1} = argmin_{w ∈ W ∩ B(w̄; ∆)} η⟨w − w_t, g + ∇g_t(w_t)⟩ + (1/2)∥w − w_t∥².   (11)

Notice that the objective function in (11) is 1-strongly convex. Using the fact that w* ∈ W ∩ B(w̄; ∆) and Lemma 1 (with x* = w_{t+1} and x = w*), we have

η⟨w_{t+1} − w_t, g + ∇g_t(w_t)⟩ + (1/2)∥w_{t+1} − w_t∥² ≤ η⟨w* − w_t, g + ∇g_t(w_t)⟩ + (1/2)∥w* − w_t∥² − (1/2)∥w* − w_{t+1}∥².
(12)

For each iteration t in the current epoch, we have

F(w_t) − F(w*) ≤ ⟨∇F(w_t), w_t − w*⟩ − (λ/2)∥w_t − w*∥²   [by (4)]
= ⟨g + ∇g_t(w_t), w_t − w*⟩ + ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩ − (λ/2)∥w_t − w*∥²,   (13)

and, by (12),

⟨g + ∇g_t(w_t), w_t − w*⟩
≤ ⟨g + ∇g_t(w_t), w_t − w_{t+1}⟩ + ∥w_t − w*∥²/(2η) − ∥w_{t+1} − w*∥²/(2η) − ∥w_t − w_{t+1}∥²/(2η)
≤ ⟨g, w_t − w_{t+1}⟩ + ∥w_t − w*∥²/(2η) − ∥w_{t+1} − w*∥²/(2η) + max_w { ⟨∇g_t(w_t), w_t − w⟩ − ∥w_t − w∥²/(2η) }
= ⟨g, w_t − w_{t+1}⟩ + ∥w_t − w*∥²/(2η) − ∥w_{t+1} − w*∥²/(2η) + (η/2)∥∇g_t(w_t)∥².   (14)

Combining (13) and (14), we have

F(w_t) − F(w*) ≤ ∥w_t − w*∥²/(2η) − ∥w_{t+1} − w*∥²/(2η) − (λ/2)∥w_t − w*∥² + ⟨g, w_t − w_{t+1}⟩ + (η/2)∥∇g_t(w_t)∥² + ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩.

Adding the inequalities over all iterations, we have

Σ_{t=1}^T (F(w_t) − F(w*)) ≤ ∥w̄ − w*∥²/(2η) − ∥w_{T+1} − w*∥²/(2η) − (λ/2) Σ_{t=1}^T ∥w_t − w*∥² + ⟨g, w̄ − w_{T+1}⟩ + (η/2) A_T + B_T,   (15)

where A_T := Σ_{t=1}^T ∥∇g_t(w_t)∥² and B_T := Σ_{t=1}^T ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩. Since F(·) is L-smooth, we have

F(w_{T+1}) − F(w̄) ≤ ⟨∇F(w̄), w_{T+1} − w̄⟩ + (L/2)∥w̄ − w_{T+1}∥²,

which implies

⟨g, w̄ − w_{T+1}⟩ ≤ F(w̄) − F(w_{T+1}) + (L/2)∆² ≤ F(w*) − F(w_{T+1}) + (λ/2)∆² + (L/2)∆² ≤ F(w*) − F(w_{T+1}) + L∆²,   (16)

where the second inequality follows from (7). From (15) and (16), we have

Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ ∆²(1/(2η) + L) + (η/2) A_T + B_T.   (17)

Next, we consider how to bound A_T and B_T. The upper bound of A_T follows from (2):

A_T = Σ_{t=1}^T ∥∇g_t(w_t)∥² = Σ_{t=1}^T ∥∇f_t(w_t) − ∇f_t(w̄)∥² ≤ L² Σ_{t=1}^T ∥w_t − w̄∥² ≤ TL²∆².   (18)

To bound B_T, we need the Hoeffding–Azuma inequality stated below [4].

Lemma 2. Let V₁, V₂, . . . be a martingale difference sequence with respect to some sequence X₁, X₂, . . . such that V_i ∈ [A_i, A_i + c_i] for some random variable A_i, measurable with respect to X₁, . . . , X_{i−1}, and a positive constant c_i. If S_n = Σ_{i=1}^n V_i, then for any t > 0,

Pr[S_n > t] ≤ exp(−2t² / Σ_{i=1}^n c_i²).

Define V_t = ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩, t = 1, . . . , T. Recall the definitions of F̂(·) and g_t(·) in (9). Based on our assumption about the function oracle O_f, it is straightforward to check that V₁, . . . is a martingale difference sequence with respect to g₁, . . .. The value of V_t can be bounded by

|V_t| ≤ ∥∇F̂(w_t) − ∇g_t(w_t)∥ ∥w_t − w*∥ ≤ 2∆(∥∇F(w_t) − ∇F(w̄)∥ + ∥∇f_t(w_t) − ∇f_t(w̄)∥) ≤ 4L∆∥w_t − w̄∥ ≤ 4L∆²,

where the last two inequalities use (2) and (3). Following Lemma 2, with probability at least 1 − δ, we have

B_T ≤ 4L∆² √(2T ln(1/δ)).   (19)

Adding the inequalities (17), (18), and (19) together, with probability at least 1 − δ, we have

Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ ∆²(1/(2η) + L + ηTL²/2 + 4L√(2T ln(1/δ))).

By choosing η = 1/(L√T), we have

Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ L∆²(√T + 1 + 4√(2T ln(1/δ))) ≤ 6L∆² √(2T ln(1/δ)),   (20)

where the second inequality uses the condition δ ≤ e^{−1/2} in (5). By Jensen's inequality and (20), we have

F(ŵ) − F(w*) ≤ (1/(T+1)) Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ 6L∆² √(2 ln(1/δ)) / √(T+1),

and therefore, by (6),

∥ŵ − w*∥² ≤ (2/λ)(F(ŵ) − F(w*)) ≤ 12L∆² √(2 ln(1/δ)) / (λ√(T+1)).

Thus, when T ≥ (1152 L²/λ²) ln(1/δ), with probability at least 1 − δ, we have

F(ŵ) − F(w*) ≤ (λ/4)∆², and ∥ŵ − w*∥² ≤ (1/2)∆².

5 Conclusion and Future Work

In this paper, we consider how to reduce the number of full gradients needed for smooth and strongly convex optimization problems. Under the assumption that both the gradient and the stochastic gradient are available, a novel algorithm named Epoch Mixed Gradient Descent (EMGD) is proposed. Theoretical analysis shows that, with the help of stochastic gradients, we are able to reduce the number of full gradients needed from O(√κ log(1/ε)) to O(log(1/ε)). In the case that the objective function is of the form (1), i.e., a sum of n smooth functions, EMGD has lower computational cost than the full gradient method [17] if the condition number κ ≤ n^{2/3}. In practice, a drawback of EMGD is that it requires the condition number κ to be known beforehand. We will investigate how to find a good estimate of κ in future work. When the objective function is a sum of some special functions, such as the square loss (i.e., (y_i − x_i⊤w)²), we can estimate the condition number by sampling.
In particular, the Hessian matrix estimated from a subset of the functions, combined with concentration inequalities for matrices [7], can be used to bound the eigenvalues of the true Hessian matrix and consequently κ. Furthermore, if there exists a strongly convex regularizer in the objective function, which happens in many machine learning problems [8], the knowledge of the regularizer itself allows us to find an upper bound on κ.

Acknowledgments

This work is partially supported by ONR Award N000141210431 and NSF (IIS-1251031).

References

[1] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012.
[2] D. P. Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4):913–926, 1997.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] M. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.
[6] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization I: a generic algorithmic framework. SIAM Journal on Optimization, 22(4):1469–1492, 2012.
[7] A. Gittens and J. A. Tropp. Tail bounds for all eigenvalues of a sum of random matrices. ArXiv e-prints, arXiv:1104.4513, 2011.
[8] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York, 2009.
[9] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proceedings of the 24th Annual Conference on Learning Theory, pages 421–436, 2011.
[10] A. Juditsky and Y.
Nesterov. Primal-dual subgradient methods for minimizing uniformly convex functions. Technical report, 2010.
[11] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133:365–397, 2012.
[12] K. Marti. On solutions of stochastic programming problems by descent procedures with stochastic and deterministic directions. Methods of Operations Research, 33:281–293, 1979.
[13] K. Marti and E. Fuchs. Rates of convergence of semi-stochastic approximation procedures for solving stochastic optimization problems. Optimization, 17(2):243–265, 1986.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[15] A. Nemirovski and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. John Wiley & Sons Ltd, 1983.
[16] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR (translated as Soviet Math. Docl.), 269:543–547, 1983.
[17] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Kluwer Academic Publishers, 2004.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[19] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE discussion papers, 2007.
[20] D. P. Palomar and Y. C. Eldar, editors. Convex Optimization in Signal Processing and Communications. Cambridge University Press, 2010.
[21] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning, pages 449–456, 2012.
[22] N. L. Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets.
In Advances in Neural Information Processing Systems 25, pages 2672–2680, 2012.
[23] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[24] S. Sra, S. Nowozin, and S. J. Wright, editors. Optimization for Machine Learning. The MIT Press, 2011.
[25] Q. Wu and D.-X. Zhou. SVM soft margin classifiers: linear programming versus quadratic programming. Neural Computation, 17(5):1160–1187, 2005.
[26] L. Zhang, T. Yang, R. Jin, and X. He. O(log T) projections for stochastic optimization of smooth and strongly convex functions. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 621–629, 2013.
Learning with Noisy Labels Nagarajan Natarajan Inderjit S. Dhillon Pradeep Ravikumar Department of Computer Science, University of Texas, Austin. {naga86,inderjit,pradeepr}@cs.utexas.edu Ambuj Tewari Department of Statistics, University of Michigan, Ann Arbor. tewaria@umich.edu Abstract In this paper, we theoretically study the problem of binary classification in the presence of random classification noise — the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability. Moreover, random label noise is class-conditional — the flip probability depends on the class. We provide two approaches to suitably modify any given surrogate loss function. First, we provide a simple unbiased estimator of any loss, and obtain performance bounds for empirical risk minimization in the presence of iid data with noisy labels. If the loss function satisfies a simple symmetry condition, we show that the method leads to an efficient algorithm for empirical minimization. Second, by leveraging a reduction of risk minimization under noisy labels to classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain strong empirical risk bounds. This approach has a very remarkable consequence — methods used in practice such as biased SVM and weighted logistic regression are provably noise-tolerant. On a synthetic non-separable dataset, our methods achieve over 88% accuracy even when 40% of the labels are corrupted, and are competitive with respect to recently proposed methods for dealing with label noise in several benchmark datasets. 1 Introduction Designing supervised learning algorithms that can learn from data sets with noisy labels is a problem of great practical importance. 
Here, by noisy labels, we refer to the setting where an adversary has deliberately corrupted the labels [Biggio et al., 2011], which otherwise arise from some "clean" distribution; learning from only positive and unlabeled data [Elkan and Noto, 2008] can also be cast in this setting. Given the importance of learning from such noisy labels, a great deal of practical work has been done on the problem (see, for instance, the survey article by Nettleton et al. [2010]). The theoretical machine learning community has also investigated the problem of learning from noisy labels. Soon after the introduction of the noise-free PAC model, Angluin and Laird [1988] proposed the random classification noise (RCN) model where each label is flipped independently with some probability ρ ∈ [0, 1/2). It is known [Aslam and Decatur, 1996, Cesa-Bianchi et al., 1999] that finiteness of the VC dimension characterizes learnability in the RCN model. Similarly, in the online mistake bound model, the parameter that characterizes learnability without noise — the Littlestone dimension — continues to characterize learnability even in the presence of random label noise [Ben-David et al., 2009]. These results are for the so-called "0-1" loss. Learning with convex losses has been addressed only under limiting assumptions like separability or uniform noise rates [Manwani and Sastry, 2013]. In this paper, we consider risk minimization in the presence of class-conditional random label noise (abbreviated CCN). The data consists of iid samples from an underlying "clean" distribution D. The learning algorithm sees samples drawn from a noisy version Dρ of D, where the noise rates depend on the class label. To the best of our knowledge, general results in this setting have not been obtained before.
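The class-conditional flipping process just described is straightforward to simulate; in this sketch the rates ρ₊₁ = 0.3 and ρ₋₁ = 0.1 are arbitrary illustrative choices.

```python
import random

def corrupt(labels, rho_pos, rho_neg, seed=0):
    """CCN model: flip +1 -> -1 with probability rho_pos, and
    -1 -> +1 with probability rho_neg, independently per example."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        flip_prob = rho_pos if y == +1 else rho_neg
        noisy.append(-y if rng.random() < flip_prob else y)
    return noisy

clean = [+1] * 5000 + [-1] * 5000
noisy = corrupt(clean, rho_pos=0.3, rho_neg=0.1)
# Empirical flip rates per class should be near the nominal ones.
frac_pos_flipped = sum(y != z for y, z in zip(clean[:5000], noisy[:5000])) / 5000
frac_neg_flipped = sum(y != z for y, z in zip(clean[5000:], noisy[5000:])) / 5000
```

Because the flip probability depends only on the class, the two empirical rates concentrate around ρ₊₁ and ρ₋₁ separately, which is exactly what distinguishes CCN from the uniform RCN model.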
To this end, we develop two methods for suitably modifying any given surrogate loss function ℓ, and show that minimizing the sample average of the modified proxy loss function ℓ̃ leads to provable risk bounds, where the risk is calculated using the original loss ℓ on the clean distribution. In our first approach, the modified or proxy loss is an unbiased estimate of the loss function. The idea of using unbiased estimators is well-known in stochastic optimization [Nemirovski et al., 2009], and regret bounds can be obtained for learning with noisy labels in an online learning setting (see Appendix B). Nonetheless, we bring out some important aspects of using unbiased estimators of loss functions for empirical risk minimization under CCN. In particular, we give a simple symmetry condition on the loss (enjoyed, for instance, by the Huber, logistic, and squared losses) to ensure that the proxy loss is also convex. Hinge loss does not satisfy the symmetry condition, and thus leads to a non-convex problem. We nonetheless provide a convex surrogate, leveraging the fact that the non-convex hinge problem is "close" to a convex problem (Theorem 6). Our second approach is based on the fundamental observation that the minimizer of the risk (i.e., probability of misclassification) under the noisy distribution differs from that of the clean distribution only in where it thresholds η(x) = P(Y = 1|x) to decide the label. In order to correct for the threshold, we then propose a simple weighted loss function, where the weights are label-dependent, as the proxy loss function. Our analysis builds on the notion of consistency of weighted loss functions studied by Scott [2012]. This approach leads to the very remarkable result that appropriately weighted losses like biased SVMs studied by Liu et al. [2003] are robust to CCN. The main results and the contributions of the paper are summarized below: 1.
To the best of our knowledge, we are the first to provide guarantees for risk minimization under random label noise in the general setting of convex surrogates, without any assumptions on the true distribution. 2. We provide two different approaches to suitably modifying any given surrogate loss function, which surprisingly lead to very similar risk bounds (Theorems 3 and 11). These general results include some existing results for random classification noise as special cases. 3. We resolve an elusive theoretical gap in the understanding of practical methods like biased SVM and weighted logistic regression — they are provably noise-tolerant (Theorem 11). 4. Our proxy losses are easy to compute — both methods yield efficient algorithms. 5. Experiments on benchmark datasets show that the methods are robust even at high noise rates.

The outline of the paper is as follows. We introduce the problem setting and terminology in Section 2. In Section 3, we give our first main result concerning the method of unbiased estimators. In Section 4, we give our second and third main results for certain weighted loss functions. We present experimental results on synthetic and benchmark data sets in Section 5.

1.1 Related Work

Starting from the work of Bylander [1994], many noise-tolerant versions of the perceptron algorithm have been developed. This includes the passive-aggressive family of algorithms [Crammer et al., 2006], confidence weighted learning [Dredze et al., 2008], AROW [Crammer et al., 2009], and the NHERD algorithm [Crammer and Lee, 2010]. The survey article by Khardon and Wachman [2007] provides an overview of some of this literature. A Bayesian approach to the problem of noisy labels is taken by Graepel and Herbrich [2000] and Lawrence and Schölkopf [2001]. As AdaBoost is very sensitive to label noise, random label noise has also been considered in the context of boosting.
Long and Servedio [2010] prove that any method based on a convex potential is inherently ill-suited to random label noise. Freund [2009] proposes a boosting algorithm based on a non-convex potential that is empirically seen to be robust against random label noise. Stempfel and Ralaivola [2009] proposed the minimization of an unbiased proxy for the case of the hinge loss. However, the hinge loss leads to a non-convex problem; therefore, they proposed heuristic minimization approaches for which no theoretical guarantees were provided (we address this issue in Section 3.1). Cesa-Bianchi et al. [2011] focus on online learning algorithms where they only need unbiased estimates of the gradient of the loss to provide guarantees for learning with noisy data. However, they consider a much harder noise model where instances as well as labels are noisy. Because of the harder noise model, they necessarily require multiple noisy copies per clean example, and the unbiased estimation schemes also become fairly complicated. In particular, their techniques break down for non-smooth losses such as the hinge loss. In contrast, we show that unbiased estimation is always possible in the more benign random classification noise setting. Manwani and Sastry [2013] consider whether empirical risk minimization of the loss itself on the noisy data is a good idea when the goal is to obtain small risk under the clean distribution. But it holds promise only for the 0-1 and squared losses. Therefore, if empirical risk minimization over noisy samples has to work, we necessarily have to change the loss used to calculate the empirical risk. More recently, Scott et al. [2013] study the problem of classification under the class-conditional noise model. However, they approach the problem from a different set of assumptions — the noise rates are not known, and the true distribution satisfies a certain "mutual irreducibility" property. Furthermore, they do not give any efficient algorithm for the problem.
2 Problem Setup and Background

Let D be the underlying true distribution generating (X, Y) ∈ X × {±1} pairs from which n iid samples (X₁, Y₁), . . . , (Xₙ, Yₙ) are drawn. After injecting random classification noise (independently for each i) into these samples, corrupted samples (X₁, Ỹ₁), . . . , (Xₙ, Ỹₙ) are obtained. The class-conditional random noise model (CCN, for short) is given by:

P(Ỹ = −1 | Y = +1) = ρ₊₁, P(Ỹ = +1 | Y = −1) = ρ₋₁, and ρ₊₁ + ρ₋₁ < 1.

The corrupted samples are what the learning algorithm sees. We will assume that the noise rates ρ₊₁ and ρ₋₁ are known¹ to the learner. Let the distribution of (X, Ỹ) be Dρ. Instances are denoted by x ∈ X ⊆ R^d. Noisy labels are denoted by ỹ. Let f : X → R be some real-valued decision function. The risk of f w.r.t. the 0-1 loss is given by R_D(f) = E_{(X,Y)∼D}[1_{sign(f(X)) ≠ Y}]. The optimal decision function (called Bayes optimal) that minimizes R_D over all real-valued decision functions is given by f*(x) = sign(η(x) − 1/2), where η(x) = P(Y = 1|x). We denote by R* the corresponding Bayes risk under the clean distribution D, i.e., R* = R_D(f*). Let ℓ(t, y) denote a loss function, where t ∈ R is a real-valued prediction and y ∈ {±1} is a label. Let ℓ̃(t, ỹ) denote a suitably modified ℓ for use with noisy labels (obtained using the methods in Sections 3 and 4). It is helpful to summarize the three important quantities associated with a decision function f:

1. Empirical ℓ̃-risk on the observed sample: R̂_ℓ̃(f) := (1/n) Σ_{i=1}^n ℓ̃(f(X_i), Ỹ_i).
2. As n grows, we expect R̂_ℓ̃(f) to be close to the ℓ̃-risk under the noisy distribution Dρ: R_{ℓ̃,Dρ}(f) := E_{(X,Ỹ)∼Dρ}[ℓ̃(f(X), Ỹ)].
3. ℓ-risk under the "clean" distribution D: R_{ℓ,D}(f) := E_{(X,Y)∼D}[ℓ(f(X), Y)].

Typically, ℓ is a convex function that is calibrated with respect to an underlying loss function such as the 0-1 loss.
ℓ is said to be classification-calibrated [Bartlett et al., 2006] if and only if there exists a convex, invertible, nondecreasing transformation ψ_ℓ (with ψ_ℓ(0) = 0) such that ψ_ℓ(R_D(f) − R*) ≤ R_{ℓ,D}(f) − min_f R_{ℓ,D}(f). The interpretation is that we can control the excess 0-1 risk by controlling the excess ℓ-risk. If f is not quantified in a minimization, then it is implicit that the minimization is over all measurable functions. Though most of our results apply to a general function class F, we instantiate F to be the set of hyperplanes of bounded L2 norm, W = {w ∈ R^d : ∥w∥₂ ≤ W₂}, for certain specific results. Proofs are provided in Appendix A.

3 Method of Unbiased Estimators

Let F : X → R be a fixed class of real-valued decision functions, over which the empirical risk is minimized. The method of unbiased estimators uses the noise rates to construct an unbiased estimator ℓ̃(t, ỹ) for the loss ℓ(t, y). However, in the experiments we will tune the noise rate parameters through cross-validation. The following key lemma tells us how to construct unbiased estimators of the loss from noisy labels.

Lemma 1. Let ℓ(t, y) be any bounded loss function. Then, if we define

ℓ̃(t, y) := ((1 − ρ_{−y}) ℓ(t, y) − ρ_y ℓ(t, −y)) / (1 − ρ₊₁ − ρ₋₁),

we have, for any t, y, E_ỹ[ℓ̃(t, ỹ)] = ℓ(t, y).

¹This is not necessary in practice. See Section 5.

We can try to learn a good predictor in the presence of label noise by minimizing the sample average

f̂ ← argmin_{f∈F} R̂_ℓ̃(f).

By unbiasedness of ℓ̃ (Lemma 1), we know that, for any fixed f ∈ F, the above sample average converges to R_{ℓ,D}(f) even though the former is computed using noisy labels whereas the latter depends on the true labels. The following result gives a performance guarantee for this procedure in terms of the Rademacher complexity of the function class F. The main idea in the proof is to use the contraction principle for Rademacher complexity to get rid of the dependence on the proxy loss ℓ̃.
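Lemma 1 can be verified mechanically: conditioning on the true label y, the flipped label equals y with probability 1 − ρ_y and −y with probability ρ_y, and the two cross terms cancel exactly. A sketch with the logistic loss (the noise rates are arbitrary illustrative values):

```python
import math

def logistic_loss(t, y):
    return math.log(1.0 + math.exp(-t * y))

def loss_tilde(t, y, rho_pos, rho_neg, loss=logistic_loss):
    """Unbiased estimator of Lemma 1; rho_y is the flip rate of class y."""
    rho_y = rho_pos if y == +1 else rho_neg
    rho_neg_y = rho_neg if y == +1 else rho_pos
    return ((1 - rho_neg_y) * loss(t, y) - rho_y * loss(t, -y)) / (1 - rho_pos - rho_neg)

def expected_tilde(t, y, rho_pos, rho_neg):
    """Expectation of loss_tilde over the noisy label, given the true label y:
    the noisy label is y w.p. 1 - rho_y and -y w.p. rho_y."""
    rho_y = rho_pos if y == +1 else rho_neg
    return (1 - rho_y) * loss_tilde(t, y, rho_pos, rho_neg) \
        + rho_y * loss_tilde(t, -y, rho_pos, rho_neg)
```

For any t, y, and admissible rates (ρ₊₁ + ρ₋₁ < 1), `expected_tilde` reproduces the clean loss up to floating-point error, which is exactly the statement of the lemma.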
The price to pay for this is L_ρ, the Lipschitz constant of ˜ℓ.

Lemma 2. Let ℓ(t, y) be L-Lipschitz in t (for every y). Then, with probability at least 1 − δ,

max_{f∈F} |ˆR_{˜ℓ}(f) − R_{˜ℓ,D_ρ}(f)| ≤ 2 L_ρ R(F) + √(log(1/δ) / (2n)),

where R(F) := E_{X_i, ε_i}[sup_{f∈F} (1/n) Σ_{i=1}^n ε_i f(X_i)] is the Rademacher complexity of the function class F and L_ρ ≤ 2L/(1 − ρ_{+1} − ρ_{−1}) is the Lipschitz constant of ˜ℓ. Here the ε_i are iid Rademacher (symmetric Bernoulli) random variables.

The above lemma immediately leads to a performance bound for ˆf with respect to the clean distribution D. Our first main result is stated in the theorem below.

Theorem 3 (Main Result 1). With probability at least 1 − δ,

R_{ℓ,D}(ˆf) ≤ min_{f∈F} R_{ℓ,D}(f) + 4 L_ρ R(F) + 2 √(log(1/δ) / (2n)).

Furthermore, if ℓ is classification-calibrated, there exists a nondecreasing function ζ_ℓ with ζ_ℓ(0) = 0 such that

R_D(ˆf) − R* ≤ ζ_ℓ( min_{f∈F} R_{ℓ,D}(f) − min_f R_{ℓ,D}(f) + 4 L_ρ R(F) + 2 √(log(1/δ) / (2n)) ).

The term on the right-hand side involves both an approximation error (small if F is large) and an estimation error (small if F is small). However, by appropriately increasing the richness of the class F with sample size, we can ensure that the misclassification probability of ˆf approaches the Bayes risk of the true distribution. This is despite the fact that the method of unbiased estimators computes the empirical minimizer ˆf on a sample from the noisy distribution. Obtaining the empirical minimizer ˆf is efficient if ˜ℓ is convex. Next, we address the issue of convexity of ˜ℓ.

3.1 Convex losses and their estimators

Note that ˜ℓ may not be convex even if we start with a convex ℓ. An example is provided by the familiar hinge loss ℓ_{hin}(t, y) = [1 − yt]_+. Stempfel and Ralaivola [2009] showed that ˜ℓ_{hin} is not convex in general (of course, when ρ_{+1} = ρ_{−1} = 0, it is convex). Below we provide a simple condition that ensures convexity of ˜ℓ.

Lemma 4.
Suppose ℓ(t, y) is convex and twice differentiable almost everywhere in t (for every y), and also satisfies the symmetry property

∀t ∈ R, ℓ''(t, y) = ℓ''(t, −y).

Then ˜ℓ(t, y) is also convex in t.

Examples satisfying the conditions of the lemma are the squared loss ℓ_{sq}(t, y) = (t − y)^2, the logistic loss ℓ_{log}(t, y) = log(1 + exp(−ty)), and the Huber loss

ℓ_{Hub}(t, y) = { −4yt if yt < −1;  (t − y)^2 if −1 ≤ yt ≤ 1;  0 if yt > 1 }.

Consider the case where ˜ℓ turns out to be non-convex even though ℓ is convex, as with ˜ℓ_{hin}. In the online learning setting (where the adversary chooses a sequence of examples, and the prediction of the learner at round i is based on the history of i − 1 examples with independently flipped labels), we could use a stochastic mirror descent type algorithm [Nemirovski et al., 2009] to arrive at risk bounds similar to Theorem 3 (see Appendix B). There, we only need the expected loss to be convex, so ℓ_{hin} does not present a problem. At first blush, it may appear that we have little hope of obtaining ˆf efficiently in the iid setting. However, Lemma 2 provides a clue. We now focus on the function class W of hyperplanes. Even though ˆR_{˜ℓ}(w) is non-convex, it is uniformly close to R_{˜ℓ,D_ρ}(w). Since R_{˜ℓ,D_ρ}(w) = R_{ℓ,D}(w), this shows that ˆR_{˜ℓ}(w) is uniformly close to a convex function over w ∈ W. The following result shows that we can therefore approximately minimize F(w) = ˆR_{˜ℓ}(w) by minimizing its biconjugate F**. Recall that the (Fenchel) biconjugate F** is the largest convex function that minorizes F.

Lemma 5. Let F : W → R be a non-convex function defined on the function class W such that it is ε-close to a convex function G : W → R:

∀w ∈ W, |F(w) − G(w)| ≤ ε.

Then any minimizer of F** is a 2ε-approximate (global) minimizer of F.

The following theorem then establishes bounds for the case when ˜ℓ is non-convex, via the solution obtained by minimizing the convex function F**.

Theorem 6. Let ℓ be a loss, such as the hinge loss, for which ˜ℓ is non-convex.
Let W = {w : ‖w‖_2 ≤ W_2}, let ‖X_i‖_2 ≤ X_2 almost surely, and let ˆw_{approx} be any (exact) minimizer of the convex problem

min_{w∈W} F**(w),

where F**(w) is the (Fenchel) biconjugate of F(w) = ˆR_{˜ℓ}(w). Then, with probability at least 1 − δ, ˆw_{approx} is a 2ε-minimizer of ˆR_{˜ℓ}(·), where

ε = 2 L_ρ X_2 W_2 / √n + √(log(1/δ) / (2n)).

Therefore, with probability at least 1 − δ,

R_{ℓ,D}(ˆw_{approx}) ≤ min_{w∈W} R_{ℓ,D}(w) + 4ε.

Numerical or symbolic computation of the biconjugate of a multidimensional function is difficult in general, but can be done in special cases. It will be interesting to see whether techniques from computational convex analysis [Lucet, 2010] can be used to efficiently compute the biconjugate above.

4 Method of label-dependent costs

We develop the method of label-dependent costs from two key observations. First, the Bayes classifier for the noisy distribution, denoted ˜f*, simply uses a threshold different from 1/2 when ρ_{+1} ≠ ρ_{−1}. Second, ˜f* is the minimizer of a "label-dependent 0-1 loss" on the noisy distribution. The framework we develop here generalizes known results for the uniform noise rate setting ρ_{+1} = ρ_{−1} and offers a more fundamental insight into the problem. The first observation is formalized in the lemma below.

Lemma 7. Denote P(Y = 1|X) by η(X) and P(˜Y = 1|X) by ˜η(X). The Bayes classifier under the noisy distribution, ˜f* = argmin_f E_{(X,˜Y)∼D_ρ}[1{sign(f(X)) ≠ ˜Y}], is given by

˜f*(x) = sign(˜η(x) − 1/2) = sign( η(x) − (1/2 − ρ_{−1}) / (1 − ρ_{+1} − ρ_{−1}) ).

Interestingly, this "noisy" Bayes classifier can also be obtained as the minimizer of a weighted 0-1 loss, which, as we will show, allows us to "correct" for the threshold under the noisy distribution. Let us first introduce the notion of "label-dependent" costs for binary classification. We can write the 0-1 loss as a label-dependent loss as follows:

1{sign(f(X)) ≠ Y} = 1{Y=1} 1{f(X) ≤ 0} + 1{Y=−1} 1{f(X) > 0}.

In this form, the classical 0-1 loss is unweighted.
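The corrected threshold in Lemma 7 is easy to verify numerically, since under the CCN model ˜η(x) = (1 − ρ_{+1}) η(x) + ρ_{−1} (1 − η(x)). A small sketch (our own check, not from the paper) confirms that thresholding ˜η at 1/2 is the same decision as thresholding η at the corrected value:

```python
import numpy as np

rho_plus, rho_minus = 0.3, 0.1
eta = np.linspace(0.0, 1.0, 1001)                         # grid of eta(x) = P(Y=1|x) values
eta_noisy = (1 - rho_plus) * eta + rho_minus * (1 - eta)  # P(Y~ = 1 | x) under CCN

# Lemma 7's corrected threshold on the *clean* class probability
thresh = (0.5 - rho_minus) / (1 - rho_plus - rho_minus)
agree = np.all(np.sign(eta_noisy - 0.5) == np.sign(eta - thresh))
```

For ρ_{+1} = 0.3, ρ_{−1} = 0.1 the threshold moves from 1/2 to 2/3: the noisy Bayes classifier predicts +1 only where the clean class probability is quite high.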
Now, we can consider an α-weighted version of the 0-1 loss:

U_α(t, y) = (1 − α) 1{y=1} 1{t ≤ 0} + α 1{y=−1} 1{t > 0},

where α ∈ (0, 1). Minimization w.r.t. the 0-1 loss is equivalent to minimization w.r.t. U_{1/2}(f(X), Y); it is no coincidence that the Bayes optimal f* has threshold 1/2. The following lemma [Scott, 2012] shows that, for any α-weighted 0-1 loss, the minimizer thresholds η(x) at α.

Lemma 8 (α-weighted Bayes optimal [Scott, 2012]). Define the U_α-risk under distribution D as R_{α,D}(f) = E_{(X,Y)∼D}[U_α(f(X), Y)]. Then f*_α(x) = sign(η(x) − α) minimizes the U_α-risk.

Now consider the risk of f w.r.t. the α-weighted 0-1 loss under the noisy distribution D_ρ: R_{α,D_ρ}(f) = E_{(X,˜Y)∼D_ρ}[U_α(f(X), ˜Y)]. At this juncture, we are interested in the following question: does there exist an α ∈ (0, 1) such that the minimizer of the U_α-risk under the noisy distribution D_ρ has the same sign as the Bayes optimal f*? Our second main result, stated in the following theorem, makes a stronger statement: the U_α-risk under the noisy distribution D_ρ is linearly related to the 0-1 risk under the clean distribution D. The corollary of the theorem answers the question in the affirmative.

Theorem 9 (Main Result 2). For the choices

α* = (1 − ρ_{+1} + ρ_{−1}) / 2  and  A_ρ = (1 − ρ_{+1} − ρ_{−1}) / 2,

there exists a constant B_X, independent of f, such that for all functions f,

R_{α*,D_ρ}(f) = A_ρ R_D(f) + B_X.

Corollary 10. The α*-weighted Bayes optimal classifier under the noisy distribution coincides with that of the 0-1 loss under the clean distribution:

argmin_f R_{α*,D_ρ}(f) = argmin_f R_D(f) = sign(η(x) − 1/2).

4.1 Proposed Proxy Surrogate Losses

Consider any surrogate loss function ℓ and the decomposition

ℓ(t, y) = 1{y=1} ℓ_1(t) + 1{y=−1} ℓ_{−1}(t),

where ℓ_1 and ℓ_{−1} are the partial losses of ℓ. Analogous to the 0-1 loss case, we can define an α-weighted loss function (Eqn. (1)) and the corresponding α-weighted ℓ-risk.
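Theorem 9's linear relationship can be confirmed exactly on a discrete toy distribution: for random classifiers f, the quantity R_{α*,D_ρ}(f) − A_ρ R_D(f) should be the same constant B_X for every f. A sketch (our own check, with an arbitrary 10-state domain):

```python
import numpy as np

rng = np.random.default_rng(1)
rho_plus, rho_minus = 0.3, 0.1
alpha_star = (1 - rho_plus + rho_minus) / 2
A_rho = (1 - rho_plus - rho_minus) / 2

p = rng.dirichlet(np.ones(10))                        # marginal over 10 discrete states
eta = rng.random(10)                                  # P(Y = 1 | x)
eta_t = (1 - rho_plus) * eta + rho_minus * (1 - eta)  # P(Y~ = 1 | x)

def risk_clean(s):
    """0-1 risk of the classifier s(x) in {-1,+1} under the clean distribution."""
    return np.sum(p * (eta * (s == -1) + (1 - eta) * (s == 1)))

def risk_alpha_noisy(s, alpha):
    """U_alpha-risk of s under the noisy distribution."""
    return np.sum(p * ((1 - alpha) * eta_t * (s == -1) + alpha * (1 - eta_t) * (s == 1)))

consts = [risk_alpha_noisy(s, alpha_star) - A_rho * risk_clean(s)
          for s in (rng.choice([-1, 1], size=10) for _ in range(20))]
spread = max(consts) - min(consts)                    # should be ~0: the offset B_X is constant
```

The spread is zero up to floating-point rounding, matching the exact identity in Theorem 9.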
Can we hope to minimize an α-weighted ℓ-risk with respect to the noisy distribution D_ρ and yet bound the excess 0-1 risk with respect to the clean distribution D? Indeed, the α* specified in Theorem 9 is precisely what we need. We are ready to state our third main result, which relies on a generalized notion of classification calibration for α-weighted losses [Scott, 2012]:

Theorem 11 (Main Result 3). Consider the empirical risk minimization problem with noisy labels:

ˆf_α = argmin_{f∈F} (1/n) Σ_{i=1}^n ℓ_α(f(X_i), ˜Y_i).

Define ℓ_α as an α-weighted margin loss function of the form

ℓ_α(t, y) = (1 − α) 1{y=1} ℓ(t) + α 1{y=−1} ℓ(−t),   (1)

where ℓ : R → [0, ∞) is a convex loss function with Lipschitz constant L that is classification-calibrated (i.e. ℓ'(0) < 0). Then, for the choices α* and A_ρ of Theorem 9, there exists a nondecreasing function ζ_{ℓ_{α*}} with ζ_{ℓ_{α*}}(0) = 0 such that the following bound holds with probability at least 1 − δ:

R_D(ˆf_{α*}) − R* ≤ A_ρ^{−1} ζ_{ℓ_{α*}}( min_{f∈F} R_{α*,D_ρ}(f) − min_f R_{α*,D_ρ}(f) + 4 L R(F) + 2 √(log(1/δ) / (2n)) ).

Aside from bounding the excess 0-1 risk under the clean distribution, the importance of the above theorem lies in the fact that it prescribes an efficient algorithm for empirical minimization with noisy labels: ℓ_α is convex whenever ℓ is convex. Thus, for any convex surrogate loss function, including ℓ_{hin}, ˆf_{α*} can be efficiently computed using the method of label-dependent costs. Note that the choice of α* above is quite intuitive. For instance, when ρ_{−1} ≪ ρ_{+1} (as in settings such as Liu et al. [2003], where there are only positive and unlabeled examples), α* < 1 − α*, and therefore mistakes on positives are penalized more than those on negatives. This makes intuitive sense, since an observed negative may well have been a positive, but the other way around is unlikely. In practice we do not need to know α*, i.e. the noise rates ρ_{+1} and ρ_{−1}: the optimization problem involves just one parameter, which can be tuned by cross-validation (see Section 5).
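As a concrete illustration of Theorem 11, the sketch below (our own construction; the synthetic data, step size, and iteration count are arbitrary choices, not from the paper) minimizes the α*-weighted logistic risk on noisy labels with plain gradient descent and evaluates accuracy against the clean labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.choice([-1, 1], size=n)                       # clean labels
X = rng.normal(0.0, 1.0, (n, 2)) + 2.0 * y[:, None]   # class means at (2,2) and (-2,-2)

rho_plus, rho_minus = 0.3, 0.1
flip = np.where(y == 1, rng.random(n) < rho_plus, rng.random(n) < rho_minus)
y_noisy = np.where(flip, -y, y)

alpha = (1 - rho_plus + rho_minus) / 2                # alpha* from Theorem 9
wgt = np.where(y_noisy == 1, 1 - alpha, alpha)        # per-example weight in ell_alpha

# ell_alpha(t, y) = wgt(y) * log(1 + exp(-y t)) for the logistic partial loss
Xb = np.hstack([X, np.ones((n, 1))])                  # append a bias feature
w = np.zeros(3)
for _ in range(300):                                  # gradient descent on the empirical alpha-weighted risk
    margins = y_noisy * (Xb @ w)
    grad = -(wgt * y_noisy / (1.0 + np.exp(margins))) @ Xb / n
    w -= 0.5 * grad

acc = np.mean(np.sign(Xb @ w) == y)                   # accuracy w.r.t. the *clean* labels
```

Despite training only on corrupted labels, the learned hyperplane recovers the clean class boundary with high accuracy, which is the qualitative behavior the theorem predicts.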
5 Experiments

We show the robustness of the proposed algorithms to increasing rates of label noise on synthetic and real-world datasets, and compare their performance with state-of-the-art methods for dealing with random classification noise. We divide each dataset (randomly) into 3 training and test sets, and use a cross-validation set to tune the parameters specific to each algorithm. Accuracy of a classification algorithm is defined as the fraction of examples in the test set classified correctly with respect to the clean distribution. For given noise rates ρ_{+1} and ρ_{−1}, labels of the training data are flipped accordingly, and average accuracy over 3 train-test splits is computed². For evaluation, we choose a representative algorithm based on each of the two proposed methods: ˜ℓ_{log} for the method of unbiased estimators, and the widely used C-SVM [Liu et al., 2003] method (which applies different costs to positives and negatives) for the method of label-dependent costs.

5.1 Synthetic data

First, we use the synthetic 2D linearly separable dataset shown in Figure 1(a). We observe from experiments that our methods achieve over 90% accuracy even when ρ_{+1} = ρ_{−1} = 0.4. Figure 1 shows the performance of ˜ℓ_{log} on the dataset for different noise rates. Next, we use a 2D UCI benchmark non-separable dataset ('banana'). The dataset and classification results using C-SVM are shown in Figure 2 (for uniform noise rates, α* = 1/2, so C-SVM is just the regular SVM). The results for higher noise rates are impressive, as observed in Figures 2(d) and 2(e). The 'banana' dataset has been used in previous research on classification with noisy labels.
In particular, the Random Projection classifier [Stempfel and Ralaivola, 2007], which learns a kernel perceptron in the presence of noisy labels, achieves about 84% accuracy at ρ_{+1} = ρ_{−1} = 0.3 (in our experiments, and as reported by Stempfel and Ralaivola [2007]), and the random hyperplane sampling method [Stempfel et al., 2007] attains about the same accuracy at (ρ_{+1}, ρ_{−1}) = (0.2, 0.4) (as reported by Stempfel et al. [2007]). Contrast these with C-SVM, which achieves about 90% accuracy at ρ_{+1} = ρ_{−1} = 0.2 and over 88% accuracy at ρ_{+1} = ρ_{−1} = 0.4.

[Figure 1: Classification of the linearly separable synthetic data set using ˜ℓ_{log}. The noise-free data is shown in the leftmost panel (a). Plots (b) and (c) show training data corrupted with noise rates (ρ_{+1} = ρ_{−1} = ρ) 0.2 and 0.4, respectively. Plots (d) and (e) show the corresponding classification results. The algorithm achieves 98.5% accuracy even at noise rate 0.4 per class. (Best viewed in color.)]

[Figure 2: Classification of the 'banana' data set using C-SVM. The noise-free data is shown in (a). Plots (b) and (c) show training data corrupted with noise rates (ρ_{+1} = ρ_{−1} = ρ) 0.2 and 0.4, respectively. Note that for ρ_{+1} = ρ_{−1}, α* = 1/2 (i.e. C-SVM reduces to the regular SVM). Plots (d) and (e) show the corresponding classification results (accuracies 90.6% and 88.5%, respectively).]
Even when 40% of the labels are corrupted (ρ_{+1} = ρ_{−1} = 0.4), the algorithm recovers the class structures, as observed in plot (e). Note that the accuracy of the method at ρ = 0 is 90.8%.

5.2 Comparison with state-of-the-art methods on UCI benchmark

We compare our methods with three state-of-the-art methods for dealing with random classification noise: the Random Projection (RP) classifier [Stempfel and Ralaivola, 2007], NHERD

²Note that training and cross-validation are done on the noisy training data in our setting. To account for the randomness of the flips used to simulate a given noise rate, we repeat each experiment 3 times, with independent corruptions of the data set for the same setting of ρ_{+1} and ρ_{−1}, and report the mean accuracy over the trials.

Table 1: Comparative study of classification algorithms on UCI benchmark datasets. Entries within 1% from the best in each row are in bold.

DATASET (d, n+, n−) | Noise rates              | ˜ℓ_{log} | C-SVM | PAM   | NHERD | RP
Breast cancer       | ρ+1 = ρ−1 = 0.2          | 70.12    | 67.85 | 69.34 | 64.90 | 69.38
(9, 77, 186)        | ρ+1 = 0.3, ρ−1 = 0.1     | 70.07    | 67.81 | 67.79 | 65.68 | 66.28
                    | ρ+1 = ρ−1 = 0.4          | 67.79    | 67.79 | 67.05 | 56.50 | 54.19
Diabetes            | ρ+1 = ρ−1 = 0.2          | 76.04    | 66.41 | 69.53 | 73.18 | 75.00
(8, 268, 500)       | ρ+1 = 0.3, ρ−1 = 0.1     | 75.52    | 66.41 | 65.89 | 74.74 | 67.71
                    | ρ+1 = ρ−1 = 0.4          | 65.89    | 65.89 | 65.36 | 71.09 | 62.76
Thyroid             | ρ+1 = ρ−1 = 0.2          | 87.80    | 94.31 | 96.22 | 78.49 | 84.02
(5, 65, 150)        | ρ+1 = 0.3, ρ−1 = 0.1     | 80.34    | 92.46 | 86.85 | 87.78 | 83.12
                    | ρ+1 = ρ−1 = 0.4          | 83.10    | 66.32 | 70.98 | 85.95 | 57.96
German              | ρ+1 = ρ−1 = 0.2          | 71.80    | 68.40 | 63.80 | 67.80 | 62.80
(20, 300, 700)      | ρ+1 = 0.3, ρ−1 = 0.1     | 71.40    | 68.40 | 67.80 | 67.80 | 67.40
                    | ρ+1 = ρ−1 = 0.4          | 67.19    | 68.40 | 67.80 | 54.80 | 59.79
Heart               | ρ+1 = ρ−1 = 0.2          | 82.96    | 61.48 | 69.63 | 82.96 | 72.84
(13, 120, 150)      | ρ+1 = 0.3, ρ−1 = 0.1     | 84.44    | 57.04 | 62.22 | 81.48 | 79.26
                    | ρ+1 = ρ−1 = 0.4          | 57.04    | 54.81 | 53.33 | 52.59 | 68.15
Image               | ρ+1 = ρ−1 = 0.2          | 82.45    | 91.95 | 92.90 | 77.76 | 65.29
(18, 1188, 898)     | ρ+1 = 0.3, ρ−1 = 0.1     | 82.55    | 89.26 | 89.55 | 79.39 | 70.66
                    | ρ+1 = ρ−1 = 0.4          | 63.47    | 63.47 | 73.15 | 69.61 | 64.72
All the methods except the NHERD variants (which are not kernelizable) use a Gaussian kernel of width 1. All method-specific parameters are estimated through cross-validation. The proposed methods (˜ℓ_{log} and C-SVM) are competitive across all the datasets. We show the best-performing NHERD variant ('project' or 'exact') in each case.

[Crammer and Lee, 2010] (project and exact variants³), and the perceptron algorithm with margin (PAM), which was shown to be robust to label noise by Khardon and Wachman [2007]. We use the standard UCI classification datasets, preprocessed and made available by Gunnar Rätsch (http://theoval.cmp.uea.ac.uk/matlab). For kernelized algorithms, we use a Gaussian kernel with width set to the best width obtained by tuning it for a traditional SVM on the noise-free data. For ˜ℓ_{log}, we use the ρ_{+1} and ρ_{−1} that give the best accuracy in cross-validation. For C-SVM, we fix one of the weights to 1 and tune the other. Table 1 shows the performance of the methods for different settings of noise rates. C-SVM is competitive on 4 out of 6 datasets (Breast cancer, Thyroid, German, and Image), while relatively poorer on the other two. On the other hand, ˜ℓ_{log} is competitive on all the datasets, and performs best most often. When about 20% of the labels are corrupted, the uniform (ρ_{+1} = ρ_{−1} = 0.2) and non-uniform (ρ_{+1} = 0.3, ρ_{−1} = 0.1) cases yield similar accuracies on all the datasets, for both C-SVM and ˜ℓ_{log}. Overall, we observe that the proposed methods are competitive and are able to tolerate moderate to high amounts of label noise in the data. Finally, in domains where the noise rates are approximately known, our methods can benefit from that knowledge; our analysis shows that the methods are fairly robust to misspecification of the noise rates (see Appendix C for results).
6 Conclusions and Future Work

We addressed the problem of risk minimization in the presence of random classification noise, and obtained general results in this setting using the methods of unbiased estimators and weighted loss functions. We have given efficient algorithms for both methods, with provable guarantees for learning under label noise. The proposed algorithms are easy to implement, and their classification performance is impressive even at high noise rates and competitive with state-of-the-art methods on benchmark data. The algorithms already give a new family of methods that can be applied to the positive-unlabeled learning problem [Elkan and Noto, 2008], but the implications of the methods for this setting should be carefully analysed. We could also consider harder noise models, such as label noise depending on the example, and "nasty label noise," where the labels to flip are chosen adversarially.

7 Acknowledgments

This research was supported by DOD Army grant W911NF-10-1-0529 to ID; PR acknowledges the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894.

³A family of methods proposed by Crammer and coworkers [Crammer et al., 2006, 2009, Dredze et al., 2008] could be compared to, but Crammer and Lee [2010] show that the 2 NHERD variants perform the best.

References

D. Angluin and P. Laird. Learning from noisy examples. Mach. Learn., 2(4):343–370, 1988.
Javed A. Aslam and Scott E. Decatur. On the sample complexity of noise-tolerant learning. Inf. Process. Lett., 57(4):189–195, 1996.
Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
Shai Ben-David, Dávid Pál, and Shai Shalev-Shwartz. Agnostic online learning. In Proceedings of the 22nd Conference on Learning Theory, 2009.
Battista Biggio, Blaine Nelson, and Pavel Laskov. Support vector machines under adversarial label noise.
Journal of Machine Learning Research - Proceedings Track, 20:97–112, 2011.
Tom Bylander. Learning linear threshold functions in the presence of classification noise. In Proc. of the 7th COLT, pages 340–347, NY, USA, 1994. ACM.
Nicolò Cesa-Bianchi, Eli Dichterman, Paul Fischer, Eli Shamir, and Hans Ulrich Simon. Sample-efficient strategies for learning in the presence of noise. J. ACM, 46(5):684–719, 1999.
Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, and Ohad Shamir. Online learning of noisy data. IEEE Transactions on Information Theory, 57(12):7907–7931, 2011.
K. Crammer and D. Lee. Learning via Gaussian herding. In Advances in NIPS 23, pages 451–459, 2010.
Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. J. Mach. Learn. Res., 7:551–585, 2006.
Koby Crammer, Alex Kulesza, and Mark Dredze. Adaptive regularization of weight vectors. In Advances in NIPS 22, pages 414–422, 2009.
Mark Dredze, Koby Crammer, and Fernando Pereira. Confidence-weighted linear classification. In Proceedings of the Twenty-Fifth ICML, pages 264–271, 2008.
C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In Proc. of the 14th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, pages 213–220, 2008.
Yoav Freund. A more robust boosting algorithm, 2009. Preprint arXiv:0905.2138 [stat.ML], available at http://arxiv.org/abs/0905.2138.
T. Graepel and R. Herbrich. The kernel Gibbs sampler. In Advances in NIPS 13, pages 514–520, 2000.
Roni Khardon and Gabriel Wachman. Noise tolerant variants of the perceptron algorithm. J. Mach. Learn. Res., 8:227–248, 2007.
Neil D. Lawrence and Bernhard Schölkopf. Estimating a kernel Fisher discriminant in the presence of label noise. In Proceedings of the Eighteenth ICML, pages 306–313, 2001.
Bing Liu, Yang Dai, Xiaoli Li, Wee Sun Lee, and Philip S. Yu. Building text classifiers using positive and unlabeled examples. In ICDM 2003, pages 179–186. IEEE, 2003.
Philip M. Long and Rocco A. Servedio. Random classification noise defeats all convex potential boosters. Mach. Learn., 78(3):287–304, 2010.
Yves Lucet. What shape is your conjugate? A survey of computational convex analysis and its applications. SIAM Rev., 52(3):505–542, August 2010. ISSN 0036-1445.
Naresh Manwani and P. S. Sastry. Noise tolerance under risk minimization. To appear in IEEE Trans. Syst. Man and Cybern. Part B, 2013. URL: http://arxiv.org/abs/1109.5231.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Opt., 19(4):1574–1609, 2009.
David F. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise on the precision of supervised learning techniques. Artif. Intell. Rev., 33(4):275–306, 2010.
Clayton Scott. Calibrated asymmetric surrogate losses. Electronic J. of Stat., 6:958–992, 2012.
Clayton Scott, Gilles Blanchard, and Gregory Handy. Classification with asymmetric label noise: Consistency and maximal denoising. To appear in COLT, 2013.
G. Stempfel and L. Ralaivola. Learning kernel perceptrons on noisy data using random projections. In Algorithmic Learning Theory, pages 328–342. Springer, 2007.
G. Stempfel, L. Ralaivola, and F. Denis. Learning from noisy data using hyperplane sampling and sample averages. 2007.
Guillaume Stempfel and Liva Ralaivola. Learning SVMs from sloppily labeled data. In Proc. of the 19th Intl. Conf. on Artificial Neural Networks: Part I, pages 884–893. Springer-Verlag, 2009.
Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth ICML, pages 928–936, 2003.
Variational Policy Search via Trajectory Optimization

Sergey Levine, Stanford University (svlevine@cs.stanford.edu)
Vladlen Koltun, Stanford University and Adobe Research (vladlen@cs.stanford.edu)

Abstract

In order to learn effective control policies for dynamical systems, policy search methods must be able to discover successful executions of the desired task. While random exploration can work well in simple domains, complex and high-dimensional tasks present a serious challenge, particularly when combined with high-dimensional policies that make parameter-space exploration infeasible. We present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum likelihood policy objective allows us to use standard trajectory optimization algorithms, such as differential dynamic programming, interleaved with standard supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks.

1 Introduction

Direct policy search methods have the potential to scale gracefully to complex, high-dimensional control tasks [12]. However, their effectiveness depends on discovering successful executions of the desired task, usually through random exploration. As the dimensionality and complexity of a task increase, random exploration can prove inadequate, resulting in poor local optima. We propose to decouple policy optimization from exploration by using a variational decomposition of a maximum likelihood policy objective. In our method, exploration is performed by a model-based trajectory optimization algorithm that is not constrained by the policy parameterization but attempts to minimize both the cost and the deviation from the current policy, while the policy is simply optimized to match the resulting trajectory distribution.
Since direct model-based trajectory optimization is usually much easier than policy search, this method can discover low-cost regions much more easily. Intuitively, the trajectory optimization "guides" the policy search toward regions of low cost. The trajectory optimization can be performed by a variant of the differential dynamic programming algorithm [4], and the policy is optimized with respect to a standard maximum likelihood objective. We show that this alternating optimization maximizes a well-defined policy objective, and demonstrate experimentally that it can learn complex tasks in high-dimensional domains that are infeasible for methods that rely on random exploration. Our evaluation shows that the proposed algorithm produces good results on two challenging locomotion problems, outperforming prior methods.

2 Preliminaries

In standard policy search, we seek a distribution over actions u_t in each state x_t, denoted π_θ(u_t|x_t), that minimizes the sum of expected costs E[c(ζ)] = E[Σ_{t=1}^T c(x_t, u_t)], where ζ is a sequence of states and actions. The expectation is taken with respect to the system dynamics p(x_{t+1}|x_t, u_t) and the policy π_θ(u_t|x_t), which is typically parameterized by a vector θ. An alternative to this standard formulation is to convert the task into an inference problem by introducing a binary random variable O_t at each time step that serves as an indicator of "optimality." We follow prior work and define the probability of O_t as p(O_t = 1|x_t, u_t) ∝ exp(−c(x_t, u_t)) [19]. Using the dynamics distribution p(x_{t+1}|x_t, u_t) and the policy π_θ(u_t|x_t), we can define a dynamic Bayesian network that relates states, actions, and the optimality indicator. By setting O_t = 1 at all time steps and learning the maximum likelihood values of θ, we can perform policy optimization [20]. The corresponding optimization problem has the objective

p(O|θ) = ∫ p(O|ζ) p(ζ|θ) dζ ∝ ∫ exp( −Σ_{t=1}^T c(x_t, u_t) ) p(x_1) Π_{t=1}^T π_θ(u_t|x_t) p(x_{t+1}|x_t, u_t) dζ.   (1)
Although this objective differs from the classical minimum average cost objective, previous work showed that it is nonetheless useful for policy optimization and planning [20, 19]. In Section 5, we discuss how this objective relates to the classical objective in more detail.

3 Variational Policy Search

Following prior work [11], we can decompose log p(O|θ) by using a variational distribution q(ζ):

log p(O|θ) = L(q, θ) + D_KL(q(ζ) ‖ p(ζ|O, θ)),

where the variational lower bound L is given by

L(q, θ) = ∫ q(ζ) log [ p(O|ζ) p(ζ|θ) / q(ζ) ] dζ,

and the second term is the Kullback-Leibler (KL) divergence

D_KL(q(ζ) ‖ p(ζ|O, θ)) = −∫ q(ζ) log [ p(ζ|O, θ) / q(ζ) ] dζ = −∫ q(ζ) log [ p(O|ζ) p(ζ|θ) / (q(ζ) p(O|θ)) ] dζ.   (2)

We can then optimize the maximum likelihood objective in Equation 1 by iteratively minimizing the KL divergence with respect to q(ζ) and maximizing the bound L(q, θ) with respect to θ. This is the standard formulation for expectation maximization [9], and has been applied to policy optimization in previous work [8, 21, 3, 11]. However, prior policy optimization methods typically represent q(ζ) by sampling trajectories from the current policy π_θ(u_t|x_t) and reweighting them, for example by the exponential of their cost. While this can improve policies that already visit regions of low cost, it relies on random policy-driven exploration to discover those low-cost regions. We propose instead to directly optimize q(ζ) to minimize both its expected cost and its divergence from the current policy π_θ(u_t|x_t) when a model of the dynamics is available. In the next section, we show that, for a Gaussian distribution q(ζ), the KL divergence in Equation 2 can be minimized by a variant of the differential dynamic programming (DDP) algorithm [4].

4 Trajectory Optimization

DDP is a trajectory optimization algorithm based on Newton's method [4].
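The decomposition log p(O|θ) = L(q, θ) + D_KL(q(ζ) ‖ p(ζ|O, θ)) can be verified exactly on a discrete toy problem, which is a useful sanity check before the continuous derivation. The sketch below is our own illustration, with arbitrary distributions over a handful of discrete "trajectories":

```python
import numpy as np

rng = np.random.default_rng(0)
K = 6
p_traj = rng.dirichlet(np.ones(K))            # toy p(zeta | theta)
p_O = rng.random(K)                           # toy p(O | zeta) in (0, 1)
q = rng.dirichlet(np.ones(K))                 # variational distribution q(zeta)

log_evidence = np.log(np.sum(p_O * p_traj))   # log p(O | theta)
L = np.sum(q * np.log(p_O * p_traj / q))      # variational lower bound L(q, theta)
post = p_O * p_traj / np.sum(p_O * p_traj)    # posterior p(zeta | O, theta)
KL = np.sum(q * np.log(q / post))             # D_KL(q || posterior)
gap = abs(log_evidence - (L + KL))            # identity says this is zero
```

The identity holds for any q, and since the KL term is nonnegative, L is always a lower bound on the log evidence; minimizing the KL in q tightens the bound, which is exactly the role of the trajectory optimization step.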
We build off of a variant of DDP called iterative LQR, which linearizes the dynamics around the current trajectory, computes the optimal linear policy under linear-quadratic assumptions, executes this policy, and repeats the process around the new trajectory until convergence [17]. We show how this procedure can be used to minimize the KL divergence in Equation 2 when q(ζ) is a Gaussian distribution over trajectories. This derivation follows previous work [10], but is repeated and expanded here for completeness. Iterative LQR is a dynamic programming algorithm that recursively computes the value function backwards through time. Because of the linear-quadratic assumptions, the value function is always quadratic, and the dynamics are Gaussian with mean f(x_t, u_t) and noise ϵ. Given a trajectory (x̄_1, ū_1), ..., (x̄_T, ū_T) and defining x̂_t = x_t − x̄_t and û_t = u_t − ū_t, the dynamics and cost function are approximated as follows, with subscripts x and u denoting partial derivatives:

x̂_{t+1} ≈ f_{xt} x̂_t + f_{ut} û_t + ϵ

c(x_t, u_t) ≈ x̂_t^T c_{xt} + û_t^T c_{ut} + (1/2) x̂_t^T c_{xxt} x̂_t + (1/2) û_t^T c_{uut} û_t + û_t^T c_{uxt} x̂_t + c(x̄_t, ū_t).

Under this approximation, we can recursively compute the Q-function as

Q_{xxt} = c_{xxt} + f_{xt}^T V_{xx,t+1} f_{xt},  Q_{uut} = c_{uut} + f_{ut}^T V_{xx,t+1} f_{ut},  Q_{uxt} = c_{uxt} + f_{ut}^T V_{xx,t+1} f_{xt},
Q_{xt} = c_{xt} + f_{xt}^T V_{x,t+1},  Q_{ut} = c_{ut} + f_{ut}^T V_{x,t+1},

as well as the value function and the linear policy terms:

V_{xt} = Q_{xt} − Q_{uxt}^T Q_{uut}^{−1} Q_{ut},  k_t = −Q_{uut}^{−1} Q_{ut},
V_{xxt} = Q_{xxt} − Q_{uxt}^T Q_{uut}^{−1} Q_{uxt},  K_t = −Q_{uut}^{−1} Q_{uxt}.

The deterministic optimal policy is then given by g(x_t) = ū_t + k_t + K_t(x_t − x̄_t). By repeatedly computing the optimal policy around the current trajectory and updating x̄_t and ū_t based on the new policy, iterative LQR converges to a locally optimal solution [17]. In order to use this algorithm to minimize the KL divergence in Equation 2, we introduce a modified cost function c̄(x_t, u_t) = c(x_t, u_t) − log π_θ(u_t|x_t).
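The backward recursion above can be written compactly. The following is a minimal sketch of a standard iterative LQR backward pass (our own illustration, not the authors' implementation), checked on a scalar linear-quadratic problem whose stationary value coefficient is the golden ratio:

```python
import numpy as np

def lqr_backward(fx, fu, cxx, cuu, cux, cx, cu):
    """Backward pass of iterative LQR: the recursion from the text.
    Arguments are per-timestep lists of derivative matrices/vectors."""
    T, n = len(fx), fx[0].shape[0]
    Vx, Vxx = np.zeros(n), np.zeros((n, n))
    ks, Ks = [None] * T, [None] * T
    for t in reversed(range(T)):
        Qx = cx[t] + fx[t].T @ Vx
        Qu = cu[t] + fu[t].T @ Vx
        Qxx = cxx[t] + fx[t].T @ Vxx @ fx[t]
        Quu = cuu[t] + fu[t].T @ Vxx @ fu[t]
        Qux = cux[t] + fu[t].T @ Vxx @ fx[t]
        Quu_inv = np.linalg.inv(Quu)
        ks[t], Ks[t] = -Quu_inv @ Qu, -Quu_inv @ Qux
        Vx = Qx - Qux.T @ Quu_inv @ Qu
        Vxx = Qxx - Qux.T @ Quu_inv @ Qux
    return ks, Ks, Vxx

# Sanity check on x_{t+1} = x_t + u_t with stage cost (x_t^2 + u_t^2)/2:
# the stationary V_xx solves V = 1 + V - V^2/(1+V), i.e. V^2 = V + 1.
T = 100
I = np.eye(1)
Z = np.zeros((1, 1))
ks, Ks, Vxx0 = lqr_backward([I] * T, [I] * T, [I] * T, [I] * T, [Z] * T,
                            [np.zeros(1)] * T, [np.zeros(1)] * T)
```

Over a long horizon, V_xx at the first step converges to (1 + √5)/2 and the feedback gain to −(√5 − 1)/2, matching the algebraic Riccati solution for this system.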
The optimal trajectory for this cost function approximately¹ minimizes the KL divergence when q(ζ) is a Dirac delta function, since

D_KL(q(ζ) ‖ p(ζ|O, θ)) = ∫ q(ζ) [ Σ_{t=1}^T c(x_t, u_t) − log π_θ(u_t|x_t) − log p(x_{t+1}|x_t, u_t) ] dζ + const.

However, we can also obtain a Gaussian q(ζ) by using the framework of linearly solvable MDPs [16] and the closely related concept of maximum entropy control [23]. The optimal policy π_G under this framework minimizes an augmented cost function c̃(x_t, u_t) = c̄(x_t, u_t) − H(π_G), where H(π_G) is the entropy of a stochastic policy π_G(u_t|x_t), and c̄(x_t, u_t) includes log π_θ(u_t|x_t) as above. Ziebart [23] showed that the optimal policy can be written as π_G(u_t|x_t) = exp(Q_t(x_t, u_t) − V_t(x_t)), where V is a "softened" value function given by

V_t(x_t) = log ∫ exp(Q_t(x_t, u_t)) du_t.

Under linear dynamics and quadratic costs, V has the same form as in the LQR derivation above, which means that π_G(u_t|x_t) is a linear Gaussian with mean g(x_t) and covariance Q_{uut}^{−1} [10]. Together with the linearized dynamics, the resulting policy specifies a Gaussian distribution over trajectories with Markovian independence:

q(ζ) = p̃(x_1) Π_{t=1}^T π_G(u_t|x_t) p̃(x_{t+1}|x_t, u_t),

where π_G(u_t|x_t) = N(g(x_t), Q_{uut}^{−1}), p̃(x_1) is an initial state distribution, and p̃(x_{t+1}|x_t, u_t) = N(f_{xt} x̂_t + f_{ut} û_t + x̄_{t+1}, Σ_{ft}) is the linearized dynamics with Gaussian noise Σ_{ft}. This distribution also corresponds to a Laplace approximation of p(ζ|O, θ), formed from the exponential of the second-order Taylor expansion of log p(ζ|O, θ) [15]. Once we compute π_G(u_t|x_t) using iterative LQR/DDP, it is straightforward to obtain the marginal distributions q(x_t), which will be useful in the next section for minimizing the variational bound L(q, θ).
Using $\mu_t$ and $\Sigma_t$ to denote the mean and covariance of the marginal at time $t$ and assuming that the initial state distribution at $t=1$ is given, the marginals can be computed recursively as

$$\mu_{t+1} = \begin{bmatrix} f_{xt} & f_{ut} \end{bmatrix} \begin{bmatrix} \mu_t \\ \bar{u}_t + k_t + K_t(\mu_t - \bar{x}_t) \end{bmatrix}$$

$$\Sigma_{t+1} = \begin{bmatrix} f_{xt} & f_{ut} \end{bmatrix} \begin{bmatrix} \Sigma_t & \Sigma_t K_t^T \\ K_t \Sigma_t & Q_{uut}^{-1} + K_t \Sigma_t K_t^T \end{bmatrix} \begin{bmatrix} f_{xt} & f_{ut} \end{bmatrix}^T + \Sigma_{ft}.$$

¹The minimization is not exact if the dynamics $p(x_{t+1}|x_t,u_t)$ are not deterministic, but the result is very close if the dynamics have much lower entropy than the policy and exponentiated cost, which is often the case.

Algorithm 1 Variational Guided Policy Search
1: Initialize $q(\zeta)$ using DDP with cost $\bar{c}(x_t,u_t) = \alpha_0 c(x_t,u_t)$
2: for iteration $k = 1$ to $K$ do
3:   Compute marginals $(\mu_1,\Sigma_1),\ldots,(\mu_T,\Sigma_T)$ for $q(\zeta)$
4:   Optimize $L(q,\theta)$ with respect to $\theta$ using standard nonlinear optimization methods
5:   Set $\alpha_k$ based on annealing schedule, for example $\alpha_k = \exp\left(\frac{K-k}{K}\log\alpha_0 + \frac{k}{K}\log\alpha_K\right)$
6:   Optimize $q(\zeta)$ using DDP with cost $\bar{c}(x_t,u_t) = \alpha_k c(x_t,u_t) - \log\pi_\theta(u_t|x_t)$
7: end for
8: Return optimized policy $\pi_\theta(u_t|x_t)$

When the dynamics are nonlinear or the modified cost $\bar{c}(x_t,u_t)$ is nonquadratic, this solution only approximates the minimum of the KL divergence. In practice, the approximation is quite good when the dynamics and the cost $c(x_t,u_t)$ are smooth. Unfortunately, the policy term $\log\pi_\theta(u_t|x_t)$ in the modified cost $\bar{c}(x_t,u_t)$ can be quite jagged early on in the optimization, particularly for nonlinear policies. To mitigate this issue, we compute the derivatives of the policy not only along the current trajectory, but also at samples drawn from the current marginals $q(x_t)$, and average them together. This averages out local perturbations in $\log\pi_\theta(u_t|x_t)$ and improves the approximation. In Section 8, we discuss more sophisticated techniques that could be used in future work to handle highly nonlinear dynamics for which this approximation may be inadequate.
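The marginal recursion for $\mu_t$ and $\Sigma_t$ can be transcribed directly into numpy. This is a hypothetical sketch that follows the recursion as written in this section; constant offsets arising from the linearization point are assumed absorbed into the dynamics matrices, and the argument names are ours.

```python
import numpy as np

def propagate_marginals(mu1, Sigma1, f_x, f_u, k, K, Q_uu,
                        x_bar, u_bar, Sigma_f):
    """Recursively compute the Gaussian marginals q(x_t) = N(mu_t, Sigma_t)
    under the linear-Gaussian policy and linearized dynamics. Each list
    argument holds one entry per timestep; mu1, Sigma1 describe the given
    initial state distribution."""
    T = len(f_x)
    mus, Sigmas = [mu1], [Sigma1]
    for t in range(T):
        mu, Sigma = mus[-1], Sigmas[-1]
        # mean action under the linear-Gaussian policy
        u_mean = u_bar[t] + k[t] + K[t] @ (mu - x_bar[t])
        F = np.hstack([f_x[t], f_u[t]])           # [f_xt  f_ut]
        joint_mu = np.concatenate([mu, u_mean])
        SK = Sigma @ K[t].T
        # joint covariance of (x_t, u_t) under the policy
        joint_Sigma = np.block([
            [Sigma, SK],
            [SK.T, np.linalg.inv(Q_uu[t]) + K[t] @ SK],
        ])
        mus.append(F @ joint_mu)
        Sigmas.append(F @ joint_Sigma @ F.T + Sigma_f[t])
    return mus, Sigmas
```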
5 Variational Guided Policy Search

The variational guided policy search (variational GPS) algorithm alternates between minimizing the KL divergence in Equation 2 with respect to $q(\zeta)$ as described in the previous section, and maximizing the bound $L(q,\theta)$ with respect to the policy parameters $\theta$. Minimizing the KL divergence reduces the difference between $L(q,\theta)$ and $\log p(\mathcal{O}|\theta)$, so that the maximization of $L(q,\theta)$ becomes a progressively better approximation for the maximization of $\log p(\mathcal{O}|\theta)$. The method is summarized in Algorithm 1. The bound $L(q,\theta)$ can be maximized by a variety of standard optimization methods, such as stochastic gradient descent (SGD) or LBFGS. The gradient is given by

$$\nabla L(q,\theta) = \int q(\zeta) \sum_{t=1}^T \nabla \log \pi_\theta(u_t|x_t)\,d\zeta \approx \frac{1}{M}\sum_{i=1}^M \sum_{t=1}^T \nabla \log \pi_\theta(u_t^i|x_t^i), \quad (3)$$

where the samples $(x_t^i, u_t^i)$ are drawn from the marginals $q(x_t,u_t)$. When using SGD, new samples can be drawn at every iteration, since sampling from $q(x_t,u_t)$ only requires the precomputed marginals from the preceding section. Because the marginals are computed using linearized dynamics, we can be assured that the samples will not deviate drastically from the optimized trajectory, regardless of the true dynamics. The resulting SGD optimization is analogous to a supervised learning task with an infinite training set. When using LBFGS, a new sample set can be generated every $n$ LBFGS iterations. We found that values of $n$ from 20 to 50 produced good results. When choosing the policy class, it is common to use deterministic policies with additive Gaussian noise. In this case, we can optimize the policy more quickly and with many fewer samples by only sampling states and evaluating the integral over actions analytically.
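The Monte Carlo gradient in Equation 3 amounts to averaging policy log-likelihood gradients over samples from the marginals. A minimal sketch, using a hypothetical linear-Gaussian policy with unit variance as a stand-in for the paper's neural network policy:

```python
import numpy as np

theta = np.zeros(2)  # hypothetical policy: u ~ N(theta . x, 1)

def grad_log_pi(x, u):
    # d/dtheta log N(u; theta . x, 1) = (u - theta . x) x
    return (u - theta @ x) * x

def grad_bound(trajs):
    """Monte Carlo gradient of L(q, theta) as in Equation 3:
    (1/M) sum_i sum_t grad log pi_theta(u_t^i | x_t^i), where trajs
    holds M lists of (x_t, u_t) samples drawn from the precomputed
    marginals q(x_t, u_t)."""
    M = len(trajs)
    return sum(grad_log_pi(x, u) for traj in trajs for (x, u) in traj) / M
```

With SGD, one would call `grad_bound` on freshly drawn marginal samples at each step, which is what makes the optimization resemble supervised learning with an infinite training set.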
Letting $\mu^\theta_{x_t}, \Sigma^\theta_{x_t}$ and $\mu^q_{x_t}, \Sigma^q_{x_t}$ denote the means and covariances of $\pi_\theta(u_t|x_t)$ and $q(u_t|x_t)$, we can write $L(q,\theta)$ as

$$L(q,\theta) \approx \frac{1}{M}\sum_{i=1}^M \sum_{t=1}^T \int q(u_t|x_t^i) \log \pi_\theta(u_t|x_t^i)\,du_t + \text{const}$$
$$= \frac{1}{M}\sum_{i=1}^M \sum_{t=1}^T \left[ -\frac{1}{2}\left(\mu^\theta_{x_t^i} - \mu^q_{x_t^i}\right)^T \Sigma^{\theta\,-1}_{x_t^i} \left(\mu^\theta_{x_t^i} - \mu^q_{x_t^i}\right) - \frac{1}{2}\log\left|\Sigma^\theta_{x_t^i}\right| - \frac{1}{2}\mathrm{tr}\left(\Sigma^{\theta\,-1}_{x_t^i}\Sigma^q_{x_t^i}\right) \right] + \text{const}.$$

Two additional details should be taken into account in order to obtain the best results. First, although model-based trajectory optimization is more powerful than random exploration, complex tasks such as bipedal locomotion, which we address in the following section, are too difficult to solve entirely with trajectory optimization. To solve such tasks, we can initialize the procedure from a good initial trajectory, typically provided by a demonstration. This trajectory is only used for initialization and need not be reproducible by any policy, since it will be modified by subsequent DDP invocations. Second, unlike the average cost objective, the maximum likelihood objective is sensitive to the magnitude of the cost. Specifically, the logarithm of Equation 1 corresponds to a soft minimum over all likely trajectories under the current policy, with the softness of the minimum inversely proportional to the cost magnitude. As the magnitude increases, this objective scores policies based primarily on their best-case cost, rather than the average case. As the magnitude decreases, the objective becomes more similar to the classic average cost. Because of this, we found it beneficial to gradually anneal the cost by multiplying it by $\alpha_k$ at the $k$th iteration, starting with a high magnitude to favor aggressive exploration, and ending with a low magnitude to optimize average case performance. In our experiments, $\alpha_k$ begins at 1 and is reduced exponentially to 0.1 by the 50th iteration. Since our method produces both a parameterized policy $\pi_\theta(u_t|x_t)$ and a DDP solution $\pi_G(u_t|x_t)$, one might wonder why the DDP policy itself is not a suitable controller.
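The inner term of the closed-form objective above is the Gaussian cross-entropy $\int q(u_t|x_t^i)\log\pi_\theta(u_t|x_t^i)\,du_t$ up to an additive constant. A direct transcription of one such term as a sketch:

```python
import numpy as np

def gaussian_ce_term(mu_th, Sig_th, mu_q, Sig_q):
    """One term of the analytic objective: the expected log probability
    E_{q(u|x)}[log pi_theta(u|x)] for a Gaussian policy and Gaussian
    marginal, dropping the additive constant that does not depend on the
    policy parameters."""
    d = mu_th - mu_q
    Sti = np.linalg.inv(Sig_th)
    return (-0.5 * d @ Sti @ d
            - 0.5 * np.log(np.linalg.det(Sig_th))
            - 0.5 * np.trace(Sti @ Sig_q))
```

Summing this term over marginal state samples and timesteps (and dividing by $M$) recovers the analytic objective, avoiding any sampling over actions.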
The issue is that $\pi_G(u_t|x_t)$ is always a nonstationary linear feedback policy, while $\pi_\theta(u_t|x_t)$ can have an arbitrary parameterization and admits constraints on available information, stationarity, and so forth. This flexibility has three major advantages: first, only the learned policy may be usable at runtime if the information available at runtime differs from the information during training, for example if the policy is trained in simulation and executed on a physical system with limited sensors. Second, if the policy class is chosen carefully, we might hope that the learned policy would generalize better than the DDP solution, as shown in previous work [10]. Third, multiple trajectories can be used to train a single policy from different initial states, creating a single controller that can succeed in a variety of situations.

6 Experimental Evaluation

We evaluated our method on two simulated planar locomotion tasks: swimming and bipedal walking. For both tasks, the policy sets joint torques on a simulated robot consisting of rigid links. The swimmer has 3 links and 5 degrees of freedom, including the root position, and a 10-dimensional state space that includes joint velocities. The walker has 7 links, 9 degrees of freedom, and 18 state dimensions. Due to the high dimensionality and nonlinear dynamics, these tasks represent a significant challenge for direct policy learning. The cost function for the walker was given by $c(x,u) = w_u\|u\|^2 + (v_x - v_x^\star)^2 + (p_y - p_y^\star)^2$, where $v_x$ and $v_x^\star$ are the current and desired horizontal velocities, $p_y$ and $p_y^\star$ are the current and desired heights of the hips, and the torque penalty was set to $w_u = 10^{-4}$. The swimmer cost excludes the height term and uses a lower torque penalty of $w_u = 10^{-5}$. As discussed in the previous section, the magnitude of the cost was decreased by a factor of 10 during the first 50 iterations, and then remained fixed.
Following previous work [10], the trajectory for the walker was initialized with a demonstration from a hand-crafted locomotion system [22]. The policy was represented by a neural network with one hidden layer and a soft rectifying nonlinearity of the form a = log(1 + exp(z)), with Gaussian noise at the output. Both the weights of the neural network and the diagonal covariance of the output noise were learned as part of the policy optimization. The number of policy parameters ranged from 63 for the 5-unit swimmer to 246 for the 10-unit walker. Due to its complexity and nonlinearity, this policy class presents a challenge to traditional policy search algorithms, which often focus on compact, linear policies [8]. Figure 1 shows the average cost of the learned policies on each task, along with visualizations of the swimmer and walker. Methods that sample from the current policy use 10 samples per iteration, unless noted otherwise. To ensure a fair comparison, the vertical axis shows the average cost E[c(ζ)] rather than the maximum likelihood objective log p(O|θ). The cost was evaluated for both the actual stochastic policy (solid line), and a deterministic policy obtained by setting the variance of the Gaussian noise to zero (dashed line). Each plot also shows the cost of the initial DDP solution. Policies with costs significantly above this amount do not succeed at the task, either falling in the case of the walker, or failing to make forward progress in the case of the swimmer. Our method learned successful policies for each task, and often converged faster than previous methods, though performance during early iterations was often poor. We believe this is because the variational bound L(q, θ) does not become a good proxy for log p(O|θ) until after several invocations of DDP, at which point the algorithm is able to rapidly improve the policy. 
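The policy class described above, a single softplus hidden layer with Gaussian output noise, is easy to sketch. The weight shapes and function names here are illustrative; the paper's actual network may differ in details such as initialization.

```python
import numpy as np

def softplus(z):
    # numerically stable soft rectifier a = log(1 + exp(z))
    return np.logaddexp(0.0, z)

def policy_forward(x, W1, b1, W2, b2, log_std):
    """Mean and diagonal std of the Gaussian policy from the text:
    one softplus hidden layer, with the diagonal covariance of the
    output noise learned (here parameterized as log_std)."""
    h = softplus(W1 @ x + b1)
    mean = W2 @ h + b2
    return mean, np.exp(log_std)
```

Both the weights and `log_std` would be learned jointly during the policy optimization, matching the text's statement that the output noise covariance is part of the parameter vector.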
Figure 1: Comparison of variational guided policy search (VGPS) with prior methods (DDP solution, variational GPS, GPS, adapted GPS, cost-weighted, cost-weighted 1000, DAGGER, weighted DAGGER, adapted DAGGER) on four settings: swimmer with 5 and 10 hidden units, and walker with 5 and 10 hidden units, plotting average cost against iterations. The average cost of the stochastic policy is shown with a solid line, and the average cost of the deterministic policy without Gaussian noise is shown with a dashed line. The bottom-right panel shows plots of the swimmer and walker, with the center of mass trajectory under the learned policy shown in blue, and the initial DDP solution shown in black.

The first method we compare to is guided policy search (GPS), which uses importance sampling to introduce samples from the DDP solution into a likelihood ratio policy search [10]. The GPS algorithm first draws a fixed number of samples from the DDP solution, and then adds on-policy samples at each iteration. Like our method, GPS uses DDP to explore regions of low cost, but the policy optimization is done using importance sampling, which can be susceptible to degenerate weights in high dimensions. Since standard GPS only samples from the initial DDP solution, these samples are only useful if they can be reproduced by the policy class. Otherwise, GPS must rely on random exploration to improve the solution. On the easier swimmer task, the GPS policy can reproduce the initial trajectory and succeeds immediately. However, GPS is unable to find a successful walking policy with only 5 hidden units, which requires modifications to the initial trajectory.
In addition, although the deterministic GPS policy performs well on the walker with 10 hidden units, the stochastic policy fails more often. This suggests that the GPS optimization is not learning a good variance for the Gaussian policy, possibly because the normalized importance sampled estimator places greater emphasis on the relative probability of the samples than their absolute probability. The adaptive variant of GPS runs DDP at every iteration and adapts to the current policy, in the same manner as our method. However, samples from this adapted DDP solution are then included in the policy optimization with importance sampling, while our approach optimizes the variational bound L(q, θ). In the GPS estimator, each sample ζi is weighted by an importance weight dependent on πθ(ζi), while the samples in our optimization are not weighted. When a sample has a low probability under the current policy, it is ignored by the importance sampled optimizer. Because of this, although the adaptive variant of GPS improves on the standard variant, it is still unable to learn a walking policy with 5 hidden units, while our method quickly discovers an effective policy. We also compared to an imitation learning method called DAGGER. DAGGER aims to learn a policy that imitates an oracle [14], which in our case is the DDP solution. At each iteration, DAGGER adds samples from the current policy to a dataset, and then optimizes the policy to take the oracle action at each dataset state. While adjusting the current policy to match the DDP solution may appear similar to our approach, we found that DAGGER performed poorly on these tasks, since the on-policy samples initially visited states that were very far from the DDP solution, and therefore the DDP action at these states was large and highly suboptimal. To reduce the impact of these poor states, we implemented a variant of DAGGER which weighted the samples by their probability under the DDP marginals. 
This variant succeeded on the swimming tasks and eventually found a good deterministic policy for the walker with 10 hidden units, though the learned stochastic policy performed very poorly. We also implemented an adapted variant, where the DDP solution is reoptimized at each iteration to match the policy (in addition to weighting), but this variant performed worse. Unlike DAGGER, our method samples from a Gaussian distribution around the current DDP solution, ensuring that all samples are drawn from good parts of the state space. Because of this, our method is much less sensitive to poor or unstable initial policies. Finally, we compare to an alternative variational policy search algorithm analogous to PoWER [8]. Although PoWER requires a linear policy parameterization and a specific exploration strategy, we can construct an analogous non-linear algorithm by replacing the analytic M-step with nonlinear optimization, as in our method. This algorithm is identical to ours, except that instead of using DDP to optimize q(ζ), the variational distribution is formed by taking samples from the current policy and reweighting them by the exponential of their cost. We call this method "cost-weighted." The policy is still initialized with supervised training to resemble the initial DDP solution, but otherwise this method does not benefit from trajectory optimization and relies entirely on random exploration. This kind of exploration is generally inadequate for such complex tasks. Even if the number of samples per iteration is increased to $10^3$ (denoted as "cost-weighted 1000"), this method still fails to solve the harder walking task, suggesting that simply taking more random samples is not the solution.
These results show that our algorithm outperforms prior methods because of two advantages: we use a model-based trajectory optimization algorithm instead of random exploration, which allows us to outperform model-free methods such as the "cost-weighted" PoWER analog, and we decompose the policy search into two simple optimization problems that can each be solved efficiently by standard algorithms, which leaves us less vulnerable to local optima than more complex methods like GPS.

7 Previous Work

In optimizing a maximum likelihood objective, our method builds on previous work that frames control as inference [20, 19, 13]. Such methods often redefine optimality in terms of a log evidence probability, as in Equation 1. Although this definition differs from the classical expected return, our evaluation suggests that policies optimized with respect to this measure also exhibit a good average return. As we discuss in Section 5, this objective is risk seeking when the cost magnitude is high, and annealing can be used to gradually transition from an objective that favors aggressive exploration to one that resembles the average return. Other authors have also proposed alternative definitions of optimality that include appealing properties like maximization of entropy [23] or computational benefits [16]. However, our work is the first to our knowledge to show how trajectory optimization can be used to guide policy learning within the control-as-inference framework. Our variational decomposition follows prior work on policy search with variational inference [3, 11] and expectation maximization [8, 21]. Unlike these methods, our approach aims to find a variational distribution q(ζ) that is best suited for control and leverages a known dynamics model. We present an interpretation of the KL divergence minimization in Equation 2 as model-based exploration, which can be performed with a variant of DDP.
As shown in our evaluation, this provides our method with a significant advantage over methods that rely on model-free random exploration, though at the cost of requiring a differentiable model of the dynamics. Interestingly, our algorithm never requires samples to be drawn from the current policy. This can be an advantage in applications where running an unstable, incompletely optimized policy can be costly or dangerous. Our use of DDP to guide the policy search parallels our previous Guided Policy Search (GPS) algorithm [10]. Unlike the proposed method, GPS incorporates samples from DDP directly into an importance-sampled estimator of the return. These samples are therefore only useful when the policy class can reproduce them effectively. As shown in the evaluation of the walker with 5 hidden units, GPS may be unable to discover a good policy when the policy class cannot reproduce the initial DDP solution. Adaptive GPS addresses this issue by reoptimizing the trajectory to resemble the current policy, but the policy is still optimized with respect to an importance-sampled return estimate, which leaves it highly prone to local optima, and the theoretical justification for adaptation is unclear. The proposed method justifies the reoptimization of the trajectory under a variational framework, and uses standard maximum likelihood in place of the complex importance-sampled objective. We also compared our method to DAGGER [14], which uses a general-purpose supervised training algorithm to train the current policy to match an oracle, which in our case is the DDP solution. DAGGER matches actions from the oracle policy at states visited by the current policy, under the assumption that the oracle can provide good actions in all states. This assumption does not hold for DDP, which is only valid in a narrow region around the trajectory.
To mitigate the locality of the DDP solution, we weighted the samples by their probability under the DDP marginals, which allowed DAGGER to solve the swimming task, but it was still outperformed by our method on the walking task, even with adaptation of the DDP solution. Unlike DAGGER, our approach is relatively insensitive to the instability of the learned policy, since the learned policy is not sampled. Several prior methods also propose to improve policy search by using a distribution over high-value states, which might come from a DDP solution [6, 1]. Such methods generally use this "restart" distribution as a new initial state distribution, and show that optimizing a policy from such a restart distribution also optimizes the expected return. Unlike our approach, such methods only use the states from the DDP solution, not the actions, and tend to suffer from the increased variance of the restart distribution, as shown in previous work [10].

8 Discussion and Future Work

We presented a policy search algorithm that employs a variational decomposition of a maximum likelihood objective to combine trajectory optimization with policy search. The variational distribution is obtained using differential dynamic programming (DDP), and the policy can be optimized with a standard nonlinear optimization algorithm. Model-based trajectory optimization effectively takes the place of random exploration, providing a much more effective means for finding low cost regions that the policy is then trained to visit. Our evaluation shows that this algorithm outperforms prior variational methods and prior methods that use trajectory optimization to guide policy search. Our algorithm has several interesting properties that distinguish it from prior methods. First, the policy search does not need to sample the learned policy. This may be useful in real-world applications where poor policies might be too risky to run on a physical system.
More generally, this property improves the robustness of our method in the face of unstable initial policies, where on-policy samples have extremely high variance. By sampling directly from the Gaussian marginals of the DDP-induced distribution over trajectories, our approach also avoids some of the issues associated with unstable dynamics, requiring only that the task permit effective trajectory optimization. By optimizing a maximum likelihood objective, our method favors policies with good best-case performance. Obtaining good best-case performance is often the hardest part of policy search, since a policy that achieves good results occasionally is easier to improve with standard on-policy search methods than one that fails outright. However, modifying the algorithm to optimize the standard average cost criterion could produce more robust controllers in the future. The use of local linearization in DDP results in only approximate minimization of the KL divergence in Equation 2 in nonlinear domains or with nonquadratic policies. While we mitigate this by averaging the policy derivatives over multiple samples from the DDP marginals, this approach could still break down in the presence of highly nonsmooth dynamics or policies. An interesting avenue for future work is to extend the trajectory optimization method to nonsmooth domains by using samples rather than linearization, perhaps analogously to the unscented Kalman filter [5, 18]. This could also avoid the need to differentiate the policy with respect to the inputs, allowing for richer policy classes to be used. Another interesting avenue for future work is to apply model-free trajectory optimization techniques [7], which would avoid the need for a model of the system dynamics, or to learn the dynamics from data, for example by using Gaussian processes [2]. 
It would also be straightforward to use multiple trajectories optimized from different initial states to learn a single policy that is able to succeed under a variety of initial conditions. Overall, we believe that trajectory optimization is a very useful tool for policy search. By separating the policy optimization and exploration problems into two separate phases, we can employ simpler algorithms such as SGD and DDP that are better suited for each phase, and can achieve superior performance on complex tasks. We believe that additional research into augmenting policy learning with trajectory optimization can further advance the performance of policy search techniques.

Acknowledgments

We thank Emanuel Todorov, Tom Erez, and Yuval Tassa for providing the simulator used in our experiments. Sergey Levine was supported by NSF Graduate Research Fellowship DGE-0645962.

References

[1] A. Bagnell, S. Kakade, A. Ng, and J. Schneider. Policy search by dynamic programming. In Advances in Neural Information Processing Systems (NIPS), 2003.
[2] M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), 2011.
[3] T. Furmston and D. Barber. Variational methods for reinforcement learning. Journal of Machine Learning Research, 9:241–248, 2010.
[4] D. Jacobson and D. Mayne. Differential Dynamic Programming. Elsevier, 1970.
[5] S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In International Symposium on Aerospace/Defense Sensing, Simulation, and Control, 1997.
[6] S. Kakade and J. Langford. Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning (ICML), 2002.
[7] M. Kalakrishnan, S. Chitta, E. Theodorou, P. Pastor, and S. Schaal. STOMP: stochastic trajectory optimization for motion planning. In International Conference on Robotics and Automation, 2011.
[8] J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on Robotics and Automation, 2009.
[9] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[10] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013.
[11] G. Neumann. Variational inference for policy search in changing situations. In International Conference on Machine Learning (ICML), 2011.
[12] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[13] K. Rawlik, M. Toussaint, and S. Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In Robotics: Science and Systems, 2012.
[14] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011.
[15] L. Tierney and J. B. Kadane. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81(393):82–86, 1986.
[16] E. Todorov. Policy gradients in linearly-solvable MDPs. In Advances in Neural Information Processing Systems (NIPS 23), 2010.
[17] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In American Control Conference, 2005.
[18] E. Todorov and Y. Tassa. Iterative local dynamic programming. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2009.
[19] M. Toussaint. Robot trajectory optimization using approximate inference. In International Conference on Machine Learning (ICML), 2009.
[20] M. Toussaint, L. Charlin, and P. Poupart. Hierarchical POMDP controller optimization by likelihood maximization. In Uncertainty in Artificial Intelligence (UAI), 2008.
[21] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis. Learning model-free robot control by a Monte Carlo EM algorithm. Autonomous Robots, 27(2):123–130, 2009.
[22] K. Yin, K. Loken, and M. van de Panne. SIMBICON: simple biped locomotion control. ACM Transactions on Graphics, 26(3), 2007.
[23] B. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University, 2010.
Dropout Training as Adaptive Regularization

Stefan Wager*, Sida Wang†, and Percy Liang†
Departments of Statistics* and Computer Science†, Stanford University, Stanford, CA 94305
swager@stanford.edu, {sidaw, pliang}@cs.stanford.edu

Abstract

Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.

1 Introduction

Dropout training was introduced by Hinton et al. [1] as a way to control overfitting by randomly omitting subsets of features at each iteration of a training procedure.¹ Although dropout has proved to be a very successful technique, the reasons for its success are not yet well understood at a theoretical level. Dropout training falls into the broader category of learning methods that artificially corrupt training data to stabilize predictions [2, 4, 5, 6, 7]. There is a well-known connection between artificial feature corruption and regularization [8, 9, 10]. For example, Bishop [9] showed that the effect of training with features that have been corrupted with additive Gaussian noise is equivalent to a form of L2-type regularization in the low noise limit.
In this paper, we take a step towards understanding how dropout training works by analyzing it as a regularizer. We focus on generalized linear models (GLMs), a class of models for which feature dropout reduces to a form of adaptive model regularization. Using this framework, we show that dropout training is first-order equivalent to L2-regularization after transforming the input by $\mathrm{diag}(\hat{\mathcal{I}})^{-1/2}$, where $\hat{\mathcal{I}}$ is an estimate of the Fisher information matrix. This transformation effectively makes the level curves of the objective more spherical, and so balances out the regularization applied to different features. In the case of logistic regression, dropout can be interpreted as a form of adaptive L2-regularization that favors rare but useful features. The problem of learning with rare but useful features is discussed in the context of online learning by Duchi et al. [11], who show that their AdaGrad adaptive descent procedure achieves better regret bounds than regular stochastic gradient descent (SGD) in this setting. Here, we show that AdaGrad and dropout training have an intimate connection: Just as SGD progresses by repeatedly solving linearized L2-regularized problems, a close relative of AdaGrad advances by solving linearized dropout-regularized problems. Our formulation of dropout training as adaptive regularization also leads to a simple semi-supervised learning scheme, where we use unlabeled data to learn a better dropout regularizer. The approach is fully discriminative and does not require fitting a generative model.

S.W. is supported by a B.C. and E.J. Eaves Stanford Graduate Fellowship.
¹Hinton et al. introduced dropout training in the context of neural networks specifically, and also advocated omitting random hidden layers during training. In this paper, we follow [2, 3] and study feature dropout as a generic training method that can be applied to any learning algorithm.
We apply this idea to several document classification problems, and find that it consistently improves the performance of dropout training. On the benchmark IMDB reviews dataset introduced by [12], dropout logistic regression with a regularizer tuned on unlabeled data outperforms previous state-of-the-art. In follow-up research [13], we extend the results from this paper to more complicated structured prediction, such as multi-class logistic regression and linear chain conditional random fields.

2 Artificial Feature Noising as Regularization

We begin by discussing the general connections between feature noising and regularization in generalized linear models (GLMs). We will apply the machinery developed here to dropout training in Section 4. A GLM defines a conditional distribution over a response $y \in \mathcal{Y}$ given an input feature vector $x \in \mathbb{R}^d$:

$$p_\beta(y \mid x) \stackrel{\text{def}}{=} h(y)\exp\{y\,x\cdot\beta - A(x\cdot\beta)\}, \qquad \ell_{x,y}(\beta) \stackrel{\text{def}}{=} -\log p_\beta(y \mid x). \quad (1)$$

Here, $h(y)$ is a quantity independent of $x$ and $\beta$, $A(\cdot)$ is the log-partition function, and $\ell_{x,y}(\beta)$ is the loss function (i.e., the negative log likelihood); Table 1 contains a summary of notation. Common examples of GLMs include linear ($\mathcal{Y} = \mathbb{R}$), logistic ($\mathcal{Y} = \{0, 1\}$), and Poisson ($\mathcal{Y} = \{0, 1, 2, \ldots\}$) regression. Given $n$ training examples $(x_i, y_i)$, the standard maximum likelihood estimate $\hat\beta \in \mathbb{R}^d$ minimizes the empirical loss over the training examples:

$$\hat\beta \stackrel{\text{def}}{=} \arg\min_{\beta \in \mathbb{R}^d} \sum_{i=1}^n \ell_{x_i, y_i}(\beta). \quad (2)$$

With artificial feature noising, we replace the observed feature vectors $x_i$ with noisy versions $\tilde{x}_i = \nu(x_i, \xi_i)$, where $\nu$ is our noising function and $\xi_i$ is an independent random variable. We first create many noisy copies of the dataset, and then average out the auxiliary noise. In this paper, we will consider two types of noise:
• Additive Gaussian noise: $\nu(x_i, \xi_i) = x_i + \xi_i$, where $\xi_i \sim \mathcal{N}(0, \sigma^2 I_{d\times d})$.
• Dropout noise: $\nu(x_i, \xi_i) = x_i \odot \xi_i$, where $\odot$ is the elementwise product of two vectors.
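Both noising schemes are one-liners in numpy. A sketch (the function names are ours, not from the paper):

```python
import numpy as np

def gaussian_noise(x, sigma, rng):
    """Additive Gaussian noising: x_tilde = x + xi, xi ~ N(0, sigma^2 I)."""
    return x + sigma * rng.standard_normal(x.shape)

def dropout_noise(x, delta, rng):
    """Dropout noising: each coordinate is zeroed with probability delta
    and scaled by 1/(1 - delta) otherwise, so that E[x_tilde] = x."""
    mask = rng.random(x.shape) >= delta
    return x * mask / (1.0 - delta)
```

The scaling by $1/(1-\delta)$ is what makes both schemes unbiased, the property used below to simplify the noised loss.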
Each component of ξ_i ∈ {0, (1−δ)^{-1}}^d is an independent draw from a scaled Bernoulli(1−δ) random variable. In other words, dropout noise corresponds to setting x̃_{ij} to 0 with probability δ and to x_{ij}/(1−δ) otherwise.²

Integrating over the feature noise gives us a noised maximum likelihood parameter estimate:

$$\hat\beta = \arg\min_{\beta \in \mathbb{R}^d} \sum_{i=1}^n \mathbb{E}_\xi\big[\ell_{\tilde x_i, y_i}(\beta)\big], \quad \text{where} \quad \mathbb{E}_\xi[Z] := \mathbb{E}\big[Z \mid \{x_i, y_i\}\big] \tag{3}$$

is the expectation taken with respect to the artificial feature noise ξ = (ξ_1, ..., ξ_n). Similar expressions have been studied by [9, 10]. For GLMs, the noised empirical loss takes on a simpler form:

$$\sum_{i=1}^n \mathbb{E}_\xi\big[\ell_{\tilde x_i, y_i}(\beta)\big] = \sum_{i=1}^n -\Big(y_i\, x_i \cdot \beta - \mathbb{E}_\xi\big[A(\tilde x_i \cdot \beta)\big]\Big) = \sum_{i=1}^n \ell_{x_i, y_i}(\beta) + R(\beta). \tag{4}$$

Footnote 2: Artificial noise of the form x_i ⊙ ξ_i is also called blankout noise. For GLMs, blankout noise is equivalent to dropout noise as defined by [1].

Table 1: Summary of notation.

  x_i       Observed feature vector       R(β)      Noising penalty (5)
  x̃_i       Noised feature vector         R^q(β)    Quadratic approximation (6)
  A(x·β)    Log-partition function        ℓ(β)      Negative log-likelihood (loss)

The first equality holds provided that E_ξ[x̃_i] = x_i, and the second is true with the following definition:

$$R(\beta) := \sum_{i=1}^n \mathbb{E}_\xi\big[A(\tilde x_i \cdot \beta)\big] - A(x_i \cdot \beta). \tag{5}$$

Here, R(β) acts as a regularizer that incorporates the effect of artificial feature noising. In GLMs, the log-partition function A must always be convex, and so R is always positive by Jensen's inequality. The key observation here is that the effect of artificial feature noising reduces to a penalty R(β) that does not depend on the labels {y_i}. Because of this, artificial feature noising penalizes the complexity of a classifier in a way that does not depend on the accuracy of the classifier. Thus, for GLMs, artificial feature noising is a regularization scheme on the model itself that can be compared with other forms of regularization such as ridge (L2) or lasso (L1) penalization.
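The two noising schemes are easy to state in code. The following minimal sketch (ours; the function names are made up for illustration) implements both and checks the unbiasedness property E_ξ[x̃] = x that the analysis relies on:

```python
import numpy as np

def additive_noise(x, sigma, rng):
    """x_tilde = x + xi with xi ~ N(0, sigma^2 I); unbiased: E[x_tilde] = x."""
    return x + sigma * rng.normal(size=x.shape)

def dropout_noise(x, delta, rng):
    """Scaled Bernoulli noise: each coordinate is zeroed with probability
    delta, and kept but rescaled by 1/(1 - delta) otherwise, so E[x_tilde] = x."""
    mask = rng.random(size=x.shape) >= delta
    return x * mask / (1.0 - delta)

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5, 3.0])

# Empirically average many noised copies; the mean should recover x.
tiled = np.tile(x, (50_000, 1))
samples = dropout_noise(tiled, 0.5, rng)
assert np.allclose(samples.mean(axis=0), x, atol=0.1)

gauss = additive_noise(tiled, 1.0, rng)
assert np.allclose(gauss.mean(axis=0), x, atol=0.1)
```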
In Section 6, we exploit the label-independence of the noising penalty and use unlabeled data to tune our estimate of R(β). The fact that R does not depend on the labels has another useful consequence that relates to prediction. The natural prediction rule with artificially noised features is to select ŷ to minimize the expected loss over the added noise: ŷ = argmin_y E_ξ[ℓ_{x̃, y}(β̂)]. It is common practice, however, not to noise the inputs and simply to output classification decisions based on the original feature vector [1, 3, 14]: ŷ = argmin_y ℓ_{x, y}(β̂). It is easy to verify that these expressions are in general not equivalent, but they are equivalent when the effect of feature noising reduces to a label-independent penalty on the likelihood. Thus, the common practice of predicting with clean features is formally justified for GLMs.

2.1 A Quadratic Approximation to the Noising Penalty

Although the noising penalty R yields an explicit regularizer that does not depend on the labels {y_i}, the form of R can be difficult to interpret. To gain more insight, we will work with a quadratic approximation of the type used by [9, 10]. Taking a second-order Taylor expansion of A around x·β gives

$$\mathbb{E}_\xi\big[A(\tilde x \cdot \beta)\big] - A(x \cdot \beta) \approx \tfrac12 A''(x \cdot \beta)\, \mathrm{Var}_\xi\big[\tilde x \cdot \beta\big].$$

Here the first-order term E_ξ[A'(x·β) (x̃ − x)·β] vanishes because E_ξ[x̃] = x. Applying this quadratic approximation to (5) yields the following quadratic noising regularizer, which will play a pivotal role in the rest of the paper:

$$R^q(\beta) := \frac12 \sum_{i=1}^n A''(x_i \cdot \beta)\, \mathrm{Var}_\xi\big[\tilde x_i \cdot \beta\big].$$
(6)

This regularizer penalizes two types of variance over the training examples: (i) A''(x_i·β), which corresponds to the variance of the response y_i in the GLM, and (ii) Var_ξ[x̃_i·β], the variance of the estimated GLM parameter due to noising.³

Accuracy of the approximation. Figure 1a compares the noising penalties R and R^q for logistic regression in the case that x̃·β is Gaussian;⁴ we vary the mean parameter p := (1 + e^{−x·β})^{-1} and the noise level σ. We see that R^q is generally very accurate, although it tends to overestimate the true penalty for p ≈ 0.5 and to underestimate it for very confident predictions. We give a graphical explanation for this phenomenon in the Appendix (Figure A.1). The quadratic approximation also appears to hold up on real datasets. In Figure 1b, we compare the evolution during training of both R and R^q on the 20 newsgroups alt.atheism vs

Footnote 3: Although R^q is not convex, we were still able (using an L-BFGS algorithm) to train logistic regression with R^q as a surrogate for the dropout regularizer without running into any major issues with local optima.
Footnote 4: This assumption holds a priori for additive Gaussian noise, and can be reasonable for dropout by the central limit theorem.

[Figure 1a: plot of noising penalty against σ ∈ [0, 1.5] for p = 0.5, 0.73, 0.82, 0.88, 0.95.] (a) Comparison of noising penalties R and R^q for logistic regression with Gaussian perturbations, i.e., (x̃ − x)·β ∼ N(0, σ²). The solid line indicates the true penalty and the dashed one is our quadratic approximation thereof; p = (1 + e^{−x·β})^{-1} is the mean parameter for the logistic model.

[Figure 1b: loss against training iteration (0–150), comparing the dropout penalty, the quadratic penalty, and the negative log-likelihood.] (b) Comparison of the evolution of the exact dropout penalty R and our quadratic approximation R^q for logistic regression on the AthR classification task in [15] with 22K features and n = 1000 examples.
The horizontal axis is the number of quasi-Newton steps taken while training with exact dropout.

Figure 1: Validating the quadratic approximation.

soc.religion.christian classification task described in [15]. We see that the quadratic approximation is accurate most of the way through the learning procedure, only deteriorating slightly as the model converges to highly confident predictions. In practice, we have found that fitting logistic regression with the quadratic surrogate R^q gives similar results to actual dropout-regularized logistic regression. We use this technique for our experiments in Section 6.

3 Regularization based on Additive Noise

Having established the general quadratic noising regularizer R^q, we now turn to studying its effects for various likelihoods (linear and logistic regression) and noising models (additive and dropout). In this section, we warm up with additive noise; in Section 4 we turn to our main target of interest, namely dropout noise.

Linear regression. Suppose x̃ = x + ε is generated by adding noise with Var[ε] = σ² I_{d×d} to the original feature vector x. Note that Var_ξ[x̃·β] = σ² ‖β‖₂², and in the case of linear regression A(z) = z²/2, so A''(z) = 1. Applying these facts to (6) yields a simplified form for the quadratic noising penalty:

$$R^q(\beta) = \frac12\, \sigma^2 n\, \|\beta\|_2^2. \tag{7}$$

Thus, we recover the well-known result that linear regression with additive feature noising is equivalent to ridge regression [2, 9]. Note that, for linear regression, the quadratic approximation R^q is exact, and so the correspondence with L2-regularization is also exact.

Logistic regression. The situation gets more interesting when we move beyond linear regression. For logistic regression, A''(x_i·β) = p_i(1 − p_i), where p_i = (1 + exp(−x_i·β))^{-1} is the predicted probability that y_i = 1. The quadratic noising penalty is then

$$R^q(\beta) = \frac12\, \sigma^2\, \|\beta\|_2^2 \sum_{i=1}^n p_i(1 - p_i).$$
(8)

In other words, the noising penalty now simultaneously encourages parsimonious modeling as before (by encouraging ‖β‖₂² to be small) as well as confident predictions (by encouraging the p_i's to move away from 1/2).

Table 2: Form of the different regularization schemes. These expressions assume that the design matrix has been normalized, i.e., that Σ_i x_{ij}² = 1 for all j. The p_i = (1 + e^{−x_i·β})^{-1} are mean parameters for the logistic model.

                     Linear regression   Logistic regression                  GLM
L2-penalization      ‖β‖₂²               ‖β‖₂²                                ‖β‖₂²
Additive noising     ‖β‖₂²               ‖β‖₂² Σ_i p_i(1−p_i)                 ‖β‖₂² tr(V(β))
Dropout training     ‖β‖₂²               Σ_{i,j} p_i(1−p_i) x_{ij}² β_j²      βᵀ diag(XᵀV(β)X) β

4 Regularization based on Dropout Noise

Recall that dropout training corresponds to applying dropout noise to training examples, where the noised features x̃_i are obtained by setting x̃_{ij} to 0 with some "dropout probability" δ and to x_{ij}/(1−δ) with probability 1−δ, independently for each coordinate j of the feature vector. We can check that

$$\mathrm{Var}_\xi\big[\tilde x_i \cdot \beta\big] = \frac{\delta}{1-\delta} \sum_{j=1}^d x_{ij}^2\, \beta_j^2, \tag{9}$$

and so the quadratic dropout penalty is

$$R^q(\beta) = \frac12\, \frac{\delta}{1-\delta} \sum_{i=1}^n A''(x_i \cdot \beta) \sum_{j=1}^d x_{ij}^2\, \beta_j^2. \tag{10}$$

Letting X ∈ ℝ^{n×d} be the design matrix with rows x_i and V(β) ∈ ℝ^{n×n} be the diagonal matrix with entries A''(x_i·β), we can rewrite this penalty as

$$R^q(\beta) = \frac12\, \frac{\delta}{1-\delta}\, \beta^\top \operatorname{diag}\!\big(X^\top V(\beta)\, X\big)\, \beta. \tag{11}$$

Let β* be the maximum likelihood estimate given infinite data. When computed at β*, the matrix (1/n) XᵀV(β*)X = (1/n) Σ_{i=1}^n ∇²ℓ_{x_i, y_i}(β*) is an estimate of the Fisher information matrix I. Thus, dropout can be seen as an attempt to apply an L2 penalty after normalizing the feature vector by diag(I)^{-1/2}. The Fisher information is linked to the shape of the level surfaces of ℓ(β) around β*. If I were a multiple of the identity matrix, then these level surfaces would be perfectly spherical around β*.
Dropout, by normalizing the problem by diag(I)^{-1/2}, ensures that while the level surfaces of ℓ(β) may not be spherical, the L2-penalty is applied in a basis where the features have been balanced out. We give a graphical illustration of this phenomenon in Figure A.2.

Linear regression. For linear regression, V is the identity matrix, so the dropout objective is equivalent to a form of ridge regression where each column of the design matrix is normalized before applying the L2 penalty.⁵ This connection has been noted previously by [3].

Logistic regression. The form of dropout penalties becomes much more intriguing once we move beyond the realm of linear regression. The case of logistic regression is particularly interesting. Here, we can write the quadratic dropout penalty from (10) as

$$R^q(\beta) = \frac12\, \frac{\delta}{1-\delta} \sum_{i=1}^n \sum_{j=1}^d p_i(1-p_i)\, x_{ij}^2\, \beta_j^2. \tag{12}$$

Thus, just like additive noising, dropout generally gives an advantage to confident predictions and small β. However, unlike all the other methods considered so far, dropout may allow for some large p_i(1−p_i) and some large β_j², provided that the corresponding cross-terms x_{ij}² are small. Our analysis shows that dropout regularization should be better than L2-regularization for learning weights for features that are rare (i.e., often 0) but highly discriminative, because dropout effectively does not penalize β_j over observations for which x_{ij} = 0. Thus, in order for a feature to earn a large β_j², it suffices for it to contribute to a confident prediction with small p_i(1−p_i) each time that it is active.⁶ Dropout training has been empirically found to perform well on tasks such as document

Footnote 5: Normalizing the columns of the design matrix before performing penalized regression is standard practice, and is implemented by default in software like glmnet for R [16].
Footnote 6: To be precise, dropout does not reward all rare but discriminative features.
Rather, dropout rewards those features that are rare and positively co-adapted with other features in a way that enables the model to make confident predictions whenever the feature of interest is active.

Table 3: Accuracy of L2- and dropout-regularized logistic regression on a simulated example. The first row reports results over test examples where some of the rare useful features are active (i.e., where there is some signal that can be exploited), while the second row reports accuracy over the full test set. These results are averaged over 100 simulation runs, with 75 training examples in each. All tuning parameters were set to optimal values. The sampling error on all reported values is within ±0.01.

Accuracy            L2-regularization   Dropout training
Active instances    0.66                0.73
All instances       0.53                0.55

classification where rare but discriminative features are prevalent [3]. Our result suggests that this is no mere coincidence. We summarize the relationship between L2-penalization, additive noising, and dropout in Table 2. Additive noising introduces a product-form penalty depending on both β and A''. However, the full potential of artificial feature noising only emerges with dropout, which allows the penalty terms due to β and A'' to interact in a non-trivial way through the design matrix X (except for linear regression, in which all the noising schemes we consider collapse to ridge regression).

4.1 A Simulation Example

The above discussion suggests that dropout logistic regression should perform well with rare but useful features. To test this intuition empirically, we designed a simulation study where all the signal is grouped in 50 rare features, each of which is active only 4% of the time. We then added 1000 nuisance features that are always active to the design matrix, for a total of d = 1050 features.
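A direct transcription of the quadratic dropout penalty for logistic regression (a sketch we wrote for illustration, not the authors' code) also confirms that the elementwise double sum in (12) agrees with the matrix form βᵀ diag(XᵀV(β)X)β of (11):

```python
import numpy as np

def quad_dropout_penalty(X, beta, delta):
    """R^q(beta) = (1/2) * delta/(1-delta) * sum_i p_i(1-p_i) sum_j x_ij^2 b_j^2,
    the elementwise form (eq. 12) for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    v = p * (1.0 - p)                     # A''(x_i . beta)
    return 0.5 * delta / (1.0 - delta) * np.sum(v[:, None] * X**2 * beta**2)

def quad_dropout_penalty_matrix(X, beta, delta):
    """Same penalty in the matrix form beta^T diag(X^T V X) beta (eq. 11)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    V = np.diag(p * (1.0 - p))
    D = np.diag(np.diag(X.T @ V @ X))     # keep only the diagonal of X^T V X
    return 0.5 * delta / (1.0 - delta) * (beta @ D @ beta)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
beta = rng.normal(size=4)
assert np.isclose(quad_dropout_penalty(X, beta, 0.5),
                  quad_dropout_penalty_matrix(X, beta, 0.5))
```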
To make sure that our experiment was picking up the effect of dropout training specifically, and not just normalization of X, we ensured that the columns of X were normalized in expectation. The dropout penalty for logistic regression can be written as a matrix product

$$R^q(\beta) = \frac12\, \frac{\delta}{1-\delta}\, \begin{pmatrix} \cdots & p_i(1-p_i) & \cdots \end{pmatrix} \begin{pmatrix} & \vdots & \\ \cdots & x_{ij}^2 & \cdots \\ & \vdots & \end{pmatrix} \begin{pmatrix} \vdots \\ \beta_j^2 \\ \vdots \end{pmatrix}. \tag{13}$$

We designed the simulation study in such a way that, at the optimal β, the dropout penalty should have the following structure: the row vector of p_i(1−p_i) terms has small entries for confident predictions and big entries for weak predictions; the matrix of x_{ij}² terms is 0 wherever a weak prediction meets a useful feature; and the column vector of β_j² terms has big entries for useful features and small entries for nuisance features. (14)

A dropout penalty with such a structure should be small. Although there are some uncertain predictions with large p_i(1−p_i) and some big weights β_j², these terms cannot interact because the corresponding terms x_{ij}² are all 0 (these are examples without any of the rare discriminative features and thus without signal). Meanwhile, L2-penalization has no natural way of penalizing some β_j more and others less. Our simulation results, given in Table 3, confirm that dropout training outperforms L2-regularization here, as expected. See Appendix A.1 for details.

5 Dropout Regularization in Online Learning

There is a well-known connection between L2-regularization and stochastic gradient descent (SGD). In SGD, the weight vector β̂ is updated with β̂_{t+1} = β̂_t − η_t g_t, where g_t = ∇ℓ_{x_t, y_t}(β̂_t) is the gradient of the loss due to the t-th training example. We can also write this update as a linearized L2-penalized problem

$$\hat\beta_{t+1} = \arg\min_\beta \left\{ \ell_{x_t, y_t}(\hat\beta_t) + g_t \cdot (\beta - \hat\beta_t) + \frac{1}{2\eta_t}\, \|\beta - \hat\beta_t\|_2^2 \right\}, \tag{15}$$

where the first two terms form a linear approximation to the loss and the third term is an L2-regularizer. Thus, SGD progresses by repeatedly solving linearized L2-regularized problems.
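The claim that the SGD step exactly solves the linearized L2-penalized problem (15) can be verified by checking the first-order optimality condition of the surrogate; a minimal sketch (ours):

```python
import numpy as np

def sgd_step_via_linearized_problem(beta_t, g, eta):
    """argmin_beta  g.(beta - beta_t) + ||beta - beta_t||^2 / (2*eta)
    has the closed form beta_t - eta * g: the usual SGD update."""
    return beta_t - eta * g

beta_t = np.array([0.3, -1.2])
g = np.array([0.5, 2.0])      # gradient of the loss at beta_t
eta = 0.1
beta_next = sgd_step_via_linearized_problem(beta_t, g, eta)

# The gradient of the surrogate objective vanishes at the returned point,
# so it really is the minimizer of the linearized L2-regularized problem.
surrogate_grad = g + (beta_next - beta_t) / eta
assert np.allclose(surrogate_grad, 0.0)
assert np.allclose(beta_next, beta_t - eta * g)
```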
[Figure 2: two panels plotting test-set accuracy (0.80–0.90) of dropout+unlabeled, dropout, and L2.] Figure 2: Test set accuracy on the IMDB dataset [12] with unigram features. Left: 10,000 labeled training examples and up to 40,000 unlabeled examples. Right: 3,000–15,000 labeled training examples and 25,000 unlabeled examples. The unlabeled data is discounted by a factor α = 0.4.

As discussed by Duchi et al. [11], a problem with classic SGD is that it can be slow at learning weights corresponding to rare but highly discriminative features. This problem can be alleviated by running a modified form of SGD with β̂_{t+1} = β̂_t − η A_t^{-1} g_t, where the transformation A_t is also learned online; this leads to the AdaGrad family of stochastic descent rules. Duchi et al. use A_t = diag(G_t)^{1/2}, where G_t = Σ_{i=1}^t g_i g_iᵀ, and show that this choice achieves desirable regret bounds in the presence of rare but useful features. At least superficially, AdaGrad and dropout seem to have similar goals: for logistic regression, both can be understood as adaptive alternatives to methods based on L2-regularization that favor learning rare, useful features. As it turns out, they have a deeper connection. The natural way to incorporate dropout regularization into SGD is to replace the penalty term ‖β − β̂_t‖₂²/(2η) in (15) with the dropout regularizer, giving the update rule

$$\hat\beta_{t+1} = \arg\min_\beta \left\{ \ell_{x_t, y_t}(\hat\beta_t) + g_t \cdot (\beta - \hat\beta_t) + R^q(\beta - \hat\beta_t;\, \hat\beta_t) \right\}, \tag{16}$$

where R^q(·; β̂_t) is the quadratic noising regularizer centered at β̂_t:⁷

$$R^q(\beta - \hat\beta_t;\, \hat\beta_t) = \frac12 (\beta - \hat\beta_t)^\top \operatorname{diag}(H_t)\, (\beta - \hat\beta_t), \quad \text{where} \quad H_t = \sum_{i=1}^t \nabla^2 \ell_{x_i, y_i}(\hat\beta_t). \tag{17}$$

This implies that dropout descent is first-order equivalent to an adaptive SGD procedure with A_t = diag(H_t).
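A toy comparison of the two preconditioners (our illustration, with made-up diagonal estimates) shows the shared qualitative behavior: both rules take larger steps on rarely active, low-curvature coordinates, with AdaGrad's square root acting less aggressively:

```python
import numpy as np

def adaptive_step(beta_t, g, A_diag):
    """One step of beta_{t+1} = beta_t - A^{-1} g with a diagonal preconditioner.
    A = diag(H_t) gives the dropout-based rule; AdaGrad uses A = diag(G_t)^{1/2}."""
    return beta_t - g / A_diag

# Hypothetical accumulated statistics: coordinate 0 is frequently active
# (large curvature / squared-gradient sum), coordinate 1 is rare.
H_diag = np.array([4.0, 0.25])   # running sum of loss curvatures, diag(H_t)
G_diag = np.array([4.0, 0.25])   # running sum of squared gradients, diag(G_t)
g = np.array([1.0, 1.0])

dropout_like = adaptive_step(np.zeros(2), g, H_diag)           # steps of size 0.25 and 4
adagrad_like = adaptive_step(np.zeros(2), g, np.sqrt(G_diag))  # steps of size 0.5 and 2

# Both rules move the rare coordinate further than the frequent one,
# but the sqrt makes AdaGrad's preconditioning less aggressive.
assert abs(dropout_like[1]) > abs(dropout_like[0])
assert abs(adagrad_like[1]) > abs(adagrad_like[0])
assert abs(adagrad_like[1]) < abs(dropout_like[1])
```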
To see the connection between AdaGrad and this dropout-based online procedure, recall that for GLMs both of the expressions

$$\mathbb{E}_{\beta^*}\!\left[\nabla^2 \ell_{x,y}(\beta^*)\right] = \mathbb{E}_{\beta^*}\!\left[\nabla \ell_{x,y}(\beta^*)\, \nabla \ell_{x,y}(\beta^*)^\top\right] \tag{18}$$

are equal to the Fisher information I [17]. In other words, as β̂_t converges to β*, G_t and H_t are both consistent estimates of the Fisher information. Thus, by using dropout instead of L2-regularization to solve linearized problems in online learning, we end up with an AdaGrad-like algorithm. Of course, the connection between AdaGrad and dropout is not perfect. In particular, AdaGrad allows for a more aggressive learning rate by using A_t^{-1} = diag(G_t)^{-1/2} instead of diag(G_t)^{-1}. But, at a high level, AdaGrad and dropout appear to both be aiming for the same goal: scaling the features by the Fisher information to make the level curves of the objective more circular. In contrast, L2-regularization makes no attempt to sphere the level curves, and AROW [18], another popular adaptive method for online learning, only attempts to normalize the effective feature matrix but does not consider the sensitivity of the loss to changes in the model weights. In the case of logistic regression, AROW also favors learning rare features, but unlike dropout and AdaGrad it does not privilege confident predictions.

Footnote 7: This expression is equivalent to (11), except that we used β̂_t and not β − β̂_t to compute H_t.

Table 4: Performance of semi-supervised dropout training for document classification.

(a) Test accuracy with and without unlabeled data on different datasets. Each dataset is split into 3 parts of equal size: train, unlabeled, and test. Log. Reg.: logistic regression with L2-regularization; Dropout: dropout trained with the quadratic surrogate; +Unlabeled: using unlabeled data.

Dataset    Log. Reg.   Dropout   +Unlabeled
Subj       88.96       90.85     91.48
RT         73.49       75.18     76.56
IMDB-2k    80.63       81.23     80.33
XGraph     83.10       84.64     85.41
BbCrypt    97.28       98.49     99.24
IMDB       87.14       88.70     89.21

(b) Test accuracy on the IMDB dataset [12].
Labeled: using just the labeled data for each paper/method; +Unlabeled: using additional unlabeled data. Drop: dropout with R^q; MNB: multinomial naive Bayes with the semi-supervised frequency estimate from [19];⁸ -Uni: unigram features; -Bi: bigram features.

Method           Labeled   +Unlabeled
MNB-Uni [19]     83.62     84.13
MNB-Bi [19]      86.63     86.98
Vect.Sent [12]   88.33     88.89
NBSVM [15]-Bi    91.22     –
Drop-Uni         87.78     89.52
Drop-Bi          91.31     91.98

6 Semi-Supervised Dropout Training

Recall that the regularizer R(β) in (5) is independent of the labels {y_i}. As a result, we can use additional unlabeled training examples to estimate it more accurately. Suppose we have an unlabeled dataset {z_i} of size m, and let α ∈ (0, 1] be a discount factor for the unlabeled data. Then we can define a semi-supervised penalty estimate

$$R_*(\beta) := \frac{n}{n + \alpha m} \Big( R(\beta) + \alpha\, R_{\text{Unlabeled}}(\beta) \Big), \tag{19}$$

where R(β) is the original penalty estimate and R_Unlabeled(β) = Σ_i E_ξ[A(z̃_i·β)] − A(z_i·β) is computed using (5) over the unlabeled examples z_i. We select the discount parameter by cross-validation; empirically, α ∈ [0.1, 0.4] works well. For convenience, we optimize the quadratic surrogate R^q_* instead of R_*. Another practical option would be to use the Gaussian approximation from [3] to estimate R_*(β). Most approaches to semi-supervised learning rely either on a generative model [19, 20, 21, 22, 23] or on various assumptions about the relationship between the predictor and the marginal distribution over inputs. Our semi-supervised approach is based on a different intuition: we would like to set weights that make confident predictions on the unlabeled data as well as the labeled data, an intuition shared by entropy regularization [24] and transductive SVMs [25].

Experiments. We apply this semi-supervised technique to text classification. Results on several datasets described in [15] are shown in Table 4a; Figure 2 illustrates how the use of unlabeled data improves the performance of our classifier on a single dataset.
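The combination rule in (19) is straightforward to implement once the labeled and unlabeled penalty estimates are available; below is a minimal sketch (ours, with hypothetical penalty values passed in directly rather than computed from data via (5)):

```python
import numpy as np

def semi_supervised_penalty(R_labeled, R_unlabeled, n, m, alpha):
    """R_*(beta) = n/(n + alpha*m) * (R(beta) + alpha * R_unlabeled(beta)),
    as in eq. (19); alpha in (0, 1] discounts the unlabeled contribution."""
    return n / (n + alpha * m) * (R_labeled + alpha * R_unlabeled)

# As alpha -> 0, the estimate reduces to the labeled-only penalty.
assert np.isclose(semi_supervised_penalty(2.0, 3.0, n=100, m=400, alpha=1e-9),
                  2.0, atol=1e-6)

# Hypothetical penalty values with the empirically good alpha = 0.4:
R_star = semi_supervised_penalty(2.0, 3.0, n=100, m=400, alpha=0.4)
assert R_star > 0.0
```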
Overall, we see that using unlabeled data to learn a better regularizer R_*(β) consistently improves the performance of dropout training. Table 4b shows our results on the IMDB dataset of [12]. The dataset contains 50,000 unlabeled examples in addition to the labeled train and test sets of 25,000 examples each. Whereas the train and test examples are either positive or negative, the unlabeled examples also contain neutral reviews. We train a dropout-regularized logistic regression classifier on unigram/bigram features, and use the unlabeled data to tune our regularizer. Our method benefits from unlabeled data even in the presence of a large amount of labeled data, and achieves state-of-the-art accuracy on this dataset.

7 Conclusion

We analyzed dropout training as a form of adaptive regularization. This framework enabled us to uncover close connections between dropout training, adaptively balanced L2-regularization, and AdaGrad, and it led to a simple yet effective method for semi-supervised training. There seem to be multiple opportunities for digging deeper into the connection between dropout training and adaptive regularization. In particular, it would be interesting to see whether the dropout regularizer takes on a tractable and/or interpretable form in neural networks, and whether similar semi-supervised schemes could be used to improve on the results presented in [1].

Footnote 8: Our implementation of semi-supervised MNB. MNB with EM [20] failed to give an improvement.

References
[1] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[2] Laurens van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q Weinberger. Learning with marginalized corrupted features. In Proceedings of the International Conference on Machine Learning, 2013.
[3] Sida I Wang and Christopher D Manning. Fast dropout training.
In Proceedings of the International Conference on Machine Learning, 2013.
[4] Yaser S Abu-Mostafa. Learning from hints in neural networks. Journal of Complexity, 6(2):192–198, 1990.
[5] Chris J.C. Burges and Bernhard Schölkopf. Improving the accuracy and speed of support vector machines. In Advances in Neural Information Processing Systems, pages 375–381, 1997.
[6] Patrice Y Simard, Yann A Le Cun, John S Denker, and Bernard Victorri. Transformation invariance in pattern recognition: Tangent distance and propagation. International Journal of Imaging Systems and Technology, 11(3):181–197, 2000.
[7] Salah Rifai, Yann Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. Advances in Neural Information Processing Systems, 24:2294–2302, 2011.
[8] Kiyotoshi Matsuoka. Noise injection into inputs in back-propagation learning. Systems, Man and Cybernetics, IEEE Transactions on, 22(3):436–440, 1992.
[9] Chris M Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
[10] Salah Rifai, Xavier Glorot, Yoshua Bengio, and Pascal Vincent. Adding noise to the input of a model trained with a regularized objective. arXiv preprint arXiv:1104.3250, 2011.
[11] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2010.
[12] Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 142–150. Association for Computational Linguistics, 2011.
[13] Sida I Wang, Mengqiu Wang, Stefan Wager, Percy Liang, and Christopher D Manning. Feature noising for log-linear structured prediction. In Empirical Methods in Natural Language Processing, 2013.
[14] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proceedings of the International Conference on Machine Learning, 2013. [15] Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 90–94. Association for Computational Linguistics, 2012. [16] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010. [17] Erich Leo Lehmann and George Casella. Theory of Point Estimation. Springer, 1998. [18] Koby Crammer, Alex Kulesza, Mark Dredze, et al. Adaptive regularization of weight vectors. Advances in Neural Information Processing Systems, 22:414–422, 2009. [19] Jiang Su, Jelber Sayyad Shirab, and Stan Matwin. Large scale text classification using semi-supervised multinomial naive Bayes. In Proceedings of the International Conference on Machine Learning, 2011. [20] Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2-3):103–134, May 2000. [21] G. Bouchard and B. Triggs. The trade-off between generative and discriminative classifiers. In International Conference on Computational Statistics, pages 721–728, 2004. [22] R. Raina, Y. Shen, A. Ng, and A. McCallum. Classification with hybrid generative/discriminative models. In Advances in Neural Information Processing Systems, Cambridge, MA, 2004. MIT Press. [23] J. Suzuki, A. Fujino, and H. Isozaki. Semi-supervised structured output learning based on a hybrid generative and discriminative approach. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007. [24] Y. Grandvalet and Y. Bengio. Entropy regularization. 
In Semi-Supervised Learning, United Kingdom, 2005. Springer.
[25] Thorsten Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the International Conference on Machine Learning, pages 200–209, 1999.
Prior-free and prior-dependent regret bounds for Thompson Sampling. Sébastien Bubeck, Che-Yu Liu. Department of Operations Research and Financial Engineering, Princeton University. sbubeck@princeton.edu, cheliu@princeton.edu. Abstract: We consider the stochastic multi-armed bandit problem with a prior distribution on the reward distributions. We are interested in studying prior-free and prior-dependent regret bounds, very much in the same spirit as the usual distribution-free and distribution-dependent bounds for the non-Bayesian stochastic bandit. We first show that Thompson Sampling attains an optimal prior-free bound, in the sense that for any prior distribution its Bayesian regret is bounded from above by 14√(nK). This result is unimprovable in the sense that there exists a prior distribution such that any algorithm has a Bayesian regret bounded from below by (1/20)√(nK). We also study priors for the setting of Bubeck et al. [2013] (where the optimal mean is known, as well as a lower bound on the smallest gap), and we show that in this case the regret of Thompson Sampling is in fact uniformly bounded over time, thus showing that Thompson Sampling can greatly take advantage of the nice properties of these priors.

1 Introduction

In this paper we are interested in the Bayesian multi-armed bandit problem, which can be described as follows. Let π₀ be a known distribution over some set Θ, and let θ be a random variable distributed according to π₀. For i ∈ [K], let (X_{i,s})_{s≥1} be identically distributed random variables taking values in [0, 1] which are independent conditionally on θ. Denote µ_i(θ) := E(X_{i,1} | θ). Consider now an agent facing K actions (or arms). At each time step t = 1, ..., n, the agent pulls an arm I_t ∈ [K]. The agent receives the reward X_{i,s} when he pulls arm i for the s-th time. The arm selection is based only on past observed rewards and potentially on an external source of randomness. More formally, let (U_s)_{s≥1} be an i.i.d.
sequence of random variables uniformly distributed on [0, 1], and let T_i(s) = Σ_{t=1}^s 1{I_t = i}; then I_t is a random variable measurable with respect to σ(I_1, X_{1,1}, ..., I_{t−1}, X_{I_{t−1}, T_{I_{t−1}}(t−1)}, U_t). We measure the performance of the agent through the Bayesian regret, defined as

$$\mathrm{BR}_n = \mathbb{E} \sum_{t=1}^n \left( \max_{i \in [K]} \mu_i(\theta) - \mu_{I_t}(\theta) \right),$$

where the expectation is taken with respect to the parameter θ, the rewards (X_{i,s})_{s≥1}, and the external source of randomness (U_s)_{s≥1}. We will also be interested in the individual regret R_n(θ), which is defined similarly except that θ is fixed (instead of being integrated over π₀). When it is clear from the context we drop the dependency on θ in the various quantities defined above.

Given a prior π₀, the problem of finding an optimal strategy to minimize the Bayesian regret BR_n is a well-defined optimization problem, and as such it is merely a computational problem. On the other hand, the point of view initially developed in Robbins [1952] leads to a learning problem. In this latter view the agent's strategy must have a low regret R_n(θ) for any θ ∈ Θ. Both formulations of the problem have a long history, and we refer the interested reader to Bubeck and Cesa-Bianchi [2012] for a survey of the extensive recent literature on the learning setting. In the Bayesian setting a major breakthrough was achieved in Gittins [1979], where it was shown that when the prior distribution takes a product form an optimal strategy is given by the Gittins indices (which are relatively easy to compute). The product assumption on the prior means that the reward processes (X_{i,s})_{s≥1} are independent across arms. In the present paper we are precisely interested in situations where this assumption is not satisfied. Indeed, we believe that one of the strengths of the Bayesian setting is that one can incorporate prior knowledge on the arms in a very transparent way.
A prototypical example, which we consider later in this paper, is when one knows the distributions of the arms up to a permutation, in which case the reward processes are strongly dependent. In general, without the product assumption on the prior, it seems hopeless (from a computational perspective) to look for the optimal Bayesian strategy. Thus, despite being in a Bayesian setting, it makes sense to view it as a learning problem and to evaluate the agent's performance through its Bayesian regret. In this paper we are particularly interested in studying the Thompson Sampling strategy, which was proposed in the very first paper on the multi-armed bandit problem, Thompson [1933]. This strategy can be described very succinctly: let π_t be the posterior distribution on θ given the history H_t = (I_1, X_{1,1}, ..., I_{t−1}, X_{I_{t−1}, T_{I_{t−1}}(t−1)}) of the algorithm up to the beginning of round t. Thompson Sampling first draws a parameter θ(t) from π_t (independently from the past given π_t) and then pulls I_t ∈ argmax_{i∈[K]} µ_i(θ(t)). Recently there has been a surge of interest in this simple policy, mainly because of its flexibility in incorporating prior knowledge on the arms; see for example Chapelle and Li [2011]. For a long time the theoretical properties of Thompson Sampling remained elusive. The specific case of binary rewards with a Beta prior is now very well understood thanks to the papers Agrawal and Goyal [2012a], Kaufmann et al. [2012], and Agrawal and Goyal [2012b]. However, as we pointed out above, here we are interested in proving regret bounds for the more realistic scenario where one runs Thompson Sampling with a hand-tuned prior distribution, possibly very different from a Beta prior. The first result in that spirit was obtained very recently by Russo and Roy [2013], who showed that for any prior distribution π₀ Thompson Sampling always satisfies BR_n ≤ 5√(nK log n). A similar bound was proved in Agrawal and Goyal [2012b] for the specific case of the Beta prior.¹
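For the well-understood special case mentioned above, binary rewards with independent Beta priors, Thompson Sampling is only a few lines of code. The sketch below (ours, and deliberately restricted to this special case rather than the general-prior setting analyzed here) draws a posterior sample for every arm each round and pulls the argmax:

```python
import numpy as np

def thompson_sampling(reward_fn, K, n, rng):
    """Thompson Sampling for Bernoulli arms with independent Beta(1, 1) priors.
    Each round: sample a mean for every arm from its Beta posterior,
    pull the arm with the largest sample, and update its posterior."""
    succ = np.ones(K)                   # Beta posterior alpha parameters
    fail = np.ones(K)                   # Beta posterior beta parameters
    pulls = np.zeros(K, dtype=int)
    for t in range(n):
        theta_t = rng.beta(succ, fail)  # posterior sample theta(t), one per arm
        i = int(np.argmax(theta_t))     # I_t = argmax_i mu_i(theta(t))
        r = reward_fn(i)                # observe a {0, 1} reward
        succ[i] += r
        fail[i] += 1 - r
        pulls[i] += 1
    return pulls

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.8])       # hypothetical arm means for illustration
pulls = thompson_sampling(lambda i: rng.binomial(1, means[i]), K=3, n=2000, rng=rng)
# After 2000 rounds the best arm should dominate the pull counts.
assert pulls.argmax() == 2
```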
Our first contribution is to show in Section 2 that the extraneous logarithmic factor in these bounds can be removed by using ideas reminiscent of the MOSS algorithm of Audibert and Bubeck [2009]. Our second contribution is to show that Thompson Sampling can take advantage of the properties of some non-trivial priors to attain much better regret guarantees. More precisely in Section 2 and 3 we consider the setting of Bubeck et al. [2013] (which we call the BPR setting) where µ∗and ε > 0 are known values such that for any θ ∈Θ, first there is a unique best arm {i∗(θ)} = argmaxi∈[K] µi(θ), and furthermore µi∗(θ)(θ) = µ∗, and ∆i(θ) := µi∗(θ)(θ) −µi(θ) ≥ε, ∀i ̸= i∗(θ). In other words the value of the best arm is known as well as a non-trivial lower bound on the gap between the values of the best and second best arms. For this problem a new algorithm was proposed in Bubeck et al. [2013] (which we call the BPR policy), and it was shown that the BPR policy satisfies Rn(θ) = O X i̸=i∗(θ) log(∆i(θ)/ε) ∆i(θ) log log(1/ε) , ∀θ ∈Θ, ∀n ≥1. Thus the BPR policy attains a regret uniformly bounded over time in the BPR setting, a feature that standard bandit algorithms such as UCB of Auer et al. [2002] cannot achieve. It is natural to view 1Note however that the result of Agrawal and Goyal [2012b] applies to the individual regret Rn(θ) while the result of Russo and Roy [2013] only applies to the integrated Bayesian regret BRn. 2 the assumptions of the BPR setting as a prior over the reward distributions and to ask what regret guarantees attain Thompson Sampling in that situation. More precisely we consider Thompson Sampling with Gaussian reward distributions and uniform prior over the possible range of parameters. We then prove individual regret bounds for any sub-Gaussian distributions (similarly to Bubeck et al. [2013]). We obtain that Thompson Sampling uses optimally the prior information in the sense that it also attains uniformly bounded over time regret. 
Furthermore as an added bonus we remove the extraneous log-log factor of the BPR policy’s regret bound. The results presented in Section 2 and 3 can be viewed as a first step towards a better understanding of prior-dependent regret bounds for Thompson Sampling. Generalizing these results to arbitrary priors is a challenging open problem which is beyond the scope of our current techniques. 2 Optimal prior-free regret bound for Thompson Sampling In this section we prove the following result. Theorem 1 For any prior distribution π0 over reward distributions in [0, 1], Thompson Sampling satisfies BRn ≤14 √ nK. Remark that the above result is unimprovable in the sense that there exist prior distributions π0 such that for any algorithm one has Rn ≥ 1 20 √ nK (see e.g. [Theorem 3.5, Bubeck and Cesa-Bianchi [2012]]). This theorem also implies an optimal rate of identification for the best arm, see Bubeck et al. [2009] for more details on this. Proof We decompose the proof into three steps. We denote i∗(θ) ∈argmaxi∈[K] µi(θ), in particular one has It = i∗(θt). Step 1: rewriting of the Bayesian regret in terms of upper confidence bounds. This step is given by [Proposition 1, Russo and Roy [2013]] which we reprove for sake of completeness. Let Bi,t be a random variable measurable with respect to σ(Ht). Note that by definition θ(t) and θ are identically distributed conditionally on Ht. This implies by the tower rule: EBi∗(θ),t = EBi∗(θ(t)),t = EBIt,t. Thus we obtain: E µi∗(θ)(θ) −µIt(θ) = E µi∗(θ)(θ) −Bi∗(θ),t + E (BIt,t −µIt(θ)) . Inspired by the MOSS strategy of Audibert and Bubeck [2009] we will now take Bi,t = bµi,Ti(t−1) + v u u tlog+ n KTi(t−1) Ti(t −1) , where bµi,s = 1 s Ps t=1 Xi,t, and log+(x) = log(x)1x≥1. In the following we denote δ0 = 2 q K n . From now on we work conditionally on θ and thus we drop all the dependency on θ. Step 2: control of E µi∗(θ)(θ) −Bi∗(θ),t|θ . By a simple integration of the deviations one has E (µi∗−Bi∗,t) ≤δ0 + Z 1 δ0 P(µi∗−Bi∗,t ≥u)du. 
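The MOSS-style index B_{i,t} used in Step 1 of the proof is simple to compute; a minimal sketch (function names are ours) with log_+(x) = log(x) 1{x ≥ 1}:

```python
import math

def log_plus(x):
    # log_+(x) = log(x) for x >= 1 and 0 otherwise
    return math.log(x) if x >= 1.0 else 0.0

def moss_index(mean_hat, pulls, n, K):
    """MOSS-style upper confidence bound B_{i,t}:
    empirical mean plus sqrt(log_+(n / (K * T_i(t-1))) / T_i(t-1))."""
    return mean_hat + math.sqrt(log_plus(n / (K * pulls)) / pulls)

b = moss_index(0.4, pulls=10, n=1000, K=5)
```

Note that once an arm has been pulled more than n/K times the exploration bonus vanishes and the index reduces to the empirical mean.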
Next we extract the following inequality from Audibert and Bubeck [2010] (see p2683–2684), for any i ∈[K], P(µi −Bi,t ≥u) ≤4K nu2 log r n K u + 1 nu2/K −1. 3 Now an elementary integration gives Z 1 δ0 4K nu2 log r n K u du = −4K nu log e r n K u 1 δ0 ≤4K nδ0 log e r n K δ0 = 2(1+log 2) r K n , and Z 1 δ0 1 nu2/K −1du = " −1 2 r K n log p n K u + 1 p n K u −1 !#1 δ0 ≤1 2 r K n log p n K δ0 + 1 p n K δ0 −1 ! = log 3 2 r K n . Thus we proved: E µi∗(θ)(θ) −Bi∗(θ),t|θ ≤ 2 + 2(1 + log 2) + log 3 2 q K n ≤6 q K n . Step 3: control of Pn t=1 E (BIt,t −µIt(θ)|θ). We start again by integrating the deviations: E n X t=1 (BIt,t −µIt) ≤δ0n + Z +∞ δ0 n X t=1 P(BIt,t −µIt ≥u)du. Next we use the following simple inequality: n X t=1 1{BIt,t −µIt ≥u} ≤ n X s=1 K X i=1 1 bµi,s + s log+ n Ks s −µi ≥u , which implies n X t=1 P(BIt,t −µIt ≥u) ≤ K X i=1 n X s=1 P bµi,s + s log+ n Ks s −µi ≥u . Now for u ≥δ0 let s(u) = ⌈3 log nu2 K /u2⌉where ⌈x⌉is the smallest integer large than x. Let c = 1 − 1 √ 3. It is is easy to see that one has: n X s=1 P bµi,s + s log+ n Ks s −µi ≥u ≤ 3 log nu2 K u2 + n X s=s(u) P (bµi,s −µi ≥cu) . Using an integration already done in Step 2 we have Z +∞ δ0 3 log nu2 K u2 ≤3(1 + log(2)) r n K ≤5.1 r n K . Next using Hoeffding’s inequality and the fact that the rewards are in [0, 1] one has for u ≥δ0 n X s=s(u) P (bµi,s −µi ≥cu) ≤ n X s=s(u) exp(−2sc2u2)1u≤1/c ≤exp(−12c2 log 2) 1 −exp(−2c2u2)1u≤1/c. Now using that 1 −exp(−x) ≥x −x2/2 for x ≥0 one obtains Z 1/c δ0 1 1 −exp(−2c2u2)du = Z 1/(2c) δ0 1 1 −exp(−2c2u2)du + Z 1/c 1/(2c) 1 1 −exp(−2c2u2)du ≤ Z 1/(2c) δ0 1 2c2u2 −2c4u4 du + 1 2c(1 −exp(−1/2)) ≤ Z 1/(2c) δ0 2 3c2u2 du + 1 2c(1 −exp(−1/2)) = 2 3c2δ0 −4 3c + 1 2c(1 −exp(−1/2)) ≤ 1.9 r n K . 4 Putting the pieces together we proved E n X t=1 (BIt,t −µIt) ≤7.6 √ nK, which concludes the proof together with the results of Step 1 and Step 2. 3 Thompson Sampling in the two-armed BPR setting Following [Section 2, Bubeck et al. 
[2013]] we consider here the two-armed bandit problem with sub-Gaussian reward distributions (that is they satisfy Eeλ(X−µ) ≤eλ2/2 for all λ ∈R) and such that one reward distribution has mean µ∗and the other one has mean µ∗−∆where µ∗and ∆are known values. In order to derive the Thompson Sampling strategy for this problem we further assume that the reward distributions are in fact Gaussian with variance 1. In other words let Θ = {θ1, θ2}, π0(θ1) = π0(θ2) = 1/2, and under θ1 one has X1,s ∼N(µ∗, 1) and X2,s ∼N(µ∗−∆, 1) while under θ2 one has X2,s ∼N(µ∗, 1) and X1,s ∼N(µ∗−∆, 1). Then a straightforward computation (using Bayes rule and induction) shows that one has for some normalizing constant c > 0: πt(θ1) = c exp −1 2 T1(t−1) X s=1 (µ∗−X1,s)2 −1 2 T2(t−1) X s=1 (µ∗−∆−X2,s)2 , πt(θ2) = c exp −1 2 T1(t−1) X s=1 (µ∗−∆−X1,s)2 −1 2 T2(t−1) X s=1 (µ∗−X2,s)2 . Recall that Thompson Sampling draws θ(t) from πt and then pulls the best arm for the environment θ(t). Observe that under θ1 the best arm is arm 1 and under θ2 the best arm is arm 2. In other words Thompson Sampling draws It at random with the probabilities given by the posterior πt. This leads to a general algorithm for the two-armed BPR setting with sub-Gaussian reward distributions that we summarize in Figure 1. The next result shows that it attains optimal performances in this setting up to a numerical constant (see Bubeck et al. [2013] for lower bounds), for any sub-Gaussian reward distribution (not necessarily Gaussian) with largest mean µ∗and gap ∆. For rounds t ∈{1, 2}, select arm It = t. For each round t = 3, 4, . . . play It at random from pt where pt(1) = c exp −1 2 T1(t−1) X s=1 (µ∗−X1,s)2 −1 2 T2(t−1) X s=1 (µ∗−∆−X2,s)2 , pt(2) = c exp −1 2 T1(t−1) X s=1 (µ∗−∆−X1,s)2 −1 2 T2(t−1) X s=1 (µ∗−X2,s)2 , and c > 0 is such that pt(1) + pt(2) = 1. Figure 1: Policy inspired by Thompson Sampling for the two-armed BPR setting. Theorem 2 The policy of Figure 1 has regret bounded as Rn ≤∆+ 578 ∆, uniformly in n. 
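The policy of Figure 1 is easy to implement; a sketch (ours, not the authors' code) computing the selection probabilities p_t in log-space to avoid underflow for large sample counts:

```python
import numpy as np

def bpr2_probs(x1, x2, mu_star, delta):
    """Selection probabilities p_t of the Figure-1 policy: posterior mass of
    the two hypotheses theta_1, theta_2 under unit-variance Gaussian
    likelihoods, normalised so that p_t(1) + p_t(2) = 1."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    log_p1 = (-0.5 * np.sum((mu_star - x1) ** 2)
              - 0.5 * np.sum((mu_star - delta - x2) ** 2))
    log_p2 = (-0.5 * np.sum((mu_star - delta - x1) ** 2)
              - 0.5 * np.sum((mu_star - x2) ** 2))
    m = max(log_p1, log_p2)
    w = np.exp([log_p1 - m, log_p2 - m])  # normalisation plays the role of c
    return w / w.sum()

p = bpr2_probs(x1=[0.1, -0.2, 0.3], x2=[-0.4, -0.1], mu_star=0.0, delta=0.2)
```

Here arm 1's samples sit closer to μ* = 0 than to μ* − ∆, so the policy favours arm 1.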
5 0 500 1000 1500 2000 2500 3000 0 1 2 3 4 5 Time n Rescaled regret: ∆Rn µ* = 0, ∆ = 0.2 Policy 1 from Bubeck et al.[2013] Policy of Figure 1 0 5000 10000 15000 20000 0 1 2 3 4 5 Time n Rescaled regret: ∆Rn µ* = 0, ∆ = 0.05 Policy 1 from Bubeck et al.[2013] Policy of Figure 1 Figure 2: Empirical comparison of the policy of Figure 1 and Policy 1 of Bubeck et al. [2013] on Gaussian reward distributions with variance 1. Note that we did not try to optimize the numerical constant in the above bound. Figure 2 shows an empirical comparison of the policy of Figure 1 with Policy 1 of Bubeck et al. [2013]. Note in particular that a regret bound of order 16/∆was proved for the latter algorithm and the (limited) numerical simulation presented here suggests that Thompson Sampling outperforms this strategy. Proof Without loss of generality we assume that arm 1 is the optimal arm, that is µ1 = µ∗and µ2 = µ∗−∆. Let bµi,s = 1 s Ps t=1 Xi,t, bγ1,s = µ1 −bµ1,s and bγ2,s = bµ2,s −µ2. Note that large (positive) values of bγ1,s or bγ2,s might mislead the algorithm into bad decisions, and we will need to control what happens in various regimes for these γ coefficients. We decompose the proof into three steps. Step 1. This first step will be useful in the rest of the analysis, it shows how the probability ratio of a bad pull over a good pull evolves as a function of the γ coefficients introduced above. One has: pt(2) pt(1) = exp −1 2 T1(t−1) X s=1 (µ2 −X1,s)2 −(µ1 −X1,s)2 −1 2 T2(t−1) X s=1 (µ1 −X2,s)2 −(µ2 −X2,s)2 = exp −T1(t −1) 2 µ2 2 −µ2 1 −2(µ2 −µ1)bµ1,T1(t−1) −T2(t −1) 2 µ2 1 −µ2 2 −2(µ1 −µ2)bµ2,T2(t−1) = exp −T1(t −1) 2 ∆2 −2∆(µ1 −bµ1,T1(t−1)) −T2(t −1) 2 ∆2 −2∆(bµ2,T2(t−1) −µ2) = exp −t∆2 2 + T1(t −1)∆bγ1,T1(t−1) + T2(t −1)∆bγ2,T2(t−1) ! . Step 2. We decompose the regret Rn as follows: Rn ∆ = 1 + E n X t=3 1{It = 2} = 1 + E n X t=3 1 bγ2,T2(t−1) > ∆ 4 , It = 2 + E n X t=3 1 bγ2,T2(t−1) ≤∆ 4 , bγ1,T1(t−1) ≤∆ 4 , It = 2 +E n X t=3 1 bγ2,T2(t−1) ≤∆ 4 , bγ1,T1(t−1) > ∆ 4 , It = 2 . 
We use Hoeffding’s inequality to control the first term: E n X t=3 1 bγ2,T2(t−1) > ∆ 4 , It = 2 ≤E n X s=1 1 bγ2,s > ∆ 4 ≤ n X s=1 exp −s∆2 32 ≤32 ∆2 . 6 For the second term, using the rewriting of Step 1 as an upper bound on pt(2), one obtains: E n X t=3 1 bγ2,T2(t−1) ≤∆ 4 , bγ1,T1(t−1) ≤∆ 4 , It = 2 = n X t=3 E pt(2)1 bγ2,T2(t−1) ≤∆ 4 , bγ1,T1(t−1) ≤∆ 4 ≤ n X t=3 exp −t∆2 4 ! ≤ 4 ∆2 . The third term is more difficult to control, and we further decompose the corresponding event as follows: bγ2,T2(t−1) ≤∆ 4 , bγ1,T1(t−1) > ∆ 4 , It = 2 ⊂ bγ1,T1(t−1) > ∆ 4 , T1(t −1) > t/4 ∪ bγ2,T2(t−1) ≤∆ 4 , It = 2, T1(t −1) ≤t/4 . The cumulative probability of the first event in the above decomposition is easy to control thanks to Hoeffding’s maximal inequality2 which states that for any m ≥1 and x > 0 one has P(∃1 ≤s ≤m s.t. s bγ1,s ≥x) ≤exp −x2 2m . Indeed this implies P bγ1,T1(t−1) > ∆ 4 , T1(t −1) > t/4 ≤P ∃1 ≤s ≤t s.t. s bγ1,s > ∆t 16 ≤exp −t∆2 512 , and thus E n X t=3 1 bγ1,T1(t−1) > ∆ 4 , T1(t −1) > t/4 ≤512 ∆2 . It only remains to control the term E n X t=3 1 bγ2,T2(t−1) ≤∆ 4 , It = 2, T1(t −1) ≤t/4 = n X t=3 E pt(2)1 bγ2,T2(t−1) ≤∆ 4 , T1(t −1) ≤t/4 ≤ n X t=3 E exp −t∆2 4 + ∆ max 1≤s≤t/4 sbγ1,s ! , where the last inequality follows from Step 1. The last step is devoted to bounding from above this last term. Step 3. By integrating the deviations and using again Hoeffding’s maximal inequality one obtains E exp ∆ max 1≤s≤t/4 sbγ1,s ≤1+ Z +∞ 1 P max 1≤s≤t 4 sbγ1,s ≥log x ∆ ! dx ≤1+ Z +∞ 1 exp −2(log x)2 ∆2t dx. Now, straightforward computation gives n X t=3 exp −t∆2 4 ! 1 + Z +∞ 1 exp −2(log x)2 ∆2t ! dx ! ≤ n X t=3 exp −t∆2 4 ! 1 + s π∆2t 2 exp t∆2 8 ! ≤ 4 ∆2 + Z +∞ 0 s π∆2t 2 exp −t∆2 8 ! dt ≤ 4 ∆2 + 16√π ∆2 Z +∞ 0 √u exp(−u) du ≤ 30 ∆2 . which concludes the proof by putting this together with the results of the previous step. 
2It is an easy exercise to verify that Azuma-Hoeffding holds for martingale differences with sub-Gaussian increments, which implies Hoeffding’s maximal inequality for sub-Gaussian distributions. 7 4 Optimal strategy for the BPR setting inspired by Thompson Sampling In this section we consider the general BPR setting. That is the reward distributions are sub-Gaussian (they satisfy Eeλ(X−µ) ≤eλ2/2 for all λ ∈R), one reward distribution has mean µ∗, and all the other means are smaller than µ∗−ε where µ∗and ε are known values. Similarly to the previous section we assume that the reward distributions are Gaussian with variance 1 for the derivation of the Thompson Sampling strategy (but we do not make this assumption for the analysis of the resulting algorithm). Then the set of possible parameters is described as follows: Θ = ∪K i=1Θi where Θi = {θ ∈RK s.t. θi = µ∗and θj ≤µ∗−ε for all j ̸= i}. Assuming a uniform prior over the index of the best arm, and a prior λ over the mean of a suboptimal arm one obtains by Bayes rule that the probability density function of the posterior is given by: dπt(θ) ∝exp −1 2 K X j=1 Tj(t−1) X s=1 (Xj,s −θj)2 K Y j=1,j̸=i∗(θ) dλ(θj). Now remark that with Thompson Sampling arm i is played at time t if and only if θ(t) ∈Θi. In other words It is played at random from probability pt where pt(i) = πt(Θi) ∝ exp −1 2 Ti(t−1) X s=1 (Xi,s −µ∗)2 Y j̸=i Z µ∗−ε −∞ exp −1 2 Tj(t−1) X s=1 (Xj,s −v)2 dλ(v) ∝ exp −1 2 PTi(t−1) s=1 (Xi,s −µ∗)2 R µ∗−ε −∞ exp −1 2 PTi(t−1) s=1 (Xi,s −v)2 dλ(v) . Taking inspiration from the above calculation we consider the following policy, where λ is the Lebesgue measure and we assume a slightly larger value for the variance (this is necessary for the proof). For rounds t ∈[K], select arm It = t. For each round t = K + 1, K + 2, . . . play It at random from pt where pt(i) = c exp −1 3 PTi(t−1) s=1 (Xi,s −µ∗)2 R µ∗−ε −∞ exp −1 3 PTi(t−1) s=1 (Xi,s −v)2 dv , and c > 0 is such that PK i=1 pt(i) = 1. 
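The denominator of p_t(i) in the policy above is a Gaussian integral over (−∞, μ* − ε], so it has a closed form in terms of the standard normal CDF: with s samples of empirical mean x̄ and sum of squares SS = Σ_s (x_s − x̄)², one has Σ_s (x_s − v)² = s(v − x̄)² + SS, and the integral equals exp(−SS/3) · σ√(2π) · Φ((μ* − ε − x̄)/σ) with σ = √(3/(2s)). A hypothetical log-space implementation (helper names are ours):

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bpr_probs(samples, mu_star, eps):
    """Selection probabilities of the Figure-3 policy, with the Gaussian
    denominator integral evaluated in closed form."""
    logw = []
    for xs in samples:
        s = len(xs)
        xbar = sum(xs) / s
        ss = sum((x - xbar) ** 2 for x in xs)
        log_num = -sum((x - mu_star) ** 2 for x in xs) / 3.0
        sigma = math.sqrt(1.5 / s)
        log_den = (-ss / 3.0
                   + math.log(sigma * math.sqrt(2.0 * math.pi))
                   + math.log(norm_cdf((mu_star - eps - xbar) / sigma)))
        logw.append(log_num - log_den)
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    tot = sum(w)
    return [v / tot for v in w]

p = bpr_probs([[0.1, -0.1], [-0.6, -0.4], [-0.5]], mu_star=0.0, eps=0.5)
```

As expected, the arm whose samples concentrate near μ* receives the largest selection probability.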
Figure 3: Policy inspired by Thompson Sampling for the BPR setting. The following theorem shows that this policy attains the best known performance for the BPR setting, shaving off a log-log term in the regret bound of the BPR policy. Theorem 3 The policy of Figure 3 has regret bounded as Rn ≤P i:∆i>0 ∆i + 80+log(∆i/ε) ∆i , uniformly in n. The proof of this result is fairly technical and it is deferred to the supplementary material. 8 References S. Agrawal and N. Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory (COLT), 2012a. S. Agrawal and N. Goyal. Further optimal regret bounds for thompson sampling, 2012b. arXiv:1209.3353. J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009. J.-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2635–2686, 2010. P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning Journal, 47(2-3):235–256, 2002. S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012. S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory (ALT), 2009. S. Bubeck, V. Perchet, and P. Rigollet. Bounded regret in stochastic multi-armed bandits. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013. O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems (NIPS), 2011. J.C. Gittins. Bandit processes and dynamic allocation indices. Journal Royal Statistical Society Series B, 14:148–167, 1979. E. Kaufmann, N. Korda, and R. Munos. 
Thompson sampling: an asymptotically optimal finite-time analysis. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory (ALT), 2012. H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952. D. Russo and B. Van Roy. Learning to optimize via posterior sampling, 2013. arXiv:1301.2609. W. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.

Geometric optimisation on positive definite matrices with application to elliptically contoured distributions Suvrit Sra Max Planck Institute for Intelligent Systems Tübingen, Germany Reshad Hosseini School of ECE, College of Engineering University of Tehran, Tehran, Iran Abstract Hermitian positive definite (hpd) matrices recur throughout machine learning, statistics, and optimisation. This paper develops (conic) geometric optimisation on the cone of hpd matrices, which allows us to globally optimise a large class of nonconvex functions of hpd matrices. Specifically, we first use the Riemannian manifold structure of the hpd cone to study functions that are nonconvex in the Euclidean sense but are geodesically convex (g-convex), hence globally optimisable. We then go beyond g-convexity and exploit the conic geometry of hpd matrices to identify another class of functions that remain amenable to global optimisation without requiring g-convexity. We present key results that help recognise g-convexity and also the additional structure alluded to above. We illustrate our ideas by applying them to likelihood maximisation for a broad family of elliptically contoured distributions: for this maximisation, we derive novel, parameter-free fixed-point algorithms. To our knowledge, ours are the most general results on geometric optimisation of hpd matrices known so far. Experiments show the advantages of using our fixed-point algorithms. 1 Introduction The geometry of Hermitian positive definite (hpd) matrices is remarkably rich and forms a foundational pillar of modern convex optimisation [21] and of the rapidly evolving area of convex algebraic geometry [4]. The geometry exhibited by hpd matrices, however, goes beyond what is typically exploited in these two areas. In particular, hpd matrices form a convex cone which is also a differentiable Riemannian manifold and a CAT(0) space (i.e., a metric space of nonpositive curvature [7]).
This rich structure enables “geometric optimisation” with hpd matrices, which allows solving many problems that are nonconvex in the Euclidean sense but convex in the manifold sense (see §2 or [29]), or have enough metric structure (see §3) to permit efficient optimisation. This paper develops (conic) geometric optimisation1 (GO) for hpd matrices. We present key results that help recognise geodesic convexity (g-convexity); we also present sufficient conditions that put a class of even non g-convex functions within the grasp of GO. To our knowledge, ours are the most general results on geometric optimisation with hpd matrices known so far. Motivation for GO. We begin by noting that the widely studied class of geometric programs is ultimately nothing but the 1D version of GO on hpd matrices. Given that geometric programming has enjoyed great success in numerous applications—see e.g., the survey of Boyd et al. [6]—we hope GO also gains broad applicability. For this paper, GO arises naturally while performing maximum likelihood parameter estimation for a rich class of elliptically contoured distributions 1To our knowledge the name “geometric optimisation” has not been previously attached to hpd matrix optimisation, perhaps because so far only scattered few examples were known. Our theorems provide a starting point for recognising and constructing numerous problems amenable to geometric optimisation. 1 (ECDs) [8, 13, 20]. Perhaps the best known GO problem is the task of computing the Karcher / Fr´echet-mean of hpd matrices: a topic that has attracted great attention within matrix theory [2, 3, 27], computer vision [10], radar imaging [22; Part II], and medical imaging [11, 31]—we refer the reader to the recent book [22] for additional applications, references, and details. Another GO problem arises as a subroutine in nearest neighbour search over hpd matrices [12]. 
Several other areas involve GO problems: statistics (covariance shrinkage) [9], nonlinear matrix equations [17], Markov decision processes and the wider encompassing area of nonlinear Perron-Frobenius theory [18]. Motivating application. We use ECDs as a platform for illustrating our ideas for two reasons: (i) ECDs are important in a variety of settings (see the recent survey [23]); and (ii) they offer an instructive setup for presenting key ideas from the world of geometric optimisation. Let us therefore begin by recalling some basics. An ECD with density on Rd takes the form 2 ∀x ∈Rd, Eϕ(x; S) ∝det(S)−1/2ϕ(xT S−1x), (1) where S ∈Pd (i.e., the set of d × d symmetric positive definite matrices) is the scatter matrix while ϕ : R →R++ is positive density generating function (dgf). If ECDs have finite covariance matrix, then the scatter matrix is proportional to the covariance matrix [8]. Example 1. With ϕ(t) = e−t 2 , density (1) reduces to the multivariate normal density. For the choice ϕ(t) = tα−d/2 exp −(t/b)β , (2) where α, b and β are fixed positive numbers, density (1) yields the rich class called Kotz-type distributions that are known to have powerful modelling abilities [15; §3.2]; they include as special cases multivariate power exponentials, elliptical gamma, multivariate W-distributions, for instance. MLE. Let (x1, . . . , xn) be i.i.d. samples from an ECD Eϕ(S). Up to constants, the log-likelihood is L(S) = −1 2n log det S + Xn i=1 log ϕ(xT i S−1xi). (3) Equivalently, we may consider the minimisation problem minS≻0 Φ(S) := 1 2n log det(S) − X i log ϕ(xT i S−1xi). (4) Problem (4) is in general difficult as Φ may be nonconvex and may have multiple local minima. Since statistical estimation theory relies on having access to global optima, it is important to be able to solve (4) to global optimality. These difficulties notwithstanding, using GO ideas, we identify a rich class of ECDs for which we can indeed solve (4) optimally. 
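As an illustration, the objective (4) specialises under the Kotz dgf (2) to the function K(S) stated later as Eq. (12). A small sketch (variable names are ours), sanity-checked against the Gaussian special case α = d/2, b = 2, β = 1, for which the minimiser is the sample covariance:

```python
import numpy as np

def kotz_neg_loglik(X, S, alpha, b, beta):
    """Kotz-type negative log-likelihood of Eq. (12):
    (n/2) log det S + (d/2 - alpha) sum_i log t_i + sum_i (t_i / b)^beta,
    with t_i = x_i' S^{-1} x_i."""
    n, d = X.shape
    t = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
    sign, logdet = np.linalg.slogdet(S)
    assert sign > 0 and np.all(t > 0)  # S must be positive definite
    return (0.5 * n * logdet
            + (0.5 * d - alpha) * np.sum(np.log(t))
            + np.sum((t / b) ** beta))

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
S_hat = X.T @ X / 200  # Gaussian MLE; optimal when alpha = d/2, b = 2, beta = 1
k0 = kotz_neg_loglik(X, S_hat, alpha=1.5, b=2.0, beta=1.0)
```

With α = d/2 the log t_i term vanishes and K(S) reduces to the usual Gaussian objective, so scaling S_hat up or down can only increase the value.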
Some examples already exist in the literature [16, 23, 30]; this paper develops techniques that are strictly more general and subsume previous examples, while advancing the broader idea of geometric optimisation. We illustrate our ideas by studying the following two main classes of dgfs in (1): (i) Geodesically Convex (GC): This class contains functions for which the negative log-likelihood Φ(S) is g-convex, i.e., convex along geodesics in the manifold of hpd matrices. Some members of this class have been previously studied (though sometimes without recognising or directly exploiting the g-convexity); (ii) Log-Nonexpansive (LN): This is a new class that we introduce in this paper. It exploits the “non-positive curvature” property of the manifold of hpd matrices. There is a third important class: LC, the class of log-convex dgfs ϕ. Though, since (4) deals with −log ϕ, the optimisation problem is still nonconvex. We describe class LC only in [28] primarily due to paucity of space and also because the first two classes contain our most novel results. These classes of dgfs are neither mutually disjoint nor proper subsets of each other. Each captures unique analytic or geometric structure crucial to efficient optimisation. Class GC characterises the “hidden” convexity found in several instances of (4), while LN is a novel class of models that might not have this hidden convexity, but nevertheless admit global optimisation. Contributions. The key contributions of this paper are the following: – New results that characterise and help recognise g-convexity (Thm. 1, Cor. 2, Cor. 3, Thm. 4). Though initially motivated by ECDs, our matrix-theoretic proofs are more generally applicable and should be of wider interest. All technical proofs, and several additional results that help recognise g-convexity are in the longer version of this paper [28]. 2For simplicity we describe only mean zero families; the extension to the general case is trivial. 
– New fixed-point theory for solving GO problems, including some that might even lack g-convexity. Here too, our results go beyond ECDs—in fact, they broaden the class of problems that admit fixed-point algorithms in the metric space (Pd, δT )—Thms. 11 and 14 are the key results here. Our results on geodesic convexity subsume the more specialised results reported recently in [29]. We believe our matrix-theoretic proofs, though requiring slightly more advanced machinery, are ultimately simpler and more widely applicable. Our fixed-point theory offers a unified framework that not only captures the well-known M-estimators of [16], but also applies to a larger class of problems than was possible using previous methods. Our experiments illustrate the computational benefits of one of the resulting algorithms. 2 Geometric optimisation with geodesic convexity: class GC Geodesic convexity (g-convexity) is a classical concept in mathematics and is used extensively in the study of Hadamard manifolds and metric spaces of nonpositive curvature [7, 24] (i.e., spaces whose distance function is g-convex). This concept has been previously studied in nonlinear optimisation [25], but its full importance and applicability in statistical applications and optimisation is only recently emerging [29, 30]. We begin our presentation by recalling some definitions—please see [7, 24] for extensive details. Definition 2 (gc set). Let M denote a d-dimensional connected C2 Riemannian manifold. A set X ⊂ M is called geodesically convex if any two points of X are joined by a geodesic lying in X. That is, if x, y ∈X, then there exists a path γ : [0, 1] →X such that γ(0) = x and γ(1) = y. Definition 3 (gc function). Let X ⊂M be a gc set. A function φ : X →R is geodesically convex if, for any x, y ∈X and a unit-speed geodesic γ : [0, 1] →X with γ(0) = x and γ(1) = y, we have φ(γ(t)) ≤(1 −t)φ(γ(0)) + tφ(γ(1)) = (1 −t)φ(x) + tφ(y).
(5) The power of gc functions in the context of solving (4) comes into play because the set Pd (the convex cone of positive definite matrices) is also a differentiable Riemannian manifold where geodesics between points can be computed efficiently. Indeed, the tangent space to Pd at any point can be identified with the set of Hermitian matrices, and the inner product on this space leads to a Riemannian metric on Pd. At any point A ∈Pd, this metric is given by the differential form ds = ∥A−1/2dAA−1/2∥F; also, between A, B ∈Pd there is a unique geodesic [1; Thm. 6.1.6] A#tB := γ(t) = A1/2(A−1/2BA−1/2)tA1/2, t ∈[0, 1]. (6) The midpoint of this path, namely A#1/2B is called the matrix geometric mean, which is an object of great interest in numerous areas [1–3, 10, 22]. As per convention, we denote it simply by A#B. Example 4. Let z ∈Cd be any vector. The function φ(X) := z∗X−1z is gc. Proof. Since φ is continuous, it suffices to verify midpoint convexity: φ(X#Y ) ≤1 2φ(X) + 1 2φ(Y ), for X, Y ∈Pd. Since (X#Y )−1 = X−1#Y −1 and X−1#Y −1 ⪯X−1+Y −1 2 ([1; 4.16]), it follows that φ(X#Y ) = z∗(X#Y )−1z ≤1 2(z∗X−1z + z∗Y −1z) = 1 2(φ(X) + φ(Y )). We are ready to state our first main theorem, which vastly generalises the above example and provides a foundational tool for recognising and constructing gc functions. Theorem 1. Let Π : Pd →Pk be a strictly positive linear map. Let A, B ∈Pd we have Π(A#tB) ⪯Π(A)#tΠ(B), t ∈[0, 1]. (7) Proof. Although positive linear maps are well-studied objects (see e.g., [1; Ch. 4]), we did not find an explicit proof of (7) in the literature, so we provide a proof in the longer version [28]. A useful corollary of Thm. 1 is the following (notice this corollary subsumes Example 4). Corollary 2. For positive definite matrices A, B ∈Pd and matrices 0 ̸= X ∈Cd×k we have tr X∗(A#tB)X ≤[tr X∗AX]1−t[tr X∗BX]t, t ∈(0, 1). (8) 3 Proof. Use the map A 7→tr X∗AX in Thm. 1. Note: Cor. 
2 actually constructs a log-g-convex function, from which g-convexity is immediate. A notable corollary to Thm. 1 that subsumes a nontrivial result [14; Lem. 3.2] is mentioned below. Corollary 3. Let Xi ∈Cd×k with k ≤d such that rank([Xi]m i=1) = k. Then the function φ(S) := log det(P i X∗ i SXi) is gc on Pd. Proof. By our assumption on the Xi, the map Π = S 7→P i X∗ i SXi is strictly positive. Thus, from Thm 1 it follows that Π(S#R) = P i X∗ i (S#R)Xi ⪯Π(S)#Π(R). Since log det is monotonic, and determinant is multiplicative, the previous inequality yields φ(S#R) = log det Π(S#R) ≤log det(Π(S)) + log det(Π(R)) = 1 2φ(S) + 1 2φ(R). We are now ready to state our second main theorem. Theorem 4. Let h : Pk →R be gc function that is nondecreasing in L¨owner order. Let r ∈{±1}, and let Π : Pd →Pk be a strictly positive linear map. Then, φ(S) = h(Π(Sr)) ± log det(S) is gc. Proof. Since φ is continuous, it suffices to prove midpoint geodesic convexity. Since r ∈{±1}, (S#R)r = Sr#Rr; thus, from Thm. 1 and since h is matrix nondecreasing, it follows that h(Π(S#R)r) = h(Π(Sr#Rr)) ≤h(Π(Sr)#Π(Rr)). (9) Since h is also gc, inequality (9) further yields h(Π(Sr)#Π(Rr)) ≤1 2h(Π(Sr)) + 1 2h(Π(Rr)). (10) Since ± log det(S#R) = ± 1 2 log det(S) + log det(R) , on combining with (10) we obtain φ(S#R) ≤1 2φ(S) + 1 2φ(R), as desired. Notice also that if h is strictly gc, then φ(S) is also strictly gc. Finally, we state a corollary of Thm. 4 helpful towards recognising geodesic convexity of ECDs. We mention here that a result equivalent to Corr. 5 was recently also discovered in [30]. Thm. 4 is more general and uses a completely different argument founded on the matrix-theoretic results; our techniques may also be of wider independent interest. Corollary 5. Let h : R++ →R be nondecreasing and gc (i.e., h(x1−λyλ) ≤(1 −λ)h(x) + λh(y)). Then, for r ∈{±1}, φ : Pd →R : S 7→P i h(xT i Srxi) ± log det(S) is gc. 
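The geodesic (6) and the midpoint g-convexity of Example 4 are easy to verify numerically; a sketch using an eigendecomposition-based matrix power (helper names are ours):

```python
import numpy as np

def pd_power(A, t):
    # A^t for a symmetric positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * w ** t) @ V.T

def geodesic(A, B, t):
    """Point gamma(t) = A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
    on the unique geodesic (6) joining A and B in P_d."""
    Ah = pd_power(A, 0.5)
    Aih = pd_power(A, -0.5)
    return Ah @ pd_power(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(1)
M1, M2 = rng.standard_normal((2, 4, 4))
A, B = M1 @ M1.T + np.eye(4), M2 @ M2.T + np.eye(4)
z = rng.standard_normal(4)
phi = lambda X: float(z @ np.linalg.solve(X, z))  # phi(X) = z* X^{-1} z
mid = geodesic(A, B, 0.5)  # the matrix geometric mean A # B
```

The check φ(A#B) ≤ ½(φ(A) + φ(B)) holds for any such A, B, z, exactly as Example 4 asserts.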
2.1 Application to ECDs in class GC We begin with a straightforward corollary of the above discussion. Corollary 6. For the following distributions the negative log-likelihood (4) is gc: (i) Kotz with α ≤d 2 (its special cases include Gaussian, multivariate power exponential, multivariate W-distribution with shape parameter smaller than one, elliptical gamma with shape parameter ν ≤d 2); (ii) Multivariate-t; (iii) Multivariate Pearson type II with positive shape parameter; (iv) Elliptical multivariate logistic distribution. 3 If the log-likelihood is strictly gc then (4) cannot have multiple solutions. Moreover, for any local optimisation method that computes a solution to (4), geodesic convexity ensures that this solution is globally optimal. Therefore, the key question to answer is: (i) does (4) have a solution? Note that answering this question is nontrivial even in special cases [16, 30]. We provide below a fairly general result that helps establish existence. 3The dgfs of different distributions are brought here for the reader’s convenience. Multivariate power exponential: φ(t) = exp(−tν/b), ν > 0; Multivariate W-distribution: φ(t) = tν−1 exp(−tν/b), ν > 0; Elliptical gamma: φ(t) = tν−d/2 exp(−t/b), ν > 0; Multivariate t: φ(t) = (1 + t/ν)−(ν+d)/2, ν > 0; Multivariate Pearson type II: φ(t) = (1 −t)ν, ν > −1, 0 ≤t ≤1; Elliptical multivariate logistic: φ(t) = exp(− √ t)/(1 + exp(− √ t))2. 4 Theorem 7. If Φ(S) satisfies the following properties: (i) −log ϕ(t) is lower semi-continuous (lsc) for t > 0, and (ii) Φ(S) →∞as ∥S∥→∞or ∥S−1∥→∞, then Φ(S) attains its minimum. Proof. Consider the metric space (Pd, dR), where dR is the Riemannian distance, dR(A, B) = ∥log(A−1/2BA−1/2)∥F A, B ∈Pd. (11) If Φ(S) →∞as ∥S∥→∞or as ∥S−1∥→∞, then Φ(S) has bounded lower-level sets in (Pd, dR). It is a well-known result in variational analysis that a function that has bounded lower-level sets in a metric space and is lsc, then the function attains its minimum [26]. 
Since −log ϕ(t) is lsc and log det(S−1) is continuous, Φ(S) is lsc on (Pd, dR). Therefore it attains its minimum. A key consequence of Thm. 7 is its ability to show existence of solutions to (4) for a variety of different ECDs. Let us look at an application to Kotz-type distributions below. For these distributions, the function Φ(S) assumes the form K(S) = n 2 log det(S) + ( d 2 −α) Xn i=1 log xT i S−1xi + Xn i=1 xT i S−1xi b β . (12) Lemma 8 shows that K(S) →∞whenever ∥S−1∥→∞or ∥S∥→∞. Lemma 8. Let the data X = {x1, . . . , xn} span the whole space and satisfy for α < d 2 the condition |X ∩L| |X| < dL d −2α, (13) where L is an arbitrary subspace with dimension dL < d and |X ∩L| is the number of datapoints that lie in the subspace L. If ∥S−1∥→∞or ∥S∥→∞, then K(S) →∞. Proof. If ∥S−1∥→∞and since the data span the whole space, it is possible to find a datum x1 such that t1 = xT 1 S−1x1 →∞. Since lim t→∞c1 log(t) + tc2 + c3 →∞ for constants c1,c3 and c2 > 0, it follows that K(S) →∞whenever ∥S−1∥→∞. If ∥S∥→∞and ∥S−1∥is bounded, then the third term in expression of K(S) is bounded. Assume that dL is the number of eigenvalues of S that go to ∞and |X ∩L| is the number of data that lie in the subspace span by these eigenvalues. Then in the limit when eigenvalues of S go to ∞, K(S) converges to the following limit lim λ→∞ n 2 dL log λ + ( d 2 −α)|X ∩L| log λ−1 + c Apparently if n 2 dL −( d 2 −α)|X ∩L| > 0, then K(S) →∞and the proof is complete. It is important to note that overlap condition (13) can be fulfilled easily by assuming that the number of data is larger than their dimensionality and that they are noisy. Using Lemma 8, we can invoke Thm. 7 to immediately state the following result. Theorem 9 (Existence Kotz-distr.). If the data samples satisfy condition (13), then the Kotz negative log-likelihood has a minimiser. As previously mentioned, once existence is ensured, one may use any local optimisation method to minimise (4) to obtain the desired mle. 
This brings us to the next question. What if Φ(S) is neither convex nor g-convex? The ideas introduced in Sec. 3 below offer a partial answer. 3 Geometric optimisation for class LN Without convexity or g-convexity, in general at best we might obtain local minima. However, as alluded to previously, the set Pd of hpd matrices possesses remarkable geometric structure that allows us to extend global optimisation to a rich class beyond just gc functions. To our knowledge, this class of ECDs was beyond the grasp of previous methods [16, 29, 30]. We begin with a key definition. Definition 5 (Log-nonexpansive). Let f : R++ → (0, ∞). We say f is log-nonexpansive (LN) on a compact interval I ⊂ R+ if there exists a fixed constant 0 ≤ q ≤ 1 such that |log f(t) − log f(s)| ≤ q|log t − log s| for all s, t ∈ I. (14) If q < 1, we say f is log-contractive. Finally, if |log f(t) − log f(s)| < |log t − log s| for all s ≠ t, we say f is weakly log-contractive (wlc); an important point to note here is the absence of a fixed q. Next we study existence, uniqueness, and computation of solutions to (4). To that end, momentarily ignore the constraint S ≻ 0, to see that the first-order necessary optimality condition for (4) is ∂Φ(S)/∂S = 0 ⟺ (n/2) S^{-1} + Σ_{i=1}^n [φ′(x_i^T S^{-1} x_i)/φ(x_i^T S^{-1} x_i)] S^{-1} x_i x_i^T S^{-1} = 0. (15) Defining h ≡ −φ′/φ, condition (15) may be rewritten more compactly as S = (2/n) Σ_{i=1}^n x_i h(x_i^T S^{-1} x_i) x_i^T = (2/n) X h(D_S) X^T, (16) where D_S := Diag(x_i^T S^{-1} x_i) and X = [x_1, . . . , x_n]. If (16) has a positive definite solution, then it is a candidate mle; if it is unique, then it is the desired solution (observe that for a Gaussian, h(t) ≡ 1/2, and as expected (16) reduces to the sample covariance matrix). But how should we solve (16)? This question is in general highly nontrivial to answer because (16) is a difficult nonlinear equation in matrix variables. This is the point where the class LN introduced above comes into play.
More specifically, we solve (16) via a fixed-point iteration. Introduce therefore the nonlinear map G : Pd →Pd that maps S to the right hand side of (16); then, starting with a feasible S0 ≻0, simply perform the iteration Sk+1 ←G(Sk), k = 0, 1, . . . , (17) which is shown more explicitly as Alg. 1 below. Algorithm 1 Fixed-point iteration for mle Input: Observations x1, . . . , xn; function h Initialize: k ←0; S0 ←In while ¬ converged do Sk+1 ←2 n Pn i=1 xih(xT i S−1 k xi)xT i end while return Sk The most interesting twist to analysing iteration (17) is that the map G is usually not contractive with respect to the Euclidean metric. But the metric geometry of Pd alluded to previously suggests that it might be better to analyse the iteration using a non-Euclidean metric. Unfortunately, the Riemannnian distance (11) on Pd, while canonical, also turns out to be unsuitable. This impasse is broken by selecting a more suitable “hyperbolic distance” that captures the crucial non-Euclidean geometry of Pd, while still respecting its convex conical structure. Such a suitable choice is provided by the Thompson metric—an object of great interest in nonlinear matrix equations [17]—which is known to possess geometric properties suitable for analysing convex cones, of which Pd is a shining example [18]. On Pd, the Thompson metric is given by δT (X, Y ) := ∥log(Y −1/2XY −1/2)∥, (18) where ∥·∥is the usual operator 2-norm, and ‘log’ is the matrix logarithm. The core properties of (18) that prove useful for analysis fixed point iterations are listed below—for proofs please see [17, 19]. Proposition 10. Unless noted otherwise, all matrices are assumed to be hpd.. δT (X−1, Y −1) = δT (X, Y ) (19a) δT (B∗XB, B∗Y B) = δT (X, Y ), B ∈GLn(C) (19b) δT (Xt, Y t) ≤ |t|δT (X, Y ), for t ∈[−1, 1] (19c) δT X i wiXi, X i wiYi ≤ max 1≤i≤m δT (Xi, Yi), wi ≥0, w ̸= 0 (19d) δT (X + A, Y + A) ≤ α α+β δT (X, Y ), A ⪰0, (19e) where α = max{∥X∥, ∥Y ∥} and β = λmin(A). 
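Alg. 1 is a few lines of NumPy. The sketch below is ours, not the authors' code; the Kotz weight function assumes the dgf φ(t) = t^(α−d/2) exp(−(t/b)^β), so that h = −φ′/φ, consistent with Eq. (12):

```python
import numpy as np

def fixed_point_mle(X, h, tol=1e-10, max_iter=500):
    """Alg. 1: iterate S <- (2/n) * sum_i x_i h(x_i^T S^{-1} x_i) x_i^T."""
    n, d = X.shape
    S = np.eye(d)
    for _ in range(max_iter):
        t = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
        S_new = (2.0 / n) * (X.T * h(t)) @ X      # sum_i h(t_i) x_i x_i^T
        if np.linalg.norm(S_new - S) <= tol * np.linalg.norm(S):
            return S_new
        S = S_new
    return S

def kotz_h(alpha, b, beta, d):
    """h = -phi'/phi for the assumed Kotz dgf t^(alpha-d/2) exp(-(t/b)^beta)."""
    return lambda t: (d / 2.0 - alpha) / t + (beta / b) * (t / b) ** (beta - 1.0)
```

With the Gaussian choice h ≡ 1/2, the first iterate is already the sample second-moment matrix, matching the remark after Eq. (16).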
We need one more crucial result (see [28] for a proof), which we state below. This theorem should be of wider interest as it enlarges the class of maps that one can study using the Thompson metric. Theorem 11. Let X ∈ C^{d×p}, where p ≤ d, and rank(X) = p. Let A, B ∈ Pd. Then, δT(X*AX, X*BX) ≤ δT(A, B). (20) We now show how to use Prop. 10 and Thm. 11 to analyse contractions on Pd. Proposition 12. Let h be an LN function. Then, the map G in (17) is nonexpansive in δT. Moreover, if h is wlc, then G is weakly contractive in δT. Proof. Let S, R ≻ 0 be arbitrary. Then, we have the following chain of inequalities: δT(G(S), G(R)) = δT((2/n) X h(D_S) X^T, (2/n) X h(D_R) X^T) ≤ δT(h(D_S), h(D_R)) ≤ max_{1≤i≤n} δT(h(x_i^T S^{-1} x_i), h(x_i^T R^{-1} x_i)) ≤ max_{1≤i≤n} δT(x_i^T S^{-1} x_i, x_i^T R^{-1} x_i) ≤ δT(S^{-1}, R^{-1}) = δT(S, R), where the first inequality follows from (19b) and Thm. 11; the second inequality follows since h(D_S) and h(D_R) are diagonal; the third follows from (19d); the fourth from another application of Thm. 11; while the final equality is via (19a). This proves nonexpansivity. If in addition h is weakly log-contractive and S ≠ R, then the second inequality above is strict, that is, δT(G(S), G(R)) < δT(S, R) for all S ≠ R. Consequently, we obtain the following main convergence theorem for (17). Theorem 13. If G is weakly contractive and (16) has a solution, then this solution is unique and iteration (17) converges to it. When h is merely LN (not wlc), it is still possible to show uniqueness of (16) up to a constant. Our proof depends on the following new property of δT, which again should be of broader interest. Theorem 14. Let G be nonexpansive in the δT metric, that is, δT(G(X), G(Y)) ≤ δT(X, Y), and let F be weakly contractive, that is, δT(F(X), F(Y)) < δT(X, Y); then G + F is also weakly contractive. Observe that the property proved in Thm. 14 is a striking feature of the nonpositive curvature of Pd; clearly, such a result does not usually hold in Banach spaces. As a consequence, Thm.
14 helps establish the following “robustness” result for iteration (17). Theorem 15. If h is LN, and S1 ≠ S2 are solutions to the nonlinear equation (16), then iteration (17) converges to a solution, and S1 ∝ S2. As an illustrative example of these results, consider the problem of minimising the negative log-likelihood of a Kotz-type distribution. The convergence of the iteration (17) can be obtained from Thm. 15. But for the Kotz distribution we can show a stronger result, which helps obtain geometric convergence rates for the fixed-point iteration. Lemma 16. If c > 0 and −1 < b < 1, the function h(x) = x + cx^b is weakly log-contractive. According to this lemma, the function h in iteration (17) for Kotz-type distributions with 0 < β < 2 and α < d/2 is wlc. Based on Thm. 9, K(S) has a minimum. Therefore, we have the following. Corollary 17. The iteration (17) for the Kotz-type distribution with 0 < β < 2 and α < d/2 converges to a unique fixed point. 4 Numerical results We briefly highlight the numerical performance of our fixed-point iteration. The key message here is that our fixed-point iterations solve nonconvex likelihood maximisation problems that involve a complicating hpd constraint. But since the fixed-point iterations always generate hpd iterates, no extra eigenvalue computation is needed, which leads to substantial computational advantages. In contrast, a nonlinear solver must perform constrained optimisation, which can be unduly expensive.
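The contraction property of Prop. 12 can be sanity-checked numerically. The sketch below is ours (data, Kotz parameters and names are illustrative): it computes the Thompson metric directly from its definition (18) and verifies that one application of the map G strictly shrinks it.

```python
import numpy as np

def thompson(X, Y):
    """delta_T(X, Y) = ||log(Y^{-1/2} X Y^{-1/2})||_2 for SPD X, Y (Eq. (18))."""
    w, V = np.linalg.eigh(Y)
    Yih = V @ np.diag(w ** -0.5) @ V.T            # Y^{-1/2}
    return np.max(np.abs(np.log(np.linalg.eigvalsh(Yih @ X @ Yih))))

def G(S, X, h):
    """The map S -> (2/n) X h(D_S) X^T of Eq. (16)."""
    n = X.shape[0]
    t = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
    return (2.0 / n) * (X.T * h(t)) @ X

# Kotz-type h with d/2 - alpha = 1/2, beta = 1/2, b = 1 (illustrative values);
# its log-derivative lies strictly inside (-1, 0), so h is wlc
h = lambda t: 0.5 / t + 0.5 * t ** -0.5

rng = np.random.default_rng(0)
Xd = rng.standard_normal((300, 3))
A, B = rng.standard_normal((2, 3, 3))
S1 = A @ A.T + 3 * np.eye(3)
S2 = B @ B.T + 3 * np.eye(3)
```

For this wlc choice of h, δT(G(S1), G(S2)) comes out strictly smaller than δT(S1, S2), as Prop. 12 predicts.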
Figure 1: Running-time comparison of the fixed-point iteration with MATLAB's fmincon for maximising a Kotz likelihood (see text for details); each panel plots log(Φ(S) − Φ(Smin)) against log running time (seconds). The plots show (from left to right) running times for estimating S ∈ Pd, for d ∈ {4, 16, 32}. Larger d was not tried because fmincon does not scale. Figure 2: In the Kotz-type distribution, when β gets close to zero or 2, the contraction factor becomes smaller, which can impact the convergence rate. This figure shows the running-time variation for Kotz-type distributions with fixed d = 16 and α = 2, for different values of β: β = 0.1, β = 1, β = 1.7. We show two short experiments (Figs. 1 and 2) showing scalability of the fixed-point iteration with increasing dimensionality of the input matrix, and for varying β parameter of the Kotz distribution; this parameter influences the convergence rate of the fixed-point iteration. For three different dimensions d = 4, d = 16, and d = 32, we sample 10,000 datapoints from a Kotz-type distribution with β = 0.5, α = 2, and a random covariance matrix. The convergence speed is shown as blue curves in Figure 1. For comparison, the results of constrained optimisation (red curves) using MATLAB's optimisation toolbox are shown.
The fixed-point algorithm clearly outperforms MATLAB's toolbox, especially as dimensionality increases. These results indicate that the fixed-point approach can be very competitive. Also note that the problems are nonconvex with an open constraint set; this precludes direct application of simple approaches such as gradient projection (since projection requires closed sets; moreover, projection also requires eigenvector decompositions). Additional comparisons in the longer version [28] show that the fixed-point iteration also significantly outperforms sophisticated manifold optimisation techniques [5], especially for increasing data dimensionality. 5 Conclusion We developed geometric optimisation for minimising potentially nonconvex functions over the set of positive definite matrices. We showed key results that help recognise geodesic convexity; we also introduced the class of log-nonexpansive functions, which contains functions that need not be g-convex but can still be optimised efficiently. Key to our ideas here was a careful construction of fixed-point iterations in a suitably chosen metric space. We motivated, developed, and applied our results to the task of maximum likelihood estimation for various elliptically contoured distributions, covering classes and examples substantially beyond what had been known so far in the literature. We believe that the general geometric optimisation techniques that we developed in this paper will prove to be of wider use and interest beyond our motivating application. Developing a more extensive geometric optimisation numerical package is part of our ongoing project. References [1] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007. [2] R. Bhatia and R. L. Karandikar. The matrix geometric mean. Technical Report isid/ms/2-11/02, Indian Statistical Institute, 2011. [3] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices.
Linear Algebra and its Applications, 438(4):1700 – 1710, 2013. 8 [4] G. Blekherman and P. A. Parrilo, editors. Semidefinite Optimization and Convex Algebraic Geometry. SIAM, 2013. [5] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt: a matlab toolbox for optimization on manifolds. arXiv Preprint 1308.5200, 2013. [6] S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi. A Tutorial on Geometric Programming. Optimization and Engineering, 8(1):67–127, 2007. [7] M. R. Bridson and A. Haeflinger. Metric Spaces of Non-Positive Curvature. Springer, 1999. [8] S. Cambanis, S. Huang, and G. Simons. On the theory of elliptically contoured distributions. Journal of Multivariate Analysis, 11(3):368–385, 1981. [9] Y. Chen, A. Wiesel, and A. Hero. Robust shrinkage estimation of high-dimensional covariance matrices. IEEE Transactions on Signal Processing, 59(9):4097–4107, 2011. [10] G. Cheng and B. Vemuri. A novel dynamic system in the space of spd matrices with applications to appearance tracking. SIAM Journal on Imaging Sciences, 6(1):592–615, 2013. [11] G. Cheng, H. Salehian, and B. C. Vemuri. Efficient Recursive Algorithms for Computing the Mean Diffusion Tensor and Applications to DTI Segmentation. In European Conference on Computer Vision (ECCV), volume 7, pages 390–401, 2012. [12] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Jensen-Bregman LogDet Divergence for Efficient Similarity Computations on Positive Definite Tensors. IEEE TPAMI, 2012. [13] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman and Hall/CRC, 1999. [14] L. Gurvits and A. Samorodnitsky. A deterministic algorithm for approximating mixed discriminant and mixed volume, and a combinatorial corollary. Disc. Comp. Geom., 27(4), 2002. [15] S. K. K.-T. Fang and K. W. Ng. Symmetric multivariate and related distributions. Chapman & Hall, 1990. [16] J. T. Kent and D. E. Tyler. Redescending M-estimates of multivariate location and scatter. 
The Annals of Statistics, 19(4):2102–2119, Dec. 1991. [17] H. Lee and Y. Lim. Invariant metrics, contractions and nonlinear matrix equations. Nonlinearity, 21:857–878, 2008. [18] B. Lemmens and R. Nussbaum. Nonlinear Perron-Frobenius Theory. Cambridge Univ. Press, 2012. [19] Y. Lim and M. Pálfia. Matrix power means and the Karcher mean. J. Functional Analysis, 262:1498–1514, 2012. [20] R. J. Muirhead. Aspects of multivariate statistical theory. John Wiley, 1982. [21] Y. Nesterov and A. S. Nemirovskii. Interior-point polynomial algorithms in convex programming. SIAM, 1994. [22] F. Nielsen and R. Bhatia, editors. Matrix Information Geometry. Springer, 2013. [23] E. Ollila, D. Tyler, V. Koivunen, and H. V. Poor. Complex elliptically symmetric distributions: Survey, new results and applications. IEEE Transactions on Signal Processing, 60(11):5597–5625, 2011. [24] A. Papadopoulos. Metric spaces, convexity and nonpositive curvature. Europ. Math. Soc., 2005. [25] T. Rapcsák. Geodesic convexity in nonlinear optimization. J. Optim. Theory and Appl., 69(1):169–183, 1991. [26] R. T. Rockafellar and R. J.-B. Wets. Variational analysis. Springer, 1998. [27] S. Sra. Positive Definite Matrices and the Symmetric Stein Divergence. arXiv:1110.1773, Oct. 2012. [28] S. Sra and R. Hosseini. Conic geometric optimisation on the manifold of positive definite matrices. arXiv preprint, 2013. [29] A. Wiesel. Geodesic convexity and covariance estimation. IEEE Transactions on Signal Processing, 60(12):6182–6189, 2012. [30] T. Zhang, A. Wiesel, and S. Greco. Multivariate generalized Gaussian distribution: Convexity and graphical models. arXiv preprint arXiv:1304.3206, Nov. 2013. [31] H. Zhu, H. Zhang, J. Ibrahim, and B. Peterson. Statistical analysis of diffusion tensors in diffusion-weighted magnetic resonance imaging data. Journal of the American Statistical Association, 102(480):1085–1102, 2007.
|
2013
|
76
|
5,154
|
Capacity of strong attractor patterns to model behavioural and cognitive prototypes Abbas Edalat Department of Computing Imperial College London London SW7 2RH, UK ae@ic.ac.uk Abstract We solve the mean field equations for a stochastic Hopfield network with temperature (noise) in the presence of strong, i.e., multiply stored, patterns, and use this solution to obtain the storage capacity of such a network. Our result provides for the first time a rigorous solution of the mean field equations for the standard Hopfield model and is in contrast to the mathematically unjustifiable replica technique that has been used hitherto for this derivation. We show that the critical temperature for stability of a strong pattern is equal to its degree or multiplicity, when the sum of the squares of the degrees of the patterns is negligible compared to the network size. In the case of a single strong pattern, when the ratio of the number of all stored patterns to the network size is a positive constant, we obtain the distribution of the overlaps of the patterns with the mean field and deduce that the storage capacity for retrieving a strong pattern exceeds that for retrieving a simple pattern by a multiplicative factor equal to the square of the degree of the strong pattern. This square law property provides justification for using strong patterns to model attachment types and behavioural prototypes in psychology and psychotherapy. 1 Introduction: Multiply learned patterns in Hopfield networks The Hopfield network as a model of associative memory and unsupervised learning was introduced in [23] and has been intensively studied from a wide range of viewpoints in the past thirty years. However, properties of a strong pattern, i.e., a pattern that has been multiply stored or learned in these networks, have only been examined very recently, a surprising delay given that repetition of an activity is the basis of learning by the Hebbian rule and long term potentiation.
In particular, while the storage capacity of a Hopfield network with certain correlated patterns has been tackled [13, 25], the storage capacity of a Hopfield network in the presence of strong as well as random patterns has not hitherto been addressed. The notion of a strong pattern of a Hopfield network has been proposed in [15] to model attachment types and behavioural prototypes in developmental psychology and psychotherapy. This suggestion has been motivated by reviewing the pioneering work of Bowlby [9] in attachment theory and highlighting how a number of academic biologists, psychiatrists, psychologists, sociologists and neuroscientists have consistently regarded Hopfield-like artificial neural networks as suitable tools to model cognitive and behavioural constructs as patterns that are deeply and repeatedly learned by individuals [11, 22, 24, 30, 29, 10]. A number of mathematical properties of strong patterns in Hopfield networks, which give rise to strong attractors, have been derived in [15]. These show in particular that strong attractors are strongly stable; a series of experiments have also been carried out which confirm the mathematical results and also indicate that a strong pattern stored in the network can be retrieved even in the presence of a large number of simple patterns, far exceeding the well-known maximum load parameter or storage capacity of the Hopfield network with random patterns (αc ≈ 0.138). In this paper, we consider strong patterns in the stochastic Hopfield model with temperature, which accounts for various types of noise in the network. In these networks, the updating rule is probabilistic and depends on the temperature. Since an analytical solution of such a system is not possible in general, one strives to obtain the average behaviour of the network when the input to each node, the so-called field at the node, is replaced with its mean. This is the basis of mean field theory for these networks.
Due to the close connection between the Hopfield network and the Ising model in ferromagnetism [1, 8], the mean field approach for the Hopfield network and its variations has been tackled using the replica method, starting with the pioneering work of Amit, Gutfreund and Sompolinsky [3, 2, 4, 19, 31, 1, 13]. Although this method has been widely used in the theory of spin glasses in statistical physics [26, 16] its mathematical justification has proved to be elusive as we will discuss in the next section; see for example [20, page 264], [14, page 27], and [7, page 9]. In [17] and independently in [27], an alternative technique to the replica method for solving the mean field equations has been proposed which is reproduced and characterised as heuristic in [20, section 2.5] since it relies on a number of assumptions that are not later justified and uses a number of mathematical steps that are not validated. Here, we use the basic idea of the above heuristic to develop a verifiable mathematical framework with provable results grounded on elements of probability theory, with which we assume the reader is familiar. This technique allows us to solve the mean field equations for the Hopfield network in the presence of strong patterns and use the results to study, first, the stability of these patterns in the presence of temperature (noise) and, second, the storage capacity of the network with a single strong pattern at temperature zero. We show that the critical temperature for the stability of a strong pattern is equal to its degree (i.e., its multiplicity) when the ratio of the sum of the squares of degrees of the patterns to the network size tends to zero when the latter tends to infinity. 
In the case that there is only one strong pattern present with its degree small compared to the number of patterns and the latter is a fixed multiple of the number of nodes, we find the distribution of the overlap of the mean field and the patterns when the strong pattern is being retrieved. We use these distributions to prove that the storage capacity for retrieving a strong pattern exceeds that for a simple pattern by a multiplicative factor equal to the square of the degree of the strong attractor. This result matches the finding in [15] regarding the capacity of a network to recall strong patterns as mentioned above. Our results therefore show that strong patterns are robust and persistent in the network memory as attachment types and behavioural prototypes are in the human memory system. In this paper, we will several times use Lyapunov’s theorem in probability which provides a simple sufficient condition to generalise the Central Limit theorem when we deal with independent but not necessarily identically distributed random variables. We require a general form of this theorem as follows. Let Yn = Pkn i=1 Yni, for n ∈IN, be a triangular array of random variables such that for each n, the random variables Yni, for 1 ≤i ≤kn are independent with E(Yni) = 0 and E(Y 2 ni) = σ2 ni, where E(X) stands for the expected value of the random variable X. Let s2 n = Pkn i=1 σ2 ni. We use the notation X ∼Y when the two random variables X and Y have the same distribution (for large n if either or both of them depend on n). Theorem 1.1 (Lyapunov’s theorem [6, page 368]) If for some δ > 0, we have the condition: 1 s2+δ n E(|Yn|2+δ|) →0 as n →∞ then 1 sn Yn d −→N(0, 1) as n →∞where d −→denotes convergence in distribution, and we denote by N(a, σ2) the normal distribution with mean a and variance σ2. Thus, for large n we have Yn ∼N(0, s2 n). □ 2 2 Mean field theory We consider a Hopfield network with N neurons i = 1, . . . , N with values Si = ±1 and follow the notations in [20]. 
As in [15], we assume patterns can be multiply stored and the degree of a pattern is defined as its multiplicity. The total number of patterns, counting their multiplicity, is denoted by p and we assume there are n patterns ξ1, . . . , ξn with degrees d1, . . . , dn ≥1 respectively and that the remaining p −Pn k=1 dk ≥0 patterns are simple, i.e., each has degree one. Note that by our assumptions there are precisely p0 = p + n − n X k=1 dk distinct patterns, which we assume are independent and identically distributed with equal probability of taking value ±1 for each node. More generally, for any non-negative integer k ∈IN, we let pk = p0 X µ=1 dk µ. We use the generalized Hebbian rule for the synaptic couplings: wij = 1 N Pp0 µ=1 dµξµ i ξµ j for i ̸= j with wii = 0 for 1 ≤i, j ≤N. As in the standard stochastic Hopfield model [20], we use Glauber dynamics [18] for the stochastic updating rule with pseudo-temperature T > 0, which accounts for various types of noise in the network, and assume zero bias in the local field. Putting β = 1/T (i.e., with the Boltzmann constant kB = 1) and letting fβ(h) = 1/(1 + exp(−2βh)), the stochastic updating rule at time t is given by: Pr(Si(t + 1) = ±1) = fβ(±hi(t)), where hi(t) = N X j=1 wijSj(t), (1) is the local field at i at time t. The updating is implemented asynchronously in a random way. The energy of the network in the configuration S = (Si)N i=1 is given by H(S) = −1 2 N X i,j=1 SiSjwij. For large N, this specifies a complex system, with an underlying state space of dimension 2N, which in general cannot be solved exactly. However, mean field theory has proved very useful in studying Hopfield networks. The average updated value of Si(t + 1) in Equation (1) is ⟨Si(t + 1)⟩= 1/(1 + e−2βhi(t)) −1/(1 + e2βhi(t)) = tanh(βhi(t)), (2) where ⟨. . .⟩denotes taking average with respect to the probability distribution in the updating rule in Equation (1). 
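The generalized Hebbian rule and the Glauber updating rule above translate directly into code. A minimal simulation sketch (ours, with illustrative sizes; not from the paper):

```python
import numpy as np

def hebbian_weights(patterns, degrees):
    """w_ij = (1/N) sum_mu d_mu xi_i^mu xi_j^mu, with zero diagonal."""
    N = patterns.shape[1]
    W = (patterns.T * degrees) @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def glauber_step(S, W, beta, rng):
    """One asynchronous Glauber update: pick a random node i, set S_i = +1
    with probability f_beta(h_i) = 1 / (1 + exp(-2 beta h_i))."""
    i = rng.integers(len(S))
    h = W[i] @ S
    S[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1
    return S

rng = np.random.default_rng(0)
N, n_pat = 200, 5
patterns = rng.choice([-1, 1], size=(n_pat, N))
degrees = np.array([4.0, 1.0, 1.0, 1.0, 1.0])   # first pattern strong, d_1 = 4
W = hebbian_weights(patterns, degrees)

S = patterns[0].copy()
for _ in range(20 * N):                          # ~20 sweeps at T = 0.5
    glauber_step(S, W, beta=2.0, rng=rng)
overlap = patterns[0] @ S / N
```

With d1 = 4 and T = 0.5 well below the degree, the strong pattern stays essentially intact under the dynamics, in line with the stability result of Section 3.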
The stationary solution for the mean field thus satisfies: ⟨Si⟩ = ⟨tanh(βhi)⟩. (3) The average overlap of pattern ξν with the mean field at the nodes of the network is given by: mν = (1/N) Σ_{i=1}^N ξ^ν_i ⟨Si⟩. (4) The replica technique for solving the mean field problem, used in the case p/N = α > 0 as N → ∞, seeks to obtain the average of the overlaps in Equation (4) by evaluating the partition function of the system, namely, Z = TrS exp(−βH(S)), where the trace TrS stands for the sum over all possible configurations S = (Si)_{i=1}^N. As is generally the case in statistical physics, once the partition function of the system is obtained, all required physical quantities can in principle be computed. However, in this case the partition function is very difficult to compute, since it entails computing the average ⟨⟨log Z⟩⟩ of log Z, where ⟨⟨. . .⟩⟩ indicates averaging over the random distribution of the stored patterns ξµ. To overcome this problem, the identity log Z = lim_{k→0} (Z^k − 1)/k is used to reduce the problem to finding the average ⟨⟨Z^k⟩⟩ of Z^k, which is then computed for positive integer values of k. For such k, we have: Z^k = TrS1 TrS2 . . . TrSk exp(−β(H(S1) + H(S2) + . . . + H(Sk))), where for each i = 1, . . . , k the superscripted configuration S^i is a replica of the configuration state. In computing the trace over each replica, various parameters are obtained, and the replica symmetry condition assumes that these parameters are independent of the particular replica under consideration. Apart from this assumption, there are two basic mathematical problems with the technique which make it unjustifiable [20, page 264]. Firstly, the positive integer k above is eventually treated as a real number near zero without any mathematical justification. Secondly, the order of taking limits, in particular of the two limits k → 0 and N → ∞, is several times interchanged, again without any mathematical justification.
Here, we develop a mathematically rigorous method for solving the mean field problem, i.e., computing the average of the overlaps in Equation (4) in the case of p/N = α > 0 as N →∞. Our method turns the basic idea of the heuristic presented in [17] and reproduced in [20] for solving the mean field equation into a mathematically verifiable formalism, which for the standard Hopfield network with random stored patterns gives the same result as the replica method, assuming replica symmetry. In the presence of strong patterns we obtain a set of new results as explained in the next two sections. The mean field equation is obtained from Equation (3) by approximating the right hand side of this equation by the value of tanh at the mean field ⟨hi⟩= PN j=1 wij⟨Sj⟩, ignoring the sum PN j=1 wij(Sj −⟨Sj⟩) for large N [17, page 32]: ⟨Si⟩= tanh(β⟨hi⟩) = tanh β N PN j=1 Pp0 µ=1 dµξµ i ξµ j ⟨Sj⟩ . (5) Equation (5) gives the mean field equation for the Hopfield network with n possible strong patterns ξµ (1 ≤µ ≤n) and p −Pn µ=1 dµ simple patterns ξµ with n + 1 ≤µ ≤p0. As in the standard Hopfield model, where all patterns are simple, we have two cases to deal with. However, we now have to account for the presence of strong attractors and our two cases will be as follows: (i) In the first case we assume p2 := Pp0 µ=1 d2 µ = o(N), which includes the simpler case p2 ≪N when p2 is fixed and independent of N. (ii) In the second case we assume we have a single strong attractor with the load parameter p/N = α > 0. 3 Stability of strong patterns with noise: p2 = o(N) The case of constant p and N →∞is usually referred to as α = 0 in the standard Hopfield model. Here, we need to consider the sum of degrees of all stored patterns (and not just the number of patterns) compared to N. 
We solve the mean field equation with T > 0 by using a method similar in spirit to [20, page 33] for the standard Hopfield model, but in our case strong patterns induce a sequence of independent but non-identically distributed random variables in the crosstalk term, where the Central Limit Theorem cannot be used; we show however that Lyapunov’s theorem (Theorem 1.1) can be invoked. In retrieving pattern ξ1, we look for a solution of the mean field equation of the form ⟨Si⟩ = mξ^1_i, where m > 0 is a constant. Using Equation (5) and separating the contribution of ξ1 in the argument of tanh, we obtain: mξ^1_i = tanh(mβ(d1 ξ^1_i + (1/N) Σ_{j≠i, µ>1} dµ ξ^µ_i ξ^µ_j ξ^1_j)). (6) For each N, µ > 1 and j ≠ i, let Y_{Nµj} = (dµ/N) ξ^µ_i ξ^µ_j ξ^1_j. (7) This gives (p0 − 1)(N − 1) independent random variables with E(Y_{Nµj}) = 0, E(Y²_{Nµj}) = d²µ/N², and E(|Y³_{Nµj}|) = d³µ/N³. We have: s²_N := Σ_{µ>1, j≠i} E(Y²_{Nµj}) = ((N − 1)/N²) Σ_{µ>1} d²µ ∼ (1/N) Σ_{µ>1} d²µ. (8) Thus, as N → ∞, we have: (1/s³_N) Σ_{µ>1, j≠i} E(|Y³_{Nµj}|) ∼ Σ_{µ>1} d³µ / (√N (Σ_{µ>1} d²µ)^{3/2}) → 0, (9) since for positive numbers dµ we always have Σ_{µ>1} d³µ < (Σ_{µ>1} d²µ)^{3/2}. Thus the Lyapunov condition is satisfied for δ = 1. By Lyapunov’s theorem we deduce: (1/N) Σ_{µ>1, j≠i} dµ ξ^µ_i ξ^µ_j ξ^1_j ∼ N(0, Σ_{µ>1} d²µ/N). (10) Since we also have p2 = o(N), it follows that we can ignore the second term, i.e., the crosstalk term, in the argument of tanh in Equation (6) as N → ∞; we thus obtain: m = tanh(βd1m). (11) To examine the fixed points of Equation (11), we let d = d1 for convenience and put x = βdm = dm/T, so that tanh x = Tx/d; see Figure 1. It follows that Tc = d is the critical temperature. If T < d then there is a non-zero (non-trivial) solution for m, whereas for T > d we only have the trivial solution. For d = 1 our solution is that of the standard Hopfield network as in [20, page 34].
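The self-consistency equation (11) takes one line to iterate numerically; this small sketch (ours) exhibits the critical temperature Tc = d:

```python
import numpy as np

def solve_overlap(T, d, iters=300):
    """Iterate m <- tanh(d*m/T), starting from m = 1 (Eq. (11))."""
    m = 1.0
    for _ in range(iters):
        m = np.tanh(d * m / T)
    return m
```

Below Tc = d the iteration settles on a sizeable non-trivial overlap; above it, m decays geometrically to the trivial solution m = 0.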
Figure 1: Stability of strong attractors with noise (the curve y = tanh x against the line y = Tx/d, which lies below, on, or above the curve according as T < d, T = d, or T > d). Theorem 3.1 The critical temperature for stability of a strong attractor is equal to its degree. □ 4 Mean field equations for p/N = α > 0 The case p/N = α, as for the standard Hopfield model, is much harder, and here we assume we have only a single pattern ξ1 with d1 ≥ 1, while the rest of the patterns ξµ are simple with dµ = 1 for 2 ≤ µ ≤ p0. The case with more than one strong pattern is harder and will be dealt with in a future paper. Moreover, we assume d1 ≪ p0, which is the interesting case in applications. If d1 > 1 then we have a single strong pattern, whereas if d1 = 1 the network reduces to the standard Hopfield network. We recall that all patterns ξµ for 1 ≤ µ ≤ p0 are independent and random. Since
We need two lemmas to prove our main result. We write X_N →^{a.s.} X for the almost sure convergence of the sequence of random variables X_N to X, whereas X_N →^{d} X indicates convergence in distribution [6]. Recall that almost sure convergence implies convergence in distribution. To help us compute the right hand side of Equation (12), we need the following lemma, which extends the standard result for the Law of Large Numbers and its rate of convergence [5, pages 112 and 113].

Lemma 4.1 Let X be a random variable on ℝ such that its probability distribution F(x) = Pr(X ≤ x) is differentiable with density F′(x) = f(x). If g : ℝ → ℝ is a bounded measurable function and X_k (k ≥ 1) is a sequence of independent and identically distributed random variables with the distribution of X, then

\[ \frac{1}{N}\sum_{i=1}^{N} g(X_i) \xrightarrow{a.s.} E g(X) = \int_{-\infty}^{\infty} g(x) f(x)\, dx, \tag{13} \]

and for all ε > 0 and t > 1, we have:

\[ \Pr\left(\sup_{k\ge N}\ \frac{1}{k}\sum_{i=1}^{k}\big(g(X_i) - E g(X)\big) \ \ge\ \varepsilon\right) = o(1/N^{t-1}). \qquad \square \tag{14} \]

The proof of the above lemma is given on-line in the supplementary material. Assume p/N = α > 0 with d_1 ≪ p_0 and d_μ = 1 for 1 < μ ≤ p_0. In the following theorem, we use the basic idea of the heuristic in [17], which is reproduced in [20, section 2.5], to develop a verifiable mathematical method with provable results to solve the mean field equation in the more general case that we have a single strong pattern present in the network.
Theorem 4.2 There is a solution to the mean field equations (12) for retrieving ξ^1 with independent random variables m_ν (for 1 ≤ ν ≤ p_0), where m_1 ~ N(m, s/N) and m_ν ~ N(0, r/N) (for ν ≠ 1), if the real numbers m, s and r satisfy the four simultaneous equations:

\[ \text{(i)}\quad m = \int_{-\infty}^{\infty} \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2} \tanh\big(\beta(d_1 m + \sqrt{\alpha r}\, z)\big) \]
\[ \text{(ii)}\quad s = q - m^2 \]
\[ \text{(iii)}\quad q = \int_{-\infty}^{\infty} \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2} \tanh^2\big(\beta(d_1 m + \sqrt{\alpha r}\, z)\big) \]
\[ \text{(iv)}\quad r = \frac{q}{\big(1-\beta(1-q)\big)^2} \tag{15} \]

In the proof of this theorem, we seek a solution of the mean field equations assuming we have independent random variables m_ν (for 1 ≤ ν ≤ p_0) such that for large N and p with p/N = α, we have m_1 ~ N(m, s/N) and m_ν ~ N(0, r/N) (ν ≠ 1), and then find conditions in terms of m, s and r to ensure that such a solution exists. These assumptions are in effect equivalent to the replica symmetry approximation [17, page 262], since they lead, as shown below, to the same solution derived from the replica method when all stored patterns are simple. In analogy with the replica technique, we call our solution symmetric. Since, by our assumption about the distribution of the overlaps m_μ, the standard deviation of each overlap is O(1/√N), we ignore terms of O(1/N), and more generally terms of o(1/√N) compared to terms of O(1/√N), in the proof, including in the lemma below, which enables us to compute the argument of tanh in Equation (12) for large N.

Lemma 4.3 If m_ν ~ N(0, r/N) (for ν ≠ 1), then we have the equivalence of distributions:

\[ \sum_{\mu\neq 1,\nu} \xi^1_i \xi^\mu_i m_\mu \ \sim\ \mathcal{N}(0, \alpha r)\ \sim \sum_{\mu\neq 1} \xi^1_i \xi^\mu_i m_\mu. \qquad \square \]

The proofs of the above lemma and Theorem 4.2 are given on-line in the supplementary material. We note that in the heuristic described in [20] the distributions of m_1 and m_ν (ν ≠ 1) are not eventually determined, yet an initial assumption about the variance of m_ν is made.
Moreover, the heuristic has no assumption on how m_ν is distributed, and no valid justification is provided for computing the double summation to obtain m_ν, which is similar to the lack of justification for the interchange of limits in the replica technique mentioned in Section 2. Comparing the equations for m, q and r in Equations (15) with those obtained by the replica method [20, pages 263-4] or the heuristic in [20, page 37], we see that m has been replaced by d_1 m on the right hand side of the equations for m and q. It follows that for d_1 = 1 we obtain the solution for random patterns in the standard Hopfield network produced by the replica method. We can solve the simultaneous equations in (15) for m, q and r (and then for s) numerically. As in [20, page 38], we examine when these equations have non-trivial solutions (i.e., m ≠ 0) as T → 0, corresponding to β → ∞, where we also have q → 1 while C := β(1 − q) remains finite. Using the relations:

\[ \int_{-\infty}^{\infty} \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}\big(1-\tanh^2\beta(az+b)\big) \;\approx\; \sqrt{\frac{2}{\pi}}\,\frac{1}{a\beta}\, e^{-b^2/2a^2}, \qquad \int_{-\infty}^{\infty} \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2} \tanh\beta(az+b) \;\xrightarrow{\beta\to\infty}\; \operatorname{erf}\big(b/(\sqrt{2}\,a)\big), \tag{16} \]

where erf is the error function, the three equations for m, q and r become:

\[ C := \beta(1-q) = \sqrt{2/(\pi\alpha r)}\; e^{-(dm)^2/2\alpha r}, \qquad r = 1/(1-C)^2, \qquad m = \operatorname{erf}\big(dm/\sqrt{2\alpha r}\big), \tag{17} \]

where we have put d := d_1. Let y = dm/√(2αr); then we obtain:

\[ f_{\alpha,d}(y) := \frac{y}{d}\Big(\sqrt{2\alpha} + \frac{2}{\sqrt{\pi}}\, e^{-y^2}\Big) = \operatorname{erf}(y). \tag{18} \]

Figure 2 gives a schematic view of the solution of Equation (18). The dotted curve is the erf function on the right hand side of the equation, whereas the three solid curves correspond to the graphs of the function f_{α,d} on the left hand side for a given value of d and three different values of α. The heights of these graphs increase with α. The critical load parameter α_c(d) is the threshold such that for α < α_c(d) the strong pattern with degree d can be retrieved, whereas for α_c(d) < α this memory is lost.
Geometrically, α_c(d) corresponds to the curve that is tangent, say at y_d, to the error function, i.e., f′_{α_c(d),d}(y_d) = erf′(y_d). For α < α_c(d), the function f_{α,d} has two non-trivial intersections (away from the origin) with erf, while for α_c(d) < α there are no non-trivial intersections. We can compare the storage capacity of strong patterns with that of simple patterns, assuming the independence of m_ν (equivalently, replica symmetry), by finding a lower bound for α_c(d) in terms of α_c(1), as follows.

Figure 2: Capacity of strong attractors. (Schematic: the dotted curve is erf(y); the solid curves are f_{α,d}(y) for three values of α, with the tangent case at y_d defining α_c(d).)

We have:

\[ f_{\alpha,d}(y) = y\Big(\sqrt{2(\alpha/d^2)} + \frac{2}{d\sqrt{\pi}}\, e^{-y^2}\Big) \;\le\; y\Big(\sqrt{2(\alpha/d^2)} + \frac{2}{\sqrt{\pi}}\, e^{-y^2}\Big), \tag{19} \]

where equality holds iff d = 1. Putting α = d^2 α_c(1) and y = y_1, we have for d > 1:

\[ f_{d^2\alpha_c(1),\,d}(y_1) \;<\; f_{\alpha_c(1),\,1}(y_1) = \operatorname{erf}(y_1). \tag{20} \]

Therefore, for a strong pattern, the graphs of f_{d^2 α_c(1),d} and erf intersect in two non-trivial points, and thus α_c(d) > d^2 α_c(1). Since α_c(1) = α_c ≈ 0.138, this yields α_c(d)/0.138 > d^2; i.e., the relative increase in the storage capacity exceeds the square of the degree of the strong pattern. In the case of the standard Hopfield network with simple patterns only, we have α_c(1) = α_c ≈ 0.138, but simulation experiments show that for values in the narrow range 0.138 < α < 0.144 there are replica symmetry breaking solutions for which a stored pattern can still be retrieved [12]. We show that the square property holds when we take symmetry breaking solutions into account. By [15, Theorem 1], it follows that the error probability of retrieving a single strong attractor is:

\[ \Pr{}_{\!er} \approx \tfrac{1}{2}\big(1 - \operatorname{erf}(d/\sqrt{2\alpha})\big), \quad \text{for } \alpha = p/N. \]

Thus, this error will be constant if d/√α remains fixed, indicating that the critical value of the load parameter is proportional to the square of the degree of the strong attractor.

Corollary 4.4 The storage capacity for retrieving a single strong pattern exceeds that of a simple pattern by the square of the degree of the strong pattern.
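The tangency construction above can be carried out numerically. Rearranging Equation (18), a non-trivial intersection exists exactly when √(2α) ≤ d·erf(y)/y − (2/√π)e^{−y²} for some y > 0, so α_c(d) = ½ (max_{y>0} G_d(y))² with G_d(y) := d·erf(y)/y − (2/√π)e^{−y²}. A sketch in plain Python (the grid search and this rearrangement are ours; replica symmetry is assumed as in the text):

```python
import math

def alpha_c(d):
    """Critical load for a strong pattern of degree d, from the crossing
    condition of f_{alpha,d}(y) = erf(y) in Equation (18)."""
    ys = [0.001 * k for k in range(1, 5001)]  # grid over y in (0, 5]
    # A non-trivial crossing exists iff sqrt(2 alpha) <= G_d(y) for some y > 0.
    g_max = max(d * math.erf(y) / y - (2.0 / math.sqrt(math.pi)) * math.exp(-y * y)
                for y in ys)
    return 0.5 * g_max * g_max

a1 = alpha_c(1.0)  # standard Hopfield capacity, approximately 0.138
a2 = alpha_c(2.0)  # capacity for a strong pattern of degree 2
```

For d = 1 this reproduces α_c ≈ 0.138, and the computed α_c(2) indeed exceeds 4·α_c(1), in line with Corollary 4.4.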
□ This square property shows that a multiply learned pattern is retained in the memory in the presence of a large number of other random patterns, in proportion to the square of its multiplicity.

5 Conclusion

We have developed a mathematically justifiable method to derive the storage capacity of the Hopfield network when the load parameter α = p/N remains a positive constant as the network size N → ∞. For the standard model, our result confirms that of the replica technique, i.e., α_c ≈ 0.138. However, our method also computes the storage capacity when retrieving a single strong pattern of degree d in the presence of other random patterns, and we have shown that this capacity exceeds that of a simple pattern by a multiplicative factor d², providing further justification for using strong patterns of Hopfield networks to model attachment types and behavioural prototypes in psychology. The storage capacity of Hopfield networks when there is more than a single strong pattern, and in networks with low neural activation, will be addressed in future work. It is also of interest to examine the behaviour of strong patterns in Boltzmann Machines [20], Restricted Boltzmann Machines [28] and Deep Learning Networks [21].

References

[1] D. J. Amit. Modeling Brain Function: The World of Attractor Neural Networks. Cambridge, 1989.
[2] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Spin-glass models of neural networks. Phys. Rev. A, 32:1007–1018, 1985.
[3] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett., 55:1530–1533, Sep 1985.
[4] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Information storage in neural networks with low levels of activity. Phys. Rev. A, 35:2293–2303, Mar 1987.
[5] L. E. Baum and M. Katz. Convergence rates in the law of large numbers. Transactions of the American Mathematical Society, 120(1):108–123, 1965.
[6] P. Billingsley. Probability and Measure.
John Wiley & Sons, second edition, 1986.
[7] E. Bolthausen. Random media and spin glasses: An introduction into some mathematical results and problems. In E. Bolthausen and A. Bovier, editors, Spin Glasses, volume 1900 of Lecture Notes in Mathematics. Springer, 2007.
[8] A. Bovier and V. Gayrard. Hopfield models as generalized random mean field models. In A. Bovier and P. Picco, editors, Mathematical Aspects of Spin Glasses and Neural Networks, pages 3–89. Birkhäuser, 1998.
[9] J. Bowlby. Attachment: Volume One of the Attachment and Loss Trilogy. Pimlico, second revised edition, 1997.
[10] L. Cozolino. The Neuroscience of Human Relationships. W. W. Norton, 2006.
[11] F. Crick and G. Mitchison. The function of dream sleep. Nature, 304:111–114, 1983.
[12] A. Crisanti, D. J. Amit, and H. Gutfreund. Saturation level of the Hopfield model for neural network. Europhys. Lett., 2(337), 1986.
[13] L. F. Cugliandolo and M. V. Tsodyks. Capacity of networks with correlated attractors. Journal of Physics A: Mathematical and General, 27(3):741, 1994.
[14] V. Dotsenko. An Introduction to the Theory of Spin Glasses and Neural Networks. World Scientific, 1994.
[15] A. Edalat and F. Mancinelli. Strong attractors of Hopfield neural networks to model attachment types and behavioural patterns. In IJCNN 2013 Conference Proceedings. IEEE, August 2013.
[16] K. H. Fischer and J. A. Hertz. Spin Glasses (Cambridge Studies in Magnetism). Cambridge, 1993.
[17] T. Geszti. Physical Models of Neural Networks. World Scientific, 1990.
[18] R. J. Glauber. Time-dependent statistics of the Ising model. J. Math. Phys., 4(294), 1963.
[19] H. Gutfreund. Neural networks with hierarchically correlated patterns. Phys. Rev. A, 37:570–577, 1988.
[20] J. A. Hertz, A. S. Krogh, and R. G. Palmer. Introduction to the Theory of Neural Computation. Westview Press, 1991.
[21] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[22] R.
E. Hoffman. Computer simulations of neural information processing and the schizophrenia-mania dichotomy. Arch Gen Psychiatry, 44(2):178–88, 1987.
[23] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79:2554–2558, 1982.
[24] T. Lewis, F. Amini, and R. Lannon. A General Theory of Love. Vintage, 2000.
[25] M. Löwe. On the storage capacity of Hopfield models with correlated patterns. Annals of Applied Probability, 8(4):1216–1250, 1998.
[26] M. Mezard, G. Parisi, and M. Virasoro, editors. Spin Glass Theory and Beyond. World Scientific, 1986.
[27] P. Peretto. On learning rules and memory storage abilities of asymmetrical neural networks. J. Phys. France, 49:711–726, 1988.
[28] R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning, pages 791–798, 2007.
[29] A. N. Schore. Affect Dysregulation and Disorders of the Self. W. W. Norton, 2003.
[30] T. S. Smith, G. T. Stevens, and S. Caldwell. The familiar and the strange: Hopfield network models for prototype-entrained. In D. D. Franks and T. S. Smith, editors, Mind, Brain, and Society: Toward a Neurosociology of Emotion, volume 5 of Social Perspectives on Emotion. Elsevier/JAI, 1999.
[31] M. Tsodyks and M. Feigelman. Enhanced storage capacity in neural networks with low level of activity. Europhysics Letters, 6:101–105, 1988.
Manifold-based Similarity Adaptation for Label Propagation

Masayuki Karasuyama and Hiroshi Mamitsuka
Bioinformatics Center, Institute for Chemical Research, Kyoto University, Japan
{karasuyama,mami}@kuicr.kyoto-u.ac.jp

Abstract

Label propagation is one of the state-of-the-art methods for semi-supervised learning; it estimates labels by propagating label information through a graph. Label propagation assumes that data points (nodes) connected in a graph should have similar labels. Consequently, the label estimation heavily depends on the edge weights of the graph, which represent the similarity of each node pair. We propose a method in which the graph captures the manifold structure of the input features, using edge weights parameterized by a similarity function. In this approach, edge weights simultaneously represent both similarity and local reconstruction weight, both of which are reasonable for label propagation. For further justification, we provide analytical considerations, including an interpretation as cross-validation of a propagation model in the feature space and an error analysis based on a low dimensional manifold model. Experimental results demonstrate the effectiveness of our approach on both synthetic and real datasets.

1 Introduction

Graph-based learning algorithms have received considerable attention in the machine learning community. For example, label propagation (e.g., [1, 2]) is widely accepted as a state-of-the-art approach for semi-supervised learning, in which node labels are estimated through the input graph structure. A common important property of these graph-based approaches is that the manifold structure of the input data can be captured by the graph. Their practical performance advantage has been demonstrated in various application areas.
On the other hand, it is well known that the accuracy of graph-based methods depends highly on the quality of the input graph (e.g., [1, 3–5]), which is typically generated from a set of numerical input vectors (i.e., feature vectors). A general framework of graph-based learning can be represented as the following three-step procedure:

Step 1: Generating graph edges from given data, where nodes of the generated graph correspond to the instances of input data.
Step 2: Giving weights to the graph edges.
Step 3: Estimating node labels based on the generated graph, which is often represented as an adjacency matrix.

In this paper, we focus on the second step of this procedure: estimating edge weights for the subsequent label estimation. Optimizing edge weights is difficult in semi-supervised learning, because there are only a small number of labeled instances. The problem is also important, because edge weights heavily affect the final prediction accuracy of graph-based methods, while in practice rather simple heuristic strategies have been employed. There are two standard approaches for estimating edge weights: similarity function based and locally linear embedding (LLE) [6] based approaches. Each of these two approaches has its own disadvantage. The similarity based approaches use similarity functions, such as the Gaussian kernel, but most similarity functions have tuning parameters (such as the width parameter of the Gaussian kernel) that are in general difficult to tune. In LLE, on the other hand, the true underlying manifold can be approximated by a graph which minimizes a local reconstruction error. LLE is more sophisticated than the similarity-based approach, and LLE based graphs have been applied to semi-supervised learning [5, 7–9]. However, LLE is noise-sensitive [10]. In addition, to avoid a kind of degeneracy problem [11], LLE requires additional tuning parameters.
Our approach is a similarity-based method, yet it also captures the manifold structure of the input data; we refer to it as adaptive edge weighting (AEW). In AEW, graph edges are determined in a data adaptive manner in terms of both similarity and manifold structure. The objective function in AEW is based on local reconstruction, by which the estimated weights capture the manifold structure, while each edge is parameterized as a similarity function of the node pair. Consequently, in spite of its simplicity, AEW has the following three advantages:

• Compared to LLE based approaches, our formulation alleviates the problem of over-fitting, due to the parameterization of the weights. In our experiments, we observed that AEW is robust against noise in the input data on a synthetic dataset, and we also show the performance advantage of AEW on eight real-world datasets.
• The similarity based representation of edge weights is reasonable for label propagation, because transitions of labels are determined by those weights, while edge weights obtained by LLE approaches may not represent node similarity.
• AEW does not have additional tuning parameters such as regularization parameters. Although the number of edges in the graph cannot be determined by AEW, we show that the performance of AEW is robust against the number of edges, compared to standard heuristics and an LLE based approach.

We provide further justification for our approach based on the ideas of feature propagation and local linear approximation. Our objective function can be seen as a cross-validation error of a propagation model for feature vectors, which we call feature propagation. This allows us to interpret AEW as optimizing the graph weights through cross-validation (for prediction) in the feature vector space instead of the label space, under the assumption that the input feature vectors and the given labels share the same local structure.
Another interpretation is provided through local linear approximation, by which we can analyze the error of local reconstruction in the output (label) space under the assumption of a low dimensional manifold model.

2 Graph-based Semi-supervised Learning

In this paper we use label propagation, one of the state-of-the-art graph-based learning algorithms, as the method for the third step in the three-step procedure. Suppose that we have n feature vectors X = {x_1, ..., x_n}, where x_i ∈ ℝ^p. An undirected graph G is generated from X, where each node (or vertex) corresponds to a data point x_i. The graph G can be represented by the adjacency matrix W ∈ ℝ^{n×n}, where the (i, j)-element W_{ij} is the weight of the edge between x_i and x_j. The key idea of graph-based algorithms is the so-called manifold assumption, in which instances connected by large weights W_{ij} on a graph have similar labels (meaning that labels change smoothly on the graph). For the adjacency matrix, the following weighted k-nearest neighbor (k-NN) graph is commonly used in graph-based learning algorithms [1]:

\[ W_{ij} = \begin{cases} \exp\Big(-\sum_{d=1}^{p} \frac{(x_{id}-x_{jd})^2}{\sigma_d^2}\Big), & j \in \mathcal{N}_i \text{ or } i \in \mathcal{N}_j, \\ 0, & \text{otherwise}, \end{cases} \tag{1} \]

where x_{id} is the d-th element of x_i, N_i is the set of indices of the k-NN of x_i, and {σ_d}_{d=1}^p is a set of parameters. [1] shows this weighting can also be interpreted as the solution of the heat equation on the graph. From this adjacency matrix, the graph Laplacian can be defined by L = D − W, where D is a diagonal matrix with diagonal entries D_{ii} = Σ_j W_{ij}. Instead of L, normalized variants of the Laplacian such as L = I − D^{−1}W or L = I − D^{−1/2}W D^{−1/2} are also used, where I ∈ ℝ^{n×n} is the identity matrix. Among several label propagation algorithms, we mainly use the formulation by [1], which is the standard formulation of graph-based semi-supervised learning. Suppose that the first ℓ data points in X are labeled by Y = {y_1, ..., y_ℓ}, where y_i ∈ {1, ..., c} and c is the number of classes.
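Equation (1) and the Laplacian L = D − W can be assembled directly. A sketch (NumPy; the toy data, k = 2, and a common bandwidth σ = 1 are illustrative), keeping an edge whenever j ∈ N_i or i ∈ N_j:

```python
import numpy as np

def knn_gaussian_graph(X, k, sigma):
    """Adjacency matrix of Equation (1) with a common bandwidth sigma,
    plus the unnormalized graph Laplacian L = D - W."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:k + 1]  # k nearest neighbours, excluding i itself
        W[i, nbrs] = np.exp(-sq[i, nbrs] / sigma ** 2)
    W = np.maximum(W, W.T)  # keep an edge if j is in N_i or i is in N_j
    L = np.diag(W.sum(1)) - W
    return W, L

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
W, L = knn_gaussian_graph(X, k=2, sigma=1.0)
```

Taking the elementwise maximum with the transpose implements the "j ∈ N_i or i ∈ N_j" rule, since the Gaussian weight itself is symmetric in (i, j).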
The goal of label propagation is to predict the labels of the unlabeled nodes {x_{ℓ+1}, ..., x_n}. The scoring matrix F gives an estimate of the label of x_i by argmax_j F_{ij}. Label propagation can be defined as estimating F in such a way that the score F changes smoothly on the given graph while matching the given labeled points. The following is the standard formulation, called the harmonic Gaussian field (HGF) model, of label propagation [1]:

\[ \min_{F}\ \operatorname{trace}\big(F^\top L F\big) \quad \text{subject to} \quad F_{ij} = Y_{ij}, \ \text{for } i = 1, \ldots, \ell, \]

where Y_{ij} is the label matrix with Y_{ij} = 1 if x_i is labeled as y_i = j, and Y_{ij} = 0 otherwise. In this formulation, the scores for labeled nodes are fixed as constants. This formulation reduces to linear systems, which can be solved efficiently, especially when the Laplacian L has a sparse structure.

3 Basic Framework of Proposed Approach

The performance of label propagation heavily depends on the quality of the input graph. Our proposed approach, adaptive edge weighting (AEW), optimizes edge weights for graph-based learning algorithms. We note that AEW addresses the second step of the three-step procedure and has nothing to do with the first and third steps, meaning that any methods for the first and third steps can be combined with AEW. In this paper we consider an input graph generated as a k-NN graph (the first step is based on k-NN), while we note that AEW can be applied to any type of graph. First of all, graph edges should satisfy the following conditions:

• Capturing the manifold structure of the input space.
• Representing similarity between two nodes.

These two conditions are closely related to the manifold assumption of graph-based learning algorithms, in which labels vary smoothly along the input manifold. Since the manifold structure of the input data is unknown beforehand, the graph is used to approximate the manifold (the first condition).
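The constrained trace minimization above has a closed form: splitting the nodes into the labeled block (first ℓ) and the unlabeled block and setting the gradient to zero gives L_uu F_u = W_ul Y_l for the unlabeled scores. A sketch (NumPy; the 4-node path graph and its labels are illustrative, not from the paper):

```python
import numpy as np

def hgf_propagate(W, Y_l, n_labeled):
    """Harmonic Gaussian field solution: labeled scores are clamped to Y_l,
    unlabeled scores solve the linear system L_uu F_u = W_ul Y_l."""
    L = np.diag(W.sum(1)) - W
    l, u = slice(0, n_labeled), slice(n_labeled, W.shape[0])
    F_u = np.linalg.solve(L[u, u], W[u, l] @ Y_l)
    return F_u

# Path graph 0 - 2 - 3 - 1 with unit weights; labeled nodes must come first.
W = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])
Y_l = np.eye(2)  # node 0 -> class 0, node 1 -> class 1
F_u = hgf_propagate(W, Y_l, n_labeled=2)
```

The harmonic scores interpolate the clamped labels: node 2 (adjacent to node 0) is assigned class 0, node 3 (adjacent to node 1) class 1.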
Subsequent predictions are performed in such a way that labels change smoothly according to the similarity structure provided by the graph (the second condition). Our algorithm simultaneously pursues these two important aspects of the graph for graph-based learning algorithms. We define W_{ij} as a similarity function of two nodes (1), using the Gaussian kernel in this paper (note: other similarity functions can also be used). We estimate σ_d so that the graph represents the manifold structure of the input data, via the following optimization problem:

\[ \min_{\{\sigma_d\}_{d=1}^{p}}\ \sum_{i=1}^{n} \Big\| x_i - \frac{1}{D_{ii}} \sum_{j\sim i} W_{ij}\, x_j \Big\|_2^2, \tag{2} \]

where j ∼ i means that j is connected to i. This minimizes the reconstruction error of local linear approximation, which captures the input manifold structure, in terms of the parameters of the similarity function. We describe the motivation and analytical properties of this objective function in Section 4, and its advantages over existing approaches, including the well-known locally linear embedding (LLE) [6] based methods, in Section 5. To optimize (2), we can use any gradient-based algorithm such as steepest descent or conjugate gradient (in the later experiments, we used the steepest descent method). Due to the non-convexity of the objective function, we cannot guarantee convergence to the global optimum, which means that the solutions depend on the initial σ_d. In our experiments, we employed the well-known median heuristics (e.g., [12]) for setting the initial values of σ_d (Section 6). Another possible strategy is to use a number of different initial values for σ_d, which needs a high computational cost. The gradient can be computed efficiently, due to the sparsity of the adjacency matrix. Since the number of edges of a k-NN graph is O(nk), the derivative of the adjacency matrix W can be calculated in O(nkp). Then the entire derivative of the objective function can be calculated in O(nkp²).
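The optimization loop for a single common bandwidth σ can be sketched as follows (NumPy; we substitute a finite-difference gradient with backtracking for the paper's analytic gradient, and, for simplicity, use each node's own k nearest neighbours — a directed variant of the graph; the data and constants are illustrative):

```python
import numpy as np

def recon_error(X, sq, neigh, sigma):
    """Objective (2): squared local reconstruction error under bandwidth sigma."""
    err = 0.0
    for i in range(X.shape[0]):
        w = np.exp(-sq[i, neigh[i]] / sigma ** 2)
        xhat = (w[:, None] * X[neigh[i]]).sum(0) / w.sum()  # local weighted average
        err += ((X[i] - xhat) ** 2).sum()
    return err

rng = np.random.RandomState(0)
t = np.linspace(0, 3, 40)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.randn(40, 2)  # noisy 1-D manifold
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)               # pairwise squared distances
neigh = [np.argsort(sq[i])[1:6] for i in range(40)]         # k = 5 nearest neighbours

sigma = 1.0
obj_init = obj = recon_error(X, sq, neigh, sigma)
for _ in range(30):  # steepest descent with backtracking on the step size
    g = (recon_error(X, sq, neigh, sigma + 1e-5)
         - recon_error(X, sq, neigh, sigma - 1e-5)) / 2e-5  # finite-difference gradient
    step = 0.5
    while step > 1e-12:
        cand = sigma - step * g
        cand_obj = recon_error(X, sq, neigh, cand)
        if cand_obj < obj:          # accept only strict decreases of (2)
            sigma, obj = cand, cand_obj
            break
        step *= 0.5
```

Backtracking guarantees the objective is monotonically non-increasing, mirroring the steepest descent procedure described in the text.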
Note that k often takes a small value such as k = 10.

4 Analytical Considerations

In Section 3, we defined our approach as the minimization of the local reconstruction error of input features. We describe several interesting properties and interpretations of this definition.

4.1 Derivation from Feature Propagation

First, we show that our objective function can be interpreted as a cross-validation error of the HGF model for the feature vector x on the graph. Let us divide the set of node indices {1, ..., n} into a training set T and a validation set V. Suppose that we try to predict x in the validation set {x_i}_{i∈V} from the given training set {x_i}_{i∈T} and the adjacency matrix W. For this prediction problem, we consider the HGF model for x:

\[ \min_{\hat{X}}\ \operatorname{trace}\big(\hat{X}^\top L \hat{X}\big) \quad \text{subject to} \quad \hat{x}_{ij} = x_{ij}, \ \text{for } i \in \mathcal{T}, \]

where X = (x_1, x_2, ..., x_n)^⊤, X̂ = (x̂_1, x̂_2, ..., x̂_n)^⊤, and x_{ij} and x̂_{ij} indicate the (i, j)-th entries of X and X̂ respectively. In this formulation, x̂_i corresponds to a prediction for x_i, and only the x̂_i in the validation set V are regarded as free variables in the optimization problem, because the other {x̂_i}_{i∈T} are fixed at the observed values by the constraint. This can be interpreted as propagating {x_i}_{i∈T} to predict {x_i}_{i∈V}; we call this process feature propagation. When we employ leave-one-out as the cross-validation of the feature propagation model, we obtain

\[ \sum_{i=1}^{n} \| x_i - \hat{x}_{-i} \|_2^2, \tag{3} \]

where x̂_{−i} is the prediction for x_i with T = {1, ..., i−1, i+1, ..., n} and V = {i}. Due to the local averaging property of HGF [1], we see x̂_{−i} = Σ_j W_{ij} x_j / D_{ii}, and then (3) is equivalent to our objective function (2). From this equivalence, AEW can be interpreted as optimizing the graph weights of the HGF model for feature vectors through leave-one-out cross-validation. This also means that our framework estimates labels using the adjacency matrix W optimized in the feature space instead of the output (label) space.
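The local averaging property invoked above is quick to verify: when V = {i} is a single node, minimizing x̂ᵀLx̂ over the one free coordinate gives exactly x̂_{−i} = Σ_j W_{ij} x_j / D_{ii}, since the stationarity condition is (Lx̂)_i = 0. A sketch (NumPy; the small symmetric W and feature vector are illustrative):

```python
import numpy as np

W = np.array([[0.0, 0.8, 0.3, 0.0],
              [0.8, 0.0, 0.5, 0.2],
              [0.3, 0.5, 0.0, 0.9],
              [0.0, 0.2, 0.9, 0.0]])
L = np.diag(W.sum(1)) - W
x = np.array([0.1, 0.4, 0.7, 1.0])  # one scalar feature per node

i = 2  # held-out node: minimize xhat^T L xhat over the single free entry xhat[i]
# Stationarity d/dxhat_i (xhat^T L xhat) = 2 (L xhat)_i = 0 gives
# D_ii * xhat_i = sum_j W_ij x_j, i.e. the weighted local average:
xhat_i = W[i] @ x / W[i].sum()
xhat = x.copy()
xhat[i] = xhat_i
```

Plugging the local average back in makes the i-th entry of Lx̂ vanish, confirming it solves the one-node feature propagation problem exactly.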
Thus, if input features and labels share the same adjacency matrix (i.e., the same local structure), the minimization of the objective function (2) should estimate the adjacency matrix which accurately propagates the labels of graph nodes.

4.2 Local Linear Approximation

The feature propagation model provides the interpretation of our approach as the optimization of the adjacency matrix under the assumption that x and y can be reconstructed by the same adjacency matrix. We here justify this assumption in a more formal way, from the viewpoint of local reconstruction with a lower dimensional manifold model. As shown in [1], HGF can be regarded as a local reconstruction method, meaning that the prediction can be represented as a weighted local average:

\[ F_{ik} = \frac{\sum_j W_{ij} F_{jk}}{D_{ii}} \quad \text{for } i = \ell+1, \ldots, n. \]

We show the relationship between the local reconstruction error in the feature space described by our objective function (2) and that in the output space. For simplicity we consider the vector form of the score function, f ∈ ℝⁿ, which can be regarded as a special case of the score matrix F; the discussion here also applies to F. The same analysis can be approximately applied to other graph learning methods, such as local and global consistency [2], because they have a similar local averaging form, though we omit this here. We assume the following manifold model for the input feature space, in which x is generated from a corresponding lower dimensional variable τ ∈ ℝ^q: x = g(τ) + ε_x, where g : ℝ^q → ℝ^p is a smooth function and ε_x ∈ ℝ^p represents noise. In this model, y is also represented by some function of τ: y = h(τ) + ε_y, where h : ℝ^q → ℝ is a smooth function and ε_y ∈ ℝ represents noise (for simplicity, we consider a continuous output rather than discrete labels). For this model, the following theorem shows the relationship between the reconstruction error of the feature vector x and the output y:

Theorem 1.
Suppose x_i can be approximated by its neighbors as follows:

\[ x_i = \frac{1}{D_{ii}} \sum_{j\sim i} W_{ij}\, x_j + e_i, \tag{4} \]

where e_i ∈ ℝ^p represents the approximation error. Then, the same adjacency matrix reconstructs the output y_i ∈ ℝ with the following error:

\[ y_i - \frac{1}{D_{ii}} \sum_{j\sim i} W_{ij}\, y_j = J e_i + O(\delta_{\tau_i}) + O(\varepsilon_x + \varepsilon_y), \tag{5} \]

where \( J = \frac{\partial h(\tau_i)}{\partial \tau^\top} \left( \frac{\partial g(\tau_i)}{\partial \tau^\top} \right)^{+} \), with the superscript + indicating the pseudoinverse, and δ_{τ_i} = max_j(∥τ_i − τ_j∥²₂). See our supplementary material for the proof of this theorem. From (5), we see that the reconstruction error of y_i consists of three terms. The first term includes the reconstruction error for x_i, represented by e_i; the second term is the distance between τ_i and {τ_j}_{j∼i}. These two terms have a trade-off relationship: we can reduce e_i by using many data points x_j, but then δ_{τ_i} would increase. The third term is the intrinsic noise, which we cannot directly control. In spite of its importance, this simple relationship has not been considered in the context of graph estimation for semi-supervised learning, in which LLE based objective functions have been used without clear justification [5, 7–9]. A simple approach to exploiting this theorem would be a regularization formulation, minimizing a combination of the reconstruction error for x and a penalty on the distances between data points connected by edges. Regularized LLE [5, 8, 13, 14] can be interpreted as one realization of such an approach. However, in semi-supervised learning, selecting appropriate values of the regularization parameter is difficult. We therefore optimize the edge weights through the parameters of the similarity function, especially the bandwidth parameter σ of the Gaussian similarity function. In this approach, a very large bandwidth (giving large weights to distant data points) may cause a large reconstruction error, while an extremely small bandwidth may not give enough weight to reconstruct.
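Theorem 1 can be illustrated numerically: on a smooth one-dimensional manifold, weights that reconstruct x = g(τ) well also reconstruct y = h(τ) with small error. A sketch (NumPy; the circle manifold, the output h(τ) = sin 2τ, the neighbourhood size and the bandwidth are all illustrative choices of ours):

```python
import numpy as np

n = 200
tau = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = np.c_[np.cos(tau), np.sin(tau)]   # x = g(tau): unit circle in R^2
y = np.sin(2 * tau)                   # y = h(tau): smooth output on the manifold

sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)  # pairwise squared distances
err_y = []
for i in range(n):
    nbrs = np.argsort(sq[i])[1:5]     # 4 nearest neighbours (2 on each side)
    w = np.exp(-sq[i, nbrs] / 0.01)   # Gaussian weights reconstructing x_i
    # Reuse the same weights to reconstruct y_i, as in Equation (5):
    err_y.append(abs(y[i] - w @ y[nbrs] / w.sum()))
max_err = max(err_y)
```

Since the neighbourhoods are symmetric and δ_{τ_i} is tiny, the output reconstruction error stays small everywhere, as Theorem 1 predicts.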
For the symmetric normalized graph Laplacian, we cannot apply Theorem 1 to our algorithm directly; see the supplementary material for a modified version of Theorem 1 for the normalized Laplacian. In the experiments, we also report results for the normalized Laplacian and show that our approach can improve prediction accuracy as in the case of the unnormalized Laplacian.

5 Related Topics

LLE [6] can also estimate graph edges, based on a similar objective function in which W is directly optimized as a real valued matrix. This approach has been used in many methods for graph-based semi-supervised learning and clustering [5, 7–9], but LLE is very noise-sensitive [10], and the resulting weights W_{ij} cannot necessarily represent the similarity between the corresponding nodes (i, j). For example, for two nearly identical points x_{j1} and x_{j2}, both connected to x_i, it is not guaranteed that W_{ij1} and W_{ij2} have similar values. To solve this problem, a regularization term can be introduced [11], but it is not easy to optimize the regularization parameter for this term. We instead optimize parameters of the similarity (kernel) function. This parameterized form of edge weights alleviates the over-fitting problem, and, obviously, the optimized weights still represent node similarity. Although several model selection approaches (such as cross-validation and marginal likelihood maximization) have been applied to optimizing graph edge weights by regarding them as usual hyperparameters in supervised learning [3, 4, 15], most of them need labeled instances and become unreliable in cases with few labels. Another approach is to optimize some criterion designed specifically for graph-based algorithms (e.g., [1, 16]). These criteria often have degenerate (trivial) solutions, which heuristics are used to prevent, but the validity of those heuristics is not clear.
Compared to these approaches, our approach is more general and flexible with respect to the problem setting, because AEW is independent of the number of classes, the number of labels, and the subsequent label estimation algorithm. In addition, model selection based approaches basically target the third step of the three-step procedure, so AEW can be combined with such methods: the graph optimized by AEW can be used as the input graph of these methods. Besides k-NN, there are several methods for generating a graph (edges) from feature vectors (e.g., [9, 17]). Our approach can also be applied to those graphs, because AEW only optimizes the weights of edges. In our experiments, we used the edges of the k-NN graph as the initial graph of AEW. We then observed that AEW is not sensitive to the choice of k, compared with usual k-NN graphs. This is because, in minimizing the reconstruction error (2), the Gaussian similarity value becomes small when x_i and x_j are not close to each other. In other words, redundant weights can be reduced drastically, because in the Gaussian kernel, weights decay exponentially with the squared distance. Metric learning is another approach to adapting similarity, though metric learning is not for graphs. A standard way of incorporating graph information into metric learning is to use some graph-based regularization, in which the graph weights must be determined beforehand. For example, in [18], the graph is generated by LLE, whose disadvantages we have already described. Another approach is [19], which estimates a distance metric so that the k-NN graph in terms of the obtained metric reproduces a given graph. This approach, however, is not for semi-supervised learning, and it is unclear whether it works in semi-supervised settings. Overall, metric learning has been developed in a different context from our setting, and it has not been shown that metric learning can be applied to label propagation.
6 Experiments

We evaluated the performance of our approach on synthetic and real-world datasets. We investigated the performance of AEW using the harmonic Gaussian field (HGF) model. For comparison, we used linear neighborhood propagation (LNP) [5], which generates a graph using an LLE-based objective function. LNP has two regularization parameters: one for the LLE process (the first and second steps of the three-step procedure) and one for the label estimation process (the third step). For the LLE process we used the heuristics suggested by [11], and for the label propagation process we chose the parameter value that gave the best test accuracy. HGF has no such hyper-parameters. All results were averaged over 30 runs with randomly sampled data points.

6.1 Synthetic datasets

We use the two datasets in Figure 1, which have the same form, except that Figure 1(b) contains several noisy data points that may become bridge points (points connecting different classes [5]). In both cases the number of classes is 4 and each class has 100 data points (thus n = 400). Table 1 shows the 0-1 error rates on the unlabeled nodes for HGF and LNP. For HGF, we used the median heuristic to choose the parameter σd in the similarity function (1), meaning that a common σ (= σ1 = ... = σp) is set to the median distance over all connected pairs of xi. The symmetric normalized version of the graph Laplacian was used. The optimization of AEW started from the median σd; the results of AEW are shown in the column 'AEW + HGF' of Table 1. The number of labeled nodes was 10 in each class (ℓ = 40, i.e., 10% of the entire dataset), and the number of neighbors in the graphs was set to k = 10 or 20. In Table 1, HGF with AEW achieves better prediction accuracy than the median heuristic and LNP in all cases.
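The experimental pipeline just described (k-NN edges, Gaussian similarity with a median-heuristic bandwidth, symmetric normalized Laplacian) can be sketched as follows. The exact form of similarity function (1) is outside this excerpt, so the kernel below (a single common σ, no factor of 2 in the denominator) and the function name are assumptions:

```python
import numpy as np

def median_heuristic_graph(X, k=10):
    """Build a k-NN graph with the median-heuristic bandwidth (sketch).

    Returns the weight matrix W and the symmetric normalized Laplacian
    L = I - D^{-1/2} W D^{-1/2} used by N-HGF.
    """
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # k nearest neighbours of each point (excluding itself), symmetrized.
    nn = np.argsort(dist, axis=1)[:, 1:k + 1]
    A = np.zeros((n, n), dtype=bool)
    A[np.arange(n)[:, None], nn] = True
    A |= A.T
    # Median heuristic: common sigma = median distance over connected pairs.
    sigma = np.median(dist[A])
    W = np.exp(-dist ** 2 / sigma ** 2) * A
    d = W.sum(axis=1)
    L = np.eye(n) - W / np.sqrt(np.outer(d, d))
    return W, L
```

AEW would then start from this median σ and refine the bandwidths; the graph above is only the initialization.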
Moreover, for both datasets (a) and (b), AEW was the most robust against changes in the number of neighbors k. This is because σd is automatically adjusted so that the local reconstruction error is minimized, which reduces the weights of connections between different manifolds.

Figure 1: Synthetic datasets (a) and (b).

Table 1: Test error comparison for synthetic datasets. The best methods according to a t-test at the 5% significance level are highlighted in boldface.

data | k  | HGF         | AEW + HGF   | LNP
(a)  | 10 | .057 (.039) | .020 (.027) | .039 (.026)
(a)  | 20 | .261 (.048) | .020 (.028) | .103 (.042)
(b)  | 10 | .119 (.054) | .073 (.035) | .103 (.038)
(b)  | 20 | .280 (.051) | .077 (.035) | .148 (.047)

Figure 2: Resulting graphs for the synthetic dataset of Figure 1(a) (k = 20): (a) k-NN, (b) AEW, (c) LNP.

Although LNP also minimizes the local reconstruction error, LNP may connect data points far from each other if doing so reduces the reconstruction error. Figure 2 shows the graphs generated by (a) k-NN, (b) AEW, and (c) LNP, with k = 20, for the dataset of Figure 1(a). The k-NN graph connects many nodes in different classes, while AEW eliminates most of those undesirable edges. LNP also has fewer edges between different classes than k-NN, but it still connects different classes. AEW reveals the class structure most clearly, which can lead to better prediction performance of the subsequent learning algorithm.

6.2 Real-world datasets

Table 2: List of datasets.
dataset  | n    | p    | # classes
COIL     | 500  | 256  | 10
USPS     | 1000 | 256  | 10
MNIST    | 1000 | 784  | 10
ORL      | 360  | 644  | 40
Vowel    | 792  | 10   | 11
Yale     | 250  | 1200 | 5
optdigit | 1000 | 256  | 10
UMIST    | 518  | 644  | 20

We examined the performance of our approach on the eight popular datasets listed in Table 2: COIL (COIL-20) [20], USPS (a preprocessed version from [21]), MNIST [22], ORL [23], Vowel [24], Yale (Yale Face Database B) [25], optdigit [24], and UMIST [26]. We evaluated two variants of the HGF model. In what follows, 'HGF' denotes HGF with the unnormalized graph Laplacian L = D − W, and 'N-HGF' denotes HGF with the symmetric normalized Laplacian L = I − D^{−1/2} W D^{−1/2}. For both variants, the median heuristic was used to set σd. To adapt to differences in local scale, we here use the local scaling kernel [27] as the similarity function. Figure 3 shows the test error on unlabeled nodes. In this figure, the two dashed lines with different markers are HGF and N-HGF, while the two solid lines with the same markers are the corresponding variants with AEW. The performance difference between the variants of HGF was small compared to the effect of AEW, particularly on COIL, ORL, Vowel, Yale, and UMIST; AEW substantially improved the prediction accuracy of HGF in most cases. LNP is shown as the solid line without markers. LNP outperformed HGF (without AEW, the dashed lines) on COIL, ORL, Vowel, Yale, and UMIST, while HGF with AEW (at least one of the variants) outperformed LNP on all these datasets except Yale, where LNP and HGF with AEW attained similar accuracy. Overall, AEW-N-HGF had the best prediction accuracy, with USPS and MNIST as typical examples. Although Theorem 1 holds exactly only for AEW-HGF, AEW-N-HGF, in which the node degrees are rescaled by the normalized Laplacian, showed highly stable performance. We further examined the effect of k. Figure 4 shows the test error for k = 20 and 10, using N-HGF, AEW-N-HGF, and LNP on the COIL dataset.
The number of labeled instances is the middle value on the horizontal axis of Figure 3(a) (5 in each class). The test error of AEW is not sensitive to k: the performance of N-HGF with k = 20 was worse than with k = 10, whereas AEW-N-HGF with k = 20 performed similarly to k = 10.

Figure 3: Performance comparison on the real-world datasets (a) COIL, (b) USPS, (c) MNIST, (d) ORL, (e) Vowel, (f) Yale, (g) optdigit, (h) UMIST; each panel plots the test error rate against the number of labeled instances in each class. HGF with AEW is shown by solid lines with markers, HGF with the median heuristic by dashed lines with the same markers, and LNP by a solid line without markers. For N-HGF and AEW-N-HGF, 'N' indicates the normalized Laplacian.

Figure 4: Comparison of test error rates for k = 10 and 20 (COIL, ℓ = 50). The two boxplots for each method correspond to k = 10 on the left (smaller width) and k = 20 on the right (larger width).

7 Conclusions
We have proposed the adaptive edge weighting (AEW) method for graph-based semi-supervised learning. AEW is based on local reconstruction, with the constraint that each edge weight represents the similarity of the corresponding pair of nodes. Thanks to this constraint, AEW has several advantages over LLE-based approaches: the noise sensitivity of LLE is alleviated by the parameterized form of the edge weights, and the similarity form of the edge weights is natural for graph-based methods. We also provided several interesting properties of AEW, which motivate our objective function analytically. We examined the performance of AEW on two synthetic and eight real benchmark datasets. Experimental results demonstrated that AEW can substantially improve the performance of the harmonic Gaussian field (HGF) model, and that AEW outperformed LLE-based approaches in all but one case on the real datasets.

References

[1] X. Zhu, Z. Ghahramani, and J. D. Lafferty, "Semi-supervised learning using Gaussian fields and harmonic functions," in Proc. of the 20th ICML (T. Fawcett and N. Mishra, eds.), pp. 912-919, AAAI Press, 2003.
[2] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf, "Learning with local and global consistency," in Advances in NIPS 16 (S. Thrun, L. Saul, and B. Schölkopf, eds.), MIT Press, 2004.
[3] A. Kapoor, Y. A. Qi, H. Ahn, and R. Picard, "Hyperparameter and kernel learning for graph based semi-supervised classification," in Advances in NIPS 18 (Y. Weiss, B. Schölkopf, and J. Platt, eds.), pp. 627-634, MIT Press, 2006.
[4] X. Zhang and W. S. Lee, "Hyperparameter learning for graph based semi-supervised learning algorithms," in Advances in NIPS 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 1585-1592, MIT Press, 2007.
[5] F. Wang and C. Zhang, "Label propagation through linear neighborhoods," IEEE TKDE, vol. 20, pp. 55-67, 2008.
[6] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[7] S. I. Daitch, J. A. Kelner, and D. A. Spielman, "Fitting a graph to vector data," in Proc. of the 26th ICML, pp. 201-208, ACM, 2009.
[8] H. Cheng, Z. Liu, and J. Yang, "Sparsity induced similarity measure for label propagation," in IEEE 12th ICCV, pp. 317-324, IEEE, 2009.
[9] W. Liu, J. He, and S.-F. Chang, "Large graph construction for scalable semi-supervised learning," in Proc. of the 27th ICML, pp. 679-686, Omnipress, 2010.
[10] J. Chen and Y. Liu, "Locally linear embedding: a survey," Artificial Intelligence Review, vol. 36, pp. 29-48, 2011.
[11] L. K. Saul and S. T. Roweis, "Think globally, fit locally: unsupervised learning of low dimensional manifolds," JMLR, vol. 4, pp. 119-155, 2003.
[12] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola, "A kernel method for the two-sample-problem," in Advances in NIPS 19 (B. Schölkopf, J. C. Platt, and T. Hoffman, eds.), pp. 513-520, MIT Press, 2007.
[13] E. Elhamifar and R. Vidal, "Sparse manifold clustering and embedding," in Advances in NIPS 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 55-63, 2011.
[14] D. Kong, C. H. Ding, H. Huang, and F. Nie, "An iterative locally linear embedding algorithm," in Proc. of the 29th ICML (J. Langford and J. Pineau, eds.), pp. 1647-1654, Omnipress, 2012.
[15] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty, "Nonparametric transforms of graph kernels for semi-supervised learning," in Advances in NIPS 17 (L. K. Saul, Y. Weiss, and L. Bottou, eds.), pp. 1641-1648, MIT Press, 2005.
[16] F. R. Bach and M. I. Jordan, "Learning spectral clustering," in Advances in NIPS 16 (S. Thrun, L. K. Saul, and B. Schölkopf, eds.), 2004.
[17] T. Jebara, J. Wang, and S.-F. Chang, "Graph construction and b-matching for semi-supervised learning," in Proc. of the 26th ICML (A. P. Danyluk, L. Bottou, and M. L. Littman, eds.), pp. 441-448, ACM, 2009.
[18] M. S. Baghshah and S. B. Shouraki, "Metric learning for semi-supervised clustering using pairwise constraints and the geometrical structure of data," Intelligent Data Analysis, vol. 13, no. 6, pp. 887-899, 2009.
[19] B. Shaw, B. Huang, and T. Jebara, "Learning a distance metric from a network," in Advances in NIPS 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 1899-1907, 2011.
[20] S. A. Nene, S. K. Nayar, and H. Murase, "Columbia object image library," Tech. Rep. CUCS-005-96, 1996.
[21] T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag, 2001.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[23] F. Samaria and A. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pp. 138-142, 1994.
[24] A. Asuncion and D. J. Newman, "UCI machine learning repository." http://www.ics.uci.edu/~mlearn/MLRepository.html, 2007.
[25] A. Georghiades, P. Belhumeur, and D. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE TPAMI, vol. 23, no. 6, pp. 643-660, 2001.
[26] D. B. Graham and N. M. Allinson, "Characterizing virtual eigensignatures for general purpose face recognition," in Face Recognition: From Theory to Applications; NATO ASI Series F, Computer and Systems Sciences (H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang, eds.), vol. 163, pp. 446-456, 1998.
[27] L. Zelnik-Manor and P. Perona, "Self-tuning spectral clustering," in Advances in NIPS 17, pp. 1601-1608, MIT Press, 2004.
New Subsampling Algorithms for Fast Least Squares Regression
Paramveer S. Dhillon1 Yichao Lu2 Dean Foster2 Lyle Ungar1
1Computer & Information Science, 2Statistics (Wharton School), University of Pennsylvania, Philadelphia, PA, U.S.A.
{dhillon|ungar}@cis.upenn.edu, foster@wharton.upenn.edu, yichaolu@sas.upenn.edu

Abstract

We address the problem of fast estimation of ordinary least squares (OLS) from large amounts of data (n ≫ p). We propose three methods which solve the big-data problem by subsampling the covariance matrix, using either single- or two-stage estimation. All three run in time of the order of the input size, i.e., O(np), and our best method, Uluru, gives an error bound of $O(\sqrt{p/n})$, which is independent of the amount of subsampling as long as it is above a threshold. We provide theoretical bounds for our algorithms in the fixed design setting (with Randomized Hadamard preconditioning) as well as the sub-Gaussian random design setting. We also compare the performance of our methods on synthetic and real-world datasets, and show that if the observations are i.i.d. sub-Gaussian, then one can subsample directly, without the expensive Randomized Hadamard preconditioning, at no loss of accuracy.

1 Introduction

Ordinary least squares (OLS) is one of the oldest and most widely studied statistical estimation methods, with origins tracing back over two centuries. It is the workhorse of fields as diverse as machine learning, statistics, econometrics, computational biology, and physics. To keep pace with growing amounts of data, ever faster ways of estimating OLS are sought. This paper focuses on the setting n ≫ p, where n is the number of observations and p is the number of covariates or features, a common regime for web-scale data. Numerous approaches to this problem have been proposed [1, 2, 3, 4, 5].
The predominant approach to big-data OLS estimation uses random projections: for instance, transforming the data with a randomized Hadamard transform [6] or Fourier transform, then uniformly sampling observations from the resulting transformed matrix and estimating OLS on this smaller data set. The intuition is that these frequency-domain transformations uniformize the data and smear the signal across all observations, so that there are no longer any high-leverage points whose omission could unduly influence the parameter estimates; hence uniform sampling in the transformed space suffices. Another way of looking at this approach is as preconditioning the design matrix with a carefully constructed, data-independent random matrix before subsampling. It has been used in a variety of papers proposing methods such as the Subsampled Randomized Hadamard Transform (SRHT) [1, 4] and the Subsampled Randomized Fourier Transform (SRFT) [2, 3], and publicly available software implements these ideas [7]. It is worth noting that these approaches assume a fixed design setting. Following this line of work, we provide two main contributions:

1. Novel subsampling algorithms for OLS: We propose three novel¹ algorithms for fast estimation of OLS which work by subsampling the covariance matrix. Recent results in [8] allow us to bound the difference between the parameter vector ŵ we estimate from the subsampled data and the true underlying parameter w0 which generates the data. We provide theoretical analysis of our algorithms in the fixed design setting (with Randomized Hadamard preconditioning) as well as the sub-Gaussian random design setting. The error bound of our best algorithm, Uluru, is independent of the fraction of data subsampled (above a minimum subsampling threshold) and depends only on the characteristics of the data/design matrix X.

2.
Randomized Hadamard preconditioning not always needed: We show that the error bounds for all three algorithms are similar in the fixed design and the sub-Gaussian random design settings. In other words, one can either transform the design matrix via the Randomized Hadamard transform (fixed design setting) and then use any of our three algorithms, or, if the observations are i.i.d. sub-Gaussian, use any of our three algorithms directly. Thus, another contribution of this paper is to show that if the observations are i.i.d. sub-Gaussian, one does not need the slow Randomized Hadamard preconditioning step and can obtain similar accuracy much faster.

The remainder of the paper is organized as follows. In the next section we formally define notation for the regression problem; in Sections 3 and 4 we describe our algorithms and provide theorems characterizing their performance. Finally, we compare the empirical performance of our methods on synthetic and real-world data.

2 Notation and Preliminaries

Let X be the n × p design matrix. For the random design case we assume the rows of X are n i.i.d. samples of the 1 × p independent variable (a.k.a. "covariates" or "predictors") X. Y is the real-valued n × 1 response vector containing the n corresponding values of the dependent variable Y (in general we use bold letters for samples and normal letters for random variables or vectors). ϵ is the n × 1 homoskedastic noise vector with common variance σ². We want to infer w0, the p × 1 population parameter vector that generated the data. More formally, the true model is

Y = X w0 + ϵ,   ϵ ∼ iid N(0, σ²).

The sample solution to the above equation (in matrix notation) is ŵ_sample = (X⊤X)⁻¹X⊤Y, and by consistency of the OLS estimator we know that ŵ_sample → w0 as n → ∞. Classical algorithms for computing ŵ_sample use QR decomposition or bidiagonalization [9] and require O(np²) floating point operations.
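For reference, the classical estimator ŵ_sample defined above is a one-liner in numpy; forming X⊤X is the O(np²) step that the subsampling algorithms attack:

```python
import numpy as np

def ols(X, Y):
    # Classical OLS via the normal equations: O(n p^2) to form X^T X,
    # plus O(p^3) to solve -- the cost the subsampling methods avoid.
    return np.linalg.solve(X.T @ X, X.T @ Y)
```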
Since our algorithms are based on subsampling the covariance matrix, we need some extra notation. Let r = n_subs/n (< 1) be the subsampling ratio, i.e., the ratio of the number of observations n_subs in the subsampled matrix X_subs to the number of observations n in the original matrix X; r is the fraction of observations sampled. Let X_rem, Y_rem denote the data and response vector for the remaining n − n_subs observations, so that X⊤ = [X⊤_subs ; X⊤_rem] and Y⊤ = [Y⊤_subs ; Y⊤_rem]. Also, let Σ_XX be the covariance of X and Σ_XY the covariance between X and Y. For the fixed design setting, Σ_XX = X⊤X/n and Σ_XY = X⊤Y/n; for the random design setting, Σ_XX = E(X⊤X) and Σ_XY = E(X⊤Y). The bounds presented in this paper are expressed in terms of the mean squared error (risk) under ℓ2 loss. For the fixed design setting,

MSE = (w0 − ŵ)⊤ X⊤X (w0 − ŵ)/n = (w0 − ŵ)⊤ Σ_XX (w0 − ŵ).

For the random design setting,

MSE = E_X ‖Xw0 − Xŵ‖² = (w0 − ŵ)⊤ Σ_XX (w0 − ŵ).

¹One of our algorithms (FS) is similar to [4], as we describe in Related Work. However, even for that algorithm, our theoretical analysis is novel.

2.1 Design Matrix and Preconditioning

Thus far we have made no assumptions about the design matrix X; in fact, our algorithms and analysis work in both fixed design and random design settings. As mentioned earlier, our algorithms involve subsampling the observations, so we must ensure that we do not leave behind observations which are outliers/high-leverage points; this is handled differently for fixed and random designs. In the fixed design setting the design matrix X is arbitrary and may contain high-leverage points. Therefore, before subsampling, we precondition the matrix with a Randomized Hadamard/Fourier transform [1, 4]; after preconditioning, the probability of high-leverage points in the new design matrix is very small. On the other hand, if we assume X is a random design matrix whose rows are i.i.d.
draws from a well-behaved distribution such as a sub-Gaussian one, then the probability of high-leverage points is very small and we can subsample X directly, without preconditioning. In this paper we analyze both the fixed and the sub-Gaussian random design settings. Since the fixed design analysis involves transforming the design matrix with a preconditioner before subsampling, some background on SRHT is warranted.

Subsampled Randomized Hadamard Transform (SRHT): In the fixed design setting we precondition and subsample the data with an n_subs × n randomized Hadamard transform matrix $\Theta = \sqrt{n/n_{subs}}\, RHD$, applied as Θ·X. The matrices R, H, and D are defined as follows:

• R ∈ R^{n_subs × n} is a set of n_subs rows of the n × n identity matrix, chosen uniformly at random without replacement.
• D ∈ R^{n × n} is a random diagonal matrix whose entries are independent random signs, i.e., random variables uniformly distributed on {±1}.
• H ∈ R^{n × n} is a normalized Walsh-Hadamard matrix, defined recursively by

$H_n = \begin{pmatrix} H_{n/2} & H_{n/2} \\ H_{n/2} & -H_{n/2} \end{pmatrix}$, with $H_2 = \begin{pmatrix} +1 & +1 \\ +1 & -1 \end{pmatrix}$,

and $H = \frac{1}{\sqrt{n}} H_n$ is the rescaled version of H_n.

It is worth noting that HD is the preconditioning matrix and R is the subsampling matrix. The running time of SRHT is O(n p log(p)) floating point operations (FLOPS) [4]. [4] mention fixing n_subs = O(p); in our experiments, however, we vary the amount of subsampling, which their theory does not cover. With varying subsampling, the run time becomes O(n p log(n_subs)).

3 Three subsampling algorithms for fast linear regression

All our algorithms subsample the X matrix followed by single- or two-stage fitting, and are described below for the random design setting. The algorithms for the fixed design are exactly the same, except that X_subs, Y_subs are replaced by Θ·X, Θ·Y and X_rem, Y_rem by Θ_rem·X, Θ_rem·Y, where Θ is the SRHT matrix defined in the previous section and Θ_rem is the same as Θ except that R is of size n_rem × n.
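A minimal sketch of applying Θ = √(n/n_subs)·RHD to a design matrix, materializing H via the Sylvester recursion above (a practical implementation would use an O(n log n) fast Walsh-Hadamard transform instead of building H explicitly; `srht` is our name):

```python
import numpy as np

def srht(X, n_subs, rng=None):
    """Subsampled randomized Hadamard transform Theta @ X (sketch).

    n = X.shape[0] must be a power of two for the Walsh-Hadamard recursion.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # Normalized Walsh-Hadamard matrix via the Sylvester recursion.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    H /= np.sqrt(n)
    # D: random signs; R: n_subs rows sampled without replacement.
    D = rng.choice([-1.0, 1.0], size=n)
    rows = rng.choice(n, size=n_subs, replace=False)
    return np.sqrt(n / n_subs) * (H @ (D[:, None] * X))[rows]
```

Because HD is orthogonal, the transform preserves column norms before the row sampling, which is what makes uniform sampling of the transformed rows safe.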
Still, for the sake of completeness, the algorithms are described in detail in the supplementary material.

Full Subsampling (FS): Full subsampling provides a baseline for comparison. We simply r-subsample (X, Y) as (X_subs, Y_subs) and use the subsampled data to estimate both the Σ_XX and Σ_XY covariance matrices.

Covariance Subsampling (CovS): In covariance subsampling we r-subsample X as X_subs only to estimate the Σ_XX covariance matrix; we use all n observations to compute the Σ_XY covariance matrix.

Uluru: Uluru² is a two-stage fitting algorithm. In the first stage it uses the r-subsampled (X, Y) to get an initial estimate ŵ_FS via the Full Subsampling (FS) algorithm. In the second stage it uses the remaining data (X_rem, Y_rem) to estimate the bias of the first-stage estimator, w_correct = w0 − ŵ_FS. The final estimate ŵ_Uluru is a weighted combination (generally just the sum) of the FS estimator and the second-stage estimator ŵ_correct. Uluru is described in Algorithm 1. In the second stage, since ŵ_FS is known, on the remaining data we have Y_rem = X_rem w0 + ϵ_rem, hence

R_rem = Y_rem − X_rem ŵ_FS = X_rem (w0 − ŵ_FS) + ϵ_rem.

This formula shows that we can estimate w_correct = w0 − ŵ_FS with another regression, i.e., ŵ_correct = (X⊤_rem X_rem)⁻¹ X⊤_rem R_rem. Since computing X⊤_rem X_rem takes too many FLOPS, we reuse X⊤_subs X_subs instead (which has already been computed). Finally, we combine ŵ_FS and ŵ_correct to obtain ŵ_Uluru. The estimate ŵ_correct can be seen as an almost unbiased estimate of the error w0 − ŵ_FS, so we correct almost all of the error, hence reducing the bias.
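The two-stage procedure can be sketched directly in numpy; the random row split and variable names are ours, but the arithmetic follows the description above (and Algorithm 1):

```python
import numpy as np

def uluru(X, Y, r, rng=None):
    """Two-stage Uluru estimator (sketch of Algorithm 1)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    n_subs = int(r * n)
    idx = rng.permutation(n)
    Xs, Ys = X[idx[:n_subs]], Y[idx[:n_subs]]
    Xr, Yr = X[idx[n_subs:]], Y[idx[n_subs:]]
    A = Xs.T @ Xs                             # subsampled covariance, reused twice
    w_fs = np.linalg.solve(A, Xs.T @ Ys)      # stage 1: FS estimate
    R = Yr - Xr @ w_fs                        # residuals on the remaining data
    # Stage 2: bias correction, reusing A scaled by n_subs/n_rem in place of
    # the expensive X_rem^T X_rem.
    w_corr = (n_subs / (n - n_subs)) * np.linalg.solve(A, Xr.T @ R)
    return w_fs + w_corr
```

Note that stage two never forms X⊤_rem X_rem; it reuses the already-computed subsample covariance, which is what keeps the total cost at O(n_subs p² + n p).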
Algorithm 1: Uluru
Input: X, Y, r. Output: ŵ.
  ŵ_FS = (X⊤_subs X_subs)⁻¹ X⊤_subs Y_subs
  R_rem = Y_rem − X_rem ŵ_FS
  ŵ_correct = (n_subs/n_rem) · (X⊤_subs X_subs)⁻¹ X⊤_rem R_rem
  ŵ_Uluru = ŵ_FS + ŵ_correct
  return ŵ = ŵ_Uluru

4 Theory

In this section we provide the theoretical guarantees of the three algorithms discussed in the previous sections, in the fixed as well as the random design setting. All theorems assume the OLS setting of Section 2. Without loss of generality we assume that X is whitened, i.e., Σ_XX = I_p (see the supplementary material for justification). In both cases we bound the square root of the mean squared error, which becomes ‖w0 − ŵ‖, as described in Section 2.

4.1 Fixed Design Setting

Here we assume preconditioning and subsampling with SRHT as described in the previous sections. (Please see the supplementary material for all proofs.)

Theorem 1 Assume X ∈ R^{n×p} and X⊤X = n·I_p. Let Y = Xw0 + ϵ, where ϵ is an n × 1 vector of i.i.d. Gaussian noise with standard deviation σ. If we use algorithm FS, then with failure probability at most $2\frac{n}{e^p} + 2\delta$,

$\|w_0 - \hat{w}_{FS}\| \le C\sigma \sqrt{\frac{\ln(nr + 1/\delta)\, p}{nr}}$.  (1)

Theorem 2 Assuming our data come from the same model as in Theorem 1 and we use CovS, then with failure probability at most $3\delta + 3\frac{n}{e^p}$,

$\|w_0 - \hat{w}_{CovS}\| \le (1-r)\left( C_1 \sqrt{\frac{\ln(2p/\delta)\, p}{nr}} + C_2 \sqrt{\frac{\ln(2p/\delta)\, p}{n(1-r)}} \right)\|w_0\| + C_3 \sigma \sqrt{\frac{\log(n + 1/\delta)\, p}{n}}$.  (2)

Theorem 3 Assuming our data come from the same model as in Theorem 1 and we use Uluru, then with failure probability at most $5\delta + 5\frac{n}{e^p}$,

$\|w_0 - \hat{w}_{Uluru}\| \le \sigma \sqrt{\frac{\ln(nr + 1/\delta)\, p}{nr}} \left( C_1 \sqrt{\frac{\ln(2p/\delta)\, p}{nr}} + C_2 \sqrt{\frac{\ln(2p/\delta)\, p}{n(1-r)}} \right) + \sigma C_3 \sqrt{\frac{\ln(n(1-r) + 1/\delta)\, p}{n(1-r)}}$.

²Uluru is a rock that is shaped like a quadratic and is solid. So, if your estimate of the quadratic term is as solid as Uluru, you do not need more data to make it more accurate.

Remark 1 The probability $\frac{n}{e^p}$ becomes very small for large p, so it can be ignored, and the ln terms can be viewed as constants.
Let us consider the case n_subs ≪ n_rem, since only in this situation does subsampling reduce the computational cost significantly. Keeping only the dominating terms, the results of the above three theorems can be summarized as follows: with some fixed failure probability, the error of the FS algorithm is $O(\sigma\sqrt{p/(nr)})$, the error of the CovS algorithm is $O(\sqrt{p/(nr)}\,\|w\| + \sigma\sqrt{p/n})$, and the error of the Uluru algorithm is $O(\sigma p/(nr) + \sigma\sqrt{p/n})$.

4.2 Sub-Gaussian Random Design Setting

4.2.1 Definitions

The following two definitions from [10] characterize what it means to be sub-Gaussian.

Definition 1 A random variable X is sub-Gaussian with sub-Gaussian norm ‖X‖_ψ2 if and only if

$(\mathbb{E}|X|^p)^{1/p} \le \|X\|_{\psi_2} \sqrt{p}$ for all p ≥ 1,  (3)

where ‖X‖_ψ2 is the minimal constant for which the condition holds.

Definition 2 A random vector X ∈ Rⁿ is sub-Gaussian if the one-dimensional marginals x⊤X are sub-Gaussian for all x ∈ Rⁿ. The sub-Gaussian norm of the random vector X is defined as

$\|X\|_{\psi_2} = \sup_{\|x\|_2 = 1} \|x^\top X\|_{\psi_2}$.  (4)

Remark 2 Since the sum of two sub-Gaussian variables is sub-Gaussian, a random vector X = (X1, ..., Xp)⊤ is a sub-Gaussian random vector when its components X1, ..., Xp are sub-Gaussian variables.

4.2.2 Sub-Gaussian Bounds

Under the assumption that the rows of the design matrix X are i.i.d. draws from a p-dimensional sub-Gaussian random vector X with Σ_XX = I_p, we have the following bounds (please see the supplementary material for all proofs):

Theorem 4 If we use the FS algorithm, then with failure probability at most δ,

$\|w_0 - \hat{w}_{FS}\| \le C\sigma \sqrt{\frac{p \ln(2p/\delta)}{nr}}$.  (5)

Theorem 5 If we use the CovS algorithm, then with failure probability at most δ,

$\|w_0 - \hat{w}_{CovS}\| \le (1-r)\left( C_1 \sqrt{\frac{p}{nr}} + C_2 \sqrt{\frac{p}{n(1-r)}} \right)\|w_0\| + C_3 \sigma \sqrt{\frac{p \ln(2(p+2)/\delta)}{n}}$.  (6)
Consider the case r ≪1, since this is the only case where subsampling reduces computational cost significantly. Keeping only dominating terms, the result of the above three theorems can be summarized as: With failure probability less than some fixed number, the error of the FS algorithm is O(σ p p rn), the error of the CovS algorithm is O( p p rn∥w∥+ σ p p n) and the error of the Uluru algorithm is O(σ p rn + σ p p n). These errors are exactly the same as in the fixed design case. 4.3 Discussion We can make a few salient observations from the error expressions for the algorithms presented in Remarks 1 & 3. The second term for the error of the Uluru algorithm does not contain r at all. If it is the dominating term, which is the case if r > O( p p/n) (7) then the error of Uluru is approximately O(σ p p n), which is completely independent of r. Thus, if r is not too small (i.e., when Eq. 7 holds), the error bound for Uluru is not a function of r. In other words, when Eq. 7 holds, we do not increase the error by using less data in estimating the covariance matrix in Uluru. FS Algorithm does not have this property since its error is proportional to 1 √r. Similarly, for the CovS algorithm, when r > O(∥w0∥2 σ2 ) (8) the second term dominates and we can conclude that the error does not change with r. However, Eq. 8 depends on how large the standard deviation σ of the noise is. We can assume ∥w0∥2 = O(p) since it is p dimensional. Hence if σ ≤O(√p), Eq. 8 fails since it implies r > O(1) and the error bound of CovS algorithm increases with r. To sum this up, Uluru has the nice property that its error bound does not increase as r gets smaller as long as r is greater than a threshold. This threshold is completely independent of how noisy the data is and only depends on the characteristics of the design/data matrix (n, p). 4.4 Run Time complexity Table 1 summarizes the run time complexity and theoretically predicted error bounds for all the methods. 
We use these theoretical run times (FLOPS) in our plots.

Table 1: Runtime complexity. n_subs is the number of observations in the subsample, n is the number of observations, and p is the number of predictors. * indicates that no uniform error bounds are known.

Method     | Running time O(FLOPS)              | Error bound
OLS        | n p²                               | O(√(p/n))
FS         | n_subs p²                          | O(√(p/n_subs))
CovS       | n_subs p² + n p                    | *
Uluru      | n_subs p² + n p                    | O(√(p/n))
SRHT-FS    | max(n p log(p), n_subs p²)         | O(√(p²/n))
SRHT-CovS  | max(n p log(p), n_subs p² + n p)   | *
SRHT-Uluru | max(n p log(p), n_subs p² + n p)   | O(√(p/n))

5 Experiments

In this section we elucidate the relative merits of our methods by comparing their empirical performance on both synthetic and real-world datasets.

5.1 Methodology

We compare our algorithms by allowing each about O(np) CPU time (ignoring log factors), which is of the same order as the time it takes to read the data. Our target accuracy is $\sqrt{p/n}$, namely what a full least squares algorithm would achieve. We assume n ≫ p. The subsample size n_subs for FS should be O(n/p) to keep the CPU time O(np), which leads to an accuracy of $\sqrt{p^2/n}$. For the CovS method, the accuracy depends on how noisy the data are (i.e., how big σ is): when σ is large, CovS performs as well as $\sqrt{p/n}$, the same as full least squares; when σ is small, it performs as poorly as $\sqrt{p^2/n}$. For Uluru, keeping the CPU time O(np) requires n_subs = O(n/p), or equivalently r = O(1/p). As stated in the discussion after the theorems, when $r \ge O(\sqrt{p/n})$ (in this setup we want r = O(1/p), which requires n ≥ O(p³)), Uluru has error bound $O(\sqrt{p/n})$ regardless of the signal-to-noise ratio of the problem.

5.2 Synthetic Datasets

We generated synthetic data by distributing the signal uniformly across all p singular values, picking the p singular values to be λ_i = 1/i², i = 1, ..., p, and further varying the amount of signal.
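A generator in the spirit of this description can be sketched as follows. Whether the λ_i act as singular values of X or as column variances, and exactly how the signal is spread, are not fully specified in this excerpt, so the choices below (column variances, equal-magnitude coefficients, unit noise) are assumptions:

```python
import numpy as np

def make_synthetic(n, p, signal, rng=None):
    """Synthetic regression data of the kind used in Section 5.2 (sketch)."""
    rng = np.random.default_rng(rng)
    # Covariate scales decaying as 1/i^2 (realized here as column variances).
    lam = 1.0 / np.arange(1, p + 1) ** 2
    X = rng.standard_normal((n, p)) * np.sqrt(lam)
    # Spread the signal uniformly across all p directions: ||w0|| = signal.
    w0 = np.full(p, signal / np.sqrt(p))
    Y = X @ w0 + rng.standard_normal(n)     # unit-variance noise
    return X, Y, w0
```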
5.3 Real World Datasets

We also compared the performance of the algorithms on two UCI datasets³, CPUSMALL (n=8192, p=12) and CADATA (n=20640, p=8), and on the PERMA sentiment analysis dataset described in [11] (n=1505, p=30), which uses LR-MVL word embeddings [12] as features.⁴

5.4 Results

The results for synthetic data are shown in Figure 1 (top row) and for the real world datasets in Figure 1 (bottom row). To generate the plots, we vary the amount of data used in the subsampling, nsubs, from 1.1p to n. For FS, this simply means using a fraction of the data; for CovS and Uluru, only the data for the covariance matrix is subsampled. We report the Mean Squared Error (MSE), which in the case of squared loss is the same as the risk, as described in Section 2. For the real datasets we do not know the true population parameter w0, so we replace it with its consistent estimator wMLE, computed using standard OLS on the entire dataset. The horizontal gray line in the figures marks the overfitting point: the error generated by the estimate ŵ = 0 (a vector of all zeros). The vertical gray line marks the n·p point; anything faster than that must look at only some of the data. Looking at the results, we can see two trends for the synthetic data. Firstly, our algorithms without preconditioning are much faster than their counterparts with preconditioning and give similar accuracies. Secondly, as expected, CovS performs best in the high noise setting, being slightly better than Uluru, while Uluru is significantly better in the low noise setting. For the real world datasets, Uluru is almost always better than the other algorithms, both with and without preconditioning; as before, the preconditioned alternatives are slower.

³http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html
⁴We also compared our approaches against coordinate ascent methods from [13] and our algorithms outperform them.
Due to paucity of space we relegated that comparison to the supplementary material.

Figure 1: Results for synthetic datasets (n=4096, p=8) in the top row and for the real datasets (PERMA, CPUSMALL, CADATA, left to right) in the bottom row. All panels plot MSE/Risk against #FLOPS/(n·p). The three columns in the top row have different amounts of signal: 2, √(n/p) and n/p respectively. In all settings, we varied the amount of subsampling from 1.1p to n in multiples of 2. Color scheme: Green-FS, Blue-CovS, Red-Uluru. Solid lines indicate no preconditioning (i.e., random design) and dashed lines indicate fixed design with Randomized Hadamard preconditioning. The FLOPS reported are the theoretical values (see Supp. material); the actual values were noisy due to varying load settings on the CPUs.

6 Related Work

The work that comes closest to ours is the set of approaches which precondition the matrix by either the Subsampled Randomized Hadamard Transform (SRHT) [1, 4] or the Subsampled Randomized Fourier Transform (SRFT) [2, 3] before subsampling uniformly from the resulting transformed matrix. However, this line of work differs from ours in several ways. Their analysis is carried out in a mathematical setup, i.e., solving an overdetermined linear system (ŵ = arg min_{w∈R^p} ∥Xw − Y∥²), while we work in a statistical setup (a regression problem Y = Xβ + ϵ), which leads to a different error analysis. Our FS algorithm is essentially the same as the subsampling algorithm proposed by [4].
However, our theoretical analysis of it is novel, and furthermore they only consider it in the fixed design setting with Hadamard preconditioning. CovS and Uluru are entirely new algorithms and, as we have seen, differ from FS in a key sense: CovS and Uluru make use of all the data, whereas FS uses only a small proportion of it.

7 Conclusion

In this paper we proposed three subsampling methods for faster least squares regression. All three run in O(size of input) = O(np). Our best method, Uluru, has an error bound which is independent of the amount of subsampling as long as it is above a threshold. Furthermore, we argued that for problems arising from linear regression, the Randomized Hadamard transformation is often not needed. In linear regression, observations are generally i.i.d. If one further assumes that they are sub-Gaussian (perhaps as a result of a preprocessing step, or simply because they are 0/1 or Gaussian), then subsampling methods without a Randomized Hadamard transformation suffice. As shown in our experiments, dropping the Randomized Hadamard transformation significantly speeds up the algorithms and, in i.i.d. sub-Gaussian settings, does so without loss of accuracy.

References

[1] Boutsidis, C., Gittens, A.: Improved matrix algorithms via the subsampled randomized Hadamard transform. CoRR abs/1204.0062 (2012)
[2] Tygert, M.: A fast algorithm for computing minimal-norm solutions to underdetermined systems of linear equations. CoRR abs/0905.4745 (2009)
[3] Rokhlin, V., Tygert, M.: A fast randomized algorithm for overdetermined linear least-squares regression. Proceedings of the National Academy of Sciences 105(36) (September 2008) 13212–13217
[4] Drineas, P., Mahoney, M.W., Muthukrishnan, S., Sarlós, T.: Faster least squares approximation. CoRR abs/0710.1435 (2007)
[5] Mahoney, M.W.: Randomized algorithms for matrices and data.
(April 2011)
[6] Ailon, N., Chazelle, B.: Approximate nearest neighbors and the fast Johnson–Lindenstrauss transform. In: STOC. (2006) 557–563
[7] Avron, H., Maymounkov, P., Toledo, S.: Blendenpik: Supercharging LAPACK's least-squares solver. SIAM J. Sci. Comput. 32(3) (April 2010) 1217–1236
[8] Vershynin, R.: How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability 25(3) (September 2012) 655–686
[9] Golub, G.H., Van Loan, C.F.: Matrix Computations (Johns Hopkins Studies in Mathematical Sciences). 3rd edn. The Johns Hopkins University Press (October 1996)
[10] Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. CoRR abs/1011.3027 (2010)
[11] Dhillon, P.S., Rodu, J., Foster, D., Ungar, L.: Two step CCA: A new spectral method for estimating vector models of words. In: Proceedings of the 29th International Conference on Machine Learning. ICML'12 (2012)
[12] Dhillon, P.S., Foster, D., Ungar, L.: Multi-view learning of word embeddings via CCA. In: Advances in Neural Information Processing Systems (NIPS). Volume 24. (2011)
[13] Shalev-Shwartz, S., Zhang, T.: Stochastic dual coordinate ascent methods for regularized loss minimization. CoRR abs/1209.1873 (2012)
Sequential Transfer in Multi-armed Bandit with Finite Set of Models Mohammad Gheshlaghi Azar ⇤ School of Computer Science CMU Alessandro Lazaric † INRIA Lille - Nord Europe Team SequeL Emma Brunskill ⇤ School of Computer Science CMU Abstract Learning from prior tasks and transferring that experience to improve future performance is critical for building lifelong learning agents. Although results in supervised and reinforcement learning show that transfer may significantly improve the learning performance, most of the literature on transfer is focused on batch learning tasks. In this paper we study the problem of sequential transfer in online learning, notably in the multi–armed bandit framework, where the objective is to minimize the total regret over a sequence of tasks by transferring knowledge from prior tasks. We introduce a novel bandit algorithm based on a method-of-moments approach for estimating the possible tasks and derive regret bounds for it. 1 Introduction Learning from prior tasks and transferring that experience to improve future performance is a key aspect of intelligence, and is critical for building lifelong learning agents. Recently, multi-task and transfer learning received much attention in the supervised and reinforcement learning (RL) setting with both empirical and theoretical encouraging results (see recent surveys by Pan and Yang, 2010; Lazaric, 2011). Most of these works focused on scenarios where the tasks are batch learning problems, in which a training set is directly provided to the learner. On the other hand, the online learning setting (Cesa-Bianchi and Lugosi, 2006), where the learner is presented with samples in a sequential fashion, has been rarely considered (see Mann and Choe (2012); Taylor (2009) for examples in RL and Sec. E of Azar et al. (2013) for a discussion on related settings). 
The multi–armed bandit (MAB) (Robbins, 1952) is a simple yet powerful framework formalizing the online learning with partial feedback problem, which encompasses a large number of applications, such as clinical trials, web advertisements and adaptive routing. In this paper we take a step towards understanding and providing formal bounds on transfer in stochastic MABs. We focus on a sequential transfer scenario where an (online) learner is acting in a series of tasks drawn from a stationary distribution over a finite set of MABs. The learning problem, within each task, can be seen as a standard MAB problem with a fixed number of steps. Prior to learning, the model parameters of each bandit problem are not known to the learner, nor does it know the probability distribution over the bandit problems. Also, we assume that the learner is not provided with the identity of the tasks throughout the learning. To act efficiently in this setting, it is crucial to define a mechanism for transferring knowledge across tasks. In fact, the learner may encounter the same bandit problem over and over throughout the learning, and an efficient algorithm should be able to leverage the knowledge obtained in previous tasks when it is presented with the same problem again. To address this problem one can transfer the estimates of all the possible models from prior tasks to the current one. Once these models are accurately estimated, we show that an extension of the UCB algorithm (Auer et al., 2002) is able to efficiently exploit this prior knowledge and reduce the regret through tasks (Sec. 3).

∗{mazar,ebrun}@cs.cmu.edu  †alessandro.lazaric@inria.fr

The main contributions of this paper are two-fold: (i) we introduce the tUCB algorithm, which transfers the model estimates across the tasks and uses this knowledge to achieve a better performance than UCB.
We also prove that the new algorithm is guaranteed to perform as well as UCB in early episodes, thus avoiding any negative transfer effect, and then to approach the performance of the ideal case when the models are all known in advance (Sec. 4.4). (ii) To estimate the models we rely on a new variant of the method of moments, the robust tensor power method (RTP) (Anandkumar et al., 2013, 2012b), and extend it to the multi-task bandit setting¹: we prove that RTP provides a consistent estimate of the means of all arms (for all models) as long as they are pulled at least three times per task, and we prove sample complexity bounds for it (Sec. 4.2). Finally, we report some preliminary results on synthetic data confirming the theoretical findings (Sec. 5). An extended version of this paper containing proofs and additional comments is available in (Azar et al., 2013).

2 Preliminaries

We consider a stochastic MAB problem defined by a set of arms A = {1, . . . , K}, where each arm i ∈ A is characterized by a distribution νi and the samples (rewards) observed from each arm are independent and identically distributed. We focus on the setting where there exists a set of models Θ = {θ = (ν1, . . . , νK)}, |Θ| = m, which contains all the possible bandit problems. We denote the mean of an arm i, the best arm, and the best value of a model θ ∈ Θ respectively by µi(θ), i∗(θ), µ∗(θ). We define the arm gap of an arm i for a model θ as ∆i(θ) = µ∗(θ) − µi(θ), while the model gap for an arm i between two models θ and θ′ is defined as Γi(θ, θ′) = |µi(θ) − µi(θ′)|. We also assume that arm rewards are bounded in [0, 1]. We consider the sequential transfer setting where at each episode j the learner interacts with a task θ̄j, drawn from a distribution ρ over Θ, for n steps.
The objective is to minimize the (pseudo-)regret RJ over J episodes, measured as the difference between the rewards obtained by pulling i∗(θ̄j) and those achieved by the learner:

RJ = Σ_{j=1}^{J} Rn^j = Σ_{j=1}^{J} Σ_{i≠i∗} Ti,n^j ∆i(θ̄j),    (1)

where Ti,n^j is the number of pulls to arm i after n steps of episode j. We also introduce some tensor notation. Let X ∈ R^K be a random realization of the rewards of all arms from a random model. All the realizations are i.i.d. conditional on a model θ̄, and E[X | θ = θ̄] = µ(θ̄), where the i-th component of µ(θ) ∈ R^K is [µ(θ)]i = µi(θ). Given realizations X1, X2 and X3, we define the second moment matrix M2 = E[X1 ⊗ X2], such that [M2]i,j = E[X1_i X2_j], and the third moment tensor M3 = E[X1 ⊗ X2 ⊗ X3], such that [M3]i,j,l = E[X1_i X2_j X3_l]. Since the realizations are conditionally independent, we have, for every θ ∈ Θ, E[X1 ⊗ X2 | θ] = E[X1 | θ] ⊗ E[X2 | θ] = µ(θ) ⊗ µ(θ), and this allows us to rewrite the second and third moments as M2 = Σθ ρ(θ) µ(θ)^⊗2 and M3 = Σθ ρ(θ) µ(θ)^⊗3, where v^⊗p = v ⊗ v ⊗ · · · ⊗ v is the p-th tensor power. Let A be a 3rd-order member of the tensor product of the Euclidean space R^K (as M3); then, for a set of three matrices {Vi ∈ R^{K×m}}, 1 ≤ i ≤ 3, the (i1, i2, i3) entry in the 3-way array representation of A(V1, V2, V3) ∈ R^{m×m×m} is [A(V1, V2, V3)]_{i1,i2,i3} := Σ_{1 ≤ j1, j2, j3 ≤ K} A_{j1,j2,j3} [V1]_{j1,i1} [V2]_{j2,i2} [V3]_{j3,i3}. We also use different norms: the Euclidean norm ‖·‖, the Frobenius norm ‖·‖F, and the matrix max-norm ‖A‖max = max_{i,j} |[A]_{i,j}|.

3 Multi-arm Bandit with Finite Models

Require: Set of models Θ, number of steps n
for t = 1, . . . , n do
  Build Θt = {θ : ∀i, |µi(θ) − µ̂i,t| ≤ εi,t}
  Select θt = arg max_{θ∈Θt} µ∗(θ)
  Pull arm It = i∗(θt)
  Observe sample xIt and update
end for
Figure 1: The mUCB algorithm.

Before considering the transfer problem, we show that a simple variation to UCB allows us to effectively exploit the knowledge of Θ and obtain a significant reduction in the regret.
The mUCB (model-UCB) algorithm in Fig. 1 takes as input a set of models Θ including the current (unknown) model θ̄. At each step t, the algorithm computes a subset Θt ⊆ Θ containing only the models whose means µi(θ) are compatible, up to their uncertainty εi,t (see Eq. 2 for an explicit definition of this term), with the current estimates µ̂i,t of the means µi(θ̄) of the current model, obtained by averaging Ti,t pulls. Notice that it is enough that one arm does not satisfy the compatibility condition to discard a model θ. Among all the models in Θt, mUCB first selects the model with the largest optimal value and then pulls its corresponding optimal arm. This choice is coherent with the optimism-in-the-face-of-uncertainty principle used in UCB-based algorithms, since mUCB always pulls the optimal arm corresponding to the optimistic model compatible with the current estimates µ̂i,t. We show that mUCB incurs a regret which is never worse than UCB's and is often significantly smaller. We denote the set of arms which are optimal for at least one model in a set Θ′ by A∗(Θ′) = {i ∈ A : ∃θ ∈ Θ′ : i∗(θ) = i}. The set of models for which the arms in A′ are optimal is Θ(A′) = {θ ∈ Θ : ∃i ∈ A′ : i∗(θ) = i}. The set of optimistic models for a given model θ̄ is Θ+ = {θ ∈ Θ : µ∗(θ) ≥ µ∗(θ̄)}, and their corresponding optimal arms are A+ = A∗(Θ+). The following theorem bounds the expected regret (similar bounds hold in high probability). The lemmas and proofs (using standard tools from the bandit literature) are available in Sec. B of Azar et al. (2013).

¹Notice that estimating the models involves solving a latent variable model estimation problem, for which RTP is the state-of-the-art.

Theorem 1.
If mUCB is run with δ = 1/n, a set of m models Θ such that θ̄ ∈ Θ, and

εi,t = √(log(m n²/δ) / (2 Ti,t−1)),    (2)

where Ti,t−1 is the number of pulls to arm i at the beginning of step t, then its expected regret is

E[Rn] ≤ K + Σ_{i∈A+} 2 ∆i(θ̄) log(m n³) / min_{θ∈Θ+,i} Γi(θ, θ̄)² ≤ K + Σ_{i∈A+} 2 log(m n³) / min_{θ∈Θ+,i} Γi(θ, θ̄),    (3)

where A+ = A∗(Θ+) is the set of arms which are optimal for at least one optimistic model and Θ+,i = {θ ∈ Θ+ : i∗(θ) = i} is the set of optimistic models for which i is the optimal arm.

Remark (comparison to UCB). The UCB algorithm incurs a regret

E[Rn(UCB)] ≤ O(Σ_{i∈A} log n / ∆i(θ̄)) ≤ O(K log n / min_i ∆i(θ̄)).

We see that mUCB displays two major improvements. The regret in Eq. 3 can be written as

E[Rn(mUCB)] ≤ O(Σ_{i∈A+} log n / min_{θ∈Θ+,i} Γi(θ, θ̄)) ≤ O(|A+| log n / (min_i min_{θ∈Θ+,i} Γi(θ, θ̄))).

This result suggests that mUCB tends to discard all the models in Θ+ from the most optimistic down to the actual model θ̄, which, with high probability, is never discarded. As a result, even if other models are still in Θt, the optimal arm of θ̄ is pulled until the end. This significantly reduces the set of arms which are actually pulled by mUCB, and the previous bound only depends on the number of arms in A+, with |A+| ≤ |A∗(Θ)| ≤ K. Furthermore, for all arms i, the minimum gap min_{θ∈Θ+,i} Γi(θ, θ̄) is guaranteed to be larger than the arm gap ∆i(θ̄) (see Lem. 4 in Sec. B of Azar et al. (2013)), thus further improving the performance of mUCB w.r.t. UCB.

4 Online Transfer with Unknown Models

We now consider the case when the set of models is unknown and the regret is cumulated over multiple tasks drawn from ρ (Eq. 1). We introduce tUCB (transfer-UCB), which transfers estimates of Θ whose accuracy is improved through episodes using a method-of-moments approach.

4.1 The transfer-UCB Bandit Algorithm

Fig. 2 outlines the structure of our online transfer bandit algorithm tUCB (transfer-UCB).
The algorithm uses two sub-algorithms, the bandit algorithm umUCB (uncertain model-UCB), whose objective is to minimize the regret at each episode, and RTP (robust tensor power method) which at each episode j computes an estimate {ˆµj i(✓)} of the arm means of all the models. The bandit algorithm umUCB in Fig. 3 is an extension of the mUCB algorithm. It first computes a set of models ⇥j t whose means ˆµi(✓) are compatible with the current estimates ˆµi,t. However, unlike the case where the exact models are available, here the models themselves are estimated and the uncertainty "j in their means (provided as input to umUCB) is taken into account in the definition of ⇥j t. Once 3 Require: number of arms K, number of models m, constant C(✓). Initialize estimated models ⇥1 = {ˆµ1 i (✓)}i,✓, samples R 2 RJ⇥K⇥n for j = 1, 2, . . . , J do Run Rj = umUCB(⇥j, n) Run ⇥j+1 = RTP(R, m, K, j, δ) end for Figure 2: The tUCB algorithm. Require: set of models ⇥j, num. steps n Pull each arm three times for t = 3K + 1, . . . , n do Build ⇥j t = {✓: 8i, |ˆµj i(✓) −ˆµi,t| "i,t + "j} Compute Bj t (i; ✓) = min ! (ˆµj i(✓) + "j), (ˆµi,t + "i,t) Compute ✓j t = arg max✓2⇥j t maxi Bj t (i; ✓) Pull arm It = arg maxi Bj t (i; ✓j t) Observe sample R(It, Ti,t) = xIt and update end for return Samples R Figure 3: The umUCB algorithm. Require: samples R 2 Rj⇥n, number of models m and arms K, episode j Estimate the second and third moment c M2 and c M3 using the reward samples from R (Eq. 4) Compute bD 2 Rm⇥m and bU 2 RK⇥m (m largest eigenvalues and eigenvectors of c M2 resp.) Compute the whitening mapping c W = bU bD−1/2 and the tensor bT = c M3(c W, c W, c W) Plug bT in Alg. 1 of Anandkumar et al. (2012b) and compute eigen-vectors/values {bv(✓)}, {bλ(✓)} Compute bµj(✓) = bλ(✓)(c W T)+bv(✓) for all ✓2 ⇥ return ⇥j+1 = {bµj(✓) : ✓2 ⇥} Figure 4: The robust tensor power (RTP) method (Anandkumar et al., 2012b). 
the active set is computed, the algorithm computes an upper-confidence bound on the value of each arm i for each model ✓and returns the best arm for the most optimistic model. Unlike in mUCB, due to the uncertainty over the model estimates, a model ✓might have more than one optimal arm, and an upper-confidence bound on the mean of the arms ˆµi(✓) + "j is used together with the upperconfidence bound ˆµi,t + "i,t, which is directly derived from the samples observed so far from arm i. This guarantees that the B-values are always consistent with the samples generated from the actual model ¯✓j. Once umUCB terminates, RTP (Fig. 4) updates the estimates of the model means bµj(✓) = {ˆµj i(✓)}i 2 RK using the samples obtained from each arm i. At the beginning of each task umUCB pulls all the arms 3 times, since RTP needs at least 3 samples from each arm to accurately estimate the 2nd and 3rd moments (Anandkumar et al., 2012b). More precisely, RTP uses all the reward samples generated up to episode j to estimate the 2nd and 3rd moments (see Sec. 2) as c M2 = j−1 Xj l=1 µ1l ⌦µ2l, and c M3 = j−1 Xj l=1 µ1l ⌦µ2l ⌦µ3l, (4) where the vectors µ1l, µ2l, µ3l 2 RK are obtained by dividing the T l i,n samples observed from arm i in episode l in three batches and taking their average (e.g., [µ1l]i is the average of the first T l i,n/3 samples).2 Since µ1l, µ2l, µ3l are independent estimates of µ(¯✓l), c M2 and c M3 are consistent estimates of the second and third moments M2 and M3. RTP relies on the fact that the model means µ(✓) can be recovered from the spectral decomposition of the symmetric tensor T = M3(W, W, W), where W is a whitening matrix for M2, i.e., M2(W, W) = Im⇥m (see Sec. 2 for the definition of the mapping A(V1, V2, V3)). Anandkumar et al. (2012b) (Thm. 
4.3) have shown that, under a mild assumption (see Assumption 1 below), the model means {µ(θ)} can be obtained as µ(θ) = λ(θ)Bv(θ), where (v(θ), λ(θ)) is an eigenvector/eigenvalue pair of the tensor T and B := (Wᵀ)⁺. Thus the RTP algorithm estimates the eigenvectors v̂(θ) and the eigenvalues λ̂(θ) of the m × m × m tensor T̂ := M̂3(Ŵ, Ŵ, Ŵ).³ Once v̂(θ) and λ̂(θ) are computed, the estimated mean vector µ̂j(θ) is obtained by the inverse transformation µ̂j(θ) = λ̂(θ)B̂v̂(θ), where B̂ is the pseudo-inverse of Ŵᵀ (for a detailed description of the RTP algorithm see Anandkumar et al., 2012b).

²Notice that (1/3)([µ1l]i + [µ2l]i + [µ3l]i) = µ̂i,n^l, the empirical mean of arm i at the end of episode l.
³The matrix Ŵ ∈ R^{K×m} is such that M̂2(Ŵ, Ŵ) = I_{m×m}, i.e., Ŵ is a whitening matrix of M̂2. In general Ŵ is not unique. Here we choose Ŵ = ÛD̂^{−1/2}, where D̂ ∈ R^{m×m} is the diagonal matrix of the m largest eigenvalues of M̂2 and Û ∈ R^{K×m} has the corresponding eigenvectors as its columns.

4.2 Sample Complexity of the Robust Tensor Power Method

umUCB requires as input εj, i.e., the uncertainty of the model estimates. Therefore we need sample complexity bounds on the accuracy of the {µ̂i(θ)} computed by RTP. The performance of RTP is directly affected by the error of the estimates M̂2 and M̂3 w.r.t. the true moments. In Thm. 2 we prove that, as the number of tasks j grows, this error decreases rapidly at the rate √(1/j). This result provides an upper bound on the error εj needed for building the confidence intervals in umUCB. The following definition and assumption are required for our result.

Definition 1. Let Σ_M2 = {σ1, σ2, . . . , σm} be the set of the m largest eigenvalues of the matrix M2. Define σmin := min_{σ∈Σ_M2} σ, σmax := max_{σ∈Σ_M2} σ and λmax := max_θ λ(θ). Define the minimum gap between the distinct eigenvalues of M2 as Γσ := min_{σi≠σl} |σi − σl|.

Assumption 1. The mean vectors {µ(θ)}_θ are linearly independent and ρ(θ) > 0 for all θ ∈ Θ.
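The whitening-plus-tensor-decomposition recovery just described can be checked numerically on exact population moments. The following small NumPy sketch (the toy mixture is ours, chosen to satisfy Assumption 1) builds M2 and M3, whitens with W = U D^(−1/2), and runs plain tensor power iterations with deflation, a simple stand-in for the robust variant of Anandkumar et al., recovering the means via µ(θ) = λ(θ)(Wᵀ)⁺v(θ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mixture: linearly independent means, strictly positive weights.
K, m = 4, 2
mu = np.array([[0.9, 0.1, 0.5, 0.3],
               [0.2, 0.8, 0.4, 0.6]])        # mu[theta] in R^K
rho = np.array([0.4, 0.6])

# Population moments: M2 = sum_theta rho mu mu^T, M3 = sum_theta rho mu^(x3).
M2 = np.einsum('t,ti,tj->ij', rho, mu, mu)
M3 = np.einsum('t,ti,tj,tl->ijl', rho, mu, mu, mu)

# Whitening from the top-m eigenpairs of M2, so that W^T M2 W = I_m.
evals, evecs = np.linalg.eigh(M2)
D, U = evals[-m:], evecs[:, -m:]
W = U / np.sqrt(D)

# T = M3(W, W, W) has orthonormal components v(theta) = sqrt(rho) W^T mu(theta)
# with eigenvalues 1/sqrt(rho(theta)).
T = np.einsum('ijl,ia,jb,lc->abc', M3, W, W, W)

recovered = []
for _ in range(m):
    v = rng.standard_normal(m)
    v /= np.linalg.norm(v)
    for _ in range(100):                      # power iterations
        v = np.einsum('ijl,j,l->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijl,i,j,l->', T, v, v, v)
    recovered.append(lam * np.linalg.pinv(W.T) @ v)   # mu = lam (W^T)^+ v
    T = T - lam * np.einsum('i,j,l->ijl', v, v, v)    # deflate
recovered = np.array(recovered)
```

Up to a permutation of the components, `recovered` matches `mu` to machine precision, which is exactly the identifiability statement above (here with exact moments rather than the estimates M̂2, M̂3).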
We now state our main result, which takes the form of a high-probability bound on the estimation error of the mean reward vector of every model θ ∈ Θ.

Theorem 2. Pick δ ∈ (0, 1). Let C(Θ) := C3 λmax (√σmax/σmin³)(σmax/Γσ + 1/σmin + 1/σmax), where C3 > 0 is a universal constant. Then, under Assumption 1, there exist a constant C4 > 0 and a permutation π on Θ such that for all θ ∈ Θ, with probability 1 − δ,

‖µ(θ) − µ̂j(π(θ))‖ ≤ εj := C(Θ) K^{2.5} m² √(log(K/δ)/j),   after   j ≥ C4 m⁵ K⁶ log(K/δ) / (min(σmin, Γσ)² σmin³ λmin²).    (5)

Remark (computation of C(Θ)). As illustrated in Fig. 3, umUCB relies on the estimates µ̂j(θ) and on their accuracy εj. Although the bound reported in Thm. 2 provides an upper confidence bound on the error of the estimates, it contains terms which are not computable in general (e.g., σmin). In practice, C(Θ) should be considered as a parameter of the algorithm. This is not dissimilar from the parameter usually introduced in the definition of εi,t in front of the square-root term in UCB.

4.3 Regret Analysis of umUCB

We now analyze the regret of umUCB when an estimated set of models Θj is provided as input. At episode j, for each model θ we define the set of non-dominated arms (i.e., potentially optimal arms) as A∗^j(θ) = {i ∈ A : ∄ i′, µ̂i^j(θ) + εj < µ̂i′^j(θ) − εj}. Among the non-dominated arms, when the actual model is θ̄j, the set of optimistic arms is A+^j(θ; θ̄j) = {i ∈ A∗^j(θ) : µ̂i^j(θ) + εj ≥ µ∗(θ̄j)}. As a result, the set of optimistic models is Θ+^j(θ̄j) = {θ ∈ Θ : A+^j(θ; θ̄j) ≠ ∅}. In some cases, because of the uncertainty in the model estimates, unlike in mUCB, not all the models θ ≠ θ̄j can be discarded, not even at the end of a very long episode. Among the optimistic models, the set of models that cannot be discarded is defined as Θ̃+^j(θ̄j) = {θ ∈ Θ+^j(θ̄j) : ∀i ∈ A+^j(θ; θ̄j), |µ̂i^j(θ) − µi(θ̄j)| ≤ εj}. Finally, when we want to apply the previous definitions to a set of models Θ′ instead of a single model we have, e.g., A∗^j(Θ′; θ̄j) = ⋃_{θ∈Θ′} A∗^j(θ; θ̄j).
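These set definitions translate almost literally into code; a small sketch (function and variable names are ours):

```python
import numpy as np

def nondominated(mu_hat, eps):
    # A_*^j(theta): arms i for which no i' has mu_hat[i'] - eps > mu_hat[i] + eps,
    # i.e. mu_hat[i] + eps >= max(mu_hat) - eps.
    return {i for i, v in enumerate(mu_hat) if v + eps >= mu_hat.max() - eps}

def optimistic(mu_hat, eps, mu_star_true):
    # A_+^j(theta; true model): non-dominated arms whose upper bound
    # mu_hat[i] + eps still reaches the optimal value of the true model.
    return {i for i in nondominated(mu_hat, eps) if mu_hat[i] + eps >= mu_star_true}

mu_hat = np.array([0.9, 0.85, 0.3])       # estimated means of one model theta
print(nondominated(mu_hat, 0.1))           # {0, 1}: arm 2 is dominated
print(optimistic(mu_hat, 0.1, 0.95))       # {0, 1}: both upper bounds reach 0.95
```

A model θ is then optimistic (θ ∈ Θ+^j(θ̄j)) exactly when its optimistic arm set is non-empty; with eps = 0 the non-dominated set collapses to the single best estimated arm, recovering the exact-model case of mUCB.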
The proof of the following results are available in Sec. D of Azar et al. (2013), here we only report the number of pulls, and the corresponding regret bound. Corollary 1. If at episode j umUCB is run with "i,t as in Eq. 2 and "j as in Eq. 5 with a parameter δ0 = δ/2K, then for any arm i 2 A, i 6= i⇤(¯✓j) is pulled Ti,n times such that 8 > > > > < > > > > : Ti,n min ⇢2 log * 2mKn2/δ + ∆i(¯✓j)2 , log * 2mKn2/δ + 2 min✓2⇥j i,+(¯✓j) bΓi(✓; ¯✓j)2 , + 1 if i 2 Aj 1 Ti,n 2 log * 2mKn2/δ + /(∆i(¯✓j)2) + 1 if i 2 Aj 2 Ti,n = 0 otherwise w.p. 1 −δ, where ⇥j i,+(¯✓j) = {✓2 ⇥j +(¯✓j) : i 2 A+(✓; ¯✓j)} is the set of models for which i is among their optimistic non-dominated arms, bΓi(✓; ¯✓j) = Γi(✓, ¯✓j)/2−"j, Aj 1 = Aj +(⇥j +(¯✓j); ¯✓j)− Aj +(e⇥j +(¯✓j); ¯✓j) (i.e., set of arms only proposed by models that can be discarded), and Aj 2 = Aj +(e⇥j +(¯✓j); ¯✓j) (i.e., set of arms only proposed by models that cannot be discarded). 5 The previous corollary states that arms which cannot be optimal for any optimistic model (i.e., the optimistic non-dominated arms) are never pulled by umUCB, which focuses only on arms in i 2 Aj +(⇥j +(¯✓j); ¯✓j). Among these arms, those that may help to remove a model from the active set (i.e., i 2 Aj 1) are potentially pulled less than UCB, while the remaining arms, which are optimal for the models that cannot be discarded (i.e., i 2 Aj 2), are simply pulled according to a UCB strategy. Similar to mUCB, umUCB first pulls the arms that are more optimistic until either the active set ⇥j t changes or they are no longer optimistic (because of the evidence from the actual samples). We are now ready to derive the per-episode regret of umUCB. Theorem 3. If umUCB is run for n steps on the set of models ⇥j estimated by RTP after j episodes with δ = 1/n, and the actual model is ¯✓j, then its expected regret (w.r.t. 
the random realization in episode j and conditional on ¯✓j) is E[Rj n] K+ X i2Aj 1 log * 2mKn3+ min ⇢ 2/∆i(¯✓j)2, 1/ * 2 min✓2⇥j i,+(¯✓j) bΓi(✓; ¯✓j)2+, ∆i(¯✓j) + X i2Aj 2 2 log * 2mKn3+ /∆i(¯✓j). Remark (negative transfer). The transfer of knowledge introduces a bias in the learning process which is often beneficial. Nonetheless, in many cases transfer may result in a bias towards wrong solutions and a worse learning performance, a phenomenon often referred to as negative transfer. The first interesting aspect of the previous theorem is that umUCB is guaranteed to never perform worse than UCB itself. This implies that tUCB never suffers from negative transfer, even when the set ⇥j contains highly uncertain models and might bias umUCB to pull suboptimal arms. Remark (improvement over UCB). In Sec. 3 we showed that mUCB exploits the knowledge of ⇥ to focus on a restricted set of arms which are pulled less than UCB. In umUCB this improvement is not as clear, since the models in ⇥are not known but are estimated online through episodes. Yet, similar to mUCB, umUCB has the two main sources of potential improvement w.r.t. to UCB. As illustrated by the regret bound in Thm. 3, umUCB focuses on arms in Aj 1 [ Aj 2 which is potentially a smaller set than A. Furthermore, the number of pulls to arms in Aj 1 is smaller than for UCB whenever the estimated model gap bΓi(✓; ¯✓j) is bigger than ∆i(¯✓j). Eventually, umUCB reaches the same performance (and improvement over UCB) as mUCB when j is big enough. In fact, the set of optimistic models reduces to the one used in mUCB (i.e., ⇥j +(¯✓j) ⌘⇥+(¯✓j)) and all the optimistic models have only optimal arms (i.e., for any ✓2 ⇥+ the set of non-dominated optimistic arms is A+(✓; ¯✓j) = {i⇤(✓)}), which corresponds to Aj 1 ⌘A⇤(⇥+(¯✓j)) and Aj 2 ⌘{i⇤(¯✓j)}, which matches the condition of mUCB. For instance, for any model ✓, in order to have A⇤(✓) = {i⇤(✓)}, for any arm i 6= i⇤(✓) we need that ˆµj i(✓) + "j ˆµj i⇤(✓)(✓) −"j. 
Thus, after

j ≥ 2C(Θ) / (min_{θ̄∈Θ} min_{θ∈Θ+(θ̄)} min_i ∆i(θ)²) + 1

episodes, all the optimistic models have only one optimal arm, independently of the actual identity of the model θ̄j. Although this condition may seem restrictive, in practice umUCB starts improving over UCB much earlier, as illustrated in the numerical simulation in Sec. 5.

4.4 Regret Analysis of tUCB

Given the previous results, we derive the bound on the cumulative regret over J episodes (Eq. 1).

Theorem 4. If tUCB is run over J episodes of n steps, in which the tasks θ̄j are drawn from a fixed distribution ρ over a set of models Θ, then its cumulative regret is

RJ ≤ JK + Σ_{j=1}^{J} Σ_{i∈A1^j} min{ 2 log(2mKn²/δ)/∆i(θ̄j)², log(2mKn²/δ)/(2 min_{θ∈Θ_{i,+}^j(θ̄j)} Γ̂i^j(θ; θ̄j)²) } · ∆i(θ̄j) + Σ_{j=1}^{J} Σ_{i∈A2^j} 2 log(2mKn²/δ)/∆i(θ̄j),

with probability 1 − δ w.r.t. the randomization over tasks and the realizations of the arms in each episode.

Figure 5: Set of models Θ.
Figure 6: Complexity over tasks.
Figure 7: Regret of UCB, UCB+, mUCB, and tUCB (avg. over episodes) vs. episode length.
Figure 8: Per-episode regret of tUCB.
(Plot data omitted.)

This result immediately follows from Thm. 3 and shows a linear dependency on the number of episodes J. This dependency is the price to pay for not knowing the identity of the current task θ̄j.
If the task identity were revealed at the beginning of each episode, a bandit algorithm could simply cluster all the samples coming from the same task and incur a much smaller cumulative regret, with a logarithmic dependency on episodes and steps, i.e., log(nJ). Nonetheless, as discussed in the previous section, the cumulative regret of tUCB is never worse than that of UCB, and as the number of tasks increases it approaches the performance of mUCB, which fully exploits the prior knowledge of Θ.

5 Numerical Simulations

In this section we report preliminary results of tUCB on synthetic data. The objective is to illustrate and support the previous theoretical findings. We define a set Θ of m = 5 MAB problems with K = 7 arms each, whose means {µi(θ)}i,θ are reported in Fig. 5 (see Sect. F in Azar et al. (2013) for the actual values), where each model has a different color and squares correspond to optimal arms (e.g., arm 2 is optimal for model θ2). This set of models is chosen to be challenging and to illustrate some interesting cases useful for understanding the functioning of the algorithm.⁴ Models θ1 and θ2 differ only in their optimal arms, which makes it difficult to distinguish them. For arm 3 (which is optimal for model θ3 and thus potentially selected by mUCB), all the models share exactly the same mean value. This implies that no model can be discarded by pulling it. Although this might suggest that mUCB gets stuck pulling arm 3, we showed in Thm. 1 that this is not the case. Models θ1 and θ5 are challenging for UCB since they have a small minimum gap. Only 5 of the 7 arms are actually optimal for some model in Θ. Thus, we also report the performance of UCB+, which, under the assumption that Θ is known, immediately discards all the arms which are not optimal (i ∉ A∗) and performs UCB on the remaining arms. The model distribution is uniform, i.e., ρ(θ) = 1/m.
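To make the mUCB loop of Fig. 1 concrete, here is a tiny self-contained simulation on a two-model, two-arm Bernoulli instance of our own (not the Θ of Fig. 5), using the confidence radius of Eq. 2; the guard for an empty Θt is ours and not part of Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(3)

models = np.array([[0.9, 0.2],     # theta_1 (the actual model)
                   [0.2, 0.95]])   # theta_2 (more optimistic: mu* = 0.95)
true_means = models[0]
m, K = models.shape
n, delta = 2000, 1.0 / 2000

reward_sum = np.zeros(K)
T = np.zeros(K)

for t in range(n):
    eps = np.sqrt(np.log(m * n**2 / delta) / (2 * np.maximum(T, 1)))
    eps[T == 0] = np.inf                       # unpulled arms: no constraint yet
    mu_hat = reward_sum / np.maximum(T, 1)

    # Theta_t: models compatible with the current estimates on every arm.
    active = [th for th in models if np.all(np.abs(th - mu_hat) <= eps)]
    if active:
        theta_t = max(active, key=lambda th: th.max())   # most optimistic model
        i = int(np.argmax(theta_t))                      # pull its optimal arm
    else:
        i = int(np.argmin(T))                            # guard (ours)

    reward_sum[i] += float(rng.random() < true_means[i])
    T[i] += 1

regret = (true_means.max() - true_means) @ T
```

In a typical run, mUCB first chases θ2's promised value on arm 1; after a few dozen pulls the estimate µ̂1 ≈ 0.2 makes θ2 incompatible, the model is discarded, and the optimal arm of θ1 is pulled for the rest of the episode, so the regret stays bounded by a few dozen suboptimal pulls.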
Before discussing the transfer results, we compare UCB, UCB+, and mUCB, to illustrate the advantage of the prior knowledge of Θ w.r.t. UCB. Fig. 7 reports the per-episode regret of the three algorithms for episodes of different length n (the performance of tUCB is discussed later). The results are averaged over all the models in Θ and over 200 runs each. All the algorithms use the same confidence bound ε_{i,t}. The performance of mUCB is significantly better than both UCB and UCB+, thus showing that mUCB makes efficient use of the prior knowledge of Θ. Furthermore, in Fig. 6 the horizontal lines correspond to the value of the regret bounds, up to the n-dependent terms and constants⁵, for the different models in Θ averaged w.r.t. ρ for the three algorithms (the actual values for the different models are in the supplementary material). These values show that the improvement observed in practice is accurately predicted by the upper bounds derived in Thm. 1. We now move on to analyzing the performance of tUCB. In Fig. 8 we show how the per-episode regret changes through episodes for a transfer problem with J = 5000 tasks of length n = 5000. In tUCB we used ε_j as in Eq. 5 with C(Θ) = 2. As discussed in Thm. 3, UCB and mUCB define the boundaries of the performance of tUCB. In fact, at the beginning tUCB selects arms according to a UCB strategy, since no prior information about the models Θ is available. On the other hand, as more tasks are observed, tUCB is able to transfer the knowledge acquired through episodes and build an increasingly accurate estimate of the models, thus approaching the behavior of mUCB. This is also confirmed by Fig. 6, where we show how the complexity of tUCB changes through episodes. In both cases (regret and complexity) we see that tUCB does not reach the same performance as mUCB.

⁴ Notice that although Θ satisfies Assumption 1, the smallest singular value is σ_min = 0.0039 and Γ_σ = 0.0038, thus making the estimation of the models difficult.
This is due to the fact that some models have relatively small gaps, and thus the number of episodes needed to obtain an estimate of the models accurate enough to reach the performance of mUCB is much larger than 5000 (see also the Remarks of Thm. 3). Since the final objective is to achieve a small global regret (Eq. 1), in Fig. 7 we report the cumulative regret averaged over the total number of tasks (J) for different values of J and n. Again, this graph shows that tUCB outperforms UCB and that it tends to approach the performance of mUCB as J increases, for any value of n.

6 Conclusions and Open Questions

In this paper we introduced the transfer problem in the multi-armed bandit framework when tasks are drawn from a finite set of bandit problems. We first introduced the bandit algorithm mUCB and showed that it is able to leverage the prior knowledge of the set of bandit problems Θ and reduce the regret w.r.t. UCB. When the set of models is unknown, we define a method-of-moments variant (RTP) which consistently estimates the means of the models in Θ from the samples collected through episodes. This knowledge is then transferred to umUCB, which performs no worse than UCB and tends to approach the performance of mUCB. For these algorithms we derive regret bounds, and we show preliminary numerical simulations. To the best of our knowledge, this is the first work studying the problem of transfer in multi-armed bandits. It opens a series of interesting directions, including whether explicit model identification can improve our transfer regret.

Optimality of tUCB. At each episode, tUCB transfers the knowledge about Θ acquired from previous tasks to achieve a small per-episode regret using umUCB. Although this strategy guarantees that the per-episode regret of tUCB is never worse than that of UCB, it may not be the optimal strategy in terms of the cumulative regret through episodes.
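The method-of-moments idea behind the model estimation above can be illustrated in miniature: the number of models equals the rank of a second-moment matrix built from the model means, which can be read off from its singular values. The sketch below is entirely synthetic (the means are random placeholders, not the paper's M2 estimator or RTP code):

```python
import numpy as np

rng = np.random.default_rng(1)
m, K = 5, 7
models = rng.uniform(0.1, 0.9, size=(m, K))   # hypothetical model mean vectors
rho = np.full(m, 1.0 / m)                     # uniform task distribution

# Second-moment matrix M2 = sum_theta rho(theta) mu(theta) mu(theta)^T.
# For generic (linearly independent) means, rank(M2) equals the number of
# models with rho > 0, so m can be estimated by counting the singular
# values that are not negligible relative to the largest one.
M2 = (models.T * rho) @ models                # (K, K) weighted outer-product sum
sv = np.linalg.svd(M2, compute_uv=False)
m_hat = int(np.sum(sv > 1e-8 * sv[0]))
print(m_hat)  # → 5
```

Models with zero probability contribute nothing to M2, which is why the estimated rank, rather than an assumed m, can be used when the model-set size is unknown.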
In fact, if J is large, it could be preferable to run a model-identification algorithm instead of umUCB in earlier episodes so as to improve the quality of the estimates μ̂_i(θ). Although such an algorithm would incur a much larger regret in earlier tasks (up to linear), it could approach the performance of mUCB in later episodes much faster than tUCB does. This trade-off between identification of the models and transfer of knowledge suggests that algorithms different from tUCB are possible.

Unknown model-set size. In some problems the size m of the model set is not known to the learner and needs to be estimated. This problem can be addressed by estimating the rank of the matrix M2, which equals m (Kleibergen and Paap, 2006). We also note that one can relax the assumption that ρ(θ) must be positive (see Assumption 1) by using the estimated model size instead of m, since M2 does not depend on the means of models with ρ(θ) = 0.

Acknowledgments. This research was supported by the National Science Foundation (NSF award #SBE0836012). We would like to thank Sham Kakade and Animashree Anandkumar for valuable discussions. A. Lazaric would like to acknowledge the support of the Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and FEDER through the 'Contrat de Projets Etat Region (CPER) 2007-2013', and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS).

⁵ For instance, for UCB we compute Σ_i 1/Δ_i.

References

Agarwal, A., Dudík, M., Kale, S., Langford, J., and Schapire, R. E. (2012). Contextual bandit learning with predictable rewards. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS'12).

Anandkumar, A., Foster, D. P., Hsu, D., Kakade, S., and Liu, Y.-K. (2012a). A spectral algorithm for latent Dirichlet allocation. In Proceedings of Advances in Neural Information Processing Systems 25 (NIPS'12), pages 926–934.
Anandkumar, A., Ge, R., Hsu, D., and Kakade, S. M. (2013). A tensor spectral approach to learning mixed membership community models. Journal of Machine Learning Research, 1:65.

Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M., and Telgarsky, M. (2012b). Tensor decompositions for learning latent variable models. CoRR, abs/1210.7559.

Anandkumar, A., Hsu, D., and Kakade, S. M. (2012c). A method of moments for mixture models and hidden Markov models. In Proceedings of the 25th Annual Conference on Learning Theory (COLT'12), volume 23, pages 33.1–33.34.

Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235–256.

Azar, M. G., Lazaric, A., and Brunskill, E. (2013). Sequential transfer in multi-armed bandit with finite set of models. CoRR, abs/1307.6887.

Cavallanti, G., Cesa-Bianchi, N., and Gentile, C. (2010). Linear algorithms for online multitask classification. Journal of Machine Learning Research, 11:2901–2934.

Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press.

Dekel, O., Long, P. M., and Singer, Y. (2006). Online multitask learning. In Proceedings of the 19th Annual Conference on Learning Theory (COLT'06), pages 453–467.

Garivier, A. and Moulines, E. (2011). On upper-confidence bound policies for switching bandit problems. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT'11), pages 174–188, Berlin, Heidelberg. Springer-Verlag.

Kleibergen, F. and Paap, R. (2006). Generalized reduced rank tests using the singular value decomposition. Journal of Econometrics, 133(1):97–126.

Langford, J. and Zhang, T. (2007). The epoch-greedy algorithm for multi-armed bandits with side information. In Proceedings of Advances in Neural Information Processing Systems 20 (NIPS'07).

Lazaric, A. (2011). Transfer in reinforcement learning: a framework and a survey. In Wiering, M.
and van Otterlo, M., editors, Reinforcement Learning: State of the Art. Springer.

Lugosi, G., Papaspiliopoulos, O., and Stoltz, G. (2009). Online multi-task learning with hard constraints. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT'09).

Mann, T. A. and Choe, Y. (2012). Directed exploration in reinforcement learning with transferred knowledge. In Proceedings of the Tenth European Workshop on Reinforcement Learning (EWRL'12).

Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359.

Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of the AMS, 58:527–535.

Saha, A., Rai, P., Daumé III, H., and Venkatasubramanian, S. (2011). Online learning of multiple tasks and their relationships. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS'11), Ft. Lauderdale, Florida.

Stewart, G. W. and Sun, J.-g. (1990). Matrix Perturbation Theory. Academic Press.

Taylor, M. E. (2009). Transfer in Reinforcement Learning Domains. Springer-Verlag.

Wedin, P. (1972). Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12(1):99–111.
A message-passing algorithm for multi-agent trajectory planning

José Bento* jbento@disneyresearch.com
Nate Derbinsky nate.derbinsky@disneyresearch.com
Javier Alonso-Mora jalonso@disneyresearch.com
Jonathan Yedidia yedidia@disneyresearch.com

Abstract

We describe a novel approach for computing collision-free global trajectories for p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM). Compared with existing methods, our approach is naturally parallelizable and allows for incorporating different cost functionals with only minor adjustments. We apply our method to classical challenging instances and observe that its computational requirements scale well with p for several cost functionals. We also show that a specialization of our algorithm can be used for local motion planning by solving the problem of joint optimization in velocity space.

1 Introduction

Robot navigation relies on at least three sub-tasks: localization, mapping, and motion planning. The latter can be described as an optimization problem: compute the lowest-cost path, or trajectory, between an initial and final configuration. This paper focuses on trajectory planning for multiple agents, an important problem in robotics [1, 2], computer animation, and crowd simulation [3]. Centralized planning for multiple agents is PSPACE-hard [4, 5]. To contend with this complexity, traditional multi-agent planning prioritizes agents and computes their trajectories sequentially [6], leading to suboptimal solutions. By contrast, our method plans for all agents simultaneously. Trajectory planning is also simplified if agents are non-distinct and can be dynamically assigned to a set of goal positions [1]. We consider the harder problem where robots have a unique identity and their goal positions are statically pre-specified.
Both mixed-integer quadratic programming (MIQP) [7] and (more efficient, although local) sequential convex programming [8] approaches have been applied to the problem of computing collision-free trajectories for multiple agents with pre-specified goal positions; however, due to the non-convexity of the problem, these approaches, especially the former, do not scale well with the number of agents. Alternatively, trajectories may be found by sampling in their joint configuration space [9]. This approach is probabilistic and, alone, only gives asymptotic guarantees. See Appendix A for further comments on discrete search methods. Due to the complexity of planning collision-free trajectories, real-time robot navigation is commonly decoupled into a global planner and a fast local planner that performs collision avoidance. Many single-agent reactive collision-avoidance algorithms are based either on potential fields [10], which typically ignore the velocity of other agents, or "velocity obstacles" [11], which provide improved performance in dynamic environments by formulating the optimization in velocity space instead of Cartesian space. Building on an extension of the velocity-obstacles approach, recent work on centralized collision avoidance [12] computes collision-free local motions for all agents whilst maximizing a joint utility using either a computationally expensive MIQP or an efficient, though local, QP. While not the main focus of this paper, we show that a specialization of our approach to global-trajectory optimization also applies for local-trajectory optimization, and our numerical results demonstrate improvements in both efficiency and scaling performance. In this paper we formalize the global trajectory planning task as follows.

* This author would like to thank Emily Hupf and Noa Ghersin for their support while writing this paper.
Given p agents of different radii {r_i}_{i=1}^p with given desired initial and final positions, {x_i(0)}_{i=1}^p and {x_i(T)}_{i=1}^p, along with a cost functional over trajectories, compute collision-free trajectories for all agents that minimize the cost functional. That is, find a set of intermediate points {x_i(t)}_{i=1}^p, t ∈ (0, T), that satisfies the "hard" collision-free constraints that ‖x_i(t) − x_j(t)‖ > r_i + r_j for all i, j and t, and that, insofar as possible, minimizes the cost functional. The method we propose searches for a solution within the space of piecewise-linear trajectories, wherein the trajectory of an agent is completely specified by a set of positions at a fixed set of time instants {t_s}_{s=0}^η. We call these time instants break-points, and they are the same for all agents, which greatly simplifies the mathematics of our method. All other intermediate points of the trajectories are computed by assuming that each agent moves with constant velocity in between break-points: if t₁ and t₂ > t₁ are consecutive break-points, then x_i(t) = ((t₂ − t) x_i(t₁) + (t − t₁) x_i(t₂)) / (t₂ − t₁) for t ∈ [t₁, t₂]. Along with the set of initial and final configurations, the number of interior break-points (η − 1) is an input to our method, with a corresponding tradeoff: increasing η yields trajectories that are more flexible and smooth, with possibly higher quality; but increasing η enlarges the problem, leading to potentially increased computation. The main contributions of this paper are as follows: i) We formulate the global trajectory planning task as a decomposable optimization problem. We show how to solve the resulting sub-problems exactly and efficiently, despite their non-convexity, and how to coordinate their solutions using message-passing. Our method, based on the "three-weight" version of ADMM [13], is easily parallelized, does not require parameter tuning, and we present empirical evidence of good scalability with p.
ii) Within our decomposable framework, we describe different sub-problems, called minimizers, each ensuring that the trajectories satisfy a separate criterion. Our method is flexible and can consider different combinations of minimizers. A particularly crucial minimizer ensures there are no inter-agent collisions, but we also derive other minimizers that allow for finding trajectories with minimal total energy, avoiding static obstacles, or imposing dynamic constraints, such as maximum/minimum agent velocity. iii) We show that our method can specialize to perform local planning by solving the problem of joint optimization in velocity space [12]. Our work is among the few examples where the success of applying ADMM to find approximate solutions to a large non-convex problem can be judged with the naked eye, by the gracefulness of the trajectories found. This paper also reinforces the claim in [13] that small, yet important, modifications to ADMM can bring an order-of-magnitude increase in speed. We emphasize the importance of these modifications in our numerical experiments, where we compare the performance of our method using the three-weight algorithm (TWA) versus that of standard ADMM. The rest of the paper is organized as follows. Section 2 provides background on ADMM and the TWA. Section 3 formulates the global-trajectory-planning task as an optimization problem and describes the separate blocks necessary to solve it (the mathematical details of solving these sub-problems are left to appendices). Section 4 evaluates the performance of our solution: its scalability with p, sensitivity to initial conditions, and the effect of different cost functionals. Section 5 explains how to implement a velocity-obstacle method using our method and compares its performance with prior work. Finally, Section 6 draws conclusions and suggests directions for future work.
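The constant-velocity interpolation between break-points introduced above can be sketched directly (an illustrative helper with made-up break-point data, not code from the paper):

```python
import numpy as np

def position(t, breakpoint_times, breakpoint_positions):
    """Position at absolute time t of an agent on a piecewise-linear
    trajectory: constant velocity between consecutive break-points."""
    ts = np.asarray(breakpoint_times, dtype=float)
    xs = np.asarray(breakpoint_positions, dtype=float)
    s = np.searchsorted(ts, t, side="right") - 1
    s = min(max(s, 0), len(ts) - 2)          # clamp to a valid segment
    t1, t2 = ts[s], ts[s + 1]
    # x(t) = ((t2 - t) x(t1) + (t - t1) x(t2)) / (t2 - t1)
    return ((t2 - t) * xs[s] + (t - t1) * xs[s + 1]) / (t2 - t1)

# Example: eta = 2 segments in the plane, T = 1 (break-points need not be
# uniformly spaced in absolute time).
times = [0.0, 0.5, 1.0]
points = [[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]]
print(position(0.25, times, points))  # midpoint of the first segment
```

Because the trajectory is fully determined by the break-point positions, these are the only quantities the optimization needs to treat as variables.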
2 Minimizers in the TWA

In this section we provide a short description of the TWA [13] and, in particular, the role of the minimizer building blocks that it needs to solve a particular optimization problem. Section B of the supplementary material includes a full description of the TWA. As a small illustrative example of how the TWA is used to solve optimization problems, suppose we want to solve min_{x∈R³} f(x) = min_{x₁,x₂,x₃} f₁(x₁, x₃) + f₂(x₁, x₂, x₃) + f₃(x₃), where f_i(·) ∈ R ∪ {+∞}. The functions can represent soft costs, for example f₃(x₃) = (x₃ − 1)², or hard equality or inequality constraints, such as f₁(x₁, x₃) = J(x₁ ≥ x₃), where we are using the notation J(·) = 0 if (·) is true and +∞ if (·) is false. The TWA solves this optimization problem iteratively by passing messages on a bipartite graph, in the form of a Forney factor graph [14]: one minimizer-node per function f_b, one equality-node per variable x_j, and an edge (b, j), connecting b and j, if f_b depends on x_j (see Figure 1-left).

Figure 1: Left: bipartite graph, with one minimizer-node on the left for each function making up the overall objective function, and one equality-node on the right per variable in the problem. Right: The input and output variables for each minimizer block.

Apart from the first-iteration message values, and two internal parameters¹ that we specify in Section 4, the algorithm is fully specified by the behavior of the minimizers and the topology of the graph. What does a minimizer do? The minimizer-node g₁, for example, solves a small optimization problem over its local variables x₁ and x₃.
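To make the toy objective concrete, such extended-real-valued functions can be represented directly and the global minimum found by brute force on a grid. This only illustrates the J(·) notation and the shape of the objective, not the TWA itself; f₂ is left unspecified in the example, so a hypothetical smooth coupling term is used here.

```python
import itertools
import numpy as np

J = lambda cond: 0.0 if cond else np.inf   # indicator of a hard constraint

f1 = lambda x1, x3: J(x1 >= x3)            # hard inequality constraint
f2 = lambda x1, x2, x3: (x1 - x2) ** 2 + x3 ** 2   # hypothetical soft cost
f3 = lambda x3: (x3 - 1.0) ** 2

def f(x1, x2, x3):
    return f1(x1, x3) + f2(x1, x2, x3) + f3(x3)

# Brute-force grid search; the TWA reaches such minima via message-passing
# instead of enumerating the whole space.
grid = np.linspace(-2.0, 2.0, 41)          # step 0.1
best = min(itertools.product(grid, repeat=3), key=lambda x: f(*x))
print(best, f(*best))
```

With this choice of f₂ the optimum sits where the hard constraint x₁ ≥ x₃ is tight against the soft pull of x₃ toward 1.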
Without going into the full detail presented in [13] and the supplementary material, the estimates x_{1,1} and x_{1,3} are then combined with running sums of the differences between the minimizer estimates and the equality-node consensus estimates to obtain messages m_{1,1} and m_{1,3} on each neighboring edge, which are sent to the neighboring equality-nodes along with corresponding certainty weights, →ρ_{1,1} and →ρ_{1,3}. All other minimizers act similarly. The equality-nodes receive these local messages and weights and produce consensus estimates for all variables by computing an average of the incoming messages, weighted by the incoming certainty weights →ρ. From these consensus estimates, correcting messages are computed and communicated back to the minimizers to help them reach consensus. A certainty weight for the correcting messages, ←ρ, is also communicated back to the minimizers. For example, the minimizer g₁ receives correcting messages n_{1,1} and n_{1,3} with corresponding certainty weights ←ρ_{1,1} and ←ρ_{1,3} (see Figure 1-right). When producing new local estimates, the b-th minimizer-node computes its local estimates {x_{b,j}}_j by choosing a point that minimizes the sum of the local function f_b and the weighted squared distance from the incoming messages (ties are broken randomly):

{x_{b,j}}_j = g_b({n_{b,j}}_j, {←ρ_{b,j}}_j) ≡ argmin_{{x_j}_j} [ f_b({x_j}_j) + Σ_j (←ρ_{b,j}/2)(x_j − n_{b,j})² ],   (1)

where {·}_j and Σ_j run over all equality-nodes connected to b. In the TWA, the certainty weights {→ρ_{b,j}} that this minimizer outputs must be 0 (uncertain); ∞ (certain); or ρ₀, set to some fixed value. The logic for setting weights from minimizer-nodes depends on the problem; as we shall see, in trajectory-planning problems, we only use 0 or ρ₀ weights. If we choose that all minimizers always output weights equal to ρ₀, the TWA reduces to standard ADMM; however, 0-weights allow equality-nodes to ignore inactive constraints, traversing the search space much faster.
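The equality-node step described above, a certainty-weighted average in which 0-weight (inactive) messages are ignored, can be sketched as follows (an illustrative fragment, not the authors' code; the all-zero-weight fallback is one possible convention):

```python
import numpy as np

def consensus(messages, weights):
    """Weighted consensus for one variable x_j at an equality-node.

    messages: values m_{b,j} sent by the minimizer-nodes touching x_j.
    weights:  certainty weights ->rho_{b,j}; 0 means "inactive, ignore me".
    """
    m = np.asarray(messages, dtype=float)
    w = np.asarray(weights, dtype=float)
    if w.sum() == 0.0:                 # every incoming constraint inactive:
        return float(m.mean())         # fall back to a plain average
    return float((w * m).sum() / w.sum())

# Two active messages (weight rho0 = 1) and one inactive one (weight 0);
# the inactive message has no influence on the consensus:
print(consensus([2.0, 4.0, 100.0], [1.0, 1.0, 0.0]))  # → 3.0
```

Setting every weight to a common ρ₀ recovers the uniform averaging of standard ADMM, which is exactly the reduction mentioned above.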
Finally, notice that all minimizers can operate simultaneously, and the same is true for the consensus calculation performed by each equality-node. The algorithm is thus easy to parallelize.

¹ These are the step-size and ρ₀ constants. See Section B in the supplementary material for more detail.

3 Global trajectory planning

We now turn to describing our decomposition of the global trajectory planning optimization problem in detail. We begin by defining the variables to be optimized in our optimization problem. In our formulation, we are not tracking the points of the trajectories by a continuous-time variable taking values in [0, T]. Rather, our variables are the positions {x_i(s)}_{i∈[p]}, where the trajectories are indexed by i and break-points are indexed by a discrete variable s taking values between 1 and η − 1. Note that {x_i(0)}_{i∈[p]} and {x_i(η)}_{i∈[p]} are the initial and final configurations, sets of fixed values, not variables to optimize.

3.1 Formulation as unconstrained optimization without static obstacles

In terms of these variables, the non-collision constraints² are

‖(α x_i(s+1) + (1−α) x_i(s)) − (α x_j(s+1) + (1−α) x_j(s))‖ ≥ r_i + r_j,   (2)

for all i, j ∈ [p], s ∈ {0, ..., η−1} and α ∈ [0, 1]. The parameter α is used to trace out the constant-velocity trajectories of agents i and j between break-points s+1 and s. The parameter α has no units; it is a normalized time rather than an absolute time. If t₁ is the absolute time of the break-point with integer index s, t₂ is the absolute time of the break-point with integer index s+1, and t parametrizes the trajectories in absolute time, then α = (t − t₁)/(t₂ − t₁). Note that in the above formulation absolute time does not appear, and any solution is simply a set of paths that, when travelled by each agent at constant velocity between break-points, leads to no collisions.
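Since the squared inter-agent distance in (2) is quadratic in α, checking the constraint for one pair of agents and one segment reduces to minimizing ‖α d₁ + (1−α) d₀‖ over α ∈ [0, 1], which has a closed form. A minimal sketch (an illustrative helper, not the paper's implementation):

```python
import numpy as np

def segments_collide(xi_s, xi_s1, xj_s, xj_s1, ri, rj):
    """Check constraint (2): do agents i and j, each moving at constant
    velocity between break-points s and s+1, come closer than r_i + r_j?"""
    d0 = np.asarray(xi_s, float) - np.asarray(xj_s, float)     # gap at alpha = 0
    d1 = np.asarray(xi_s1, float) - np.asarray(xj_s1, float)   # gap at alpha = 1
    v = d1 - d0
    # Minimize ||d0 + alpha v||^2 over alpha in [0, 1]: a 1-D quadratic,
    # so clamp the unconstrained minimizer to the interval.
    denom = float(v @ v)
    alpha = 0.0 if denom == 0.0 else float(np.clip(-(d0 @ v) / denom, 0.0, 1.0))
    min_dist = float(np.linalg.norm(d0 + alpha * v))
    return min_dist < ri + rj

# Two agents swapping positions head-on collide mid-segment:
print(segments_collide([0, 0], [2, 0], [2, 0], [0, 0], 0.5, 0.5))  # → True
# Two agents on parallel, well-separated tracks do not:
print(segments_collide([0, 0], [2, 0], [0, 3], [2, 3], 0.5, 0.5))  # → False
```

This is the same feasibility test that the collision minimizer later uses to decide whether its constraint is active.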
When converting this solution into trajectories parameterized by absolute time, the break-points do not need to be chosen uniformly spaced in absolute time. The constraints represented in (2) can be formally incorporated into an unconstrained optimization problem as follows. We search for a solution to the problem

min_{{x_i(s)}_{i,s}} f^cost({x_i(s)}_{i,s}) + Σ_{s=0}^{η−1} Σ_{i>j} f^coll_{r_i,r_j}(x_i(s), x_i(s+1), x_j(s), x_j(s+1)),   (3)

where {x_i(0)}_{i∈[p]} and {x_i(η)}_{i∈[p]} are constants rather than optimization variables, the function f^cost represents some cost to be minimized (e.g., the integrated kinetic energy or the maximum velocity over all the agents), and the function f^coll_{r,r′} is defined as

f^coll_{r,r′}(x, x̄, x′, x̄′) = J(‖α(x̄ − x̄′) + (1−α)(x − x′)‖ ≥ r + r′ for all α ∈ [0, 1]).   (4)

In this section, x and x̄ represent the position of an arbitrary agent of radius r at two consecutive break-points, and x′ and x̄′ the position of a second arbitrary agent of radius r′ at the same break-points. In the expression above, J(·) takes the value 0 whenever its argument, a clause, is true and the value +∞ otherwise. Intuitively, we pay an infinite cost in f^coll_{r,r′} whenever there is a collision, and we pay zero otherwise. In (3) we can set f^cost(·) to enforce a preference for trajectories satisfying specific properties. For example, we might prefer trajectories for which the total kinetic energy spent by the set of agents is small. In this case, defining f^cost_C(x, x̄) = C‖x̄ − x‖², we have

f^cost({x_i(s)}_{i,s}) = (1/(pη)) Σ_{i=1}^{p} Σ_{s=0}^{η−1} f^cost_{C_{i,s}}(x_i(s), x_i(s+1)),   (5)

where the coefficients {C_{i,s}} can account for agents with different masses, different absolute-time intervals between break-points, or different preferences regarding which agents we want to be less active and which agents are allowed to move faster. More simply, we might want to exclude trajectories in which agents move faster than a certain amount, but without distinguishing among all remaining trajectories.
For this case we can write

f^cost_C(x, x̄) = J(‖x̄ − x‖ ≤ C).   (6)

In this case, associating each break-point to a time instant, the coefficients {C_{i,s}} in expression (5) would represent different limits on the velocity of different agents between different sections of the trajectory. If we want to force all agents to have a minimum velocity, we can simply reverse the inequality in (6).

² We replaced the strict inequality in the condition for non-collision by a simple inequality "≥" to avoid technicalities in formulating the optimization problem. Since the agents are round, this allows for a single point of contact between two agents and does not reduce practical relevance.

3.2 Formulation as unconstrained optimization with static obstacles

In many scenarios agents should also avoid collisions with static obstacles. Given two points in space, x_L and x_R, we can forbid all agents from crossing the line segment from x_L to x_R by adding the following term to the function (3): Σ_{i=1}^{p} Σ_{s=0}^{η−1} f^wall_{x_L,x_R,r_i}(x_i(s), x_i(s+1)). We recall that r_i is the radius of agent i and

f^wall_{x_L,x_R,r}(x, x̄) = J(‖(α x̄ + (1−α) x) − (β x_R + (1−β) x_L)‖ ≥ r for all α, β ∈ [0, 1]).   (7)

Notice that f^coll can be expressed using f^wall. In particular,

f^coll_{r,r′}(x, x̄, x′, x̄′) = f^wall_{0,0,r+r′}(x′ − x, x̄′ − x̄).   (8)

We use this fact later to express the minimizer associated with agent-agent collisions using the minimizer associated with agent-obstacle collisions. When agents move in the plane, i.e., x_i(s) ∈ R² for all i ∈ [p] and s+1 ∈ [η+1], being able to avoid collisions with a general static line segment makes it possible to automatically avoid collisions with multiple static obstacles of arbitrary polygonal shape. Our numerical experiments only consider agents in the plane, and so, in this paper, we only describe the minimizer block for wall collisions in a 2D world. In higher dimensions, different obstacle primitives need to be considered.
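The clause inside (7) is a feasibility check: the minimum over α, β of the distance between the segment swept by the agent's center and the wall segment must be at least r. A minimal 2-D sketch follows; it approximates the minimum by densely sampling α and solving the remaining 1-D quadratic in β exactly, whereas the paper solves the associated minimizer exactly via a spring-system reformulation (this code is illustrative only).

```python
import numpy as np

def seg_seg_distance(p0, p1, q0, q1, samples=200):
    """Approximate min over alpha, beta in [0,1] of
    ||(alpha p1 + (1-alpha) p0) - (beta q1 + (1-beta) q0)||."""
    p0, p1, q0, q1 = (np.asarray(v, float) for v in (p0, p1, q0, q1))
    best = np.inf
    for alpha in np.linspace(0.0, 1.0, samples):
        a = alpha * p1 + (1 - alpha) * p0      # point on the agent's segment
        d, v = a - q0, q1 - q0
        denom = float(v @ v)
        beta = 0.0 if denom == 0.0 else float(np.clip((d @ v) / denom, 0.0, 1.0))
        best = min(best, float(np.linalg.norm(d - beta * v)))
    return best

def crosses_wall(x, x_bar, x_L, x_R, r):
    """f_wall in (7) is +inf (a collision) iff the swept center of the
    agent comes within distance r of the wall segment."""
    return seg_seg_distance(x, x_bar, x_L, x_R) < r

# Agent of radius 0.2 crossing a vertical wall collides with it:
print(crosses_wall([-1, 0], [1, 0], [0, -1], [0, 1], 0.2))  # → True
# The same motion shifted well above the wall does not:
print(crosses_wall([-1, 2], [1, 2], [0, -1], [0, 1], 0.2))  # → False
```

Via identity (8), the same check with x_L = x_R = 0 and radius r + r′ applied to the difference trajectory reproduces the agent-agent collision test.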
3.3 Message-passing formulation

To solve (3) using the TWA, we need to specify the topology of the bipartite graph associated with the unconstrained formulation (3) and the operation performed by every minimizer, i.e., the →ρ-weight update logic and the x-variable update equations. We postpone describing the choice of initial values and internal parameters until Section 4. We first describe the bipartite graph. To be concrete, let us assume that the cost functional has the form of (5). The unconstrained formulation (3) then tells us that the global objective function is the sum of ηp(p+1)/2 terms: ηp(p−1)/2 functions f^coll and ηp functions f^cost_C. These functions involve a total of (η+1)p variables, out of which only (η−1)p are free (since the initial and final configurations are fixed). Correspondingly, the bipartite graph along which messages are passed has ηp(p+1)/2 minimizer-nodes that connect to the (η+1)p equality-nodes. In particular, the equality-node associated with the break-point variable x_i(s), η > s > 0, is connected to 2(p−1) different g^coll minimizer-nodes and two different g^cost_C minimizer-nodes. If s = 0 or s = η, the equality-node only connects to half as many g^coll nodes and g^cost_C nodes. We now describe the different minimizers. Every minimizer is basically a special case of (1).

3.3.1 Agent-agent collision minimizer

We start with the minimizer associated with the functions f^coll, which we denote by g^coll. This minimizer receives as parameters the radii, r and r′, of the two agents whose collision it is avoiding. The minimizer takes as input a set of incoming n-messages, {n, n̄, n′, n̄′}, and associated ←ρ-weights, {←ρ, ←ρ̄, ←ρ′, ←ρ̄′}, and outputs a set of updated x-variables according to expression (9). Messages n and n̄ come from the two equality-nodes associated with the positions of one of the agents at two consecutive break-points, and n′ and n̄′ from the corresponding equality-nodes for the other agent.
g^coll(n, n̄, n′, n̄′, ←ρ, ←ρ̄, ←ρ′, ←ρ̄′, r, r′) = argmin_{x, x̄, x′, x̄′} f^coll_{r,r′}(x, x̄, x′, x̄′) + (←ρ/2)‖x − n‖² + (←ρ̄/2)‖x̄ − n̄‖² + (←ρ′/2)‖x′ − n′‖² + (←ρ̄′/2)‖x̄′ − n̄′‖².   (9)

The update logic for the weights →ρ for this minimizer is simple. If the trajectory from n to n̄ for an agent of radius r does not collide with the trajectory from n′ to n̄′ for an agent of radius r′, then set all the outgoing weights →ρ to zero. Otherwise set them all to ρ₀. The outgoing zero weights indicate to the receiving equality-nodes in the bipartite graph that the collision constraint for this pair of agents is inactive and that the values they receive from this minimizer-node should be ignored when computing the consensus values. The solution to (9) is found using the agent-obstacle collision minimizer that we describe next.

3.3.2 Agent-obstacle collision minimizer

The minimizer for f^wall is denoted by g^wall. It is parameterized by the obstacle position {x_L, x_R} as well as the radius of the agent that needs to avoid the obstacle. It receives two n-messages, {n, n̄}, and corresponding weights {←ρ, ←ρ̄}, from the equality-nodes associated with two consecutive positions of an agent that needs to avoid the obstacle. Its output, the x-variables, is defined as

g^wall(n, n̄, r, x_L, x_R, ←ρ, ←ρ̄) = argmin_{x, x̄} f^wall_{x_L,x_R,r}(x, x̄) + (←ρ/2)‖x − n‖² + (←ρ̄/2)‖x̄ − n̄‖².   (10)

When agents move in the plane (2D), this minimizer can be solved by reformulating the optimization in (10) as a mechanical problem involving a system of springs that we can solve exactly and efficiently. This reduction is explained in the supplementary material in Section D, and the solution to the mechanical problem is explained in Section I. The update logic for the →ρ-weights is similar to that of the g^coll minimizer.
If an agent of radius r going from n to n̄ does not collide with the line segment from x_L to x_R, then set all outgoing weights to zero, because the constraint is inactive; otherwise set all the outgoing weights to ρ₀. Notice that, from (8), it follows that the agent-agent minimizer g^coll can be expressed using g^wall. More concretely, as proved in the supplementary material, Section C,

g^coll(n, n̄, n′, n̄′, ←ρ, ←ρ̄, ←ρ′, ←ρ̄′, r, r′) = M₂ g^wall(M₁ · {n, n̄, n′, n̄′, ←ρ, ←ρ̄, ←ρ′, ←ρ̄′, r, r′}),

for a constant rectangular matrix M₁ and a matrix M₂ that depends on {n, n̄, n′, n̄′, ←ρ, ←ρ̄, ←ρ′, ←ρ̄′}.

3.3.3 Minimum energy and maximum (minimum) velocity minimizer

When f^cost can be decomposed as in (5), the minimizer associated with the functions f^cost is denoted by g^cost and receives as input two n-messages, {n, n̄}, and corresponding weights, {←ρ, ←ρ̄}. The messages come from two equality-nodes associated with two consecutive positions of an agent. The minimizer is also parameterized by a cost factor c. It outputs a set of updated x-messages defined as

g^cost(n, n̄, ←ρ, ←ρ̄, c) = argmin_{x, x̄} f^cost_c(x, x̄) + (←ρ/2)‖x − n‖² + (←ρ̄/2)‖x̄ − n̄‖².   (11)

The update logic for the →ρ-weights of the minimum-energy minimizer is very simple: always set all outgoing weights →ρ to ρ₀. The update logic for the →ρ-weights of the maximum-velocity minimizer is the following. If ‖n̄ − n‖ ≤ c, set all outgoing weights to zero. Otherwise, set them to ρ₀. The update logic for the minimum-velocity minimizer is similar. If ‖n̄ − n‖ ≥ c, set all the →ρ-weights to zero. Otherwise set them to ρ₀. The solutions to the minimum-energy, maximum-velocity and minimum-velocity minimizers are given in the supplementary material in Sections E, F, and G respectively.

4 Numerical results

We now report on the performance of our algorithm (see Appendix J for an important comment on the anytime properties of our algorithm).
Note that the lack of open-source scalable algorithms for global trajectory planning in the literature makes it difficult to benchmark our performance against other methods. Also, in a paper it is difficult to appreciate the gracefulness of the discovered trajectory optimizations, so we include a video in the supplementary material that shows final optimized trajectories as well as intermediate results as the algorithm progresses for a variety of additional scenarios, including those with obstacles. All the tests described here are for agents in a two-dimensional plane. All tests but the last were performed using six cores of a 3.4 GHz i7 CPU. The different tests did not require any special tuning of parameters. In particular, the step-size in [13] (their α variable) is always 0.1. In order to quickly equilibrate the system to a reasonable set of variables and to wash out the importance of initial conditions, the default weight ρ₀ was set equal to a small value (ηp × 10⁻⁵) for the first 20 iterations and then set to 1 for all further iterations. The first test considers scenario CONF1: p (even) agents of radius r, equally spaced around a circle of radius R, are each required to exchange position with the corresponding antipodal agent, with r = (5/4) R sin(π/2(p−4)). This is a classical difficult test scenario, because the straight-line motion of all agents to their goal would result in them all colliding in the center of the circle. We compare the convergence time of the TWA with a similar version using standard ADMM to perform the optimizations. In this test, the algorithm's initial value for each variable in the problem was set to the corresponding initial position of each agent. The objective is to minimize the total kinetic energy (C in the energy minimizer is set to 1). Figure 2-left shows that the TWA scales better with p than classic ADMM and typically gives an order-of-magnitude speed-up.
Please see Appendix K for a further comment on the scaling of the convergence time of ADMM and TWA with p.

Figure 2: Left: Convergence time using standard ADMM (dashed lines) and using TWA (solid lines). Middle: Distribution of total energy and time for convergence with random initial conditions (p = 20 and η = 5). Right: Convergence time using a different number of cores (η = 5).

The second test for CONF1 analyzes the sensitivity of the convergence time and objective value when the variables' values at the first iteration are chosen uniformly at random in the smallest space-time box that includes the initial and final configurations of the robots. Figure 2-middle shows that, although there is some spread in the convergence time, our algorithm seems to reliably converge to relatively similar-cost local minima (other experiments show that the objective value of these minima is around 5 times smaller than that found when the algorithm is run using only the collision avoidance minimizers, without a kinetic energy cost term). As would be expected, the precise trajectories found vary widely between different random runs. Still for CONF1, and with fixed initial conditions, we parallelize our method using several cores of a 2.66GHz i7 processor and a very primitive scheduling/synchronization scheme.
Although this scheme does not fully exploit parallelization, Figure 2-right does show a speed-up as the number of cores increases, and the larger p is, the greater the speed-up. We stall when we reach the twelve physical cores available and start using virtual cores. Finally, Figure 3-left compares the convergence time to optimize the total energy with the time to simply find a feasible (i.e. collision-free) solution. The agents' initial and final configurations are randomly chosen in the plane (CONF2). Error bars indicate ± one standard deviation. Minimizing the kinetic energy is orders of magnitude computationally more expensive than finding a feasible solution, as is clear from the different magnitudes of the left and right scales of Figure 3-left.

Figure 3: Left: Convergence time when minimizing energy (blue scale/dashed lines) and to simply find a feasible solution (red scale/solid lines). Right: (For Section 5.) Convergence-time distribution for each epoch using our method (blue bars) and using the MIQP of [12] (red bars and star-values).

5 Local trajectory planning based on velocity obstacles

In this section we show how the joint optimization presented in [12], which is based on the concept of velocity obstacles [11] (VO), can also be solved via the message-passing TWA. In VO, given the current positions {x_i(0)}_{i∈[p]} and radii {r_i} of all agents, a new velocity command is computed jointly for all agents, minimizing the distance to their preferred velocities {v_i^ref}_{i∈[p]}.
This new velocity command must guarantee that the trajectories of all agents remain collision-free for at least a time horizon τ. New collision-free velocities are computed every ατ seconds, α < 1, until all agents reach their final configuration. Following [12], and assuming an obstacle-free environment and first-order dynamics, the collision-free velocities are given by

minimize_{ {v_i}_{i∈[p]} } ∑_{i∈[p]} C_i ‖v_i − v_i^ref‖²
s.t. ‖(x_i(0) + v_i t) − (x_j(0) + v_j t)‖ ≥ r_i + r_j  ∀ i ∈ [p], t ∈ [0, τ].

Since the velocities {v_i}_{i∈[p]} are related linearly to the final position of each object after τ seconds, {x_i(τ)}_{i∈[p]}, a simple change of variables allows us to reformulate the above problem as

minimize_{ {x_i}_{i∈[p]} } ∑_{i∈[p]} C′_i ‖x_i − x_i^ref‖²
s.t. ‖(1 − α)(x_i(0) − x_j(0)) + α(x_i − x_j)‖ ≥ r_i + r_j  ∀ j > i ∈ [p], α ∈ [0, 1],   (12)

where C′_i = C_i/τ², x_i^ref = x_i(0) + v_i^ref τ, and we have dropped the τ in x_i(τ). The above problem, extended to account for collisions with the static line segments {x_Rk, x_Lk}_k, can be formulated in an unconstrained form using the functions f^cost, f^coll and f^wall. Namely,

min_{ {x_i}_i } ∑_{i∈[p]} f^cost_{C′_i}(x_i, x_i^ref) + ∑_{i>j} f^coll_{r_i,r_j}(x_i(0), x_i, x_j(0), x_j) + ∑_{i∈[p]} ∑_k f^wall_{x_Rk,x_Lk,r_i}(x_i(0), x_i).   (13)

Note that {x_i(0)}_i and {x_i^ref}_i are constants, not variables being optimized. Given this formulation, the TWA can be used to solve the optimization. All corresponding minimizers are special cases of minimizers derived in the previous section for global trajectory planning (see Section H in the supplementary material for details). Figure 3-right shows the distribution of the time to solve (12) for CONF1. We compare the mixed-integer quadratic programming (MIQP) approach from [12] with ours. Our method finds a local minimum of exactly (13), while [12] finds a global minimum of an approximation to (13). Specifically, [12] requires approximating the search domain by hyperplanes and an additional branch-and-bound algorithm, while ours does not.
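The pairwise constraint in (12) asks that the relative position, traced linearly as α sweeps [0, 1], stay outside the combined radius; this is exactly a point-to-segment distance test against the origin. A sketch (function name hypothetical):

```python
import numpy as np

def pair_collision_free(xi0, xi, xj0, xj, ri, rj):
    """Check || (1-a)(xi0 - xj0) + a(xi - xj) || >= ri + rj for all
    a in [0, 1]: the distance from the origin to the segment between
    d0 = xi0 - xj0 and d1 = xi - xj must be at least ri + rj."""
    d0 = np.asarray(xi0, float) - np.asarray(xj0, float)
    d1 = np.asarray(xi, float) - np.asarray(xj, float)
    v = d1 - d0
    vv = float(v @ v)
    # unconstrained minimizer of ||d0 + a v||^2, clamped to [0, 1]
    a = 0.0 if vv == 0.0 else float(np.clip(-(d0 @ v) / vv, 0.0, 1.0))
    return float(np.linalg.norm(d0 + a * v)) >= ri + rj

# two agents swapping positions head-on must collide ...
head_on = pair_collision_free([-1, 0], [1, 0], [1, 0], [-1, 0], 0.1, 0.1)
# ... while parallel motion at distance 1 is fine for small radii
parallel = pair_collision_free([0, 0], [1, 0], [0, 1], [1, 1], 0.2, 0.2)
```

The clamped quadratic minimization is the same computation the collision minimizers must perform to decide whether the constraint is active.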
Both approaches use a mechanism for breaking the symmetry of CONF1 and avoiding deadlocks: theirs uses a preferential rotation direction for agents, while we use agents with slightly different C coefficients in their energy minimizers (C_i = 1 + 0.001 i for the i-th agent). Both simulations were done on a single 2.66GHz core. The results show that the convergence times are of a similar order of magnitude; however, because our implementation is in Java while [12] uses the Matlab-mex interface of CPLEX 11, the timings are not exactly comparable.

6 Conclusion and future work

We have presented a novel algorithm for global and local planning of the trajectories of multiple distinct agents, a problem known to be hard. The solution is based on solving a non-convex optimization problem with the TWA, a modified ADMM. Its similarity to ADMM brings scalability and easy parallelization; however, using the TWA improves performance considerably. Our implementation of the algorithm in Java on a regular desktop computer, using a basic scheduler/synchronization over its few cores, already scales to hundreds of agents and achieves real-time performance for local planning. The algorithm can flexibly account for obstacles and different cost functionals. For agents in the plane, we derived explicit expressions that account for static obstacles, moving obstacles, and dynamic constraints on the velocity and energy. Future work should consider other restrictions on the smoothness of the trajectory (e.g. acceleration constraints) and provide fast solvers for our minimizers for agents in 3D. The message-passing nature of our algorithm hints that it might be possible to adapt it to do planning in a decentralized fashion. For example, minimizers like gcoll could be solved by message exchange between pairs of agents within a maximum communication radius. It is an open problem to build a practical communication-synchronization scheme for such an approach.
References

[1] Javier Alonso-Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, and Paul Beardsley. Image and animation display with multiple mobile robots. The International Journal of Robotics Research, 31(6):753–773, 2012.
[2] Peter R. Wurman, Raffaello D'Andrea, and Mick Mountz. Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Magazine, 29(1):9–19, 2008.
[3] Stephen J. Guy, Jatin Chhugani, Changkyu Kim, Nadathur Satish, Ming Lin, Dinesh Manocha, and Pradeep Dubey. ClearPath: highly parallel collision avoidance for multi-agent simulation. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 177–187, 2009.
[4] John H. Reif. Complexity of the mover's problem and generalizations. In IEEE Annual Symposium on Foundations of Computer Science, pages 421–427, 1979.
[5] John E. Hopcroft, Jacob T. Schwartz, and Micha Sharir. On the complexity of motion planning for multiple independent objects; PSPACE-hardness of the "warehouseman's problem". The International Journal of Robotics Research, 3(4):76–88, 1984.
[6] Maren Bennewitz, Wolfram Burgard, and Sebastian Thrun. Finding and optimizing solvable priority schemes for decoupled path planning techniques for teams of mobile robots. Robotics and Autonomous Systems, 41(2–3):89–99, 2002.
[7] Daniel Mellinger, Alex Kushleyev, and Vijay Kumar. Mixed-integer quadratic program trajectory generation for heterogeneous quadrotor teams. In IEEE International Conference on Robotics and Automation, pages 477–483, 2012.
[8] Federico Augugliaro, Angela P. Schoellig, and Raffaello D'Andrea. Generation of collision-free trajectories for a quadrocopter fleet: A sequential convex programming approach. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1917–1922, 2012.
[9] Steven M. LaValle and James J. Kuffner. Randomized kinodynamic planning. The International Journal of Robotics Research, 20(5):378–400, 2001.
[10] Oussama Khatib.
Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90–98, 1986.
[11] Paolo Fiorini and Zvi Shiller. Motion planning in dynamic environments using velocity obstacles. The International Journal of Robotics Research, 17(7):760–772, 1998.
[12] Javier Alonso-Mora, Martin Rufli, Roland Siegwart, and Paul Beardsley. Collision avoidance for multiple agents with joint utility maximization. In IEEE International Conference on Robotics and Automation, 2013.
[13] Nate Derbinsky, José Bento, Veit Elser, and Jonathan S. Yedidia. An improved three-weight message-passing algorithm. arXiv:1305.1961 [cs.AI], 2013.
[14] G. David Forney Jr. Codes on graphs: Normal realizations. IEEE Transactions on Information Theory, 47(2):520–548, 2001.
[15] Sertac Karaman and Emilio Frazzoli. Incremental sampling-based algorithms for optimal motion planning. arXiv preprint arXiv:1005.0416, 2010.
[16] R. Glowinski and A. Marrocco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue Française d'Automatique, Informatique, et Recherche Opérationnelle, 9(2):41–76, 1975.
[17] Daniel Gabay and Bertrand Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976.
[18] Hugh Everett III. Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Operations Research, 11(3):399–417, 1963.
[19] Magnus R. Hestenes. Multiplier and gradient methods. Journal of Optimization Theory and Applications, 4(5):303–320, 1969.
[20] Magnus R. Hestenes. Multiplier and gradient methods. In L.A. Zadeh et al., editor, Computing Methods in Optimization Problems 2. Academic Press, New York, 1969.
[21] M.J.D. Powell. A method for nonlinear constraints in minimization problems. In R.
Fletcher, editor, Optimization. Academic Press, London, 1969.
[22] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
Solving the multi-way matching problem by permutation synchronization

Deepti Pachauri,† Risi Kondor§ and Vikas Singh‡† †Dept. of Computer Sciences, University of Wisconsin–Madison ‡Dept. of Biostatistics & Medical Informatics, University of Wisconsin–Madison §Dept. of Computer Science and Dept. of Statistics, The University of Chicago pachauri@cs.wisc.edu risi@uchicago.edu vsingh@biostat.wisc.edu

Abstract

The problem of matching not just two, but m different sets of objects to each other arises in many contexts, including finding the correspondence between feature points across multiple images in computer vision. At present it is usually solved by matching the sets pairwise, in series. In contrast, we propose a new method, Permutation Synchronization, which finds all the matchings jointly, in one shot, via a relaxation to eigenvector decomposition. The resulting algorithm is both computationally efficient and, as we demonstrate with theoretical arguments as well as experimental results, much more stable to noise than previous methods.

1 Introduction

Finding the correct bijection between two sets of objects X = {x_1, x_2, . . . , x_n} and X′ = {x′_1, x′_2, . . . , x′_n} is a fundamental problem in computer science, arising in a wide range of contexts [1]. In this paper, we consider its generalization to matching not just two, but m different sets X_1, X_2, . . . , X_m. Our primary motivation and running example is the classic problem of matching landmarks (feature points) across many images of the same object in computer vision, which is a key ingredient of image registration [2], recognition [3, 4], stereo [5], shape matching [6, 7], and structure from motion (SFM) [8, 9]. However, our approach is fully general and equally applicable to problems such as matching multiple graphs [10, 11].
Presently, multi-matching is usually solved sequentially, by first finding a putative permutation τ_12 matching X_1 to X_2, then a permutation τ_23 matching X_2 to X_3, and so on, up to τ_{m−1,m}. While one can conceive of various strategies for optimizing this process, the fact remains that when the data are noisy, a single error in the sequence will typically create a large number of erroneous pairwise matches [12, 13, 14]. In contrast, in this paper we describe a new method, Permutation Synchronization, that estimates the entire matrix (τ_ji)_{i,j=1}^m of assignments jointly, in a single shot, and is therefore much more robust to noise. For consistency, the recovered matchings must satisfy τ_kj τ_ji = τ_ki. While finding an optimal matrix of permutations satisfying these relations is, in general, combinatorially hard, we show that for the most natural choice of loss function the problem has a natural relaxation to just finding the n leading eigenvectors of the cost matrix. In addition to vastly reducing the computational cost, using recent results from random matrix theory, we show that the eigenvectors are very effective at aggregating information from all m(m−1)/2 pairwise matches, and therefore make the algorithm surprisingly robust to noise. Our experiments show that in landmark matching problems Permutation Synchronization can recover the correct correspondence between landmarks across a large number of images with small error, even when a significant fraction of the pairwise matches are incorrect. The term "synchronization" is inspired by the recent celebrated work of Singer et al. on a similar problem involving finding the right rotations (rather than matchings) between electron microscopic images [15, 16, 17]. Historically, multi-matching has received relatively little attention.
However, independently of, and concurrently with, the present work, Huang and Guibas [18] have recently proposed a semidefinite programming based solution, which parallels our approach, and in problems involving occlusion might perform even better.

2 Synchronizing permutations

Consider a collection of m sets X_1, X_2, . . . , X_m of n objects each, X_i = {x^i_1, x^i_2, . . . , x^i_n}, such that for each pair (X_i, X_j), each x^i_p in X_i has a natural counterpart x^j_q in X_j. For example, in computer vision, given m images of the same scene taken from different viewpoints, x^i_1, x^i_2, . . . , x^i_n might be n visual landmarks detected in image i, while x^j_1, x^j_2, . . . , x^j_n are n landmarks detected in image j, in which case x^i_p ∼ x^j_q signifies that x^i_p and x^j_q correspond to the same physical feature. Since the correspondence between X_i and X_j is a bijection, one can write it as x^i_p ∼ x^j_{τ_ji(p)} for some permutation τ_ji : {1, 2, . . . , n} → {1, 2, . . . , n}. Key to our approach to solving multi-matching is that, with respect to the natural definition of multiplication, (τ′τ)(i) := τ′(τ(i)), the n! possible permutations of {1, 2, . . . , n} form a group, called the symmetric group of degree n and denoted S_n. We say that the system of correspondences between X_1, X_2, . . . , X_m is consistent if x^i_p ∼ x^j_q and x^j_q ∼ x^k_r together imply that x^i_p ∼ x^k_r. In terms of permutations this is equivalent to requiring that the array (τ_ij)_{i,j=1}^m satisfy

τ_kj τ_ji = τ_ki  ∀ i, j, k.   (1)

Alternatively, given some reference ordering of x_1, x_2, . . . , x_n, we can think of each X_i as realizing its own permutation σ_i (in the sense of x_ℓ ∼ x^i_{σ_i(ℓ)}), and then τ_ji becomes

τ_ji = σ_j σ_i^{−1}.   (2)

The existence of permutations σ_1, σ_2, . . . , σ_m satisfying (2) is equivalent to requiring that (τ_ji)_{i,j=1}^m satisfy (1). Thus, assuming consistency, solving the multi-matching problem reduces to finding just m different permutations, rather than O(m²). However, the σ_i's are of course not directly observable.
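The consistency relation (1) and the parameterization (2) are easy to verify numerically; a minimal sketch with permutations stored as index arrays (`perm[p]` = image of p):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4

def compose(t2, t1):
    # (t2 t1)(i) = t2(t1(i)) for permutations stored as index arrays
    return t2[t1]

def inverse(t):
    # argsort inverts a permutation array
    return np.argsort(t)

# each set X_i realizes its own permutation sigma_i ...
sigmas = [rng.permutation(n) for _ in range(m)]
# ... and (2) induces the pairwise matchings tau_ji = sigma_j sigma_i^{-1}
tau = {(j, i): compose(sigmas[j], inverse(sigmas[i]))
       for j in range(m) for i in range(m)}
```

Every triple then satisfies τ_kj τ_ji = τ_ki by construction, since the σ_j^{−1} σ_j factors cancel.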
Rather, in a typical application we have some tentative (noisy) matchings τ̃_ji, which we must synchronize into the form (2) by finding the underlying σ_1, . . . , σ_m. Given (τ̃_ji)_{i,j=1}^m and some appropriate distance metric d between permutations, we formalize Permutation Synchronization as the combinatorial optimization problem

minimize_{σ_1,σ_2,...,σ_m ∈ S_n} ∑_{i,j=1}^m d(σ_j σ_i^{−1}, τ̃_ji).   (3)

The computational cost of solving (3) depends critically on the form of the distance metric d. In this paper we limit ourselves to the simplest choice,

d(σ, τ) = n − ⟨P(σ), P(τ)⟩,   (4)

where P(σ) ∈ R^{n×n} are the usual permutation matrices, [P(σ)]_{q,p} := 1 if σ(p) = q and 0 otherwise, and ⟨A, B⟩ is the matrix inner product ⟨A, B⟩ := tr(A^⊤B) = ∑_{p,q=1}^n A_{p,q} B_{p,q}. The distance (4) simply counts the number of objects assigned differently by σ and τ. Furthermore, it allows us to rewrite (3) as maximize_{σ_1,...,σ_m} ∑_{i,j=1}^m ⟨P(σ_j σ_i^{−1}), P(τ̃_ji)⟩, suggesting the generalization

maximize_{σ_1,σ_2,...,σ_m} ∑_{i,j=1}^m ⟨P(σ_j σ_i^{−1}), T_ji⟩,   (5)

where the T_ji's can now be any matrices, subject to T_ji^⊤ = T_ij. Intuitively, each T_ji is an objective matrix, the (q, p) element of which captures the utility of matching x^i_p in X_i to x^j_q in X_j. This generalization is very useful when the assignments of the different x^i_p's have different confidences. For example, in the landmark matching case, if, due to occlusion or for some other reason, the counterpart of x^i_p is not present in X_j, then we can simply set [T_ji]_{q,p} = 0 for all q.

2.1 Representations and eigenvectors

The generalized Permutation Synchronization problem (5) can also be written as

maximize_{σ_1,σ_2,...,σ_m} ⟨P, T⟩,   (6)

where

P = ( P(σ_1σ_1^{−1}) · · · P(σ_1σ_m^{−1}) ; ⋮ ⋱ ⋮ ; P(σ_mσ_1^{−1}) · · · P(σ_mσ_m^{−1}) )  and  T = ( T_11 · · · T_1m ; ⋮ ⋱ ⋮ ; T_m1 · · · T_mm ).   (7)

A matrix-valued function ρ: S_n → C^{d×d} is said to be a representation of the symmetric group if ρ(σ_2)ρ(σ_1) = ρ(σ_2σ_1) for any pair of permutations σ_1, σ_2 ∈ S_n.
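A quick numerical check that the distance (4) indeed counts the points on which two permutations disagree:

```python
import numpy as np

def perm_matrix(t):
    # [P(t)]_{q,p} = 1 iff t(p) = q
    n = len(t)
    P = np.zeros((n, n))
    P[t, np.arange(n)] = 1.0
    return P

def d(s, t):
    # d(s, t) = n - <P(s), P(t)>, as in (4)
    return len(s) - float(np.sum(perm_matrix(s) * perm_matrix(t)))

s = np.array([1, 0, 2, 3])
t = np.array([1, 0, 3, 2])
```

Here s and t agree on the first two points and disagree on the last two, so the inner product is 2 and the distance is 2, matching a direct element-wise comparison.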
Clearly, P is a representation of S_n (actually, the so-called defining representation), since P(σ_2σ_1) = P(σ_2)P(σ_1). Moreover, P is a so-called orthogonal representation, because each P(σ) is real and P(σ^{−1}) = P(σ)^⊤. Our fundamental observation is that this implies that P has a very special form.

Proposition 1. The synchronization matrix P is of rank n and is of the form P = U U^⊤, where U = ( P(σ_1) ; ⋮ ; P(σ_m) ).

Proof. Since P is a representation of S_n,

P = ( P(σ_1)P(σ_1)^⊤ · · · P(σ_1)P(σ_m)^⊤ ; ⋮ ⋱ ⋮ ; P(σ_m)P(σ_1)^⊤ · · · P(σ_m)P(σ_m)^⊤ ),   (8)

implying P = U U^⊤. Since U has n columns, rank(P) is at most n. This rank is achieved because P(σ_1) is an orthogonal matrix, so it has linearly independent columns, and consequently the columns of U cannot be linearly dependent. ■

Corollary 1. Letting [P(σ_i)]_ℓ denote the ℓ-th column of P(σ_i), the normalized columns of U,

u_ℓ = (1/√m) ( [P(σ_1)]_ℓ ; ⋮ ; [P(σ_m)]_ℓ ),  ℓ = 1, . . . , n,   (9)

are mutually orthogonal unit eigenvectors of P with the same eigenvalue m, and together span the row/column space of P.

Proof. The columns of U are orthogonal because the columns of each constituent P(σ_i) are orthogonal. The normalization follows from each column of P(σ_i) having norm 1. The rest follows by Proposition 1. ■

2.2 An easy relaxation

Solving (6) is computationally difficult, because it involves searching the combinatorial space of a combination of m permutations. However, Proposition 1 and its corollary suggest relaxing it to

maximize_{P ∈ M^m_n} ⟨P, T⟩,   (10)

where M^m_n is the set of mn-dimensional rank-n symmetric matrices whose non-zero eigenvalues are m. This is now just a generalized Rayleigh problem, the solution of which is simply

P = m ∑_{ℓ=1}^n v_ℓ v_ℓ^⊤,   (11)

where v_1, v_2, . . . , v_n are the n leading normalized eigenvectors of T. Equivalently, P = U U^⊤, where U = √m ( v_1 v_2 · · · v_n ).   (12)

Thus, in contrast to the original combinatorial problem, (10) can be solved by just finding the n leading eigenvectors of T.
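Proposition 1 can be confirmed numerically: stacking m permutation matrices into U and forming P = U U^⊤ yields the eigenvalue m with multiplicity exactly n, and zero elsewhere:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 6

def perm_matrix(t):
    P = np.zeros((n, n))
    P[t, np.arange(n)] = 1.0
    return P

# stack the m permutation matrices into U, as in Proposition 1
U = np.vstack([perm_matrix(rng.permutation(n)) for _ in range(m)])
P = U @ U.T                        # the mn x mn synchronization matrix
eigs = np.sort(np.linalg.eigvalsh(P))
```

The check works because U^⊤U = m I_n (each column of U stacks m unit columns), so the non-zero spectrum of U U^⊤ equals that of U^⊤U.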
Algorithm 1 Permutation Synchronization
  Input: the objective matrix T
  Compute the n leading eigenvectors (v_1, v_2, . . . , v_n) of T and set U = √m [v_1, v_2, . . . , v_n]
  for i = 1 to m do
    P_i1 = U_{(i−1)n+1:in, 1:n} U_{1:n, 1:n}^⊤
    σ_i = arg max_{σ∈S_n} ⟨P(σ), P_i1⟩ [Kuhn–Munkres]
  end for
  for each (i, j) do τ_ji = σ_j σ_i^{−1} end for
  Output: the matrix (τ_ji)_{i,j=1}^m of globally consistent matchings

Of course, from P we must still recover the individual permutations σ_1, σ_2, . . . , σ_m. However, as long as P is relatively close to the form (7), this is quite a simple and stable process. One way to do it is to let each σ_i be the permutation that best matches the (i, 1) block of P in the linear assignment sense, σ_i = arg max_{σ∈S_n} ⟨P(σ), [P]_{i,1}⟩, which is solved in O(n³) time by the Kuhn–Munkres algorithm [19]¹, and then set τ_ji = σ_j σ_i^{−1}, which will then satisfy the consistency relations. The pseudocode of the full algorithm is given in Algorithm 1.

3 Analysis of the relaxed algorithm

Let us now investigate under what conditions we can expect the relaxation (10) to work well, in particular, in what cases we can expect the recovered matchings to be exact. In the absence of noise, i.e., when T_ji = P(τ̃_ji) for some array (τ̃_ji)_{j,i} of permutations that already satisfy the consistency relations (1), T will have precisely the same structure as described by Proposition 1 for P. In particular, it will have n mutually orthogonal eigenvectors

v_ℓ = (1/√m) ( [P(σ̃_1)]_ℓ ; ⋮ ; [P(σ̃_m)]_ℓ ),  ℓ = 1, . . . , n,   (13)

with the same eigenvalue m. Due to the n-fold degeneracy, however, the matrix of eigenvectors (12) is only defined up to multiplication by an arbitrary rotation matrix O on the right, which means that instead of the "correct" U (whose columns are (13)), the eigenvector decomposition of T may return any U′ = UO. Fortunately, when forming the product P = U′U′^⊤ = U O O^⊤ U^⊤ = U U^⊤, this rotation cancels, confirming that our algorithm recovers P = T, and hence the matchings τ_ji = τ̃_ji, with no error.
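Algorithm 1 can be sketched in a few lines with an off-the-shelf eigensolver and the Hungarian algorithm (here via `scipy.optimize.linear_sum_assignment`); in the noiseless case it recovers every τ_ji exactly, as the text argues:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def perm_matrix(t):
    n = len(t)
    P = np.zeros((n, n))
    P[t, np.arange(n)] = 1.0
    return P

def synchronize(T, n, m):
    """Sketch of Algorithm 1: n leading eigenvectors of T, then
    Kuhn-Munkres on each (i, 1) block to round back to permutations."""
    w, V = np.linalg.eigh(T)
    U = np.sqrt(m) * V[:, np.argsort(w)[-n:]]   # n leading eigenvectors
    sigmas = []
    for i in range(m):
        Pi1 = U[i * n:(i + 1) * n] @ U[:n].T
        q, p = linear_sum_assignment(-Pi1)      # maximize <P(s), Pi1>
        s = np.empty(n, dtype=int)
        s[p] = q                                # entry (q, p) chosen => s(p) = q
        sigmas.append(s)
    return sigmas

rng = np.random.default_rng(2)
n, m = 5, 8
true = [rng.permutation(n) for _ in range(m)]
# noiseless objective: T_ji = P(sigma_j) P(sigma_i)^T
T = np.block([[perm_matrix(true[j]) @ perm_matrix(true[i]).T
               for i in range(m)] for j in range(m)])
sigmas = synchronize(T, n, m)
```

The recovered σ_i differ from the true ones by a common right factor (the arbitrary reference permutation), but the induced τ_ji = σ_j σ_i^{−1} are identical.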
Of course, rather than the case when the solution is handed to us from the start, we are more interested in how the algorithm performs when either the T_ji blocks are not permutation matrices, or they are not synchronized. To this end, we set

T = T_0 + N,   (14)

where T_0 is the correct "ground truth" synchronization matrix, while N is a symmetric perturbation matrix with entries drawn independently from a zero-mean normal distribution with variance η².

Figure 1: Singular value histogram of T under the noise model where each τ̃_ji with probability p = {0.10, 0.25, 0.85} is replaced by a random permutation (m = 100, n = 30). Note that apart from the extra peak at zero, the distribution of the stochastic eigenvalues is very similar to the semicircular distribution for Gaussian noise. As long as the small cluster of deterministic eigenvalues is clearly separated from the noise, Permutation Synchronization is feasible.

In general, to find the permutation best aligned with a given n × n matrix T, the Kuhn–Munkres algorithm solves for τ̂ = arg max_{τ∈S_n} ⟨P(τ), T⟩ = arg max_{τ∈S_n} (vec(P(τ)) · vec(T)).¹ Therefore, writing T = P(τ_0) + ϵ, where P(τ_0) is the "ground truth" while ϵ is an error term, it is guaranteed to return the correct permutation as long as

∥vec(ϵ)∥ < min_{τ′∈S_n\{τ_0}} ∥vec(P(τ_0)) − vec(P(τ′))∥ / 2.

By the symmetry of S_n, the right hand side is the same for any τ_0, so w.l.o.g. we can set τ_0 = e (the identity), and find that the minimum is achieved when τ′ is just a transposition, e.g., the permutation that swaps 1 with 2 and leaves 3, 4, . . . , n in place.

¹ Note that we could equally well have matched the σ_i's to any other column of blocks, since they are only defined relative to an arbitrary reference permutation: if, for any fixed σ_0, each σ_i is redefined as σ_iσ_0, the predicted relative permutations τ_ji = σ_jσ_0(σ_iσ_0)^{−1} = σ_jσ_i^{−1} stay the same.
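The sufficient condition above can be probed directly: any perturbation of a permutation matrix whose Frobenius norm stays below half the minimal distance between distinct permutation matrices leaves the Kuhn–Munkres rounding unchanged. A sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n = 6
t0 = rng.permutation(n)
P0 = np.zeros((n, n))
P0[t0, np.arange(n)] = 1.0          # ground-truth permutation matrix

# generic perturbation with Frobenius norm just below the tolerance
E = rng.standard_normal((n, n))
E *= 0.99 / np.linalg.norm(E)
T = P0 + E

q, p = linear_sum_assignment(-T)    # maximize <P(t), T>
t_hat = np.empty(n, dtype=int)
t_hat[p] = q                        # entry (q, p) chosen => t_hat(p) = q
```

Since the closest rival permutation is a transposition (4 differing entries, distance 2), a perturbation of norm 0.99 cannot flip the assignment.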
The corresponding permutation matrix differs from the identity in exactly 4 entries, therefore a sufficient condition for correct reconstruction is that ∥ϵ∥_Frob = ⟨ϵ, ϵ⟩^{1/2} = ∥vec(ϵ)∥ < (1/2)√4 = 1. As n grows, ∥ϵ∥_Frob becomes tightly concentrated around ηn, so the condition for recovering the correct permutation is η < 1/n. Permutation Synchronization can achieve a lower error, especially in the large-m regime, because the eigenvectors aggregate information from all the T_ji matrices and tend to be very stable to perturbations. In general, perturbations of the form (14) exhibit a characteristic phase transition. As long as the largest eigenvalue of the random matrix N falls below a given multiple of the smallest non-zero eigenvalue of T_0, adding N will have very little effect on the eigenvectors of T. On the other hand, when the noise exceeds this limit, the spectra get fully mixed, and it becomes impossible to recover T_0 from T to any precision at all. If N is a symmetric matrix with independent N(0, η²) entries, as nm → ∞, its spectrum will tend to Wigner's famous semicircle distribution supported on the interval (−2η(nm)^{1/2}, 2η(nm)^{1/2}), and with probability one the largest eigenvalue will approach 2η(nm)^{1/2} [20, 21]. In contrast, the nonzero eigenvalues of T_0 scale with m, which guarantees that for large enough m the two spectra will be nicely separated and Permutation Synchronization will have very low error. While much harder to analyze analytically, empirical evidence suggests that this type of phase transition behavior is characteristic of any reasonable noise model, for example the one in which we take each block of T and with some probability p replace it with a random permutation matrix (Figure 1). To derive more quantitative results, we consider the case where N is a so-called (symmetric) Gaussian Wigner matrix, which has independent N(0, η²) entries on its diagonal, and N(0, η²/2) entries everywhere else.
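The semicircle edge 2η(nm)^{1/2} is easy to observe empirically; a sketch with N playing the role of nm (the diagonal variance is only approximately matched, which does not affect the edge):

```python
import numpy as np

rng = np.random.default_rng(4)
N, eta = 600, 0.5                   # N plays the role of n*m
A = rng.standard_normal((N, N)) * eta
W = (A + A.T) / np.sqrt(2)          # symmetric; off-diagonal entries ~ N(0, eta^2)
lam_max = np.linalg.eigvalsh(W).max()
edge = 2 * eta * np.sqrt(N)         # predicted right edge of the semicircle
```

Already at this modest size, the largest eigenvalue sits very close to the predicted edge, while the deterministic eigenvalues of T_0 grow linearly in m and eventually escape the bulk.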
It has recently been proved that for this type of matrix the phase transition occurs at λ_min^det / λ_max^stoch = 1/2, so to recover T_0 to any accuracy at all we must have η < (m/n)^{1/2} [22]. Below this limit, to quantify the actual expected error, we write each leading normalized eigenvector v_1, v_2, . . . , v_n of T as v_i = v_i* + v_i⊥, where v_i* is the projection of v_i onto the space U_0 spanned by the non-zero eigenvectors v_1^0, v_2^0, . . . , v_n^0 of T_0. By Theorem 2.2 of [22], as nm → ∞,

∥v_i*∥² → 1 − η² n/m  and  ∥v_i⊥∥² → η² n/m  almost surely.   (15)

It is easy to see that ⟨v_i⊥, v_j⊥⟩ → 0 almost surely, which implies ⟨v_i*, v_j*⟩ = ⟨v_i, v_j⟩ − ⟨v_i⊥, v_j⊥⟩ → 0, so, setting λ = (1 − η²n/m)^{−1/2}, the normalized vectors λv_1*, . . . , λv_n* almost surely tend to an orthonormal basis for U_0. Thus, U = √m [v_1, . . . , v_n] is related to the "true" U_0 = √m [v_1^0, . . . , v_n^0] by λU → U_0 O + λE′ = (U_0 + λE)O almost surely, where O is some rotation and each column of the noise matrices E and E′ has norm η(n/m)^{1/2}.

Figure 2: The fraction of (σ_i)_{i=1}^m permutations that are incorrect when reconstructed by Permutation Synchronization from an array (τ̃_ji)_{j,i=1}^m in which each entry, with probability p, is replaced by a random permutation. The plots show the mean and standard deviation of errors over 20 runs as a function of p for m = 10 (red), m = 50 (blue) and m = 100 (green). (Left) n = 10. (Center) n = 25. (Right) n = 30.

Since multiplying U on the right by an orthogonal matrix does not affect P, and the Kuhn–Munkres algorithm is invariant to scaling by a constant, this equation tells us that (almost surely) the effect of (14) is equivalent to setting U = U_0 + λE. In terms of the individual P_ji blocks of P = UU^⊤, neglecting second order terms,

P_ji = (U_j^0 + λE_j)(U_i^0 + λE_i)^⊤ ≈ P(τ_ji) + λU_j^0 E_i^⊤ + λE_j U_i^{0⊤},

where τ_ji is the ground truth matching and U_i^0 and E_i denote the appropriate n × n submatrices of U^0 and E.
Conjecturing that in the limit E_i and E_j follow rotationally invariant distributions, almost surely

lim ∥U_j^0 E_i^⊤ + E_j U_i^{0⊤}∥_Frob = lim ∥E_i + E_j∥_Frob ≤ 2ηn/m.

Thus, plugging in to our earlier result for the error tolerance of the Kuhn–Munkres algorithm, Permutation Synchronization will correctly recover τ_ji with probability one provided 2ληn/m < 1, or, equivalently,

η² < (m/n) / (1 + 4(m/n)^{−1}).

This is much better than our η < 1/n result for the naive algorithm, and remarkably only slightly stricter than the condition η < (m/n)^{1/2} for recovering the eigenvectors with any accuracy at all. Of course, these results are asymptotic (in the sense of nm → ∞), and strictly speaking only apply to additive Gaussian Wigner noise. However, as Figure 2 shows, in practice, even when the noise is in the form of corrupting entire permutations and nm is relatively small, qualitatively our algorithm exhibits the correct behavior, and for large enough m Permutation Synchronization does indeed recover all (τ_ji)_{j,i=1}^m with no error, even when the vast majority of the entries in T are incorrect.

4 Experiments

Since computer vision is one of the areas where improving the accuracy of multi-matching is the most pressing, our experiments focused on this domain. For more details of our results, please see the extended version of the paper, available on the project website.

Stereo Matching. As a proof of principle, we considered the task of aligning landmarks in 2D images of the same object taken from different viewpoints, using the CMU house (m = 111 frames of a video sequence of a toy house, with n = 30 hand-labeled landmark points in each frame) and CMU hotel (m = 101 frames of a video sequence of a toy hotel, n = 30 hand-labeled landmark points in each frame) datasets. The baseline method is to compute (τ̃_ji)_{i,j=1}^m by solving m(m−1)/2 independent linear assignment problems based on matching landmarks by their shape context features [23].
Our method takes the same pairwise matches and synchronizes them with the eigenvector based procedure. Figure 3 shows that this clearly outperforms the baseline, which tends to degrade progressively as the number of images increases. This is due to the fact that the appearance (or descriptors) of keypoints differs considerably for large-offset pairs (which are likely when the image set is large), leading to many false matches. In contrast, our method improves as the size of the image set increases. While simple, this experiment demonstrates the utility of Permutation Synchronization for multi-view stereo matching, showing that instead of heuristically propagating local pairwise matches, it can find a much more accurate globally consistent matching at little additional cost.

Figure 3: (a) Normalized error as m increases on the House dataset: Permutation Synchronization (blue) vs. the pairwise Kuhn–Munkres baseline (red). (b-c) Matches found for a representative image pair. (Green circles) landmarks, (green lines) ground truth, (red lines) found matches. (b) Pairwise linear assignment, (c) Permutation Synchronization. Note that less visible green is good.

Figure 4: Matches for representative image pairs from the Building (top) and Books (bottom) datasets. (Green circles) landmark points, (green lines) ground truth matchings, (red lines) found matches. (Left) Pairwise linear assignment, (right) Permutation Synchronization. Note that less visible green is better (right).

Repetitive Structures. Next, we considered a dataset with severe geometric ambiguities due to repetitive structures. There is some consensus in the community that even sophisticated features (like SIFT) yield unsatisfactory results in this scenario, and deriving a good initial matching for structure from motion is problematic (see [24] and references therein). Our evaluations included 16 images from the Building dataset [24].
We identified 25 “similar looking” landmark points in the scene and hand-annotated them across all images. Many landmarks were occluded due to the camera angle. Qualitative results for pairwise matching and Permutation Synchronization are shown in Fig. 4 (top). We highlight two important observations. First, our method efficiently resolved geometric ambiguities by enforcing mutual consistency. Second, Permutation Synchronization robustly handles occlusion: landmark points that are occluded in one image are seamlessly assigned to null nodes in the other (see the set of unassigned points in the rightmost image in Fig. 4 (top)), thanks to evidence derived from the large number of additional images in the dataset. In contrast, pairwise matching struggles with occlusion in the presence of similar-looking landmarks (and feature descriptors). For n = 25 and m = 16, the error of the baseline method (Pairwise Linear Assignment) was 0.74; Permutation Synchronization decreased this by 0.10, to 0.64.

The Books dataset (Fig. 4, bottom) contains m = 20 images of multiple books on an “L”-shaped study table [24], and suffers from geometric ambiguities similar to the above, with severe occlusion. Here we identified n = 34 landmark points, many of which were occluded in most images. The error of the baseline method was 0.92; Permutation Synchronization decreased this by 0.22, to 0.70 (see the extended version of the paper).

Keypoint matching with nominal user supervision. Our final experiment deals with matching problems where the keypoints in each image preserve a common structure. In the literature, this is usually tackled as a graph matching problem, with the keypoints defining the vertices and their structural relationships encoded by the edges of the graph. Ideally, one wants to solve the problem for all images at once, but most practical solutions operate on image (or graph) pairs. Note that in terms of difficulty, this problem is quite distinct from those discussed above.
In stereo, the same object is imaged, and what varies from one view to the other is the field of view, scale, or pose. In contrast, in keypoint matching the background is not controlled, and even sophisticated descriptors may go wrong. Recent solutions often leverage supervision to make the problem tractable [25, 26]. Instead of learning parameters [25, 27], we utilize supervision directly, providing the correct matches on a small subset of randomly picked image pairs (e.g., via a crowdsourced platform like Mechanical Turk). We hope to exploit this ‘ground truth’ to significantly boost accuracy via Permutation Synchronization. For our experiments, we used the baseline method's output to set up our objective matrix T, but with a fixed “supervision probability” we replaced the T_ji block by the correct permutation matrix, and ran Permutation Synchronization. We considered the “Bikes” sub-class from the Caltech 256 dataset, which contains multiple images of common objects with varying backdrops, and chose to match images in the “touring bike” class.

Figure 5: Normalized error as the degree of supervision varies. Baseline method PLA (red) and Permutation Synchronization (blue).

Our analysis included 28 of the 110 images in this dataset, namely those taken “side-on”. The SUSAN corner detector was used to identify landmarks in each image, and we further identified 6 interest points in each image that correspond to the frame of the bicycle. We modeled the matching cost for an image pair as the shape distance between interest points in the pair. As before, the baseline was pairwise linear assignment. For a fixed degree of supervision, we randomly selected image pairs for supervision and estimated matchings for the rest of the image pairs. We performed 50 runs for each degree of supervision. The mean error and standard deviation are shown in Fig. 5 as supervision increases. Fig. 6 shows qualitative results from our method (right) and pairwise linear assignment (left).
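For concreteness, the eigenvector-based synchronization step used throughout these experiments can be sketched as follows. This is a minimal NumPy/SciPy sketch, not the authors' code: the function name `synchronize` and the choice of block 0 as the reference frame are ours, and the rounding step uses SciPy's Kuhn–Munkres implementation (`linear_sum_assignment`).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Kuhn-Munkres

def synchronize(T_blocks, n, m):
    """Eigenvector-based Permutation Synchronization (sketch).

    T_blocks[j][i] is the (possibly noisy) n x n relative permutation
    matrix for the pair (j, i).  Returns synchronized estimates of all
    tau_ji as a dict of n x n permutation matrices."""
    T = np.block([[T_blocks[j][i] for i in range(m)] for j in range(m)])
    # The top n eigenvectors of the nm x nm matrix T approximately span
    # the columns of the stacked ground-truth permutation matrices.
    _, vecs = np.linalg.eigh(T)
    U = vecs[:, -n:]                              # nm x n
    blocks = [U[i * n:(i + 1) * n, :] for i in range(m)]
    # Round each U_i U_0^T (proportional to P_i P_0^T) to the nearest
    # permutation matrix with the Kuhn-Munkres algorithm.
    P = []
    for U_i in blocks:
        M = U_i @ blocks[0].T
        rows, cols = linear_sum_assignment(-M)    # maximize total weight
        P_i = np.zeros((n, n))
        P_i[rows, cols] = 1.0
        P.append(P_i)
    return {(j, i): P[j] @ P[i].T for j in range(m) for i in range(m)}
```

With noiseless input blocks the recovered array is exact; with corrupted blocks the eigenvector averaging is what provides the robustness discussed above.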
5 Conclusions

Estimating the correct matching between two sets from noisy similarity data, such as the visual-feature-based similarity matrices that arise in computer vision, is an error-prone process. However, when we have not just two but m different sets, the consistency conditions between the (m choose 2) pairwise matchings severely constrain the solution. Our eigenvector-decomposition-based algorithm, Permutation Synchronization, exploits this fact and pools information from all pairwise similarity matrices to jointly estimate a globally consistent array of matchings in a single shot. Our theoretical results suggest that this approach is so robust that no matter how high the noise level, for large enough m the error will almost surely be zero. Experimental results confirm that in a range of computer vision tasks, from stereo to keypoint matching in dissimilar images, the method does indeed significantly improve performance (especially when m is large, as expected in video), and can get around problems such as occlusion that a pairwise strategy cannot handle. In future work we plan to compare our method to [18] (which was published after the present paper was submitted), as well as to investigate using the graph connection Laplacian [28].

Acknowledgments

We thank Amit Singer for invaluable comments and for drawing our attention to [18]. This work was supported in part by NSF–1320344 and by funding from the University of Wisconsin Graduate School.

Figure 6: A representative triplet from the “Touring bike” dataset. (Yellow circles) interest points in each image. (Green lines) ground-truth matching for the image pairs (left–center) and (center–right). (Red lines) matches for the image pairs: (left) supervision = 0.1, (right) supervision = 0.5.

References
[1] R. E. Burkard, M. Dell'Amico, and S. Martello. Assignment Problems. SIAM, 2009.
[2] D. Shen and C. Davatzikos. HAMMER: Hierarchical attribute matching mechanism for elastic registration. IEEE TMI, 21, 2002.
[3] K. Duan, D. Parikh, D. Crandall, and K. Grauman. Discovering localized attributes for fine-grained recognition. In CVPR, 2012.
[4] M. F. Demirci, A. Shokoufandeh, Y. Keselman, L. Bretzner, and S. Dickinson. Object recognition as many-to-many feature matching. IJCV, 69, 2006.
[5] M. Goesele, N. Snavely, B. Curless, H. Hoppe, and S. M. Seitz. Multi-view stereo for community photo collections. In ICCV, 2007.
[6] A. C. Berg, T. L. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondences. In CVPR, 2005.
[7] J. Petterson, T. Caetano, J. McAuley, and J. Yu. Exponential family graph matching and ranking. In NIPS, 2009.
[8] S. Agarwal, Y. Furukawa, N. Snavely, I. Simon, B. Curless, S. M. Seitz, and R. Szeliski. Building Rome in a day. Communications of the ACM, 54, 2011.
[9] I. Simon, N. Snavely, and S. M. Seitz. Scene summarization for online image collections. In ICCV, 2007.
[10] P. A. Pevzner. Multiple alignment, communication cost, and graph matching. SIAM JAM, 52, 1992.
[11] S. Lacoste-Julien, B. Taskar, D. Klein, and M. I. Jordan. Word alignment via quadratic assignment. In Proc. HLT-NAACL, 2006.
[12] A. J. Smola, S. V. N. Vishwanathan, and Q. Le. Bundle methods for machine learning. In NIPS, 20, 2008.
[13] I. Tsochantaridis, T. Joachims, T. Hofmann, Y. Altun, and Y. Singer. Large margin methods for structured and interdependent output variables. JMLR, 6, 2006.
[14] M. Volkovs and R. Zemel. Efficient sampling for bipartite matching problems. In NIPS, 2012.
[15] A. Singer and Y. Shkolnisky. Three-dimensional structure determination from common lines in cryo-EM by eigenvectors and semidefinite programming. SIAM Journal on Imaging Sciences, 4(2):543–572, 2011.
[16] R. Hadani and A. Singer. Representation theoretic patterns in three dimensional cryo-electron microscopy I — the intrinsic reconstitution algorithm. Annals of Mathematics, 174(2):1219–1241, 2011.
[17] R. Hadani and A. Singer. Representation theoretic patterns in three-dimensional cryo-electron microscopy II — the class averaging problem. Foundations of Computational Mathematics, 11(5):589–616, 2011.
[18] Q.-X. Huang and L. Guibas. Consistent shape maps via semidefinite programming. Computer Graphics Forum, 32(5):177–186, 2013.
[19] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2, 1955.
[20] E. P. Wigner. On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, 67, 1958.
[21] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1, 1981.
[22] F. Benaych-Georges and R. R. Nadakuditi. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Advances in Mathematics, 227(1):494–521, 2011.
[23] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. PAMI, 24(4):509–522, 2002.
[24] R. Roberts, S. Sinha, R. Szeliski, and D. Steedly. Structure from motion for scenes with large duplicate structures. In CVPR, 2011.
[25] T. S. Caetano, J. J. McAuley, L. Cheng, Q. V. Le, and A. J. Smola. Learning graph matching. PAMI, 31(6):1048–1058, 2009.
[26] M. Leordeanu, M. Hebert, and R. Sukthankar. An integer projected fixed point method for graph matching and MAP inference. In NIPS, 2009.
[27] T. Jebara, J. Wang, and S. F. Chang. Graph construction and b-matching for semi-supervised learning. In ICML, 2009.
[28] A. S. Bandeira, A. Singer, and D. A. Spielman. A Cheeger inequality for the graph connection Laplacian. SIAM Journal on Matrix Analysis and Applications, 34(4):1611–1630, 2013.
Auditing: Active Learning with Outcome-Dependent Query Costs

Sivan Sabato, Microsoft Research New England, sivan.sabato@microsoft.com
Anand D. Sarwate, TTI-Chicago, asarwate@ttic.edu
Nathan Srebro, Technion-Israel Institute of Technology and TTI-Chicago, nati@ttic.edu

Abstract

We propose a learning setting in which unlabeled data is free, and the cost of a label depends on its value, which is not known in advance. We study binary classification in an extreme case, where the algorithm only pays for negative labels. Our motivation is applications such as fraud detection, in which investigating an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error. We design auditing algorithms for simple hypothesis classes (thresholds and rectangles), and show that with these algorithms, the auditing complexity can be significantly lower than the active label complexity. We also show a general competitive approach for learning with outcome-dependent costs.

1 Introduction

Active learning algorithms seek to mitigate the cost of learning by using unlabeled data and sequentially selecting examples whose labels to query, so as to minimize the total number of queries. In some cases, however, the actual cost of each query depends on the true label of the example, and is thus not known before the label is requested. For instance, in detecting fraudulent credit transactions, a query with a positive answer is not wasteful, whereas a negative answer is the result of a wasteful investigation of an honest transaction, and perhaps a loss of goodwill. More generally, in a multiclass setting, different queries may entail different costs, depending on the outcome of the query.
In this work we focus on the binary case, and on the extreme version of the problem described in the credit fraud example, in which the algorithm pays only for queries that return a negative label. We term this setting auditing, and the cost incurred by the algorithm its auditing complexity. There are several natural ways to measure performance for auditing. For example, we may wish the algorithm to maximize the number of positive labels it finds for a fixed “budget” of negative labels, or to minimize the number of negative labels while finding a certain number or fraction of positive labels. In this work we focus on the classical learning problem, in which one attempts to learn a classifier from a fixed hypothesis class, with an error close to the best possible. As in active learning, we assume we are given a large set of unlabeled examples and aim to learn with minimal labeling cost; but unlike active learning, we only incur a cost when requesting the label of an example that turns out to be negative.

The close relationship between auditing and active learning raises natural questions. Can the auditing complexity be significantly better than the label complexity in active learning? If so, should algorithms be optimized for auditing, or do optimal active learning algorithms also have low auditing complexity? To answer these questions, and to demonstrate the differences between active learning and auditing, we study the simple hypothesis classes of thresholds and of axis-aligned rectangles in R^d, in both the realizable and the agnostic settings. We then also consider a general competitive analysis for arbitrary hypothesis classes.

Other work. Existing work on active learning with costs (Margineantu, 2007; Kapoor et al., 2007; Settles et al., 2008; Golovin and Krause, 2011) typically assumes that the cost of labeling each point is known a priori, so that the algorithm can use the costs directly to select a query.
Our model is significantly different, as the costs depend on the outcome of the query itself. Kapoor et al. (2007) do mention the possibility of class-dependent costs, but this possibility is not studied in detail. An unrelated game-theoretic learning model addressing “auditing” was proposed by Blocki et al. (2011).

Notation and Setup. For an integer m, let [m] = {1, 2, . . . , m}. The function I[A] is the indicator function of a set A. For a function f and a sub-domain X, f|_X is the restriction of f to X. For vectors a and b in R^d, the inequality a ≤ b means a_i ≤ b_i for all i ∈ [d]. We assume a data domain X and a distribution D over labeled data points in X × {−1, +1}. A learning algorithm may sample i.i.d. pairs (X, Y) ∼ D. It then has access to the value of X, but the label Y remains hidden until queried. The algorithm returns a labeling function ĥ : X → {−1, +1}. The error of a function h : X → {−1, +1} on D is err(D, h) = P_{(X,Y)∼D}[h(X) ≠ Y]. The error of h on a multiset S ⊆ X × {−1, +1} is err(S, h) = (1/|S|) Σ_{(x,y)∈S} I[h(x) ≠ y]. The passive sample complexity of an algorithm is the number of pairs it draws from D. Its active label complexity is the total number of label queries the algorithm makes, and its auditing complexity is the number of queries the algorithm makes on points with negative labels. We consider guarantees for learning algorithms relative to a hypothesis class H ⊆ {−1, +1}^X. We denote the error of the best hypothesis in H on D by err(D, H) = min_{h∈H} err(D, h); similarly, err(S, H) = min_{h∈H} err(S, h). We usually denote the best error for D by η = err(D, H). To describe our algorithms it will be convenient to define the following sample sizes, using universal constants C, c > 0. Let δ ∈ (0, 1) be a confidence parameter, and let ϵ ∈ (0, 1) be an error parameter. Let m_ag(ϵ, δ, d) = C(d + ln(c/δ))/ϵ².
If a sample S of size |S| = m_ag(ϵ, δ, d) is drawn from D, then with probability 1 − δ, for all h ∈ H, err(D, h) ≤ err(S, h) + ϵ and err(S, H) ≤ err(D, H) + ϵ (Bartlett and Mendelson, 2002). Let m_ν(ϵ, δ, d) = C(d ln(c/νϵ) + ln(c/δ))/(ν²ϵ). Results of Vapnik and Chervonenkis (1971) show that if H has VC dimension d and S is drawn from D with |S| = m_ν(ϵ, δ, d), then for all h ∈ H,

err(S, h) ≤ max{err(D, h)(1 + ν), err(D, h) + νϵ}  and
err(D, h) ≤ max{err(S, h)(1 + ν), err(S, h) + νϵ}.   (1)

2 Active Learning vs. Auditing: Summary of Results

The main point of this paper is that the auditing complexity can be quite different from the active label complexity, and that algorithms tuned to minimize the auditing complexity give improvements over standard active learning algorithms. Before presenting these differences, we note that in some regimes neither active learning nor auditing can improve significantly over the passive sample complexity. In particular, a simple adaptation of a result of Beygelzimer et al. (2009) establishes the following lower bound.

Lemma 2.1. Let H be a hypothesis class with VC dimension d > 1. If an algorithm always finds a hypothesis ĥ with err(D, ĥ) ≤ err(D, H) + ϵ for ϵ > 0, then for any η ∈ (0, 1) there is a distribution D with η = err(D, H) such that the auditing complexity of this algorithm for D is Ω(dη²/ϵ²).

That is, when η is fixed while ϵ → 0, the auditing complexity scales as Ω(d/ϵ²), similar to the passive sample complexity. Therefore the two interesting situations are the realizable case, corresponding to η = 0, and the agnostic case, in which we want to guarantee an excess error ϵ such that η/ϵ is bounded. We provide results for both regimes. We first consider the realizable case, η = 0. Here it suffices to consider the setting in which a fixed pool S of m points is given and the algorithm must return a hypothesis ĥ such that err(S, ĥ) = 0 with probability 1.
A pool labeling algorithm can be used to learn a hypothesis which is good for a distribution by drawing and labeling a large enough pool. We define the auditing complexity of an unlabeled pool as the minimal number of negative labels needed to perfectly classify it. It is easy to see that there are pools whose auditing complexity is at least the VC dimension of the hypothesis class.

For the agnostic case, when η > 0, we denote α = ϵ/η and say that an algorithm (α, δ)-learns a class of distributions D with respect to H if for all D ∈ D, with probability 1 − δ, the ĥ returned by the algorithm satisfies err(D, ĥ) ≤ (1 + α)η. By Lemma 2.1 an auditing complexity of Ω(d/α²) is unavoidable, but we can hope to improve over the passive sample complexity lower bound of Ω(d/(ηα²)) (Devroye and Lugosi, 1995) by avoiding the dependence on η.

Our main results are summarized in Table 1, which shows the auditing and active learning complexities in the two regimes for thresholds on [0, 1] and axis-aligned rectangles in R^d, where we assume that the hypotheses label the points inside the rectangle as negative and the points outside as positive.

                          Active                 Auditing
Realizable  Thresholds    Θ(ln m)                1
            Rectangles    m                      2d
Agnostic    Thresholds    Ω(ln(1/η) + 1/α²)      O(1/α²)
            Rectangles    Ω(d(1/η + 1/α²))       O((d²/α²) ln²(1/η) ln(1/α))

Table 1: Auditing complexity upper bounds vs. active label complexity lower bounds for the realizable (pool size m) and agnostic (err(D, H) = η) cases. Agnostic bounds are for (α, δ)-learning with a fixed δ, where α = ϵ/η.

In the realizable case, for thresholds, the optimal active learning algorithm performs binary search, resulting in Ω(ln m) labels in the worst case. This is a significant improvement over the passive label complexity of m. However, a simple auditing procedure that scans from right to left queries only a single negative point, achieving an auditing complexity of 1.
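The right-to-left scan for realizable thresholds can be sketched in a few lines. This is our illustrative Python sketch, not the paper's code: the function name and the `query` callback interface are assumptions.

```python
def audit_threshold_realizable(pool, query):
    """Scan a realizable pool from right to left, stopping at the first
    negative label.  Pays for exactly one negative query (or none, if
    the pool is all-positive)."""
    for x in sorted(pool, reverse=True):
        if query(x) == -1:
            # Every unqueried point lies to the left of x, hence is
            # negative under the consistent threshold; x itself is the
            # largest negative point seen.
            return lambda z, cut=x: 1 if z > cut else -1
    return lambda z: 1    # no negatives found: label everything positive
```

Because a threshold hypothesis labels everything to the right of its threshold positive, the single negative answer pins down a classifier consistent with the whole pool.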
For rectangles, we present a simple coordinate-wise scanning procedure with an auditing complexity of at most 2d, demonstrating a huge gap versus active learning, where the labels of all m points might be required. Not all classes enjoy reduced auditing complexity: we also show that for rectangles with the positive points on the inside, there exist pools of size m with an auditing complexity of m.

In the agnostic case we wish to (α, δ)-learn distributions with a true error of η = err(D, H), for constant α, δ. For active learning, it has been shown that in some cases the Ω(d/η) passive sample complexity can be replaced by an exponentially smaller O(d ln(1/η)) active label complexity (Hanneke, 2011), albeit sometimes with a larger polynomial dependence on d. In other cases, an Ω(1/η) dependence exists also for active learning. Our main question is whether the dependence on η in the active label complexity can be further reduced for auditing. For thresholds, active learning requires Ω(ln(1/η)) labels (Kulkarni et al., 1993). Using auditing, we show that the dependence on η can be completely removed, for any true error level η > 0, provided that η is known in advance. We also show that if η is not known at least approximately, the logarithmic dependence on 1/η is unavoidable for auditing as well. For rectangles, we show that the active label complexity is at least Ω(d/η). In contrast, we propose an algorithm with an auditing complexity of O(d² ln²(1/η)), reducing the linear dependence on 1/η to a logarithmic one. We do not know whether a linear dependence on d is possible together with a logarithmic dependence on 1/η. Omitted proofs of the results below are provided in the extended version of this paper (Sabato et al., 2013).

3 Auditing for Thresholds on the Line

The first question to ask is whether the auditing complexity can ever be significantly smaller than the active or passive label complexities, and whether a different algorithm is required to achieve this improvement.
The following simple case answers both questions in the affirmative. Consider the hypothesis class of thresholds on the line, defined over the domain X = [0, 1]. A hypothesis with threshold a is h_a(x) = I[x − a ≥ 0], and the hypothesis class is H⊣ = {h_a | a ∈ [0, 1]}. Consider the pool setting in the realizable case. The optimal active label complexity of Θ(log₂ m) can be achieved by binary search on the pool; however, the auditing complexity of this algorithm can also be as large as Θ(log₂ m). Auditing allows us to beat this barrier.

This case exemplifies an interesting contrast between auditing and active learning. Due to information-theoretic considerations, any algorithm which learns an unlabeled pool S has an active label complexity of at least log₂ |H|_S| (Kulkarni et al., 1993), where H|_S is the set of restrictions of functions in H to the domain S. For H⊣, log₂ |H⊣|_S| = Ω(log₂ m). The same considerations, however, are invalid for auditing: we showed above that in the realizable case, the auditing complexity for H⊣ is a constant. We now provide a more complex algorithm that guarantees this for (α, δ)-learning in the agnostic case. The intuition behind our approach is that to find the optimal threshold in a pool with at most k errors, we can query from highest to lowest until observing k + 1 negative points, and then find the minimal-error threshold on the labeled points.

Lemma 3.1. Let S be a pool of size m in [0, 1], and assume that err(S, H⊣) ≤ k/m. Then the procedure above finds ĥ such that err(S, ĥ) = err(S, H⊣), with an auditing complexity of k + 1.

Proof. Denote the last queried point by x₀, and let h_{a*} ∈ argmin_{h∈H⊣} err(S, h). Since err(S, h_{a*}) ≤ k/m, we have a* > x₀. Denote by S′ ⊆ S the set of points queried by the procedure. For any a > x₀, the number of errors of h_a on S equals its number of errors on S′ plus |{(x, y) ∈ S | x < x₀, y = 1}|, a quantity that does not depend on a. Therefore, minimizing the error on S′ results in a hypothesis that minimizes the error on S.
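The procedure of Lemma 3.1 can be sketched as follows. This is a hedged Python sketch: `audit_threshold_agnostic`, the `query` callback, and the particular candidate set for the ERM step are our illustrative choices, under the convention that a threshold a labels points x ≥ a positive.

```python
def audit_threshold_agnostic(pool, query, k):
    """Query from highest to lowest until k+1 negatives are seen, then
    return a threshold minimizing the error on the queried prefix.
    By Lemma 3.1 this also minimizes the error on the whole pool when
    the pool has at most k errors."""
    labeled, negatives = [], 0
    for x in sorted(pool, reverse=True):
        y = query(x)
        labeled.append((x, y))
        if y == -1:
            negatives += 1
            if negatives == k + 1:
                break
    # ERM over the queried prefix; thresholds at the queried points
    # (plus one above the whole pool) suffice as candidates.
    def prefix_err(a):
        return sum((1 if x >= a else -1) != y for x, y in labeled)
    candidates = [x for x, _ in labeled] + [max(pool) + 1.0]
    return min(candidates, key=prefix_err)
```

The procedure pays for at most k + 1 negative labels regardless of the pool size.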
To learn from a distribution, one can draw a random sample and use it as the pool in the procedure above. However, the sample size required for passive (α, δ)-learning of thresholds is Ω(ln(1/η)/η). Thus, the number of errors in the pool would be k = η · Ω(ln(1/η)/η) = Ω(ln(1/η)), which depends on η. To avoid this dependence, the auditing algorithm we propose uses Alg. 1 below to select a subset of the random sample which still represents the distribution well, but whose size is only Ω(1/η).

Lemma 3.2. Let δ, η_max ∈ (0, 1). Let S be a pool such that err(S, H⊣) ≤ η_max. Let S_q be the output of Alg. 1 with inputs S, η_max, δ, and let ĥ = argmin_{h∈H⊣} err(S_q, h). Then with probability 1 − δ, err(S_q, ĥ) ≤ 6η_max and err(S, ĥ) ≤ 17η_max.

The algorithm for auditing thresholds on the line in the agnostic case is listed in Alg. 2. It first achieves (C, δ)-learning of H⊣ for a fixed C (in step 7, based on Lemmas 3.1 and 3.2), and then improves its accuracy to achieve (α, δ)-learning for α > 0 by additional passive sampling in a restricted region. Theorem 3.3 below provides the guarantees for Alg. 2.

Algorithm 1: Representative Subset Selection
1: Input: pool S = (x₁, . . . , x_m) (with hidden labels), x_i ∈ [0, 1]; η_max ∈ (0, 1]; δ ∈ (0, 1).
2: T ← max{⌊1/(3η_max)⌋, 1}.
3: Let U = {x₁, . . . , x₁, . . . , x_m, . . . , x_m} be the multiset with T copies of each point in S.
4: Sort and rename the points in U such that x′_i ≤ x′_{i+1} for all i ∈ [Tm].
5: Let S_q be an empty multiset.
6: for t = 1 to T do
7:   S(t) ← {x′_{(t−1)m+1}, . . . , x′_{tm}}.
8:   Draw 14 ln(8/δ) points from S(t) independently and uniformly at random, and add them to S_q (with duplications).
9: end for
10: Return S_q (with the corresponding hidden labels).

Algorithm 2: Auditing for Thresholds with a constant α
1: Input: η_max, δ, α ∈ (0, 1); access to a distribution D such that err(D, H⊣) ≤ η_max.
2: ν ← α/5.
3: Draw a random pool S₀ (with hidden labels) of size m_ν(η_max, δ/2, 1) from D.
4: Draw a random sample S of size m_ag((1 + ν)η_max, δ/2, 1) uniformly from S₀.
5: Get a subset S_q using Alg. 1 with inputs S, 2(1 + ν)η_max, δ/2.
6: Query the points in S_q from highest to lowest. Stop after ⌈12|S_q|(1 + ν)η_max⌉ + 1 negatives.
7: Find â such that h_â minimizes the error on the labeled part of S_q.
8: Let S₁ be the set of the 36(1 + ν)η_max|S₀| closest points to â in S, from each side of â.
9: Draw S₂ of size m_ag(ν/72, δ/2, 1) from S₁ (see the definition of m_ag above).
10: Query all points in S₂, and return ĥ that minimizes the error on S₂.

Theorem 3.3. Let η_max, δ, α ∈ (0, 1). Let D be a distribution with err(D, H⊣) ≤ η_max. Alg. 2 with inputs η_max, δ, α has an auditing complexity of O(ln(1/δ)/α²), and returns ĥ such that with probability 1 − δ, err(D, ĥ) ≤ (1 + α)η_max.

It immediately follows that if η = err(D, H) is known, (α, δ)-learning is achievable with an auditing complexity that does not depend on η. This is formulated in the following corollary.

Corollary 3.4 ((α, δ)-learning for H⊣). Let η, α, δ ∈ (0, 1]. For any distribution D with err(D, H⊣) = η, Alg. 2 with inputs η_max = η, α, δ (α, δ)-learns D with respect to H⊣, with an auditing complexity of O(ln(1/δ)/α²).

A similar result holds if the error is known up to a multiplicative constant. But what if no bound on η is known? The following lower bound shows that in this case, the best auditing complexity for thresholds is similar to the best active label complexity.

Theorem 3.5 (Lower bound on auditing H⊣ without η_max). Consider any constant α ≥ 0. For any δ ∈ (0, 1), if an auditing algorithm (α, δ)-learns every distribution D such that err(D, H⊣) ≥ η_min, then the algorithm's auditing complexity is Ω(ln((1 − δ)/δ) · ln(1/η_min)).

In the next section we show that there are classes with a significant gap between the active and auditing complexities even without an upper bound on the error.
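For concreteness, the subset-selection step (Alg. 1) can be transcribed almost directly into Python. This is a sketch: the helper name, the RNG seeding, and rounding 14 ln(8/δ) up to an integer are our choices.

```python
import math
import random

def representative_subset(pool, eta_max, delta, seed=0):
    """Alg. 1 sketch: make T = max(floor(1/(3*eta_max)), 1) copies of
    each point, sort, split into T consecutive chunks of m points, and
    draw 14*ln(8/delta) points uniformly (with replacement) from each
    chunk."""
    rng = random.Random(seed)
    m = len(pool)
    T = max(math.floor(1.0 / (3.0 * eta_max)), 1)
    U = sorted(pool * T)                 # T copies of each point, sorted
    draws = math.ceil(14 * math.log(8 / delta))
    S_q = []
    for t in range(T):
        chunk = U[t * m:(t + 1) * m]
        S_q.extend(rng.choice(chunk) for _ in range(draws))
    return S_q
```

The labels stay hidden; only the selected points are returned, and |S_q| = T · ⌈14 ln(8/δ)⌉ = O(ln(1/δ)/η_max), independent of the original pool size m.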
4 Axis-Aligned Rectangles

A natural extension of thresholds to higher dimensions is the class of axis-aligned rectangles, in which the labels are determined by a d-dimensional hyperrectangle. This hypothesis class, first introduced in Blumer et al. (1989), has been studied extensively in different regimes (Kearns, 1998; Long and Tan, 1998), including active learning (Hanneke, 2007b). An axis-aligned-rectangle hypothesis is a disjunction of 2d thresholds. For simplicity of presentation, we consider here the slightly simpler class of disjunctions of d thresholds over the positive orthant R^d_+. It is easy to reduce learning of an axis-aligned rectangle in R^d to learning of a disjunction of thresholds in R^{2d}, by mapping each point x ∈ R^d to a point x̃ ∈ R^{2d} such that for i ∈ [d], x̃[i] = max(x[i], 0) and x̃[i + d] = max(0, −x[i]). Thus learning the class of disjunctions is equivalent, up to a factor of two in the dimensionality, to learning rectangles.¹

Because auditing costs are asymmetric, we consider two possibilities for label assignment. For a vector a = (a[1], . . . , a[d]) ∈ R^d_+, define the hypotheses h_a and h⁻_a by h_a(x) = 2I[∃i ∈ [d], x[i] ≥ a[i]] − 1 and h⁻_a(x) = −h_a(x). Define H2 = {h_a | a ∈ R^d_+} and H2⁻ = {h⁻_a | a ∈ R^d_+}. In H2 the positive points are outside the rectangle, and in H2⁻ the negatives are outside. Both classes have VC dimension d. All of our results for these classes can be easily extended to the corresponding classes of general axis-aligned rectangles on R^d, with at most a factor-of-two penalty on the auditing complexity.

¹This reduction suffices if the origin is known to be in the rectangle. Our algorithms and results can all be extended to the case where rectangles are not required to include the origin; to keep the algorithm and analysis as simple as possible, we state the results for this special case.
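The reduction and the hypothesis class can be made concrete in a few lines. This is our illustrative Python sketch (the names `lift` and `h_disj` are not from the paper):

```python
def lift(x):
    """Map x in R^d to x~ in R^{2d}: x~[i] = max(x[i], 0) and
    x~[i+d] = max(0, -x[i]), turning a rectangle around the origin
    into a disjunction of 2d thresholds over the positive orthant."""
    return [max(xi, 0.0) for xi in x] + [max(-xi, 0.0) for xi in x]

def h_disj(a, x):
    """h_a(x) = 2*I[exists i: x[i] >= a[i]] - 1, i.e. the class H2:
    positives lie outside the 'rectangle' [0, a[0]] x ... x [0, a[d-1]]."""
    return 1 if any(xi >= ai for xi, ai in zip(x, a)) else -1
```

For example, the rectangle [−1, 1] × [−3, 3] in R² corresponds, after lifting, to the threshold vector ã = (1, 3, 1, 3) in R⁴.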
4.1 The Realizable Case

We first consider the pool setting in the realizable case, and show a sharp contrast between the auditing complexity and the active label complexity for H2 and H2⁻. Assume a pool of size m. While the active label complexity for H2 and H2⁻ can be as large as m, the auditing complexities of the two classes are quite different: for H2⁻ the auditing complexity can be as large as m, but for H2 it is at most d. We start by showing the upper bound for auditing of H2.

Theorem 4.1 (Pool auditing upper bound for H2). The auditing complexity of any unlabeled pool S_u of size m with respect to H2 is at most d.

Proof. The method is a generalization of the approach to auditing for thresholds. Let h* ∈ H2 be such that err(S, h*) = 0. For each i ∈ [d], order the points x in S by the values of their i-th coordinates x[i]. Query the points sequentially from the largest value to the smallest (breaking ties arbitrarily), and stop when the first negative label is returned, for some point x_i. Set a[i] ← x_i[i], and note that h* labels all points in {x | x[i] > a[i]} positive. Return the hypothesis ĥ = h_a. This procedure clearly queries at most d negative points, and ĥ agrees with the labeling of h*.

It is easy to see that a similar approach yields an auditing complexity of 2d for full axis-aligned rectangles. We now provide a lower bound for the auditing complexity of H2⁻, which immediately implies the same lower bound for the active label complexity of H2⁻ and H2.

Theorem 4.2 (Pool auditing lower bound for H2⁻). For any m and any d ≥ 2, there is a pool S_u ⊆ R^d_+ of size m whose auditing complexity with respect to H2⁻ is m.

Proof. The construction is a simple adaptation of a construction due to Dasgupta (2005), originally showing an active learning lower bound for the class of hyperplanes. Let the pool be composed of m distinct points on the intersection of the unit circle and the positive orthant: S_u = {(cos θ_j, sin θ_j)} for distinct θ_j ∈ [0, π/2].
Any labeling which labels exactly one point in S_u positive and the rest negative is realizable for H2⁻, and so is the all-negative labeling. Thus, any algorithm that distinguishes between these labelings with probability 1 must query all the negative labels.

Corollary 4.3 (Realizable active label complexity of H2 and H2⁻). For each of H2 and H2⁻, there is a pool of size m whose active label complexity is m.

4.2 The Agnostic Case

We now consider H2 in the agnostic case, where η > 0. The best known algorithm for active learning of rectangles (2, δ)-learns a very restricted class of distributions (continuous product distributions which are sufficiently balanced in all directions) with an active label complexity of Õ(d³ p(ln(1/η)) p(ln(1/δ))), where p(·) is a polynomial (Hanneke, 2007b). However, for a general distribution, the active label complexity cannot be significantly better than the passive label complexity. This is formalized in the following theorem.

Theorem 4.4 (Agnostic active label complexity of H2). Let α, η > 0 and δ ∈ (0, 1/2). Any learning algorithm that (α, δ)-learns, with respect to H2, all distributions D such that err(D, H2) = η for some η > 0, has an active label complexity of Ω(d/η).

In contrast, the auditing complexity of H2 can be much smaller, as we show for Alg. 3 below.

Theorem 4.5 (Auditing complexity of H2). For η_min, α, δ ∈ (0, 1), there is an algorithm that (α, δ)-learns all distributions with η ≥ η_min with respect to H2, with an auditing complexity of O((d² ln(1/(αδ))/α²) · ln²(1/η_min)).
Alg. 3 sequentially queries points in each direction, until enough negative points have been observed to make sure the threshold in this direction has been overstepped. To bound the number of negative labels, the algorithm iteratively refines lower bounds on the locations of the best thresholds, and an upper bound on the negative error, defined as the probability that a point from D with a negative label is classified as positive by a minimal-error classifier. The algorithm uses queries that mostly result in positive labels, and stops when the upper bound on the negative error cannot be refined. The idea of iteratively refining a set of possible hypotheses has been used in a long line of active learning works (Cohn et al., 1994; Balcan et al., 2006; Hanneke, 2007a; Dasgupta et al., 2008). Here we refine in a particular way that uses the structure of H2, and allows bounding the number of negative examples we observe.

We use the following notation in Alg. 3. The negative error of a hypothesis is errneg(D, h) = P_{(X,Y)∼D}[h(X) = 1 and Y = −1]. It is easy to see that the same convergence guarantees that hold for err(·, ·) using a sample size mν(ϵ, δ, d) hold also for the negative error errneg(·, ·) (see Sabato et al., 2013). For a labeled set of points S, an ϵ ∈ (0, 1) and a hypothesis class H, denote Vν(S, ϵ, H) = {h ∈ H | err(S, h) ≤ err(S, H) + (2ν + ν²) · max(err(S, H), ϵ)}. For a vector b ∈ Rd+, define H2[b] = {ha ∈ H2 | a ≥ b}.

Algorithm 3: Auditing for H2
1: Input: ηmin > 0, α ∈ (0, 1], access to distribution D over Rd+ × {−1, +1}.
2: ν ← α/25.
3: for t = 0 to ⌊log2(1/ηmin)⌋ do
4:   ηt ← 2^−t.
5:   Draw a sample St of size mν(ηt, δ/log2(1/ηmin), 10d) with hidden labels.
6:   for i = 1 to d do
7:     j ← 0.
8:     while j ≤ ⌈(1 + ν)ηt|St|⌉ + 1 do
9:       If unqueried points exist, query the unqueried point with the highest i’th coordinate;
10:      If the query returned −1, j ← j + 1.
11:    end while
12:    bt[i] ← the i’th coordinate of the last queried point, or 0 if all points were queried.
13:   end for
14:   Set Sbt to St, with unqueried labels set to −1.
15:   Vt ← Vν(Sbt, ηt, H2[bt]).
16:   η̂t ← max_{h∈Vt} errneg(Sbt, h).
17:   if η̂t > ηt/4 then
18:     Skip to step 21.
19:   end if
20: end for
21: Return ĥ ∈ argmin_{h∈H2[bt]} err(Sbt, h).

Theorem 4.5 is proven in Sabato et al. (2013). The proof idea is to show that at each round t, Vt includes any h∗ ∈ argmin_{h∈H} err(D, h), and η̂t is an upper bound on errneg(D, h∗). Further, at any given point, minimizing the error on Sbt is equivalent to minimizing the error on the entire (unlabeled) sample. We conclude that the algorithm obtains a good approximation of the total error. Its auditing complexity is bounded since it queries a bounded number of negative points at each round.

5 Outcome-dependent Costs for a General Hypothesis Class
In this section we return to the realizable pool setting and consider finite hypothesis classes H. We address general outcome-dependent costs and a general space of labels Y, so that H ⊆ Y^X. Let S ⊆ X be an unlabeled pool, and let cost : S × H → R+ denote the cost of a query: for x ∈ S and h ∈ H, cost(x, h) is the cost of querying the label of x given that h is the true (unknown) hypothesis. In the auditing setting, Y = {−1, +1} and cost(x, h) = I[h(x) = −1]. For active learning, cost ≡ 1. Note that under this definition of cost function, the algorithm may not know the cost of the query until it reveals the true hypothesis. Define OPTcost(S) to be the minimal cost of an algorithm that, for any labeling of S which is consistent with some h ∈ H, produces a hypothesis ĥ such that err(S, ĥ) = 0. In the active learning setting, where cost ≡ 1, it is NP-hard to obtain OPTcost(S) for general H and S. This can be shown by a reduction to set-cover (Hyafil and Rivest, 1976). A simple adaptation of the reduction for the auditing complexity, which we defer to the full version of this work, shows that it is also NP-hard to obtain OPTcost(S) in the auditing setting.
For active learning, and for query costs that do not depend on the true hypothesis (that is, cost(x, h) ≡ cost(x)), Golovin and Krause (2011) showed an efficient greedy strategy that achieves a cost of O(OPTcost(S) · ln(|H|)) for any S. This approach has also been shown to provide considerable performance gains in practical settings (Gonen et al., 2013). The greedy strategy consists of iteratively selecting the point whose label splits the set of possible hypotheses as evenly as possible, with a normalization proportional to the cost of each query. We now show that for outcome-dependent costs, another greedy strategy provides similar approximation guarantees for OPTcost(S). The algorithm is defined as follows: suppose that so far the algorithm has requested labels for x1, . . . , xt and received the corresponding labels y1, . . . , yt. Letting St = {(x1, y1), . . . , (xt, yt)}, denote the current version space by V(St) = {h ∈ H|S | ∀(x, y) ∈ St, h(x) = y}. The next query selected by the algorithm is

x ∈ argmax_{x∈S} min_{h∈H} |V(St) \ V(St ∪ {(x, h(x))})| / cost(x, h).

That is, the algorithm selects the query that, in the worst case over the possible hypotheses, would remove the most hypotheses from the version space, normalized by the outcome-dependent cost of the query. The algorithm terminates when |V(St)| = 1, and returns the single hypothesis in the version space.

Theorem 5.1. For any cost function cost, hypothesis class H, pool S, and true hypothesis h ∈ H, the cost of the proposed algorithm is at most (ln(|H|S| − 1) + 1) · OPTcost(S).

If cost is the auditing cost, the proposed algorithm corresponds to the following intuitive strategy: at every round, select a query such that, if its result is a negative label, the number of hypotheses removed from the version space is the largest.
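The selection rule above can be sketched directly. This is a hypothetical toy implementation, not the paper's code: hypotheses are plain label functions, the version space is a set of names, and a small floor on the cost stands in for the convention that zero-cost (positive-label) outcomes never determine the worst case.

```python
def greedy_query(pool, hypotheses, version, cost, eps=1e-9):
    """One step of the greedy rule: choose x maximizing, over the
    worst-case hypothesis h, the number of hypotheses that x's label
    would eliminate from the version space, divided by cost(x, h).
    `hypotheses` maps names to label functions; `version` is the set of
    names consistent with the answers so far.  For auditing,
    cost(x, h) = 1 if h(x) == -1 else 0 (hence the eps floor)."""
    def eliminated(x, y):
        # hypotheses removed from the version space if x's label is y
        return sum(1 for n in version if hypotheses[n](x) != y)

    def worst_case_score(x):
        return min(
            eliminated(x, hypotheses[n](x)) / max(cost(x, hypotheses[n]), eps)
            for n in version
        )

    return max(pool, key=worst_case_score)
```

With the auditing cost this picks exactly the query whose negative outcome would shrink the version space the most, matching the intuitive strategy described above.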
This strategy is consistent with a simple principle based on a partial ordering of the points: for points x, x′ in the pool, define x′ ⪯ x if {h ∈ H | h(x′) = −1} ⊇ {h ∈ H | h(x) = −1}, so that if x′ has a negative label, so does x. In the auditing setting, it is always preferable to query x before querying x′. Therefore, for any realizable auditing problem, there exists an optimal algorithm that adheres to this principle. It is thus encouraging that our greedy algorithm is also consistent with it.

An O(ln(|H|S|)) approximation factor for auditing is less appealing than the same factor for active learning. By information-theoretic arguments, active label complexity is at least log2(|H|S|) (and hence the approximation at most squares the cost), but this does not hold for auditing. Nonetheless, hardness-of-approximation results for set cover (Feige, 1998), in conjunction with the reduction to set cover of Hyafil and Rivest (1976) mentioned above, imply that such an approximation factor cannot be avoided for a general auditing algorithm.

6 Conclusion and Future Directions
As summarized in Section 2, we show that in the auditing setting, suitable algorithms can achieve improved costs in the settings of thresholds on the line and axis-parallel rectangles. There are many open questions suggested by our work. First, it is known that for some hypothesis classes, active learning cannot improve over passive learning for certain distributions (Dasgupta, 2005), and the same is true for auditing. However, exponential speedups are possible for active learning on certain classes of distributions (Balcan et al., 2006; Dasgupta et al., 2008), in particular ones with a small disagreement coefficient (Hanneke, 2007a). It is an open question whether a similar property of the distribution can guarantee an improvement with auditing over active or passive learning. This might be especially relevant to important hypothesis classes such as decision trees or halfspaces.
An interesting generalization of the auditing problem is a multiclass setting with a different cost for each label. Finally, one may attempt to optimize other performance measures for auditing, as described in the introduction. These measures are different from those studied in active learning, and may lead to new algorithmic insights.

References
M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning (ICML), pages 65–72, 2006.
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 49–56. ACM, 2009.
J. Blocki, N. Christin, A. Dutta, and A. Sinha. Regret minimizing audits: A learning-theoretic basis for privacy protection. In Proceedings of the 24th IEEE Computer Security Foundations Symposium, 2011.
A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, Oct. 1989.
D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15:201–221, 1994.
S. Dasgupta. Analysis of a greedy active learning strategy. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 337–344. MIT Press, Cambridge, MA, 2005.
S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 353–360. MIT Press, Cambridge, MA, 2008.
L. Devroye and G. Lugosi. Lower bounds in pattern recognition and learning. Pattern Recognition, 28(7):1011–1018, 1995.
U. Feige. A threshold of ln n for approximating set cover.
Journal of the ACM (JACM), 45(4):634–652, 1998.
D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.
A. Gonen, S. Sabato, and S. Shalev-Shwartz. Efficient active learning of halfspaces: an aggressive approach. In The 30th International Conference on Machine Learning (ICML), 2013.
S. Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, pages 353–360. ACM, 2007a.
S. Hanneke. Teaching dimension and the complexity of active learning. In Learning Theory, pages 66–81. Springer, 2007b.
S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15–17, May 1976.
A. Kapoor, E. Horvitz, and S. Basu. Selective supervision: Guiding supervised learning with decision-theoretic active learning. In Proceedings of IJCAI, 2007.
M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983–1006, 1998.
S. R. Kulkarni, S. K. Mitter, and J. N. Tsitsiklis. Active learning using arbitrary binary valued queries. Machine Learning, 11(1):23–35, 1993.
P. M. Long and L. Tan. PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. Machine Learning, 30(1):7–21, 1998.
D. D. Margineantu. Active cost-sensitive learning. In Proceedings of IJCAI, 2007.
S. Sabato, A. D. Sarwate, and N. Srebro. Auditing: Active learning with outcome-dependent query costs. arXiv preprint arXiv:1306.2347, 2013.
B. Settles, M. Craven, and L. Friedland. Active learning with real annotation costs. In Proceedings of the NIPS Workshop on Cost-Sensitive Learning, 2008.
V. N. Vapnik and A. Y. Chervonenkis.
On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, XVI(2):264–280, 1971.
Restricting exchangeable nonparametric distributions

Sinead A. Williamson, University of Texas at Austin
Steven N. MacEachern, The Ohio State University
Eric P. Xing, Carnegie Mellon University

Abstract
Distributions over matrices with exchangeable rows and infinitely many columns are useful in constructing nonparametric latent variable models. However, the distribution implied by such models over the number of features exhibited by each data point may be poorly suited for many modeling tasks. In this paper, we propose a class of exchangeable nonparametric priors obtained by restricting the domain of existing models. Such models allow us to specify the distribution over the number of features per data point, and can achieve better performance on data sets where the number of features is not well-modeled by the original distribution.

1 Introduction
The Indian buffet process [IBP, 11] is one of several distributions over matrices with exchangeable rows and infinitely many columns, only a finite (but random) number of which contain any non-zero entries. Such distributions have proved useful for constructing flexible latent feature models that do not require us to specify the number of latent features a priori. In such models, each column of the random matrix corresponds to a latent feature, and each row to a data point. The non-zero elements of a row select the subset of features that contribute to the corresponding data point. However, distributions such as the IBP make certain assumptions about the structure of the data that may be inappropriate. Specifically, such priors impose distributions on the number of data points that exhibit a given feature, and on the number of features exhibited by a given data point. For example, in the IBP, the number of features exhibited by a data point is marginally Poisson-distributed, and the probability of a data point exhibiting a previously-observed feature is proportional to the number of times that feature has been seen so far.
These distributional assumptions may not be appropriate for many modeling tasks. For example, the IBP has been used to model both text [17] and network [13] data. It is well known that word frequencies in text corpora and degree distributions of networks often exhibit power-law behavior; it seems reasonable to suppose that this behavior would be better captured by models that assume a heavy-tailed distribution over the number of latent features, rather than the Poisson distribution assumed by the IBP and related random matrices. In certain cases we may instead wish to add constraints on the number of latent features exhibited per data point, particularly in cases where we expect, or desire, the latent features to correspond to interpretable features, or causes, of the data [20]. For example, we might believe that each data point exhibits exactly S features – corresponding perhaps to speakers in a dialog, members of a team, or alleles in a genotype – but be agnostic about the total number of features in our data set. A model that explicitly encodes this prior expectation about the number of features per data point will tend to lead to more interpretable and parsimonious results. Alternatively, we may wish to specify a minimum number of latent features. For example, the IBP has been used to select possible next states in a hidden Markov model [10]. In such a model, we do not expect to see a state that allows no transitions (including self-transitions). Nonetheless, because a data point in the IBP can have zero features with non-zero probability, this situation can occur, resulting in an invalid transition distribution.

In this paper, we propose a method for modifying the distribution over the number of non-zero elements per row in arbitrary exchangeable matrices, allowing us to control the number of features per data point in a corresponding latent feature model.
We show that our construction yields exchangeable distributions, and present Monte Carlo methods for posterior inference. Our experimental evaluation shows that this approach allows us to incorporate prior beliefs about the number of features per data point into our model, yielding superior modeling performance.

2 Exchangeability
We say a finite sequence (X1, . . . , XN) is exchangeable [see, for example, 1] if its distribution is unchanged under any permutation σ of {1, . . . , N}. Further, we say that an infinite sequence X1, X2, . . . is infinitely exchangeable if all of its finite subsequences are exchangeable. Such distributions are appropriate when we do not believe the order in which we see our data is important. In such cases, a model whose posterior distribution depends on the order in which we see our data is not justified. In addition, exchangeable models often yield efficient Gibbs samplers. De Finetti’s law tells us that a sequence is exchangeable iff the observations are i.i.d. given some latent distribution. This means that we can write the probability of any exchangeable sequence as

P(X1 = x1, X2 = x2, . . . ) = ∫_Θ ∏_i µθ(Xi = xi) ν(dθ)   (1)

for some probability distribution ν over parameter space Θ, and some parametrized family {µθ}θ∈Θ of conditional probability distributions. Throughout this paper, we will use the notation p(x1, x2, . . . ) = P(X1 = x1, X2 = x2, . . . ) to represent the joint distribution over an exchangeable sequence x1, x2, . . . ; p(xn+1 | x1, . . . , xn) to represent the associated predictive distribution; and p(x1, . . . , xn, θ) := ∏_{i=1}^n µθ(Xi = xi) ν(θ) to represent the joint distribution over the observations and the parameter θ.

2.1 Distributions over exchangeable matrices
The Indian buffet process [IBP, 11] is a distribution over binary matrices with exchangeable rows and infinitely many columns.
In the de Finetti representation, the mixing distribution ν is a beta process, the parameter θ is a countably infinite measure with atom sizes πk ∈ (0, 1], and the conditional distribution µθ is a Bernoulli process [17]. The beta process and the Bernoulli process are both completely random measures [CRM, 12] – distributions over random measures on some space Ω that assign independent masses to disjoint subsets of Ω, and that can be written in the form Γ = ∑_{k=1}^∞ πk δφk. We can think of each atom of θ as determining the latent probability for a column of a matrix with infinitely many columns, and the Bernoulli process as sampling binary values for the entries of that column of the matrix. The resulting matrix has a finite number of non-zero entries, with the number of non-zero entries in each row distributed as Poisson(α) and the total number of non-zero columns in N rows distributed as Poisson(αHN), where HN is the Nth harmonic number. The number of rows with a non-zero entry for a given column exhibits a “rich gets richer” property – a new row has a one in a given column with probability proportional to the number of times a one has appeared in that column in the preceding rows. Different patterns of behavior can be obtained with different choices of CRM. A three-parameter extension to the IBP [15] replaces the beta process with a completely random measure called the stable-beta process, which includes the beta process as a special case. The resulting random matrix exhibits power law behavior: the total number of features exhibited in a data set of size N grows as O(N^s) for some s > 0, and the number of data points exhibiting each feature also follows a power law. The number of features per data point, however, remains Poisson-distributed.
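The marginal scheme just described can be simulated directly via the usual culinary metaphor: customer i takes each previously sampled dish k with probability m_k/i, then Poisson(α/i) brand-new dishes, giving Poisson(α) ones per row and roughly αHN non-zero columns after N rows. A minimal sketch (function names are hypothetical; Knuth's Poisson sampler is only suitable for modest α):

```python
import math
import random

def sample_ibp(alpha, n_rows, seed=0):
    """Sample a binary matrix from the one-parameter IBP predictive scheme."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; fine for small lam.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    counts = []          # m_k: how many rows so far have a one in column k
    rows = []
    for i in range(1, n_rows + 1):
        row = [1 if rng.random() < m / i else 0 for m in counts]
        for k, z in enumerate(row):
            counts[k] += z
        new = poisson(alpha / i)      # brand-new columns for row i
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    width = len(counts)
    return [r + [0] * (width - len(r)) for r in rows]
```

Averaging row sums over many rows recovers the Poisson(α) marginal that the restrictions in Section 3 are designed to replace.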
The infinite gamma-Poisson process [iGaP, 18] replaces the beta process with a gamma process, and the Bernoulli process with a Poisson process, to give a distribution over non-negative integer-valued matrices with infinitely many columns and exchangeable rows. In this model, the sum of each row is distributed according to a negative binomial distribution, and the number of non-zero entries in each row is Poisson-distributed. The beta-negative binomial process [21] replaces the Bernoulli process with a negative binomial process to get an alternative distribution over non-negative integer-valued matrices.

3 Removing the Poisson assumption
While different choices of CRMs in the de Finetti construction can alter the distribution over the number of data points that exhibit a feature and (in the case of non-binary matrices) the row sums, they retain a marginally Poisson distribution over the number of distinct features exhibited by a given data point. The construction of Caron [4] extends the IBP to allow the number of features in each row to follow a mixture of Poissons, by assigning data point-specific parameters that have an effect equivalent to a monotonic transformation on the atom sizes in the underlying beta process; however, conditioned on these parameters, the sum of each row is still Poisson-distributed. This repeatedly occurring Poisson distribution is a direct result of the construction of a binary matrix from a combination of CRMs. To elaborate on this, note that, marginally, the distribution over the value of each element zik of a row zi of the IBP (or a three-parameter IBP) is given by a Bernoulli distribution. Therefore, by the law of rare events, the sum ∑_k zik is distributed according to a Poisson distribution. A similar argument applies to integer-valued matrices such as the infinite gamma-Poisson process.
Marginally, the distribution over whether an element zik is greater than zero is given by a Bernoulli distribution, hence the number of non-zero elements, ∑_k zik ∧ 1, is Poisson-distributed. The distribution over the row sum, ∑_k zik, will depend on the choice of CRMs. It follows that, if we wish to circumvent the requirement of a Poisson (or mixture of Poisson) number of features per data point in an IBP-like model, we must remove the completely random assumption on either the de Finetti mixing distribution or the family of conditional distributions. The remainder of this section discusses how we can obtain arbitrary marginal distributions over the number of features per row by using conditional distributions that are not completely random.

3.1 Restricting the family of conditional distributions in the de Finetti representation
Recall from Section 2 that any exchangeable sequence can be represented as a mixture over some family of conditional distributions. The support of this family determines the support of the exchangeable sequence. For example, in the IBP the family of conditional distributions is the Bernoulli process, which has support in {0, 1}∞. A sample from the IBP therefore has support in {{0, 1}∞}^N. We are familiar with the idea of restricting the support of a distribution to a measurable subset. For example, a truncated Gaussian is a Gaussian distribution restricted to a contiguous section of the real line. In general, we can restrict an arbitrary probability distribution µ with support Ω to a measurable subset A ⊂ Ω by defining µ|A(·) := µ(·)I(· ∈ A)/µ(A).

Theorem 1 (Restricted exchangeable distributions). We can restrict the support of an exchangeable distribution by restricting the family of conditional distributions {µθ}θ∈Θ introduced in Equation 1, to obtain an exchangeable distribution on the restricted space.

Proof. Consider an unrestricted exchangeable model with de Finetti representation p(x1, . . . , xN, θ) = ∏_{i=1}^N µθ(Xi = xi) ν(θ).
Let p|A be the restriction of p such that Xi ∈ A, i = 1, 2, . . . , obtained by restricting the family of conditional distributions {µθ} to {µ|A_θ} as described above. Then

p|A(x1, . . . , xN, θ) = ∏_{i=1}^N µ|A_θ(Xi = xi) ν(θ) = ∏_{i=1}^N [µθ(Xi = xi) I(xi ∈ A) / µθ(Xi ∈ A)] ν(θ),

and

p|A(xN+1 | x1, . . . , xN) ∝ I(xN+1 ∈ A) ∫_Θ [∏_{i=1}^{N+1} µθ(Xi = xi) / ∏_{i=1}^{N+1} µθ(Xi ∈ A)] ν(dθ)   (2)

is an exchangeable sequence by construction, according to de Finetti’s law.

We give three examples of exchangeable matrices where the number of non-zero entries per row is restricted to follow a given distribution. While our focus is on exchangeability of the rows, we note that the following distributions (like their unrestricted counterparts) are invariant under reordering of the columns, and that the resulting matrices are separately exchangeable [2].

Example 1 (Restriction of the IBP to a fixed number of non-zero entries per row). The family of conditional distributions in the IBP is given by the Bernoulli process. We can restrict the support of the Bernoulli process to an arbitrary measurable subset A ⊂ {0, 1}∞ – for example, the set of all vectors z ∈ {0, 1}∞ such that ∑_k zk = S for some integer S. The conditional distribution of a matrix Z = {z1, . . . , zN} under such a distribution is given by:

µ|S_B(Z = Z) = [∏_{i=1}^N µB(Zi = zi) I(∑_k zik = S)] / (µB(∑_k Zik = S))^N = [∏_{k=1}^∞ πk^{mk} (1 − πk)^{N−mk} / PoiBin(S | {πk}_{k=1}^∞)^N] · ∏_{i=1}^N I(∑_{k=1}^∞ zik = S),   (3)

where mk = ∑_i zik and PoiBin(· | {πk}_{k=1}^∞) is the infinite limit of the Poisson-binomial distribution [6], which describes the distribution over the number of successes in a sequence of independent but non-identical Bernoulli trials. The probability of Z given in Equation 3 is the infinite limit of the conditional Bernoulli distribution [6], which describes the distribution of the locations of the successes in such a trial, conditioned on their sum. Example 2 (Restriction of the iGaP to a fixed number of non-zero entries per row).
The family of conditional distributions in the iGaP is given by the Poisson process, which has support in N∞. Following Example 1, we can restrict this support to the set of all vectors z ∈ N∞ such that ∑_k zk ∧ 1 = S for some integer S – i.e. the set of all non-negative integer-valued infinite vectors with S non-zero entries. The conditional distribution of a matrix Z = {z1, . . . , zN} under such a distribution is given by:

µ|S_G(Z = Z) = [∏_{i=1}^N µG(Zi = zi) I(∑_{k=1}^∞ zik ∧ 1 = S)] / (µG(∑_{k=1}^∞ Zik ∧ 1 = S))^N = [∏_{k=1}^∞ λk^{mk} e^{−λk} / (∏_{i=1}^N zik! · PoiBin(S | {e^{−λk}}_{k=1}^∞)^N)] · ∏_{i=1}^N I(∑_{k=1}^∞ zik ∧ 1 = S).   (4)

Example 3 (Restriction of the IBP to a random number of non-zero entries per row). Rather than specify the number of non-zero entries in each row a priori, we can allow it to be random, with some arbitrary distribution f(·) over the non-negative integers. A Bernoulli process restricted to have f-marginals can be described as

µ|f_B(Z) = ∏_{i=1}^N µ|Si_B(Zi = zi) f(Si) = ∏_{i=1}^N [f(Si) I(∑_{k=1}^∞ zik = Si) / PoiBin(Si | {πk}_{k=1}^∞)] · ∏_{k=1}^∞ πk^{mk} (1 − πk)^{N−mk},   (5)

where Sn = ∑_{k=1}^∞ znk. If we marginalize over B = ∑_{k=1}^∞ πk δφk, the resulting distribution is exchangeable, because mixtures of i.i.d. distributions are i.i.d. We note that, even if we choose f to be Poisson(α), we will not recover the IBP. The IBP has Poisson(α) marginals over the number of non-zero elements per row, but the conditional distribution is described by a Poisson-binomial distribution. The Poisson-restricted IBP, however, will have Poisson marginal and conditional distributions. Figure 1 shows some examples of samples from the single-parameter IBP, with parameter α = 5, with various restrictions applied.

[Figure 1: Samples from restricted IBPs; panels: IBP, 1 per row, 5 per row, 10 per row, Uniform{1, . . . , 20}, power-law (s = 2).]

3.2 Direct restriction of the predictive distributions
The construction in Section 3.1 is explicitly conditioned on a draw B from the de Finetti mixing distribution ν.
Since it might be cumbersome to explicitly represent the infinite-dimensional object B, it is tempting to consider constructions that directly restrict the predictive distribution p(XN+1 | X1, . . . , XN), where B has been marginalized out. Unfortunately, the distribution over matrices obtained by this approach does not (in general – see the appendix for a counter-example) correspond to the distribution over matrices obtained by restricting the family of conditional distributions. Moreover, the resulting distribution will not in general be exchangeable. This means it is not appropriate for data sets where we have no explicit ordering of the data, and also means we cannot directly use the predictive distribution to define a Gibbs sampler (as is possible in exchangeable models).

Theorem 2 (Sequences obtained by directly restricting the predictive distribution of an exchangeable sequence are not, in general, exchangeable). Let p be the distribution of the unrestricted exchangeable model introduced in the proof of Theorem 1. Let p∗|A be the distribution obtained by directly restricting this unrestricted exchangeable model such that Xi ∈ A, i.e.

p∗|A(xN+1 | x1, . . . , xN) ∝ I(xN+1 ∈ A) · [∫_Θ ∏_{i=1}^{N+1} µθ(X = xi) ν(dθ)] / [∫_Θ ∏_{i=1}^{N+1} µθ(X ∈ A) ν(dθ)].   (6)

In general, this will not be equal to Equation 2, and cannot be expressed as a mixture of i.i.d. distributions. Proof. To demonstrate that this is true, consider the counterexample given in Example 4.

Example 4 (A three-urn buffet). Consider a simple form of the Indian buffet process, with a base measure consisting of three unit-mass atoms. We can represent the predictive distribution of such a model using three indexed urns, each containing one red ball (representing a one in the resulting matrix) and one blue ball (representing a zero in the resulting matrix).
We generate a sequence of ball sequences by repeatedly picking a ball from each urn, noting the ordered sequence of colors, and returning the balls to their urns, plus one ball of each sampled color.

Proposition 1. The three-urn buffet is exchangeable. Proof. By using the fact that a sequence is exchangeable iff the predictive distribution, given the first N elements of the sequence, of the N + 1st and N + 2nd entries is exchangeable [9], it is trivial to show that this model is exchangeable and that, for example,

p(XN+1 = (r, b, r), XN+2 = (r, r, b) | X1:N) = [m1 m2 (N + 1 − m3)/(N + 1)³] · [(m1 + 1)(N + 1 − m2) m3/(N + 2)³] = p(XN+1 = (r, r, b), XN+2 = (r, b, r) | X1:N),   (7)

where mi is the number of times in the first N samples that the ith ball in a sample has been red.

Proposition 2. The directly restricted three-urn scheme (and, by extension, the directly restricted IBP) is not exchangeable. Proof. Consider the same scheme, but where the outcome is restricted such that there is one, and only one, red ball per sample. The probability of a sequence in this restricted model is given by

p∗(XN+1 = x | X1:N) = [∑_{k=1}^3 (mk/(N + 1 − mk)) I(xk = r)] / [∑_{k=1}^3 mk/(N + 1 − mk)],

and, for example,

p∗(XN+1 = (r, b, b), XN+2 = (b, r, b) | X1:N) = [(m1/(N + 1 − m1)) / ∑_k mk/(N + 1 − mk)] · [(m2/(N + 2 − m2)) / ((m1 + 1)/(N + 1 − m1) + m2/(N + 2 − m2) + m3/(N + 2 − m3))] ≠ p∗(XN+1 = (b, r, b), XN+2 = (r, b, b) | X1:N),   (8)

therefore the restricted model is not exchangeable. By introducing a normalizing constant – corresponding to restricting over a subset of {0, 1}³ – that depends on the previous samples, we have broken the exchangeability of the sequence. By extension, a model obtained by directly restricting the predictive distribution of the IBP is not exchangeable.

We note that there may well be situations where a non-exchangeable model, such as that described in Proposition 2, is appropriate for our data – for example where there is an explicit ordering on the data.
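Proposition 2 can also be checked numerically. The sketch below is a hypothetical exact-arithmetic encoding, under one concrete reading of the urn dynamics: each urn starts with one red and one blue ball, urn k comes up red with probability reds[k]/(2 + n) after n draws, outcomes with exactly one red are renormalized, and each urn gains a copy of the ball it produced. Two orderings of the same multiset of outcomes then receive different probabilities, which is exactly the failure of exchangeability.

```python
import math
from fractions import Fraction

def restricted_seq_prob(outcomes):
    """Exact probability of a sequence under the directly restricted
    three-urn scheme; outcomes[t] is the index of the single red urn at
    draw t.  Urn k starts with 1 red and 1 blue ball, so after n draws
    it holds 2 + n balls and comes up red with probability reds[k]/(2+n)."""
    reds = [1, 1, 1]
    prob = Fraction(1)
    for n, j in enumerate(outcomes):
        tot = 2 + n
        # unnormalized weight of the outcome "only urn k is red"
        w = [Fraction(reds[k], tot)
             * math.prod(Fraction(tot - reds[i], tot) for i in range(3) if i != k)
             for k in range(3)]
        prob *= w[j] / sum(w)
        reds[j] += 1          # urn j gains a red ball; the others gain blues
    return prob
```

Under these assumed dynamics, restricted_seq_prob([0, 0, 1]) = 2/99 while restricted_seq_prob([0, 1, 0]) = 1/42, so reordering the draws changes the probability.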
It is not, however, an appropriate model if we believe our data to be exchangeable, or if we are interested in finding a single, stationary latent distribution describing our data. This exchangeable setting is the focus of this paper, so we defer exploration of distributions of non-exchangeable matrices obtained by restriction of the predictive distribution to future work.

4 Inference
We focus on models obtained by restricting the IBP to have f-marginals over the number of non-zero elements per row, as described in Example 3. Note that when f = δS, this yields the setting described in Example 1. Extension to other cases, such as the restricted iGaP model of Example 2, is straightforward. We work with a truncated model, where we approximate the countably infinite sequence {πk}_{k=1}^∞ with a large, but finite, vector π := (π1, . . . , πK), where each atom πk is distributed according to Beta(α/K, 1). An alternative approach would be to develop a slice sampler that uses a random truncation, avoiding the error introduced by the fixed truncation [14, 16]. We assume a likelihood function g(X|Z) = ∏_i g(xi|zi).

4.1 Sampling the binary matrix Z
For marginal functions f that assign probability mass to a contiguous, non-singleton subset of N, we can Gibbs sample each entry of Z according to

p(zik = 1 | xi, π, Z¬ik, ∑_{j≠k} zij = a) ∝ πk · [f(a + 1)/p(∑_k zk = a + 1 | π)] · g(xi | zik = 1, Z¬ik),
p(zik = 0 | xi, π, Z¬ik, ∑_{j≠k} zij = a) ∝ (1 − πk) · [f(a)/p(∑_k zk = a | π)] · g(xi | zik = 0, Z¬ik).   (9)

Where f = δS, this approach will fail, since any move that changes zik must change ∑_k zik. In this setting, instead, we sample the locations of the non-zero entries z(j)_i, j = 1, . . . , S of zi:

p(z(j)_i = k | xi, π, z(¬j)_i) ∝ πk (1 − πk)^{−1} g(xi | z(j)_i = k, z(¬j)_i).   (10)

To improve mixing, we also include Metropolis-Hastings moves that propose an entire row of Z. We include details in the supplementary material.
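For f = δS, a simple (if naive) way to draw a whole row from the conditional given π is rejection: sample Bernoulli(πk) coordinates and keep the draw only if exactly S entries are non-zero. This is a hypothetical sketch, not the paper's sampler; it is exact but inefficient whenever PoiBin(S | π) is small, which is precisely why Gibbs moves over the locations of the non-zero entries are preferable in practice.

```python
import random

def sample_row_given_S(pi, S, seed=0, max_tries=100000):
    """Draw z with z[k] ~ Bernoulli(pi[k]), conditioned on sum(z) == S,
    i.e. one row of the restricted (f = delta_S) model given pi,
    by rejection sampling.  Acceptance probability is PoiBin(S | pi)."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        z = [1 if rng.random() < p else 0 for p in pi]
        if sum(z) == S:
            return z
    raise RuntimeError("acceptance probability PoiBin(S | pi) too small")
```

Every accepted row is an exact draw from the conditional Bernoulli distribution appearing in Equation 3.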
4.2 Sampling the beta process atoms π

Conditioned on Z, the distribution of π is

ν(\{π_k\}_{k=1}^K \mid Z) \propto μ^{|f}_{\{π_k\}}(Z) \, ν(\{π_k\}_{k=1}^K) \propto \frac{\prod_{k=1}^{K} π_k^{m_k + α/K - 1} (1 - π_k)^{N - m_k}}{\prod_{i=1}^{N} \mathrm{PoiBin}(S_i \mid π)}.   (11)

The Poisson-binomial term can be calculated exactly in O(K \sum_k z_{ik}) time using either a recursive algorithm [3, 5] or an algorithm based on the characteristic function that uses the discrete Fourier transform [8]. It can also be approximated using a skewed-normal approximation to the Poisson-binomial distribution [19]. We can therefore sample from the posterior of π using Metropolis–Hastings steps. Since we believe our posterior will be close to the posterior for the unrestricted model, we use the proposal distribution q(π_k | Z) = Beta(α/K + m_k, N + 1 − m_k) to propose new values of π_k.

4.3 Evaluating the predictive distribution

In certain cases, we may wish to directly evaluate the predictive distribution p^{|f}(z_{N+1} | z_1, . . . , z_N). Unfortunately, in the case of the IBP, we are unable to perform the integral in Equation 2 analytically. We can, however, estimate the predictive distribution using importance sampling. We sample T measures π^{(t)} ∼ ν(π|Z), where ν(π|Z) is the posterior distribution over π in the finite approximation to the (unrestricted) IBP, and then weight them to obtain the restricted predictive distribution

p^{|f}(z_{N+1} \mid z_1, . . . , z_N) \approx \frac{\sum_{t=1}^{T} w_t \, μ^{|f}_{π^{(t)}}(z_{N+1})}{\sum_t w_t},   (12)

where w_t = μ^{|f}_{π^{(t)}}(z_1, . . . , z_N) / μ_{π^{(t)}}(z_1, . . . , z_N), and

μ^{|f}_π(Z) \propto \prod_{i=1}^{N} \frac{f(S_i)\, I(\sum_{k=1}^{K} z_{ik} = S_i)}{\mathrm{PoiBin}(S_i \mid π)} \prod_{k=1}^{K} π_k^{m_k} (1 - π_k)^{N - m_k}.

Figure 2: Top row: True features. Bottom row: Sample data points for S = 2.

Table 1: Structure error on synthetic data with 100 data points and S features per data point.
           S = 2            S = 5            S = 8            S = 11           S = 14
IBP    7297.4 ± 2822.8  8982.2 ± 1981.7  7442.8 ± 3602.0  8862.1 ± 3920.2  20244 ± 6809.7
rIBP     57.2 ± 66.4    3469.7 ± 133.7   5963.8 ± 871.4   11413 ± 1992.9   12199 ± 2593.8
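The recursive evaluation of the Poisson-binomial terms in Eq. (11) can be sketched as a dynamic program that folds in one feature at a time; truncating the recursion at s recovers the cost quoted above, while the version below computes the whole pmf.

```python
import numpy as np
from itertools import combinations

def poisson_binomial_pmf(pi):
    """pmf[s] = PoiBin(s | pi): P(sum_k z_k = s) for independent
    z_k ~ Bernoulli(pi[k]), via the one-feature-at-a-time recursion
    in the spirit of [3, 5]."""
    pmf = np.array([1.0])
    for p in pi:
        new = np.zeros(len(pmf) + 1)
        new[:-1] += pmf * (1.0 - p)  # feature k inactive
        new[1:] += pmf * p           # feature k active
        pmf = new
    return pmf

# sanity check against brute-force enumeration over {0, 1}^K
pi = [0.1, 0.5, 0.3, 0.8]
pmf = poisson_binomial_pmf(pi)
K = len(pi)
for s in range(K + 1):
    brute = sum(
        np.prod([pi[k] if k in c else 1.0 - pi[k] for k in range(K)])
        for c in map(set, combinations(range(K), s))
    )
    assert abs(pmf[s] - brute) < 1e-12
```

The same routine supplies the denominator p(\sum_k z_k = a | π) needed by the Gibbs updates of Eq. (9).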
5 Experimental evaluation

In this paper, we have described how distributions over exchangeable matrices, such as the IBP, can be modified to allow more flexible control over the distributions over the number of latent features. In this section, we perform experiments on both real and synthetic data. The synthetic data experiments are designed to show that appropriate restriction can yield more interpretable features. The experiments on real data are designed to show that careful choice of the distribution over the number of latent features in our models can lead to improved predictive performance.

5.1 Synthetic data

The IBP has been used to discover latent features that correspond to interpretable phenomena, such as latent causes behind patient symptoms [20]. If we have prior knowledge about the number of latent features per data point – for example, the number of players in a team, or the number of speakers in a conversation – we may expect both better predictive performance and more interpretable latent features. In this experiment, we evaluate this hypothesis on synthetic data, where the true latent features are known. We generated images by randomly selecting S of 16 binary features, shown in Figure 2, superimposing them, and adding isotropic Gaussian noise (σ² = 0.25). We modeled the resulting data using an uncollapsed linear Gaussian model, as described in [7], using both the IBP and the IBP restricted to have S features per row. To compare the generating matrix Z_0 and our posterior estimate Z, we looked at the structure error [20]. This is the sum of absolute differences between the upper-triangular portions of Z_0 Z_0^T and E[Z Z^T], and is a general measure of graph dissimilarity. Table 1 shows the structure error obtained using both a standard IBP model (IBP) and an IBP restricted to have the correct number of latent features (rIBP), for varying numbers of features S.
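The structure-error metric can be sketched directly from its definition (we take the strict upper triangle here; whether the diagonal is included is a convention we assume away):

```python
import numpy as np

def structure_error(Z0, Z_samples):
    """Sum of absolute differences between the upper-triangular parts
    of Z0 Z0^T and the posterior mean E[Z Z^T] estimated from samples."""
    EZZ = np.mean([Z @ Z.T for Z in Z_samples], axis=0)
    D = np.abs(Z0 @ Z0.T - EZZ)
    return D[np.triu_indices_from(D, k=1)].sum()

Z0 = np.array([[1, 0], [1, 1]])
assert structure_error(Z0, [Z0]) == 0                     # perfect recovery
assert structure_error(Z0, [np.eye(2, dtype=int)]) == 1.0  # one mismatched overlap
```

Since Z Z^T counts shared features between pairs of data points, the metric is invariant to permutations of the feature columns, which is exactly what is needed when comparing against a generating matrix.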
In each case, the number of data points is 100, the IBP parameter α is fixed to S, and the model is truncated to 50 features. Each experiment was repeated 10 times on independently generated data sets; we present the mean and standard deviation. All samplers were run for 5000 samples; the first 2500 were discarded as burn-in. Where the number of features per data point is small relative to the total number of features, the restricted model does a much better job at recovering the “correct” latent structure. While the IBP may be able to explain the training data set as well as the restricted model, it will not in general recover the desired latent structure – which is important if we wish to interpret the latent structure. Once the number of features per data point increases beyond half the total number of features, the model is ill-specified – it is more parsimonious to represent features via the absence of a bar. As a result, both models perform poorly at recovering the generating structure. The restricted model – and indeed the IBP – should only be expected to recover easily interpretable features where the number of such features per data point is small relative to the total number of features.

Table 2: Proportion correct at n on classifying documents from the 20newsgroup data set.
 n       1      2      3      4      5      6      7      8      9     10
IBP    0.591  0.726  0.796  0.848  0.878  0.905  0.923  0.936  0.952  0.958
rIBP   0.622  0.749  0.819  0.864  0.899  0.918  0.935  0.948  0.959  0.966
 n      11     12     13     14     15     16     17     18     19     20
IBP    0.961  0.969  0.974  0.978  0.982  0.989  0.991  0.996  0.997  1.000
rIBP   0.971  0.978  0.981  0.983  0.988  0.992  0.998  1.000  1.000  1.000

5.2 Classification of text data

The IBP and its extensions have been used to directly model text data [17, 15]. In such settings, the IBP is used to directly model the presence or absence of words, and so the matrix Z is observed rather than latent, and the total number of features is given by the vocabulary size.
We hypothesize that the Poisson assumption made by the IBP is not appropriate for text data, as the statistics of word use in natural language tend to follow a heavier-tailed distribution [22]. To test this hypothesis, we modeled a collection of corpora using both an IBP and an IBP restricted to have a negative binomial distribution over the number of words. Our corpora were 20 collections of newsgroup postings on various topics (for example, comp.graphics, rec.autos, rec.sport.hockey)¹. No pre-processing of the documents was performed. Since the vocabulary (and hence the feature space) is finite, we truncated both models to the vocabulary size. Due to the very large state space, we restricted our samples such that, in a single sample, atoms with the same posterior distribution were assigned the same value. For each model, α was set to the mean number of words per document in the corresponding group, and the maximum likelihood parameters were used for the negative binomial distribution. To evaluate the quality of the models, we classified held-out documents based on their likelihood under each of the 20 newsgroups. This experiment is designed to replicate an experiment performed by [15] to compare the original and three-parameter IBP models. For both models, we estimated the predictive distribution by generating 1000 samples from the posterior of the beta process in the IBP model. For the IBP, we used these samples directly to estimate the predictive distribution; for the restricted model, we used the importance-weighted samples obtained using Equation 12. For each model, we trained on 1000 randomly selected documents, and tested on a further 1000 documents. Table 2 shows the fraction of documents correctly classified in the first n labels – i.e., the fraction of documents for which the correct label is among the n most likely. The restricted IBP (rIBP) performs uniformly better than the unrestricted model.
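The evaluation metric of Table 2 (proportion correct at n) can be sketched as follows; `loglik` has one row per document and one column per newsgroup.

```python
import numpy as np

def proportion_correct_at_n(loglik, labels, n):
    """Fraction of documents whose true label is among the n classes
    with the highest predictive log-likelihood."""
    topn = np.argsort(-loglik, axis=1)[:, :n]  # best classes first
    hits = (topn == np.asarray(labels)[:, None]).any(axis=1)
    return hits.mean()

loglik = np.array([[3.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
assert proportion_correct_at_n(loglik, [1, 1], 1) == 0.5
assert proportion_correct_at_n(loglik, [1, 1], 2) == 1.0
```

By construction the metric is non-decreasing in n and reaches 1.0 at n equal to the number of classes, as in the last column of Table 2.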
6 Discussion and future work

The framework explored in this paper allows us to relax the distributional assumptions made by existing exchangeable nonparametric processes. As future work, we intend to explore which applications and models can most benefit from this greater flexibility. We note that the model, as posed, suffers from an identifiability issue. Let \tilde{B} = \sum_{k=1}^{\infty} \tilde{π}_k δ_{φ_k} be the measure obtained by transforming B = \sum_{k=1}^{\infty} π_k δ_{φ_k} such that \tilde{π}_k = π_k/(1 - π_k). Then, scaling \tilde{B} by a positive scalar does not affect the likelihood of a given matrix Z. We intend to explore the consequences of this in future work.

Acknowledgments

We would like to thank Zoubin Ghahramani for valuable suggestions and discussions throughout this project. We would also like to thank Finale Doshi-Velez and Ryan Adams for pointing out the non-identifiability mentioned in Section 6. This research was supported in part by NSF grants DMS-1209194 and IIS-1111142, AFOSR grant FA95501010247, and NIH grant R01GM093156.

¹ http://people.csail.mit.edu/jrennie/20Newsgroups/

References
[1] D. Aldous. Exchangeability and related topics. École d'Été de Probabilités de Saint-Flour XIII, pages 1–198, 1985.
[2] D. J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581–598, 1981.
[3] R. E. Barlow and K. D. Heidtmann. Computing k-out-of-n system reliability. IEEE Transactions on Reliability, 33:322–323, 1984.
[4] F. Caron. Bayesian nonparametric models for bipartite graphs. In Neural Information Processing Systems, 2012.
[5] S. X. Chen, A. P. Dempster, and J. S. Liu. Weighted finite population sampling to maximize entropy. Biometrika, 81:457–469, 1994.
[6] S. X. Chen and J. S. Liu. Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. Statistica Sinica, 7:875–892, 1997.
[7] F. Doshi-Velez and Z. Ghahramani. Accelerated Gibbs sampling for the Indian buffet process.
In International Conference on Machine Learning, 2009.
[8] M. Fernández and S. Williams. Closed-form expression for the Poisson-binomial probability density function. IEEE Transactions on Aerospace Electronic Systems, 46:803–817, 2010.
[9] S. Fortini, L. Ladelli, and E. Regazzini. Exchangeability, predictive distributions and parametric models. Sankhyā: The Indian Journal of Statistics, Series A, pages 86–109, 2000.
[10] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. Sharing features among dynamical systems with beta processes. In Neural Information Processing Systems, 2010.
[11] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Neural Information Processing Systems, 2005.
[12] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
[13] K. T. Miller, T. L. Griffiths, and M. I. Jordan. Nonparametric latent feature models for link prediction. In Neural Information Processing Systems, 2009.
[14] R. M. Neal. Slice sampling. Annals of Statistics, 31(3):705–767, 2003.
[15] Y. W. Teh and D. Görür. Indian buffet processes with power law behaviour. In Neural Information Processing Systems, 2009.
[16] Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In Artificial Intelligence and Statistics, 2007.
[17] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In Artificial Intelligence and Statistics, 2007.
[18] M. Titsias. The infinite gamma-Poisson feature model. In Neural Information Processing Systems, 2007.
[19] A. Y. Volkova. A refinement of the central limit theorem for sums of independent random indicators. Theory of Probability and its Applications, 40:791–794, 1996.
[20] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Uncertainty in Artificial Intelligence, 2006.
[21] M. Zhou, L. A. Hannah, D. B. Dunson, and L. Carin.
Beta-negative binomial process and Poisson factor analysis. In Artificial Intelligence and Statistics, 2012.
[22] G. K. Zipf. Selective Studies and the Principle of Relative Frequency in Language. Harvard University Press, 1932.
On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization Ke Hou, Zirui Zhou, Anthony Man–Cho So Department of Systems Engineering & Engineering Management The Chinese University of Hong Kong Shatin, N. T., Hong Kong {khou,zrzhou,manchoso}@se.cuhk.edu.hk Zhi–Quan Luo Department of Electrical & Computer Engineering University of Minnesota Minneapolis, MN 55455, USA luozq@ece.umn.edu Abstract Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such problem is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm–regularized problem, which may be of independent interest. 1 Introduction The problem of finding a low–rank matrix that (approximately) satisfies a given set of conditions has recently generated a lot of interest in many communities. Indeed, such a problem arises in a wide variety of applications, including approximation algorithms [17], automatic control [5], matrix classification [20], matrix completion [6], multi–label classification [1], multi–task learning [2], network localization [7], subspace learning [24], and trace regression [9], just to name a few. Due to the combinatorial nature of the rank function, the task of recovering a matrix with the desired rank and properties is generally intractable. To circumvent this, a popular approach is to use the trace norm1 (also known as the nuclear norm) as a surrogate for the rank function. 
Such an approach is quite natural, as the trace norm is the tightest convex lower bound of the rank function over the set of matrices with spectral norm at most one [13]. In the context of machine learning, the trace norm is typically used as a regularizer in the minimization of a certain convex loss function. This gives rise to convex optimization problems of the form

min_{X \in R^{m \times n}} \{ F(X) = f(X) + τ ∥X∥_* \},   (1)

where f : R^{m×n} → R is the convex loss function, ∥X∥_* denotes the trace norm of X, and τ > 0 is a regularization parameter. By standard results in convex optimization [4], the above formulation is tractable (i.e., polynomial–time solvable) for many choices of the loss function f. In practice, however, one is often interested in settings where the decision variable X is of high dimension. Thus, there has been much research effort in developing fast algorithms for solving (1) lately. Currently, a popular method for solving (1) is the proximal gradient method (PGM), which exploits the composite nature of the objective function F and certain smoothness properties of the loss function f [8, 19, 11]. The attractiveness of PGM lies not only in its excellent numerical performance, but also in its strong theoretical convergence rate guarantees. Indeed, for the trace norm–regularized problem (1) with f being convex and continuously differentiable and ∇f being Lipschitz continuous, the standard PGM will achieve an additive error of O(1/k) in the optimal value after k iterations. Moreover, this error can be reduced to O(1/k²) using acceleration techniques; see, e.g., [19]. The sublinear O(1/k²) convergence rate is known to be optimal if f is simply given by a first–order oracle [12]. On the other hand, if f is strongly convex, then the convergence rate can be improved to O(c^k) for some c ∈ (0, 1) (i.e., a linear convergence rate) [16].

¹ Recall that the trace norm of a matrix is defined as the sum of its singular values.
However, in machine learning, the loss functions of interest are often highly structured and hence not just given by an oracle, but they are not necessarily strongly convex either. For instance, in matrix completion, a commonly used loss function is the square loss f(·) = ∥A(·) − b∥_2^2 / 2, where A : R^{m×n} → R^p is a linear measurement operator and b ∈ R^p is a given set of observations. Clearly, f is not strongly convex when A has a non–trivial nullspace (or equivalently, when A is not injective). In view of this, it is natural to ask whether linear convergence of the PGM can be established for a larger class of loss functions. In this paper, we take a first step towards answering this question. Specifically, we show that when the loss function f takes the form f(X) = h(A(X)), where A : R^{m×n} → R^p is an arbitrary linear operator and h : R^p → R is strictly convex with certain smoothness and curvature properties, the PGM for solving (1) has an asymptotic linear rate of convergence. Note that f need not be strictly convex even if h is, as A is arbitrary. Our result covers a wide range of loss functions used in the literature, such as the square loss and the logistic loss. Moreover, to the best of our knowledge, it is the first linear convergence result concerning the application of a first–order method to the trace norm–regularized problem (1) that does not require the strong convexity of f. The key to our convergence analysis is a new Lipschitzian error bound for problem (1). Roughly, it says that the distance between a point X ∈ R^{m×n} and the optimal solution set of (1) is on the order of the residual norm ∥prox_τ(X − ∇f(X)) − X∥_F, where prox_τ is the proximity operator associated with the regularization term τ∥X∥_*. Once we have such a bound, a routine application of the powerful analysis framework developed by Luo and Tseng [10] will yield the desired linear convergence result.
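The earlier remark that the square loss is not strongly convex when A is not injective is easy to verify numerically: with fewer measurements than unknowns, the Hessian A^T A is rank-deficient, so f has zero curvature along the nullspace of A. A toy check (the sizes below are our own, hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 5, 8                       # fewer measurements than unknowns
A = rng.standard_normal((p, d))   # rank(A) <= p < d, so A is not injective

H = A.T @ A                       # Hessian of f(x) = ||A x - b||_2^2 / 2
eigs = np.linalg.eigvalsh(H)
assert np.linalg.matrix_rank(A) < d
assert eigs.min() < 1e-10         # zero eigenvalue: f is not strongly convex
```

The same picture carries over to the matrix setting of (1): vectorizing X turns the measurement operator into a wide matrix whenever p < mn.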
Prior to this work, Lipschitzian error bounds for composite function minimization were available for cases where the non–smooth part either has a polyhedral epigraph (such as the ℓ1–norm) [23] or is the (sparse) group LASSO regularization [22, 25]. However, the question of whether a similar bound holds for trace norm regularization has remained open, despite its apparent similarity to ℓ1–norm regularization. Indeed, unlike the ℓ1–norm, the trace norm has a non–polyhedral epigraph; see, e.g., [18]. Moreover, the existing approach for establishing error bounds for ℓ1–norm or (sparse) group LASSO regularization is based on splitting the decision variables into groups, where variables from different groups do not interfere with one another, so that each group can be analyzed separately. However, the trace norm of a matrix is determined by its singular values, and each of them depends on every single entry of the matrix. Thus, we cannot use the same splitting approach to analyze the entries of the matrix. To overcome the above difficulties, we make the crucial observation that if ¯X is an optimal solution to (1), then both ¯X and −∇f( ¯X) have the same set of left and right singular vectors; see Proposition 4.2. As a result, we can use matrix perturbation theory to get hold of the spectral structure of the points that are close to the optimal solution set. This in turn allows us to establish a Lipschitzian error bound for the trace norm–regularized problem (1), thereby resolving the aforementioned open question in the affirmative.

2 Preliminaries

2.1 Basic Setup

We consider the trace norm–regularized optimization problem (1), in which the loss function f : R^{m×n} → R takes the form

f(X) = h(A(X)),   (2)

where A : R^{m×n} → R^p is a linear operator and h : R^p → R is a function satisfying the following assumptions:

Assumption 2.1 (a) The effective domain of h, denoted by dom(h), is open and non–empty.
(b) The function h is continuously differentiable with Lipschitz–continuous gradient on dom(h) and is strongly convex on any convex compact subset of dom(h).

Note that Assumption 2.1(b) implies the strict convexity of h on dom(h) and the Lipschitz continuity of ∇f. Now, let X denote the set of optimal solutions to problem (1). We make the following assumption concerning X:

Assumption 2.2 The optimal solution set X is non–empty.

The above assumptions can be justified in various applications. For instance, in matrix completion, the square loss f(·) = ∥A(·) − b∥_2^2 / 2 induced by the linear measurement operator A and the set of observations b ∈ R^p is of the form (2), with h(·) = ∥(·) − b∥_2^2 / 2. Moreover, it is clear that such an h satisfies Assumptions 2.1 and 2.2. In multi–task learning, the loss function takes the form f(·) = \sum_{t=1}^{T} ℓ(A_t(·), y_t), where T is the number of learning tasks, A_t : R^{m×n} → R^p is the linear operator defined by the input data for the t–th task, y_t ∈ R^p is the output data for the t–th task, and ℓ : R^p × R^p → R measures the learning error. Note that f can be put into the form (2), where A : R^{m×n} → R^{Tp} is given by A(X) = (A_1(X), A_2(X), . . . , A_T(X)), and h : R^{Tp} → R is given by h(z) = \sum_{t=1}^{T} ℓ(z_t, y_t) with z_t ∈ R^p for t = 1, . . . , T and z = (z_1, . . . , z_T). Moreover, in the case where ℓ is, say, the square loss (i.e., ℓ(z_t, y_t) = ∥z_t − y_t∥_2^2 / 2) or the logistic loss (i.e., ℓ(z_t, y_t) = \sum_{i=1}^{p} log(1 + exp(−z_{ti} y_{ti}))), it can be verified that Assumptions 2.1 and 2.2 hold.

2.2 Some Facts about the Optimal Solution Set

Since f(·) = h(A(·)) by (2) and h(·) is strictly convex on dom(h) by Assumption 2.1(b), it is easy to verify that the map X ↦ A(X) is invariant over the optimal solution set X. In other words, there exists a ¯z ∈ dom(h) such that for any X* ∈ X, we have A(X*) = ¯z. Thus, we can express X as

X = { X ∈ R^{m×n} : τ∥X∥_* = v* − h(¯z), A(X) = ¯z },

where v* > −∞ is the optimal value of (1). In particular, X is a non–empty convex compact set.
This implies that every X ∈ R^{m×n} has a unique projection ¯X ∈ X onto X, which is given by the solution to the following optimization problem: dist(X, X) = min_{Y ∈ X} ∥X − Y∥_F. In addition, since X is bounded and F is convex, it follows from [14, Corollary 8.7.1] that the level set {X ∈ R^{m×n} : F(X) ≤ ζ} is bounded for any ζ ∈ R.

2.3 Proximal Gradient Method and the Residual Map

To motivate the PGM for solving (1), we recall an alternative characterization of the optimal solution set X. Consider the proximity operator prox_τ : R^{m×n} → R^{m×n}, which is defined as

prox_τ(X) = arg min_{Z ∈ R^{m×n}} \{ τ∥Z∥_* + \frac{1}{2}∥X − Z∥_F^2 \}.   (3)

By comparing the optimality conditions for (1) and (3), it is immediate that a solution X* ∈ R^{m×n} is optimal for (1) if and only if it satisfies the following fixed–point equation:

X* = prox_τ(X* − ∇f(X*)).   (4)

This naturally leads to the following PGM for solving (1):

Y^{k+1} = X^k − α_k ∇f(X^k),  X^{k+1} = prox_{τα_k}(Y^{k+1}),   (5)

where α_k > 0 is the step size in the k–th iteration, for k = 0, 1, . . .; see, e.g., [8, 19, 11]. As is well–known, the proximity operator defined above can be expressed in terms of the so–called matrix shrinkage operator. To describe this result, we introduce some definitions. Let µ > 0 be given. The non–negative vector shrinkage operator s_µ : R_+^p → R_+^p is defined as (s_µ(z))_i = max{0, z_i − µ}, where i = 1, . . . , p. The matrix shrinkage operator S_µ : R^{m×n} → R^{m×n} is defined as S_µ(X) = U Σ_µ V^T, where X = U Σ V^T is the singular value decomposition of X with Σ = Diag(σ(X)) and σ(X) being the vector of singular values of X, and Σ_µ = Diag(s_µ(σ(X))). Then, it can be shown that

prox_τ(X) = S_τ(X);   (6)

see, e.g., [11, Theorem 3]. Our goal in this paper is to study the convergence rate of the PGM (5). Towards that end, we need a measure to quantify its progress towards optimality. One natural candidate would be dist(·, X), the distance to the optimal solution set X. Despite its intuitive appeal, such a measure is hard to compute or analyze.
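The shrinkage operators and the PGM iteration (5) translate directly into code. A minimal numpy sketch (the function names are ours):

```python
import numpy as np

def matrix_shrink(X, mu):
    """S_mu(X): soft-threshold the singular values of X by mu, so that
    prox_tau(X) = S_tau(X) as in Eq. (6)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt

def pgm_step(X, grad_f, alpha, tau):
    """One iteration of the proximal gradient method (5)."""
    return matrix_shrink(X - alpha * grad_f(X), tau * alpha)

# sanity check: the singular values really are soft-thresholded
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))
Y = matrix_shrink(X, 0.5)
s_X = np.linalg.svd(X, compute_uv=False)
s_Y = np.linalg.svd(Y, compute_uv=False)
assert np.allclose(s_Y, np.maximum(s_X - 0.5, 0.0))
```

Since S_µ zeroes out all singular values below µ, each iteration tends to produce low-rank iterates, which is exactly the effect the trace-norm regularizer is meant to induce.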
In view of (4) and (6), a reasonable alternative would be the norm of the residual map R : R^{m×n} → R^{m×n}, which is defined as

R(X) = S_τ(X − ∇f(X)) − X.   (7)

Intuitively, the residual map measures how much a solution X ∈ R^{m×n} violates the optimality condition (4). In particular, X is an optimal solution to (1) if and only if R(X) = 0. However, since ∥R(·)∥_F is only a surrogate of dist(·, X), we need to establish a relationship between them. This motivates the development of a so–called error bound for problem (1).

3 Main Results

Key to our convergence analysis of the PGM (5) is the following error bound for problem (1), which constitutes the main contribution of this paper:

Theorem 3.1 (Error Bound for Trace Norm Regularization) Suppose that in problem (1), f is of the form (2), and Assumptions 2.1 and 2.2 are satisfied. Then, for any ζ ≥ v*, there exist constants η > 0 and ǫ > 0 such that

dist(X, X) ≤ η∥R(X)∥_F whenever F(X) ≤ ζ, ∥R(X)∥_F ≤ ǫ.   (8)

Armed with Theorem 3.1 and some standard properties of the PGM (5), we can apply the convergence analysis framework developed by Luo and Tseng [10] to establish the linear convergence of (5). Recall that a sequence of vectors {w^k}_{k≥0} is said to converge Q–linearly (resp. R–linearly) to a vector w^∞ if there exist an index K ≥ 0 and a constant ρ ∈ (0, 1) such that ∥w^{k+1} − w^∞∥_2 / ∥w^k − w^∞∥_2 ≤ ρ for all k ≥ K (resp. if there exist constants γ > 0 and ρ ∈ (0, 1) such that ∥w^k − w^∞∥_2 ≤ γ · ρ^k for all k ≥ 0).

Theorem 3.2 (Linear Convergence of the Proximal Gradient Method) Suppose that in problem (1), f is of the form (2), and Assumptions 2.1 and 2.2 are satisfied. Moreover, suppose that the step size α_k in the PGM (5) satisfies 0 < α < α_k < ¯α < 1/L_f for k = 0, 1, 2, . . ., where L_f is the Lipschitz constant of ∇f, and α, ¯α are given constants.
Then, the sequence of solutions {X^k}_{k≥0} generated by the PGM (5) converges R–linearly to an element in the optimal solution set X, and the associated sequence of objective values {F(X^k)}_{k≥0} converges Q–linearly to the optimal value v*.

Proof. Under the given setting, it can be shown that there exist scalars κ_1, κ_2, κ_3 > 0, which depend on α, ¯α, and L_f, such that

F(X^k) − F(X^{k+1}) ≥ κ_1 ∥X^k − X^{k+1}∥_F^2,   (9)
F(X^{k+1}) − v* ≤ κ_2 ( dist(X^k, X)^2 + ∥X^{k+1} − X^k∥_F^2 ),   (10)
∥R(X^k)∥_F ≤ κ_3 ∥X^k − X^{k+1}∥_F;   (11)

see the supplementary material. Since {F(X^k)}_{k≥0} is a monotonically decreasing sequence by (9) and F(X^k) ≥ v* for all k ≥ 0, we conclude, again by (9), that X^k − X^{k+1} → 0. This, together with (11), implies that R(X^k) → 0. Thus, by (9), (10) and Theorem 3.1, there exist an index K ≥ 0 and a constant κ_4 > 0 such that for all k ≥ K,

F(X^{k+1}) − v* ≤ κ_4 ∥X^k − X^{k+1}∥_F^2 ≤ \frac{κ_4}{κ_1} (F(X^k) − F(X^{k+1})).

It follows that

F(X^{k+1}) − v* ≤ \frac{κ_4}{κ_1 + κ_4} (F(X^k) − v*),   (12)

which establishes the Q–linear convergence of {F(X^k)}_{k≥0} to v*. Using (9) and (12), we can show that {∥X^{k+1} − X^k∥_F^2}_{k≥0} converges R–linearly to 0, which, together with (11), implies that {X^k}_{k≥0} converges R–linearly to a point in X; see the supplementary material. □

4 Proof of the Error Bound

The structure of our proof of Theorem 3.1 largely follows that laid out in [22, Section 6]. However, as explained in Section 1, some new ingredients are needed in order to analyze the spectral properties of a point that is close to the optimal solution set X. Before we proceed, let us set up the notation that will be used in the proof. Let L > 0 denote the Lipschitz constant of ∇h and ∂∥·∥_* denote the subdifferential of ∥·∥_*. Given a sequence {X^k}_{k≥0} ∈ R^{m×n} \setminus X, define

R^k = R(X^k),  ¯X^k = arg min_{Y ∈ X} ∥X^k − Y∥_F,  δ_k = ∥X^k − ¯X^k∥_F,  z^k = A(X^k),  G^k = ∇f(X^k) = A^*(∇h(z^k)),  ¯G = A^*(∇h(¯z)),   (13)

where A^* is the adjoint operator of A.
The crux of the proof of Theorem 3.1 is the following lemma:

Lemma 4.1 Under the setting of Theorem 3.1, suppose that there exists a convergent sequence {X^k}_{k≥0} ∈ R^{m×n} \setminus X satisfying

F(X^k) ≤ ζ for all k ≥ 0,  R^k → 0,  R^k / δ_k → 0.   (14)

Then, the following hold:
(a) (Asymptotic Optimality) The limit point ¯X of {X^k}_{k≥0} belongs to X.
(b) (Bounded Iterates) There exists a convex compact subset Z of dom(h) such that z^k, ¯z ∈ Z for all k ≥ 0. Consequently, there exists a constant σ ∈ (0, L] such that for all k ≥ 0,

(∇h(z^k) − ∇h(¯z))^T (z^k − ¯z) ≥ σ∥z^k − ¯z∥_2^2.   (15)

(c) (Restricted Invertibility) There exists a constant κ > 0 such that

∥X^k − ¯X^k∥_F ≤ κ∥z^k − ¯z∥_2 = κ∥A(X^k − ¯X^k)∥_2 for all k ≥ 0.   (16)

It is clear that ∥A(X^k − ¯X^k)∥_2 ≤ ∥A∥ · ∥X^k − ¯X^k∥_F, where ∥A∥ = sup_{∥Y∥_F = 1} ∥A(Y)∥_2 is the spectral norm of A. Thus, the key element in Lemma 4.1 is the restricted invertibility property (16). For the sake of continuity, let us proceed to prove Theorem 3.1 by assuming the validity of Lemma 4.1.

Proof. [Theorem 3.1] We argue by contradiction. Suppose that there exists ζ ≥ v* such that (8) fails to hold for all η > 0 and ǫ > 0. Then, there exists a sequence {X^k}_{k≥0} ∈ R^{m×n} \setminus X satisfying (14). Since {X ∈ R^{m×n} : F(X) ≤ ζ} is bounded (see Section 2.2), by passing to a subsequence if necessary, we may assume that {X^k}_{k≥0} converges to some ¯X. Hence, the premises of Lemma 4.1 are satisfied. Now, by Fermat's rule [15, Theorem 10.1], for each k ≥ 0,

R^k ∈ arg min_D \{ ⟨G^k + R^k, D⟩ + τ∥X^k + D∥_* \}.   (17)

Hence, we have ⟨G^k + R^k, R^k⟩ + τ∥X^k + R^k∥_* ≤ ⟨G^k + R^k, ¯X^k − X^k⟩ + τ∥¯X^k∥_*. Since ¯X^k ∈ X and ∇f(¯X^k) = ¯G, we also have −¯G ∈ τ∂∥¯X^k∥_*, which implies that τ∥¯X^k∥_* ≤ ⟨¯G, X^k + R^k − ¯X^k⟩ + τ∥X^k + R^k∥_*. Adding the two inequalities above and simplifying yield

⟨G^k − ¯G, X^k − ¯X^k⟩ + ∥R^k∥_F^2 ≤ ⟨¯G − G^k, R^k⟩ + ⟨R^k, ¯X^k − X^k⟩.   (18)

Since z^k = A(X^k) and ¯z = A(¯X^k), by Lemma 4.1(b,c),

⟨G^k − ¯G, X^k − ¯X^k⟩ = (∇h(z^k) − ∇h(¯z))^T (z^k − ¯z) ≥ σ∥z^k − ¯z∥_2^2 ≥ \frac{σ}{κ^2} ∥X^k − ¯X^k∥_F^2.
(19)

Hence, it follows from (15), (18), (19) and the Lipschitz continuity of ∇h that

\frac{σ}{κ^2} ∥X^k − ¯X^k∥_F^2 + ∥R^k∥_F^2 ≤ (∇h(¯z) − ∇h(z^k))^T A(R^k) + ⟨R^k, ¯X^k − X^k⟩ ≤ L∥A∥^2 ∥X^k − ¯X^k∥_F ∥R^k∥_F + ∥X^k − ¯X^k∥_F ∥R^k∥_F.

In particular, this implies that (σ/κ^2) ∥X^k − ¯X^k∥_F^2 ≤ (L∥A∥^2 + 1) ∥X^k − ¯X^k∥_F ∥R^k∥_F for all k ≥ 0, which, upon dividing both sides by ∥X^k − ¯X^k∥_F, yields a contradiction to (14). □

4.1 Proof of Lemma 4.1

We now return to the proof of Lemma 4.1. Since R^k → 0 by (14) and R is continuous, we have R(¯X) = 0, which implies that ¯X ∈ X. This establishes (a). To prove (b), observe that due to (a), the sequence {X^k}_{k≥0} is bounded. Hence, the sequence {A(X^k)}_{k≥0} is also bounded, which implies that the points z^k = A(X^k) and ¯z = A(¯X) lie in a convex compact subset Z of dom(h) for all k ≥ 0. The inequality (15) then follows from Assumption 2.1(b). Note that we have σ ≤ L, as ∇h is Lipschitz continuous with parameter L. To prove (c), we argue by contradiction. Suppose that (16) is false. Then, by further passing to a subsequence if necessary, we may assume that

\frac{∥A(X^k) − ¯z∥_2}{∥X^k − ¯X^k∥_F} → 0.   (20)

In the sequel, we will also assume without loss of generality that m ≤ n. The following proposition establishes a property of the optimal solution set X that will play a crucial role in our proof.

Proposition 4.2 Consider a fixed ¯X ∈ X. Let ¯X − ¯G = ¯U [Diag(¯σ) 0] ¯V^T be the singular value decomposition of ¯X − ¯G, where ¯U ∈ R^{m×m}, ¯V ∈ R^{n×n} are orthogonal matrices and ¯σ is the vector of singular values of ¯X − ¯G. Then, the matrices ¯X and −¯G can be simultaneously singular–value–decomposed by ¯U and ¯V. Moreover, the set X_c ⊂ X, which is defined as

X_c = \{ X ∈ X : X = ¯U [Diag(σ(X)) 0] ¯V^T \},

is a non–empty convex compact set.

By Proposition 4.2, for every k ≥ 0, the point X^k has a unique projection ˜X^k ∈ X_c onto X_c. Let

γ_k = ∥X^k − ˜X^k∥_F = min_{Y ∈ X_c} ∥X^k − Y∥_F.   (21)

Since X_c ⊂ X, we have γ_k = ∥X^k − ˜X^k∥_F ≥ ∥X^k − ¯X^k∥_F = δ_k. It follows from (20) that

\frac{∥A(X^k) − ¯z∥_2}{∥X^k − ˜X^k∥_F} → 0.
This is equivalent to A(Q^k) → 0, where

Q^k = \frac{X^k − ˜X^k}{γ_k} for all k ≥ 0.   (22)

In particular, we have ∥Q^k∥_F = 1 for all k ≥ 0. By further passing to a subsequence if necessary, we will assume that {Q^k}_{k≥0} converges to some ¯Q. Clearly, we have A(¯Q) = 0 and ∥¯Q∥_F = 1.

4.1.1 Decomposing ¯Q

Our goal now is to show that for k sufficiently large and ǫ > 0 sufficiently small, the point ˆX = ˜X^k + ǫ¯Q belongs to X_c and is closer to X^k than ˜X^k is to X^k. This would then contradict the fact that ˜X^k is the projection of X^k onto X_c. To begin, let σ^k be the vector of singular values of X^k − G^k. Since X^k − G^k → ¯X − ¯G, the sequence {σ^k}_{k≥0} is bounded. Hence, for i = 1, . . . , m, by passing to a subsequence if necessary, we can classify the sequence {σ_i^k}_{k≥0} into one of the following three cases: (A) σ_i^k ≤ τ for all k ≥ 0; (B) σ_i^k > τ and σ_i(˜X^k) > 0 for all k ≥ 0; (C) σ_i^k > τ and σ_i(˜X^k) = 0 for all k ≥ 0. The following proposition gives the key structural properties of ¯Q that will lead to the desired contradiction:

Proposition 4.3 The matrix ¯Q admits the decomposition ¯Q = ¯U [Diag(λ) 0] ¯V^T, where, for i = 1, . . . , m,

λ_i = −lim_{k→∞} σ_i(˜X^k)/γ_k ≤ 0 in Case (A),  λ_i ∈ R in Case (B),  λ_i > 0 in Case (C).

It should be noted that the decomposition given in Proposition 4.3 is not necessarily the singular value decomposition of ¯Q, as λ could have negative components. A proof of Proposition 4.3 can be found in the supplementary material.

4.1.2 Completing the Proof

Armed with Proposition 4.3, we are now ready to complete the proof of Lemma 4.1(c). Since Q^k ≠ 0 for all k ≥ 0, it follows from (22) that ⟨X^k − ˜X^k, ¯Q⟩ > 0 for all k sufficiently large. Fix any such k and let ˆX = ˜X^k + ǫ¯Q, where ǫ > 0 is a parameter to be determined. Since A(¯Q) = 0, it follows from (13) that ∇f(ˆX) = ∇f(˜X^k) = ¯G. Moreover, since ˜X^k ∈ X_c, by the optimality condition (4) and Proposition 4.2, we have

max\{ 0, σ_i(˜X^k) + σ_i(−¯G) − τ \} = σ_i(˜X^k) for i = 1, . . . , m.
(23)

Now, we claim that for ε > 0 sufficiently small, X̂ satisfies

S_τ(X̂ − Ḡ)v̄_i = X̂v̄_i for i = 1, …, n,
ū_i^T S_τ(X̂ − Ḡ) = ū_i^T X̂ for i = 1, …, m, (24)

where ū_i (resp. v̄_i) is the i–th column of Ū (resp. V̄). This would then imply that X̂ ∈ X_c.

To prove the claim, observe that for i = m + 1, …, n, both sides of (24) are equal to 0. Moreover, since X̃^k ∈ X_c, Propositions 4.2 and 4.3 give

X̂ − Ḡ = Ū [Diag(σ(X̃^k) + ελ + σ(−Ḡ)) 0] V̄^T.

Thus, it suffices to show that for ε > 0 sufficiently small,

σ_i(X̃^k) + ελ_i + σ_i(−Ḡ) ≥ 0 for i = 1, …, m, (25)
s_τ(σ_i(X̃^k) + ελ_i + σ_i(−Ḡ)) = σ_i(X̃^k) + ελ_i for i = 1, …, m. (26)

Towards that end, fix an index i = 1, …, m and consider the three cases defined in Section 4.1.1:

Case (A). If σ_i(X̃^k) = 0 for all k sufficiently large, then Proposition 4.3 gives λ_i = 0. Moreover, we have σ_i(−Ḡ) ≤ τ by (23). This implies that both (25) and (26) are satisfied for any choice of ε > 0. On the other hand, if σ_i(X̃^k) > 0 for all k sufficiently large, then Proposition 4.3 gives λ_i < 0. Moreover, we have σ_i(−Ḡ) = τ by (23). By choosing ε > 0 so that σ_i(X̃^k) + ελ_i ≥ 0, we can guarantee that both (25) and (26) are satisfied.

Case (B). Since σ_i(X̃^k) > 0 for all k ≥ 0, we have σ_i(−Ḡ) = τ by (23). Hence, both (25) and (26) can be satisfied by choosing ε > 0 so that σ_i(X̃^k) + ελ_i ≥ 0.

Case (C). By Proposition 4.2, we have X̄ ∈ X_c. Since X^k → X̄ and γ_k = ∥X^k − X̃^k∥_F ≤ ∥X^k − X̄∥_F, we have X̃^k → X̄ as well. It follows that σ_i(X̄) = 0, as σ_i(X̃^k) = 0 for all k ≥ 0 by assumption. Now, since X^k − G^k → X̄ − Ḡ and σ_i^k > τ, we have σ̄_i ≥ τ. Thus, Proposition 4.2 implies that

τ ≤ σ̄_i = σ_i(X̄ − Ḡ) = σ_i(X̄) + σ_i(−Ḡ) = σ_i(−Ḡ).

This, together with (23), yields σ_i(−Ḡ) = τ. Since λ_i > 0 by Proposition 4.3, we conclude that both (25) and (26) can be satisfied by any choice of ε > 0.

Thus, in all three cases, the claim is established. In particular, we have X̂ ∈ X_c.
This, together with ⟨X^k − X̃^k, Q̄⟩ > 0 and ∥Q̄∥_F = 1, yields

∥X^k − X̂∥_F² = ∥X^k − X̃^k − εQ̄∥_F² = ∥X^k − X̃^k∥_F² − 2ε⟨X^k − X̃^k, Q̄⟩ + ε² < ∥X^k − X̃^k∥_F²

for ε > 0 sufficiently small, which contradicts the fact that X̃^k is the projection of X^k onto X_c. This completes the proof of Lemma 4.1(c).

5 Numerical Experiments

In this section, we complement our theoretical results by testing the numerical performance of the PGM (5) on two problems: matrix completion and matrix classification.

Matrix Completion: We randomly generate an n × n matrix M with a prescribed rank r. Then, we fix a sampling ratio θ ∈ (0, 1] and sample p = ⌊θn²⌋ entries of M uniformly at random. This induces a sampling operator P : R^{m×n} → R^p and an observation vector b ∈ R^p. In our experiments, we fix the rank r = 3 and use the square loss f(·) = ∥P(·) − b∥₂²/2 with regularization parameter µ = 1 in problem (1). We then solve the resulting problem for different values of n and θ using the PGM (5) with a fixed step size α = 1. We stop the algorithm when F(X^k) − F(X^{k+1}) < 10⁻⁸. Figure 1 shows the semi–log plots of the error in objective value and the error in solution against the number of iterations. It can be seen that as long as the iterates are close enough to the optimal set, both the objective values and the solutions converge linearly.
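For concreteness, the PGM iterate X^{k+1} = prox_{ατ∥·∥_*}(X^k − α∇f(X^k)) for the completion problem can be sketched as below. This is our own minimal numpy illustration, not the code used in the experiments; the function names and stopping rule are ours, and the step size α = 1 is admissible here because the gradient of the square loss over sampled entries is 1-Lipschitz.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: the proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pgm_matrix_completion(M, mask, tau=1.0, alpha=1.0, iters=500, tol=1e-8):
    """Proximal gradient for (1/2)||P(X) - b||^2 + tau*||X||_* on observed entries.
    mask is the 0/1 indicator of sampled entries; M holds the observed values."""
    X = np.zeros_like(M)
    obj = np.inf
    for _ in range(iters):
        grad = mask * (X - M)                       # gradient of the square loss
        X = svt(X - alpha * grad, alpha * tau)      # proximal (shrinkage) step
        new_obj = 0.5 * np.sum((mask * (X - M))**2) \
            + tau * np.sum(np.linalg.svd(X, compute_uv=False))
        if obj - new_obj < tol:                     # F(X^k) - F(X^{k+1}) < tol
            break
        obj = new_obj
    return X
```

Since the proximal step is an exact minimizer and α ≤ 1/L_f, each iteration decreases the objective monotonically, which is what makes the stopping criterion F(X^k) − F(X^{k+1}) < tol sensible.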
Figure 1: Matrix Completion. [Semi–log plots of the error in objective value (top) and the error in solution (bottom) against the number of iterations, for θ ∈ {0.1, 0.3, 0.5} with n = 1000 and for n ∈ {100, 500, 1000} with θ = 0.3.]

Figure 2: Matrix Classification. [Semi–log plots of the error in objective value and the error in solution against the number of iterations, for θ ∈ {0.2, 0.5, 0.8} with n = 40.]

Matrix Classification: We consider a matrix classification problem under the setting described in [21]. Specifically, we first randomly generate a low-rank matrix classifier X*, which is an n × n symmetric matrix of rank r. Then, we specify a sampling ratio θ ∈ (0, 1] and sample p = ⌊θn²⌋/2 independent n × n symmetric matrices W₁, …, W_p from the standard Wishart distribution with n degrees of freedom. The label of W_i, denoted by y_i, is given by sgn(⟨X*, W_i⟩). In our experiments, we fix the rank r = 3, the dimension n = 40, and use the logistic loss f(·) = Σ_{i=1}^p log(1 + exp(−y_i⟨·, W_i⟩)) with regularization parameter µ = 1 in problem (1). Since a good lower bound on the Lipschitz constant L_f of ∇f is not readily available in this case, a backtracking line search was adopted at each iteration to achieve an acceptable step size; see, e.g., [3]. We stop the algorithm when F(X^k) − F(X^{k+1}) < 10⁻⁶. Figure 2 shows the convergence performance of the PGM (5) as θ varies.
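The classification instance and one backtracking proximal step can be sketched as follows. This is again our own illustration under the stated Wishart/logistic setup, not the authors' code: the helper names are ours, and an Armijo-type sufficient-decrease test stands in for the line search of [3].

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding (prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def wishart(n, rng):
    """Standard Wishart with n degrees of freedom, realized as G G^T."""
    G = rng.standard_normal((n, n))
    return G @ G.T

def logistic_loss_grad(X, Ws, ys):
    """f(X) = sum_i log(1 + exp(-y_i <X, W_i>)) and its gradient."""
    loss, grad = 0.0, np.zeros_like(X)
    for W, y in zip(Ws, ys):
        z = -y * np.sum(X * W)
        loss += np.logaddexp(0.0, z)
        grad -= y * W / (1.0 + np.exp(-z))
    return loss, grad

def backtracking_prox_step(X, Ws, ys, tau, alpha=1.0, beta=0.5):
    """One PGM step; the step size is halved until the standard
    sufficient-decrease condition holds, since no bound on L_f is assumed."""
    f, g = logistic_loss_grad(X, Ws, ys)
    while True:
        Xp = svt(X - alpha * g, alpha * tau)
        fp, _ = logistic_loss_grad(Xp, Ws, ys)
        D = Xp - X
        if fp <= f + np.sum(g * D) + np.sum(D * D) / (2.0 * alpha):
            return Xp
        alpha *= beta
```

Because the prox step is exact and the accepted step satisfies the sufficient-decrease test, each such step cannot increase the regularized objective F.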
Again, it can be seen that both the objective values and the solutions converge linearly.

6 Conclusion

In this paper, we have established the linear convergence of the PGM for solving a class of trace norm–regularized problems. Our convergence result does not require the objective function to be strongly convex and is applicable to many settings in machine learning. The key technical tool in the proof is a Lipschitzian error bound for trace norm–regularized problems, which could be of independent interest. A future direction is to study error bounds for more general matrix norm–regularized problems and their implications on the convergence rates of first–order methods.

Acknowledgments

The authors would like to thank the anonymous reviewers for their careful reading of the manuscript and insightful comments. The research of A. M.-C. So is supported in part by a gift grant from Microsoft Research Asia.

References

[1] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering Shared Structures in Multiclass Classification. In Proc. 24th ICML, pages 17–24, 2007.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex Multi–Task Feature Learning. Mach. Learn., 73(3):243–272, 2008.
[3] A. Beck and M. Teboulle. A Fast Iterative Shrinkage–Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci., 2(1):183–202, 2009.
[4] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS–SIAM Series on Optimization. Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 2001.
[5] M. Fazel, H. Hindi, and S. P. Boyd. A Rank Minimization Heuristic with Application to Minimum Order System Approximation. In Proc. 2001 ACC, pages 4734–4739, 2001.
[6] D. Gross. Recovering Low–Rank Matrices from Few Coefficients in Any Basis. IEEE Trans. Inf. Theory, 57(3):1548–1566, 2011.
[7] S. Ji, K.-F. Sze, Z. Zhou, A. M.-C. So, and Y. Ye.
Beyond Convex Relaxation: A Polynomial–Time Non–Convex Optimization Approach to Network Localization. In Proc. 32nd IEEE INFOCOM, pages 2499–2507, 2013.
[8] S. Ji and J. Ye. An Accelerated Gradient Method for Trace Norm Minimization. In Proc. 26th ICML, pages 457–464, 2009.
[9] V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear–Norm Penalization and Optimal Rates for Noisy Low–Rank Matrix Completion. Ann. Stat., 39(5):2302–2329, 2011.
[10] Z.-Q. Luo and P. Tseng. Error Bounds and Convergence Analysis of Feasible Descent Methods: A General Approach. Ann. Oper. Res., 46(1):157–178, 1993.
[11] S. Ma, D. Goldfarb, and L. Chen. Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization. Math. Program., 128(1–2):321–353, 2011.
[12] Yu. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Boston, 2004.
[13] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed Minimum–Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization. SIAM Rev., 52(3):471–501, 2010.
[14] R. T. Rockafellar. Convex Analysis. Princeton Landmarks in Mathematics and Physics. Princeton University Press, Princeton, New Jersey, 1997.
[15] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317 of Grundlehren der mathematischen Wissenschaften. Springer–Verlag, Berlin Heidelberg, second edition, 2004.
[16] M. Schmidt, N. Le Roux, and F. Bach. Convergence Rates of Inexact Proximal–Gradient Methods for Convex Optimization. In Proc. NIPS 2011, pages 1458–1466, 2011.
[17] A. M.-C. So, Y. Ye, and J. Zhang. A Unified Theorem on SDP Rank Reduction. Math. Oper. Res., 33(4):910–920, 2008.
[18] W. So. Facial Structures of Schatten p–Norms. Linear and Multilinear Algebra, 27(3):207–212, 1990.
[19] K.-C. Toh and S. Yun. An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Linear Least Squares Problems. Pac. J. Optim., 6(3):615–640, 2010.
[20] R. Tomioka and K. Aihara.
Classifying Matrices with a Spectral Regularization. In Proc. 24th ICML, pages 895–902, 2007.
[21] R. Tomioka, T. Suzuki, M. Sugiyama, and H. Kashima. A Fast Augmented Lagrangian Algorithm for Learning Low–Rank Matrices. In Proc. 27th ICML, pages 1087–1094, 2010.
[22] P. Tseng. Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization. Math. Program., 125(2):263–295, 2010.
[23] P. Tseng and S. Yun. A Coordinate Gradient Descent Method for Nonsmooth Separable Minimization. Math. Program., 117(1–2):387–423, 2009.
[24] M. White, Y. Yu, X. Zhang, and D. Schuurmans. Convex Multi–View Subspace Learning. In Proc. NIPS 2012, pages 1682–1690, 2012.
[25] H. Zhang, J. Jiang, and Z.-Q. Luo. On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems. J. Oper. Res. Soc. China, 1(2):163–186, 2013.
Eluder Dimension and the Sample Complexity of Optimistic Exploration Daniel Russo Stanford University Stanford, CA 94305 djrusso@stanford.edu Benjamin Van Roy Stanford University Stanford, CA 94305 bvr@stanford.edu Abstract This paper considers the sample complexity of the multi-armed bandit with dependencies among the arms. Some of the most successful algorithms for this problem use the principle of optimism in the face of uncertainty to guide exploration. The clearest example of this is the class of upper confidence bound (UCB) algorithms, but recent work has shown that a simple posterior sampling algorithm, sometimes called Thompson sampling, can be analyzed in the same manner as optimistic approaches. In this paper, we develop a regret bound that holds for both classes of algorithms. This bound applies broadly and can be specialized to many model classes. It depends on a new notion we refer to as the eluder dimension, which measures the degree of dependence among action rewards. Compared to UCB algorithm regret bounds for specific model classes, our general bound matches the best available for linear models and is stronger than the best available for generalized linear models. 1 Introduction Consider a politician trying to elude a group of reporters. She hopes to keep her true position hidden from the reporters, but each piece of information she provides must be new, in the sense that it’s not a clear consequence of what she has already told them. How long can she continue before her true position is pinned down? This is the essence of what we call the eluder dimension. We show this notion controls the rate at which algorithms using optimistic exploration converge to optimality. We consider an optimization problem faced by an agent who is uncertain about how her actions influence performance. The agent selects actions sequentially, and upon each action observes a reward. A reward function governs the mean reward of each action. 
As rewards are observed the agent learns about the reward function, and this allows her to improve behavior. Good performance requires adaptively sampling actions in a way that strikes an effective balance between exploring poorly understood actions and exploiting previously acquired knowledge to attain high rewards. Unless the agent has prior knowledge of the structure of the mean payoff function, she can only learn to attain near optimal performance by exhaustively sampling each possible action. In this paper, we focus on problems where there is a known relationship among the rewards generated by different actions, potentially allowing the agent to learn without exploring every action. Problems of this form are often referred to as multi-armed bandit (MAB) problems with dependent arms. A notable example is the “linear bandit” problem, where actions are described by a finite number of features and the reward function is linear in these features. Several researchers have studied algorithms for such problems and established theoretical guarantees that have no dependence on the number of actions [1, 2, 3]. Instead, their bounds depend on the linear dimension of the class of reward functions.

In this paper, we assume that the reward function lies in a known but otherwise arbitrary class of uniformly bounded real-valued functions, and provide theoretical guarantees that depend on more general measures of the complexity of the class of functions. Our analysis of this abstract framework yields a result that applies broadly, beyond the scope of specific problems that have been studied in the literature, and also identifies fundamental insights that unify more specialized prior results. The guarantees we provide apply to two popular classes of algorithms for the stochastic MAB: upper confidence bound (UCB) algorithms and Thompson sampling. Each algorithm is described in Section 3. The aforementioned papers on the linear bandit problem study UCB algorithms [1, 2, 3].
Other authors have studied UCB algorithms in cases where the reward function is Lipschitz continuous [4, 5], sampled from a Gaussian process [6], or takes the form of a generalized [7] or sparse [8] linear model. More generally, there is an immense literature on this approach to balancing between exploration and exploitation, including work on bandits with independent arms [9, 10, 11, 12], reinforcement learning [13, 14], and Monte Carlo Tree Search [15].

Recently, a simple posterior sampling algorithm called Thompson sampling was shown to share a close connection with UCB algorithms [16]. This connection enables us to study both types of algorithms in a unified manner. Though it was first proposed in 1933 [17], Thompson sampling has until recently received relatively little attention. Interest in the algorithm grew after empirical studies [18, 19] demonstrated performance exceeding state-of-the-art methods. Strong theoretical guarantees are now available for an important class of problems with independent arms [20, 21, 22]. A recent paper considers the application of this algorithm to a linear contextual bandit problem [23].

To our knowledge, few other papers have studied MAB problems in a general framework like the one we consider. There is work that provides general bounds for contextual bandit problems where the context space is allowed to be infinite, but the action space is small (see e.g., [24]). Our model captures contextual bandits as a special case, but we emphasize problem instances with large or infinite action sets, and where the goal is to learn without sampling every possible action. The closest related work to ours is that of Amin et al. [25], who consider the problem of learning the optimum of a function that lies in a known, but otherwise arbitrary set of functions. They provide bounds based on a new notion of dimension, but unfortunately this notion does not provide a guarantee for the algorithms we consider.
We provide bounds on expected regret over a time horizon T that are, up to a logarithmic factor, of order

√( dim_E(F, T⁻²) · log N(F, T⁻², ∥·∥∞) · T ),

where the first factor is the eluder dimension of F and the second is its log–covering number. This quantity depends on the class of reward functions F through two measures of complexity. Each captures the approximate structure of the class of functions at a scale T⁻² that depends on the time horizon. The first measures the growth rate of the covering numbers of F, and is closely related to measures of complexity that are common in the supervised learning literature. This quantity roughly captures the sensitivity of F to statistical over-fitting. The second measure, the eluder dimension, is a new notion we introduce. This captures how effectively the value of unobserved actions can be inferred from observed samples. We highlight in Section 4.1 why notions of dimension common to the supervised learning literature are insufficient for our purposes. Finally, we show that our more general result when specialized to linear models recovers the strongest known regret bound and in the case of generalized linear models yields a bound stronger than that established in prior literature.

2 Problem Formulation

We consider a model involving a set of actions A and a set of real-valued functions F = {f_ρ : A → R | ρ ∈ Θ}, indexed by a parameter that takes values from an index set Θ. We will define random variables with respect to a probability space (Ω, F, P). A random variable θ indexes the true reward function f_θ. At each time t, the agent is presented with a possibly random subset A_t ⊆ A of available actions and selects an action A_t ∈ A_t, after which she observes a reward R_t. We denote by H_t the history (A_1, A_1, R_1, …, A_{t−1}, A_{t−1}, R_{t−1}, A_t) of observations available to the agent when choosing an action A_t. The agent employs a policy π = {π_t | t ∈ N}, which is a deterministic sequence of functions, each mapping the history H_t to a probability distribution over actions A.
For each realization of H_t, π_t(H_t) is a distribution over A with support A_t. The action A_t is selected by sampling from this distribution, so that P(A_t ∈ · | H_t) = π_t(H_t). We assume that E[R_t | H_t, θ, A_t] = f_θ(A_t). In other words, the realized reward is the mean-reward value corrupted by zero-mean noise. We will also assume that for each f ∈ F and t ∈ N, arg max_{a∈A_t} f(a) is nonempty with probability one, though algorithms and results can be generalized to handle cases where this assumption does not hold.

We fix constants C > 0 and η > 0 and impose two further simplifying assumptions. The first concerns boundedness of reward functions.

Assumption 1. For all f ∈ F and a ∈ A, f(a) ∈ [0, C].

Our second assumption ensures that observation noise is light-tailed. We say a random variable X is η-sub-Gaussian if E[exp(λX)] ≤ exp(λ²η²/2) almost surely for all λ.

Assumption 2. For all t ∈ N, R_t − f_θ(A_t) conditioned on (H_t, θ, A_t) is η-sub-Gaussian.

We let A*_t ∈ arg max_{a∈A_t} f_θ(a) denote the optimal action at time t. The T period regret is the random variable

R(T, π) = Σ_{t=1}^T [f_θ(A*_t) − f_θ(A_t)],

where the actions {A_t : t ∈ N} are selected according to π. We sometimes study expected regret E[R(T, π)], where the expectation is taken over the prior distribution of θ, the reward noise, and the algorithm’s internal randomization. This quantity is sometimes called Bayes risk or Bayesian regret. Similarly, we study conditional expected regret E[R(T, π) | θ], which integrates over all randomness in the system except for θ.

Example 1. Contextual Models. The contextual multi-armed bandit model is a special case of the formulation presented above. In such a model, an exogenous Markov process X_t taking values in a set X influences rewards. In particular, the expected reward at time t is given by f_θ(a, X_t). However, this is mathematically equivalent to a problem with stochastic time-varying decision sets A_t.
In particular, one can define the set of actions to be the set of state-action pairs A := {(x, a) : x ∈ X, a ∈ A(x)}, and the set of available actions to be A_t = {(X_t, a) : a ∈ A(X_t)}.

3 Algorithms

We will establish performance bounds for two classes of algorithms: Thompson sampling and UCB algorithms. As background, we discuss the algorithms in this section. We also provide an example of each type of algorithm that is designed to address the “linear bandit” problem.

UCB Algorithms: UCB algorithms have received a great deal of attention in the MAB literature. Here we describe a very broad class of UCB algorithms. We say that a confidence set is a random subset F_t ⊂ F that is measurable with respect to σ(H_t). Typically, F_t is constructed so that it contains f_θ with high probability. We denote by π^{F_{1:∞}} a UCB algorithm that makes use of a sequence of confidence sets {F_t : t ∈ N}. At each time t, such an algorithm selects the action

A_t ∈ arg max_{a∈A_t} sup_{f∈F_t} f(a),

where sup_{f∈F_t} f(a) is an optimistic estimate of f_θ(a) representing the greatest value that is statistically plausible at time t. Optimism encourages selection of poorly-understood actions, which leads to informative observations. As data accumulates, optimistic estimates are adapted, and this process of exploration and learning converges toward optimal behavior. In this paper, we will assume for simplicity that the maximum defining A_t is attained. Results can be generalized to handle cases when this technical condition does not hold. Unfortunately, for natural choices of F_t, it may be exceptionally difficult to solve for such an action. Thankfully, all results in this paper also apply to a posterior sampling algorithm that avoids this hard optimization problem.

Thompson sampling: The Thompson sampling algorithm simply samples each action according to the probability it is optimal.
In particular, the algorithm applies action sampling distributions π^TS_t(H_t) = P(A*_t ∈ · | H_t), where A*_t is a random variable that satisfies A*_t ∈ arg max_{a∈A_t} f_θ(a). Practical implementations typically operate by at each time t sampling an index θ̂_t ∈ Θ from the distribution P(θ ∈ · | H_t) and then generating an action A_t ∈ arg max_{a∈A_t} f_{θ̂_t}(a).

Algorithm 1 Linear UCB
1: Initialize: Select d linearly independent actions
2: Update Statistics:
   θ̂_t ← OLS estimate of θ
   Φ_t ← Σ_{k=1}^{t−1} φ(Ā_k)φ(Ā_k)^T
   Θ_t ← {ρ : ∥ρ − θ̂_t∥_{Φ_t} ≤ β√(d log t)}
3: Select Action: A_t ∈ arg max_{a∈A} {max_{ρ∈Θ_t} ⟨φ(a), ρ⟩}
4: Increment t and Goto Step 2

Algorithm 2 Linear Thompson sampling
1: Sample Model: θ̂_t ∼ N(µ_t, Σ_t)
2: Select Action: A_t ∈ arg max_{a∈A} ⟨φ(a), θ̂_t⟩
3: Update Statistics:
   µ_{t+1} ← E[θ | H_{t+1}]
   Σ_{t+1} ← E[(θ − µ_{t+1})(θ − µ_{t+1})^⊤ | H_{t+1}]
4: Increment t and Goto Step 1

Algorithms for Linear Bandits: Here we provide an example of a Thompson sampling and a UCB algorithm, each of which addresses a problem in which the reward function is linear in a d-dimensional vector θ. In particular, there is a known feature mapping φ : A → R^d such that an action a yields expected reward f_θ(a) = ⟨φ(a), θ⟩. Algorithm 1 is a variation of one proposed by Rusmevichientong and Tsitsiklis [3] to address such problems. Given past observations, the algorithm constructs a confidence ellipsoid Θ_t centered around a least squares estimate θ̂_t and employs the upper confidence bound

U_t(a) := max_{ρ∈Θ_t} ⟨φ(a), ρ⟩ = ⟨φ(a), θ̂_t⟩ + β√(d log t) ∥φ(a)∥_{Φ_t^{−1}}.

The term ∥φ(a)∥_{Φ_t^{−1}} captures the amount of previous exploration in the direction φ(a), and causes the “uncertainty bonus” β√(d log t) ∥φ(a)∥_{Φ_t^{−1}} to diminish as the number of observations increases.

Now, consider Algorithm 2. Here we assume θ is drawn from a normal distribution N(µ₁, Σ₁). We consider a linear reward function f_θ(a) = ⟨φ(a), θ⟩ and assume the reward noise R_t − f_θ(A_t) is normally distributed and independent from (H_t, A_t, θ). It is easy to show that, conditioned on the history H_t, θ remains normally distributed. Algorithm 2 presents an implementation of Thompson sampling for this problem. The expectations can be computed efficiently via Kalman filtering.

4 Notions of Dimension

Recently, there has been a great deal of interest in the development of regret bounds for linear UCB algorithms [1, 2, 3, 26]. These papers show that for a broad class of problems, a variant π* of Algorithm 1 satisfies the upper bounds E[R(T, π*)] = Õ(d√T) and E[R(T, π*) | θ] = Õ(d√T).
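As a concrete sketch of Algorithm 2 (our own illustration, not the authors' code; the finite action set and noise level are assumptions made for the example), the Gaussian posterior update reduces to a rank-one Kalman-filter step:

```python
import numpy as np

def linear_thompson_regret(Phi, theta, T, sigma=0.05, seed=0):
    """Per-period regret of linear Thompson sampling with a N(0, I) prior on
    theta and N(0, sigma^2) reward noise. Phi: (K, d) rows are phi(a)."""
    rng = np.random.default_rng(seed)
    d = Phi.shape[1]
    mu, Sigma = np.zeros(d), np.eye(d)        # posterior starts at the prior
    best = np.max(Phi @ theta)
    regret = []
    for _ in range(T):
        theta_hat = rng.multivariate_normal(mu, Sigma)  # Step 1: sample model
        a = int(np.argmax(Phi @ theta_hat))             # Step 2: act greedily
        r = Phi[a] @ theta + sigma * rng.standard_normal()
        regret.append(best - Phi[a] @ theta)
        # Step 3: conjugate Gaussian (Kalman-filter) update of (mu, Sigma)
        x = Phi[a]
        Sx = Sigma @ x
        k = Sx / (x @ Sx + sigma**2)
        mu = mu + k * (r - x @ mu)
        Sigma = Sigma - np.outer(k, Sx)
    return regret
```

Running this on a random instance shows the qualitative behavior behind the Õ(d√T) bounds: per-period regret is large early on while the posterior is diffuse and shrinks as Σ_t concentrates.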
An interesting feature of these bounds is that they have no dependence on the number of actions in A, and instead depend only on the linear dimension of the set of functions F. Our goal is to provide bounds that depend on more general measures of the complexity of the class of functions. This section introduces a new notion, the eluder dimension, on which our bounds will depend. First, we highlight why common notions from statistical learning theory do not suffice when it comes to multi–armed bandit problems.

4.1 Vapnik-Chervonenkis Dimension

We begin with an example that illustrates how a class of functions that is learnable in constant time in a supervised learning context may require an arbitrarily long duration when learning to optimize.

Example 2. Consider a finite class of binary-valued functions F = {f_ρ : A → {0, 1} | ρ ∈ {1, …, n}} over a finite action set A = {1, …, n}. Let f_ρ(a) = 1(ρ = a), so that each function is an indicator for an action. To keep things simple, assume that R_t = f_θ(A_t), so that there is no noise. If θ is uniformly distributed over {1, …, n}, it is easy to see that the regret of any algorithm grows linearly with n. For large n, until θ is discovered, each sampled action is unlikely to reveal much about θ and learning therefore takes very long.

Consider the closely related supervised learning problem in which at each time an action Ã_t is sampled uniformly from A and the mean–reward value f_θ(Ã_t) is observed. For large n, the time it takes to effectively learn to predict f_θ(Ã_t) given Ã_t does not depend on n. In particular, prediction error converges to 1/n in constant time. Note that predicting 0 at every time already achieves this low level of error.

In the preceding example, the Vapnik-Chervonenkis (VC) dimension, which characterizes the sample complexity of supervised learning, is 1. On the other hand, the eluder dimension, which we will define below, is n.
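The contrast in Example 2 can be checked numerically. The following sketch (helper names ours) pulls untried arms in random order until the rewarding arm is found, so the expected number of pulls, and hence regret, grows linearly in n, while the constant-0 predictor attains error 1/n with no data at all:

```python
import random

def bandit_pulls_to_find(n, seed):
    """Pull arms without repetition until the rewarding arm theta is found.
    Every pull of a wrong arm returns reward 0 and incurs regret 1."""
    rng = random.Random(seed)
    theta = rng.randrange(n)
    arms = list(range(n))
    rng.shuffle(arms)
    return arms.index(theta) + 1  # pulls used; regret is this minus one

def supervised_error(n):
    """Error of the constant-0 predictor under uniform queries:
    only the single action theta is mislabeled."""
    return 1.0 / n
```

Averaged over trials, the number of pulls concentrates near (n + 1)/2, matching the claim that any algorithm suffers regret linear in n, whereas the supervised error 1/n requires no exploration whatsoever.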
To highlight conceptual differences between the eluder dimension and the VC dimension, we will now define VC dimension in a way analogous to how we will define eluder dimension. We begin with a notion of independence.

Definition 1. An action a is VC-independent of Ã ⊆ A if for any f, f̃ ∈ F there exists some f̄ ∈ F which agrees with f on a and with f̃ on Ã; that is, f̄(a) = f(a) and f̄(ã) = f̃(ã) for all ã ∈ Ã. Otherwise, a is VC-dependent on Ã.

By this definition, an action a is said to be VC-dependent on Ã if knowing the values f ∈ F takes on Ã could restrict the set of possible values at a. This notion of independence is intimately related to the VC dimension of a class of functions. In fact, it can be used to define VC dimension.

Definition 2. The VC dimension of a class of binary-valued functions with domain A is the largest cardinality of a set Ã ⊆ A such that every a ∈ Ã is VC-independent of Ã \ {a}.

In the above example, any two actions are VC-dependent because knowing the label f_θ(a) of one action could completely determine the value of the other action. However, this only happens if the sampled action has label 1. If it has label 0, one cannot infer anything about the value of the other action. Instead of capturing the fact that one could gain useful information through exploration, we need a stronger requirement that guarantees one will gain useful information.

4.2 Defining Eluder Dimension

Here we define the eluder dimension of a class of functions, which plays a key role in our results.

Definition 3. An action a ∈ A is ϵ-dependent on actions {a₁, …, a_n} ⊆ A with respect to F if any pair of functions f, f̃ ∈ F satisfying √(Σ_{i=1}^n (f(a_i) − f̃(a_i))²) ≤ ϵ also satisfies |f(a) − f̃(a)| ≤ ϵ. Further, a is ϵ-independent of {a₁, …, a_n} with respect to F if a is not ϵ-dependent on {a₁, …, a_n}.
Intuitively, an action a is independent of {a₁, …, a_n} if two functions that make similar predictions at {a₁, …, a_n} can nevertheless differ significantly in their predictions at a. The above definition measures the “similarity” of predictions at ϵ-scale, and measures whether two functions make similar predictions at {a₁, …, a_n} based on the cumulative discrepancy √(Σ_{i=1}^n (f(a_i) − f̃(a_i))²). This measure of dependence suggests using the following notion of dimension.

Definition 4. The ϵ-eluder dimension dim_E(F, ϵ) is the length d of the longest sequence of elements in A such that, for some ϵ′ ≥ ϵ, every element is ϵ′-independent of its predecessors.

Recall that a vector space has dimension d if and only if d is the length of the longest sequence of elements such that each element is linearly independent or, equivalently, 0-independent of its predecessors. Definition 4 replaces the requirement of linear independence with ϵ-independence. This extension is advantageous as it captures both nonlinear dependence and approximate dependence.

5 Confidence Bounds and Regret Decompositions

A key to our analysis is a recent observation [16] that the regret of both Thompson sampling and a UCB algorithm can be decomposed in terms of confidence sets. Define the width of a subset F̃ ⊂ F at an action a ∈ A by

w_F̃(a) = sup_{f,f′∈F̃} (f(a) − f′(a)). (1)

This is a worst–case measure of the uncertainty about the payoff f_θ(a) at a given that f_θ ∈ F̃.

Proposition 1. Fix any sequence {F_t : t ∈ N}, where F_t ⊂ F is measurable with respect to σ(H_t). Then for any T ∈ N, with probability 1,

R(T, π^{F_{1:∞}}) ≤ Σ_{t=1}^T [w_{F_t}(A_t) + C·1(f_θ ∉ F_t)] (2)

E[R(T, π^TS)] ≤ E Σ_{t=1}^T [w_{F_t}(A_t) + C·1(f_θ ∉ F_t)]. (3)

If the confidence sets F_t are constructed to contain f_θ with high probability, this proposition essentially bounds regret in terms of the sum of widths Σ_{t=1}^T w_{F_t}(A_t). In this sense, the decomposition bounds regret only in terms of uncertainty about the actions A₁, …, A_t that the algorithm has actually sampled.
As actions are sampled, the value of f_θ(·) at those actions is learned accurately, and hence we expect that the width w_{F_t}(·) of the confidence sets should diminish over time. It is worth noting that the regret bound of the UCB algorithm π^{F_{1:∞}} depends on the specific confidence sets {F_t : t ∈ N} used by the algorithm, whereas the bound of π^TS applies for any sequence of confidence sets. However, the decomposition (3) holds only in expectation under the prior distribution. The implications of these decompositions are discussed further in earlier work [16]. In the next section, we design abstract confidence sets F_t that are shown to contain the true function f_θ with high probability. Then, in Section 7 we give a worst case bound on the sum Σ_{t=1}^T w_{F_t}(A_t) in terms of the eluder dimension of the class of functions F. When combined with Proposition 1, this analysis provides regret bounds for both Thompson sampling and for a UCB algorithm.

6 Construction of confidence sets

The abstract confidence sets we construct are centered around least squares estimates f̂^LS_t ∈ arg min_{f∈F} L_{2,t}(f), where L_{2,t}(f) = Σ_{k=1}^{t−1} (f(A_k) − R_k)² is the cumulative squared prediction error.¹ The sets take the form F_t := {f ∈ F : ∥f − f̂^LS_t∥_{2,E_t} ≤ √β_t}, where β_t is an appropriately chosen confidence parameter, and the empirical 2-norm ∥·∥_{2,E_t} is defined by ∥g∥²_{2,E_t} = Σ_{k=1}^{t−1} g²(A_k). Hence ∥f − f_θ∥²_{2,E_t} measures the cumulative discrepancy between the previous predictions of f and f_θ.

The following lemma is the key to constructing strong confidence sets (F_t : t ∈ N). For an arbitrary function f, it bounds the squared error of f from below in terms of the empirical loss of the true function f_θ and the aggregate empirical discrepancy ∥f − f_θ∥²_{2,E_t} between f and f_θ. It establishes that for any function f, with high probability, the random process (L_{2,t}(f) : t ∈ N) never falls below the process (L_{2,t}(f_θ) + ½∥f − f_θ∥²_{2,E_t} : t ∈ N) by more than a fixed constant. A proof of the lemma is provided in the appendix.
Recall that $\eta$ is a constant given in Assumption 2.

Lemma 1. For any $\delta > 0$ and $f : \mathcal{A} \to \mathbb{R}$,
$$\mathbb{P}\Big( L_{2,t}(f) \ge L_{2,t}(f_\theta) + \tfrac{1}{2}\|f - f_\theta\|^2_{2,E_t} - 4\eta^2 \log(1/\delta) \;\; \forall t \in \mathbb{N} \;\Big|\; \theta \Big) \ge 1 - \delta.$$

By Lemma 1, with high probability, $f$ can enjoy lower squared error than $f_\theta$ only if its empirical deviation $\|f - f_\theta\|^2_{2,E_t}$ from $f_\theta$ is less than $8\eta^2 \log(1/\delta)$. Through a union bound, this property holds uniformly for all functions in a finite subset of $\mathcal{F}$. To extend this result to infinite classes of functions, we measure the function class at some discretization scale $\alpha$. Let $N(\mathcal{F}, \alpha, \|\cdot\|_\infty)$ denote the $\alpha$-covering number of $\mathcal{F}$ in the sup-norm $\|\cdot\|_\infty$, and let
$$\beta^*_t(\mathcal{F}, \delta, \alpha) := 8\eta^2 \log\big( N(\mathcal{F}, \alpha, \|\cdot\|_\infty)/\delta \big) + 2\alpha t \Big( 8C + \sqrt{8\eta^2 \ln(4t^2/\delta)} \Big). \tag{4}$$

¹The results can be extended to the case where the infimum of $L_{2,t}(f)$ is unattainable by selecting a function with squared prediction error sufficiently close to the infimum.

Proposition 2. For all $\delta > 0$ and $\alpha > 0$, if $F_t = \big\{ f \in \mathcal{F} : \|f - \hat{f}^{LS}_t\|_{2,E_t} \le \sqrt{\beta^*_t(\mathcal{F}, \delta, \alpha)} \big\}$ for all $t \in \mathbb{N}$, then $\mathbb{P}\big( f_\theta \in \bigcap_{t=1}^\infty F_t \,\big|\, \theta \big) \ge 1 - 2\delta$.

Example 3. Suppose $\Theta \subset [0,1]^d$ and for each $a \in \mathcal{A}$, $f_\theta(a)$ is an $L$-Lipschitz function of $\theta$. Then $N(\mathcal{F}, \alpha, \|\cdot\|_\infty) \le (1 + L/\alpha)^d$ and hence $\log N(\mathcal{F}, \alpha, \|\cdot\|_\infty) \le d \log(1 + L/\alpha)$.

7 Measuring the rate at which confidence sets shrink

Our remaining task is to provide a worst-case bound on the sum $\sum_{t=1}^T w_{F_t}(A_t)$. First consider the case of a linearly parameterized model where $f_\rho(a) := \langle \varphi(a), \rho \rangle$ for each $\rho \in \Theta \subset \mathbb{R}^d$. Then it can be shown that our confidence set takes the form $F_t := \{ f_\rho : \rho \in \Theta_t \}$, where $\Theta_t \subset \mathbb{R}^d$ is an ellipsoid. When an action $A_t$ is sampled, the ellipsoid shrinks in the direction $\varphi(A_t)$. Here the explicit geometric structure of the confidence set implies that the width $w_{F_t}$ shrinks not only at $A_t$ but also at any other action whose feature vector is not orthogonal to $\varphi(A_t)$. Some linear algebra leads to a worst-case bound on $\sum_{t=1}^T w_{F_t}(A_t)$.

For a general class of functions, the situation is much subtler, and we need to measure the way in which the width at each action can be reduced by sampling other actions. The following result uses our new notion of dimension to bound the number of times the width of the confidence interval for a selected action $A_t$ can exceed a threshold.

Proposition 3. If $(\beta_t \ge 0 : t \in \mathbb{N})$ is a nondecreasing sequence and $F_t := \{ f \in \mathcal{F} : \|f - \hat{f}^{LS}_t\|_{2,E_t} \le \sqrt{\beta_t} \}$, then with probability 1,
$$\sum_{t=1}^T \mathbb{1}\big(w_{F_t}(A_t) > \epsilon\big) \le \left( \frac{4\beta_T}{\epsilon^2} + 1 \right) \dim_E(\mathcal{F}, \epsilon) \quad \text{for all } T \in \mathbb{N} \text{ and } \epsilon > 0.$$

Using Proposition 3, one can bound the sum $\sum_{t=1}^T w_{F_t}(A_t)$, as established by the following lemma. To extend our analysis to infinite classes of functions, we consider the $\alpha^F_T$-eluder dimension of $\mathcal{F}$, where
$$\alpha^F_t = \max\left\{ \frac{1}{t^2},\; \inf\{ \|f_1 - f_2\|_\infty : f_1, f_2 \in \mathcal{F},\ f_1 \ne f_2 \} \right\}. \tag{5}$$

Lemma 2. If $(\beta_t \ge 0 : t \in \mathbb{N})$ is a nondecreasing sequence and $F_t := \{ f \in \mathcal{F} : \|f - \hat{f}^{LS}_t\|_{2,E_t} \le \sqrt{\beta_t} \}$, then with probability 1, for all $T \in \mathbb{N}$,
$$\sum_{t=1}^T w_{F_t}(A_t) \le \frac{1}{T} + \min\big\{ \dim_E(\mathcal{F}, \alpha^F_T),\, T \big\}\, C + 4 \sqrt{ \dim_E(\mathcal{F}, \alpha^F_T)\, \beta_T\, T }.$$
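For the linearly parameterized case discussed above, the ellipsoid update and the widths can be made concrete. The following is a minimal LinUCB-style sketch (ours, not the paper's algorithm: the regularizer `lam` and the fixed confidence parameter `beta` are simplifications of the $\beta^*_t$ schedule):

```python
import numpy as np

def linucb(features, reward_fn, T, beta=1.0, lam=1.0):
    """UCB sketch for f_rho(a) = <phi(a), rho>.  The confidence set is the
    ellipsoid {rho : ||rho - rho_hat||_V <= sqrt(beta)}, so the upper
    confidence bound at a is <phi(a), rho_hat> + sqrt(beta * phi(a)^T V^-1 phi(a))."""
    n_actions, d = features.shape
    V = lam * np.eye(d)          # regularized Gram matrix
    b = np.zeros(d)
    for _ in range(T):
        V_inv = np.linalg.inv(V)
        rho_hat = V_inv @ b      # regularized least-squares estimate
        widths = np.sqrt(beta * np.einsum('ij,jk,ik->i', features, V_inv, features))
        a = int(np.argmax(features @ rho_hat + widths))
        r = reward_fn(a)
        V += np.outer(features[a], features[a])  # ellipsoid shrinks along phi(a)
        b += r * features[a]
    return rho_hat
```

Each update adds a rank-one term in the direction $\varphi(A_t)$, which is exactly the sense in which sampling one action shrinks the width at all non-orthogonal actions.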
8 Main Result

Our analysis provides a new guarantee both for Thompson sampling and for a UCB algorithm $\pi^{F^*_{1:\infty}}$ executed with appropriate confidence sets $\{F^*_t : t \in \mathbb{N}\}$. Recall that for a sequence of confidence sets $\{F_t : t \in \mathbb{N}\}$ we denote by $\pi^{F_{1:\infty}}$ the UCB algorithm that chooses an action $\bar{A}_t \in \arg\max_{a \in \mathcal{A}} \sup_{f \in F_t} f(a)$ at each time $t$. We establish bounds that are, up to a logarithmic factor, of order
$$\sqrt{ \underbrace{\dim_E(\mathcal{F}, T^{-2})}_{\text{eluder dimension}} \cdot \underbrace{\log N(\mathcal{F}, T^{-2}, \|\cdot\|_\infty)}_{\text{log-covering number}} \cdot T }.$$

This term depends on two measures of the complexity of the function class $\mathcal{F}$. The first, which controls for statistical over-fitting, grows logarithmically in the covering numbers of the function class. This is a common feature of notions of dimension from statistical learning theory. The second measure of complexity, the eluder dimension, measures the extent to which the reward value at one action can be inferred by sampling other actions.

The next two propositions, which provide finite-time bounds for a particular UCB algorithm and for Thompson sampling, follow by combining Proposition 1, Proposition 2, and Lemma 2. Define
$$B(\mathcal{F}, T, \delta) = \frac{1}{T} + \min\big\{ \dim_E(\mathcal{F}, \alpha^F_T),\, T \big\}\, C + 4 \sqrt{ \dim_E(\mathcal{F}, \alpha^F_T)\, \beta^*_T(\mathcal{F}, \delta, \alpha^F_T)\, T }.$$
Notice that $B(\mathcal{F}, T, \delta)$ is the right-hand side of the bound in Lemma 2 with $\beta_T$ taken to be $\beta^*_T(\mathcal{F}, \delta, \alpha^F_T)$.

Proposition 4. Fix any $\delta > 0$ and $T \in \mathbb{N}$, and define for each $t \in \mathbb{N}$, $F^*_t = \big\{ f \in \mathcal{F} : \|f - \hat{f}^{LS}_t\|_{2,E_t} \le \sqrt{\beta^*_t(\mathcal{F}, \delta, \alpha^F_T)} \big\}$. Then
$$\mathbb{P}\big( R(T, \pi^{F^*_{1:\infty}}) \le B(\mathcal{F}, T, \delta) \,\big|\, \theta \big) \ge 1 - 2\delta, \qquad \mathbb{E}\big[ R(T, \pi^{F^*_{1:\infty}}) \,\big|\, \theta \big] \le B(\mathcal{F}, T, \delta) + 2\delta T C.$$

Proposition 5. For any $T \in \mathbb{N}$, $\mathbb{E}\, R(T, \pi^{TS}) \le B(\mathcal{F}, T, T^{-1}) + 2C$.

The next two examples show how the regret bounds of Propositions 4 and 5 specialize to $d$-dimensional linear and generalized linear models. For each of these examples $\Theta \subset \mathbb{R}^d$ and each action is associated with a known feature vector $\varphi(a)$. Throughout these two examples, we fix positive constants $\gamma$ and $s$ and assume that $\gamma \ge \sup_{a \in \mathcal{A}} \|\varphi(a)\|$ and $s \ge \sup_{\rho \in \Theta} \|\rho\|$. For each of these examples, a bound on $\dim_E(\mathcal{F}, \epsilon)$ is provided in the supplementary material.

Example 4. Linear models: Consider the case of a $d$-dimensional linear model $f_\rho(a) := \langle \varphi(a), \rho \rangle$. Then $\dim_E(\mathcal{F}, \epsilon) = O(d \log(1/\epsilon))$ and $\log N(\mathcal{F}, \epsilon, \|\cdot\|_\infty) = O(d \log(1/\epsilon))$. Propositions 4 and 5 therefore yield $O(d \log(1/\alpha^F_T) \sqrt{T})$ regret bounds. Since $\alpha^F_T \ge T^{-2}$, this is tight to within a factor of $\log T$ [3], and matches the best available bound for a linear UCB algorithm [2].

Example 5. Generalized linear models: Consider the case of a $d$-dimensional generalized linear model $f_\theta(a) := g(\langle \varphi(a), \theta \rangle)$, where $g$ is an increasing Lipschitz continuous function. Set $\overline{h} = \sup_{\tilde\theta, a} g'(\langle \varphi(a), \tilde\theta \rangle)$, $\underline{h} = \inf_{\tilde\theta, a} g'(\langle \varphi(a), \tilde\theta \rangle)$, and $r = \overline{h}/\underline{h}$. Then $\log N(\mathcal{F}, \epsilon, \|\cdot\|_\infty) = O(d \log(\overline{h}/\epsilon))$ and $\dim_E(\mathcal{F}, \epsilon) = O(d r^2 \log(\overline{h}/\epsilon))$, and Propositions 4 and 5 yield $O(r d \log(\overline{h}/\alpha^F_T) \sqrt{T})$ regret bounds. To our knowledge, this bound is a slight improvement over the strongest regret bound available for any algorithm in this setting. The regret bound of Filippi et al. [7] is of order $r d \log^{3/2}(T) \sqrt{T}$.

9 Conclusion

In this paper, we have analyzed two algorithms, Thompson sampling and a UCB algorithm, in a very general framework, and developed regret bounds that depend on a new notion of dimension. In constructing these bounds, we have identified two factors that control the hardness of a particular multi-armed bandit problem.
First, an agent's ability to quickly attain near-optimal performance depends on the extent to which the reward value at one action can be inferred by sampling other actions. However, in order to select an action the agent must make inferences about many possible actions, and an error in its evaluation of any one could result in large regret. Our second measure of complexity controls for the difficulty of maintaining appropriate confidence sets simultaneously at every action. While our bounds are nearly tight in some cases, further analysis is likely to yield stronger results in other cases. We hope, however, that our work provides a conceptual foundation for the study of such problems, and inspires further investigation.

Acknowledgments

The first author is supported by a Burt and Deedee McMurtry Stanford Graduate Fellowship. This work was supported in part by Award CMMI-0968707 from the National Science Foundation.

References

[1] V. Dani, T.P. Hayes, and S.M. Kakade. Stochastic linear optimization under bandit feedback. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 355–366, 2008.
[2] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.
[3] P. Rusmevichientong and J.N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
[4] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[5] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1587–1627, 2011.
[6] N. Srinivas, A. Krause, S.M. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58(5):3250–3265, May 2012.
[7] S. Filippi, O.
Cappé, A. Garivier, and C. Szepesvári. Parametric bandits: The generalized linear case. Advances in Neural Information Processing Systems, 23:1–9, 2010.
[8] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Online-to-confidence-set conversions and application to sparse stochastic bandits. In Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[9] T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[10] T.L. Lai. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics, pages 1091–1114, 1987.
[11] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[12] O. Cappé, A. Garivier, O.-A. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Submitted to the Annals of Statistics.
[13] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563–1600, 2010.
[14] P.L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42. AUAI Press, 2009.
[15] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In Machine Learning: ECML 2006, pages 282–293. Springer, 2006.
[16] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. arXiv preprint arXiv:1301.2609, 2013.
[17] W.R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
[18] S.L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
[19] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Neural Information Processing Systems (NIPS), 2011.
[20] S. Agrawal and N. Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. 2012.
[21] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. arXiv preprint arXiv:1209.3353, 2012.
[22] E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: An asymptotically optimal finite time analysis. In International Conference on Algorithmic Learning Theory, 2012.
[23] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. arXiv preprint arXiv:1209.3352, 2012.
[24] A. Beygelzimer, J. Langford, L. Li, L. Reyzin, and R.E. Schapire. Contextual bandit algorithms with supervised learning guarantees. In Conference on Artificial Intelligence and Statistics (AISTATS), volume 15. JMLR Workshop and Conference Proceedings, 2011.
[25] K. Amin, M. Kearns, and U. Syed. Bandits, query learning, and the haystack dimension. In Proceedings of the 24th Annual Conference on Learning Theory (COLT), 2011.
[26] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397–422, 2003.
Efficient Algorithm for Privately Releasing Smooth Queries

Ziteng Wang
Key Laboratory of Machine Perception, MOE
School of EECS, Peking University
wangzt@cis.pku.edu.cn

Kai Fan
Key Laboratory of Machine Perception, MOE
School of EECS, Peking University
interfk@hotmail.com

Jiaqi Zhang
Key Laboratory of Machine Perception, MOE
School of EECS, Peking University
Zhangjq@cis.pku.edu.cn

Liwei Wang
Key Laboratory of Machine Perception, MOE
School of EECS, Peking University
wanglw@cis.pku.edu.cn

Abstract

We study differentially private mechanisms for answering smooth queries on databases consisting of data points in $\mathbb{R}^d$. A $K$-smooth query is specified by a function whose partial derivatives up to order $K$ are all bounded. We develop an $\epsilon$-differentially private mechanism which for the class of $K$-smooth queries has accuracy $O(n^{-\frac{K}{2d+K}}/\epsilon)$. The mechanism first outputs a summary of the database. To obtain an answer to a query, the user runs a public evaluation algorithm which contains no information about the database. Outputting the summary runs in time $O(n^{1+\frac{d}{2d+K}})$, and the evaluation algorithm for answering a query runs in time $\tilde{O}(n^{\frac{d+2+2d/K}{2d+K}})$. Our mechanism is based on $L_\infty$-approximation of (transformed) smooth functions by low-degree even trigonometric polynomials with small and efficiently computable coefficients.

1 Introduction

Privacy is an important problem in data analysis. Often people want to learn useful information from data that are sensitive. But when releasing statistics of sensitive data, one must trade off between the accuracy and the amount of privacy loss of the individuals in the database. In this paper we consider differential privacy [9], which has become a standard concept of privacy. Roughly speaking, a mechanism which releases information about the database is said to preserve differential privacy if the change of a single database element does not affect the probability distribution of the output significantly.
Differential privacy provides strong guarantees against attacks. It ensures that the risk of any individual who submits her information to the database is very small. An adversary can discover almost nothing new from the database that contains the individual's information compared with the database without it. Recently there have been extensive studies of machine learning, statistical estimation, and data mining under the differential privacy framework [29, 5, 18, 17, 6, 30, 20, 4].

Accurately answering statistical queries is an important problem in differential privacy. A simple and efficient method is the Laplace mechanism [9], which adds Laplace noise to the true answers. The Laplace mechanism is especially useful for query functions with low sensitivity, which is the maximal difference of the query values of two databases that differ in only one item. A typical class of queries that has low sensitivity is linear queries, whose sensitivity is $O(1/n)$, where $n$ is the size of the database. The Laplace mechanism has a limitation: it can answer at most $O(n^2)$ queries. If the number of queries is substantially larger than $n^2$, the Laplace mechanism is not able to provide differentially private answers with nontrivial accuracy. Considering that potentially there are many users and each user may submit a set of queries, limiting the total number of queries to be smaller than $n^2$ is too restrictive in some situations.

A remarkable result due to Blum, Ligett and Roth [2] shows that information-theoretically it is possible for a mechanism to answer far more than $n^2$ linear queries while preserving differential privacy and nontrivial accuracy simultaneously. There is a series of works [10, 11, 21, 16] improving the result of [2]. All these mechanisms are very powerful in the sense that they can answer general and adversarially chosen queries. On the other hand, even the fastest algorithms [16, 14] run in time linear in the size of the data universe to answer a query. Often the size of the data universe is much larger than that of the database, so these mechanisms are inefficient. Recently, [25] showed that there is no polynomial-time algorithm that can answer $n^{2+o(1)}$ general queries while preserving privacy and accuracy (assuming the existence of one-way functions).

Given this hardness result, there is growing interest in efficient and differentially private mechanisms for restricted classes of queries. From a practical point of view, if there exists a class of queries which is rich enough to contain most queries used in applications and allows one to develop fast mechanisms, then the hardness result is not a serious barrier for differential privacy. One class of queries that has attracted a lot of attention is the $k$-way conjunctions. The data universe for this problem is $\{0,1\}^d$; thus each individual record has $d$ binary attributes. A $k$-way conjunction query is specified by $k$ features, and asks what fraction of the individual records in the database has all these $k$ features equal to 1. A series of works attacks this problem using several different techniques [1, 13, 7, 15, 24]. They propose elegant mechanisms which run in time poly($n$) when $k$ is a constant. Another class of queries that yields efficient mechanisms is sparse queries. A query is $m$-sparse if it takes non-zero values on at most $m$ elements in the data universe. [3] develops mechanisms which are efficient when $m = \mathrm{poly}(n)$. When the data universe is $[-1,1]^d$, where $d$ is a constant, [2] considers rectangle queries. A rectangle query is specified by an axis-aligned rectangle; the answer to the query is the fraction of the data points that lie in the rectangle. [2] shows that if $[-1,1]^d$ is discretized to poly($n$) bits of precision, then there are efficient mechanisms for the class of rectangle queries. There are also works studying related range queries [19].

In this paper we study smooth queries, defined also on the data universe $[-1,1]^d$ for constant $d$. A smooth query is specified by a smooth function, which has bounded partial derivatives up to a certain order. The answer to the query is the average of the function values on the data points in the database. Smooth functions are widely used in machine learning and data analysis [28]. There are extensive studies on the relation between smoothness, regularization, reproducing kernels, and generalization ability [27, 22].

Our main result is an $\epsilon$-differentially private mechanism for the class of $K$-smooth queries, which are specified by functions with bounded partial derivatives up to order $K$. The mechanism has $(\alpha,\beta)$-accuracy, where $\alpha = O(n^{-\frac{K}{2d+K}}/\epsilon)$ for $\beta \ge e^{-O(n^{\frac{d}{2d+K}})}$. The mechanism first outputs a summary of the database. To obtain an answer to a smooth query, the user runs a public evaluation procedure which contains no information about the database. Outputting the summary has running time $O(n^{1+\frac{d}{2d+K}})$, and the evaluation procedure for answering a query runs in time $\tilde{O}(n^{\frac{d+2+2d/K}{2d+K}})$. The mechanism has the advantage that both the accuracy and the running time for answering a query improve quickly as $K/d$ increases (see also Table 1 in Section 3).

Our algorithm is an $L_\infty$-approximation based mechanism and is motivated by [24], which considers approximation of $k$-way conjunctions by low-degree polynomials. The basic idea is to approximate the whole query class by linear combinations of a small set of basis functions. The technical difficulty lies in that, in order for the approximation to induce an efficient and differentially private mechanism, all the linear coefficients of the basis functions must be small and efficiently computable. To guarantee these properties, we first transform the query function.
Then, by using even trigonometric polynomials as basis functions, we prove a constant upper bound for the linear coefficients. The smoothness of the functions also allows us to use an efficient numerical method to compute the coefficients to a precision such that the accuracy of the mechanism is not affected significantly.

2 Background

Let $D$ be a database containing $n$ data points in the data universe $X$. In this paper, we consider the case $X \subset \mathbb{R}^d$ where $d$ is a constant. Typically, we assume that the data universe is $X = [-1,1]^d$. Two databases $D$ and $D'$ are called neighbors if $|D| = |D'| = n$ and they differ in exactly one data point. The following is the formal definition of differential privacy.

Definition 2.1 ($(\epsilon,\delta)$-differential privacy). A sanitizer $S$, which is an algorithm that maps an input database into some range $R$, is said to preserve $(\epsilon,\delta)$-differential privacy if for all pairs of neighbor databases $D, D'$ and for any subset $A \subset R$, it holds that $\mathbb{P}(S(D) \in A) \le \mathbb{P}(S(D') \in A) \cdot e^\epsilon + \delta$. If $S$ preserves $(\epsilon,0)$-differential privacy, we say $S$ is $\epsilon$-differentially private.

We consider linear queries. Each linear query $q_f$ is specified by a function $f$ which maps the data universe $[-1,1]^d$ to $\mathbb{R}$, and $q_f$ is defined by $q_f(D) := \frac{1}{|D|} \sum_{x \in D} f(x)$. Let $Q$ be a set of queries. The accuracy of a mechanism with respect to $Q$ is defined as follows.

Definition 2.2 ($(\alpha,\beta)$-accuracy). Let $Q$ be a set of queries. A sanitizer $S$ is said to have $(\alpha,\beta)$-accuracy for size-$n$ databases with respect to $Q$ if for every database $D$ with $|D| = n$ the following holds: $\mathbb{P}(\exists q \in Q,\, |S(D,q) - q(D)| \ge \alpha) \le \beta$, where $S(D,q)$ is the answer to $q$ given by $S$.

We will make use of the Laplace mechanism [9] in our algorithm. The Laplace mechanism adds Laplace noise to the output. We denote by $\mathrm{Lap}(\sigma)$ a random variable distributed according to the Laplace distribution with parameter $\sigma$: $\mathbb{P}(\mathrm{Lap}(\sigma) = x) = \frac{1}{2\sigma}\exp(-|x|/\sigma)$.
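A minimal sketch of the Laplace mechanism applied to a single linear query (function names are ours; for $f$ bounded in $[-1,1]$, the sensitivity of $q_f$ is at most $2/n$):

```python
import numpy as np

def linear_query(f, D):
    # q_f(D) = (1/|D|) * sum_{x in D} f(x)
    return float(np.mean([f(x) for x in D]))

def laplace_mechanism(true_answer, sensitivity, eps, rng=None):
    # epsilon-DP release: add Lap(sensitivity / eps) noise to the true answer.
    rng = rng if rng is not None else np.random.default_rng(0)
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / eps)
```

For a database of $n$ points the noise scale is $O(1/(n\epsilon))$, which is why the Laplace mechanism works well for low-sensitivity linear queries.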
We will design a differentially private mechanism which is accurate with respect to a query set $Q$ possibly consisting of infinitely many queries. Given a database $D$, the sanitizer outputs a summary which preserves differential privacy. For any $q_f \in Q$, the user makes use of an evaluation procedure to measure $f$ on the summary and obtain an approximate answer to $q_f(D)$. Although we may think of the evaluation procedure as part of the mechanism, it contains no information about the database and is therefore public. We will study the running time for the sanitizer to output the summary; ideally it is $O(n^c)$ for some constant $c$ not much larger than 1. For the evaluation procedure, the running time per query is the focus; ideally it is sublinear in $n$. Here and in the rest of the paper, we assume that calculating the value of $f$ on a data point $x$ can be done in unit time.

In this work we will frequently use trigonometric polynomials. For the univariate case, a function $p(\theta)$ is called a trigonometric polynomial of degree $m$ if $p(\theta) = a_0 + \sum_{l=1}^m (a_l \cos l\theta + b_l \sin l\theta)$, where $a_l, b_l$ are constants. If $p(\theta)$ is an even function, we say that it is an even trigonometric polynomial, and $p(\theta) = a_0 + \sum_{l=1}^m a_l \cos l\theta$. For the multivariate case, if $p(\theta_1,\dots,\theta_d) = \sum_{l=(l_1,\dots,l_d)} a_l \cos(l_1\theta_1)\cdots\cos(l_d\theta_d)$, then $p$ is said to be an even trigonometric polynomial (with respect to each variable), and the degree of $\theta_i$ is the upper limit of $l_i$.

3 Efficient differentially private mechanism

Let us first describe the set of queries considered in this work. Since each query $q_f$ is specified by a function $f$, a set of queries $Q_F$ can be specified by a set of functions $F$. Remember that each $f \in F$ maps $[-1,1]^d$ to $\mathbb{R}$. For any point $x = (x_1,\dots,x_d) \in [-1,1]^d$, if $k = (k_1,\dots,k_d)$ is a $d$-tuple of nonnegative integers, then we define $D^k := D_1^{k_1}\cdots D_d^{k_d} := \frac{\partial^{k_1}}{\partial x_1^{k_1}} \cdots \frac{\partial^{k_d}}{\partial x_d^{k_d}}$.
Let $|k| := k_1 + \dots + k_d$. Define the $K$-norm as $\|f\|_K := \sup_{|k| \le K} \sup_{x \in [-1,1]^d} |D^k f(x)|$. We will study the set $C^K_B$ which contains all smooth functions whose derivatives up to order $K$ have $\infty$-norm upper bounded by a constant $B > 0$. Formally, $C^K_B := \{f : \|f\|_K \le B\}$. The set of queries specified by $C^K_B$, denoted $Q_{C^K_B}$, is our focus. Smooth functions have been studied in depth in machine learning [26, 28, 27] and found wide applications [22].

The following theorem is our main result. It says that if the query class is specified by smooth functions, then there is a very efficient mechanism which preserves $\epsilon$-differential privacy and good accuracy. The mechanism consists of two parts: one for outputting a summary of the database, the other for answering a query. The two parts are described in Algorithm 1 and Algorithm 2 respectively.

Algorithm 1: Outputting the summary
  Parameters: privacy parameters $\epsilon, \delta > 0$; failure probability $\beta > 0$; smoothness order $K \in \mathbb{N}$; set $t = n^{\frac{1}{2d+K}}$.
  Input: database $D \in ([-1,1]^d)^n$.
  Output: a $t^d$-dimensional vector as the summary.
  For each $x = (x_1,\dots,x_d) \in D$: set $\theta_i(x) = \arccos(x_i)$, $i = 1,\dots,d$.
  For every $d$-tuple of nonnegative integers $m = (m_1,\dots,m_d)$ with $\|m\|_\infty \le t-1$:
    compute $\mathrm{Su}_m(D) = \frac{1}{n}\sum_{x \in D} \cos(m_1\theta_1(x))\cdots\cos(m_d\theta_d(x))$;
    $\widehat{\mathrm{Su}}_m(D) \leftarrow \mathrm{Su}_m(D) + \mathrm{Lap}\big(\frac{t^d}{n\epsilon}\big)$.
  Return: $\widehat{\mathrm{Su}}(D) = \big(\widehat{\mathrm{Su}}_m(D)\big)_{\|m\|_\infty \le t-1}$, a $t^d$-dimensional vector.

Algorithm 2: Answering a query
  Parameters: $t = n^{\frac{1}{2d+K}}$.
  Input: a query $q_f$, where $f : [-1,1]^d \to \mathbb{R}$ and $f \in C^K_B$; the summary $\widehat{\mathrm{Su}}(D)$ (a $t^d$-dimensional vector).
  Output: approximate answer to $q_f(D)$.
  Let $g_f(\theta) = f(\cos(\theta_1),\dots,\cos(\theta_d))$, $\theta = (\theta_1,\dots,\theta_d) \in [-\pi,\pi]^d$.
  Compute a trigonometric polynomial approximation $p_t(\theta)$ of $g_f(\theta)$, where the degree of each $\theta_i$ is $t$ (see Section 4 for details of the computation).
  Denote $p_t(\theta) = \sum_{m=(m_1,\dots,m_d),\,\|m\|_\infty < t} c_m \cos(m_1\theta_1)\cdots\cos(m_d\theta_d)$; let $c = (c_m)_{\|m\|_\infty < t}$ be a $t^d$-dimensional vector.
  Return: the inner product $\langle c, \widehat{\mathrm{Su}}(D) \rangle$.
The second part of the mechanism contains no private information of the database.

Theorem 3.1. Let the query set be $Q_{C^K_B} = \{q_f = \frac{1}{n}\sum_{x\in D} f(x) : f \in C^K_B\}$, where $K \in \mathbb{N}$ and $B > 0$ are constants. Let the data universe be $[-1,1]^d$, where $d \in \mathbb{N}$ is a constant. Then the mechanism $S$ given in Algorithm 1 and Algorithm 2 satisfies, for any $\epsilon > 0$:
1) The mechanism is $\epsilon$-differentially private.
2) For any $\beta \ge 10 \cdot e^{-\frac{1}{5} n^{\frac{d}{2d+K}}}$ the mechanism is $(\alpha,\beta)$-accurate, where $\alpha = O\big(n^{-\frac{K}{2d+K}}/\epsilon\big)$, and the hidden constant depends only on $d$, $K$ and $B$.
3) The running time for $S$ to output the summary is $O(n^{\frac{3d+K}{2d+K}})$.
4) The running time for $S$ to answer a query is $O(n^{\frac{d+2+2d/K}{2d+K}}\,\mathrm{polylog}(n))$.

The proof of Theorem 3.1 is given in the supplementary material. To have a better idea of how the performance depends on the order of smoothness, let us consider three cases. The first case is $K = 1$, i.e., the query functions only have first-order derivatives. Another extreme case is $K \gg d$, where we assume $d/K = \epsilon_0 \ll 1$. We also consider a case in the middle, $K = 2d$. Table 1 gives simplified upper bounds for the error and running time in these cases.

Table 1: Performance vs. order of smoothness

  Order of smoothness      | Accuracy $\alpha$                 | Time: outputting summary | Time: answering a query
  $K = 1$                  | $O((1/n)^{\frac{1}{2d+1}})$       | $O(n^{3/2})$             | $\tilde O(n^{\frac{3}{2} + \frac{1}{4d+2}})$
  $K = 2d$                 | $O(1/\sqrt{n})$                   | $O(n^{5/4})$             | $\tilde O(n^{\frac{1}{4} + \frac{3}{4d}})$
  $d/K = \epsilon_0 \ll 1$ | $O((1/n)^{1-2\epsilon_0})$        | $O(n^{1+\epsilon_0})$    | $\tilde O(n^{\epsilon_0(1+\frac{3}{d})})$

We have the following observations:
1) The accuracy $\alpha$ improves dramatically from roughly $O(n^{-\frac{1}{2d}})$ to nearly $O(n^{-1})$ as $K$ increases. For $K > 2d$, the error is smaller than the sampling error $O(1/\sqrt{n})$.
2) The running time for outputting the summary does not change much, because reading through the database requires $\Omega(n)$ time.
3) The running time for answering a query reduces significantly from roughly $O(n^{3/2})$ to nearly $O(n^{\epsilon_0})$ as $K$ gets large. When $K = 2d$, it is about $n^{1/4}$ if $d$ is not too small.
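Algorithms 1 and 2 can be sketched in a few lines of code. For brevity this sketch fixes $d = 1$, so the summary indices are scalars $m < t$ (function names and the fixed random seed are our choices, not the paper's):

```python
import numpy as np

def private_summary(D, t, eps, rng=None):
    # Algorithm 1 sketch for d = 1: noisy answers to the t cosine-moment
    # queries (1/n) * sum_x cos(m * arccos(x)), m = 0..t-1, with Lap(t/(n*eps)) noise.
    rng = rng if rng is not None else np.random.default_rng(0)
    theta = np.arccos(np.asarray(D))
    moments = np.array([np.mean(np.cos(m * theta)) for m in range(t)])
    return moments + rng.laplace(scale=t / (len(D) * eps), size=t)

def answer_query(coeffs, summary):
    # Algorithm 2 sketch: with the query's trigonometric-polynomial
    # coefficients c_m in hand, the answer is the inner product <c, summary>.
    return float(np.dot(coeffs, summary))
```

For $f(x) = x$ we have $g_f(\theta) = \cos\theta$, so the coefficient vector is the indicator of $m = 1$ and the mechanism returns a noisy mean of the database.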
In practice, the speed of answering a query may be more important than that of outputting the summary, since the sanitizer only outputs the summary once. Thus having an $n^c$-time ($c \ll 1$) algorithm for query answering is appealing.

Conceptually our mechanism is simple. First, by a change of variables we have $g_f(\theta_1,\dots,\theta_d) = f(\cos\theta_1,\dots,\cos\theta_d)$, which also transforms the data universe from $[-1,1]^d$ to $[-\pi,\pi]^d$. Note that for each variable $\theta_i$, $g_f$ is an even function. To compute the summary, the mechanism just gives noisy answers to the queries specified by the even trigonometric monomials $\cos(m_1\theta_1)\cdots\cos(m_d\theta_d)$. For each trigonometric monomial, the highest degree of any variable is $t = O(n^{\frac{1}{2d+K}})$. The summary is an $O(n^{\frac{d}{2d+K}})$-dimensional vector. To answer a query specified by a smooth function $f$, the mechanism computes a trigonometric polynomial approximation of $g_f$. The answer to the query $q_f$ is a linear combination of the summary with the coefficients of the approximating trigonometric polynomial.

Our algorithm is an $L_\infty$-approximation based mechanism, which is motivated by [24]. An approximation-based mechanism relies on three conditions: 1) there exists a small set of basis functions such that every query function can be well approximated by a linear combination of them; 2) all the linear coefficients are small; 3) the whole set of linear coefficients can be computed efficiently. If these conditions hold, then the mechanism just outputs noisy answers to the set of queries specified by the basis functions as the summary. When answering a query, the mechanism computes the coefficients with which the linear combination of the basis functions approximates the query function. The answer to the query is simply the inner product of the coefficients and the summary vector.

The following theorem guarantees that, by a change of variables and using even trigonometric polynomials as the basis functions, the class of smooth functions has all three properties described above.

Theorem 3.2. Let $\gamma > 0$. For every $f \in C^K_B$ defined on $[-1,1]^d$, let $g_f(\theta_1,\dots,\theta_d) = f(\cos\theta_1,\dots,\cos\theta_d)$, $\theta_i \in [-\pi,\pi]$. Then there is an even trigonometric polynomial $p$ whose degree in each variable is $t(\gamma) = \lceil (1/\gamma)^{1/K} \rceil$,
$$p(\theta_1,\dots,\theta_d) = \sum_{0 \le l_1,\dots,l_d < t(\gamma)} c_{l_1,\dots,l_d} \cos(l_1\theta_1)\cdots\cos(l_d\theta_d),$$
such that
1) $\|g_f - p\|_\infty \le \gamma$;
2) all the linear coefficients $c_{l_1,\dots,l_d}$ can be uniformly upper bounded by a constant $M$ independent of $t(\gamma)$ (i.e., $M$ depends only on $K$, $d$, and $B$);
3) the whole set of linear coefficients can be computed in time $O\big((1/\gamma)^{\frac{d+2}{K} + \frac{2d}{K^2}} \cdot \mathrm{polylog}(1/\gamma)\big)$.

Theorem 3.2 is proved in Section 4. Based on Theorem 3.2, the proof of Theorem 3.1 is mainly the argument for the Laplace mechanism together with an optimization of the approximation error $\gamma$ traded off against the Laplace noise (see the supplementary material).

4 $L_\infty$-approximation of smooth functions: small and efficiently computable coefficients

In this section we prove Theorem 3.2. That is, for every $f \in C^K_B$ the corresponding $g_f$ can be approximated in $L_\infty$-norm by a low-degree trigonometric polynomial. We also require that the linear coefficients of the trigonometric polynomial are all small and can be computed efficiently. These properties are crucial for the differentially private mechanism to be accurate and efficient. In fact, $L_\infty$-approximation of smooth functions in $C^K_B$ by polynomials (and other basis functions) is an important topic in approximation theory. It is well known that for every $f \in C^K_B$ there is a low-degree polynomial with small approximation error. However, it is not clear whether there is an upper bound for the linear coefficients that is sufficiently good for our purpose.
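As a small numerical illustration of why the transformed function $g_f(\theta) = f(\cos\theta)$ is convenient, the sketch below expands $g_f$ in the cosine basis using a plain discrete cosine projection (our stand-in for the Jackson-kernel operator developed next; $d = 1$, names ours). For smooth $f$ the coefficients stay small and the sup-norm error decays quickly:

```python
import numpy as np

def cosine_fit(f, t, grid=512):
    # Expand g_f(theta) = f(cos(theta)) as sum_m c_m cos(m*theta) via a
    # discrete cosine projection at the nodes theta_k = (k + 1/2) * pi / grid.
    theta = (np.arange(grid) + 0.5) * np.pi / grid
    g = f(np.cos(theta))
    c = np.array([2.0 / grid * np.sum(g * np.cos(m * theta)) for m in range(t)])
    c[0] /= 2.0                      # the constant term carries half weight
    return c

def eval_trig(c, theta):
    # Evaluate the even trigonometric polynomial with coefficients c.
    return sum(cm * np.cos(m * theta) for m, cm in enumerate(c))
```

For example, with $f(x) = x^3$ the expansion is exactly $(3\cos\theta + \cos 3\theta)/4$, so the only nonzero coefficients are $c_1 = 3/4$ and $c_3 = 1/4$, well below a constant bound.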
Instead we transform f to gf and use trigonometric polynomials as the basis functions in the mechanism. Then we are able to give a constant upper bound for the linear coefficients. We also need to compute the coefficients efficiently. But results from approximation theory give the coefficients as complicated integrals. We adopt an algorithm which fully exploits the smoothness of the function and thus can efficiently compute approximations of the coefficients to certain precision so that the errors involved do not affect the accuracy of the differentially private mechanism too much. Below, Section 4.1 describes the classical theory on trigonometric polynomial approximation of smooth functions. Section 4.2 shows that the coefficients have a small upper bound and can be efficiently computed. Theorem 3.2 then follows from these results. 4.1 Trigonometric polynomial approximation with generalized Jackson kernel This section mainly contains known results of trigonometric polynomial approximation, stated in a way tailored to our problem. For a comprehensive description of univariate approximation theory, please refer to the excellent book of [8]; and to [23] for multivariate approximation theory. Let gf be the function obtained from f ∈CK B ([−1, 1]d): gf(θ1, . . . , θd) = f(cos θ1, . . . , cos θd). Note that gf ∈CK B′([−π, π]d) for some constant B′ depending only on B, K, d, and gf is even with respect to each variable. The key tool in trigonometric polynomial approximation of smooth functions is the generalized Jackson kernel. Definition 4.1. Define the generalized Jackson kernel as Jt,r(s) = 1 λt,r sin(ts/2) sin(s/2) 2r , where λt,r is determined by R π −π Jt,r(s)ds = 1. Jt,r(s) is an even trigonometric polynomial of degree r(t −1). Let Ht,r(s) = Jt′,r(s), where t′ = ⌊t/r⌋+ 1. Then Ht,r is an even trigonometric polynomial of degree at most t. We write Ht,r(s) = a0 + t X l=1 al cos ls. 
(1) 6 Suppose that g is a univariate function defined on [−π, π] which satisfies that g(−π) = g(π). Define the approximation operator It,K as It,K(g)(x) = − Z π −π Ht,r(s) K+1 X l=1 (−1)l K + 1 l g(x + ls)ds, (2) where r = ⌈K+3 2 ⌉. It is not difficult to see that It,K maps g to a trigonometric polynomial of degree at most t. Next suppose that g is a d-variate function defined on [−π, π]d, and is even with respect to each variable. Define an operator Id t,K as sequential composition of It,K,1, . . . , It,K,d, where It,K,j is the approximation operator given in (2) with respect to the jth variable of g. Thus Id t,K(g) is a trigonometric polynomial of d-variables and each variable has degree at most t. Theorem 4.1. Suppose that g is a d-variate function defined on [−π, π]d, and is even with respect to each variable. Let D(K) j g be the Kth order partial derivative of g respect to the j-th variable. If ∥D(K) j g∥∞≤M for some constant M for all 1 ≤j ≤d, then there is a constant C such that ∥g −Id t,K(g)∥∞≤ C tK+1 , where C depends only on M, d and K. 4.2 The linear coefficients In this subsection we study the linear coefficients in the trigonometric polynomial Id t,K(gf). The previous subsection established that gf can be approximated by Id t,K(gf) for a small t. Here we consider the upper bound and approximate computation of the coefficients. Since Id t,K(gf)(θ1, . . . , θd) is even with respect to each variable, we write Id t,K(gf)(θ1, . . . , θd) = X 0≤n1,...,nd≤t cn1,...,nd cos(n1θ1) . . . cos(ndθd). (3) Fact 4.2. The coefficients cn1,...,nd of Id t,K(gf) can be written as cn1,...,nd = (−1)d X 1≤k1,...,kd≤K+1 0≤l1,...,ld≤t li=ki·ni∀i∈[d] ml1,k1,...,ld,kd, (4) where ml1,k1,...,ld,kd = d Y i=1 (−1)kiali K + 1 ki Z [−π,π]d d Y i=1 cos li ki θi gf(θ)dθ ! , (5) and ali is the linear coefficient of cos(lis) in Ht,r(s) as given in (1). The following lemma shows that the coefficients cn1,...,nd of Id t,K(gf) can be uniformly upper bounded by a constant independent of t. 
Lemma 4.3. There exists a constant M which depends only on K, B, d but is independent of t, such that for every f ∈ C^K_B, all the linear coefficients c_{n_1,...,n_d} of I^d_{t,K}(g_f) satisfy |c_{n_1,...,n_d}| ≤ M.

The proof of Lemma 4.3 is given in the supplementary material. Now we consider the computation of the coefficients c_{n_1,...,n_d} of I^d_{t,K}(g_f). Note that each coefficient involves d-dimensional integration of smooth functions, so we have to numerically compute approximations of them. For the function class C^K_B defined on [−1, 1]^d, traditional numerical integration methods run in time O((1/τ)^{d/K}) in order that the error is less than τ. Here we adopt the sparse grids algorithm due to Gerstner and Griebel [12], which fully exploits the smoothness of the integrand. By choosing a particular quadrature rule as the algorithm's subroutine, we are able to prove that the running time of the sparse grids algorithm is bounded by O((1/τ)^{2/K}). The sparse grids algorithm, the theorem bounding its running time, and the proof are all given in the supplementary material. Based on these results, we establish the running time for computing the approximate coefficients of the trigonometric polynomial, stated in the following lemma.

Lemma 4.4. Let ĉ_{n_1,...,n_d} be an approximation of the coefficient c_{n_1,...,n_d} of I^d_{t,K}(g_f) obtained by approximately computing the integral in (5) with a version of the sparse grids algorithm [12] (given in the supplementary material). Let Î^d_{t,K}(g_f)(θ_1, . . . , θ_d) = Σ_{0≤n_1,...,n_d≤t} ĉ_{n_1,...,n_d} cos(n_1 θ_1) · · · cos(n_d θ_d). Then for every f ∈ C^K_B, in order that ∥Î^d_{t,K}(g_f) − I^d_{t,K}(g_f)∥_∞ ≤ O(t^{−K}), it suffices that the computation of all the coefficients ĉ_{n_1,...,n_d} runs in time O(t^{(1+2/K)d+2} · polylog(t)). In addition, max_{n_1,...,n_d} |ĉ_{n_1,...,n_d} − c_{n_1,...,n_d}| = o(1) as t → ∞.

The proof of Lemma 4.4 is given in the supplementary material. Theorem 3.2 then follows easily from Lemma 4.3 and Lemma 4.4.

Proof of Theorem 3.2. Set t = t(γ) = (1/γ)^{1/K}.
Let p = Î^d_{t,K}(g_f). Combining Lemma 4.3 and Lemma 4.4, and noting that the coefficients ĉ_{n_1,...,n_d} are upper bounded by a constant, the theorem follows.

5 Conclusion

In this paper we propose an ϵ-differentially private mechanism for efficiently releasing K-smooth queries. The accuracy of the mechanism is O((1/n)^{K/(2d+K)}). The running time for outputting the summary is O(n^{1+d/(2d+K)}), and is Õ(n^{(d+2+2d/K)/(2d+K)}) for answering a query. The result can be generalized to (ϵ, δ)-differential privacy straightforwardly using the composition theorem [11]. The accuracy improves slightly to O((1/n)^{2K/(3d+2K)} log(1/δ)^{K/(3d+2K)}), while the running times for outputting the summary and answering a query increase slightly. Our mechanism is based on approximating smooth functions by linear combinations of a small set of basis functions with small and efficiently computable coefficients. Directly approximating functions in C^K_B([−1, 1]^d) by polynomials does not guarantee small coefficients and is less efficient. To achieve these goals we use trigonometric polynomials to approximate a transformation of the query functions. It is worth pointing out that the approximation considered here for differential privacy is L_∞-approximation, because accuracy is defined in the worst-case sense with respect to databases and queries. L_∞-approximation is different from L_2-approximation, which is simply the Fourier transform if we use trigonometric polynomials as the basis functions; L_2-approximation does not guarantee (worst-case) accuracy. For the class of smooth functions defined on [−1, 1]^d where d is a constant, it is in fact not difficult to design a poly(n)-time differentially private mechanism: one can discretize [−1, 1]^d to O(1/√n) precision and use a differentially private mechanism for answering general queries (e.g., [16]). However, that mechanism runs in time Õ(n^{d/2}) to answer a query, and provides only Õ(n^{−1/2}) accuracy.
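The end-to-end shape of the mechanism can be illustrated with a toy 1-D sketch (d = 1): project the transformed query g_f onto the cosine basis, privatize the database's cosine moments with Laplace noise, and answer any smooth query from the noisy summary alone. Everything below (the function names, the choice f(x) = x², the simplified sensitivity accounting, and all parameter values) is an illustrative assumption for the sketch, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_coeffs(f, t, n_grid=4096):
    """Coefficients c_n with f(cos θ) ≈ Σ_n c_n cos(nθ), via discrete
    projection (a crude stand-in for the paper's Jackson-kernel operator)."""
    theta = (np.arange(n_grid) + 0.5) * np.pi / n_grid
    g = f(np.cos(theta))                       # g_f(θ) = f(cos θ)
    c = np.empty(t + 1)
    c[0] = g.mean()
    for n in range(1, t + 1):
        c[n] = 2.0 * np.mean(g * np.cos(n * theta))
    return c

def private_summary(data, t, eps):
    """Noisy cosine moments m_n = (1/N) Σ_i cos(n · arccos(x_i)), n = 0..t.
    One record changes each moment by at most 2/N, so the vector's L1
    sensitivity is at most 2(t+1)/N; adding Laplace noise of that scale
    divided by eps is the standard Laplace mechanism for eps-DP."""
    theta = np.arccos(np.clip(data, -1.0, 1.0))
    moments = np.array([np.cos(n * theta).mean() for n in range(t + 1)])
    return moments + rng.laplace(scale=2.0 * (t + 1) / (len(data) * eps),
                                 size=t + 1)

def answer(summary, f):
    """Approximate (1/N) Σ_i f(x_i) using only the released summary."""
    c = cosine_coeffs(f, t=len(summary) - 1)
    return float(np.dot(c, summary))

data = rng.uniform(-1.0, 1.0, 200_000)
summary = private_summary(data, t=8, eps=1.0)
est = answer(summary, lambda x: x ** 2)        # a smooth query
assert abs(est - np.mean(data ** 2)) < 0.01    # close to the true average
```

The key property mirrored here is that the summary is computed once, with noise calibrated to the small number of basis moments, after which arbitrarily many smooth queries can be answered without touching the data again.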
In contrast, our mechanism exploits the higher-order smoothness of the queries. It is always more efficient, and for highly smooth queries it is also more accurate.

Acknowledgments

This work was supported by NSFC (61222307, 61075003) and a grant from the MOE-Microsoft Key Laboratory of Statistics and Information Technology of Peking University. We also thank Di He for very helpful discussions.

References

[1] B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. In PODS, pages 273–282. ACM, 2007. [2] A. Blum, K. Ligett, and A. Roth. A learning theory approach to non-interactive database privacy. In STOC, pages 609–618. ACM, 2008. [3] A. Blum and A. Roth. Fast private data release algorithms for sparse queries. arXiv preprint arXiv:1111.6842, 2011. [4] K. Chaudhuri and D. Hsu. Sample complexity bounds for differentially private learning. In COLT, 2011. [5] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. JMLR, 12:1069, 2011. [6] K. Chaudhuri, A. Sarwate, and K. Sinha. Near-optimal differentially private principal components. In NIPS, pages 998–1006, 2012. [7] M. Cheraghchi, A. Klivans, P. Kothari, and H. K. Lee. Submodular functions are noise stable. In SODA, pages 1586–1592. SIAM, 2012. [8] R. A. DeVore and G. G. Lorentz. Constructive Approximation, volume 303. Springer Verlag, 1993. [9] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, pages 265–284, 2006. [10] C. Dwork, M. Naor, O. Reingold, G. N. Rothblum, and S. Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In STOC, pages 381–390. ACM, 2009. [11] C. Dwork, G. N. Rothblum, and S. Vadhan. Boosting and differential privacy. In FOCS, pages 51–60. IEEE, 2010. [12] T. Gerstner and M. Griebel. Numerical integration using sparse grids.
Numerical Algorithms, 18(3-4):209–232, 1998. [13] A. Gupta, M. Hardt, A. Roth, and J. Ullman. Privately releasing conjunctions and the statistical query barrier. In STOC, pages 803–812. ACM, 2011. [14] M. Hardt, K. Ligett, and F. McSherry. A simple and practical algorithm for differentially private data release. In NIPS, 2012. [15] M. Hardt, G. N. Rothblum, and R. A. Servedio. Private data release via learning thresholds. In SODA, pages 168–187. SIAM, 2012. [16] M. Hardt and G. N. Rothblum. A multiplicative weights mechanism for privacy-preserving data analysis. In FOCS, pages 61–70. IEEE Computer Society, 2010. [17] D. Kifer and B. R. Lin. Towards an axiomatization of statistical privacy and utility. In PODS, pages 147–158. ACM, 2010. [18] J. Lei. Differentially private M-estimators. In NIPS, 2011. [19] C. Li, M. Hay, V. Rastogi, G. Miklau, and A. McGregor. Optimizing linear counting queries under differential privacy. In PODS, pages 123–134. ACM, 2010. [20] P. Jain, P. Kothari, and A. Thakurta. Differentially private online learning. In COLT, 2012. [21] A. Roth and T. Roughgarden. Interactive privacy via the median mechanism. In STOC, pages 765–774. ACM, 2010. [22] A. Smola, B. Schölkopf, and K. Müller. The connection between regularization operators and support vector kernels. Neural Networks, 11(4):637–649, 1998. [23] V. N. Temlyakov. Approximation of Periodic Functions. Nova Science Pub Inc, 1994. [24] J. Thaler, J. Ullman, and S. Vadhan. Faster algorithms for privately releasing marginals. In ICALP, pages 810–821. Springer, 2012. [25] J. Ullman. Answering n^{2+o(1)} counting queries with differential privacy is hard. In STOC. ACM, 2013. [26] A. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996. [27] G. Wahba et al. Support vector machines, reproducing kernel Hilbert spaces and the randomized GACV. Advances in Kernel Methods-Support Vector Learning, 6:69–87, 1999. [28] L. Wang.
Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. Journal of Machine Learning Research, 12:2269–2292, 2011. [29] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375–389, 2010. [30] O. Williams and F. McSherry. Probabilistic inference and differential privacy. In NIPS, 2010.
Buy-in-Bulk Active Learning Liu Yang Machine Learning Department, Carnegie Mellon University liuy@cs.cmu.edu Jaime Carbonell Language Technologies Institute, Carnegie Mellon University jgc@cs.cmu.edu

Abstract

In many practical applications of active learning, it is more cost-effective to request labels in large batches, rather than one at a time. This is because the cost of labeling a large batch of examples at once is often sublinear in the number of examples in the batch. In this work, we study the label complexity of active learning algorithms that request labels in a given number of batches, as well as the tradeoff between the total number of queries and the number of rounds allowed. We additionally study the total cost sufficient for learning, for an abstract notion of the cost of requesting the labels of a given number of examples at once. In particular, we find that for sublinear cost functions, it is often desirable to request labels in large batches (i.e., buying in bulk); although this may increase the total number of labels requested, it reduces the total cost required for learning.

1 Introduction

In many practical applications of active learning, the cost to acquire a large batch of labels at once is significantly less than the cost of the same number of sequential rounds of individual label requests. This is true for both practical reasons (overhead time for start-up, reserving equipment in discrete time-blocks, multiple labelers working in parallel, etc.) and for computational reasons (e.g., the time to update the learner's hypothesis and select the next examples may be large). Consider making one versus multiple hematological diagnostic tests on an out-patient. There are fixed up-front costs: bringing the patient in for testing, drawing and storing the blood, entering the information in the hospital record system, etc. And there are variable costs, per specific test. Consider a microarray assay for gene expression data.
There is a fixed cost in setting up and running the microarray, but virtually no incremental cost in the number of samples, just a constraint on the maximum allowed. Either of the above conditions is often the case in scientific experiments (e.g., [1]). As a different example, consider calling a focused group of experts to address questions w.r.t. a new product design or introduction. There is a fixed cost in forming the group (determining membership, contract, travel, etc.), and an incremental per-question cost. The common abstraction in such real-world versions of "oracles" is that learning can buy in bulk to advantage because oracles charge either per batch (answering a batch of questions for the same cost as answering a single question, up to a batch maximum), or the cost per batch is ax^p + b, where b is the set-up cost, x is the number of queries, and p = 1 or p < 1 (for the case where practice yields efficiency). Often we have other tradeoffs, such as delay vs. testing cost. For instance, in a medical diagnosis case, the most cost-effective way to minimize diagnostic tests is purely sequential active learning, where each test may rule out a set of hypotheses (diagnoses) and informs the next test to perform. But a patient suffering from a serious disease may worsen while sequential tests are being conducted. Hence batch testing makes sense if the batch can be tested in parallel. In general, one can convert delay into a second cost factor and optimize for the batch size that minimizes a combination of total delay and the sum of the costs for the individual tests. Parallelizing means more tests would be needed, since we lack the benefit of earlier tests to rule out future ones. In order to perform this batch-size optimization we also need to estimate the number of redundant tests incurred by turning a sequence into a shorter sequence of batches.
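The bulk discount in the ax^p + b cost model above is easy to see numerically. The sketch below compares fully sequential querying against one bulk batch; the constants a = 1, b = 5, p = 1/2 and the helper names are arbitrary illustrative choices, not from the paper:

```python
def batch_cost(x, a=1.0, b=5.0, p=0.5):
    """Cost of one batch of x label requests: a*x^p + b (sublinear for p < 1)."""
    return a * x ** p + b

def total_cost(n_queries, k):
    """Split n_queries into k near-equal batches and sum the per-batch costs."""
    base, extra = divmod(n_queries, k)
    sizes = [base + 1] * extra + [base] * (k - extra)
    return sum(batch_cost(s) for s in sizes if s > 0)

seq = total_cost(1000, 1000)   # 1000 batches of one query: 1000 * (1 + 5)
bulk = total_cost(1000, 1)     # one batch of 1000: 1000**0.5 + 5
assert seq == 6000.0
assert bulk < 40.0
# Even if bulk querying wastes labels (say it needs twice as many), it wins:
assert total_cost(2000, 1) < seq / 100
```

The last assertion is the paper's central tradeoff in miniature: batching may inflate the number of queries, yet still sharply reduce the total cost when the per-batch cost is sublinear.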
For the reasons cited above, it can be very useful in practice to generalize active learning to active-batch learning, with buy-in-bulk discounts. This paper develops a theoretical framework exploring the bounds and sample complexity of active buy-in-bulk machine learning, and analyzes the tradeoff that can be achieved between the number of batches and the total number of queries required for accurate learning. As another example, if we have many labelers (virtually unlimited) operating in parallel, but must pay for each query, and the amount of time to get back the answer to each query is considered independent with some distribution, it may often be the case that the expected amount of time needed to get back the answers to m queries is sublinear in m, so that if the "cost" is a function of both the payment amounts and the time, it might sometimes be less costly to submit multiple queries to be labeled in parallel. In scenarios such as those mentioned above, a batch-mode active learning strategy is desirable, rather than a method that selects instances to be labeled one at a time. There have recently been several attempts to construct heuristic approaches to the batch-mode active learning problem (e.g., [2]). However, theoretical analysis has been largely lacking. In contrast, there has recently been significant progress in understanding the advantages of fully-sequential active learning (e.g., [3, 4, 5, 6, 7]). In the present work, we are interested in extending the techniques used for the fully-sequential active learning model, studying natural analogues of them for the batch-mode active learning model. Formally, we are interested in two quantities: the sample complexity and the total cost. The sample complexity refers to the number of label requests used by the algorithm. We expect batch-mode active learning methods to use more label requests than their fully-sequential cousins.
On the other hand, if the cost to obtain a batch of labels is sublinear in the size of the batch, then we may sometimes expect the total cost used by a batch-mode learning method to be significantly less than that of the analogous fully-sequential algorithms, which request labels individually.

2 Definitions and Notation

As in the usual statistical learning problem, there is a standard Borel space X, called the instance space, and a set C of measurable classifiers h : X → {−1, +1}, called the concept space. Throughout, we suppose that the VC dimension of C, denoted d below, is finite. In the learning problem, there is an unobservable distribution D_XY over X × {−1, +1}. Based on this quantity, we let Z = {(X_t, Y_t)}_{t=1}^{∞} denote an infinite sequence of independent D_XY-distributed random variables. We also denote by Z_t = {(X_1, Y_1), (X_2, Y_2), . . . , (X_t, Y_t)} the first t such labeled examples. Additionally denote by D_X the marginal distribution of D_XY over X. For a classifier h : X → {−1, +1}, denote er(h) = P_{(X,Y)∼D_XY}(h(X) ≠ Y), the error rate of h. Additionally, for m ∈ N and Q ∈ (X × {−1, +1})^m, let er(h; Q) = (1/|Q|) Σ_{(x,y)∈Q} I[h(x) ≠ y], the empirical error rate of h. In the special case that Q = Z_m, abbreviate er_m(h) = er(h; Q). For r > 0, define B(h, r) = {g ∈ C : D_X(x : h(x) ≠ g(x)) ≤ r}. For any H ⊆ C, define DIS(H) = {x ∈ X : ∃h, g ∈ H s.t. h(x) ≠ g(x)}. We also denote by η(x) = P(Y = +1 | X = x), where (X, Y) ∼ D_XY, and let h∗(x) = sign(η(x) − 1/2) denote the Bayes optimal classifier. In the active learning protocol, the algorithm has direct access to the X_t sequence, but must request to observe each label Y_t, sequentially. The algorithm asks for up to a specified number of label requests n (the budget), and then halts and returns a classifier. We are particularly interested in determining, for a given algorithm, how large this number of label requests needs to be in order to guarantee a small error rate with high probability, a value known as the label complexity.
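A minimal concrete instantiation of these definitions may help fix ideas. The sketch below uses the class of threshold classifiers on [0, 1]; the class choice and the names `predict`, `empirical_error`, and `disagreement_region` are illustrative assumptions, not from the paper:

```python
import numpy as np

def predict(c, x):
    """Threshold classifier h_c(x) = +1 iff x >= c, else -1."""
    return np.where(x >= c, 1, -1)

def empirical_error(c, x, y):
    """er(h_c; Q): the fraction of labeled examples h_c gets wrong."""
    return float(np.mean(predict(c, x) != y))

def disagreement_region(version_space):
    """For thresholds, DIS(V) is the interval spanned by the surviving
    thresholds: two thresholds disagree exactly on the points between them."""
    return (min(version_space), max(version_space))

x = np.array([0.1, 0.4, 0.6, 0.9])
y = predict(0.5, x)                        # labels from a target threshold 0.5
assert empirical_error(0.5, x, y) == 0.0
assert empirical_error(0.0, x, y) == 0.5   # predicts +1 everywhere
assert disagreement_region([0.45, 0.55, 0.7]) == (0.45, 0.7)
```

In this class, only points inside DIS(V) carry information, which is exactly the structure the algorithms below exploit.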
In the present work, we are also interested in the cost expended by the algorithm. Specifically, in this context, there is a cost function c : N → (0, ∞), and requesting the labels {Y_{i_1}, Y_{i_2}, . . . , Y_{i_m}} of m examples {X_{i_1}, X_{i_2}, . . . , X_{i_m}} at once requires the algorithm to pay c(m); we are then interested in the sum of these costs, over all batches of label requests made by the algorithm. Depending on the form of the cost function, minimizing the cost of learning may actually require the algorithm to request labels in batches, which we expect would actually increase the total number of label requests. To help quantify the label complexity and cost complexity, we make use of the following definition, due to [6, 7].

Definition 2.1. [6, 7] Define the disagreement coefficient of h∗ as θ(ǫ) = sup_{r>ǫ} D_X(DIS(B(h∗, r))) / r.

3 Buy-in-Bulk Active Learning in the Realizable Case: k-batch CAL

We begin our analysis with the simplest case: namely, the realizable case, with a fixed prespecified number of batches. We are then interested in quantifying the label complexity for such a scenario. Formally, in this section we suppose h∗ ∈ C and er(h∗) = 0. This is referred to as the realizable case. We first review a well-known method for active learning in the realizable case, referred to as CAL after its discoverers Cohn, Atlas, and Ladner [8].

Algorithm: CAL(n)
1. t ← 0, m ← 0, Q ← ∅
2. While t < n
3.   m ← m + 1
4.   If max_{y∈{−1,+1}} min_{h∈C} er(h; Q ∪ {(X_m, y)}) = 0
5.     Request Y_m, let Q ← Q ∪ {(X_m, Y_m)}, t ← t + 1
6. Return ĥ = argmin_{h∈C} er(h; Q)

The label complexity of CAL is known to be O(θ(ǫ)(d log(θ(ǫ)) + log(log(1/ǫ)/δ)) log(1/ǫ)) [7]. That is, some n of this size suffices to guarantee that, with probability 1 − δ, the returned classifier ĥ has er(ĥ) ≤ ǫ. One particularly simple way to modify this algorithm to make it batch-based is to simply divide the budget into equal batch sizes. This yields the following method, which we refer to as k-batch CAL, where k ∈ {1, . . . , n}.
Algorithm: k-batch CAL(n) 1. Let Q ←{}, b ←2, V ←C 2. For m = 1, 2, . . . 3. If Xm ∈DIS(V ) 4. Q ←Q ∪{Xm} 5. If |Q| = ⌊n/k⌋ 6. Request the labels of examples in Q 7. Let L be the corresponding labeled examples 8. V ←{h ∈V : er(h; L) = 0} 9. b ←b + 1 and Q ←∅ 10. If b > k, Return any ˆh ∈V We expect the label complexity of k-batch CAL to somehow interpolate between passive learning (at k = 1) and the label complexity of CAL (at k = n). Indeed, the following theorem bounds the label complexity of k-batch CAL by a function that exhibits this interpolation behavior with respect to the known upper bounds for these two cases. Theorem 3.1. In the realizable case, for some λ(ǫ, δ) = O kǫ−1/kθ(ǫ)1−1/k(d log(1/ǫ) + log(1/δ)) , for any n ≥λ(ǫ, δ), with probability at least 1 −δ, running k-batch CAL with budget n produces a classifier ˆh with er(ˆh) ≤ǫ. Proof. Let M = ⌊n/k⌋. Define V0 = C and i0M = 0. Generally, for b ≥1, let ib1, ib2, . . . , ibM denote the indices i of the first M points Xi ∈DIS(Vb−1) for which i > i(b−1)M, and let Vb = {h ∈ 3 Vb−1 : ∀j ≤M, h(Xibj) = h∗(Xibj)}. These correspond to the version space at the conclusion of batch b in the k-batch CAL algorithm. Note that Xib1, . . . , XibM are conditionally iid given Vb−1, with distribution of X given X ∈ DIS(Vb−1). Thus, the PAC bound of [9] implies that, for some constant c ∈(0, ∞), with probability ≥1 −δ/k, Vb ⊆B h∗, cd log(M/d) + log(k/δ) M P(DIS(Vb−1)) . By a union bound, the above holds for all b ≤k with probability ≥1 −δ; suppose this is the case. Since P(DIS(Vb−1)) ≤θ(ǫ) max{ǫ, maxh∈Vb−1er(h)}, and any b with maxh∈Vb−1 er(h) ≤ǫ would also have maxh∈Vb er(h) ≤ǫ, we have max h∈Vb er(h) ≤max ǫ, cd log(M/d) + log(k/δ) M θ(ǫ) max h∈Vb−1 er(h)) . Noting that P(DIS(V0)) ≤1 implies V1 ⊆B h∗, c d log(M/d)+log(k/δ) M , by induction we have max h∈Vk er(h) ≤max ( ǫ, cd log(M/d) + log(k/δ) M k θ(ǫ)k−1 ) . For some constant c′ > 0, any M ≥c′ θ(ǫ) k−1 k ǫ1/k d log 1 ǫ + log(k/δ) makes the right hand side ≤ǫ. 
Since M = ⌊n/k⌋, it suffices to have n ≥k 1 + c′ θ(ǫ) k−1 k ǫ1/k d log 1 ǫ + log(k/δ) . Theorem 3.1 has the property that, when the disagreement coefficient is small, the stated bound on the total number of label requests sufficient for learning is a decreasing function of k. This makes sense, since θ(ǫ) small would imply that fully-sequential active learning is much better than passive learning. Small values of k correspond to more passive-like behavior, while larger values of k take fuller advantage of the sequential nature of active learning. In particular, when k = 1, we recover a well-known label complexity bound for passive learning by empirical risk minimization [10]. In contrast, when k = log(1/ǫ), the ǫ−1/k factor is e (constant), and the rest of the bound is at most O(θ(ǫ)(d log(1/ǫ) + log(1/δ)) log(1/ǫ)), which is (up to a log factor) a well-known bound on the label complexity of CAL for active learning [7] (a slight refinement of the proof would in fact recover the exact bound of [7] for this case); for k larger than log(1/ǫ), the label complexity can only improve; for instance, consider that upon reaching a given data point Xm in the data stream, if V is the version space in k-batch CAL (for some k), and V ′ is the version space in 2k-batch CAL, then we have V ′ ⊆V (supposing n is a multiple of 2k), so that Xm ∈DIS(V ′) only if Xm ∈DIS(V ). Note that even k = 2 can sometimes provide significant reductions in label complexity over passive learning: for instance, by a factor proportional to 1/√ǫ in the case that θ(ǫ) is bounded by a finite constant. 4 Batch Mode Active Learning with Tsybakov noise The above analysis was for the realizable case. While this provides a particularly clean and simple analysis, it is not sufficiently broad to cover many realistic learning applications. To move beyond the realizable case, we need to allow the labels to be noisy, so that er(h∗) > 0. 
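As a concrete (and entirely illustrative) companion to the realizable-case analysis of Section 3, k-batch CAL can be simulated for threshold classifiers on [0, 1], where the version space is an interval and DIS(V) is its interior. All names and parameter values below are assumptions made for the sketch, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
target = 0.37                              # the unknown target threshold h*
oracle = lambda x: 1 if x >= target else -1

def uniform_stream():
    while True:                            # unlabeled data: X ~ Uniform[0, 1]
        yield float(rng.uniform(0.0, 1.0))

def k_batch_cal(stream, oracle, n, k):
    """Request labels in k batches of floor(n/k), only for points that fall
    in the current disagreement region (lo, hi)."""
    lo, hi = 0.0, 1.0
    m = n // k
    for _ in range(k):
        batch = []
        for x in stream:
            if lo < x < hi:                # x in DIS(V): worth labeling
                batch.append(x)
                if len(batch) == m:
                    break
        for xb in batch:                   # one bulk label request
            if oracle(xb) == 1:
                hi = min(hi, xb)           # target threshold <= xb
            else:
                lo = max(lo, xb)           # target threshold > xb
    return (lo + hi) / 2.0                 # any classifier in V; midpoint here

h = k_batch_cal(uniform_stream(), oracle, n=150, k=3)
assert abs(h - target) < 0.05
```

Each batch is drawn inside the previous batch's disagreement region, so the interval shrinks multiplicatively per round; this is the interpolation between passive learning (k = 1) and fully-sequential CAL that Theorem 3.1 quantifies.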
One popular noise model in the statistical learning theory literature is Tsybakov noise, which is defined as follows. Definition 4.1. [11] The distribution DXY satisfies Tsybakov noise if h∗∈C, and for some c > 0 and α ∈[0, 1], ∀t > 0, P(|η(x) −1/2| < t) < c1t α 1−α , equivalently, ∀h, P(h(x) ̸= h∗(x)) ≤c2(er(h) −er(h∗))α, where c1 and c2 are constants. Supposing DXY satisfies Tsybakov noise, we define a quantity Em = c3 d log(m/d) + log(km/δ) m 1 2−α . 4 based on a standard generalization bound for passive learning [12]. Specifically, [12] have shown that, for any V ⊆C, with probability at least 1 −δ/(4km2), sup h,g∈V |(er(h) −er(g)) −(erm(h) −erm(g))| < Em. (1) Consider the following modification of k-batch CAL, designed to be robust to Tsybakov noise. We refer to this method as k-batch Robust CAL, where k ∈{1, . . . , n}. Algorithm: k-batch Robust CAL(n) 1. Let Q ←{}, b ←1, V ←C, m1 ←0 2. For m = 1, 2, . . . 3. If Xm ∈DIS(V ) 4. Q ←Q ∪{Xm} 5. If |Q| = ⌊n/k⌋ 6. Request the labels of examples in Q 7. Let L be the corresponding labeled examples 8. V ←{h ∈V : (er(h; L) −ming∈V er(g; L)) ⌊n/k⌋ m−mb ≤Em−mb} 9. b ←b + 1 and Q ←∅ 10. mb ←m 11. If b > k, Return any ˆh ∈V Theorem 4.2. Under the Tsybakov noise condition, letting β = α 2−α, and ¯β = Pk−1 i=0 βi, for some λ(ǫ, δ) = O k 1 ǫ 2−α ¯ β (c2θ(c2ǫα))1−βk−1 ¯ β d log d ǫ + log kd δǫ 1+β ¯ β−βk ¯ β ! , for any n ≥λ(ǫ, δ), with probability at least 1 −δ, running k-batch Robust CAL with budget n produces a classifier ˆh with er(ˆh) −er(h∗) ≤ǫ. Proof. Let M = ⌊n/k⌋. Define i0M = 0 and V0 = C. Generally, for b ≥ 1, let ib1, ib2, . . . , ibM denote the indices i of the first M points Xi ∈DIS(Vb−1) for which i > i(b−1)M, and let Qb = {(Xib1, Yib1), . . . , (XibM , YibM )} and Vb = {h ∈Vb−1 : (er(h; Qb) − ming∈Vb−1 er(g; Qb)) M ibM−i(b−1)M ≤EibM−i(b−1)M }. These correspond to the set V at the conclusion of batch b in the k-batch Robust CAL algorithm. For b ∈ {1, . . . 
, k}, (1) (applied under the conditional distribution given Vb−1, combined with the law of total probability) implies that ∀m > 0, letting Zb,m = {(Xi(b−1)M+1, Yi(b−1)M+1), ..., (Xi(b−1)M +m, Yi(b−1)M+m)}, with probability at least 1−δ/(4km2), if h∗∈Vb−1, then er(h∗; Zb,m) −ming∈Vb−1 er(g; Zb,m) < Em, and every h ∈Vb−1 with er(h; Zb,m) −ming∈Vb−1 er(g; Zb,m) ≤Em has er(h) −er(h∗) < 2Em. By a union bound, this holds for all m ∈N, with probability at least 1 −δ/(2k). In particular, this means it holds for m = ibM −i(b−1)M. But note that for this value of m, any h, g ∈Vb−1 have er(h; Zb,m) −er(g; Zb,m) = (er(h; Qb) −er(g; Qb)) M m (since for every (x, y) ∈Zb,m \ Qb, either both h and g make a mistake, or neither do). Thus if h∗∈Vb−1, we have h∗∈Vb as well, and furthermore suph∈Vb er(h) −er(h∗) < 2EibM −i(b−1)M . By induction (over b) and a union bound, these are satisfied for all b ∈{1, . . . , k} with probability at least 1 −δ/2. For the remainder of the proof, we suppose this 1 −δ/2 probability event occurs. Next, we focus on lower bounding ibM −i(b−1)M, again by induction. As a base case, we clearly have i1M −i0M ≥M. Now suppose some b ∈{2, . . . , k} has i(b−1)M −i(b−2)M ≥Tb−1 for some Tb−1. Then, by the above, we have suph∈Vb−1 er(h) −er(h∗) < 2ETb−1. By the Tsybakov noise condition, this implies Vb−1 ⊆B h∗, c2 2ETb−1 α , so that if suph∈Vb−1 er(h) − er(h∗) > ǫ, P(DIS(Vb−1)) ≤θ(c2ǫα)c2 2ETb−1 α. Now note that the conditional distribution of ibM −i(b−1)M given Vb−1 is a negative binomial random variable with parameters M and 1 −P(DIS(Vb−1)) (that is, a sum of M Geometric(P(DIS(Vb−1))) random variables). A Chernoff bound (applied under the conditional distribution given Vb−1) implies that P(ibM −i(b−1)M < 5 M/(2P(DIS(Vb−1)))|Vb−1) < e−M/6. Thus, for Vb−1 as above, with probability at least 1−e−M/6, ibM−i(b−1)M ≥ M 2θ(c2ǫα)c2(2ETb−1)α . Thus, we can define Tb as in the right hand side, which thereby defines a recurrence. 
By induction, with probability at least 1 −ke−M/6 > 1 −δ/2, ikM −i(k−1)M ≥M ¯β 1 4c2θ(c2ǫα) ¯β−βk−1 1 2(d log(M) + log(kM/δ)) β( ¯β−βk−1) . By a union bound, with probability 1−δ, this occurs simultaneously with the above suph∈Vk er(h)− er(h∗) < 2EikM −i(k−1)M bound. Combining these two results yields sup h∈Vk er(h) −er(h∗) = O (c2θ(c2ǫα)) ¯β−βk−1 M ¯β ! 1 2−α (d log(M) + log(kM/δ)) 1+β( ¯ β−βk−1) 2−α ! . Setting this to ǫ and solving for n, we find that it suffices to have M ≥c4 1 ǫ 2−α ¯ β (c2θ(c2ǫα))1−βk−1 ¯ β d log d ǫ + log kd δǫ 1+β ¯ β−βk ¯ β , for some constant c4 ∈[1, ∞), which then implies the stated result. Note: the threshold Em in k-batch Robust CAL has a direct dependence on the parameters of the Tsybakov noise condition. We have expressed the algorithm in this way only to simplify the presentation. In practice, such information is not often available. However, we can replace Em with a data-dependent local Rademacher complexity bound ˆEm, as in [7], which also satisfies (1), and satisfies (with high probability) ˆEm ≤c′Em, for some constant c′ ∈[1, ∞) (see [13]). This modification would therefore provide essentially the same guarantee stated above (up to constant factors), without having any direct dependence on the noise parameters, and the analysis gets only slightly more involved to account for the confidences in the concentration inequalities for these ˆEm estimators. A similar result can also be obtained for batch-based variants of other noise-robust disagreementbased active learning algorithms from the literature (e.g., a variant of A2 [5] that uses updates based on quantities related to these ˆEm estimators, in place of the traditional upper-bound/lower-bound construction, would also suffice). When k = 1, Theorem 4.2 matches the best results for passive learning (up to log factors), which are known to be minimax optimal (again, up to log factors). 
If we let k become large (while still considered as a constant), our result converges to the known results for one-at-a-time active learning with RobustCAL (again, up to log factors) [7, 14]. Although those results are not always minimax optimal, they do represent the state-of-the-art in the general analysis of active learning, and they are really the best we could hope for from basing our algorithm on RobustCAL. 5 Buy-in-Bulk Solutions to Cost-Adaptive Active Learning The above sections discussed scenarios in which we have a fixed number k of batches, and we simply bounded the label complexity achievable within that constraint by considering a variant of CAL that uses k equal-sized batches. In this section, we take a slightly different approach to the problem, by going back to one of the motivations for using batch-based active learning in the first place: namely, sublinear costs for answering batches of queries at a time. If the cost of answering m queries at once is sublinear in m, then batch-based algorithms arise naturally from the problem of optimizing the total cost required for learning. Formally, in this section, we suppose we are given a cost function c : (0, ∞) →(0, ∞), which is nondecreasing, satisfies c(αx) ≤αc(x) (for x, α ∈[1, ∞)) , and further satisfies the condition that for every q ∈N, ∃q′ ∈N such that 2c(q) ≤c(q′) ≤4c(q), which typically amounts to a kind of smoothness assumption. For instance, c(q) = √q would satisfy these conditions (as would many other smooth increasing concave functions); the latter assumption can be generalized to allow other constants, though we only study this case below for simplicity. To understand the total cost required for learning in this model, we consider the following costadaptive modification of the CAL algorithm. 6 Algorithm: Cost-Adaptive CAL(C) 1. Q ←∅, R ←DIS(C), V ←C, t ←0 2. Repeat 3. q ←1 4. Do until P(DIS(V )) ≤P(R)/2 5. Let q′ > q be minimal such that c(q′ −q) ≥2c(q) 6. 
If c(q′ −q) + t > C, Return any ˆh ∈V 7. Request the labels of the next q′ −q examples in DIS(V ) 8. Update V by removing those classifiers inconsistent with these labels 9. Let t ←t + c(q′ −q) 10. q ←q′ 11. R ←DIS(V ) Note that the total cost expended by this method never exceeds the budget argument C. We have the following result on how large of a budget C is sufficient for this method to succeed. Theorem 5.1. In the realizable case, for some λ(ǫ, δ) = O c θ(ǫ) (d log(θ(ǫ)) + log(log(1/ǫ)/δ)) log(1/ǫ) , for any C ≥λ(ǫ, δ), with probability at least 1 −δ, Cost-Adaptive CAL(C) returns a classifier ˆh with er(ˆh) ≤ǫ. Proof. Supposing an unlimited budget (C = ∞), let us determine how much cost the algorithm incurs prior to having suph∈V er(h) ≤ǫ; this cost would then be a sufficient size for C to guarantee this occurs. First, note that h∗∈V is maintained as an invariant throughout the algorithm. Also, note that if q is ever at least as large as O(θ(ǫ)(d log(θ(ǫ)) + log(1/δ′))), then as in the analysis for CAL [7], we can conclude (via the PAC bound of [9]) that with probability at least 1 −δ′, sup h∈V P(h(X) ̸= h∗(X)|X ∈R) ≤1/(2θ(ǫ)), so that sup h∈V er(h) = sup h∈V P(h(X) ̸= h∗(X)|X ∈R)P(R) ≤P(R)/(2θ(ǫ)). We know R = DIS(V ′) for the set V ′ which was the value of the variable V at the time this R was obtained. Supposing suph∈V ′ er(h) > ǫ, we know (by the definition of θ(ǫ)) that P(R) ≤P DIS B h∗, sup h∈V ′ er(h) ≤θ(ǫ) sup h∈V ′ er(h). Therefore, sup h∈V er(h) ≤1 2 sup h∈V ′ er(h). In particular, this implies the condition in Step 4 will be satisfied if this happens while suph∈V er(h) > ǫ. But this condition can be satisfied at most ⌈log2(1/ǫ)⌉times while suph∈V er(h) > ǫ (since suph∈V er(h) ≤P(DIS(V ))). So with probability at least 1 − δ′⌈log2(1/ǫ)⌉, as long as suph∈V er(h) > ǫ, we always have c(q) ≤4c(O(θ(ǫ)(d log(θ(ǫ)) + log(1/δ′)))) ≤O(c(θ(ǫ)(d log(θ(ǫ)) + log(1/δ′)))). Letting δ′ = δ/⌈log2(1/ǫ)⌉, this is 1 −δ. 
So for each round of the outer loop while suph∈V er(h) > ǫ, by summing the geometric series of cost values c(q′ −q) in the inner loop, we find the total cost incurred is at most O(c(θ(ǫ)(d log(θ(ǫ)) + log(log(1/ǫ)/δ)))). Again, there are at most ⌈log2(1/ǫ)⌉rounds of the outer loop while suph∈V er(h) > ǫ, so that the total cost incurred before we have suph∈V er(h) ≤ǫ is at most O(c(θ(ǫ)(d log(θ(ǫ)) + log(log(1/ǫ)/δ))) log(1/ǫ)). Comparing this result to the known label complexity of CAL, which is (from [7]) O (θ(ǫ) (d log(θ(ǫ)) + log(log(1/ǫ)/δ)) log(1/ǫ)) , we see that the major factor, namely the O (θ(ǫ) (d log(θ(ǫ)) + log(log(1/ǫ)/δ))) factor, is now inside the argument to the cost function c(·). In particular, when this cost function is sublinear, we 7 expect this bound to be significantly smaller than the cost required by the original fully-sequential CAL algorithm, which uses batches of size 1, so that there is a significant advantage to using this batch-mode active learning algorithm. Again, this result is formulated for the realizable case for simplicity, but can easily be extended to the Tsybakov noise model as in the previous section. In particular, by reasoning quite similar to that above, a cost-adaptive variant of the Robust CAL algorithm of [14] achieves error rate er(ˆh) − er(h∗) ≤ǫ with probability at least 1 −δ using a total cost O c θ(c2ǫα)c2 2ǫ2α−2dpolylog (1/(ǫδ)) log (1/ǫ) . We omit the technical details for brevity. However, the idea is similar to that above, except that the update to the set V is now as in k-batch Robust CAL (with an appropriate modification to the δ-related logarithmic factor in Em), rather than simply those classifiers making no mistakes. 
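The inner loop of Cost-Adaptive CAL chooses each batch so that it costs at least twice everything requested so far in the round, so batch sizes grow geometrically. A small sketch (with the arbitrary choice c(q) = √q; the function names are illustrative) shows the resulting schedule, and why the total cost paid in a round stays within a constant factor of one bulk request:

```python
def c(q):
    return q ** 0.5                 # an arbitrary sublinear batch-cost function

def batch_schedule(q_needed):
    """Batch sizes as in Cost-Adaptive CAL's inner loop: after q labels,
    the next batch size dq is minimal with c(dq) >= 2*c(q)."""
    q, batches = 1, [1]
    while q < q_needed:
        dq = 1
        while c(dq) < 2.0 * c(q):
            dq += 1
        batches.append(dq)
        q += dq
    return batches

sched = batch_schedule(10_000)
paid = sum(c(b) for b in sched)     # what the algorithm actually pays
bulk = c(sum(sched))                # cost of asking for everything at once
assert len(sched) < 10              # geometric growth: O(log q) batches
assert paid < 4.0 * bulk            # total paid is O(c(q)), as in Theorem 5.1
```

Summing the geometric series of per-batch costs is exactly the step in the proof of Theorem 5.1 that moves the dominant factor inside the argument of c(·).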
The proof then follows analogously to that of Theorem 5.1, the only major change being that now we bound the number of unlabeled examples processed in the inner loop before sup_{h∈V} P(h(X) ≠ h*(X)) ≤ P(R)/(2θ); letting V′ be the previous version space (the one for which R = DIS(V′)), we have P(R) ≤ θ c_2 (sup_{h∈V′} er(h) − er(h*))^α, so that it suffices to have sup_{h∈V} P(h(X) ≠ h*(X)) ≤ (c_2/2)(sup_{h∈V′} er(h) − er(h*))^α, and for this it suffices to have sup_{h∈V} er(h) − er(h*) ≤ 2^{−1/α}(sup_{h∈V′} er(h) − er(h*)); by inverting E_m, we find that it suffices to have a number of samples Õ((2^{−1/α}(sup_{h∈V′} er(h) − er(h*)))^{α−2} d). Since the number of label requests among m samples in the inner loop is roughly Õ(m P(R)) ≤ Õ(m θ c_2 (sup_{h∈V′} er(h) − er(h*))^α), the batch size needed to make sup_{h∈V} P(h(X) ≠ h*(X)) ≤ P(R)/(2θ) is at most Õ(θ c_2 2^{2/α} (sup_{h∈V′} er(h) − er(h*))^{2α−2} d). When sup_{h∈V′} er(h) − er(h*) > ǫ, this is Õ(θ c_2 2^{2/α} ǫ^{2α−2} d). If sup_{h∈V} P(h(X) ≠ h*(X)) ≤ P(R)/(2θ) is ever satisfied, then by the same reasoning as above, the update condition in Step 4 would be satisfied. Again, this update can be satisfied at most log(1/ǫ) times before achieving sup_{h∈V} er(h) − er(h*) ≤ ǫ.

6 Conclusions

We have seen that the analysis of active learning can be adapted to the setting in which labels are requested in batches. We studied this in two related models of learning. In the first case, we supposed the number k of batches is specified, and we analyzed the number of label requests used by an algorithm that requested labels in k equal-sized batches. As a function of k, this label complexity became closer to that of the analogous results for fully-sequential active learning for larger values of k, and closer to the label complexity of passive learning for smaller values of k, as one would expect. Our second model was based on a notion of the cost to request the labels of a batch of a given size.
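As an aside, the budgeted, doubling batch scheme analyzed in Theorem 5.1 can be illustrated on a toy one-dimensional threshold class. Everything below (function names, the hypothesis class, the cost function) is our own illustrative choice under the realizable-case assumptions, not the paper's implementation:

```python
import random

def cost_adaptive_cal_toy(budget, cost, thresholds, true_t, rng):
    """Toy budgeted CAL for 1-D threshold classifiers h_t(x) = 1[x >= t].

    The version space V is a set of candidate thresholds; its disagreement
    region DIS(V) is the interval (min V, max V). Batch sizes double, so the
    per-batch costs form a geometric series, and the method stops rather than
    exceed the budget C (here `budget`)."""
    V = sorted(thresholds)
    spent, q = 0.0, 1
    while len(V) > 1:
        step = cost(q)               # cost of the next batch of q label requests
        if spent + step > budget:
            break                    # budget exhausted: return any h in V
        spent += step
        lo, hi = V[0], V[-1]         # DIS(V) for a threshold class
        for _ in range(q):
            x = rng.uniform(lo, hi)  # unlabeled draw from the disagreement region
            y = int(x >= true_t)     # realizable labels from the target h*
            V = [t for t in V if int(x >= t) == y]
            if len(V) <= 1:
                break
        q *= 2                       # doubling batch sizes
    return V[0], spent
```

With a sublinear cost such as c(m) = √m, the total spend is dominated by the last few batches, mirroring how the bound above places the dominant factor inside the argument of c(·).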
We studied an active learning algorithm designed for this setting, and found that the total cost used by this algorithm may often be significantly smaller than that used by the analogous fully-sequential active learning methods, particularly when the cost function is sublinear. There are many active learning algorithms in the literature that can be described (or analyzed) in terms of batches of label requests. For instance, this is the case for the margin-based active learning strategy explored by [15]. Here we have only studied variants of CAL (and its noise-robust generalization). However, one could also apply this style of analysis to other methods, to investigate analogous questions of how the label complexities of such methods degrade as the batch sizes increase, or how such methods might be modified to account for a sublinear cost function, and what results one might obtain on the total cost of learning with these modified methods. This could potentially be a fruitful future direction for the study of batch mode active learning. The tradeoff between the total number of queries and the number of rounds examined in this paper is natural to study. Similar tradeoffs have been studied in other contexts. In any two-party communication task, there are three measures of complexity that are typically used: communication complexity (the total number of bits exchanged), round complexity (the number of rounds of communication), and time complexity. The classic work [16] considered the problem of the tradeoffs between communication complexity and rounds of communication. [17] studies the tradeoffs among all three of communication complexity, round complexity, and time complexity. Interested readers may wish to go beyond the present work and study the tradeoffs among all three measures of complexity for batch mode active learning.

References

[1] V. S. Sheng and C. X. Ling. Feature value acquisition in testing: a sequential batch test algorithm.
In Proceedings of the 23rd international conference on Machine learning, 2006. [2] S. Chakraborty, V. Balasubramanian, and S. Panchanathan. An optimization based framework for dynamic batch mode active learning. In Advances in Neural Information Processing, 2010. [3] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Journal of Machine Learning Research, 10:281–299, 2009. [4] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18, 2005. [5] M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proc. of the 23rd International Conference on Machine Learning, 2006. [6] S. Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, 2007. [7] S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011. [8] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994. [9] V. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer-Verlag, New York, 1982. [10] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999. [11] E. Mammen and A.B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27:1808–1829, 1999. [12] P. Massart and ´E. N´ed´elec. Risk bounds for statistical learning. The Annals of Statistics, 34(5):2326–2366, 2006. [13] V. Koltchinskii. Local rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006. [14] S. Hanneke. Activized learning: Transforming passive to active with improved label complexity. Journal of Machine Learning Research, 13(5):1469–1587, 2012. [15] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Proceedings of the 20th Conference on Learning Theory, 2007. [16] C. H. 
Papadimitriou and M. Sipser. Communication complexity. Journal of Computer and System Sciences, 28(2):260–269, 1984. [17] P. Harsha, Y. Ishai, J. Kilian, K. Nissim, and S. Venkatesh. Communication versus computation. In The 31st International Colloquium on Automata, Languages and Programming, pages 745–756, 2004.
On Poisson Graphical Models Eunho Yang Department of Computer Science University of Texas at Austin eunho@cs.utexas.edu Pradeep Ravikumar Department of Computer Science University of Texas at Austin pradeepr@cs.utexas.edu Genevera I. Allen Department of Statistics and Electrical & Computer Engineering Rice University gallen@rice.edu Zhandong Liu Department of Pediatrics-Neurology Baylor College of Medicine zhandonl@bcm.edu Abstract Undirected graphical models, such as Gaussian graphical models, Ising, and multinomial/categorical graphical models, are widely used in a variety of applications for modeling distributions over a large number of variables. These standard instances, however, are ill-suited to modeling count data, which are increasingly ubiquitous in big-data settings such as genomic sequencing data, user-ratings data, spatial incidence data, climate studies, and site visits. Existing classes of Poisson graphical models, which arise as the joint distributions that correspond to Poisson distributed node-conditional distributions, have a major drawback: they can only model negative conditional dependencies for reasons of normalizability given its infinite domain. In this paper, our objective is to modify the Poisson graphical model distribution so that it can capture a rich dependence structure between count-valued variables. We begin by discussing two strategies for truncating the Poisson distribution and show that only one of these leads to a valid joint distribution. While this model can accommodate a wider range of conditional dependencies, some limitations still remain. To address this, we investigate two additional novel variants of the Poisson distribution and their corresponding joint graphical model distributions. Our three novel approaches provide classes of Poisson-like graphical models that can capture both positive and negative conditional dependencies between count-valued variables. 
One can learn the graph structure of our models via penalized neighborhood selection, and we demonstrate the performance of our methods by learning simulated networks as well as a network from microRNA-sequencing data.

1 Introduction

Undirected graphical models, or Markov random fields (MRFs), are a popular class of statistical models for representing distributions over a large number of variables. These models have found wide applicability in many areas including genomics, neuroimaging, statistical physics, and spatial statistics. Popular instances of this class of models include Gaussian graphical models [1, 2, 3, 4], used for modeling continuous real-valued data, the Ising model [3, 5], used for modeling binary data, as well as multinomial graphical models [6] where each variable takes values in a small finite set. There has also been recent interest in non-parametric extensions of these models [7, 8, 9, 10]. None of these models, however, is well suited to modeling count data, where the variables take values in the set of all positive integers. Examples of such count data are increasingly ubiquitous in big-data settings, including high-throughput genomic sequencing data, spatial incidence data, climate studies, user-ratings data, term-document counts, site visits, and crime and disease incidence reports. In the univariate case, a popular choice for modeling count data is the Poisson distribution. Could we then model complex multivariate count data using some multivariate extension of the Poisson distribution? A line of work [11] has focused on log-linear models for count data in the context of contingency tables; however, the number of parameters in these models grows exponentially with the number of variables and hence, these are not appropriate for high-dimensional regimes with large numbers of variables.
Yet other approaches are based on indirect copula transforms [12], as well as multivariate Poisson distributions that do not have a closed, tractable form, and relying on limiting results [13]. Another important approach defines a multivariate Poisson distribution by modeling node variables as sums of independent Poisson variables [14, 15]. Since the sum of independent Poisson variables is Poisson as well, this construction yields Poisson marginal distributions. The resulting joint distribution, however, becomes intractable to characterize with even a few variables and moreover, can only model positive correlations, with further restrictions on the magnitude of these correlations. Other avenues for modeling multivariate count-data include hierarchical models commonly used in spatial statistics [16]. In a qualitatively different line of work, Besag [17] discusses a tractable and natural multivariate extension of the univariate Poisson distribution; while this work focused on the pairwise model case, Yang et al. [18, 19] extended this to the general graphical model setting. Their construction of a Poisson graphical model (PGM) is simple. Suppose all node-conditional distributions, the conditional distribution of a node conditioned on the rest of the nodes, are univariate Poisson. Then, there is a unique joint distribution consistent with these node-conditional distributions, and moreover this joint distribution is a graphical model distribution that factors according to a graph specified by the node-conditional distributions. While this graphical model seems like a good candidate to model multivariate count data, there is one major defect. For the density to be normalizable, the edge weights specifying the Poisson graphical model distribution have to be non-positive. This restriction implies that a Poisson graphical model distribution only models negative dependencies, or so called “competitive” relationships among variables. 
Thus, such a Poisson graphical model would have limited practical applicability in modeling more general multivariate count data [20, 21], with both positive and negative dependencies among the variables. To address this major drawback of non-positive conditional dependencies of the Poisson MRF, Kaiser and Cressie [20], Griffith [21] have suggested the use of the Winsorized Poisson distribution. This is the univariate distribution obtained by truncating the integer-valued Poisson random variable at a finite constant R. Specifically, they propose the use of this Winsorized Poisson as nodeconditional distributions, and assert that there exists a consistent joint distribution by following the construction of [17]. Interestingly, we will show that their result is incorrect and this approach can never lead to a consistent joint distribution in the vein of [17, 18, 19]. Thus, there currently does not exist a graphical model distribution for high-dimensional multivariate count data that does not suffer from severe deficiencies. In this paper, our objective is to specify a joint graphical model distribution over the set of non-negative integers that can capture rich dependence structures between variables. The major contributions of our paper are summarized as follows: We first consider truncated Poisson distributions and (1) show that the approach of [20] is NOT conducive to specifying a joint graphical model distribution; instead, (2) we propose a novel truncation approach that yields a proper MRF distribution, the Truncated PGM (TPGM). This model however, still has certain limitations on the types of variables and dependencies that may be modeled, and we thus consider more fundamental modifications to the univariate Poisson density’s base measure and sufficient statistics. 
(3) We will show that in order to have both positive and negative conditional dependencies, the requirements of normalizability are that the base measure of the Poisson density needs to scale quadratically for linear sufficient statistics. This leads to (4) a novel Quadratic PGM (QPGM) with linear sufficient statistics and its logical extension, (5) the Sublinear PGM (SPGM) with sub-linear sufficient statistics that permit sub-quadratic base measures. Our three novel approaches for the first time specify classes of joint graphical models for count data that permit rich dependence structures between variables. While the focus of this paper is model specification, we also illustrate how our models can be used to learn the network structure from iid samples of high-dimensional multivariate count data via neighborhood selection. We conclude our work by demonstrating our models on simulated networks and by learning a breast cancer microRNA expression network from count-valued next generation sequencing data.

2 Poisson Graphical Models & Truncation

Poisson graphical models were introduced by [17] for the pairwise case, where they termed these “Poisson auto-models”; [18, 19] provide a generalization to these models. Let X = (X_1, X_2, ..., X_p) be a p-dimensional random vector where the domain X of each X_s is {0, 1, 2, ...}; and let G = (V, E) be an undirected graph over p nodes corresponding to the p variables. The pairwise Poisson graphical model (PGM) distribution over X is then defined as

P(X) = exp{ Σ_{s∈V} (θ_s X_s − log(X_s!)) + Σ_{(s,t)∈E} θ_st X_s X_t − A(θ) }.   (1)

It can be seen that the node-conditional distributions for the above distribution are given by P(X_s | X_{V\s}) = exp{η_s X_s − log(X_s!) − exp(η_s)}, which is a univariate Poisson distribution with parameter λ = exp(η_s) = exp(θ_s + Σ_{t∈N(s)} θ_st X_t), and where N(s) is the neighborhood of node s according to graph G. As we have noted, there is a major drawback with this Poisson graphical model distribution.
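Before examining that drawback, note that sampling from (1) is straightforward by Gibbs steps on the Poisson node-conditionals. The minimal sketch below uses helper names of our own choosing and assumes non-positive edge weights so that the joint is well defined:

```python
import math, random

def sample_poisson(lam, rng):
    """Poisson sampler by inversion with sequential search (fine for small rates)."""
    u, k = rng.random(), 0
    p = math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k       # Poisson pmf recursion: p_k = p_{k-1} * lam / k
        cdf += p
    return k

def pgm_gibbs_sweep(theta, Theta, X, rng):
    """One Gibbs sweep for the pairwise PGM (1): resample each node from its
    Poisson node-conditional with rate exp(theta_s + sum_t Theta[s][t] * X_t).
    Theta[s][t] must be non-positive for the joint to be normalizable."""
    p = len(theta)
    for s in range(p):
        eta = theta[s] + sum(Theta[s][t] * X[t] for t in range(p) if t != s)
        X[s] = sample_poisson(math.exp(eta), rng)
    return X
```

Running many sweeps with negative edge weights yields the “competitive” behavior described below: a large count at one node suppresses the conditional rates of its neighbors.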
Note that the domain of parameters θ of the distribution in (1) is specified by the normalizability condition A(θ) < +∞, where

A(θ) := log Σ_{X∈X^p} exp{ Σ_{s∈V} (θ_s X_s − log(X_s!)) + Σ_{(s,t)∈E} θ_st X_s X_t }.

Proposition 1 (See [17]). Consider the Poisson graphical model distribution in (1). Then, for any parameter θ, A(θ) < +∞ only if the pairwise parameters are non-positive: θ_st ≤ 0 for (s, t) ∈ E.

The above proposition asserts that the Poisson graphical model in (1) only allows negative edge-weights, and consequently can only capture negative conditional relationships between variables. Thus, even though the Poisson graphical model is a natural extension of the univariate Poisson distribution, it entails a highly restrictive parameter space with severely limited applicability. The objective of this paper, then, is to arrive at a graphical model for count data that would allow relaxing these restrictive assumptions, and model both positively and negatively correlated variables.

2.1 Truncation, Winsorization, and the Poisson Distribution

The need for finiteness of A(θ) imposes a negativity constraint on θ because of the countably infinite domain of the random variables. A natural approach to address this would then be to truncate the domain of the Poisson random variables. In this section, we will investigate the two natural ways in which to do so and discuss their possible graphical model distributions.

2.1.1 A Natural Truncation Approach

Kaiser and Cressie [20] first introduced an approach to truncate the Poisson distribution in the context of graphical models. Suppose Z′ is Poisson with parameter λ. Then, one can define what they termed a Winsorized Poisson random variable Z as follows: Z = I(Z′ < R) Z′ + I(Z′ ≥ R) R, where I(A) is an indicator function, and R is a fixed positive constant denoting the truncation level. The probability mass function of this truncated Poisson variable, P(Z; λ, R), can then be written as

I(Z < R) (λ^Z / Z!) exp(−λ) + I(Z = R) (1 − Σ_{i=0}^{R−1} (λ^i / i!) exp(−λ)).

Now consider the use of this Winsorized Poisson distribution for node-conditional distributions, P(X_s | X_{V\s}):

I(X_s < R) (λ_s^{X_s} / X_s!) exp(−λ_s) + I(X_s = R) (1 − Σ_{k=0}^{R−1} (λ_s^k / k!) exp(−λ_s)),

where λ_s = exp(η_s) = exp(θ_s + Σ_{t∈N(s)} θ_st X_t). By the Taylor series expansion of the exponential function, this distribution can be expressed in a form reminiscent of the exponential family,

P(X_s | X_{V\s}) = exp{ η_s X_s − log(X_s!) + I(X_s = R) Ψ(η_s) − exp(η_s) },   (2)

where Ψ(η_s) is defined as log{ (R!/exp(Rη_s)) Σ_{k=R}^{∞} exp(kη_s)/k! }. We now have the machinery to describe the development in [20] of a Winsorized Poisson graphical model. Specifically, Kaiser and Cressie [20] assert in a Proposition of their paper that there is a valid joint distribution consistent with these Winsorized Poisson node-conditional distributions above. However, in the following theorem, we prove that such a joint distribution can never exist.

Theorem 1. Suppose X = (X_1, ..., X_p) is a p-dimensional random vector with domain {0, 1, ..., R}^p where R > 3. Then there is no joint distribution over X such that the corresponding node-conditional distributions P(X_s | X_{V\s}), of a node conditioned on the rest of the nodes, have the form specified as

P(X_s | X_{V\s}) ∝ exp{ E(X_{V\s}) X_s − log(X_s!) + I(X_s = R) Ψ(E(X_{V\s})) },

where E(X_{V\s}), the canonical exponential family parameter, can be an arbitrary function.

Theorem 1 thus shows that we cannot just substitute the Winsorized Poisson distribution in the construction of [17, 18, 19] to obtain a Winsorized variant of Poisson graphical models.

2.1.2 A New Approach to Truncation

It is instructive to study the probability mass function of the univariate Winsorized Poisson distribution in (2). The “remnant” probability mass of the Poisson distribution for the cases where X > R was all moved to X = R. In the process, it is no longer an exponential family, a property that is crucial for compatibility with the construction in [17, 18, 19].
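For concreteness, the Winsorized pmf and its piled-up remnant mass at R can be checked numerically; the function name below is our own:

```python
import math

def winsorized_poisson_pmf(z, lam, R):
    """P(Z = z) for Z = min(Z', R) with Z' ~ Poisson(lam): all Poisson mass
    above the truncation level R is moved onto the single point z = R."""
    if z < 0 or z > R:
        return 0.0
    if z < R:
        return lam ** z / math.factorial(z) * math.exp(-lam)
    # remnant tail mass: 1 - P(Z' <= R - 1)
    return 1.0 - math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(R))
```

The mass at z = R strictly exceeds the ordinary Poisson pmf at R whenever the tail beyond R is non-empty, which is precisely the spike that breaks the exponential-family form.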
Could we then derive a truncated Poisson distribution that still belongs to the exponential family? It can be seen that the following distribution over a truncated Poisson variable Z ∈ X = {0, 1, ..., R} fits the bill perfectly:

P(Z) = exp{θZ − log(Z!)} / Σ_{k∈X} exp{θk − log(k!)}.

The random variable Z here is another natural truncated Poisson variant, where the “remnant” probability mass for the cases where X > R was distributed to all the remaining events X ≤ R. It can be seen that this distribution also belongs to the exponential family. A natural strategy would then be to use this distribution as the node-conditional distributions in the construction of [17, 18]:

P(X_s | X_{V\s}) = exp{ (θ_s + Σ_{t∈N(s)} θ_st X_t) X_s − log(X_s!) } / Σ_{k∈X} exp{ (θ_s + Σ_{t∈N(s)} θ_st X_t) k − log(k!) }.   (3)

Theorem 2. Let X = (X_1, X_2, ..., X_p) be a p-dimensional random vector, where each variable X_s for s ∈ V takes values in the truncated positive integer set {0, 1, ..., R}, where R is a fixed positive constant. Suppose its node-conditional distributions are specified as in (3), where the node-neighborhoods are as specified by a graph G. Then, there exists a unique joint distribution that is consistent with these node-conditional distributions, and moreover this distribution belongs to the graphical model represented by G, with the form:

P(X) := exp{ Σ_{s∈V} (θ_s X_s − log(X_s!)) + Σ_{(s,t)∈E} θ_st X_s X_t − A(θ) },

where A(θ) is the normalization constant.

We call this distribution the Truncated Poisson graphical model (TPGM) distribution. Note that it is distinct from the original Poisson distribution (1); in particular, its normalization constant involves a summation over finitely many terms. Thus, no restrictions are imposed on the parameters for the normalizability of the distribution. Unlike the original Poisson graphical model, the TPGM can model both positive and negative dependencies among its variables. There are, however, some drawbacks to this graphical model distribution.
First, the domain of the variables is bounded a priori by the distribution specification, so that it is not broadly applicable to arbitrary, and possibly infinite, count-valued data. Second, problems arise when the random variables take on large count values close to R. In particular, by examining (3), one can see that when X_t is large, the mass over X_s values gets pushed towards R; thus, this truncated version is not always close to that of the original Poisson density. Therefore, as the truncation value R increases, the possible values that the parameters θ can take become increasingly negative or close to zero to prevent all random variables from always taking large count values at the same time. This can be seen in the limit: if we take R → ∞, we arrive at the original PGM and its negativity constraints. In summary, the TPGM approach offers some trade-offs between the value of R (it more closely follows the Poisson density when R is large) and the types of dependencies permitted.

3 A New Class of Poisson Variants and Their Graphical Model Distributions

As discussed in the previous section, taking a Poisson random variable and truncating it may be a natural approach but does not lead to a valid multivariate graphical model extension, or does so with some caveats. Accordingly, in this section, we investigate the possibility of modifying the Poisson distribution more fundamentally, by modifying its sufficient statistic and base measure.

Let us first briefly review the derivation of a Poisson graphical model as the graphical model extension of a univariate exponential family distribution from [17, 18, 19]. Consider a general univariate exponential family distribution, for a random variable Z: P(Z) = exp(θB(Z) − C(Z) − D(θ)), where B(Z) is the exponential family sufficient statistic, θ ∈ R is the parameter, C(Z) is the base measure, and D(θ) is the log-partition function.
Suppose the node-conditional distributions are all specified by the above exponential family,

P(X_s | X_{V\s}) = exp{ E(X_{V\s}) B(X_s) − C(X_s) − D̄(X_{V\s}) },   (4)

where the canonical parameter of the exponential family is some function E(·) of the rest of the variables X_{V\s} (and hence so is the log-normalization constant D̄(·)). Further, suppose the corresponding joint distribution factors according to the graph G, with the factors over cliques of size at most k. Then, Proposition 2 in [18] shows that there exists a unique joint distribution corresponding to the node-conditional distributions in (4). With clique factors of size at most two, this joint distribution takes the following form:

P(X) = exp{ Σ_{s∈V} θ_s B(X_s) + Σ_{(s,t)∈E} θ_st B(X_s) B(X_t) − Σ_{s∈V} C(X_s) − A(θ) }.

Note that although the log-partition function A(θ) is usually computationally intractable, the log-partition function D̄(·) of its node-conditional distribution (4) is still tractable, which allows consistent graph structure recovery [18]. Also note that the original Poisson graphical model (1) discussed in Section 2 can be derived from this construction with sufficient statistic B(X) = X and base measure C(X) = log(X!).

3.1 A Quadratic Poisson Graphical Model

As noted in Proposition 1, the normalizability of this Poisson graphical model distribution, however, requires that the pairwise parameters be negative. A closer look at the proof of Proposition 1 shows that a key driver of the result is that the base measure terms Σ_{s∈V} C(X_s) = Σ_{s∈V} log(X_s!) scale more slowly than the quadratic pairwise terms X_s X_t. Accordingly, we consider the following general distribution over count-valued variables:

P(Z) = exp(θZ − C(Z) − D(θ)),   (5)

which has the same sufficient statistics as the Poisson, but a more general base measure C(Z), for some function C(·).
The following theorem shows that for normalizability of the resulting graphical model distribution with possibly positive edge-parameters, the base measure cannot be sub-quadratic:

Theorem 3. Suppose X = (X_1, ..., X_p) is a count-valued random vector, with joint distribution given by the graphical model extension of the univariate distribution in (5) (which follows the construction of [17, 18, 19]). Then, if the distribution is normalizable so that A(θ) < ∞ for θ ≰ 0, it necessarily holds that C(Z) = Ω(Z^2).

The previous theorem thus suggests using the “Gaussian-esque” quadratic base measure C(Z) = Z^2, so that we would obtain the following distribution over count-valued vectors,

P(X) = exp{ Σ_{s∈V} θ_s X_s + Σ_{(s,t)∈E} θ_st X_s X_t − c Σ_{s∈V} X_s^2 − A(θ) },

for some fixed positive constant c > 0. We consider the following generalization of the above distribution:

P(X) = exp{ Σ_{s∈V} θ_s X_s + Σ_{(s,t)∈E} θ_st X_s X_t + Σ_{s∈V} θ_ss X_s^2 − A(θ) }.   (6)

We call this distribution the Quadratic Poisson Graphical Model (QPGM). The following proposition shows that the QPGM is normalizable while permitting both positive and negative edge-parameters.

Proposition 2. Consider the distribution in (6). Suppose we collate the quadratic term parameters into a p × p matrix Θ. Then the distribution is normalizable provided the following condition holds: there exists a positive constant c_θ such that for all X ∈ W^p, X^T Θ X ≤ −c_θ ‖X‖_2^2.

The condition in the proposition would be satisfied provided that the pairwise parameters are pointwise negative: Θ < 0, similar to the original Poisson graphical model. Alternatively, it is also sufficient for the pairwise parameter matrix to be negative-definite: Θ ≺ 0, which does allow for positive and negative dependencies, as in the Gaussian distribution. A possible drawback with this distribution is that due to the quadratic base measure, the QPGM has a Gaussian-esque thin tail.
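The two sufficient conditions of Proposition 2 are easy to check numerically for a small Θ; the sketch below (helper names are our own) checks negative definiteness via the pivots of Gaussian elimination on −Θ:

```python
def pointwise_negative(Theta):
    """First sufficient condition: all entries of Theta strictly negative."""
    return all(v < 0 for row in Theta for v in row)

def negative_definite(Theta, tol=1e-12):
    """Second sufficient condition: Theta ≺ 0 iff -Theta is positive definite,
    i.e. all pivots of Gaussian elimination on the symmetric matrix -Theta
    are positive (equivalent to Sylvester's criterion)."""
    n = len(Theta)
    M = [[-Theta[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] <= tol:
            return False
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return True
```

The contrast between the two conditions is the point: a matrix with a positive edge weight can still satisfy Θ ≺ 0, which is exactly the extra flexibility the QPGM gains over the original PGM.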
Even though the domains of the Gaussian and the QPGM are distinct, their densities have similar behaviors and shapes as long as θ_s + Σ_{t∈N(s)} θ_st X_t ≥ 0. Indeed, the Gaussian log-partition function serves as a variational upper bound for the QPGM. Specifically, under the restriction that θ_ss < 0, we arrive at the following upper bound:

D(θ; X_{V\s}) = log Σ_{X_s∈W} exp{ η_s X_s + θ_ss X_s^2 } ≤ log ∫_{X_s∈R} exp{ η_s X_s + θ_ss X_s^2 } dX_s = D_Gauss(θ; X_{V\s}) = (1/2) log 2π − (1/2) log(−2θ_ss) − (1/(4θ_ss)) (θ_s + Σ_{t∈N(s)} θ_st X_t)^2,

by relating to the log-partition function of a node-conditional Gaussian distribution. Thus, node-wise regressions according to the QPGM via the above variational upper bound on the partition function would behave similarly to those of a Gaussian graphical model.

3.2 A Sub-Linear Poisson Graphical Model

From the previous section, we have learned that so long as we have linear sufficient statistics, B(X) = X, we must have a base measure that scales at least quadratically, C(Z) = Ω(Z^2), for a Poisson-based graphical model (i) to permit both positive and negative conditional dependencies and (ii) to ensure normalizability. Such a quadratic base measure, however, results in a Gaussian-esque thin tail, while we would like to specify a distribution with possibly heavier tails than those of the QPGM. It thus follows that we would need to control the linear Poisson sufficient statistic B(X) = X itself. Accordingly, we consider the following univariate distribution over count-valued variables:

P(Z) = exp(θ B(Z; R_0, R) − log Z! − D(θ, R_0, R)),   (7)

which has the same base measure C(Z) = log Z! as the Poisson, but with the following sub-linear sufficient statistic:

B(x; R_0, R) =
  x                                                        if x ≤ R_0,
  −x^2/(2(R − R_0)) + Rx/(R − R_0) − R_0^2/(2(R − R_0))    if R_0 < x ≤ R,
  (R + R_0)/2                                              if x ≥ R.

We depict this sublinear statistic in Figure 3 in the appendix; up to R_0, B(x) increases linearly, but after R_0 its slope decreases linearly and becomes zero at R. The following theorem shows the normalizability of the SPGM:

Theorem 4.
Suppose X = (X_1, ..., X_p) is a count-valued random vector, with joint distribution given by the graphical model extension of the univariate distribution in (7) (following the construction of [17, 18, 19]):

P(X) = exp{ Σ_{s∈V} θ_s B(X_s; R_0, R) + Σ_{(s,t)∈E} θ_st B(X_s; R_0, R) B(X_t; R_0, R) − Σ_{s∈V} log(X_s!) − A(θ, R_0, R) }.

This distribution is normalizable, so that A(θ) < ∞, for all pairwise parameters θ_st ∈ R, (s, t) ∈ E.

On comparing with the QPGM, the SPGM has two distinct advantages: (1) it has heavier tails with milder base measures, as seen in its motivation, and (2) it allows a broader set of feasible pairwise parameters (in fact, all real values), as shown in Theorem 4. The log-partition function D(θ, R_0, R) of the node-conditional SPGM involves a summation over infinitely many terms, and hence usually does not have a closed form. The log-partition function of the traditional univariate Poisson distribution, however, can serve as a variational upper bound:

Proposition 3. Consider the node-wise conditional distributions in (7). If θ ≥ 0, we obtain the following upper bound: D(θ, R_0, R) ≤ D_Pois(θ) = exp(θ).
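The piecewise statistic B(x; R_0, R) from (7) translates directly into code; the function name below is our own:

```python
def spgm_stat(x, R0, R):
    """The sub-linear SPGM sufficient statistic B(x; R0, R): the identity up
    to R0, a downward-curving quadratic on (R0, R] whose slope reaches zero
    at R, and the constant (R + R0)/2 beyond R."""
    if x <= R0:
        return float(x)
    if x <= R:
        return (-x * x / 2.0 + R * x - R0 * R0 / 2.0) / (R - R0)
    return (R + R0) / 2.0
```

A quick calculation confirms the statistic is continuous at both knots: the quadratic segment equals R_0 at x = R_0 and (R + R_0)/2 at x = R, matching the adjacent pieces.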
4 Numerical Experiments

While the focus of this paper is model specification, we can learn our models from iid samples of count-valued multivariate vectors using neighborhood selection approaches as suggested in [1, 5, 6, 18]. Specifically, we maximize the ℓ1-penalized node-conditional likelihoods for our TPGM, QPGM and SPGM models using proximal gradient ascent.

[Figure 1: six ROC panels (TPGM: Hub; Karlis: Hub; Karlis: Scale-free; for n = 200, p = 50 and n = 50, p = 100), plotting True Positive Rate against False Positive Rate for SPGM, TPGM, Glasso, NPN-Copula, and NPN-Skeptic.]

Figure 1: ROC curves for recovering the true network structure of count-data generated by the TPGM distribution or by [15] (sums of independent Poissons method) for both standard and high-dimensional regimes. Our TPGM and SPGM M-estimators are compared to the graphical lasso [4], the non-paranormal copula-based method [7] and the non-paranormal SKEPTIC estimator [10].
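A minimal sketch of one node's ℓ1-penalized Poisson regression by proximal gradient (a gradient step on the Poisson loss followed by soft-thresholding of the off-node weights). This illustrates the neighborhood-selection idea, not the authors' implementation; all names are ours:

```python
import math

def soft_threshold(v, t):
    return math.copysign(max(abs(v) - t, 0.0), v)

def poisson_neighborhood_lasso(X, s, lam, step=0.01, iters=500):
    """l1-penalized node-conditional Poisson regression for node s:
    minimize (1/n) * sum_i [exp(eta_i) - X[i][s] * eta_i] + lam * ||w||_1,
    with eta_i = b + sum_{t != s} w[t] * X[i][t]."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(iters):
        gw, gb = [0.0] * p, 0.0
        for row in X:
            eta = b + sum(w[t] * row[t] for t in range(p) if t != s)
            r = (math.exp(eta) - row[s]) / n   # gradient of the Poisson loss
            gb += r
            for t in range(p):
                if t != s:
                    gw[t] += r * row[t]
        b -= step * gb                          # plain step on the intercept
        for t in range(p):
            if t != s:                          # proximal (soft-threshold) step
                w[t] = soft_threshold(w[t] - step * gw[t], step * lam)
    return b, w   # nonzero w[t] are the selected neighbors of node s
```

Running this regression once per node and collecting the nonzero weights yields an estimated edge set, in the spirit of the neighborhood-selection procedures cited above.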
Also, as our models are constructed in the framework of [18, 19], we expect extensions of their sparsistency analysis to confirm that the network structure of our model can indeed be learned from iid data; due to space limitations, this is left for future work.

Simulation Studies. We evaluate the comparative performance of our TPGM and SPGM methods for recovering the true network from multivariate count data. Data with n = 200 samples and p = 50 variables, or in the high-dimensional regime with n = 50 samples and p = 100 variables, is generated via the TPGM distribution using Gibbs sampling or via the sums-of-independent-Poissons method of [15]. For the former, edges were generated with both positive and negative weights, while for the latter, only edges with positive weights can be generated. As we expect the SPGM to be sparsistent for data generated from the SPGM distribution, following the work of [18, 19], we have chosen to present results for data generated from other models. Two network structures that are commonly used throughout genomics are considered: the hub and scale-free graph structures. We compare the performance of our TPGM and SPGM methods, with R set to the maximum count value, to Gaussian graphical models [4], the non-paranormal [7], and the non-paranormal SKEPTIC [10]. In Figure 1, ROC curves, computed by varying the regularization parameter and averaged over 50 replicates, are presented for each scenario. Both TPGM and SPGM show superior performance on count-valued data relative to Gaussian-based methods. As expected, the TPGM method has the best results when data is generated according to its distribution. Additionally, TPGM shows some advantages in high-dimensional settings. This likely results from a facet of its node-conditional distribution, which places larger mass on strongly dependent count values that are close to R. Thus, the TPGM method may be better able to infer edges in highly connected networks, such as those considered.
Additionally, all methods compared outperform the original Poisson graphical model estimator given in [18] (results not shown), as this method can only recover edges with negative weights.

Case Study: Breast Cancer microRNA Networks. We demonstrate the advantages of our graphical models for count-valued data by learning a microRNA (miRNA) expression network from next-generation sequencing data. This data consists of counts of sequencing reads mapped back to a reference genome and is replacing microarrays, for which GGMs are a popular tool, as the preferred measure of gene expression [22]. Level III data was obtained from the Cancer Genome Atlas (TCGA) [23] and processed according to techniques described in [24]; this data consists of n = 544 subjects and p = 262 miRNAs. Note that [18, 24] used this same data set to demonstrate network approaches for count data, and thus we use the same data set so that the results of our novel methods may be compared to those of existing approaches. Networks were learned from this data using the original Poisson graphical model, Gaussian graphical models, our novel TPGM approach with R = 11 (the maximum count), and our novel SPGM approach with R = 11 and R0 = 5. Stability selection [25] was used to estimate the sparsity of the networks in a data-driven manner. Figure 2 depicts the inferred networks for our TPGM and SPGM methods as well as comparative adjacency matrices to illustrate the differences between our SPGM method and other approaches.

Figure 2: Breast cancer miRNA networks. Network inferred by (top left) TPGM with R = 11 and by (top right) SPGM with R = 11 and R0 = 5. The bottom row presents adjacency matrices of inferred networks, with that of SPGM occupying the lower-triangular portion and that of (left) PGM, (middle) TPGM with R = 11, and (right) the graphical lasso occupying the upper-triangular portion.
Notice that SPGM and TPGM find similar network structures, but TPGM seems to find more hub miRNAs. This is consistent with the behavior of the TPGM distribution when strongly correlated counts have values close to R. The original Poisson graphical model, on the other hand, misses much of the structure learned by the other methods and instead finds only 14 miRNAs that have major conditionally negative relationships. As most miRNAs work in groups to regulate gene expression, this result is expected and illustrates a fundamental flaw of the PGM approach. Compared with Gaussian graphical models, our novel methods for count-valued data find many more edges and biologically important hub miRNAs. Two of these, mir-375 and mir-10b, found by both TPGM and SPGM but not by the GGM, are known to be key players in breast cancer [26, 27]. Additionally, our TPGM and SPGM methods find a major clique consisting of miRNAs on chromosome 19, indicating that this miRNA cluster may be functionally associated with breast cancer.

Acknowledgments

The authors acknowledge support from the following sources: ARO via W911NF-12-1-0390 and NSF via IIS-1149803 and DMS-1264033 to E.Y. and P.R.; the Ken Kennedy Institute for Information Technology at Rice to G.A. and Z.L.; NSF DMS-1264058 and DMS-1209017 to G.A.; and NSF DMS-1263932 to Z.L.

References

[1] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[2] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19, 2007.
[3] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9:485–516, 2008.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the lasso. Biostatistics, 9(3):432–441, 2007.
[5] P. Ravikumar, M. J. Wainwright, and J. Lafferty.
High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[6] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. In Inter. Conf. on AI and Statistics (AISTATS), 14, 2011.
[7] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295–2328, 2009.
[8] A. Dobra and A. Lenkoski. Copula Gaussian graphical models and their application to modeling functional disability data. The Annals of Applied Statistics, 5(2A):969–993, 2011.
[9] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High dimensional semiparametric Gaussian copula graphical models. arXiv preprint arXiv:1202.2169, 2012.
[10] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. The nonparanormal SKEPTIC. arXiv preprint arXiv:1206.6488, 2012.
[11] S. L. Lauritzen. Graphical Models, volume 17. Oxford University Press, USA, 1996.
[12] I. Yahav and G. Shmueli. An elegant method for generating multivariate Poisson random variables. arXiv preprint arXiv:0710.5670, 2007.
[13] A. S. Krishnamoorthy. Multivariate binomial and Poisson distributions. Sankhyā: The Indian Journal of Statistics (1933–1960), 11(2):117–124, 1951.
[14] P. Holgate. Estimation for the bivariate Poisson distribution. Biometrika, 51(1-2):241–287, 1964.
[15] D. Karlis. An EM algorithm for multivariate Poisson distribution and related models. Journal of Applied Statistics, 30(1):63–77, 2003.
[16] N. A. C. Cressie. Statistics for Spatial Data. Wiley Series in Probability and Mathematical Statistics, 1991.
[17] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192–236, 1974.
[18] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In Neur. Info.
Proc. Sys., 25, 2012.
[19] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. arXiv preprint arXiv:1301.4183, 2013.
[20] M. S. Kaiser and N. Cressie. Modeling Poisson variables with positive spatial dependence. Statistics & Probability Letters, 35(4):423–432, 1997.
[21] D. A. Griffith. A spatial filtering specification for the auto-Poisson model. Statistics & Probability Letters, 58(3):245–251, 2002.
[22] J. C. Marioni, C. E. Mason, S. M. Mane, M. Stephens, and Y. Gilad. RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Research, 18(9):1509–1517, 2008.
[23] Cancer Genome Atlas Research Network. Comprehensive molecular portraits of human breast tumours. Nature, 490(7418):61–70, 2012.
[24] G. I. Allen and Z. Liu. A log-linear graphical model for inferring genetic networks from high-throughput sequencing data. IEEE International Conference on Bioinformatics and Biomedicine, 2012.
[25] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. arXiv preprint arXiv:1006.3316, 2010.
[26] L. Ma, F. Reinhardt, E. Pan, J. Soutschek, B. Bhat, E. G. Marcusson, J. Teruya-Feldstein, G. W. Bell, and R. A. Weinberg. Therapeutic silencing of miR-10b inhibits metastasis in a mouse mammary tumor model. Nature Biotechnology, 28(4):341–347, 2010.
[27] P. de Souza Rocha Simonini, A. Breiling, N. Gupta, M. Malekpour, M. Youns, R. Omranipour, F. Malekpour, S. Volinia, C. M. Croce, H. Najmabadi, et al. Epigenetically deregulated microRNA-375 is involved in a positive feedback loop with estrogen receptor α in breast cancer cells. Cancer Research, 70(22):9175–9184, 2010.
On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations

Tamir Hazan, University of Haifa; Subhransu Maji, TTI Chicago; Tommi Jaakkola, CSAIL, MIT

Abstract

In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs distributions by introducing low dimensional perturbations and solving the corresponding MAP assignments. Our approach also leads to new ways of deriving lower bounds on partition functions. We demonstrate empirically that our method excels in the typical "high signal, high coupling" regime. This setting results in ragged energy landscapes that are challenging for alternative approaches to sampling and/or lower bounds.

1 Introduction

Inference in complex models drives much of the research in machine learning applications, from computer vision and natural language processing to computational biology. Examples include scene understanding, parsing, and protein design. The inference problem in such cases involves finding likely structures, whether objects, parses, or molecular arrangements. Each structure corresponds to an assignment of values to random variables, and the likelihood of an assignment is based on defining potential functions in a Gibbs distribution. Usually it is feasible to find only the most likely, or maximum a-posteriori (MAP), assignment (structure), rather than to sample from the full Gibbs distribution. Substantial effort has gone into developing algorithms for recovering MAP assignments, either based on specific structural restrictions such as super-modularity [2] or by devising cutting-plane methods based on linear programming relaxations [19, 24]. However, MAP inference is limited when there are other likely assignments. Our work seeks to leverage MAP inference so as to sample efficiently from the full Gibbs distribution.
Specifically, we aim to draw either approximate or unbiased samples from Gibbs distributions by introducing low dimensional perturbations in the potential functions and solving the corresponding MAP assignments. Connections between random MAP perturbations and Gibbs distributions have been explored before. Recently, [17, 21] defined probability models that are based on low dimensional perturbations and empirically tied them to Gibbs distributions. [5] augmented these results by providing bounds on the partition function in terms of random MAP perturbations. In this work we build on these results to construct an efficient sampler for the Gibbs distribution, also deriving new lower bounds on the partition function. Our approach excels in regimes where there are several, but not exponentially many, prominent assignments. In such ragged energy landscapes, classical methods for the Gibbs distribution, such as Gibbs sampling and other Markov chain Monte Carlo methods, remain computationally expensive [3, 25].

2 Background

Statistical inference problems involve reasoning about the states of discrete variables whose configurations (assignments of values) specify the discrete structures of interest. We assume that the models are parameterized by real-valued potentials θ(x) = θ(x_1, ..., x_n) < ∞ defined over a discrete product space X = X_1 × · · · × X_n. The effective domain is implicitly defined through θ(x) via exclusions θ(x) = −∞ whenever x ∉ dom(θ). The real-valued potential functions are mapped to the probability scale via the Gibbs distribution:

p(x_1, \ldots, x_n) = \frac{1}{Z} \exp(\theta(x_1, \ldots, x_n)), \quad \text{where} \quad Z = \sum_{x_1, \ldots, x_n} \exp(\theta(x_1, \ldots, x_n)). \qquad (1)

The normalization constant Z is called the partition function. The feasibility of using the distribution for prediction, including sampling from it, is inherently tied to the ability to evaluate the partition function, i.e., the ability to sum over the discrete structures being modeled. In general, such counting problems are often hard, in #P.
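For intuition, the Gibbs distribution and its partition function in Eq. (1) can be evaluated by brute-force enumeration on toy models. A minimal sketch (function names and the two-spin example are our own):

```python
import itertools
import math

def log_partition(theta, domains):
    """Brute-force log Z = log sum_x exp(theta(x)) over the product space."""
    return math.log(sum(math.exp(theta(x))
                        for x in itertools.product(*domains)))

def gibbs_prob(theta, domains, x):
    """p(x) = exp(theta(x) - log Z), as in Eq. (1)."""
    return math.exp(theta(x) - log_partition(theta, domains))

# Tiny two-spin example with an attractive coupling.
theta = lambda x: 0.5 * x[0] + 0.5 * x[1] + x[0] * x[1]
doms = [(-1, 1), (-1, 1)]
total = sum(gibbs_prob(theta, doms, x) for x in itertools.product(*doms))
```

The exponential cost of this enumeration in the number of variables is exactly the #P-hardness the paper sets out to circumvent.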
A slightly easier problem is finding the most likely assignment of values to variables, also known as maximum a-posteriori (MAP) prediction:

\text{(MAP)} \qquad \arg\max_{x_1, \ldots, x_n} \theta(x_1, \ldots, x_n) \qquad (2)

Recent advances in optimization theory have been translated into successful algorithms for solving such MAP problems in many cases of practical interest. Although the MAP prediction problem is still NP-hard in general, it is often simpler than sampling from the Gibbs distribution. Our approach is based on representations of the Gibbs distribution and the partition function using extreme value statistics of linearly perturbed potential functions. Let {γ(x)}_{x∈X} be a collection of random variables with zero mean, and consider random potential functions of the form θ(x) + γ(x). Analytic expressions for the statistics of a randomized MAP predictor, x̂ ∈ argmax_x {θ(x) + γ(x)}, can be derived for general discrete sets whenever independent and identically distributed (i.i.d.) random perturbations are applied to every assignment x ∈ X. Specifically, when the random perturbations follow the Gumbel distribution (cf. [12]), we obtain the following result.

Theorem 1. ([4], see also [17, 5]) Let {γ(x)}_{x∈X} be a collection of i.i.d. random variables, each following the Gumbel distribution with zero mean, whose cumulative distribution function is F(t) = exp(−exp(−(t + c))), where c is the Euler constant. Then

\log Z = \mathbb{E}_\gamma\Big[ \max_{x \in X} \{\theta(x) + \gamma(x)\} \Big], \qquad \frac{1}{Z} \exp(\theta(\hat{x})) = \mathbb{P}_\gamma\Big[ \hat{x} \in \arg\max_{x \in X} \{\theta(x) + \gamma(x)\} \Big].

The max-stability of the Gumbel distribution provides a straightforward approach to generating unbiased samples from the Gibbs distribution, as well as to approximating the partition function by a sample mean of random MAP perturbations. Assume we sample j = 1, ..., m independent predictions max_x{θ(x) + γ_j(x)}; then every maximal argument is an unbiased sample from the Gibbs distribution.
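Theorem 1 is the "Gumbel-max" identity, and both of its claims can be checked by Monte Carlo on a toy model with full perturbations (one Gumbel variable per configuration). A hedged sketch; the toy potentials and all names are our own. Zero-mean Gumbel noise is obtained by shifting a standard Gumbel draw by the Euler constant:

```python
import math
import random
from collections import Counter

random.seed(0)
EULER = 0.5772156649015329

def gumbel_zero_mean():
    """Gumbel sample with zero mean: -log(-log U) minus Euler's constant."""
    return -math.log(-math.log(random.random())) - EULER

theta = {0: 1.0, 1: 0.0, 2: -0.5}          # potentials over three states
log_z = math.log(sum(math.exp(v) for v in theta.values()))

m = 100_000
maxima, counts = 0.0, Counter()
for _ in range(m):
    noisy = {x: v + gumbel_zero_mean() for x, v in theta.items()}
    xhat = max(noisy, key=noisy.get)        # randomized MAP prediction
    counts[xhat] += 1
    maxima += noisy[xhat]

est_log_z = maxima / m                      # Theorem 1: E[max] = log Z
est_p0 = counts[0] / m                      # Theorem 1: argmax ~ Gibbs
```

Both estimates concentrate around their targets at the 1/sqrt(m) rate, consistent with the variance π²/6 of the Gumbel maximum discussed next.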
Moreover, the randomized MAP predictions max_x{θ(x) + γ_j(x)} are independent and follow the Gumbel distribution, whose variance is π²/6. Therefore Chebyshev's inequality dictates that for every ϵ and m,

\Pr_\gamma\Big[ \Big| \frac{1}{m} \sum_{j=1}^{m} \max_x \{\theta(x) + \gamma_j(x)\} - \log Z \Big| \ge \epsilon \Big] \le \frac{\pi^2}{6 m \epsilon^2} \qquad (3)

In general, each x = (x_1, ..., x_n) represents an assignment to n variables. Theorem 1 suggests introducing an independent perturbation γ(x) for each such n-dimensional assignment x ∈ X. The complexity of inference and learning in this setting would be exponential in n. In our work we propose to investigate low dimensional random perturbations as the main tool for efficient (approximate) sampling from the Gibbs distribution.

3 Probable approximate samples from the Gibbs distribution

Sampling from the Gibbs distribution is inherently tied to estimating the partition function. Markov properties that simplify the distribution also decompose the computation of the partition function. For example, assume a graphical model with potential functions associated with subsets of variables α ⊂ {1, ..., n}, so that θ(x) = Σ_{α∈A} θ_α(x_α). Assume that the subsets are disjoint except for their common intersection β = ∩_{α∈A} α. This separation implies that the partition function can be computed in lower dimensional pieces:

Z = \sum_{x_\beta} \prod_{\alpha \in A} \sum_{x_\alpha \setminus x_\beta} \exp(\theta_\alpha(x_\alpha))

As a result, the computation is exponential only in the size of the subsets α ∈ A. Thus, we can also estimate the partition function with lower dimensional random MAP perturbations, E_γ[max_{x_α\x_β}{θ_α(x_α) + γ_α(x_α)}]. The random perturbations are now required only for each assignment of values to the variables within the subsets α ∈ A, rather than for the set of all variables. We approximate such partition functions with low dimensional perturbations and their averages. The overall computation is cast as a single MAP problem using an extended representation of potential functions that replicates variables.

Lemma 1.
Let A be a collection of subsets of variables that are separated by their joint intersection β = ∩_{α∈A} α. We create multiple copies of x_α, namely x̂_α = (x_{α,j_α})_{j_α=1,...,m_α}, and define the extended potential function \hat\theta_\alpha(\hat x_\alpha) = \sum_{j_\alpha=1}^{m_\alpha} \theta_\alpha(x_{\alpha,j_\alpha})/m_\alpha. We also define the extended perturbation model \hat\gamma_\alpha(\hat x_\alpha) = \sum_{j_\alpha=1}^{m_\alpha} \gamma_{\alpha,j_\alpha}(x_{\alpha,j_\alpha})/m_\alpha, where each γ_{α,j_α}(x_{α,j_α}) is independent and distributed according to the Gumbel distribution with zero mean. Then, for every x_β, with probability at least 1 − \sum_{\alpha \in A} \frac{\pi^2}{6 m_\alpha \epsilon^2},

\Big| \max_{\hat x \setminus x_\beta} \Big\{ \sum_{\alpha \in A} \hat\theta_\alpha(\hat x_\alpha) + \sum_{\alpha \in A} \hat\gamma_\alpha(\hat x_\alpha) \Big\} - \sum_{\alpha \in A} \log \sum_{x_\alpha \setminus x_\beta} \exp(\theta_\alpha(x_\alpha)) \Big| \le \epsilon |A|

Proof: Equation (3) implies that for every x_β, with probability at least 1 − π²/(6 m_α ϵ²),

\Big| \frac{1}{m_\alpha} \sum_{j_\alpha=1}^{m_\alpha} \max_{x_\alpha \setminus x_\beta} \{\theta_\alpha(x_\alpha) + \gamma_{\alpha,j_\alpha}(x_\alpha)\} - \log \sum_{x_\alpha \setminus x_\beta} \exp(\theta_\alpha(x_\alpha)) \Big| \le \epsilon.

To compute the sampled average with a single max-operation we introduce the multiple copies x̂_α = (x_{α,j_α})_{j_α=1,...,m_α}, so that \sum_{j_\alpha=1}^{m_\alpha} \max_{x_\alpha \setminus x_\beta} \{\theta_\alpha(x_\alpha) + \gamma_{\alpha,j_\alpha}(x_\alpha)\} = \max_{\hat x_\alpha \setminus x_\beta} \sum_{j_\alpha=1}^{m_\alpha} \{\theta_\alpha(x_{\alpha,j_\alpha}) + \gamma_{\alpha,j_\alpha}(x_{\alpha,j_\alpha})\}. By the union bound, this holds for every α ∈ A simultaneously with probability at least 1 − \sum_{\alpha \in A} \pi^2/(6 m_\alpha \epsilon^2). Since x_β is fixed, for every α ∈ A the maximizations are performed independently across subsets in x̂ \ x_β, where x̂ is the concatenation of all x̂_α, and

\sum_{\alpha \in A} \max_{\hat x_\alpha \setminus x_\beta} \sum_{j_\alpha=1}^{m_\alpha} \big\{ \theta_\alpha(x_{\alpha,j_\alpha}) + \gamma_{\alpha,j_\alpha}(x_{\alpha,j_\alpha}) \big\} = \max_{\hat x \setminus x_\beta} \sum_{\alpha \in A} \sum_{j_\alpha=1}^{m_\alpha} \big\{ \theta_\alpha(x_{\alpha,j_\alpha}) + \gamma_{\alpha,j_\alpha}(x_{\alpha,j_\alpha}) \big\}.

The proof then follows from the triangle inequality. □

Whenever the graphical model has no cycles, we can iteratively apply the separation properties without increasing the computational complexity of the perturbations. Thus we may randomly perturb the subsets of potentials in the graph. For notational simplicity we describe our approximate sampling scheme for pairwise interactions α = (i, j), although it holds for general graphical models without cycles.

Theorem 2. Let θ(x) = \sum_{i \in V} \theta_i(x_i) + \sum_{(i,j) \in E} \theta_{i,j}(x_i, x_j) be a graphical model without cycles, and let p(x) be the Gibbs distribution defined in Equation (1).
Let \hat\theta(\hat x) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} \theta(x_{1,k_1}, \ldots, x_{n,k_n}) / \prod_i m_i, and let \hat\gamma_{i,j}(\hat x_i, \hat x_j) = \sum_{k_i=1}^{m_i} \sum_{k_j=1}^{m_j} \gamma_{i,j,k_i,k_j}(x_{i,k_i}, x_{j,k_j}) / (m_i m_j), where each perturbation is independent and distributed according to the Gumbel distribution with zero mean. Then, for every edge (r, s) with m_r = m_s = 1 (i.e., no multiple copies of x_r, x_s), with probability at least 1 − \sum_{i=1}^n \pi^2 c / (6 m_i \epsilon^2), where c = \max_i |X_i|,

\Big| \log \mathbb{P}_\gamma\Big[ x_r, x_s \in \arg\max_{\hat x} \Big\{ \hat\theta(\hat x) + \sum_{(i,j) \in E} \hat\gamma_{i,j}(\hat x_i, \hat x_j) \Big\} \Big] - \log \sum_{x \setminus \{x_r, x_s\}} p(x) \Big| \le \epsilon n

Proof: Theorem 1 implies that we sample (x_r, x_s) approximately from the marginal probabilities of the Gibbs distribution with a max-operation if we approximate \sum_{x \setminus \{x_r, x_s\}} \exp(\theta(x)). Using graph separation (or, equivalently, the Markov property), it suffices to approximate the partial partition functions over the disjoint subtrees T_r, T_s that originate from r and s, respectively. Lemma 1 describes this case for a directed tree with a single parent. We proceed by induction on the parents of these directed trees, noticing that graph separation guarantees that the statistics of Lemma 1 hold uniformly for every assignment of the parent's non-descendants, and that the optimal assignments in Lemma 1 are chosen independently for every child, for every assignment of the parent's non-descendants. □

Our approximate sampling procedure expands the graphical model, creating layers of the original graph and connecting edges between vertices in different layers whenever an edge exists in the original graph. We use graph separations (Markov properties) to guarantee that the number of added layers is polynomial in n, while we approach the Gibbs distribution arbitrarily closely. This construction preserves the structure of the original graph; in particular, whenever the original graph has no cycles, the expanded graph has no cycles either. In the experiments we show that this probability model approximates the Gibbs distribution well for graphical models with many cycles.
4 Unbiased sampling using sequential bounds on the partition function

In the following we describe how to use random MAP perturbations to generate unbiased samples from the Gibbs distribution. Sampling from the Gibbs distribution is inherently tied to estimating the partition function. If we could compute the partition function exactly, we could sample from the Gibbs distribution sequentially: for every dimension, sample x_i with probability proportional to \sum_{x_{i+1}, \ldots, x_n} \exp(\theta(x)). Unfortunately, approximations to the partition function, as described in Section 3, cannot provide a sequential procedure that generates unbiased samples from the full Gibbs distribution. Instead, we construct a family of self-reducible upper bounds which imitate the behavior of the partition function, namely, which bound the summation over its exponentiations. These upper bounds extend the one in [5] when restricted to local perturbations.

Lemma 2. Let {γ_i(x_i)} be a collection of i.i.d. random variables, each following the Gumbel distribution with zero mean. Then for every j = 1, ..., n and every x_1, ..., x_{j−1},

\sum_{x_j} \exp\Big( \mathbb{E}_\gamma\Big[ \max_{x_{j+1}, \ldots, x_n} \Big\{ \theta(x) + \sum_{i=j+1}^{n} \gamma_i(x_i) \Big\} \Big] \Big) \le \exp\Big( \mathbb{E}_\gamma\Big[ \max_{x_j, \ldots, x_n} \Big\{ \theta(x) + \sum_{i=j}^{n} \gamma_i(x_i) \Big\} \Big] \Big)

In particular, for j = n, \sum_{x_n} \exp(\theta(x)) = \exp\big( \mathbb{E}_{\gamma_n(x_n)}\big[ \max_{x_n} \{\theta(x) + \gamma_n(x_n)\} \big] \big).

Proof: The result is an application of the expectation-optimization interpretation of the partition function in Theorem 1. The left-hand side equals \exp\big( \mathbb{E}_{\gamma_j} \max_{x_j} \mathbb{E}_{\gamma_{j+1}, \ldots, \gamma_n} \max_{x_{j+1}, \ldots, x_n} \{\theta(x) + \sum_{i=j}^{n} \gamma_i(x_i)\} \big), while the right-hand side is attained by alternating the maximization with respect to x_j with the expectation over γ_{j+1}, ..., γ_n. The proof then follows by taking exponents. □

We use these upper bounds for every dimension i = 1, ..., n to sample from a probability distribution that follows a summation over exponential functions, with a discrepancy that is described by the upper bound.
This is formalized in Algorithm 1.

Algorithm 1 (Unbiased sampling from the Gibbs distribution using randomized prediction). Iterate over j = 1, ..., n, keeping x_1, ..., x_{j−1} fixed:

1. Set p_j(x_j) = \exp\big( \mathbb{E}_\gamma \max_{x_{j+1}, \ldots, x_n} \{\theta(x) + \sum_{i=j+1}^{n} \gamma_i(x_i)\} \big) \big/ \exp\big( \mathbb{E}_\gamma \max_{x_j, \ldots, x_n} \{\theta(x) + \sum_{i=j}^{n} \gamma_i(x_i)\} \big).
2. Set p_j(r) = 1 − \sum_{x_j} p_j(x_j).
3. Sample an element according to p_j(·). If r is sampled, reject and restart with j = 1. Otherwise, fix the sampled element x_j and continue the iterations.

Output: x_1, ..., x_n.

Since we reject with the discrepancy probability, the probability of accepting a configuration x is the product of the probabilities in all rounds. Since these upper bounds are self-reducible, i.e., for every dimension i we use the same quantities that were computed in the previous dimensions 1, ..., i−1, an accepted configuration is sampled proportionally to exp(θ(x)), the full Gibbs distribution.

Theorem 3. Let p(x) be the Gibbs distribution defined in Equation (1), and let {γ_i(x_i)} be a collection of i.i.d. random variables following the Gumbel distribution with zero mean. Then, whenever Algorithm 1 accepts, it produces a configuration (x_1, ..., x_n) according to the Gibbs distribution:

P[\text{Algorithm 1 outputs } x \mid \text{Algorithm 1 accepts}] = p(x).

Proof: The probability of sampling a configuration (x_1, ..., x_n) without rejecting is

\prod_{j=1}^{n} \frac{\exp\big( \mathbb{E}_\gamma \max_{x_{j+1}, \ldots, x_n} \{\theta(x) + \sum_{i=j+1}^{n} \gamma_i(x_i)\} \big)}{\exp\big( \mathbb{E}_\gamma \max_{x_j, \ldots, x_n} \{\theta(x) + \sum_{i=j}^{n} \gamma_i(x_i)\} \big)} = \frac{\exp(\theta(x))}{\exp\big( \mathbb{E}_\gamma \max_{x_1, \ldots, x_n} \{\theta(x) + \sum_{i=1}^{n} \gamma_i(x_i)\} \big)}.

The probability of sampling some configuration without rejecting is thus the sum of this probability over all configurations, i.e.,

P[\text{Algorithm 1 accepts}] = \frac{Z}{\exp\big( \mathbb{E}_\gamma \max_{x_1, \ldots, x_n} \{\theta(x) + \sum_{i=1}^{n} \gamma_i(x_i)\} \big)}.

Therefore, conditioned on accepting a configuration, it is produced according to the Gibbs distribution. □

Acceptance/rejection follows the geometric distribution; the sampling procedure therefore rejects k times with probability (1 − P[Algorithm 1 accepts])^k.
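Algorithm 1 requires the expectations E_γ max{·} exactly, and for low dimensional perturbations these have no closed form. The sketch below replaces them with Monte-Carlo averages on a two-variable toy model, so it illustrates only the control flow of steps 1–3 (including the restart on rejection), not an exact unbiased sampler; the toy potential and all names are our own:

```python
import itertools
import math
import random

random.seed(1)
EULER = 0.5772156649015329

def gumbel():
    """Zero-mean Gumbel sample."""
    return -math.log(-math.log(random.random())) - EULER

def upper_term(theta, fixed, n, trials=4000):
    """Monte-Carlo estimate of E_gamma max over the free coordinates of
    theta(x) + sum of per-coordinate Gumbel perturbations, with the
    leading coordinates held at `fixed` (binary variables assumed)."""
    free = n - len(fixed)
    total = 0.0
    for _ in range(trials):
        g = [{0: gumbel(), 1: gumbel()} for _ in range(free)]
        total += max(theta(fixed + rest) +
                     sum(g[i][rest[i]] for i in range(free))
                     for rest in itertools.product((0, 1), repeat=free))
    return total / trials

def sequential_sample(theta, n):
    """Control flow of Algorithm 1 with estimated (not exact) expectations."""
    while True:                                # restart on rejection
        x = ()
        for j in range(n):
            denom = upper_term(theta, x, n)
            p = {v: math.exp(upper_term(theta, x + (v,), n) - denom)
                 for v in (0, 1)}
            u = random.random()                # residual mass = rejection prob.
            if u < p[0]:
                x += (0,)
            elif u < p[0] + p[1]:
                x += (1,)
            else:
                x = None                       # reject: restart from j = 1
                break
        if x is not None:
            return x

toy = lambda x: 0.8 * x[0] + 0.5 * x[1] + 0.6 * x[0] * x[1]
draw = sequential_sample(toy, 2)
```

By Lemma 2 the per-step probabilities sum to at most one (up to Monte-Carlo error), so the residual mass acts as the rejection event of step 3.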
The running time of our Gibbs sampler is determined by the expected number of rejections, 1/P[Algorithm 1 accepts]. Interestingly, this average is precisely the quality of the partition function upper bound presented in [5]. To complement this result, we investigate in the next section efficiently computable lower bounds on the partition function that are based on random MAP perturbations. These lower bounds provide a way to efficiently determine the computational complexity of sampling from the Gibbs distribution for a given potential function.

5 Lower bounds on the partition function

The realization of the partition function as an expectation-optimization pair in Theorem 1 provides efficiently computable lower bounds on the partition function. Intuitively, these bounds correspond to moving expectations (or summations) inside the maximization operations. In the following we present two lower bounds derived along these lines; the first holds in expectation and the second holds in probability.

Corollary 1. Consider a family of subsets α ∈ A, and let x_α be the set of variables {x_i}_{i∈α} restricted to the indexes in α. Assume that the random variables γ_α(x_α) are i.i.d. according to the Gumbel distribution with zero mean, for every α, x_α. Then

\forall \alpha \in A: \quad \log Z \ge \mathbb{E}_\gamma\Big[ \max_x \{\theta(x) + \gamma_\alpha(x_\alpha)\} \Big].

In particular, \log Z \ge \mathbb{E}_\gamma\big[ \max_x \{\theta(x) + \frac{1}{|A|} \sum_{\alpha \in A} \gamma_\alpha(x_\alpha)\} \big].

Proof: Let ᾱ = {1, ..., n} \ α; then Z = \sum_{x_\alpha} \sum_{x_{\bar\alpha}} \exp(\theta(x)) \ge \sum_{x_\alpha} \max_{x_{\bar\alpha}} \exp(\theta(x)). The first result is derived by swapping the maximization with the exponent and applying Theorem 1. The second result is attained by averaging these lower bounds, \log Z \ge \sum_{\alpha \in A} \frac{1}{|A|} \mathbb{E}_\gamma[\max_x \{\theta(x) + \gamma_\alpha(x_\alpha)\}], and moving the summation inside the maximization operation. □

The expected lower bound requires invoking a MAP solver multiple times. Although this expectation may be estimated with a single MAP execution, the variance of this random MAP prediction is around √n.
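The expected lower bound of Corollary 1 is easy to check numerically on a model small enough that log Z can be enumerated. A sketch with a hypothetical 3-spin chain (fields, couplings, and names are our own); the expectation is estimated by averaging brute-force maximizations with single-coordinate perturbations:

```python
import itertools
import math
import random

random.seed(2)
EULER = 0.5772156649015329

def gumbel():
    """Zero-mean Gumbel sample."""
    return -math.log(-math.log(random.random())) - EULER

# Toy 3-spin chain so that log Z is computable exactly.
h = [0.3, -0.2, 0.5]
J = {(0, 1): 0.4, (1, 2): 0.4}
def theta(x):
    return (sum(hi * xi for hi, xi in zip(h, x)) +
            sum(v * x[i] * x[j] for (i, j), v in J.items()))

space = list(itertools.product((-1, 1), repeat=3))
log_z = math.log(sum(math.exp(theta(x)) for x in space))

def expected_lower_bound(alpha=0, trials=20000):
    """Corollary 1 with A = {{alpha}}: Monte-Carlo estimate of
    E_gamma[ max_x { theta(x) + gamma_alpha(x_alpha) } ]  <=  log Z."""
    total = 0.0
    for _ in range(trials):
        g = {-1: gumbel(), 1: gumbel()}
        total += max(theta(x) + g[x[alpha]] for x in space)
    return total / trials

lb = expected_lower_bound()
```

In a real application the inner brute-force maximization is replaced by a MAP solver (e.g. graph-cuts for attractive models), which is the point of the construction.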
We suggest recursively using Lemma 1 to lower bound the partition function with a single MAP operation, in probability.

Corollary 2. Let θ(x) be a potential function over x = (x_1, ..., x_n). We create multiple copies of x_i, namely x_{i,k_i} for k_i = 1, ..., m_i, and define the extended potential function \hat\theta(\hat x) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} \theta(x_{1,k_1}, \ldots, x_{n,k_n}) / \prod_i m_i. We define the extended perturbation model \hat\gamma_i(\hat x_i) = \sum_{k_i=1}^{m_i} \gamma_{i,k_i}(x_{i,k_i}) / m_i, where each perturbation is independent and distributed according to the Gumbel distribution with zero mean. Then, with probability at least 1 − \sum_{i=1}^n \pi^2 |\mathrm{dom}(\theta)| / (6 m_i \epsilon^2),

\log Z \ge \max_{\hat x} \Big\{ \hat\theta(\hat x) + \sum_{i=1}^{n} \hat\gamma_i(\hat x_i) \Big\} - \epsilon n

Proof: We estimate the expectation-optimization value of the log-partition function iteratively for every dimension, replacing each expectation with its sampled average, as described in Lemma 1. Our result holds for every potential function, so the statistics in each recursion hold uniformly for every x with probability at least 1 − π²|dom(θ)|/(6 m_i ϵ²). We then move the averages inside the maximization operation, thus lower bounding the ϵn-approximation of the partition function. □

Figure 1: Left: comparing our expected and probable lower bounds with structured mean-field and belief propagation on attractive models with high signal and varying coupling strength. Middle: estimating the complexity of our unbiased sampling procedure on spin glass models of varying sizes. Right: comparing our approximate sampling procedure on attractive models with high signal.

The probable lower bound that we provide does not assume graph separation, so its statistical guarantees are worse than those of the approximation scheme in Theorem 2. Also, since we seek a lower bound, we are able to relax our optimization requirements and thus use vertex-based random perturbations γ_i(x_i).
This is an important difference that makes this lower bound widely applicable and very efficient.

6 Experiments

We evaluated our approach on spin glass models

\theta(x) = \sum_{i \in V} \theta_i x_i + \sum_{(i,j) \in E} \theta_{i,j} x_i x_j,

where x_i ∈ {−1, 1}. Each spin has a local field parameter θ_i, sampled uniformly from [−1, 1]. The spins interact in a grid-shaped graphical model with couplings θ_{i,j}, sampled uniformly from [0, c]. Whenever the coupling parameters are positive, the model is called attractive, as adjacent variables give higher values to positively correlated configurations. Attractive models are computationally appealing, as their MAP predictions can be computed efficiently by the graph-cut algorithm [2]. We begin by evaluating our lower bounds, presented in Section 5, on 10 × 10 spin glass models. Corollary 1 presents a lower bound that holds in expectation; we evaluated it while perturbing the local potentials with γ_i(x_i). Corollary 2 presents a lower bound that holds in probability and requires only a single MAP prediction on an expanded model. We evaluate the probable bound by expanding the model to 1000 × 1000 grids, ignoring the discrepancy ϵ. For both the expected and probable lower bounds we used graph-cuts to compute the random MAP perturbations. We compared these bounds to the different forms of structured mean-field, taking the one that performed best: standard structured mean-field computed over the vertical chains [8, 1], and the negative tree re-weighted bound computed on the horizontal and vertical trees [14]. We also compared to the sum-product belief propagation algorithm, which was recently proven to produce lower bounds for attractive models [20, 18]. We computed the error in estimating the logarithm of the partition function, averaged over 10 spin glass models; see Figure 1. One can see that the probable bound is the tightest in the medium and high coupling regimes, which are traditionally hard for all methods.
As the probable bound holds only in probability, it might generate a solution which is not a lower bound; one can verify that on average this does not happen. The expected lower bound is significantly worse in the low coupling regime, in which many configurations need to be taken into account. It is (surprisingly) effective in the high coupling regime, which is characterized by a few dominant configurations.

Section 4 describes an algorithm that generates unbiased samples from the full Gibbs distribution. Focusing on spin glass models with strong local field potentials, it is well known that one cannot produce unbiased samples from the Gibbs distribution in polynomial time [3]. Theorem 3 connects the computational complexity of our unbiased sampling procedure to the gap between the logarithm of the partition function and its upper bound in [5]. We use our probable lower bound to estimate this gap on large grids, for which we cannot compute the partition function exactly. Figure 1 suggests that the running time of this sampling procedure is sub-exponential.

Figure 2: Example image with the boundary annotation (left) and the error estimates obtained using our method (right). Thin structures of the object are often lost in a single MAP solution (middle-left) but are recovered by averaging the samples (middle-right), leading to better error estimates.

Sampling from the Gibbs distribution in spin glass models with non-zero local field potentials is computationally hard [7, 3]. The approximate sampling technique in Theorem 2 suggests a method to overcome this difficulty by efficiently sampling from a distribution that approximates the Gibbs distribution in its marginal probabilities. Although our theory is only stated for graphs without cycles, it can readily be applied to general graphs, in the same way the (loopy) belief propagation algorithm is applied. For computational reasons we did not expand the graph.
Also, we experiment both with pairwise perturbations, as Theorem 2 suggests, and with local perturbations, which are guaranteed to preserve the super-modularity of the potential function. We computed the local marginal probability errors of our sampling procedure, comparing to the standard methods of Gibbs sampling, Metropolis and Swendsen-Wang.1 In our experiments we let them run for at most 1e8 iterations; see Figure 1. Both Gibbs sampling and the Metropolis algorithm perform similarly (we omit the Gibbs sampler performance for clarity). Although these algorithms, as well as the Swendsen-Wang algorithm, directly sample from the Gibbs distribution, they typically require exponential running time to succeed on spin glass models. Figure 1 shows that these samplers are worse than our approximate samplers. Although we omit them from the plots for clarity, the marginal probabilities of our approximate sampler are comparable to those of sum-product belief propagation and tree re-weighted belief propagation [22]. Nevertheless, our sampling scheme also provides a probability notion, which belief propagation type algorithms lack. Surprisingly, the approximate sampler that uses pairwise perturbations performs (slightly) worse than the approximate sampler that only uses local perturbations. Although this is not explained by our current theory, it is an encouraging observation, since the approximate sampler that uses random MAP predictions with local perturbations is orders of magnitude faster. Lastly, we emphasize the importance of probabilistic reasoning over current variational methods, such as tree re-weighted belief propagation [22] or max-marginal probabilities [10], that only generate probabilities over small subsets of variables. The task we consider is to obtain pixel-accurate boundaries from rough boundaries provided by the user.
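The approximate sampler with local perturbations can be sketched as follows: draw an independent zero-mean Gumbel for every (variable, state) pair, solve the perturbed MAP problem, and take the winning configuration as one sample. In the sketch below (all names are ours) the MAP step is brute force rather than a graph-cut, so it only runs on tiny models; the marginals it produces are approximate, not exact.

```python
import itertools
import math
import random

def exact_marginals(verts, potential):
    """p(x_v = +1) under the Gibbs distribution, by enumeration."""
    num = {v: 0.0 for v in verts}
    z = 0.0
    for spins in itertools.product((-1, 1), repeat=len(verts)):
        x = dict(zip(verts, spins))
        w = math.exp(potential(x))
        z += w
        for v in verts:
            if x[v] == 1:
                num[v] += w
    return {v: num[v] / z for v in verts}

def perturb_and_map_marginals(verts, potential, n_samples=500, seed=0):
    """Perturb-and-MAP with *local* (low-dimensional) perturbations: one zero-mean
    Gumbel per (vertex, state) pair, then the MAP of the perturbed model.
    MAP here is brute force; in the paper it is a graph-cut."""
    euler = 0.5772156649015329
    rng = random.Random(seed)
    counts = {v: 0 for v in verts}
    for _ in range(n_samples):
        g = {(v, s): -math.log(-math.log(rng.random())) - euler
             for v in verts for s in (-1, 1)}
        best_x, best_val = None, -float("inf")
        for spins in itertools.product((-1, 1), repeat=len(verts)):
            x = dict(zip(verts, spins))
            val = potential(x) + sum(g[(v, x[v])] for v in verts)
            if val > best_val:
                best_val, best_x = val, x
        for v in verts:
            if best_x[v] == 1:
                counts[v] += 1
    return {v: counts[v] / n_samples for v in verts}
```

On weakly coupled models the local-perturbation marginals track the exact Gibbs marginals closely, which is the empirical observation Figure 1 quantifies on grids.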
For example, in an image editing application the user may provide an input in the form of a rough polygon, and the goal is to refine the boundaries using the information from the gradients in the image. A natural notion of error is the average deviation of the marked boundary from the true boundary of the image. Given a user boundary we set up a graphical model on the pixels using foreground/background models trained from regions well inside/outside the marked boundary. An exact binary labeling can be obtained using the graph-cuts algorithm. From this we can compute the expected error by sampling multiple solutions using random MAP predictors and averaging. On a dataset of 10 images which we carefully annotated to obtain pixel-accurate boundaries, we find that random MAP perturbations produce significantly more accurate estimates of boundary error compared to a single MAP solution. On average the error estimates obtained using random MAP perturbations are off by 1.04 pixels from the true error (obtained from ground truth), whereas the MAP estimate is off by 3.51 pixels. Such a measure can be used in an active annotation framework where the users can iteratively fix parts of the boundary that contain errors. 1We used Talya Meltzer's inference package. Figure 2 shows an example annotation, the MAP solution, the mean of 20 random MAP solutions, and boundary error estimates. 7 Related work The Gibbs distribution plays a key role in many areas of science, including computer science, statistics and physics. To learn more about its roles in machine learning, as well as its standard samplers, we refer the interested reader to the textbook [11]. Our work is based on max-statistics of collections of random variables. For a comprehensive introduction to extreme value statistics we refer the reader to [12]. The Gibbs distribution and its partition function can be realized from the statistics of random MAP perturbations with the Gumbel distribution (see Theorem 1) [12, 17, 21, 5].
Recently, [16, 9, 17, 21, 6] explored different aspects of random MAP predictions with low dimensional perturbations. [16] describe sampling from the Gaussian distribution with random Gaussian perturbations. [17] show that random MAP predictors with low dimensional perturbations share similar statistics with the Gibbs distribution. [21] describe the Bayesian perspective of these models and their efficient sampling procedures. [9, 6] consider the generalization properties of such models within PAC-Bayesian theory. In our work we formally relate random MAP perturbations and the Gibbs distribution. Specifically, we describe the case for which the marginal probabilities of random MAP perturbations, with the proper expansion, approximate those of the Gibbs distribution. We also show how to use the statistics of random MAP perturbations to generate unbiased samples from the Gibbs distribution. These probability models generate samples efficiently through optimization: they have statistical advantages over purely variational approaches such as tree re-weighted belief propagation [22] or max-marginals [10], and they are faster than standard Gibbs samplers and Markov chain Monte Carlo approaches when MAP prediction is efficient [3, 25]. Other methods that efficiently produce samples include herding [23] and determinantal processes [13]. Our suggested samplers for the Gibbs distribution are based on a low dimensional representation of the partition function [5]. We augment their results in a few ways. In Lemma 2 we refine their upper bound to a series of sequentially tighter bounds. Corollary 2 shows that the approximation scheme of [5] is in fact a lower bound that holds in probability. Lower bounds for the partition function have been extensively developed in recent years within the context of variational methods.
Structured mean-field methods are inner-bound methods where a simpler distribution is optimized as an approximation to the posterior in a KL-divergence sense [8, 1, 14]. The difficulty comes from the non-convexity of the set of feasible distributions. Surprisingly, [20, 18] have shown that sum-product belief propagation provides a lower bound on the partition function for super-modular potential functions. This result is based on the four function theorem, which considers nonnegative functions over distributive lattices. 8 Discussion This work explores new approaches to sample from the Gibbs distribution. Sampling from the Gibbs distribution is a key problem in machine learning. Traditional approaches, such as Gibbs sampling, fail in the “high-signal high-coupling” regime, which results in ragged energy landscapes. Following [17, 21], we showed here that one can take advantage of efficient MAP solvers to generate approximate or unbiased samples from the Gibbs distribution by randomly perturbing the potential function. Since MAP predictions are not affected by ragged energy landscapes, our approach excels in the “high-signal high-coupling” regime. As a by-product of our approach we constructed lower bounds on the partition function that are both tighter and faster than previous approaches in the “high-signal high-coupling” regime. Our approach is based on random MAP perturbations that estimate the partition function in expectation. In practice we compute the empirical mean; [15] show that the deviation of the sampled mean from its expectation decays exponentially. The computational complexity of our approximate sampling procedure is determined by the dimension of the perturbations. Currently, our theory does not explain the success of the probability model based on the maximal argument of the perturbed MAP program with local perturbations. References [1] Alexandre Bouchard-Côté and Michael I. Jordan. Optimization of structured mean field objectives.
In AUAI, pages 67–74, 2009.
[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[3] L. A. Goldberg and M. Jerrum. The complexity of ferromagnetic Ising with local fields. Combinatorics, Probability and Computing, 16(1):43, 2007.
[4] E. J. Gumbel and J. Lieblein. Statistical Theory of Extreme Values and Some Practical Applications: A Series of Lectures, volume 33. US Govt. Print. Office, 1954.
[5] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[6] T. Hazan, S. Maji, J. Keshet, and T. Jaakkola. Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions. Advances in Neural Information Processing Systems, 2013.
[7] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087–1116, 1993.
[8] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[9] J. Keshet, D. McAllester, and T. Hazan. PAC-Bayesian approach for minimization of phoneme error rate. In ICASSP, 2011.
[10] P. Kohli and P. H. S. Torr. Measuring uncertainty in graph cut solutions – efficiently computing min-marginal energies using dynamic graph cuts. In ECCV, pages 30–43, 2006.
[11] D. Koller and N. Friedman. Probabilistic Graphical Models. MIT Press, 2009.
[12] S. Kotz and S. Nadarajah. Extreme Value Distributions: Theory and Applications. World Scientific Publishing Company, 2000.
[13] A. Kulesza and B. Taskar. Structured determinantal point processes. In Proc. Neural Information Processing Systems, 2010.
[14] Q. Liu and A. T. Ihler. Negative tree reweighted belief propagation. arXiv preprint arXiv:1203.3494, 2012.
[15] F. Orabona, T. Hazan, A. D. Sarwate, and T. Jaakkola. On measure concentration of random maximum a-posteriori perturbations. arXiv:1310.4227, 2013.
[16] G. Papandreou and A. Yuille. Gaussian sampling by local perturbations. In Proc. Int. Conf. on Neural Information Processing Systems (NIPS), pages 1858–1866, December 2010.
[17] G. Papandreou and A. Yuille. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, Barcelona, Spain, November 2011.
[18] N. Ruozzi. The Bethe partition function of log-supermodular graphical models. arXiv preprint arXiv:1202.6035, 2012.
[19] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using message passing. In Conf. Uncertainty in Artificial Intelligence (UAI), 2008.
[20] E. B. Sudderth, M. J. Wainwright, and A. S. Willsky. Loop series and Bethe variational bounds in attractive graphical models. Advances in Neural Information Processing Systems, 20, 2008.
[21] D. Tarlow, R. P. Adams, and R. S. Zemel. Randomized optimum models for structured prediction. In Proceedings of the 15th Conference on Artificial Intelligence and Statistics, 2012.
[22] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. Trans. on Information Theory, 51(7):2313–2335, 2005.
[23] M. Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1121–1128. ACM, 2009.
[24] T. Werner. High-arity interactions, polyhedral relaxations, and cutting plane algorithm for soft constraint optimisation (MAP-MRF). In CVPR, pages 1–8, 2008.
[25] J. Zhang, H. Liang, and F. Bai. Approximating partition functions of the two-state spin system. Information Processing Letters, 111(14):702–710, 2011.
|
2013
|
89
|
5,168
|
A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles Jinwoo Shin Department of Electrical Engineering Korea Advanced Institute of Science and Technology Daejeon, 305-701, Republic of Korea jinwoos@kaist.ac.kr Andrew E. Gelfand ∗ Department of Computer Science University of California, Irvine Irvine, CA 92697-3435, USA agelfand@ics.uci.edu Michael Chertkov Theoretical Division & Center for Nonlinear Studies Los Alamos National Laboratory Los Alamos, NM 87545, USA chertkov@lanl.gov Abstract Max-product ‘belief propagation’ (BP) is a popular distributed heuristic for finding the Maximum A Posteriori (MAP) assignment in a joint probability distribution represented by a Graphical Model (GM). It was recently shown that BP converges to the correct MAP assignment for a class of loopy GMs with the following common feature: the Linear Programming (LP) relaxation to the MAP problem is tight (has no integrality gap). Unfortunately, tightness of the LP relaxation does not, in general, guarantee convergence and correctness of the BP algorithm. The failure of BP in such cases motivates reverse engineering a solution – namely, given a tight LP, can we design a ‘good’ BP algorithm? In this paper, we design a BP algorithm for the Maximum Weight Matching (MWM) problem over general graphs. We prove that the algorithm converges to the correct optimum if the respective LP relaxation, which may include inequalities associated with non-intersecting odd-sized cycles, is tight. The most significant part of our approach is the introduction of a novel graph transformation designed to force convergence of BP. Our theoretical result suggests an efficient BP-based heuristic for the MWM problem, which consists of making sequential, “cutting plane”, modifications to the underlying GM. Our experiments show that this heuristic performs as well as traditional cutting-plane algorithms using LP solvers on MWM problems.
1 Introduction Graphical Models (GMs) provide a useful representation for reasoning in a range of scientific fields [1, 2, 3, 4]. Such models use a graph structure to encode the joint probability distribution, where vertices correspond to random variables and edges (or lack thereof) specify conditional dependencies. An important inference task in many applications involving GMs is to find the most likely assignment to the variables in a GM - the maximum a posteriori (MAP) configuration. Belief Propagation (BP) is a popular algorithm for approximately solving the MAP inference problem. BP is an iterative, message passing algorithm that is exact on tree structured GMs. However, BP often shows remarkably strong heuristic performance beyond trees, i.e. on GMs with loops. Distributed implementation, associated ease of programming and strong parallelization potential are among the main reasons for the popularity of the BP algorithm, e.g., see the parallel implementations of [5, 6]. The convergence and correctness of BP was recently established for a certain class of loopy GM formulations of several classic combinatorial optimization problems, including matchings [7, 8, 9], perfect matchings [10], independent sets [11] and network flows [12]. ∗Also at Theoretical Division of Los Alamos National Lab. The important common feature of these instances is that BP converges to a correct MAP assignment when the Linear Programming (LP) relaxation of the MAP inference problem is tight, i.e., it shows no integrality gap. While this demonstrates that LP tightness is necessary for the convergence and correctness of BP, it is unfortunately not sufficient in general. In other words, BP may not work even when the corresponding LP relaxation to the MAP inference problem is tight. This motivates a quest for improving BP-based MAP solvers so that they work when the LP is tight.
In this paper, we consider a specific class of GMs corresponding to the Maximum Weight Matching (MWM) problem and study whether BP can be used as an iterative, message passing-based LP solver when the MWM LP (relaxation) is tight. It was recently shown [15] that a MWM can be found in polynomial time by solving a carefully chosen sequence of LP relaxations, where the sequence of LPs is formed by adding and removing sets of so-called “blossom” inequalities [13] to the base LP relaxation. Utilizing successive LP relaxations to solve the MWM problem is an example of the popular cutting plane method for solving combinatorial optimization problems [14]. While the approach in [15] is remarkable in that one needs only a polynomial number of “cut” inequalities, it unfortunately requires solving an emerging sequence of LPs via traditional, centralized methods (e.g., ellipsoid, interior-point or simplex) that may not be practical for large-scale problems. This motivates our search for an efficient and distributed BP-based LP solver for this class of problems. Our work builds upon that of Sanghavi, Malioutov and Willsky [8], who studied BP for the GM formulation of the MWM problem on an arbitrary graph. The authors showed that max-product BP converges to the correct, MAP solution if the base LP relaxation with no blossoms - referred to herein as MWM-LP - is tight. Unfortunately, tightness is not guaranteed in general, and the convergence and correctness of max-product BP do not readily extend to a GM with blossom constraints. To resolve this issue, we propose a novel GM formulation of the MWM problem and show that max-product BP on this new GM converges to the MWM assignment as long as the MWM-LP relaxation with blossom constraints is tight. The only restriction placed on our GM construction is that the set of blossom constraints added to the base MWM-LP be non-intersecting (in edges).
Our GM construction is motivated by the so-called ‘degree-two’ (DT) condition, which requires that every variable in the GM be associated with at most two factor functions. The DT condition is necessary for analysis of BP using the computational tree technique, developed and advanced in [7, 8, 12, 16, 18, 19]. Note that the DT condition is not satisfied by the standard MWM GM formulation, and hence, we design a new GM that satisfies the DT condition via a clever graphical transformation, namely collapsing odd-sized cycles and defining new weights on the contracted graph. Importantly, the MAP assignments of the two GMs are in one-to-one correspondence, guaranteeing that a solution to the original problem can be recovered. Our theoretical result suggests a cutting-plane approach to the MWM problem, where BP is used as the LP solver. In particular, we examine the BP solution to identify odd-sized cycle constraints - “cuts” - to add to the MWM-LP relaxation; then construct a new GM using our graphical transformation, run BP and repeat. We evaluate this heuristic empirically and show that its performance is close to a traditional cutting-plane approach employing an LP solver rather than BP. Finally, we note that the DT condition may be neither sufficient nor necessary for BP to work; it was necessary, however, to provide theoretical guarantees for the special class of GMs considered. To our knowledge, our result is the first to suggest how to “fix” BP via a graph transformation so that it works properly, i.e., recovers the desired LP solution. We believe that our success in crafting a graphical transformation will offer useful insight into the design and analysis of BP algorithms for a wider class of problems. Organization. In Section 2, we introduce a standard GM formulation of the MWM problem as well as the corresponding BP and LP. In Section 3, we introduce our new GM and describe performance guarantees of the respective BP algorithm.
In Section 4, we describe a cutting-plane(-like) method using BP for the MWM problem and show its empirical performance for random MWM instances. 2 Preliminaries 2.1 Graphical Model for Maximum Weight Matchings A joint distribution of n (discrete) random variables Z = [Z_i] ∈ Ω^n is called a Graphical Model (GM) if it factorizes as follows: for z = [z_i] ∈ Ω^n,

Pr[Z = z] ∝ ∏_{α∈F} ψ_α(z_α),    (1)

where F is a collection of subsets of Ω, z_α = [z_i : i ∈ α ⊂ Ω] is a subset of variables, and ψ_α is some (given) non-negative function. The function ψ_α is called a factor (variable) function if |α| ≥ 2 (|α| = 1). For variable functions ψ_α with α = {i}, we simply write ψ_α = ψ_i. One calls z a valid assignment if Pr[Z = z] > 0. The MAP assignment z* is defined as z* = arg max_{z∈Ω^n} Pr[Z = z]. Let us introduce the Maximum Weight Matching (MWM) problem and its related GM. Suppose we are given an undirected graph G = (V, E) with weights {w_e : e ∈ E} assigned to its edges. A matching is a set of edges without common vertices. The weight of a matching is the sum of the corresponding edge weights. The MWM problem consists of finding a matching of maximum weight. Associate a binary random variable with each edge, X = [X_e] ∈ {0, 1}^{|E|}, and consider the probability distribution: for x = [x_e] ∈ {0, 1}^{|E|},

Pr[X = x] ∝ ∏_{e∈E} e^{w_e x_e} ∏_{i∈V} ψ_i(x) ∏_{C∈C} ψ_C(x),    (2)

where ψ_i(x) = 1 if Σ_{e∈δ(i)} x_e ≤ 1 and 0 otherwise, and ψ_C(x) = 1 if Σ_{e∈E(C)} x_e ≤ (|C| − 1)/2 and 0 otherwise. Here C is a set of odd-sized cycles C ⊂ 2^V, δ(i) = {(i, j) ∈ E} and E(C) = {(i, j) ∈ E : i, j ∈ C}. Throughout the manuscript, we assume that cycles are non-intersecting in edges, i.e., E(C1) ∩ E(C2) = ∅ for all C1, C2 ∈ C. It is easy to see that a MAP assignment x* for the GM (2) induces a MWM in G. We also assume that the MAP assignment is unique.
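To see concretely that a MAP assignment of the GM (2) is a maximum weight matching, note that any x with positive probability is the indicator of a matching, and its unnormalized probability is e raised to the matching's weight. A brute-force sketch (our names; exponential in |E|, so only for checking small instances):

```python
import itertools

def max_weight_matching_bruteforce(edges, weights):
    """MAP of the matching GM by enumeration: maximize sum_e w_e x_e over
    subsets of edges with no shared vertex (the valid assignments of (2))."""
    best_w, best_x = 0.0, frozenset()
    for r in range(len(edges) + 1):
        for sub in itertools.combinations(edges, r):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # no repeated vertex: a matching
                w = sum(weights[e] for e in sub)
                if w > best_w:
                    best_w, best_x = w, frozenset(sub)
    return best_w, best_x
```

This exhaustive oracle is useful below as ground truth when checking what BP and the LP relaxation return on toy graphs.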
2.2 Belief Propagation and Linear Programming for Maximum Weight Matchings In this section, we introduce max-product Belief Propagation (BP) and the Linear Programming (LP) relaxation to computing the MAP assignment in (2). We first describe the BP algorithm for the general GM (1), then tailor the algorithm to the MWM GM (2). The BP algorithm updates the set of 2|Ω| messages {m^t_{α→i}(z_i), m^t_{i→α}(z_i) : z_i ∈ Ω} between every variable i and its associated factors α ∈ F_i = {α ∈ F : i ∈ α, |α| ≥ 2} using the following update rules:

m^{t+1}_{α→i}(z_i) = Σ_{z′ : z′_i = z_i} ψ_α(z′) ∏_{j∈α\i} m^t_{j→α}(z′_j)   and   m^{t+1}_{i→α}(z_i) = ψ_i(z_i) ∏_{α′∈F_i\α} m^t_{α′→i}(z_i).

Here t denotes time and initially m^0_{α→i}(·) = m^0_{i→α}(·) = 1. Given a set of messages {m_{i→α}(·), m_{α→i}(·)}, the BP (max-marginal) beliefs {n_i(z_i)} are defined as follows: n_i(z_i) = ψ_i(z_i) ∏_{α∈F_i} m_{α→i}(z_i). For the GM (2), we let n^t_e(·) denote the BP belief on edge e ∈ E at time t. The algorithm outputs the MAP estimate at time t, x^BP(t) = [x^BP_e(t)] ∈ {0, ?, 1}^{|E|}, using the beliefs and the rule:

x^BP_e(t) = 1 if n^t_e(0) < n^t_e(1),   ? if n^t_e(0) = n^t_e(1),   0 if n^t_e(0) > n^t_e(1).

The LP relaxation to the MAP problem for the GM (2) is:

C-LP :   max Σ_{e∈E} w_e x_e   s.t.   Σ_{e∈δ(i)} x_e ≤ 1 ∀i ∈ V,   Σ_{e∈E(C)} x_e ≤ (|C| − 1)/2 ∀C ∈ C,   x_e ∈ [0, 1].

Observe that if the solution x^{C-LP} to C-LP is integral, i.e., x^{C-LP} ∈ {0, 1}^{|E|}, then it is a MAP assignment, i.e., x^{C-LP} = x*. Sanghavi, Malioutov and Willsky [8] proved the following theorem connecting the performance of BP and C-LP in a special case: Theorem 2.1. If C = ∅ and the solution of C-LP is integral and unique, then x^BP(t) under the GM (2) converges to the MWM assignment x*. Adding a small random component to every weight guarantees the uniqueness condition required by Theorem 2.1. A natural hope is that Theorem 2.1 extends to a non-empty C, since adding more cycles can help to reduce the integrality gap of C-LP. However, the theorem does not hold when C ≠ ∅.
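For C = ∅ the updates above specialize nicely: the variables are edges, the only factors are the vertex constraints ψ_i, and the factor-to-variable maximization has a closed form (if x_e = 1, all other edges at i must be off; if x_e = 0, either all are off or exactly one other edge is on). The log-domain sketch below is our own illustrative implementation (names are ours; a per-message normalization is added for numerical stability) and can be used to observe Theorem 2.1 on small graphs:

```python
from collections import defaultdict

def bp_mwm(edges, weights, iters=20):
    """Max-product BP (log domain) for the matching GM with C empty:
    variables are edges, factors are the vertex constraints psi_i."""
    adj = defaultdict(list)
    for e in edges:
        i, j = e
        adj[i].append(e)
        adj[j].append(e)
    # m[(i, e)][x]: log message from vertex factor i to edge variable e
    m = {(i, e): [0.0, 0.0] for i in adj for e in adj[i]}
    for _ in range(iters):
        # variable-to-factor: mu[(e, i)][x] = w_e * x + m[(j, e)][x], j other endpoint
        mu = {}
        for i in adj:
            for e in adj[i]:
                j = e[0] if e[1] == i else e[1]
                mu[(e, i)] = [m[(j, e)][0], weights[e] + m[(j, e)][1]]
        new_m = {}
        for i in adj:
            for e in adj[i]:
                others = [f for f in adj[i] if f != e]
                base = sum(mu[(f, i)][0] for f in others)
                on = base  # x_e = 1: every other edge at i must be 0
                off = base  # x_e = 0: all others 0, or exactly one other edge 1
                for f in others:
                    off = max(off, base - mu[(f, i)][0] + mu[(f, i)][1])
                top = max(off, on)  # normalize so messages stay bounded
                new_m[(i, e)] = [off - top, on - top]
        m = new_m
    x = {}
    for e in edges:
        i, j = e
        n0 = m[(i, e)][0] + m[(j, e)][0]
        n1 = weights[e] + m[(i, e)][1] + m[(j, e)][1]
        x[e] = 1 if n1 > n0 else 0 if n1 < n0 else None  # None = tie ('?')
    return x
```

On a path graph the factor graph is a tree, C-LP is integral, and the decoded estimate matches the MWM, exactly as Theorem 2.1 predicts.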
For example, BP does not converge for a triangle graph with edge weights {2, 1, 1} and C consisting of the only cycle, even though the solution to its C-LP is unique and integral. 3 A Graphical Transformation for Convergent & Correct BP The loss of convergence and correctness of BP when the MWM LP is tight (and unique) but C ≠ ∅ motivates the work in this section. We resolve the issue by designing a new GM, equivalent to the original GM, such that when BP is run on this new GM it converges to the MAP/MWM assignment whenever the LP relaxation is tight and unique - even if C ≠ ∅. The new GM is defined on an auxiliary graph G′ = (V′, E′) with new weights {w′_e : e ∈ E′}, as follows:

V′ = V ∪ {i_C : C ∈ C},   E′ = E ∪ {(i_C, j) : j ∈ V(C), C ∈ C} \ {e : e ∈ ∪_{C∈C} E(C)},

w′_e = (1/2) Σ_{e′∈E(C)} (−1)^{d_C(j,e′)} w_{e′} if e = (i_C, j) for some C ∈ C, and w′_e = w_e otherwise.

Here d_C(j, e) is the graph distance of j and e in cycle C = (j1, j2, . . . , jk), e.g., if e = (j2, j3), then d_C(j1, e) = 1. Figure 1: Example of original graph G (left) and new graph G′ (right) after collapsing cycle C = (1, 2, 3, 4, 5). In the new graph G′, edge weight w_{1C} = 1/2(w12 − w23 + w34 − w45 + w15). Associate a binary variable with each new edge and consider the new probability distribution on y = [y_e : e ∈ E′] ∈ {0, 1}^{|E′|}:

Pr[Y = y] ∝ ∏_{e∈E′} e^{w′_e y_e} ∏_{i∈V} ψ_i(y) ∏_{C∈C} ψ_C(y),    (3)

where ψ_i(y) = 1 if Σ_{e∈δ(i)} y_e ≤ 1 and 0 otherwise, and ψ_C(y) = 0 if Σ_{e∈δ(i_C)} y_e > |C| − 1, 0 if Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{i_C,j} ∉ {0, 2} for some e ∈ E(C), and 1 otherwise. It is not hard to check that the number of operations required to update messages at each round of BP under the above GM is O(|V||E|), since message updates involving factor ψ_C require solving a MWM problem on a simple cycle – which can be done efficiently via dynamic programming in time O(|C|) – and the sum of the numbers of edges of non-intersecting cycles is at most |E|. We are now ready to state the main result of this paper. Theorem 3.1.
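The new weights w′_{(i_C, j)} are mechanical to compute once d_C(j, e) is read as the cycle distance from vertex j to the nearer endpoint of edge e. A small sketch (function name is ours) that reproduces the Figure 1 example:

```python
def cycle_weights(cycle, w):
    """Weights w'_(iC, j) = 1/2 * sum_{e' in E(C)} (-1)^{d_C(j, e')} * w_{e'}
    for a collapsed odd cycle C; d_C(j, e) is the distance, measured along the
    cycle, from vertex j to the nearer endpoint of edge e."""
    k = len(cycle)
    pos = {v: p for p, v in enumerate(cycle)}
    cyc_edges = [frozenset((cycle[p], cycle[(p + 1) % k])) for p in range(k)]

    def dist(a, b):
        d = abs(pos[a] - pos[b])
        return min(d, k - d)

    def d_edge(j, e):
        return min(dist(j, u) for u in e)

    return {j: 0.5 * sum((-1) ** d_edge(j, e) * w[e] for e in cyc_edges)
            for j in cycle}
```

For the 5-cycle C = (1, 2, 3, 4, 5), the signs seen from vertex 1 are +, −, +, −, + along (1,2), (2,3), (3,4), (4,5), (5,1), matching the caption of Figure 1.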
If the solution of C-LP is integral and unique, then the BP-MAP estimate y^BP(t) under the GM (3) converges to the corresponding MAP assignment y*. Furthermore, the MWM assignment x* is reconstructible from y* as:

x*_e = (1/2) Σ_{j∈V(C)} (−1)^{d_C(j,e)} y*_{i_C,j} if e ∈ ∪_{C∈C} E(C), and x*_e = y*_e otherwise.    (4)

The proof of Theorem 3.1 is provided in the following sections. We also establish the convergence time of the BP algorithm under the GM (3) (see Lemma 3.2). We stress that the new GM (3) is designed so that each variable is associated with at most two factor nodes. We call this condition, which did not hold for the original GM (2), the ‘degree-two’ (DT) condition. The DT condition will play a critical role in the proof of Theorem 3.1. We further remark that even under the DT condition and given tightness/uniqueness of the LP, proving correctness and convergence of BP is still highly non-trivial. In our case, it requires careful study of the computation tree induced by BP with appropriate truncations at its leaves. 3.1 Main Lemma for Proof of Theorem 3.1 Let us introduce the following auxiliary LP over the new graph and weights.

C-LP′ :   max Σ_{e∈E′} w′_e y_e   s.t.   Σ_{e∈δ(i)} y_e ≤ 1 ∀i ∈ V,   y_e ∈ [0, 1] ∀e ∈ E′,    (5)
Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{i_C,j} ∈ [0, 2] ∀e ∈ E(C),   Σ_{e∈δ(i_C)} y_e ≤ |C| − 1 ∀C ∈ C.    (6)

Consider the following one-to-one linear mapping between x = [x_e : e ∈ E] and y = [y_e : e ∈ E′]:

y_e = Σ_{e′∈E(C)∩δ(i)} x_{e′} if e = (i, i_C), and y_e = x_e otherwise;
x_e = (1/2) Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{i_C,j} if e ∈ ∪_{C∈C} E(C), and x_e = y_e otherwise.

Under the mapping, one can check that C-LP = C-LP′ and that if the solution x^{C-LP} of C-LP is unique and integral, the solution y^{C-LP′} of C-LP′ is as well, i.e., y^{C-LP′} = y*. Hence, (4) in Theorem 3.1 follows. Furthermore, since the solution y* = [y*_e] of C-LP′ is unique and integral, there exists c > 0 such that

c = inf_{y ≠ y* : y feasible to C-LP′} w′ · (y* − y) / |y* − y|,   where w′ = [w′_e].
Using this notation, we establish the following lemma characterizing the performance of max-product BP over the new GM (3); Theorem 3.1 follows from this lemma directly. Lemma 3.2. If the solution y^{C-LP′} of C-LP′ is integral and unique, i.e., y^{C-LP′} = y*, then
• if y*_e = 1, then n^t_e[1] > n^t_e[0] for all t > 6w′_max/c + 6,
• if y*_e = 0, then n^t_e[1] < n^t_e[0] for all t > 6w′_max/c + 6,
where n^t_e[·] denotes the BP belief of edge e at time t under the GM (3) and w′_max = max_{e∈E′} |w′_e|. 3.2 Proof of Lemma 3.2 This section provides the complete proof of Lemma 3.2. We focus here on the case y*_e = 1; translation of the result to the opposite case y*_e = 0 is straightforward. To derive a contradiction, assume that n^t_e[1] ≤ n^t_e[0] and construct a tree-structured GM T_e(t) of depth t + 1, also known as the computational tree, using the following scheme:
1. Add a copy of Y_e ∈ {0, 1} as the (root) variable (with variable function e^{w′_e Y_e}).
2. Repeat the following t times for each leaf variable Y_e on the current tree-structured GM.
2-1. For each i ∈ V such that e ∈ δ(i) and ψ_i is not associated to Y_e of the current model, add ψ_i as a factor (function) with copies of {Y_{e′} ∈ {0, 1} : e′ ∈ δ(i) \ e} as child variables (with corresponding variable functions, i.e., {e^{w′_{e′} Y_{e′}}}).
2-2. For each C ∈ C such that e ∈ δ(i_C) and ψ_C is not associated to Y_e of the current model, add ψ_C as a factor (function) with copies of {Y_{e′} ∈ {0, 1} : e′ ∈ δ(i_C) \ e} as child variables (with corresponding variable functions, i.e., {e^{w′_{e′} Y_{e′}}}).
It is known from [17] that there exists a MAP configuration y^{TMAP} on T_e(t) with y^{TMAP}_e = 0 at the root variable. Now we construct a new assignment y^{NEW} on the computational tree T_e(t) as follows.
1. Initially, set y^{NEW} ← y^{TMAP} and let e be the root of the tree.
2. y^{NEW} ← FLIP_e(y^{NEW}).
3.
For each child factor ψ, which is equal to ψ_i (i.e., e ∈ δ(i)) or ψ_C (i.e., e ∈ δ(i_C)), associated with e,
(a) if ψ is satisfied by y^{NEW} and FLIP_e(y*) (i.e., ψ(y^{NEW}) = ψ(FLIP_e(y*)) = 1), then do nothing;
(b) else, if there exists a child e′ of e through factor ψ such that y^{NEW}_{e′} ≠ y*_{e′} and ψ is satisfied by FLIP_{e′}(y^{NEW}) and FLIP_{e′}(FLIP_e(y*)), then go to step 2 with e ← e′;
(c) otherwise, report ERROR.
To aid the reader's understanding, we provide a figure describing an example of the above construction in our technical report [21]. In the construction, FLIP_e(y) is the 0-1 vector made by flipping (i.e., changing from 0 to 1 or 1 to 0) the e-th position in y. We note that there exists exactly one child factor ψ in step 3 and we only choose one child e′ in step (b) (even though there may be many possible candidates). For this reason, the flip operations induce a path structure P in the tree T_e(t).1 Now we state the following key lemma for the above construction of y^{NEW}. Lemma 3.3. ERROR is never reported in the construction described above. Proof. The case ψ = ψ_i in step 3 is easy, and we only provide the proof for the case ψ = ψ_C. We also assume that y^{NEW}_e is flipped 1 → 0 (i.e., y*_e = 0); the proof for the case 0 → 1 follows in a similar manner. First, one can observe that y satisfies ψ_C if and only if y is the 0-1 indicator vector of a union of disjoint even paths in the cycle C. Since y^{NEW}_e is flipped 1 → 0, the even path including e is broken into an even (possibly empty) path and an odd (always non-empty) path. We consider two cases: (a) there exists e′ within the odd path (i.e., y^{NEW}_{e′} = 1) such that y*_{e′} = 0 and flipping y^{NEW}_{e′} 1 → 0 breaks the odd path into two even (disjoint) paths; (b) there exists no such e′ within the odd path. For the first case (a), it is easy to see that we can maintain the structure of disjoint even paths in y^{NEW} after flipping y^{NEW}_{e′} 1 → 0, i.e., ψ is satisfied by FLIP_{e′}(y^{NEW}).
For the second case (b), we choose e′ as a neighbor of the farthest end point (from e) of the odd path, i.e., y^{NEW}_{e′} = 0 (before flipping). Then y*_{e′} = 1, since y* satisfies factor ψ_C and induces a union of disjoint even paths in the cycle C. Therefore, if we flip y^{NEW}_{e′} 0 → 1, we can still maintain the structure of disjoint even paths in y^{NEW}, and ψ is satisfied by FLIP_{e′}(y^{NEW}). The proof for the case of ψ satisfied by FLIP_{e′}(FLIP_e(y*)) is similar. This completes the proof of Lemma 3.3. Due to how it is constructed, y^{NEW} is a valid configuration, i.e., it satisfies all the factor functions in T_e(t). Hence, it suffices to prove that w′(y^{NEW}) > w′(y^{TMAP}), which contradicts the assumption that y^{TMAP} is a MAP configuration on T_e(t). To this end, for (i, j) ∈ E′, let n^{0→1}_{ij} and n^{1→0}_{ij} be the numbers of flip operations 0 → 1 and 1 → 0 for copies of (i, j) in step 2 of the construction of T_e(t). Then one derives

w′(y^{NEW}) = w′(y^{TMAP}) + w′ · n^{0→1} − w′ · n^{1→0},

where n^{0→1} = [n^{0→1}_{ij}] and n^{1→0} = [n^{1→0}_{ij}]. We consider two cases: (i) the path P does not arrive at a leaf variable of T_e(t), and (ii) otherwise. Note that case (i) is possible only when the condition in step (a) holds during the construction of y^{NEW}. Case (i). In this case, we define y†_{ij} := y*_{ij} + ε(n^{1→0}_{ij} − n^{0→1}_{ij}), and establish the following lemma. Lemma 3.4. y† is feasible to C-LP′ for small enough ε > 0. Proof. We have to show that y† satisfies (5) and (6). Here we prove that y† satisfies (6) for small enough ε > 0; the proof for (5) can be argued in a similar manner. 1P may not have an alternating structure, since both y^{NEW}_e and its child y^{NEW}_{e′} can be flipped in the same way. To this end, for a given C ∈ C,
It is easy to see that the condition of step (a) never holds if ψ = ψC in step 3. For the i-th copy of ψC in P ∩ Te(t), we set y∗_C(i) = FLIPe′(FLIPe(y∗_C)) in step (b), where y∗_C(i) ∈ PC. Since the path P does not hit a leaf variable of Te(t), we have
(1/N) ∑_{i=1}^{N} y∗_C(i) = y∗_C + (1/N)(n1→0_C − n0→1_C),
where N is the number of copies of ψC in P ∩ Te(t). Furthermore, (1/N) ∑_{i=1}^{N} y∗_C(i) ∈ PC, since each y∗_C(i) ∈ PC and PC is convex. Therefore, y†_C ∈ PC if ε ≤ 1/N. This completes the proof of Lemma 3.4.
The above lemma, together with w′(y∗) > w′(y†) (due to the uniqueness of y∗), implies that w′ · n0→1 > w′ · n1→0, which leads to w′(yNEW) > w′(yTMAP).
Case (ii). We consider the case when only one end of P hits a leaf variable Ye of Te(t); the proof of the other case follows in a similar manner. In this case, we define y‡_ij := y∗_ij + ε(m1→0_ij − m0→1_ij), where m1→0 = [m1→0_ij] and m0→1 = [m0→1_ij] are constructed as follows:
1. Initially, set m1→0, m0→1 to n1→0, n0→1.
2. If yNEW_e is flipped as 1 → 0 and it is associated with a cycle parent factor ψC for some C ∈ C, then decrease m1→0_e by 1, and
 2-1. If the parent yNEW_e′ is flipped as 1 → 0, then decrease m1→0_e′ by 1.
 2-2. Else if there exists a 'brother' edge e′′ ∈ δ(iC) of e such that y∗_e′′ = 1 and ψC is satisfied by FLIPe′′(FLIPe′(y∗)), then increase m0→1_e′′ by 1.
 2-3. Otherwise, report ERROR.
3. If yNEW_e is flipped as 1 → 0 and it is associated with a vertex parent factor ψi for some i ∈ V, then decrease m1→0_e by 1.
4. If yNEW_e is flipped as 0 → 1 and it is associated with a vertex parent factor ψi for some i ∈ V, then decrease m0→1_e and m1→0_e′ by 1, where e′ ∈ δ(i) is the 'parent' edge of e, and
 4-1. If the parent yNEW_e′ is associated with a cycle parent factor ψC:
  4-1-1. If the grand-parent yNEW_e′′ is flipped as 1 → 0, then decrease m1→0_e′′ by 1.
  4-1-2. Else if there exists a 'brother' edge e′′′ ∈ δ(iC) of e′ such that y∗_e′′′ = 1 and ψC is satisfied by FLIPe′′′(FLIPe′′(y∗)), then increase m0→1_e′′′ by 1.
  4-1-3. Otherwise, report ERROR.
 4-2. Otherwise, do nothing.
We establish the following lemmas.
Lemma 3.5. ERROR is never reported in the above construction.
Lemma 3.6. y‡ is feasible to C-LP′ for small enough ε > 0.
Proofs of Lemma 3.5 and Lemma 3.6 are analogous to those of Lemma 3.3 and Lemma 3.4, respectively. From Lemma 3.6, we have
c ≤ w′ · (y∗ − y‡) / |y∗ − y‡| ≤ ε w′ · (m0→1 − m1→0) / (ε(t − 3)) ≤ ε (w′ · (n0→1 − n1→0) + 3w′_max) / (ε(t − 3)),
where |y∗ − y‡| ≥ ε(t − 3) follows from the fact that P hits a leaf variable of Te(t), and there are at most three increases or decreases in m0→1 and m1→0 in the above construction. Hence,
w′ · (n0→1 − n1→0) ≥ c(t − 3) − 3w′_max > 0 if t > 3w′_max/c + 3,
which implies w′(yNEW) > w′(yTMAP). If both ends of P hit leaf variables of Te(t), we need t > 6w′_max/c + 6. This completes the proof of Lemma 3.2.

4 Cutting-Plane Algorithm using Belief Propagation

In the previous section we established that BP on a carefully designed GM using non-intersecting odd-sized cycles solves the MWM problem when the corresponding MWM-LP relaxation is tight. However, finding a collection of odd-sized cycles to ensure tightness of the MWM-LP is a challenging task. In this section, we provide a heuristic algorithm, which we call CP-BP (cutting-plane using BP), for this task. It consists of making sequential, "cutting plane", modifications to the underlying LP (and corresponding GM) using the output of the BP algorithm in the previous step. CP-BP is defined as follows:
1. Initialize C = ∅.
2. Run BP on the GM in (3) for T iterations.
3. For each edge e ∈ E, set ye = 1 if nT_e[1] > nT_e[0] and nT−1_e[1] > nT−1_e[0]; ye = 0 if nT_e[1] < nT_e[0] and nT−1_e[1] < nT−1_e[0]; and ye = 1/2 otherwise.
4. Compute x = [xe] using y = [ye] as per (4), and terminate if x ∉ {0, 1/2, 1}^|E|.
5. If there is no edge e with xe = 1/2, return x. Otherwise, add a non-intersecting odd-sized cycle of edges {e : xe = 1/2} to C and go to step 2; or terminate if no such cycle exists.
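The per-edge rounding rule in step 3 and the integrality check behind steps 4-5 are mechanical; a minimal Python sketch (representing a belief as a `(n[0], n[1])` pair is our assumption, not the authors' implementation):

```python
def round_edge(n_T, n_Tm1):
    """Round an edge variable from BP beliefs, following step 3 of CP-BP.

    n_T and n_Tm1 are (belief[0], belief[1]) pairs at iterations T and T-1;
    returns 1 or 0 only when both iterations agree, and 0.5 otherwise.
    """
    if n_T[1] > n_T[0] and n_Tm1[1] > n_Tm1[0]:
        return 1
    if n_T[1] < n_T[0] and n_Tm1[1] < n_Tm1[0]:
        return 0
    return 0.5


def is_half_integral(x):
    """Check the termination condition of step 4: x must lie in {0, 1/2, 1}^|E|."""
    return all(v in (0, 0.5, 1) for v in x)
```

If `is_half_integral` fails, CP-BP stops, which (as explained next) signals that BP failed to solve the current MWM-LP.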
In the above procedure, BP can be replaced by an LP solver to directly obtain x in step 4; this results in a traditional cutting-plane LP (CP-LP) method for the MWM problem [20]. The primary reason we design CP-BP to terminate when x ∉ {0, 1/2, 1}^|E| is that the solution x of C-LP is always half-integral.² Note that x ∉ {0, 1/2, 1}^|E| occurs when BP fails to find the solution to the current MWM-LP.
We compare CP-BP and CP-LP in order to gauge the effectiveness of BP as an LP solver for MWM problems. We conducted experiments on two types of synthetically generated problems: 1) sparse graph instances; and 2) triangulation instances. The sparse graph instances were generated by forming a complete graph on |V| = {50, 100} nodes and independently eliminating edges with probability p = {0.5, 0.9}. Integral weights, drawn uniformly in [1, 2^20], are assigned to the remaining edges. The triangulation instances were generated by randomly placing |V| = {100, 200} points in the 2^20 × 2^20 square and computing a Delaunay triangulation on this set of points. Edge weights were set to the rounded Euclidean distance between two points. A set of 100 instances was generated for each setting of |V|, and CP-BP was run for T = 100 iterations. The results are summarized in Table 1 and show that: 1) CP-BP is almost as good as CP-LP for solving the MWM problem; and 2) our graphical transformation allows BP to solve significantly more MWM problems than are solvable by BP run on the 'bare' LP without odd-sized cycles.
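The sparse-graph instance generation can be sketched as follows (we assume the integral weight range is [1, 2^20]; the Delaunay-triangulation instances are omitted, as they require a computational-geometry routine):

```python
import random


def sparse_graph_instance(n, p, seed=0):
    """Sample a sparse MWM instance: start from the complete graph on n
    nodes, delete each edge independently with probability p, and assign
    each surviving edge an integral weight drawn uniformly from [1, 2**20]
    (assumed range)."""
    rng = random.Random(seed)
    edges = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() >= p:  # keep the edge with probability 1 - p
                edges[(i, j)] = rng.randint(1, 2 ** 20)
    return edges
```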
50% sparse graphs:
|V| / |E|    # CP-BP   # Tight LPs   # CP-LP
50 / 490     94%       65%           98%
100 / 1963   92%       48%           95%

90% sparse graphs:
|V| / |E|    # CP-BP   # Tight LPs   # CP-LP
50 / 121     90%       59%           91%
100 / 476    63%       50%           63%

Triangulation, |V| = 100, |E| = 285:
Algorithm   # Correct / # Converged   Time (sec)
CP-BP       33 / 36                   0.2 [0.0, 0.4]
CP-LP       34 / 100                  0.1 [0.0, 0.3]

Triangulation, |V| = 200, |E| = 583:
Algorithm   # Correct / # Converged   Time (sec)
CP-BP       11 / 12                   0.9 [0.2, 2.5]
CP-LP       15 / 100                  0.8 [0.3, 1.6]

Table 1: Evaluation of CP-BP and CP-LP on random MWM instances. Columns # CP-BP and # CP-LP indicate the percentage of instances in which the cutting-plane methods found a MWM. The column # Tight LPs indicates the percentage for which the initial MWM-LP is tight (i.e., C = ∅). # Correct and # Converged indicate the number of correct matchings and the number of instances in which CP-BP converged upon termination but we failed to find a non-intersecting odd-sized cycle. The Time column indicates the mean [min, max] time.

² A proof of 1/2-integrality, which we did not find in the literature, is presented in our technical report [21].

References
[1] J. Yedidia, W. Freeman, and Y. Weiss, "Constructing free-energy approximations and generalized belief propagation algorithms," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2282–2312, 2005.
[2] T. J. Richardson and R. L. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
[3] M. Mezard and A. Montanari, Information, Physics, and Computation, ser. Oxford Graduate Texts. Oxford: Oxford Univ. Press, 2009.
[4] M. J. Wainwright and M. I. Jordan, "Graphical models, exponential families, and variational inference," Foundations and Trends in Machine Learning, vol. 1, no. 1, pp. 1–305, 2008.
[5] J. Gonzalez, Y. Low, and C. Guestrin, "Residual splash for optimally parallelizing belief propagation," in International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[6] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M.
Hellerstein, "GraphLab: A new parallel framework for machine learning," in Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[7] M. Bayati, D. Shah, and M. Sharma, "Max-product for maximum weight matching: Convergence, correctness, and LP duality," IEEE Transactions on Information Theory, vol. 54, no. 3, pp. 1241–1251, 2008.
[8] S. Sanghavi, D. Malioutov, and A. Willsky, "Linear programming analysis of loopy belief propagation for weighted matching," in Neural Information Processing Systems (NIPS), 2007.
[9] B. Huang and T. Jebara, "Loopy belief propagation for bipartite maximum weight b-matching," in Artificial Intelligence and Statistics (AISTATS), 2007.
[10] M. Bayati, C. Borgs, J. Chayes, and R. Zecchina, "Belief propagation for weighted b-matchings on arbitrary graphs and its relation to linear programs with integer solutions," SIAM Journal on Discrete Mathematics, vol. 25, pp. 989–1011, 2011.
[11] S. Sanghavi, D. Shah, and A. Willsky, "Message-passing for max-weight independent set," in Neural Information Processing Systems (NIPS), 2007.
[12] D. Gamarnik, D. Shah, and Y. Wei, "Belief propagation for min-cost network flow: convergence & correctness," in SODA, pp. 279–292, 2010.
[13] J. Edmonds, "Paths, trees, and flowers," Canadian Journal of Mathematics, vol. 17, pp. 449–467, 1965.
[14] G. Dantzig, R. Fulkerson, and S. Johnson, "Solution of a large-scale traveling-salesman problem," Operations Research, vol. 2, no. 4, pp. 393–410, 1954.
[15] K. Chandrasekaran, L. A. Vegh, and S. Vempala, "The cutting plane method is polynomial for perfect matchings," in Foundations of Computer Science (FOCS), 2012.
[16] R. G. Gallager, Low Density Parity Check Codes. MIT Press, Cambridge, MA, 1963.
[17] Y. Weiss, "Belief propagation and revision in networks with loops," MIT AI Laboratory, Technical Report 1616, 1997.
[18] B. J. Frey and R. Koetter, "Exact inference using the attenuated max-product algorithm," in Advanced Mean Field Methods: Theory and Practice, ed.
Manfred Opper and David Saad, MIT Press, 2000.
[19] Y. Weiss and W. T. Freeman, "On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 736–744, 2001.
[20] M. Grotschel and O. Holland, "Solving matching problems with linear programming," Mathematical Programming, vol. 33, no. 3, pp. 243–259, 1985.
[21] J. Shin, A. E. Gelfand, and M. Chertkov, "A graphical transformation for belief propagation: Maximum weight matchings and odd-sized cycles," arXiv preprint arXiv:1306.1167, 2013.
Factorized Asymptotic Bayesian Inference for Latent Feature Models

Kohei Hayashi∗† (∗National Institute of Informatics; †JST, ERATO, Kawarabayashi Large Graph Project) kohei-h@nii.ac.jp
Ryohei Fujimaki (NEC Laboratories America) rfujimaki@nec-labs.com

Abstract
This paper extends factorized asymptotic Bayesian (FAB) inference to latent feature models (LFMs). FAB inference has not been applicable to models, including LFMs, that lack a specific condition on the Hessian matrix of the complete log-likelihood, which is required to derive a "factorized information criterion" (FIC). Our asymptotic analysis of the Hessian matrix of LFMs shows that the FIC of LFMs has the same form as those of mixture models. FAB/LFMs have several desirable properties (e.g., automatic hidden state selection and parameter identifiability) and empirically perform better than state-of-the-art Indian Buffet processes in terms of model selection, prediction, and computational efficiency.

1 Introduction
Factorized asymptotic Bayesian (FAB) inference is a recently developed Bayesian approximate inference method for model selection of latent variable models [5, 6]. FAB inference maximizes a computationally tractable lower bound of a "factorized information criterion" (FIC), which converges to the marginal log-likelihood in the large-sample limit. For mixture models (MMs) and hidden Markov models, previous work has shown that FAB inference achieves model selection accuracy as good as or better than state-of-the-art non-parametric Bayesian (NPB) methods and variational Bayesian (VB) methods, with less computational cost. One of the interesting characteristics of FAB inference is that it estimates both models (e.g., the number of mixture components for MMs) and parameter values without priors (i.e., it asymptotically ignores priors), and it does not have a hand-tunable hyper-parameter.
With respect to the trade-off between controllability and automation, FAB inference places more importance on automation. Although FAB inference is a promising model selection method, as yet it has only been applicable to models satisfying a specific condition: the Hessian matrix of the complete log-likelihood (i.e., of the log-likelihood over both observed and latent variables) must be block diagonal, with only a part of the observed samples contributing to each individual sub-block. Such models include basic latent variable models such as MMs [6]. The application of FAB inference to more advanced models that do not satisfy this condition remains to be accomplished. This paper extends the FAB framework to latent feature models (LFMs) [9, 17]. Model selection for LFMs (i.e., determination of the dimensionality of latent features) has been addressed by NPB and VB methods [10, 3]. Although these have shown promising performance in such applications as link prediction [16], their high computational costs restrict their application to large-scale data. Our asymptotic analysis of the Hessian matrix of the log-likelihood shows that FICs for LFMs have the same form as those for MMs, despite the fact that LFMs do not satisfy the condition explained above (see Lemma 1). Consequently, like FAB/MMs, FAB/LFMs offer several desirable properties, such as FIC convergence to the marginal log-likelihood, automatic hidden state selection, and a monotonic increase in the lower FIC bound through iterative optimization. Further, we conduct two analyses in Section 3: 1) we relate FAB E-steps to a convex concave procedure (CCCP) [29]; inspired by this analysis, we propose a shrinkage acceleration method which drastically reduces computational cost in practice; and 2) we show that FAB/LFMs have parameter identifiability. This analysis offers a natural guide to the merging post-processing of latent features.
Rigorous proofs and the assumptions behind the main results are given in the supplementary materials.
Notation In this paper, we denote the (i, j)-th element, the i-th row vector, and the j-th column vector of A by aij, ai, and a·j, respectively.
1.1 Related Work
FIC for MMs Suppose we have N × D observed data X and N × K latent variables Z. FIC considers the following alternative representation of the marginal log-likelihood:
log p(X|M) = max_q { ∑_Z q(Z) log [ p(X, Z|M) / q(Z) ] },  p(X, Z|M) = ∫ p(X, Z|P) p(P|M) dP,  (1)
where q(Z) is a variational distribution on Z, and M and P are a model and its parameter, respectively. In the case of MMs, log p(X, Z|P) can be factorized into log p(Z) and log p(X|Z) = ∑_k log pk(X|z·k), where pk is the k-th observation distribution (we omit parameters here for notational simplicity). We can then approximate p(X, Z|M) by individually applying Laplace's method [28] to log p(Z) and log pk(X|z·k):
p(X, Z|M) ≈ p(X, Z|P̂) · (2π)^{DZ/2} / (N^{DZ/2} det|FZ|^{1/2}) · ∏_{k=1}^{K} (2π)^{Dk/2} / ((∑_n znk)^{Dk/2} det|Fk|^{1/2}),  (2)
where P̂ is the maximum likelihood estimator (MLE) of p(X, Z|P).¹ DZ and Dk are the parameter dimensionalities of p(Z) and pk(X|z·k), respectively, and FZ and Fk are −∇∇log p(Z)|_P̂ / N and −∇∇log pk(X|z·k)|_P̂ / (∑_n znk), respectively. Under conditions allowing log det|FZ| and log det|Fk| to be asymptotically ignored, substituting Eq. (2) into (1) gives the FIC for MMs as follows:
FICMM ≡ max_q Eq[ log p(X, Z|P̂) − (DZ/2) log N − ∑_k (Dk/2) log ∑_n znk ] + H(q),  (3)
where H(q) is the entropy of q(Z). The most important term in FICMM (3) is log(∑_n znk), which offers theoretically desirable properties for FAB inference, such as automatic shrinkage of irrelevant latent variables and parameter identifiability [6]. Direct optimization of FICMM is difficult because: (i) evaluation of Eq[log ∑_n znk] is computationally infeasible, and (ii) the MLE is not available in principle. Instead, FAB optimizes a tractable lower bound of the FIC [6].
For (i), since −log ∑_n znk is a convex function, its linear approximation at Nπ̃k > 0 yields the lower bound:
−∑_k (Dk/2) Eq[ log ∑_n znk ] ≥ −∑_k (Dk/2) ( log Nπ̃k + (∑_n Eq[znk]/N − π̃k) / π̃k ),  (4)
where 0 < π̃k ≤ 1 is a linearization parameter. For (ii), since, by the definition of the MLE, the inequality log p(X, Z|P̂) ≥ log p(X, Z|P) holds for any P, we optimize P along with q. Alternating maximization of the lower bound with respect to q, P, and π̃ guarantees a monotonic increase in the FIC lower bound [6].
Infinite LFMs and Indian Buffet Process The IBP [10, 11] is a nonparametric prior over infinite LFMs. It enables us to express an infinite number of latent features, making it possible to adjust model complexity on the basis of observations. Infinite IBPs are still actively studied in terms of both applications (e.g., link prediction [16]) and model representations (e.g., latent attribute models [19]). Since naive Gibbs sampling requires unrealistic computational cost, acceleration algorithms such as accelerated sampling [2] and VB [3] have been developed. Reed and Ghahramani [22] have recently proposed an efficient MAP estimation framework for an IBP model via submodular optimization, referred to as maximum-expectation IBP (MEIBP). Similarly to FIC, "MAD-Bayes" [1] considers asymptotics of MMs and LFMs, but it is based on the limiting case in which the noise variance goes to zero, which yields a prior-derived regularization term.
¹ While p(X|P) is a non-regular model, p(X, Z|P) is a regular model (i.e., the Fisher information is non-singular at the ML estimator), and Fk and FZ are invertible at P̂.
2 FIC and FAB Algorithm for LFMs
LFMs assume underlying relationships for X with binary features Z ∈ {0, 1}^{N×K} and linear bases W ∈ R^{D×K} such that, for n = 1, . . . , N,
xn = W zn + b + εn,  (5)
where εn ∼ N(0, Λ−1) is Gaussian noise with diagonal precision matrix Λ ≡ diag(λ), and b ∈ R^D is a bias term.
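Sampling from the generative model (5) is straightforward; a minimal sketch, restricted to a scalar noise standard deviation (i.e., equal λd, a special case of the diagonal precision Λ), with a helper name of our choosing:

```python
import random


def sample_lfm(N, D, K, pi, W, b, noise_std, seed=0):
    """Draw (X, Z) from the LFM generative model x_n = W z_n + b + eps_n,
    with z_nk ~ Bernoulli(pi_k) and isotropic Gaussian noise.

    W is a D x K list-of-lists of basis columns; pi is a length-K list.
    """
    rng = random.Random(seed)
    Z, X = [], []
    for _ in range(N):
        z = [1 if rng.random() < pi[k] else 0 for k in range(K)]
        x = [sum(W[d][k] * z[k] for k in range(K)) + b[d]
             + rng.gauss(0.0, noise_std) for d in range(D)]
        Z.append(z)
        X.append(x)
    return X, Z
```

With `noise_std = 0`, each observation is exactly the sum of its active basis columns plus the bias, which makes the role of the binary features easy to see.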
For later convenience, we define the centered observation X̄ = X − 1b⊤. Z follows a Bernoulli prior distribution znk ∼ Bern(πk) with mean parameter πk. The parameter set P is defined as P ≡ {W, b, λ, π}. Also, we denote the parameters with respect to the d-th dimension as θd = (wd, bd, λd). As in other FAB frameworks, the log-priors of P are assumed to be constant with respect to N, i.e., lim_{N→∞} log p(P|M)/N = 0.
In the case of MMs, we implicitly use the facts that: A1) the parameters of pk(X|z·k) are mutually independent for k = 1, . . . , K (in other words, ∇∇log p(X|Z) is block diagonal with K blocks), and A2) the number of observations contributing to ∇∇log pk(X|z·k) is ∑_n znk. These conditions naturally yield the FAB regularization term log ∑_n znk via the Laplace approximation of MMs (2). However, since θd is shared by all latent features in LFMs, A1 and A2 are not satisfied. In the next section, we address this issue and derive the FIC for LFMs.
2.1 FICs for LFMs
The following lemma plays the most important role in our derivation of FICs for LFMs.
Lemma 1. Let F(d) be the Hessian matrix of the negated log-likelihood with respect to θd, i.e., −∇∇log p(x·d|Z, θd). Under some mild assumptions (see the supplementary materials), the following equality holds:
log det|F(d)| = ∑_k log( ∑_n znk / N ) + Op(1).  (6)
An important fact is that the log ∑_n znk term naturally appears in log det|F(d)| without A1 and A2. Lemma 1 induces the following theorem, which states an asymptotic approximation of the marginal complete log-likelihood, log p(X, Z|M).
Theorem 2. If Lemma 1 holds and the joint marginal log-likelihood is bounded for sufficiently large N, it can be asymptotically approximated as:
log p(X, Z|M) = J(Z, P̂) + Op(1),  (7)
J(Z, P) ≡ log p(X, Z|P) − ((|P| − DK)/2) log N − (D/2) ∑_k log ∑_n znk.  (8)
It is worth noting that, if we evaluate the model complexity of θd (log det|F(d)|) by N, i.e., if we apply Laplace's method without Lemma 1, Eq.
(7) falls into the Bayesian Information Criterion [23]; this tells us that the model complexity relevant to θd increases not as O(K log N) but as O(∑_k log ∑_n znk). By substituting approximation (7) into Eq. (1), we obtain the FIC of the LFM as follows:
FICLFM ≡ max_q Eq[J(Z, P̂)] + H(q).  (9)
It is interesting that FICLFM (9) and FICMM (3) have exactly the same representation, despite the fact that LFMs do not satisfy A1 and A2. This indicates the wide applicability of FICs and suggests that the FIC representation of approximated marginal log-likelihoods is feasible not only for MMs but also for more general (discrete) latent variable models. Since the asymptotic constant terms of Eq. (7) are not affected by the expectation over q(Z), the difference between the FIC and the marginal log-likelihood is asymptotically constant; in other words, the distance between log p(X|M)/N and FICLFM/N is asymptotically small.
Corollary 3. For N → ∞, log p(X|M) = FICLFM + Op(1) holds.
2.2 FAB/LFM Algorithm
As in the case of MMs (3), FICLFM is not available in practice, and we employ the lower-bounding techniques (i) and (ii). For LFMs, we further introduce a mean-field approximation on Z, i.e., we restrict the class of q(zn) to a factorized form: q(zn) = ∏_k q̃(znk|µnk), where q̃(z|µ) is a Bernoulli distribution with mean parameter µ = Eq[z]. Although this approximation makes the FIC lower bound looser (the equality in (1) no longer holds), the variational distribution then has a closed-form solution. Note that this approximation does not cause significant performance degradation in VB contexts [20, 25]; the VB extension of the IBP [3] also uses this factorized assumption. By applying (i), (ii), and the mean-field approximation, we obtain the lower bound:
L(q, P, π̃) = Eq[ log p(X|Z, Θ) + log p(Z|π) + RHS of (4) ] − ((2D + K)/2) log N + ∑_n H(q(zn)).  (10)
An FAB algorithm alternatingly maximizes L(q, P, π̃) with respect to {{µn}, P, π̃}.
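The data-independent terms of the bound (10) are easy to compute directly; a hypothetical helper (names ours) that evaluates them at the linearization point π̃k = ∑_n µnk/N, where the RHS of (4) collapses to −(D/2) log(∑_n µnk) per feature:

```python
import math


def fab_penalty_terms(mu, N, D):
    """Complexity and entropy terms of the FIC lower bound (10), with the
    linearization point set to pi_tilde_k = sum_n mu_nk / N; mu is an
    N x K list-of-lists of mean activations. The data term is omitted."""
    K = len(mu[0])
    col_sums = [sum(row[k] for row in mu) for k in range(K)]
    # Per-feature penalty: -(D/2) * log(sum_n mu_nk), from the RHS of (4).
    feature_penalty = -0.5 * D * sum(math.log(s) for s in col_sums)

    def bern_entropy(p):  # Bernoulli entropy, with the 0 log 0 = 0 convention
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))

    entropy = sum(bern_entropy(p) for row in mu for p in row)
    global_penalty = -0.5 * (2 * D + K) * math.log(N)
    return feature_penalty, entropy, global_penalty
```

The feature penalty is what drives shrinkage: a feature whose activations ∑_n µnk approach zero makes the penalty arbitrarily negative, so removing the feature increases the bound.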
Notice that the algorithm described below monotonically increases L in every single step, and therefore we are guaranteed to obtain a local maximum. This monotonic increase in L gives us a natural stopping condition with tolerance δ: if (Lt − Lt−1)/N < δ, then stop the algorithm, where Lt denotes the value of L at the t-th iteration.
FAB E-step In the FAB E-step, we update µn in a way similar to variational mean-field inference in a restricted Boltzmann machine [20]. Taking the gradient of L with respect to µn and setting it to zero yields the following fixed-point equations:
µnk = g( cnk + η(πk) − D/(2Nπ̃k) ),  (11)
where g(x) = (1 + exp(−x))−1 is the sigmoid function, cnk = w·k⊤ Λ( x̄n − ∑_{l≠k} µnl w·l − (1/2) w·k ), and η(πk) = log( πk / (1 − πk) ) is the natural parameter of the prior of z·k. Update equation (11) is a form of coordinate descent, and every update is guaranteed to increase the lower bound [25]. After several iterations of Eq. (11) over k = 1, . . . , K, we obtain a local maximum of Eq[zn] = µn and Eq[zn zn⊤] = µn µn⊤ + diag(µn − µn²).
One unique term in Eq. (11) is −D/(2Nπ̃k), which originates from the log ∑_n znk term in Eq. (8). In update (11), the smaller π̃k (or, equivalently, πk, by Eq. (12)) is, the smaller µnk becomes, and a smaller µnk is in turn likely to induce a smaller π̃k (see Eq. (12)). This results in the shrinking of irrelevant features, and therefore FAB/LFMs are capable of automatically selecting the feature dimensionality K. This regularization effect arises independently of the prior (i.e., with asymptotic ignorance of the prior) and is known as "model-induced regularization," which is caused by Bayesian marginalization in singular models [18]. Notice that Eq. (11) offers another shrinking effect, by means of η(πk), which is a prior-based regularization. We empirically show that the latter shrinking effect is too weak to mitigate over-fitting, and that the FAB algorithm achieves faster convergence, with respect to N, to the true model (see Section 4).
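A minimal, unvectorized sketch of the coordinate updates (11), with πk tied to the mean activations as in Eq. (12); function and argument names are ours, and a real implementation would vectorize:

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def fab_e_step(X, mu, W, b, lam, D, n_sweeps=3):
    """Sweep the fixed-point updates mu_nk = g(c_nk + eta(pi_k) - D/(2 N pi_k)).

    X is N x D data, mu is N x K mean activations (updated in place),
    W is D x K bases, b the bias, lam the per-dimension noise precisions.
    """
    N, K = len(mu), len(mu[0])
    for _ in range(n_sweeps):
        for k in range(K):
            pi_k = max(sum(mu[n][k] for n in range(N)) / N, 1e-12)
            eta = math.log(pi_k / max(1.0 - pi_k, 1e-12))
            for n in range(N):
                # c_nk = w_k^T Lambda (xbar_n - sum_{l != k} mu_nl w_l - w_k / 2)
                c = 0.0
                for d in range(D):
                    resid = X[n][d] - b[d]
                    for l in range(K):
                        if l != k:
                            resid -= mu[n][l] * W[d][l]
                    c += W[d][k] * lam[d] * (resid - 0.5 * W[d][k])
                mu[n][k] = sigmoid(c + eta - D / (2.0 * N * pi_k))
    return mu
```

On a toy problem with one basis column, an observation matching the column is pushed toward µ ≈ 1 while a zero observation is pushed toward µ ≈ 0, illustrating the coordinate-descent behavior described above.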
Note that if we use only the effect of η(πk) (i.e., set D/(2Nπ̃k) = 0), then update equation (11) is equivalent to that of variational EM.
FAB M-step The FAB M-step is equivalent to the M-step in the EM algorithm for LFMs; the solutions for W, Λ, and b are given in closed form and are exactly the same as those of PPCA [24] (see the supplementary materials). For π̃ and π, we obtain the following solutions:
πk = π̃k = ∑_n µnk / N.  (12)
Shrinkage step As we have explained, in principle the FAB regularization term D/(2Nπ̃k) in Eq. (11) automatically eliminates irrelevant latent features. While the elimination does not change the value of Eq[log p(X|Z, P)], removing such features from the model increases L, due to the decrease in model complexity. We eliminate shrunken features after the FAB E-step, noting that LFMs approximate X by ∑_k µ·k w·k⊤ + 1b⊤. When ∑_n µnk/N = 0, the k-th feature does not affect the approximation (∑_l z·l w·l⊤ = ∑_{l≠k} z·l w·l⊤), and we simply remove it. When ∑_n µnk/N = 1, w·k can be seen as a bias (∑_l z·l w·l⊤ = ∑_{l≠k} z·l w·l⊤ + 1w·k⊤), and we update bnew = b + w·k and then remove it.
Algorithm 1 The FAB algorithm for LFMs.
1: Initialize {µn}
2: while not converged do
3:   Update P
4:   accelerateShrinkage({µn})
5:   for k = 1, . . . , K do
6:     Update {µnk} by Eq. (11)
7:   end for
8:   Shrink unnecessary latent features
9:   if (Lt − Lt−1)/N < δ then
10:     {{µ′n}, W′} ← merge({µn}, W)
11:     if dim(W′) = dim(W) then converge
12:     else {µn} ← {µ′n}, W ← W′
13:   end if
14: end while
Algorithm 2 accelerateShrinkage
input {µn}
1: for k = 1, . . . , K do
2:   ck ← (X̄ − ∑_{l≠k} µ·l w·l⊤ − (1/2) 1w·k⊤) Λ w·k
3:   for t = 1, . . . , Tshrink do
4:     Update {µnk} by Eq. (11)
5:     Update π and π̃ by Eq.
(12)
6:   end for
7: end for
Figure 1: Time evolution of K (top) and L/N (bottom) in FAB with and without shrinkage acceleration (D = 50 and K = 5). Different lines represent different random starts.
This model shrinkage also works to avoid ill-conditioning of the FIC; if there are latent features that are never activated (∑_n µnk/N = 0) or always activated (∑_n µnk/N = 1), the FIC will no longer be an approximation of the marginal log-likelihood. Algorithm 1 summarizes the whole procedure of FAB/LFMs. Note that details regarding the sub-routines accelerateShrinkage() and merge() are explained in Section 3.
3 Analysis and Refinements
CCCP Interpretation and Shrinkage Acceleration Here we interpret the alternating updates of µ and π̃ as a convex concave procedure (CCCP) [29] and consider eliminating irrelevant features in early steps to reduce computational cost. By substituting the optimality condition π̃k = ∑_n µnk/N (12) into the lower bound, we obtain
L(q) = −(D/2) ∑_k log ∑_n µnk + ( ∑_n (cn + η)⊤ µn + H(q) ) + const.  (13)
The first and second terms are convex and concave with respect to µnk, respectively. The CCCP solves Eq. (13) by iteratively linearizing the first term around µ^{t−1}_nk. Setting the derivative of the "linearized" objective to zero, we obtain the CCCP update as follows:
µ^t_nk = g( cnk + η(πk) − D/(2 ∑_n µ^{t−1}_nk) ).  (14)
Taking Nπ̃k = ∑_n µ^{t−1}_nk into account, Eq. (14) is equivalent to Eq. (11). This new view of the FAB optimization gives us an important insight for accelerating the algorithm. By considering the FAB optimization as alternating maximization over P and µ (π̃ is removed), it is natural to take multiple CCCP steps (14). Such multiple CCCP steps in each FAB EM step are expected to accelerate the shrinkage effect discussed in the previous section, because the
Such multiple CCCP steps in each FABEM step is expected to accelerate the shrinkage effect discussed in the previous section because the 5 regularization in terms of −D/2(∑ n µnk) causes the effect. Eventually, it is expected to reduce the total computational cost since we may be able to remove irrelevant latent features in earlier iterations. We summarize the whole routine of accelerateShrinkage() based on the CCCP in Algorithm 2. Note that, in practice, we update π along with ˜π for further acceleration of the shrinkage. We empirically confirmed that Algorithm 2 significantly reduced computational costs (see Section 4 and Figure 1.) Further discussion of this this update (an exponentiated gradient descent interpretation) can be found in the supplementary materials. Identifiability and Merge Post-processing Parameter identifiability is an important theoretical aspect in learning algorithms for latent variable models. It has been known [26, 27] that generalization error significantly worsens if the mapping between parameters and functions is not one-toone (i.e., is non-identifiable.) Let us consider the LFM case of K = 2. If w·1 = w·2, then any combination of µn1 and µn2 = 2µ −µn1 will have the same representation: Eq[Ex[¯xnd|θd]] = wd1(µn1 + µn2) = 2wd1µ, and therefore the MLE is non-identifiable. The following theorem shows that FAB inference resolves such non-identifiability in LFMs. Theorem 4. Let P∗and q∗be stationary points of L such that 0 < ∑ n µ∗ nk/N < 1 for k = 1, . . . , K and |¯x⊤ n Λ∗w∗ ·k| < ∞for k = 1, . . . , K, n = 1, . . . , N. Then, w∗ ·k = w∗ ·l is a sufficient condition of ∑ n µ∗ nk/N = ∑ n µ∗ nl/N. For the ill-conditioned situation described above, the FAB algorithm has a unique solution that balances the sizes of latent features. In large sample limit, both FAB and EM reach the same ML value. The point is, for LFMs, ML solutions are not unique and EM is likely to choose large-Ksolutions because of this non-identifiability issue. 
On the other hand, FAB prefers small-K ML solutions on the basis of the regularizer. In addition, Theorem 4 gives us an important insight about post-processing of latent features. If w∗·k = w∗·l, then Eq[log p(X, Z|M∗)] is unchanged regardless of µnk and µnl, while model complexity is smaller if we have only one latent feature. Therefore, if w∗·k = w∗·l, merging these two latent features increases L, i.e., w∗·k ← 2w∗·k and µ∗·k ← (µ∗·k + µ∗·l)/2. In practice, we search for such overlapping features on the basis of the Euclidean distance matrix of the columns of W∗, and merge them if the lower bound increases after the post-processing. We empirically found that a few merging operations were likely to occur in real-world data sets. The merge() algorithm is summarized in the supplementary materials.
4 Experiments
We evaluated FAB/LFMs in terms of computational speed, model selection accuracy, and prediction performance with respect to missing values. We compared FAB inference and the variational EM algorithm (see Section 2.2) with an IBP that utilizes fast Gibbs sampling [2], a VB [3] having finite K, and MEIBP [22]. IBP and MEIBP select the model which maximizes posterior probability. For VB, we performed inference with K = 2, . . . , D and selected the model having the highest free energy. EM selects K using the shrinkage effect of η, as explained in Section 2.2. All the methods were implemented in Matlab (for IBP, VB, and MEIBP, we used the original codes released by the authors), and computational performance was compared fairly. For FAB and EM, we set δ = 10−4 (this was not sensitive) and Tshrink = 100 (FAB only); {µn} were initialized uniformly at random in [0, 1]; the initial number of latent features was set to min(N, D), as in MEIBP.
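As an illustration of the merge() sub-routine discussed in Section 3, a minimal sketch that merges exactly duplicated basis columns; the tolerance-based duplicate test and the function name are our assumptions, and the actual routine additionally checks that the lower bound increases:

```python
def merge_duplicate_features(mu, W, tol=1e-6):
    """Merge pairs of (near-)identical basis columns: w_k <- 2 w_k,
    mu_k <- (mu_k + mu_l) / 2, and drop feature l (cf. Theorem 4).

    mu is N x K mean activations, W is D x K bases (lists of lists);
    returns the reduced (mu, W).
    """
    D, N = len(W), len(mu)
    alive = list(range(len(W[0])))
    k = 0
    while k < len(alive):
        l = k + 1
        while l < len(alive):
            a, c = alive[k], alive[l]
            if all(abs(W[d][a] - W[d][c]) <= tol for d in range(D)):
                for d in range(D):
                    W[d][a] *= 2.0                          # w_k <- 2 w_k
                for n in range(N):
                    mu[n][a] = 0.5 * (mu[n][a] + mu[n][c])  # average activations
                alive.pop(l)
            else:
                l += 1
        k += 1
    return ([[mu[n][j] for j in alive] for n in range(N)],
            [[W[d][j] for j in alive] for d in range(D)])
```

Note that the expected reconstruction is preserved by the merge: for duplicated columns, µk·w + µl·w = ((µk + µl)/2)·(2w).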
Since the software for IBP, VB, and MEIBP does not learn the standard deviation of the noise (1/√λ in FAB), we fixed it to 1 for the artificial simulations, which is the true standard deviation of the toy data, and to 0.75 for real data, following the original papers [2, 22]. We set other parameters to the software default values. For example, α, a hyperparameter of IBP, was set to 3, which might cause overestimation of K. As common preprocessing, we normalized X (i.e., the sample variance is 1) in all experiments.
Artificial Simulations We first conducted artificial simulations with fully observed synthetic data generated by model (5) with fixed λk = 1 and πk = 0.5. Figure 1 shows the results of a comparison between FAB with and without shrinkage acceleration.² Clearly, our shrinkage acceleration significantly reduced computational cost by eliminating irrelevant features in the early steps, while both algorithms achieved roughly the same objective value L and model selection performance at convergence. Figure 2 shows the results of a comparison between FAB (with acceleration) and the other methods. While MEIBP was much faster than FAB in terms of elapsed computational time, FAB achieved the most accurate estimation of K, especially for large N.
Figure 2: Comparative evaluation of the artificial simulations in terms of N vs. elapsed time (left) and selected K (right). Each error bar shows the standard deviation over 10 trials (D = 30).
Figure 3: Learned Ws in block data.
² We also investigated the effect of merge post-processing, but none was observed in this small example.
Block Data We next demonstrate the performance of FAB/LFMs in terms of learning features. We used the block data, a synthetic data set originally used in [10].
Observations were generated by combining four distinct patterns (i.e., K = 4; see Figure 3) with Gaussian noise on 6 × 6 pixels (i.e., D = 36). We report the results for N = 2000 samples with noise standard deviation 0.3 and no missing values (more results can be found in the supplementary materials). Figure 3 compares the estimated features of each method in the early learning phase (at the 5th iteration) and after convergence (the result displayed is the example with the median log-likelihood over 10 trials). Note that we omitted MEIBP since we observed that its parameter setting was very sensitive for this data. While EM and IBP retain irrelevant features, FAB successfully extracts the true patterns without irrelevant features. Real World Data We finally evaluated predictive performance using the real data sets described in Table 1. We randomly removed 30% of the data with 5 different random seeds and treated the removed entries as missing values, and we measured the predictive and training log-likelihood (PLL and TLL) for them. Table 1 summarizes the results with respect to elapsed computational time (hours), selected K, PLL, and TLL. Note that when the computational time for a method exceeded 50 hours, we stopped the program after that iteration.3 Since MEIBP is a method for non-negative data, we omitted its results for data sets containing negative values. Also, since MEIBP did not finish the first iteration within 50 hours for the yaleB and USPS data, we set its initial K to 100 there. FAB consistently achieved good predictive performance (higher PLL) with low computational cost. Although MEIBP ran faster than FAB when the initial value of K was set appropriately (i.e., for yaleB and USPS), the PLLs of FAB were much better than those of MEIBP. In terms of K, FAB typically achieved a more compact and better model representation than the others (smaller K). Another important observation is that FAB has much smaller differences between TLL and PLL than the others.
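For concreteness, data in the style of the block experiment can be generated as follows. This is a sketch under our own choice of four block patterns; the exact patterns of [10] may differ, and the function name is ours.

```python
import numpy as np

def make_block_data(n=2000, noise_sd=0.3, seed=0):
    """Toy 'block'-style data: each sample superimposes a random subset of
    K=4 patterns on a 6x6 image (D=36) and adds Gaussian noise.
    The four quadrant patterns below are illustrative stand-ins."""
    rng = np.random.default_rng(seed)
    patterns = np.zeros((4, 6, 6))
    patterns[0, 0:3, 0:3] = 1   # top-left block
    patterns[1, 0:3, 3:6] = 1   # top-right block
    patterns[2, 3:6, 0:3] = 1   # bottom-left block
    patterns[3, 3:6, 3:6] = 1   # bottom-right block
    W = patterns.reshape(4, 36)                       # K x D weight matrix
    Z = (rng.random((n, 4)) < 0.5).astype(float)      # binary activations
    X = Z @ W + noise_sd * rng.normal(size=(n, 36))
    return X, Z.astype(int), W
```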
This suggests that FAB's unique regularization worked well for mitigating over-fitting. For the large-sample data sets (EEG, Piano, USPS), the PLLs of FAB and EM were competitive with one another; this is reasonable since, for large N, both of them ideally achieve the maximum likelihood, while FAB achieved much smaller K (see the identifiability discussion in Section 3). In small-N scenarios, on the other hand, the FIC approximation would not be accurate, and FAB could perform worse than the NPB methods (though we observed such a case only for Libras).

3We omitted VB entirely because of its long computational time.

Table 1: Results on real-world data sets. The best result (e.g., the smallest K in model selection) and those not significantly worse than it are highlighted in boldface. We used a one-sided t-test with 95% confidence. *We exclude the results of MEIBP for yaleB and USPS from the t-test because of the different experimental setting (the initial K was smaller than for the others; see the body text for details).

Data (N × D)    Method   Time (h)  K             PLL             TLL
Sonar [4]       FAB      < 0.01    4.4 ± 1.1     −1.25 ± 0.02    −1.14 ± 0.03
208 × 49        EM       < 0.01    48.8 ± 0.5    −4.04 ± 0.46    −0.08 ± 0.07
                IBP      3.3       69.6 ± 4.8    −4.48 ± 0.15    0.13 ± 0.02
                MEIBP    < 0.01    45.4 ± 1.7    −18.10 ± 1.90   −15.60 ± 1.80
Libras [4]      FAB      < 0.01    19.0 ± 0.7    −0.63 ± 0.03    −0.42 ± 0.03
360 × 90        EM       0.01      75.6 ± 8.6    −0.68 ± 0.11    0.76 ± 0.24
                IBP      4.8       36.4 ± 1.1    −0.18 ± 0.01    0.13 ± 0.01
                MEIBP    0.05      40.8 ± 1.3    −11.30 ± 2.00   −10.70 ± 1.80
Auslan [14]     FAB      0.04      6.0 ± 0.7     −1.34 ± 0.15    −0.92 ± 0.02
16180 × 22      EM       0.2       22 ± 0        −1.79 ± 0.27    −0.78 ± 0.02
                IBP      50.2      73 ± 5        −4.54 ± 0.08    0.08 ± 0.01
                MEIBP    N/A       N/A           N/A             N/A
EEG [12]        FAB      1.6       11.2 ± 1.6    −0.93 ± 0.02    −0.76 ± 0.04
120576 × 32     EM       3.7       32 ± 0        −0.88 ± 0.09    −0.59 ± 0.01
                IBP      53.0      46.4 ± 4.4    −3.16 ± 0.03    −0.26 ± 0.05
                MEIBP    N/A       N/A           N/A             N/A
Piano [21]      FAB      19.4      58.0 ± 3.5    −0.83 ± 0.01    −0.63 ± 0.02
57931 × 161     EM       50.1      158.6 ± 3.4   −0.82 ± 0.02    −0.45 ± 0.01
                IBP      55.8      89.6 ± 4.2    −1.83 ± 0.02    −0.84 ± 0.05
                MEIBP    14.3      48.4 ± 3.2    −7.14 ± 0.52    −6.90 ± 0.50
yaleB [7]       FAB      2.2       77.2 ± 7.9    −0.37 ± 0.02    −0.29 ± 0.03
2414 × 1024     EM       50.9      929 ± 20      −4.60 ± 1.20    0.80 ± 0.27
                IBP      51.7      94.2 ± 7.5    −0.54 ± 0.02    −0.35 ± 0.02
                *MEIBP   7.2       69.8 ± 2.7    −1.18 ± 0.02    −1.12 ± 0.02
USPS [13]       FAB      11.2      110.2 ± 5.1   −0.96 ± 0.01    −0.64 ± 0.02
110000 × 256    EM       45.7      256 ± 0       −1.06 ± 0.01    −0.36 ± 0.01
                IBP      61.6      181.0 ± 4.8   −2.59 ± 0.08    −0.76 ± 0.01
                *MEIBP   1.9       22.0 ± 2.7    −1.35 ± 0.03    −1.31 ± 0.03

5 Summary We have considered here an FAB framework for LFMs that offers fully automated model selection, i.e., selecting the number of latent features. While LFMs do not satisfy the assumptions that naturally induce FIC/FAB on MMs, we have shown that they have the same "degree" of model complexity in the approximated marginal log-likelihood, and we have derived FIC/FAB in a form similar to that for MMs. In addition, our proposed acceleration mechanism for shrinking models drastically reduces total computational time. Experimental comparisons of FAB inference with existing methods, including state-of-the-art IBP methods, have demonstrated the superiority of FAB/LFM.

Acknowledgments The authors would like to thank Finale Doshi-Velez for providing the Piano and EEG data sets. This work was supported by JSPS KAKENHI Grant Number 25880028.

References
[1] T. Broderick, B. Kulis, and M. I. Jordan. MAD-Bayes: MAP-based asymptotic derivations from Bayes. In ICML, 2013.
[2] F. Doshi-Velez and Z. Ghahramani. Accelerated sampling for the Indian buffet process. In ICML, 2009.
[3] F. Doshi-Velez, K. T. Miller, J. Van Gael, and Y. W. Teh. Variational inference for the Indian buffet process. In AISTATS, 2009.
[4] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[5] R. Fujimaki and K. Hayashi. Factorized asymptotic Bayesian hidden Markov model. In ICML, 2012.
[6] R. Fujimaki and S. Morinaga. Factorized asymptotic Bayesian inference for mixture modeling. In AISTATS, 2012.
[7] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman.
From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23:643–660, 2001.
[8] Z. Ghahramani. Factorial learning and the EM algorithm. In NIPS, 1995.
[9] Z. Ghahramani, T. L. Griffiths, and P. Sollich. Bayesian nonparametric latent feature models (with discussion). In 8th Valencia International Meeting on Bayesian Statistics, 2006.
[10] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process, 2005.
[11] T. L. Griffiths and Z. Ghahramani. The Indian buffet process: An introduction and review. JMLR, 12:1185–1224, 2011.
[12] U. Hoffmann, G. Garcia, J. M. Vesin, K. Diserens, and T. Ebrahimi. A boosting approach to p300 detection with application to brain-computer interfaces. In International IEEE EMBS Conference on Neural Engineering, pages 97–100, 2005.
[13] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.
[14] M. W. Kadous. Temporal Classification: Extending the Classification Paradigm to Multivariate Time Series. PhD thesis, School of Computer Science & Engineering, University of New South Wales, 2002.
[15] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[16] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In NIPS, 2009.
[17] K. T. Miller. Bayesian Nonparametric Latent Feature Models. PhD thesis, University of California, Berkeley, 2011.
[18] S. Nakajima, M. Sugiyama, and D. Babacan. On Bayesian PCA: Automatic dimensionality selection and analytic solution. In ICML, 2011.
[19] K. Palla, D. A. Knowles, and Z. Ghahramani. An infinite latent attribute model for network data. In ICML, 2012.
[20] C. Peterson and J. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987.
[21] G. E. Poliner and D. P. W. Ellis. A discriminative model for polyphonic piano transcription. EURASIP Journal on Advances in Signal Processing, 2007(1):154, 2007.
[22] C. Reed and Z. Ghahramani. Scaling the Indian buffet process via submodular maximization. In ICML, 2013.
[23] G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 1978.
[24] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611–622, 1999.
[25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[26] S. Watanabe. Algebraic analysis for nonidentifiable learning machines. Neural Computation, 13(4):899–933, 2001.
[27] S. Watanabe. Algebraic Geometry and Statistical Learning Theory (Cambridge Monographs on Applied and Computational Mathematics). Cambridge University Press, 2009.
[28] R. Wong. Asymptotic Approximation of Integrals (Classics in Applied Mathematics). SIAM, 2001.
[29] A. L. Yuille and A. Rangarajan. The Concave-Convex procedure. Neural Computation, 15(4):915–936, 2003.
[30] R. S. Zemel and G. E. Hinton. Learning population codes by minimizing description length. Neural Computation, 7(3):11–18, 1994.
Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean Separation

Martin Azizyan, Machine Learning Department, Carnegie Mellon University, mazizyan@cs.cmu.edu
Aarti Singh, Machine Learning Department, Carnegie Mellon University, aarti@cs.cmu.edu
Larry Wasserman, Department of Statistics, Carnegie Mellon University, larry@stat.cmu.edu

Abstract

While several papers have investigated computationally and statistically efficient methods for learning Gaussian mixtures, precise minimax bounds for their statistical performance as well as fundamental limits in high-dimensional settings are not well-understood. In this paper, we provide precise information theoretic bounds on the clustering accuracy and sample complexity of learning a mixture of two isotropic Gaussians in high dimensions under small mean separation. If there is a sparse subset of relevant dimensions that determine the mean separation, then the sample complexity only depends on the number of relevant dimensions and mean separation, and can be achieved by a simple computationally efficient procedure. Our results provide the first step of a theoretical basis for recent methods that combine feature selection and clustering.

1 Introduction

Gaussian mixture models provide a simple framework for several machine learning problems including clustering, density estimation and classification. Mixtures are especially appealing in high dimensional problems. Perhaps the most common use of Gaussian mixtures is for clustering. Of course, the statistical (and computational) behavior of these methods can degrade in high dimensions. Inspired by the success of variable selection methods in regression, several authors have considered variable selection for clustering. However, there appear to be no theoretical results justifying the advantage of variable selection in the high-dimensional setting.
To see why some sort of variable selection might be useful, consider clustering n subjects using a vector of d genes for each subject. Typically d is much larger than n, which suggests that statistical clustering methods will perform poorly. However, it may be the case that there are only a small number of relevant genes, in which case we might expect better behavior by focusing on this small set of relevant genes. The purpose of this paper is to provide precise bounds on clustering error with mixtures of Gaussians. We consider both the general case where all features are relevant, and the special case where only a subset of features are relevant. Mathematically, we model an irrelevant feature by requiring the mean of that feature to be the same across clusters, so that the feature does not serve to differentiate the groups. Throughout this paper, we use the probability of misclustering an observation, relative to the optimal clustering if we had known the true distribution, as our loss function. This is akin to using excess risk in classification. This paper makes the following contributions:

• We provide information theoretic bounds on the sample complexity of learning a mixture of two isotropic Gaussians with equal weight in the small mean separation setting that precisely capture the dimension dependence, and match known sample complexity requirements for some existing algorithms. This also debunks the myth that there is a gap between the statistical and computational complexity of learning a mixture of two isotropic Gaussians for small mean separation. Our bounds require non-standard arguments since our loss function does not satisfy the triangle inequality.

• We consider the high-dimensional setting where only a subset of relevant dimensions determine the mean separation between mixture components and show that learning is substantially easier, as the sample complexity only depends on the sparse set of relevant dimensions.
This provides some theoretical basis for feature selection approaches to clustering.

• We show that a simple computationally feasible procedure nearly achieves the information theoretic sample complexity even in high-dimensional sparse mean separation settings.

Related Work. There is a long and continuing history of research on mixtures of Gaussians. A complete review is not feasible but we mention some highlights of the work most related to ours. Perhaps the most popular method for estimating a mixture distribution is maximum likelihood. Unfortunately, maximizing the likelihood is NP-Hard. This has led to a stream of work on alternative methods for estimating mixtures. These new algorithms use pairwise distances, spectral methods or the method of moments. Pairwise methods are developed in Dasgupta (1999), Schulman and Dasgupta (2000) and Arora and Kannan (2001). These methods require the mean separation to increase with dimension: the first requires the separation to be √d, while the latter two improve it to d^{1/4}. To avoid this problem, Vempala and Wang (2004) introduced the idea of using spectral methods for estimating mixtures of spherical Gaussians, which makes the mean separation independent of dimension. The assumption that the components are spherical was removed in Brubaker and Vempala (2008). Their method only requires the components to be separated by a hyperplane and runs in polynomial time, but requires n = Ω(d^4 log d) samples. Other spectral methods include Kannan et al. (2005), Achlioptas and McSherry (2005) and Hsu and Kakade (2013). The latter uses clever spectral decompositions together with the method of moments to derive an effective algorithm. Kalai et al. (2012) use the method of moments to get estimates without requiring separation between the mixture components. A similar approach is given in Belkin and Sinha (2010). Chaudhuri et al. (2009) give a modified k-means algorithm for estimating a mixture of two Gaussians.
For the large mean separation setting µ > 1, Chaudhuri et al. (2009) show that n = Ω̃(d/µ²) samples are needed. They also provide an information theoretic bound on the necessary sample complexity of any algorithm, which matches the sample complexity of their method (up to log factors) in d and µ. If the mean separation is small, µ < 1, they show that n = Ω̃(d/µ⁴) samples are sufficient. Our results for the small mean separation setting give a matching necessary condition. Assuming the separation between the component means is not too sparse, Chaudhuri and Rao (2008) provide an algorithm for learning the mixture that has polynomial computational and sample complexity. Most of these papers are concerned with computational efficiency and do not give precise, statistical minimax upper and lower bounds. None of them deal with the case we are interested in, namely, a high dimensional mixture with sparse mean separation. We should also point out that the results in different papers are not necessarily comparable since different authors use different loss functions. In this paper we use the probability of misclassifying a future observation, relative to how the correct distribution clusters the observation, as our loss function. This should not be confused with the probability of attributing a new observation to the wrong component of the mixture. The latter loss does not typically tend to zero as the sample size increases. Our loss is similar to the excess risk used in classification, where we compare the misclassification rate of a classifier to the misclassification rate of the Bayes optimal classifier. Finally, we remind the reader that our motivation for studying sparsely separated mixtures is that this provides a model for variable selection in clustering problems. There are some relevant recent papers on this problem in the high-dimensional setting. Pan and Shen (2007) use penalized mixture models to do variable selection and clustering simultaneously.
Witten and Tibshirani (2010) develop a penalized version of k-means clustering. Related methods include Raftery and Dean (2006); Sun et al. (2012) and Guo et al. (2010). The applied bioinformatics literature also contains a huge number of heuristic methods for this problem. None of these papers provide minimax bounds for the clustering error or provide theoretical evidence of the benefit of using variable selection in unsupervised problems such as clustering.

2 Problem Setup

In this paper, we consider the simple setting of learning a mixture of two isotropic Gaussians with equal mixing weights,1 given n data points X1, . . . , Xn ∈ R^d drawn i.i.d. from a d-dimensional mixture density function

p_θ(x) = (1/2) f(x; µ1, σ²I) + (1/2) f(x; µ2, σ²I),

where f(·; µ, Σ) is the density of N(µ, Σ), σ > 0 is a fixed constant, and θ := (µ1, µ2) ∈ Θ. We consider two classes Θ of parameters:

Θλ = {(µ1, µ2) : ∥µ1 − µ2∥ ≥ λ},
Θλ,s = {(µ1, µ2) : ∥µ1 − µ2∥ ≥ λ, ∥µ1 − µ2∥0 ≤ s} ⊆ Θλ.

The first class defines mixtures whose components have a mean separation of at least λ > 0. The second class defines mixtures with mean separation λ > 0 along a sparse set of s ∈ {1, . . . , d} dimensions. Also, let Pθ denote the probability measure corresponding to p_θ. For a mixture with parameter θ, the Bayes optimal classification, that is, the assignment of a point x ∈ R^d to the correct mixture component, is given by the function

Fθ(x) = argmax_{i∈{1,2}} f(x; µi, σ²I).

Given any other candidate assignment function F : R^d → {1, 2}, we define the loss incurred by F as

Lθ(F) = min_π Pθ({x : Fθ(x) ≠ π(F(x))}),

where the minimum is over all permutations π : {1, 2} → {1, 2}. This is the probability of misclustering relative to an oracle that uses the true distribution to do optimal clustering. We denote by F̂n any assignment function learned from the data X1, . . . , Xn, also referred to as an estimator.
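To make the setup concrete, here is a small numpy sketch (function names are ours) that draws from p_θ and evaluates the Bayes optimal assignment Fθ. For isotropic components with equal weights, maximizing f(x; µi, σ²I) over i reduces to picking the nearer mean.

```python
import numpy as np

def sample_mixture(n, mu1, mu2, sigma=1.0, seed=0):
    """Draw n points from (1/2) N(mu1, sigma^2 I) + (1/2) N(mu2, sigma^2 I).

    Returns the samples and the latent component labels (0 or 1)."""
    rng = np.random.default_rng(seed)
    comp = rng.integers(0, 2, size=n)                    # fair component coin
    means = np.where(comp[:, None] == 0, mu1, mu2)       # (n, d) chosen means
    X = means + sigma * rng.normal(size=(n, len(mu1)))
    return X, comp

def bayes_assignment(X, mu1, mu2):
    """F_theta(x): for isotropic equal-weight components, the density
    f(x; mu_i, sigma^2 I) is maximized by the nearer mean."""
    d1 = np.linalg.norm(X - mu1, axis=1)
    d2 = np.linalg.norm(X - mu2, axis=1)
    return np.where(d1 <= d2, 1, 2)
```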
The goal of this paper is to quantify how the minimax expected loss (worst-case expected loss for the best estimator)

Rn ≡ inf_{F̂n} sup_{θ∈Θ} Eθ Lθ(F̂n)

scales with the number of samples n, the dimension of the feature space d, the number of relevant dimensions s, and the signal-to-noise ratio, defined as the ratio of mean separation to standard deviation λ/σ. We will also demonstrate a specific estimator that achieves the minimax scaling. For the purposes of this paper, we say that feature j is irrelevant if µ1(j) = µ2(j). Otherwise we say that feature j is relevant.

3 Minimax Bounds

3.1 Small mean separation setting without sparsity

We begin without assuming any sparsity, that is, all features are relevant. In this case, comparing the projections of the data to the projection of the sample mean onto the first principal component suffices to achieve both minimax optimal sample complexity and clustering loss.

Theorem 1 (Upper bound). Define

F̂n(x) = 1 if xᵀ v1(Σ̂n) ≥ µ̂nᵀ v1(Σ̂n), and 2 otherwise,

where µ̂n = n⁻¹ Σ_{i=1}^n Xi is the sample mean, Σ̂n = n⁻¹ Σ_{i=1}^n (Xi − µ̂n)(Xi − µ̂n)ᵀ is the sample covariance, and v1(Σ̂n) denotes the eigenvector corresponding to the largest eigenvalue of Σ̂n. If n ≥ max(68, 4d), then

sup_{θ∈Θλ} Eθ Lθ(F̂) ≤ 600 max(4σ²/λ², 1) √(d log(nd)/n).

Furthermore, if λ/σ ≥ 2 max(80, 14√(5d)), then

sup_{θ∈Θλ} Eθ Lθ(F̂) ≤ 17 exp(−n/32) + 9 exp(−λ²/(80σ²)).

1We believe our results should also hold in the unequal mixture weight setting without major modifications.

We note that the estimator in Theorem 1 (and that in Theorem 3) does not use knowledge of σ².

Theorem 2 (Lower bound). Assume that d ≥ 9 and λ/σ ≤ 0.2. Then

inf_{F̂n} sup_{θ∈Θλ} Eθ Lθ(F̂n) ≥ (1/500) min( (√(log 2)/3) (σ²/λ²) √((d − 1)/n), 1/4 ).

We believe that some of the constants (including the lower bound on d and the exact upper bound on λ/σ) can be tightened, but the results demonstrate the matching scaling behavior of the clustering error with d, n and λ/σ.
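A minimal numpy sketch of the estimator in Theorem 1 (the function name is ours; the theorem's constants play no role in the code): project each point onto the leading eigenvector of the sample covariance and split at the projected sample mean.

```python
import numpy as np

def pca_cluster(X):
    """Theorem 1 estimator: assign label 1 when the projection onto the top
    eigenvector of the sample covariance exceeds the projected sample mean."""
    mu_hat = X.mean(axis=0)
    Sigma_hat = np.cov(X, rowvar=False, bias=True)   # n^-1 normalization
    _, eigvecs = np.linalg.eigh(Sigma_hat)           # ascending eigenvalues
    v1 = eigvecs[:, -1]                              # leading principal direction
    return np.where(X @ v1 >= mu_hat @ v1, 1, 2)
```

Since the loss Lθ is defined up to a label permutation, any evaluation of this clustering should take the minimum error over the two labelings.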
Thus, we see (ignoring constants and log terms) that

Rn ≈ (σ²/λ²) √(d/n), or equivalently n ≈ d / (λ⁴/σ⁴)

for a constant target value of Rn. The result is quite intuitive: the dependence on dimension d is as expected. Also we see that the rate depends in a precise way on the signal-to-noise ratio λ/σ. In particular, the results imply that we need d ≤ n. In modern high-dimensional datasets, we often have d > n, i.e., a large number of features and not enough samples. However, inference is usually tractable since not all features are relevant to the learning task at hand. This sparsity of the relevant feature set has been successfully exploited in supervised learning problems such as regression and classification. We show next that the same is true for clustering under the Gaussian mixture model.

3.2 Sparse and small mean separation setting

Now we consider the case where there are s < d relevant features. Let S denote the set of relevant features. We begin by constructing an estimator Ŝn of S as follows. Define

τ̂n = ((1 + α)/(1 − α)) min_{i∈{1,...,d}} Σ̂n(i, i), where α = √(6 log(nd)/n) + 2 log(nd)/n.

Now let Ŝn = {i ∈ {1, . . . , d} : Σ̂n(i, i) > τ̂n}. We then use the same method as before, but using only the features in Ŝn identified as relevant.

Theorem 3 (Upper bound). Define

F̂n(x) = 1 if x_Ŝn ᵀ v1(Σ̂_Ŝn) ≥ µ̂_Ŝn ᵀ v1(Σ̂_Ŝn), and 2 otherwise,

where x_Ŝn denotes the coordinates of x restricted to Ŝn, and µ̂_Ŝn and Σ̂_Ŝn are the sample mean and covariance of the data restricted to Ŝn. If n ≥ max(68, 4s), d ≥ 2 and α ≤ 1/4, then

sup_{θ∈Θλ,s} Eθ Lθ(F̂) ≤ 603 max(16σ²/λ², 1) √(s log(ns)/n) + 220 (σ√s/λ) (log(nd)/n)^{1/4}.

Next we find the lower bound.

Theorem 4 (Lower bound). Assume that λ/σ ≤ 0.2, d ≥ 17, and 5 ≤ s ≤ (d + 3)/4. Then

inf_{F̂n} sup_{θ∈Θλ,s} Eθ Lθ(F̂n) ≥ (1/600) min( √(8/45) (σ²/λ²) √( ((s − 1)/n) log((d − 1)/(s − 1)) ), 1/2 ).
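A minimal numpy sketch of the two-stage procedure just described (function names and the fallback for an empty screened set are ours): first screen features by comparing each diagonal entry of the sample covariance to the threshold τ̂n, then run the principal-component split on the selected coordinates.

```python
import numpy as np

def screen_features(X):
    """Stage 1: keep coordinate i when the sample variance Sigma_hat(i, i)
    exceeds tau_hat = (1+alpha)/(1-alpha) times the smallest diagonal entry,
    with alpha = sqrt(6 log(nd)/n) + 2 log(nd)/n as in the text."""
    n, d = X.shape
    alpha = np.sqrt(6.0 * np.log(n * d) / n) + 2.0 * np.log(n * d) / n
    diag = X.var(axis=0)                      # diagonal of sample covariance
    tau = (1.0 + alpha) / (1.0 - alpha) * diag.min()
    return np.flatnonzero(diag > tau)

def two_stage_cluster(X):
    """Stage 2: PCA split (as in Theorem 1) on the screened coordinates."""
    S = screen_features(X)
    XS = X[:, S] if S.size > 0 else X         # fall back to all features
    mu_hat = XS.mean(axis=0)
    Sigma_hat = np.atleast_2d(np.cov(XS, rowvar=False, bias=True))
    v1 = np.linalg.eigh(Sigma_hat)[1][:, -1]  # leading principal direction
    return np.where(XS @ v1 >= mu_hat @ v1, 1, 2)
```

Under the model, a relevant coordinate i has marginal variance σ² + µ(i)² while an irrelevant one has variance σ², which is what the diagonal comparison exploits.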
We remark again that the constants in our bounds can be tightened, but the results suggest that

(σ/λ) (s² log d / n)^{1/4} ≻ Rn ≻ (σ²/λ²) √(s log d / n), or n = Ω( s² log d / (λ⁴/σ⁴) )

for a constant target value of Rn. In this case, we have a gap between the upper and lower bounds for the clustering loss. Also, the sample complexity can possibly be improved to scale as s (instead of s²) using a different method. However, notice that the dimension enters only logarithmically. If the number of relevant dimensions is small, then we can expect good rates. This provides some justification for feature selection. We conjecture that the lower bound is tight and that the gap could be closed by using a sparse principal component method as in Vu and Lei (2012) to find the relevant features. However, that method is combinatorial, and so far there is no known computationally efficient method for implementing it with similar guarantees. We note that the upper bound is achieved by a two-stage method that first finds the relevant dimensions and then estimates the clusters. This is in contrast to the methods described in the introduction, which do clustering and variable selection simultaneously. This raises an interesting question: is it always possible to achieve the minimax rate with a two-stage procedure, or are there cases where a simultaneous method outperforms a two-stage procedure? Indeed, it is possible that in the case of general covariance matrices (non-spherical) two-stage methods might fail. We hope to address this question in future work.

4 Proofs of the Lower Bounds

The lower bounds for estimation problems rely on a standard reduction from expected error to hypothesis testing that assumes the loss function is a semi-distance, which the clustering loss is not. However, a local triangle inequality-type bound can be shown (Proposition 2). This weaker condition can then be used to lower-bound the expected loss, as stated in Proposition 1 (which follows easily from Fano's inequality).
The proof techniques of the sparse and non-sparse lower bounds are almost identical. The main difference is that in the non-sparse case, we use the Varshamov–Gilbert bound (Lemma 1) to construct a set of sufficiently dissimilar hypotheses, whereas in the sparse case we use an analogous result for sparse hypercubes (Lemma 2). See the supplementary material for complete proofs of all results. In this section and the next, φ and Φ denote the univariate standard normal PDF and CDF.

Lemma 1 (Varshamov–Gilbert bound). Let Ω = {0, 1}^m for m ≥ 8. There exists a subset {ω0, . . . , ωM} ⊆ Ω such that ω0 = (0, . . . , 0), ρ(ωi, ωj) ≥ m/8 for all 0 ≤ i < j ≤ M, and M ≥ 2^{m/8}, where ρ denotes the Hamming distance between two vectors (Tsybakov (2009)).

Lemma 2. Let Ω = {ω ∈ {0, 1}^m : ∥ω∥0 = s} for integers m > s ≥ 1 such that s ≤ m/4. There exist ω0, . . . , ωM ∈ Ω such that ρ(ωi, ωj) > s/2 for all 0 ≤ i < j ≤ M, and M ≥ (m/s)^{s/5} − 1 (Massart (2007), Lemma 4.10).

Proposition 1. Let θ0, . . . , θM ∈ Θλ (or Θλ,s), M ≥ 2, 0 < α < 1/8, and γ > 0. If KL(Pθi, Pθ0) ≤ α log(M)/n for all 1 ≤ i ≤ M, and if Lθi(F̂) < γ implies Lθj(F̂) ≥ γ for all 0 ≤ i ≠ j ≤ M and all clusterings F̂, then inf_{F̂n} max_{i∈[0..M]} Eθi Lθi(F̂n) ≥ 0.07γ.

Proposition 2. For any θ, θ′ ∈ Θλ and any clustering F̂, let τ = Lθ(F̂) + √(KL(Pθ, Pθ′)/2). If Lθ(Fθ′) + τ ≤ 1/2, then Lθ(Fθ′) − τ ≤ Lθ′(F̂) ≤ Lθ(Fθ′) + τ.

We will also need the following two results. Let θ = (µ0 − µ/2, µ0 + µ/2) and θ′ = (µ0 − µ′/2, µ0 + µ′/2) for µ0, µ, µ′ ∈ R^d such that ∥µ∥ = ∥µ′∥, and let cos β = |µᵀµ′|/∥µ∥².

Proposition 3. Let g(x) = φ(x)(φ(x) − xΦ(−x)). Then 2 g(∥µ∥/(2σ)) sin β cos β ≤ Lθ(Fθ′) ≤ (tan β)/π.

Proposition 4. Let ξ = ∥µ∥/(2σ). Then KL(Pθ, Pθ′) ≤ ξ⁴(1 − cos β).

Proof of Theorem 2. Let ξ = λ/(2σ), and define ϵ = min{ (√(log 2)/3)(σ²/λ)(1/√n), λ/(4√(d − 1)) }. Define λ0² = λ² − (d − 1)ϵ². Let Ω = {0, 1}^{d−1}. For ω = (ω(1), . . . , ω(d − 1)) ∈ Ω, let µω = λ0 e_d + Σ_{i=1}^{d−1} (2ω(i) − 1) ϵ e_i (where {e_i}_{i=1}^d is the standard basis for R^d). Let θω = (−µω/2, µω/2) ∈ Θλ.
By Proposition 4, KL(Pθω, Pθν) ≤ξ4(1 −cos βω,ν) where cos βω,ν = 1 −2ρ(ω,ν)ϵ2 λ2 , ω, ν ∈Ω, and ρ is the Hamming distance, so KL(Pθω, Pθν) ≤ξ4 2(d−1)ϵ2 λ2 . By Proposition 3, since cos βω,ν ≥1 2, Lθω(Fθν) ≤1 π tan βω,ν ≤1 π p 1 + cos βω,ν cos βω,ν p 1 −cos βω,ν ≤4 π √ d −1ϵ λ , and Lθω(Fθν) ≥2g(ξ) sin βω,ν cos βω,ν ≥g(ξ) p 1 + cos βω,ν p 1 −cos βω,ν ≥ √ 2g(ξ) p ρ(ω, ν)ϵ λ where g(x) = φ(x)(φ(x) −xΦ(−x)). By Lemma 1, there exist ω0, ..., ωM ∈Ωsuch that M ≥ 2(d−1)/8 and ρ(ωi, ωj) ≥d−1 8 for all 0 ≤i < j ≤M. For simplicity of notation, let θi = θωi for all i ∈[0..M]. Then, for i ̸= j ∈[0..M], KL(Pθi, Pθj) ≤ξ4 2(d −1)ϵ2 λ2 , Lθi(Fθj) ≤4 π √ d −1ϵ λ and Lθi(Fθj) ≥1 2g(ξ) √ d −1ϵ λ . Define γ = 1 4(g(ξ) −2ξ2) √d−1ϵ λ . Then for any i ̸= j ∈[0..M], and any bF such that Lθi( bF) < γ, Lθi(Fθj) + Lθi( bF) + r KL(Pθi, Pθj) 2 < 4 π + 1 4(g(ξ) −2ξ2) + ξ2 √ d −1ϵ λ ≤1 2 because, for ξ ≤0.1, by definition of ϵ, 4 π + 1 4(g(ξ) −2ξ2) + ξ2 √ d −1ϵ λ ≤2 √ d −1ϵ λ ≤1 2. So, by Proposition 2, Lθj( bF) ≥γ. Also, KL(Pθi, Pθ0) ≤(d −1)ξ4 2ϵ2 λ2 ≤log M 9n for all 1 ≤i ≤M, because, by definition of ϵ, ξ4 2ϵ2 λ2 ≤log 2 72n . So by Proposition 1 and the fact that ξ ≤0.1, inf b Fn max i∈[0..M] EθiLθi( bFn) ≥0.07γ ≥ 1 500 min (√log 2 3 σ2 λ2 r d −1 n , 1 4 ) and to complete the proof we use supθ∈Θλ EθLθ( bFn) ≥maxi∈[0..M] EθiLθi( bFn) for any bFn. □ Proof of Theorem 4. For simplicity, we state this construction for Θλ,s+1, assuming 4 ≤s ≤d−1 4 . Let ξ = λ 2σ, and define ϵ = min q 8 45 σ2 λ q 1 n log d−1 s , 1 2 λ √s . Define λ2 0 = λ2 −sϵ2. Let Ω= {ω ∈{0, 1}d−1 : ∥ω∥0 = s}. For ω = (ω(1), ..., ω(d −1)) ∈Ω, let µω = λ0ed + Pd−1 i=1 ω(i)ϵei (where {ei}d i=1 is the standard basis for Rd). Let θω = −µω 2 , µω 2 ∈Θλ,s. By Lemma 2, there exist ω0, ..., ωM ∈Ωsuch that M ≥ d−1 s s/5 −1 and ρ(ωi, ωj) ≥s 2 for all 0 ≤i < j ≤M. The remainder of the proof is analogous to that of Theorem 2 with γ = 1 4(g(ξ) − √ 2ξ2) √sϵ λ . 
□ 5 Proofs of the Upper Bounds Propositions 5 and 6 below bound the error in estimating the mean and principal direction, and can be obtained using standard concentration bounds and a variant of the Davis–Kahan theorem. Proposition 7 relates these errors to the clustering loss. For the sparse case, Propositions 8 and 9 bound the added error induced by the support estimation procedure. See supplementary material for proof details. Proposition 5. Let θ = (µ0 −µ, µ0 + µ) for some µ0, µ ∈Rd and X1, ..., Xn i.i.d. ∼Pθ. For any δ > 0, we have ∥µ0 −bµn∥≥σ q 2 max(d,8 log 1 δ ) n + ∥µ∥ q 2 log 1 δ n with probability at least 1 −3δ. 6 Proposition 6. Let θ = (µ0 −µ, µ0 + µ) for some µ0, µ ∈Rd and X1, ..., Xn i.i.d. ∼Pθ with d > 1 and n ≥4d. Define cos β = |v1(σ2I + µµT )T v1(bΣn)|. For any 0 < δ < d−1 √e , if max σ2 ∥µ∥2 , σ ∥µ∥ q max(d,8 log 1 δ ) n ≤ 1 160, then with probability at least 1 −12δ −2 exp −n 20 , sin β ≤14 max σ2 ∥µ∥2 , σ ∥µ∥ √ d s 10 n log d δ max 1, 10 n log d δ . Proposition 7. Let θ = (µ0 −µ, µ0 + µ), and for some x0, v ∈Rd with ∥v∥= 1, let bF(x) = 1 if xT v ≥xT 0 v, and 2 otherwise. Define cos β = |vT µ|/∥µ∥. If |(x0 −µ0)T v| ≤σϵ1 + ∥µ∥ϵ2 for some ϵ1 ≥0 and 0 ≤ϵ2 ≤1 4, and if sin β ≤ 1 √ 5, then Lθ( bF) ≤exp ( −1 2 max 0, ∥µ∥ 2σ −2ϵ1 2) 2ϵ1 + ϵ2 ∥µ∥ σ + 2 sin β 2 sin β ∥µ∥ σ + 1 . Proof. Let r = (x0−µ0)T v cos β . Since the clustering loss is invariant to rotation and translation, Lθ( bF) ≤1 2 Z ∞ −∞ 1 σ φ x σ Φ ∥µ∥+ |x| tan β + r σ −Φ ∥µ∥−|x| tan β −r σ dx ≤ Z ∞ −∞ φ(x) Φ ∥µ∥ σ −Φ ∥µ∥−r σ −|x| tan β dx. Since tan β ≤ 1 2 and ϵ2 ≤ 1 4, we have r ≤2σϵ1 + 2∥µ∥ϵ2, and Φ ∥µ∥ σ −Φ ∥µ∥−r σ ≤ 2 ϵ1 + ϵ2 ∥µ∥ σ φ max 0, ∥µ∥ 2σ −2ϵ1 . Defining A = ∥µ∥−r σ , Z ∞ −∞ φ(x) Φ ∥µ∥−r σ −Φ ∥µ∥−r σ −|x| tan β dx ≤2 Z ∞ 0 Z A A−x tan β φ(x)φ(y)dydx = 2 Z ∞ −A sin β Z A cos β+(u+A sin β) tan β A cos β φ(u)φ(v)dudv ≤2φ (A) tan β (A sin β + 1) ≤2φ max 0, ∥µ∥ 2σ −2ϵ1 tan β 2∥µ∥ σ + 2ϵ1 sin β + 1 where we used u = x cos β −y sin β and v = x sin β + y cos β in the second step. 
The bound now follows easily. Proof of Theorem 1. Using Propositions 5 and 6 with δ = 1 √n, Proposition 7, and the fact that (C + x) exp(−max(0, x −4)2/8) ≤(C + 6) exp(−max(0, x −4)2/10) for all C, x > 0, EθLθ( bF) ≤600 max 4σ2 λ2 , 1 r d log(nd) n (it is easy to verify that the bounds are decreasing with ∥µ∥, so we use ∥µ∥= λ 2 to bound the supremum). In the d = 1 case Proposition 6 need not be applied, since the principal directions agree trivially. The bound for λ σ ≥2 max(80, 14 √ 5d) can be shown similarly, using δ = exp −n 32 . □ Proposition 8. Let θ = (µ0 −µ, µ0 + µ) for some µ0, µ ∈Rd and X1, ..., Xn i.i.d. ∼Pθ. For any 0 < δ < 1 √e such that q 6 log 1 δ n ≤1 2, with probability at least 1 −6dδ, for all i ∈[d], |bΣn(i, i) −(σ2 + µ(i)2)| ≤σ2 s 6 log 1 δ n + 2σ|µ(i)| s 2 log 1 δ n + (σ + |µ(i)|)2 2 log 1 δ n . Proposition 9. Let θ = (µ0 −µ, µ0 + µ) for some µ0, µ ∈Rd and X1, ..., Xn i.i.d. ∼Pθ. Define S(θ) = {i ∈[d] : µ(i) ̸= 0} and eS(θ) = {i ∈[d] : |µ(i)| ≥4σ√α}. Assume that n ≥1, d ≥2, and α ≤1 4. Then eS(θ) ⊆bSn ⊆S(θ) with probability at least 1 −6 n. 7 Proof. By Proposition 8, with probability at least 1 −6 n, |bΣn(i, i) −(σ2 + µ(i)2)| ≤σ2 r 6 log(nd) n + 2σ|µ(i)| r 2 log(nd) n + (σ + |µ(i)|)2 2 log(nd) n for all i ∈[d]. Assume the above event holds. If S(θ) = [d], then of course bSn ⊆S(θ). Otherwise, for i /∈S(θ), we have (1 −α)σ2 ≤bΣn(i, i) ≤(1 + α)σ2, so it is clear that bSn ⊆S(θ). The remainder of the proof is trivial if eS(θ) = ∅or S(θ) = ∅. Assume otherwise. For any i ∈S(θ), bΣn(i, i) ≥(1 −α)σ2 + 1 −2 log(nd) n µ(i)2 −2ασ|µ(i)|. By definition, |µ(i)| ≥4σ√α for all i ∈eS(θ), so (1+α)2 1−α σ2 ≤bΣn(i, i) and i ∈bSn (we ignore strict equality above as a measure 0 event), i.e. eS(θ) ⊆bSn, which concludes the proof. Proof of Theorem 3. Define S(θ) = {i ∈[d] : µ(i) ̸= 0} and eS(θ) = {i ∈[d] : |µ(i)| ≥4σ√α}. Assume eS(θ) ⊆bSn ⊆S(θ) (by Proposition 9, this holds with probability at least 1 −6 n). If eS(θ) = ∅, then we simply have EθLθ( bFn) ≤1 2. 
Assume S̃(θ) ≠ ∅. Let cos β̂ = |v1(Σ̂_Ŝn)ᵀ v1(Σ)|, cos β̃ = |v1(Σ_Ŝn)ᵀ v1(Σ)|, and cos β = |v1(Σ̂_Ŝn)ᵀ v1(Σ_Ŝn)|, where Σ = σ²I + µµᵀ, and for simplicity we define Σ̂_Ŝn and Σ_Ŝn to agree with Σ̂n and Σ on Ŝn, respectively, and to be 0 elsewhere. Then sin β̂ ≤ sin β̃ + sin β, and
  sin β̃ = ∥µ − µ_Ŝn∥/∥µ∥ ≤ ∥µ − µ_S̃(θ)∥/∥µ∥ ≤ 4σ√α √(|S(θ)| − |S̃(θ)|)/∥µ∥ ≤ 8σ√(sα)/λ.
Using the same argument as in the proof of Theorem 1, as long as the above bound is smaller than 1/(2√5),
  Eθ Lθ(F̂) ≤ 600 max( σ²/(λ/2 − 4σ√(sα))², 1 ) √(s log(ns)/n) + 104σ√(sα)/λ + 3/n.
Using the fact that Lθ(F̂) ≤ 1/2 always, and that α ≤ 1/4 implies log(nd)/n ≤ 1, the bound follows. □

6 Conclusion

We have provided minimax lower and upper bounds for estimating high-dimensional mixtures. The bounds show explicitly how the statistical difficulty of the problem depends on the dimension d, sample size n, separation λ, and sparsity level s. For clarity, we focused on the special case of two spherical components with equal mixture weights. In future work, we plan to extend the results to general mixtures of k Gaussians. One of our motivations for this work is the recent interest in variable selection methods to facilitate clustering in high-dimensional problems. Existing methods such as Pan and Shen (2007), Witten and Tibshirani (2010), Raftery and Dean (2006), Sun et al. (2012), and Guo et al. (2010) provide promising numerical evidence that variable selection does improve high-dimensional clustering. Our results provide some theoretical basis for this idea. However, there is a gap between the results in this paper and the above methodology papers. Indeed, as of now, there is no rigorous proof that the methods in those papers outperform a two-stage approach in which the first stage screens for relevant features and the second stage applies standard clustering methods to the features found in the first stage.
We conjecture that there are conditions under which simultaneous feature selection and clustering outperforms a two-stage method. Settling this question will require the aforementioned extension of our results to the general mixture case.

Acknowledgements

This research is supported in part by NSF grants IIS-1116458 and CAREER award IIS-1252412, as well as NSF grant DMS-0806009 and Air Force grant FA95500910373.

References

Dimitris Achlioptas and Frank McSherry. On spectral learning of mixtures of distributions. In Learning Theory, pages 458–469. Springer, 2005.
Sanjeev Arora and Ravi Kannan. Learning mixtures of arbitrary Gaussians. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 247–257. ACM, 2001.
Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 103–112. IEEE, 2010.
S. Charles Brubaker and Santosh S. Vempala. Isotropic PCA and affine-invariant clustering. In Building Bridges, pages 241–281. Springer, 2008.
Kamalika Chaudhuri and Satish Rao. Learning mixtures of product distributions using correlations and independence. In COLT, pages 9–20, 2008.
Kamalika Chaudhuri, Sanjoy Dasgupta, and Andrea Vattani. Learning mixtures of Gaussians using the k-means algorithm. arXiv preprint arXiv:0912.0086, 2009.
Sanjoy Dasgupta. Learning mixtures of Gaussians. In Foundations of Computer Science, 1999. 40th Annual Symposium on, pages 634–644. IEEE, 1999.
Jian Guo, Elizaveta Levina, George Michailidis, and Ji Zhu. Pairwise variable selection for high-dimensional model-based clustering. Biometrics, 66(3):793–804, 2010.
Daniel Hsu and Sham M. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pages 11–20. ACM, 2013.
Adam Tauman Kalai, Ankur Moitra, and Gregory Valiant. Disentangling Gaussians.
Communications of the ACM, 55(2):113–120, 2012.
Ravindran Kannan, Hadi Salmasian, and Santosh Vempala. The spectral method for general mixture models. In Learning Theory, pages 444–457. Springer, 2005.
Pascal Massart. Concentration inequalities and model selection. 2007.
Wei Pan and Xiaotong Shen. Penalized model-based clustering with application to variable selection. The Journal of Machine Learning Research, 8:1145–1164, 2007.
Adrian E. Raftery and Nema Dean. Variable selection for model-based clustering. Journal of the American Statistical Association, 101(473):168–178, 2006.
Leonard J. Schulman and Sanjoy Dasgupta. A two-round variant of EM for Gaussian mixtures. In Proc. 16th UAI (Conference on Uncertainty in Artificial Intelligence), pages 152–159, 2000.
Wei Sun, Junhui Wang, and Yixin Fang. Regularized k-means clustering of high-dimensional data and its asymptotic consistency. Electronic Journal of Statistics, 6:148–167, 2012.
Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Series in Statistics. Springer, 2009.
Santosh Vempala and Grant Wang. A spectral algorithm for learning mixture models. Journal of Computer and System Sciences, 68(4):841–860, 2004.
Vincent Q. Vu and Jing Lei. Minimax sparse principal subspace estimation in high dimensions. arXiv preprint arXiv:1211.0373, 2012.
Daniela M. Witten and Robert Tibshirani. A framework for feature selection in clustering. Journal of the American Statistical Association, 105(490), 2010.
Efficient Optimization for Sparse Gaussian Process Regression

Yanshuai Cao1 Marcus A. Brubaker2 David J. Fleet1 Aaron Hertzmann1,3
1Department of Computer Science, University of Toronto 2TTI-Chicago 3Adobe Research

Abstract

We propose an efficient optimization algorithm for selecting a subset of training data to induce sparsity for Gaussian process regression. The algorithm estimates an inducing set and the hyperparameters using a single objective, either the marginal likelihood or a variational free energy. The space and time complexity are linear in training set size, and the algorithm can be applied to large regression problems on discrete or continuous domains. Empirical evaluation shows state-of-the-art performance in discrete cases and competitive results in the continuous case.

1 Introduction

Gaussian Process (GP) learning and inference are computationally prohibitive with large datasets, having time complexities O(n³) and O(n²), where n is the number of training points. Sparsification algorithms exist that scale linearly in the training set size (see [10] for a review). They construct a low-rank approximation to the GP covariance matrix over the full dataset using a small set of inducing points. Some approaches select inducing points from the training points [7, 8, 12, 13], but these methods select the inducing points using ad hoc criteria; i.e., they use different objective functions to select inducing points and to optimize GP hyperparameters. More powerful sparsification methods [14, 15, 16] use a single objective function and allow inducing points to move freely over the input domain; their locations are learned via gradient descent. This continuous relaxation is not feasible, however, if the input domain is discrete, or if the kernel function is not differentiable in the input variables. As a result, there are problems in myriad domains, like bioinformatics, linguistics, and computer vision, where current sparse GP regression methods are inapplicable or ineffective.
We introduce an efficient sparsification algorithm for GP regression. The method optimizes a single objective for the joint selection of inducing points and GP hyperparameters. Notably, it optimizes either the marginal likelihood or a variational free energy [15], exploiting the QR factorization of a partial Cholesky decomposition to efficiently approximate the covariance matrix. Because it chooses inducing points from the training data, it is applicable to problems on discrete or continuous input domains. To our knowledge, it is the first method for selecting discrete inducing points and hyperparameters that optimizes a single objective, with linear space and time complexity. It is shown to outperform other methods on discrete datasets from bioinformatics and computer vision. On continuous domains it is competitive with the Pseudo-point GP [14] (SPGP).

1.1 Previous Work

Efficient state-of-the-art sparsification methods are O(m²n) in time and O(mn) in space for learning. They compute the predictive mean and variance in time O(m) and O(m²). Methods based on continuous relaxation, when applicable, entail learning O(md) continuous parameters, where d is the input dimension. In the discrete case, combinatorial optimization is required to select the inducing points, and this is, in general, intractable. Existing discrete sparsification methods therefore use other criteria to greedily select inducing points [7, 8, 12, 13]. Although their criteria are justified, each in their own way (e.g., [8, 12] take an information-theoretic perspective), they are greedy and do not use the same objective to select inducing points and to estimate GP hyperparameters. The variational formulation of Titsias [15] treats inducing points as variational parameters, and gives a unified objective for discrete and continuous inducing-point models. In the continuous case, it uses gradient-based optimization to find inducing points and hyperparameters.
In the discrete case, our method optimizes the same variational objective of Titsias [15], but is a significant improvement over greedy forward selection using the variational objective as the selection criterion, or some other criteria. In particular, given the cost of evaluating the variational objective on all training points, Titsias [15] evaluates the objective function on a small random subset of candidates at each iteration, and then selects the best element from the subset. This approximation is often slow to achieve good results, as we explain and demonstrate below in Section 4.1. The approach in [15] also uses greedy forward selection, which provides no way to refine the inducing set after hyperparameter optimization, except to discard all previous inducing points and restart selection. Hence, the objective is not guaranteed to decrease after each restart. By comparison, our formulation considers all candidates at each step; revisiting previous selections is efficient, and guaranteed to decrease the objective or terminate. Our low-rank decomposition is inspired by the Cholesky with Side Information (CSI) algorithm for kernel machines [1]. We extend that approach to GP regression. First, we alter the form of the low-rank matrix factorization in CSI to be suitable for GP regression with a full-rank diagonal term in the covariance. Second, the CSI algorithm selects inducing points in a single greedy pass using an approximate objective. We propose an iterative optimization algorithm that swaps previously selected points with new candidates that are guaranteed to lower the objective. Finally, we perform inducing set selection jointly with gradient-based hyperparameter estimation instead of the grid search in CSI. Our algorithm selects inducing points in a principled fashion, optimizing the variational free energy or the log likelihood.
It does so with time complexity O(m²n), and in practice provides an improved quality–speed trade-off over other discrete selection methods.

2 Sparse GP Regression

Let y ∈ R be the noisy output of a function, f, of input x. Let X = {xi}, i = 1, ..., n, denote the n training inputs, each belonging to input space D, which is not necessarily Euclidean. Let y ∈ Rⁿ denote the corresponding vector of training outputs. Under a full zero-mean GP, with the covariance function
  E[yi yj] = κ(xi, xj) + σ² 1[i = j] ,  (1)
where κ is the kernel function, 1[·] is the usual indicator function, and σ² is the variance of the observation noise, the predictive distribution over the output f⋆ at a test point x⋆ is normally distributed. The mean and variance of the predictive distribution can be expressed as
  µ⋆ = κ(x⋆)ᵀ (K + σ²In)⁻¹ y
  v⋆² = κ(x⋆, x⋆) − κ(x⋆)ᵀ (K + σ²In)⁻¹ κ(x⋆)
where In is the n × n identity matrix, K is the kernel matrix whose ijth element is κ(xi, xj), and κ(x⋆) is the column vector whose ith element is κ(x⋆, xi). The hyperparameters of a GP, denoted θ, comprise the parameters of the kernel function and the noise variance σ². The natural objective for learning θ is the negative marginal log likelihood (NMLL) of the training data, −log P(y | X, θ), given up to a constant by
  Efull(θ) = ( yᵀ(K + σ²In)⁻¹y + log |K + σ²In| ) / 2 .  (2)
The computational bottleneck lies in the O(n²) storage and O(n³) inversion of the full covariance matrix, K + σ²In. To lower this cost with a sparse approximation, Csató and Opper [5] and Seeger et al. [12] proposed the Projected Process (PP) model, wherein a set of m inducing points is used to construct a low-rank approximation of the kernel matrix.
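Eqs. (1)–(2) can be written out directly. Below is a minimal numpy illustration of the full-GP predictive equations and the NMLL; the RBF kernel and the toy data are our own choices for the example, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, ell=1.0, c=1.0):
    # An example kernel kappa; the paper allows any kernel, including
    # non-differentiable kernels on discrete domains.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return c * np.exp(-0.5 * d2 / ell**2)

n, sigma2 = 40, 0.1
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.normal(size=n)

K = rbf(X, X)
A = K + sigma2 * np.eye(n)

# Predictive mean and variance at a test point x*.
Xs = np.array([[0.5]])
ks = rbf(X, Xs)[:, 0]
mu_star = ks @ np.linalg.solve(A, y)
v2_star = rbf(Xs, Xs)[0, 0] - ks @ np.linalg.solve(A, ks)

# NMLL up to a constant, Eq. (2); the O(n^3) solve is the bottleneck.
E_full = 0.5 * (y @ np.linalg.solve(A, y) + np.linalg.slogdet(A)[1])
```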
In the discrete case, where the inducing points are a subset of the training data with indices I ⊂ {1, 2, ..., n}, this approach amounts to replacing the kernel matrix K with the following Nyström approximation [11]:
  K ≃ K̂ = K[:, I] K[I, I]⁻¹ K[I, :]  (3)
where K[:, I] denotes the sub-matrix of K comprising the columns indexed by I, and K[I, I] is the sub-matrix of K comprising the rows and columns indexed by I. We assume the rank of K is m or higher, so we can always find such rank-m approximations. The PP NMLL is then algebraically equivalent to replacing K with K̂ in Eq. (2), i.e.,
  E(θ, I) = ( ED(θ, I) + EC(θ, I) ) / 2 ,  (4)
with data term ED(θ, I) = yᵀ(K̂ + σ²In)⁻¹y, and model complexity EC(θ, I) = log |K̂ + σ²In|. The reduction in computational cost from O(n³) to O(m²n) associated with the new likelihood is achieved by applying the Woodbury inversion identity to ED(θ, I) and EC(θ, I). The objective in (4) can be viewed as an approximate log likelihood for the full GP model, or as the exact log likelihood for an approximate model, called the Deterministically Trained Conditional [10]. The same PP model can also be obtained by a variational argument, as in [15], for which the variational free energy objective can be shown to be Eq. (4) plus one extra term; i.e.,
  F(θ, I) = ( ED(θ, I) + EC(θ, I) + EV(θ, I) ) / 2 ,  (5)
where EV(θ, I) = σ⁻² tr(K − K̂) arises from the variational formulation. It effectively regularizes the trace norm of the approximation residual of the covariance matrix. The kernel machine of [1] also uses a regularizer of the form λ tr(K − K̂); however, there λ is a free parameter that is set manually.

3 Efficient optimization

We now outline our algorithm for optimizing the variational free energy (5) to select the inducing set I and the hyperparameters θ. (The negative log likelihood (4) is similarly minimized by simply discarding the EV term.)
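The Nyström approximation (3) and the objectives (4)–(5) are straightforward to evaluate naively, at the O(n³) cost that the factored representation below avoids. A small numpy sketch, with a toy kernel and data of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy kernel matrix standing in for K (our own choice of kernel/data).
n, m, sigma2 = 60, 8, 0.05
Z = rng.uniform(-3, 3, size=(n, 2))
K = np.exp(-0.5 * ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
y = rng.normal(size=n)

# Inducing set: a subset of the training indices.
I = rng.choice(n, size=m, replace=False)

# Nystrom approximation, Eq. (3).
K_hat = K[:, I] @ np.linalg.solve(K[np.ix_(I, I)], K[I, :])

# PP NMLL terms, Eq. (4), and the variational free energy, Eq. (5),
# evaluated naively at O(n^3) cost.
A = K_hat + sigma2 * np.eye(n)
E_D = y @ np.linalg.solve(A, y)
E_C = np.linalg.slogdet(A)[1]
E_V = (np.trace(K) - np.trace(K_hat)) / sigma2
F = 0.5 * (E_D + E_C + E_V)
```

Note that K̂ agrees with K exactly on the inducing block K[I, I], and tr(K − K̂) ≥ 0 since the residual is positive semi-definite, so the E_V regularizer is non-negative.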
The algorithm is a form of hybrid coordinate descent that alternates between discrete optimization of inducing points and continuous optimization of the hyperparameters. We first describe the algorithm to select inducing points, and then discuss continuous hyperparameter optimization and termination criteria in Sec. 3.4. Finding the optimal inducing set is a combinatorial problem; global optimization is intractable. Instead, the inducing set is initialized to a random subset of the training data, which is then refined by a fixed number of swap updates at each iteration.¹ In a single swap update, a randomly chosen inducing point is considered for replacement. If swapping does not improve the objective, then the original point is retained. There are n − m potential replacements for each swap update; the key is to efficiently determine which will maximally improve the objective. With the techniques described below, the computation time required to approximately evaluate all possible candidates and swap an inducing point is O(mn). Swapping all inducing points once takes O(m²n) time.

3.1 Factored representation

To support efficient evaluation of the objective and swapping, we use a factored representation of the kernel matrix. Given an inducing set I of k points, for any k ≤ m, the low-rank Nyström approximation to the kernel matrix (Eq. 3) can be expressed in terms of a partial Cholesky factorization:
  K̂ = K[:, I] K[I, I]⁻¹ K[I, :] = L(I) L(I)ᵀ ,  (6)
where L(I) ∈ R^(n×k) is, up to a permutation of its rows, a lower trapezoidal matrix (i.e., it has a k × k lower triangular top block, again up to row permutation). The derivation of Eq. 6 follows from Proposition 1 in [1], and the fact that, given the ordered sequence of pivots I, the partial Cholesky factorization is unique. Using this factorization and the Woodbury identities (dropping the dependence on θ and I for clarity), the terms of the negative marginal log likelihood (4) and variational free energy (5) become
  ED = σ⁻² ( yᵀy − yᵀL (LᵀL + σ²I)⁻¹ Lᵀy )  (7)
  EC = log ( (σ²)^(n−k) |LᵀL + σ²I| )  (8)
  EV = σ⁻² ( tr(K) − tr(LᵀL) )  (9)
We can further simplify the data term by augmenting the factor matrix as L̃ = [Lᵀ, σIk]ᵀ, where Ik is the k × k identity matrix, and letting ỹ = [yᵀ, 0kᵀ]ᵀ be the y vector with k zeros appended:
  ED = σ⁻² ( yᵀy − ỹᵀL̃ (L̃ᵀL̃)⁻¹ L̃ᵀỹ )  (10)
¹The inducing set can be incrementally constructed, as in [1]; however, we found no benefit to this.
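Eq. (6) can be checked numerically: a partial Cholesky factorization with pivot sequence I reproduces the Nyström approximation exactly. A sketch (the helper below is our own code, not the paper's):

```python
import numpy as np

def partial_cholesky(K, pivots):
    # Partial Cholesky with a given ordered pivot sequence I, so that
    # L @ L.T equals the Nystrom approximation K[:,I] K[I,I]^{-1} K[I,:].
    L = np.zeros((K.shape[0], len(pivots)))
    R = K.astype(float).copy()              # running residual K - L L^T
    for k, i in enumerate(pivots):
        L[:, k] = R[:, i] / np.sqrt(R[i, i])
        R -= np.outer(L[:, k], L[:, k])
    return L

rng = np.random.default_rng(4)
n = 30
Z = rng.uniform(-3, 3, size=(n, 1))
K = np.exp(-0.5 * (Z - Z.T) ** 2)           # toy RBF kernel matrix

I = [3, 17, 8, 25]                          # arbitrary pivot/inducing indices
L = partial_cholesky(K, I)
K_nystrom = K[:, I] @ np.linalg.solve(K[np.ix_(I, I)], K[I, :])
```

The residual K − LLᵀ stays positive semi-definite throughout, which is what makes tr(K) − tr(LᵀL) in Eq. (9) a valid non-negative regularizer.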
Using this factorization and the Woodbury identities (dropping the dependence on θ and I for clarity), the terms of the negative marginal log-likelihood (4) and variational free energy (5) become ED = σ−2 y⊤y −y⊤L L⊤L + σ2I −1 L⊤y (7) EC = log (σ2)n−k|L⊤L + σ2I| (8) EV = σ−2(tr(K) −tr(L⊤L)) (9) We can further simplify the data term by augmenting the factor matrix as eL = [L⊤, σIk]⊤, where Ik is the k×k identity matrix, and ey = [yT, 0T k] T is the y vector with k zeroes appended: ED = σ−2 y⊤y −ey⊤eL (eL⊤eL)−1 eL⊤ey (10) 1The inducing set can be incrementally constructed, as in [1], however we found no benefit to this. 3 Now, let eL = QR be a QR factorization of eL, where Q ∈R(n+k)×k has orthogonal columns and R ∈Rk×k is invertible. The first two terms in the objective simplify further to ED = σ−2 ∥y∥2 −∥Q⊤ey∥2 (11) EC = (n −k) log(σ2) + 2 log |R| . (12) 3.2 Factorization update Here we present the mechanics of the swap update algorithm, see [3] for pseudo-code. Suppose we wish to swap inducing point i with candidate point j in Im, the inducing set of size m. We first modify the factor matrices in order to remove point i from Im, i.e. to downdate the factors. Then we update all the key terms using one step of Cholesky and QR factorization with the new point j. Downdating to remove inducing point i requires that we shift the corresponding columns/rows in the factorization to the right-most columns of eL, Q, R and to the last row of R. We can then simply discard these last columns and rows, and modify related quantities. When permuting the order of the inducing points, the underlying GP model is invariant, but the matrices in the factored representation are not. If needed, any two points in Im, can be permuted, and the Cholesky or QR factors can be updated in time O(mn). This is done with the efficient pivot permutation presented in the Appendix of [1], with minor modifications to account for the augmented form of eL. 
In this way, downdating and removing i take O(mn) time, as does the updating with point j. After downdating, we have factors L̃m−1, Qm−1, Rm−1, and inducing set Im−1. To add j to Im−1, and update the factors to rank m, one step of Cholesky factorization is performed with point j, for which, ideally, the new column to append to L̃ is formed as
  ℓm = (K − K̂m−1)[:, j] / √( (K − K̂m−1)[j, j] )  (13)
where K̂m−1 = Lm−1 Lm−1ᵀ. Then, we set L̃m = [L̃m−1 ℓ̃m], where ℓ̃m is just ℓm augmented with σem = [0, 0, ..., σ, ..., 0, 0]ᵀ. The final updates are Qm = [Qm−1 qm], where qm is given by Gram–Schmidt orthogonalization,
  qm = ( (I − Qm−1Qm−1ᵀ) ℓ̃m ) / ∥(I − Qm−1Qm−1ᵀ) ℓ̃m∥ ,
and Rm is updated from Rm−1 so that L̃m = QmRm.

3.3 Evaluating candidates

Next we show how to select candidates for inclusion in the inducing set. We first derive the exact change in the objective due to adding an element to Im−1. Later we will provide an approximation to this objective change that can be computed efficiently. Given an inducing set Im−1, and matrices L̃m−1, Qm−1, and Rm−1, we wish to evaluate the change in Eq. 5 for Im = Im−1 ∪ {j}. That is, ∆F ≡ F(θ, Im−1) − F(θ, Im) = (∆ED + ∆EC + ∆EV)/2, where, based on the mechanics of the incremental updates above, one can show that
  ∆ED = σ⁻² ( ỹᵀ(I − Qm−1Qm−1ᵀ) ℓ̃m )² / ∥(I − Qm−1Qm−1ᵀ) ℓ̃m∥²  (14)
  ∆EC = log σ² − log ∥(I − Qm−1Qm−1ᵀ) ℓ̃m∥²  (15)
  ∆EV = σ⁻² ∥ℓm∥²  (16)
This gives the exact decrease in the objective function after adding point j. For a single point this evaluation is O(mn), so evaluating all n − m points would cost O(mn²).

3.3.1 Fast approximate cost reduction

While O(mn²) is prohibitive, computing the exact change is not required. Rather, we only need a ranking of the best few candidates.
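The exact decrease (14)–(16) can be validated against a brute-force recomputation of the free energy (5). The sketch below is our own code (in particular, the zero-padding of Q that handles the growing augmented system is our bookkeeping choice), and confirms the two computations agree:

```python
import numpy as np

rng = np.random.default_rng(6)

def partial_chol(K, pivots):
    # Partial Cholesky with given pivots: L @ L.T is the Nystrom approximation.
    L = np.zeros((K.shape[0], len(pivots)))
    R = K.astype(float).copy()
    for k, i in enumerate(pivots):
        L[:, k] = R[:, i] / np.sqrt(R[i, i])
        R -= np.outer(L[:, k], L[:, k])
    return L

def free_energy(K, y, I, sigma2):
    # Brute-force variational free energy, Eq. (5).
    n = len(y)
    Kh = K[:, I] @ np.linalg.solve(K[np.ix_(I, I)], K[I, :])
    A = Kh + sigma2 * np.eye(n)
    return 0.5 * (y @ np.linalg.solve(A, y) + np.linalg.slogdet(A)[1]
                  + (np.trace(K) - np.trace(Kh)) / sigma2)

n, sigma2 = 30, 0.1
sigma = np.sqrt(sigma2)
Z = rng.uniform(-3, 3, size=(n, 1))
K = np.exp(-0.5 * (Z - Z.T) ** 2)           # toy RBF kernel matrix
y = rng.normal(size=n)

I = [2, 11, 20]                             # current inducing set I_{m-1}
j = 7                                       # candidate to add
k = len(I)

L = partial_chol(K, I)
resid = K - L @ L.T
ell = resid[:, j] / np.sqrt(resid[j, j])    # Eq. (13)

# Augmented quantities; Q gains a zero row because the augmented system
# grows by one row when the inducing set grows.
L_t = np.vstack([L, sigma * np.eye(k)])
Q = np.linalg.qr(L_t)[0]
Q = np.vstack([Q, np.zeros((1, k))])
ell_t = np.concatenate([ell, np.zeros(k), [sigma]])
y_t = np.concatenate([y, np.zeros(k + 1)])

r = ell_t - Q @ (Q.T @ ell_t)               # (I - Q Q^T) ell_tilde
dE_D = (y_t @ r) ** 2 / (r @ r) / sigma2    # Eq. (14)
dE_C = np.log(sigma2) - np.log(r @ r)       # Eq. (15)
dE_V = (ell @ ell) / sigma2                 # Eq. (16)
dF = 0.5 * (dE_D + dE_C + dE_V)
```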
Thus, instead of evaluating the change in the objective exactly, we use an efficient approximation based on a small number, z, of training points which provide information about the residual between the current low-rank covariance matrix (based on inducing points) and the full covariance matrix. After this approximation proposes a candidate, we use the actual objective to decide whether to include it. The techniques below reduce the complexity of evaluating all n − m candidates to O(zn). To compute the change in objective for one candidate, we need the new column of the updated Cholesky factorization, ℓm. In Eq. (13) this vector is a (normalized) column of the residual K − K̂m−1 between the full kernel matrix and the Nyström approximation. Now consider the full Cholesky decomposition K = L*L*ᵀ, where L* = [Lm−1, L(Jm−1)] is constructed with Im−1 as the first pivots and Jm−1 = {1, ..., n}\Im−1 as the remaining pivots, so the residual becomes K − K̂m−1 = L(Jm−1) L(Jm−1)ᵀ. We approximate L(Jm−1) by a rank z ≪ n matrix, Lz, by taking z points from Jm−1 and performing a partial Cholesky factorization of K − K̂m−1 using these pivots. The residual approximation becomes K − K̂m−1 ≈ LzLzᵀ, and thus ℓm ≈ (LzLzᵀ)[:, j] / √( (LzLzᵀ)[j, j] ). The pivots used to construct Lz are called information pivots; their selection is discussed in Sec. 3.3.2. The approximations to ∆ED, ∆EC, and ∆EV, Eqs. (14)–(16), for all candidate points involve the following terms: diag(LzLzᵀLzLzᵀ), yᵀLzLzᵀ, and (Qk−1[1 : n, :])ᵀLzLzᵀ. The first term can be computed in time O(z²n), and the other two in O(zmn) with careful ordering of the matrix multiplications.² Computing Lz costs O(z²n), but this can be avoided, since the set of information pivots changes by at most one element at a time, i.e., when an information pivot is added to the inducing set and needs to be replaced. The techniques in Sec. 3.2 bring the associated update cost to O(zn) by updating Lz rather than recomputing it.
These z information pivots are equivalent to the "look-ahead" steps of Bach and Jordan's CSI algorithm, but as described in Sec. 3.3.2, there is a more effective way to select them.

3.3.2 Ensuring a good approximation

The selection of the information pivots determines the approximate objective, and hence the candidate proposal. To ensure a good approximation, the CSI algorithm [1] greedily selects points to find an approximation of the residual K − K̂m−1 in Eq. (13) that is optimal in terms of a bound on the trace norm. The goal, however, is to approximate Eqs. (14)–(16). By analyzing the role of the residual matrix, we see that the information pivots provide a low-rank approximation to the orthogonal complement of the space spanned by the current inducing set. With a fixed set of information pivots, parts of that subspace may never be captured. This suggests that we might occasionally update the entire set of information pivots. Although information pivots are changed when one is moved into the inducing set, we find empirically that this is not sufficient. Instead, at regular intervals we replace the entire set of information pivots by random selection. We find this works better than optimizing the information pivots as in [1].

Figure 1: Exact vs. approximate costs, based on the 1D example of Sec. 4, with z = 10, n = 200.

Figure 1 compares the exact and approximate cost reduction for candidate inducing points (left), and their respective rankings (right). The approximation is shown to work well. It is also robust to changes in the number of information pivots and the frequency of updates. When bad candidates are proposed, they are rejected after evaluating the change in the true objective.
We find that rejection rates are typically low during early iterations (< 20%), but increase as optimization nears convergence (to 30% or 40%). Rejection rates also increase for sparser models, where each inducing point plays a more critical role and is harder to replace.

3.4 Hybrid optimization

The overall hybrid optimization procedure performs block coordinate descent in the inducing points and the continuous hyperparameters. It alternates between discrete and continuous phases until improvement in the objective is below a threshold or the computational time budget is exhausted. In the discrete phase, inducing points are considered for swapping with the hyperparameters fixed. With the factorization and efficient candidate evaluation above, swapping an inducing point i ∈ Im proceeds as follows: (I) downdate the factorization matrices as in Sec. 3.2 to remove i; (II) compute the true objective function value Fm−1 over the downdated model with Im\{i}, using (11), (12), and (9); (III) select a replacement candidate using the fast approximate cost change from Sec. 3.3.1; (IV) evaluate the exact objective change, using (14), (15), and (16); (V) add the exact change to the true objective Fm−1 to get the objective value with the new candidate. If this improves, we include the candidate in I and update the matrices as in Sec. 3.2; otherwise it is rejected and we revert to the factorization with i. (VI) If needed, update the information pivots as in Secs. 3.3.1 and 3.3.2. After each discrete optimization step we fix the inducing set I and optimize the hyperparameters using non-linear conjugate gradients (CG). The equivalence in (6) allows us to compute the gradient with respect to the hyperparameters analytically using the Nyström form. In practice, because we alternate the phases for many training epochs, attempting to swap every inducing point in each epoch is unnecessary, just as there is no need to run hyperparameter optimization until convergence. As long as all inducing set points are eventually considered, we find that optimized models can achieve similar performance with shorter learning times.

²Both can be further reduced to O(zn) by appropriate caching during the updates of Q, R, L̃, and Lz.

Figure 2: Test performance on discrete datasets. (Top row) BindingDB: the value at each marker is the average of 150 runs (50-fold random train/test splits times 3 random initializations). (Bottom row) HoG dataset: each marker is the average of 10 randomly initialized runs.

4 Experiments and analysis

For the experiments that follow we jointly learn inducing points and hyperparameters, a more challenging task than learning inducing points with known hyperparameters [12, 14]. For all but the 1D example, the number of inducing points swapped per epoch is min(60, m). The maximum number of function evaluations per epoch in CG hyperparameter optimization is min(20, max(15, 2d)), where d is the number of continuous hyperparameters. Empirically, we find the algorithm is robust to changes in these limits. We use two performance measures: (a) standardized mean squared error (SMSE), (1/N) Σt (ŷt − yt)² / σ̂⋆², where σ̂⋆² is the sample variance of the test outputs {yt}, and (b) standardized negative log probability (SNLP), defined in [11].

4.1 Discrete input domain

We first show results on two discrete datasets with kernels that are not differentiable in the input variable x.
Because continuous relaxation methods are not applicable, we compare to discrete selection methods, namely, random selection as a baseline (Random), greedy subset-optimal selection of Titsias [15] with either 16 or 512 candidates (Titsias-16 and Titsias-512), and the Informative Vector Machine [8] (IVM). For learning continuous hyperparameters, each method optimizes the same objective using non-linear CG. Care is taken to ensure consistent initialization and termination criteria [3]. For our algorithm we use z = 16 information pivots with random selection (CholQR-z16). Later, we show how variants of our algorithm trade off speed and performance. Additionally, we also compare to least-squares kernel regression using CSI (in Fig. 3(c)). The first discrete dataset, from bindingdb.org, concerns the prediction of binding affinity for a target (Thrombin) from the 2D chemical structure of small molecules (represented as graphs). We do 50-fold random splits to 3660 training points and 192 test points for repeated runs. We use a compound kernel, comprising 14 different graph kernels, and 15 continuous hyperparameters (one noise variance and 14 data variances). In the second task, from [2], the goal is to predict 3D human joint position from histograms of HoG image features [6]. Training and test sets have 4819 and 4811 data points. Because our goal is a general-purpose sparsification method for GP regression, we make no attempt at the more difficult problem of modelling the multivariate output structure in the regression, as in [2]. Instead, we predict the vertical position of joints independently, using a histogram intersection kernel [9] with four hyperparameters: one noise variance, and three data variances corresponding to the kernel evaluated over the HoG from each of three cameras. We select and show results on the representative left wrist here (see [3] for other joints, and more details about the datasets and kernels used).

Figure 3: Training time versus test performance on discrete datasets. (a) The average BindingDB training time; (b) the average BindingDB objective function value at convergence; (d) and (e) show test scores versus training time with m = 32 for a single run; (c) shows the trade-off between training time and testing SMSE on the HoG dataset with m = 32, for various methods including multiple variants of CholQR and CSI; (f) a zoomed-in version of (c) comparing the variants of CholQR.

The results in Figs. 2 and 3 show that CholQR-z16 outperforms the baseline methods in terms of test-time predictive power with significantly lower training time. Titsias-16 and Titsias-512 show similar test performance, but they are two to four orders of magnitude slower than CholQR-z16 (see Figs. 3(d) and 3(e)). Indeed, Fig. 3(a) shows that the training time for CholQR-z16 is comparable to IVM and Random selection, but with much better performance. The poor performance of Random selection highlights the importance of selecting good inducing points, as no amount of hyperparameter optimization can correct for poor inducing points. Fig.
3(a) also shows IVM to be somewhat slower due to the increased number of iterations needed, even though, per epoch, IVM is faster than CholQR. When stopped earlier, IVM test performance further degrades. Finally, Figs. 3(c) and 3(f) show the trade-off between test SMSE and training time for variants of CholQR, with baselines and CSI kernel regression [1]. For CholQR we consider different numbers of information pivots (denoted z8, z16, z64, and z128), and different strategies for their selection, including random selection, optimization as in [1] (denoted OI), and adaptively growing the information pivot set (denoted AA; see [3] for details). These variants of CholQR trade off speed and performance (3(f)), and all significantly outperform the other methods (3(c)); CSI, which uses grid search to select hyperparameters, is slow and exhibits higher SMSE.

4.2 Continuous input domain

Although CholQR was developed for discrete input domains, it can be competitive on continuous domains. To that end, we compare to SPGP [14] and IVM [8], using RBF kernels with one lengthscale parameter per input dimension: κ(xi, xj) = c exp(−0.5 Σt bt (xi(t) − xj(t))²), where the sum runs over the d input dimensions t. We show results from both the PP log likelihood and variational objectives, suffixed by MLE and VAR.

Figure 4: Snelson's 1D example: prediction mean (red curves); one standard deviation in prediction uncertainty (green curves); inducing point initialization (black points at the top of each figure); learned inducing point locations (the cyan points at the bottom, also overlaid on the data for CholQR). Panels: (a, b) CholQR-MLE, (c) SPGP, (d, e) CholQR-VAR, (f) SPGP.

Figure 5: Test scores on KIN40K as a function of the number of inducing points: for each number of inducing points, the value plotted is averaged over 10 runs from 10 different (shared) initializations.
We use the 1D toy dataset of [14] to show how the PP likelihood with gradient-based optimization of inducing points is easily trapped in local minima. Fig. 4(a) and 4(d) show that for this dataset our algorithm does not get trapped when initialization is poor (as in Fig. 1c of [14]). To simulate the sparsity of data in high-dimensional problems we also down-sample the dataset to 20 points (every 10th point). Here CholQR outperforms SPGP (see Fig. 4(b), 4(e), and 4(c)). By comparison, Fig. 4(f) shows that SPGP learned with a more uniform initial distribution of inducing points avoids this local optimum and achieves a better negative log likelihood of 11.34, compared to 14.54 in Fig. 4(c). Finally, we compare CholQR to SPGP [14] and IVM [8] on a large dataset. KIN40K concerns nonlinear forward kinematic prediction. It has 8D real-valued inputs and scalar outputs, with 10K training and 30K test points. We perform linear de-trending and re-scaling as pre-processing. For SPGP we use the implementation of [14]. Fig. 5 shows that CholQR-VAR outperforms IVM in terms of SMSE and SNLP. Both CholQR-VAR and CholQR-MLE outperform SPGP in terms of SMSE on KIN40K with large m, but SPGP exhibits better SNLP. This disparity between the SMSE and SNLP measures for CholQR-MLE is consistent with findings about the PP likelihood in [15]. Recently, Chalupka et al. [4] introduced an empirical evaluation framework for approximate GP methods, and showed that subset of data (SoD) often compares favorably to more sophisticated sparse GP methods. Our preliminary experiments using this framework suggest that CholQR outperforms SPGP in speed and predictive scores; and compared to SoD, CholQR is slower during training, but proportionally faster during testing since CholQR finds a much sparser model to achieve the same predictive scores. In future work, we will report results on the complete suite of benchmark tests.
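The SMSE scores reported above follow the usual GP-regression convention of normalizing the test mean squared error by the variance of the test targets, so that a trivial predictor that always outputs the target mean scores about 1. A small sketch under that assumption (the paper does not spell out its exact normalizer):

```python
import numpy as np

def smse(y_true, y_pred):
    # standardized MSE: test MSE divided by the variance of the test targets,
    # so a constant predictor at the target mean scores exactly 1
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

y = np.array([0.0, 1.0, 2.0, 3.0])
```

Scores well below 1 therefore indicate a model that explains a substantial fraction of the target variance.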
5 Conclusion
We describe an algorithm for selecting inducing points for Gaussian process sparsification. It optimizes principled objective functions, and is applicable to discrete domains and non-differentiable kernels. On such problems it is shown to be as good as or better than competing methods and, for methods whose predictive behavior is similar, our method is several orders of magnitude faster. On continuous domains the method is competitive. Extension to the SPGP form of covariance approximation would be interesting future research.
References
[1] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. ICML, pp. 33–40, 2005.
[2] L. Bo and C. Sminchisescu. Twin Gaussian processes for structured prediction. IJCV, 87:28–52, 2010.
[3] Y. Cao, M. A. Brubaker, D. J. Fleet, and A. Hertzmann. Project page: supplementary material and software for efficient optimization for sparse Gaussian process regression. www.cs.toronto.edu/~caoy/opt_sgpr, 2013.
[4] K. Chalupka, C. K. I. Williams, and I. Murray. A framework for evaluating approximation methods for Gaussian process regression. JMLR, 14(1):333–350, February 2013.
[5] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Comput., 14:641–668, 2002.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. IEEE CVPR, pp. 886–893, 2005.
[7] S. S. Keerthi and W. Chu. A matching pursuit approach to sparse Gaussian process regression. NIPS 18, pp. 643–650, 2006.
[8] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. NIPS 15, pp. 609–616, 2003.
[9] J. J. Lee. Libpmk: A pyramid match toolkit. TR: MIT-CSAIL-TR-2008-17, MIT CSAIL, 2008. URL http://hdl.handle.net/1721.1/41070.
[10] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for machine learning.
Adaptive computation and machine learning. MIT Press, 2006.
[12] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. AI & Stats. 9, 2003.
[13] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13, pp. 619–625, 2001.
[14] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. NIPS 18, pp. 1257–1264, 2006.
[15] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. JMLR, 5:567–574, 2009.
[16] C. Walder, K. I. Kim, and B. Schölkopf. Sparse multiscale Gaussian process regression. ICML, pp. 1112–1119, 2008.
Robust learning of low-dimensional dynamics from large neural ensembles
David Pfau, Eftychios A. Pnevmatikakis, Liam Paninski
Center for Theoretical Neuroscience, Department of Statistics, Grossman Center for the Statistics of Mind, Columbia University, New York, NY
pfau@neurotheory.columbia.edu, {eftychios,liam}@stat.columbia.edu
Abstract
Recordings from large populations of neurons make it possible to search for hypothesized low-dimensional dynamics. Finding these dynamics requires models that take into account biophysical constraints and can be fit efficiently and robustly. Here, we present an approach to dimensionality reduction for neural data that is convex, does not make strong assumptions about dynamics, does not require averaging over many trials and is extensible to more complex statistical models that combine local and global influences. The results can be combined with spectral methods to learn dynamical systems models. The basic method extends PCA to the exponential family using nuclear norm minimization. We evaluate the effectiveness of this method using an exact decomposition of the Bregman divergence that is analogous to variance explained for PCA. We show on model data that the parameters of latent linear dynamical systems can be recovered, and that even if the dynamics are not stationary we can still recover the true latent subspace. We also demonstrate an extension of nuclear norm minimization that can separate sparse local connections from global latent dynamics. Finally, we demonstrate improved prediction on real neural data from monkey motor cortex compared to fitting linear dynamical models without nuclear norm smoothing.
1 Introduction
Progress in neural recording technology has made it possible to record spikes from ever larger populations of neurons [1]. Analysis of these large populations suggests that much of the activity can be explained by simple population-level dynamics [2].
Typically, this low-dimensional activity is extracted by principal component analysis (PCA) [3, 4, 5], but in recent years a number of extensions have been introduced in the neuroscience literature, including jPCA [6] and demixed principal component analysis (dPCA) [7]. A downside of these methods is that they do not treat either the discrete nature of spike data or the positivity of firing rates in a statistically principled way. Standard practice smooths the data substantially or averages it over many trials, losing information about fine temporal structure and inter-trial variability. One alternative is to fit a more complex statistical model directly from spike data, where temporal dependencies are attributed to latent low-dimensional dynamics [8, 9]. Such models can account for the discreteness of spikes by using point-process models for the observations, and can incorporate temporal dependencies into the latent state model. State space models can include complex interactions such as switching linear dynamics [10] and direct coupling between neurons [11]. These methods have drawbacks too: they are typically fit by approximate EM [12] or other methods that are prone to local minima, the number of latent dimensions is typically chosen ahead of time, and a certain class of possible dynamics must be chosen before doing dimensionality reduction.
In this paper we attempt to combine the computational tractability of PCA and related methods with the statistical richness of state space models. Our approach is convex and based on recent advances in system identification using nuclear norm minimization [13, 14, 15], a convex relaxation of matrix rank minimization. Compared to recent work on spectral methods for fitting state space models [16], our method more easily generalizes to handle different nonlinearities, non-Gaussian, nonlinear, and non-stationary latent dynamics, and direct connections between observed neurons.
When applied to model data, we find that: (1) low-dimensional subspaces can be accurately recovered, even when the dynamics are unknown and nonstationary; (2) standard spectral methods can robustly recover the parameters of state space models when applied to data projected into the recovered subspace; (3) the confounding effects of common input for inferring sparse synaptic connectivity can be ameliorated by accounting for low-dimensional dynamics. In applications to real data we find comparable performance to models trained by EM with less computational overhead, particularly as the number of latent dimensions grows. The paper is organized as follows. In section 2 we introduce the class of models we aim to fit, which we call low-dimensional generalized linear models (LD-GLM). In section 3 we present a convex formulation of the parameter learning problem for these models, as well as a generalization of variance explained to LD-GLMs used for evaluating results. In section 4 we show how to fit these models using the alternating direction method of multipliers (ADMM). In section 5 we present results on real and artificial neural datasets. We discuss the results and future directions in section 6.
2 Low dimensional generalized linear models
Our model is closely related to the generalized linear model (GLM) framework for neural data [17]. Unlike the standard GLM, where the inputs driving the neurons are observed, we assume that the driving activity is unobserved, but lies on some low dimensional subspace. This can be a useful way of capturing spontaneous activity, or accounting for strong correlations in large populations of neurons. Thus, instead of fitting a linear receptive field, the goal of learning in low-dimensional GLMs is to accurately recover the latent subspace of activity. Let x_t ∈ R^m be the value of the dynamics at time t.
To turn this into spiking activity, we project this into the space of neurons: y_t = C x_t + b is a vector in R^n, n ≫ m, where each dimension of y_t corresponds to one neuron. C ∈ R^{n×m} denotes the subspace of the neural population and b ∈ R^n the bias vector for all the neurons. As y_t can take on negative values, we cannot use this directly as a firing rate, and so we pass each element of y_t through some convex, log-concave, increasing point-wise nonlinearity f : R → R_+. Popular choices for nonlinearities include f(x) = exp(x) and f(x) = log(1 + exp(x)). To account for biophysical effects such as refractory periods, bursting, and direct synaptic connections, we include a linear dependence on spike history before the nonlinearity. The firing rate f(y_t) is used as the rate for some point process ξ, such as a Poisson process, to generate a vector of spike counts s_t for all neurons at that time:

y_t = C x_t + Σ_{τ=1}^k D_τ s_{t−τ} + b    (1)
s_t ∼ ξ(f(y_t))    (2)

Much of this paper is focused on estimating y_t, which is the natural parameter for the Poisson distribution in the case f(·) = exp(·), and so we refer to y_t as the natural rate to avoid confusion with the actual rate f(y_t). We will see that our approach works with any point process with a log-concave likelihood, not only Poisson processes. We can extend this simple model by adding dynamics to the low-dimensional latent state, including input-driven dynamics. In this case the model is closely related to the common input model used in neuroscience [11], the difference being that the observed input is added to x_t rather than being directly mapped to y_t. The case without history terms and with linear Gaussian dynamics is a well-studied state space model for neural data, usually fit by EM [19, 12, 20], though a consistent spectral method has been derived [16] for the case f(·) = exp(·).
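Equations (1)–(2) without spike history are easy to simulate; a sketch with hypothetical sizes and the softplus nonlinearity f(x) = log(1 + exp(x)) (none of these constants come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # f(x) = log(1 + exp(x)): convex, log-concave, increasing
    return np.log1p(np.exp(x))

# hypothetical sizes: n neurons, m latent dimensions, T time bins
n, m, T = 20, 3, 100
C = rng.normal(scale=1.0 / 3, size=(n, m))  # latent-to-neuron subspace
b = rng.normal(-2.0, 1.0, size=n)           # per-neuron biases
X = rng.normal(size=(m, T))                 # latent trajectory x_t (dynamics left unspecified)

Y = C @ X + b[:, None]                      # natural rates y_t = C x_t + b (no history terms)
S = rng.poisson(softplus(Y))                # spike counts s_t ~ Poisson(f(y_t)), Eq. (2)
```

Note that the row-centered natural-rate matrix has rank at most m, which is exactly the low-rank structure the nuclear norm penalty of Section 3 exploits.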
Unlike these methods, our approach largely decouples the problem of dimensionality reduction and learning dynamics: even in the case of nonstationary, non-Gaussian dynamics where A, B and Cov[ϵ] change over time, we can still robustly recover the latent subspace spanned by x_t.
3 Learning
3.1 Nuclear norm minimization
In the case that the spike history terms D_{1:k} are zero, the natural rate at time t is y_t = C x_t + b, so all y_t are elements of some m-dimensional affine space given by the span of the columns of C offset by b. Ideally, our estimate of y_{1:T} would trade off between making the dimension of this affine space as low as possible and the likelihood of y_{1:T} as high as possible. Let Y = [y_1, . . . , y_T] be the n × T matrix of natural rates and let A(·) be the row mean-centering operator A(Y) = Y − (1/T) Y 1_T 1_T^T. Then rank(A(Y)) = m. Ideally we would minimize λ√(nT) rank(A(Y)) − Σ_{t=1}^T log p(s_t|y_t), where λ controls how much we trade off between a simple solution and the likelihood of the data; however, general rank minimization is a hard, non-convex problem. Instead we replace the matrix rank with its convex envelope: the sum of singular values, or nuclear norm ∥·∥_∗ [13], which can be seen as the analogue of the ℓ1 norm for vector sparsity. Our problem then becomes:

min_Y λ√(nT) ||A(Y)||_∗ − Σ_{t=1}^T log p(s_t|y_t)    (3)

Since the log likelihood scales linearly with the size of the data, and the singular values scale with the square root of the size, we also add a factor of √(nT) in front of the nuclear norm term. In the examples in this paper, we assume spikes are drawn from a Poisson distribution:

log p(s_t|y_t) = Σ_{i=1}^N [ s_{it} log f(y_{it}) − f(y_{it}) − log s_{it}! ]    (4)

However, this method can be used with any point process with a log-concave likelihood. This can be viewed as a convex formulation of exponential family PCA [21, 22] which does not fix the number of principal components ahead of time.
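A minimal NumPy sketch of the ingredients of Eqs. (3)–(4) for the exponential nonlinearity f = exp (so log p(s|y) = s·y − exp(y) up to the constant −log s! term), together with singular value thresholding, the proximal operator of the nuclear norm that reappears in the ADMM updates of Section 4; sizes and data are illustrative:

```python
import numpy as np

def center_rows(Y):
    # A(Y) = Y - (1/T) Y 1_T 1_T^T : subtract each row's mean
    return Y - Y.mean(axis=1, keepdims=True)

def objective(Y, S, lam):
    # lam * sqrt(n*T) * ||A(Y)||_* - sum_t log p(s_t | y_t), with f = exp,
    # dropping the constant -log(s!) term of the Poisson likelihood
    n, T = Y.shape
    nuc = np.linalg.norm(center_rows(Y), ord='nuc')
    loglik = np.sum(S * Y - np.exp(Y))
    return lam * np.sqrt(n * T) * nuc - loglik

def soft_threshold(x, t):
    # S_t(x) = sgn(x) * max(0, |x| - t)
    return np.sign(x) * np.maximum(0.0, np.abs(x) - t)

def svt(M, t):
    # proximal operator of t * ||.||_* : shrink the singular values of M by t
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (soft_threshold(s, t)[:, None] * Vt)

rng = np.random.default_rng(1)
Y0 = rng.normal(size=(5, 8))
S0 = rng.poisson(np.exp(Y0))
Z0 = svt(Y0, 1.0)
```

Increasing `lam` trades data fit for a lower-rank (smaller nuclear norm) centered natural-rate matrix, exactly the trade-off Eq. (3) describes.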
3.2 Stable principal component pursuit
The model above is appropriate for cases where the spike history terms D_τ are zero, that is, the observed data can entirely be described by some low-dimensional global dynamics. In real data neurons exhibit history-dependent behavior like bursting and refractory periods. Moreover, if the recorded neurons are close to each other, some may have direct synaptic connections. In this case D_τ may have full column rank, so from Eq. 1 it is clear that y_t is no longer restricted to a low-dimensional affine space. In most practical cases we expect D_τ to be sparse, since most neurons are not connected to one another. In this case the natural rates matrix combines a low-rank term and a sparse term, and we can minimize a convex function that trades off between the rank of one term via the nuclear norm, the sparsity of another via the ℓ1 norm, and the data log likelihood:

min_{Y, D_{1:k}, L} λ√(nT) ||A(L)||_∗ + γ (T/n) Σ_{τ=1}^k ||D_τ||_1 − Σ_{t=1}^T log p(s_t|y_t)    (5)
s.t. Y = L + Σ_{τ=1}^k D_τ S_τ, with S_τ = [0_{n,τ}, s_1, . . . , s_{T−τ}],

where 0_{n,τ} is a matrix of zeros of size n × τ, used to account for boundary effects. This is an extension of stable principal component pursuit [23], which separates sparse and low-rank components of a noise-corrupted matrix. Again, to ensure that every term in the objective function of Eq. 5 has roughly the same scaling O(nT), we have multiplied each ℓ1 norm by T/n. One can also consider the use of a group sparsity penalty where each group collects a specific synaptic weight across all the k time lags.
3.3 Evaluation through Bregman divergence decomposition
We need a way to evaluate the model on held out data, without assuming a particular form for the dynamics. As we recover a subspace spanned by the columns of Y rather than a single parameter, this presents a challenge. One option is to compute the marginal likelihood of the data integrated over the entire subspace, but this is computationally difficult.
For the case of PCA, we can project the held out data onto a subspace spanned by principal components and compute what fraction of total variance is explained by this subspace. We extend this approach beyond the linear Gaussian case by use of a generalized Pythagorean theorem. For any exponential family with natural parameters θ, link function g, function F such that ∇F = g^{−1}, and sufficient statistic T, the log likelihood can be written as D_F[θ || g(T(x))] − h(x), where D_·[·||·] is a Bregman divergence [24]:

D_F[x||y] = F(x) − F(y) − (x − y)^T ∇F(y).

Intuitively, the Bregman divergence between x and y is the difference between the value of F(x) and the value of the best linear approximation around y. Bregman divergences obey a generalization of the Pythagorean theorem: for any affine set Ω and points x ∉ Ω and y ∈ Ω, it follows that D_F[x||y] = D_F[x||Π_Ω(x)] + D_F[Π_Ω(x)||y], where Π_Ω(x) = argmin_{ω∈Ω} D_F[x||ω] is the projection of x onto Ω. In the case of squared error this is just a linear projection, and for the case of GLM log likelihoods this is equivalent to maximum likelihood estimation when the natural parameters are restricted to Ω. Given a matrix of natural rates recovered from training data, we compute the fraction of Bregman divergence explained by a sequence of subspaces as follows. Let u_i be the i-th singular vector of the recovered natural rates. Let b be the mean natural rate, and let y_t^{(q)} be the maximum likelihood natural rates restricted to the space spanned by u_1, . . . , u_q:

y_t^{(q)} = Σ_{i=1}^q u_i v_{it}^{(q)} + Σ_{τ=1}^k D_τ s_{t−τ} + b
v_t^{(q)} = argmax_v log p( s_t | Σ_{i=1}^q u_i v_{it} + Σ_{τ=1}^k D_τ s_{t−τ} + b )    (6)

Here v_t^{(q)} is the projection of y_t^{(q)} onto the singular vectors. Then the divergence from the mean explained by the q-th dimension is given by

( Σ_t D_F[ y_t^{(q−1)} || y_t^{(q)} ] ) / ( Σ_t D_F[ y_t^{(0)} || g(s_t) ] )    (7)

where y_t^{(0)} is the bias b plus the spike history terms. The sum of divergences explained over all q is equal to one by virtue of the generalized Pythagorean theorem.
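The Bregman divergence defined above is straightforward to compute; the sketch below checks the two cases used in the text: F(x) = (1/2)||x||^2 (squared error) and F(x) = Σ_i exp(x_i) (Poisson with f = exp). All names are ours:

```python
import numpy as np

def bregman(F, gradF, x, y):
    # D_F[x || y] = F(x) - F(y) - (x - y)^T grad F(y)
    return F(x) - F(y) - (x - y) @ gradF(y)

# Gaussian case: F(x) = 0.5 ||x||^2 gives half the squared Euclidean distance
F_gauss = lambda x: 0.5 * np.dot(x, x)
g_gauss = lambda x: x
# Poisson case with f = exp: F(x) = sum_i exp(x_i); grad F = exp (the inverse link g^{-1})
F_pois = lambda x: np.sum(np.exp(x))
g_pois = lambda x: np.exp(x)

x = np.array([0.5, -1.0])
y = np.array([0.0, 0.0])
```

Convexity of F makes the divergence nonnegative and zero only at x = y, which is what makes the decomposition of Eq. (7) behave like "variance explained".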
For Gaussian noise, g(x) = x and F(x) = (1/2)||x||^2, and this is exactly the variance explained by each principal component; for Poisson noise, g(x) = log(x) and F(x) = Σ_i exp(x_i). This decomposition is only exact if f = g^{−1} in Eq. 4, that is, if the nonlinearity is exponential. However, for other nonlinearities this may still be a useful approximation, and gives us a principled way of evaluating the goodness of fit of a learned subspace.
4 Algorithms
Minimizing Eq. 3 and Eq. 5 is difficult, because the nuclear and ℓ1 norms are not differentiable everywhere. By using the alternating direction method of multipliers (ADMM), we can turn these problems into a sequence of tractable subproblems [25]. While not always the fastest method for solving a particular problem, we use it for its simplicity and generality. We describe the algorithm below, with more details in the supplemental materials.
4.1 Nuclear norm minimization
To find the optimal Y we alternate between minimizing an augmented Lagrangian with respect to Y, minimizing with respect to an auxiliary variable Z, and performing gradient ascent on a Lagrange multiplier Λ. The augmented Lagrangian is

L_ρ(Y, Z, Λ) = λ√(nT) ||Z||_∗ − Σ_t log p(s_t|y_t) + ⟨Λ, A(Y) − Z⟩ + (ρ/2) ||A(Y) − Z||_F^2    (8)

which is a smooth function of Y and can be minimized by Newton’s method. The gradient and Hessian of L_ρ with respect to Y at iteration k are

∇_Y L_ρ = −∇_Y Σ_t log p(s_t|y_t) + ρ A(Y) − A^T(ρ Z_k − Λ_k)    (9)
∇²_Y L_ρ = −∇²_Y Σ_t log p(s_t|y_t) + ρ I_{nT} − (ρ/T) (1_T ⊗ I_n)(1_T ⊗ I_n)^T    (10)

where ⊗ is the Kronecker product. Note that the first two terms of the Hessian are diagonal and the third is low-rank, so the Newton step can be computed in O(nT) time by using the Woodbury matrix inversion lemma. The minimum of Eq.
8 with respect to Z is given exactly by singular value thresholding:

Z_{k+1} = U S_{λ√(nT)/ρ}(Σ) V^T,    (11)

where U Σ V^T is the singular value decomposition of A(Y_{k+1}) + Λ_k/ρ, and S_t(·) is the (pointwise) soft thresholding operator S_t(x) = sgn(x) max(0, |x| − t). Finally, the update to Λ is a simple gradient ascent step: Λ_{k+1} = Λ_k + ρ(A(Y_{k+1}) − Z_{k+1}), where ρ is a step size that can be chosen.
4.2 Stable principal component pursuit
To extend ADMM to the problem in Eq. 5 we only need to add one extra step, taking the minimum over the connectivity matrices with the other parameters held fixed. To simplify the notation, we group the connectivity matrices into a single matrix D = (D_1, . . . , D_k), and stack the different time-shifted matrices of spike histories on top of one another to form a single spike history matrix H. The objective then becomes

min_{Y,D} λ√(nT) ||A(Y − DH)||_∗ + γ (T/n) ||D||_1 − Σ_t log p(s_t|y_t)    (12)

where we have substituted Y − DH for the variable L, and the augmented Lagrangian is

L_ρ(Y, Z, D, Λ) = λ√(nT) ||Z||_∗ + γ (T/n) ||D||_1 − Σ_t log p(s_t|y_t) + ⟨Λ, A(Y − DH) − Z⟩ + (ρ/2) ||A(Y − DH) − Z||_F^2    (13)

The updates for Λ and Z are almost unchanged, except that A(Y) becomes A(Y − DH). Likewise for Y the only change is one additional term in the gradient:

∇_Y L_ρ = −∇_Y Σ_t log p(s_t|y_t) + ρ A(Y) − A^T(ρ Z + ρ A(DH) − Λ)    (14)

Minimizing over D requires solving:

argmin_D γ (T/n) ||D||_1 + (ρ/2) ||A(DH) + Z − A(Y) − Λ/ρ||_F^2    (15)

This objective has the same form as LASSO regression. We solve this using ADMM as well, but any method for LASSO regression can be substituted.
5 Experiments
We demonstrate our method on a number of artificial datasets and one real dataset. First, we show in the absence of spike history terms that the true low-dimensional subspace can be recovered in the limit of large data, even when the dynamics are nonstationary. Second, we show that spectral methods can accurately recover the transition matrix when dynamics are linear.
Third, we show that local connectivity can be separated from low-dimensional common input. Lastly, we show that nuclear-norm penalized subspace recovery leads to improved prediction on real neural data recorded from macaque motor cortex. Model data was generated with 8 latent dimensions and 200 neurons, without any external input. For linear dynamical systems, the transition matrix was sampled from a Gaussian distribution, and the eigenvalues rescaled so the magnitude fell between .9 and .99 and the angle between ±π/10, yielding slow and stable dynamics.
Figure 1: Recovering low-dimensional subspaces from nonstationary model data. While the subspace remains the same, the dynamics switch between 5 different linear systems. Left top: one dimension of the latent trajectory, switching from one set of dynamics to another (red line). Left middle: firing rates of a subset of neurons during the same switch. Left bottom: covariance between spike counts for different neurons during each epoch of linear dynamics. Right top: angle between the true subspace and top principal components directly from spike data, from natural rates recovered by nuclear norm minimization, and from the true natural rates. Right bottom: fraction of Bregman divergence explained by the top 1, 5 or 10 dimensions from nuclear norm minimization. Dotted lines are variance explained by the same number of principal components. For λ < 0.1 the divergence explained by a given number of dimensions exceeds the variance explained by the same number of PCs.
The linear projection C was a random Gaussian matrix with standard deviation 1/3, and the biases b_i were sampled from N(−4, 1), which we found gave reasonable firing rates with nonlinearity f(x) = log(1 + exp(x)). To investigate the variance of our estimates, we generated multiple trials of data with the same parameters but different innovations. We first sought to show that we could accurately recover the subspace in which the dynamics take place even when those dynamics are not stationary. We split each trial into 5 epochs and in each epoch resampled the transition matrix A and set the covariance of innovations ϵ_t to QQ^T, where Q is a random Gaussian matrix. We performed nuclear norm minimization on data generated from this model, varying the smoothing parameter λ from 10^−3 to 10, and compared the subspace angle between the top 8 principal components and the true matrix C. We repeated this over 10 trials to compute the variance of our estimator. We found that when smoothing was optimized the recovered subspace was significantly closer to the true subspace than the top principal components taken directly from spike data. Increasing the amount of data from 1000 to 10000 time bins significantly reduced the average subspace angle at the optimal λ. The top PCs of the true natural rates Y, while not spanning exactly the same space as C due to differences between the mean column and true bias b, were still closer to the true subspace than the result of nuclear norm minimization. We also computed the fraction of Bregman divergence explained by the sequence of spaces spanned by successive principal components, solving Eq. 6 by Newton’s method. We did not find a clear drop at the true dimensionality of the subspace, but we did find that a larger share of the divergence could be explained by the top dimensions than by PCA directly on spikes. Results are presented in Fig. 1.
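The subspace-angle comparison used in this experiment can be computed from principal angles between column spaces; a sketch with sizes matching the simulation (200 neurons, 8 latent dimensions), where the helper name is ours:

```python
import numpy as np

def max_subspace_angle(A, B):
    # largest principal angle (radians) between the column spaces of A and B;
    # the singular values of Qa^T Qb are the cosines of the principal angles
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines.min(), -1.0, 1.0))

rng = np.random.default_rng(0)
C = rng.normal(size=(200, 8))      # a "true" 8-dimensional subspace of R^200
C2 = C @ rng.normal(size=(8, 8))   # the same subspace in a different basis (invertible w.h.p.)
D = rng.normal(size=(200, 8))      # an unrelated random subspace
```

The angle is invariant to the choice of basis within each subspace, so it compares the recovered span of C rather than C itself.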
To show that the parameters of a latent dynamical system can be recovered, we investigated the performance of spectral methods on model data with linear Gaussian latent dynamics. As the model is a linear dynamical system with GLM output, we call this a GLM-LDS model. After estimating natural rates by nuclear norm minimization with λ = 0.01 on 10 trials of 10000 time bins with unit-variance innovations ϵ_t, we fit the transition matrix A by subspace identification (SSID) [26]. The transition matrix is only identifiable up to a change of coordinates, so we evaluated our fit by comparing the eigenvalues of the true and estimated A. Results are presented in Fig. 2. As expected, SSID directly on spikes led to biased estimates of the transition matrix. By contrast, SSID on the output of nuclear norm minimization had little bias, and seemed to perform almost as well as SSID directly on the true natural rates. We found that other methods for fitting linear dynamical systems from the estimated natural rates were biased, as was SSID on the result of nuclear norm minimization without mean-centering (see the supplementary material for more details).
Figure 2: Recovered eigenvalues (plotted in the complex plane) for the transition matrix of a linear dynamical system from model neural data. Black: true eigenvalues. Red: recovered eigenvalues. (2a) Eigenvalues recovered from the true natural rates. (2b) Eigenvalues recovered from subspace identification directly on spike counts. (2c) Eigenvalues recovered from subspace identification on the natural rates estimated by nuclear norm minimization.
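Comparing eigenvalues is the natural test here because a change of coordinates x_t → T x_t maps the transition matrix A to T A T^{−1}, which leaves the spectrum unchanged. A quick numerical check (all matrices illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
A = rng.normal(size=(m, m)) / np.sqrt(m)       # a latent transition matrix
T = rng.normal(size=(m, m)) + 5.0 * np.eye(m)  # a well-conditioned change of basis
A_equiv = T @ A @ np.linalg.inv(T)             # the same dynamics in new coordinates

# eigenvalues (and hence the characteristic polynomial) are similarity-invariant
eig = np.linalg.eigvals(A)
eig_equiv = np.linalg.eigvals(A_equiv)
```

Any coordinate-dependent comparison of A entries would conflate estimation error with an arbitrary basis choice; the spectrum avoids that.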
We incorporated spike history terms into our model data to see whether local connectivity and global dynamics could be separated. Our model network consisted of 50 neurons, randomly connected with 95% sparsity, and synaptic weights sampled from a unit-variance Gaussian. Data were sampled from 10000 time bins. The parameters λ and γ were both varied from 10^−10 to 10^4. We found that we could recover synaptic weights with an r^2 up to .4 on this data by combining both a nuclear norm and ℓ1 penalty, compared to at most .25 for an ℓ1 penalty alone, or 0.33 for a nuclear norm penalty alone. Somewhat surprisingly, at the extreme of either no nuclear norm penalty or a dominant nuclear norm penalty, increasing the ℓ1 penalty never improved estimation. This suggests that in a regime with strong common inputs, some kind of correction is necessary not only for sparse penalties to achieve optimal performance, but to achieve any improvement over maximum likelihood. It is also of interest that the peak in r^2 is near a sharp transition to total sparsity.
Figure 3: Connectivity matrices recovered by SPCP on model data. Left: r^2 between true and recovered synaptic weights across a range of parameters (λ and γ). The position in parameter space of the data to the right is highlighted by the stars. Axes are on a log scale. Right: scatter plots of true versus recovered synaptic weights (at the optimal, a small, and a large λ), illustrating the effect of the nuclear norm term.
Finally, we demonstrated the utility of our method on real recordings from a large population of neurons.
The data consists of 125 well-isolated units from a multi-electrode recording in macaque motor cortex while the animal was performing a pinball task in two dimensions. Previous studies on this data [27] have shown that information about arm velocity can be reliably decoded. As the electrodes are spaced far apart, we do not expect any direct connections between the units, and so leave out the ℓ1 penalty term from the objective. We used 800 seconds of data binned every 100 ms for training and 200 seconds for testing. We fit linear dynamical systems by subspace identification as in Fig. 2, but as we did not have access to a “true” linear dynamical system for comparison, we evaluated our model fits by approximating the held-out log likelihood by Laplace-Gaussian filtering [28].
Figure 4: Log likelihood of held out motor cortex data versus number of latent dimensions for different latent linear dynamical systems (λ = 10^−4, 10^−3, 10^−2, 3.16×10^−2, and EM). Prediction improves as λ increases, until it is comparable to EM.
We also fit the GLM-LDS model by running randomly initialized EM for 50 iterations for models with up to 30 latent dimensions (beyond which training was prohibitively slow). We found that a strong nuclear norm penalty improved prediction by several hundred bits per second, and that fewer dimensions were needed for optimal prediction as the nuclear norm penalty was increased. The best fit models predicted held out data nearly as well as models trained via EM, even though nuclear norm minimization is not directly maximizing the likelihood of a linear dynamical system.
6 Discussion
The method presented here has a number of straightforward extensions.
If the dimensionality of the latent state is greater than the dimensionality of the data, for instance when there are long-range history dependencies in a small population of neurons, we would extend the natural rate matrix Y so that each column contains multiple time steps of data. Y is then a block-Hankel matrix. Constructing the block-Hankel matrix is also a linear operation, so the objective is still convex and can be efficiently minimized [15]. If there are also observed inputs u_t then the term inside the nuclear norm should also include a projection orthogonal to the row space of the inputs. This could enable joint learning of dynamics and receptive fields for small populations of neurons with high-dimensional inputs. Our model data results on connectivity inference have important implications for practitioners working with highly correlated data. GLM models with sparsity penalties have been used to infer connectivity in real neural networks [29], and in most cases these networks are only partially observed and have large amounts of common input. We offer one promising route to removing the confounding influence of unobserved correlated inputs, which explicitly models the common input rather than conditioning on it [30]. It remains an open question what kinds of dynamics can be learned from the recovered natural parameters. In this paper we have focused on linear systems, but nuclear norm minimization could just as easily be combined with spectral methods for switching linear systems and general nonlinear systems. We believe that the techniques presented here offer a powerful, extensible and robust framework for extracting structure from neural activity.
Acknowledgments
Thanks to Zhang Liu, Michael C. Grant, Lars Buesing and Maneesh Sahani for helpful discussions, and Nicho Hatsopoulos for providing data. This research was generously supported by an NSF CAREER grant.
References
[1] I. H. Stevenson and K. P.
Kording, “How advances in neural recording affect data analysis,” Nature Neuroscience, vol. 14, no. 2, pp. 139–142, 2011.
[2] M. Okun, P. Yger, S. L. Marguet, F. Gerard-Mercier, A. Benucci, S. Katzner, L. Busse, M. Carandini, and K. D. Harris, “Population rate dynamics and multineuron firing patterns in sensory cortex,” The Journal of Neuroscience, vol. 32, no. 48, pp. 17108–17119, 2012.
[3] K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan, “Optical imaging of neuronal populations during decision-making,” Science, vol. 307, no. 5711, pp. 896–901, 2005.
[4] C. K. Machens, R. Romo, and C. D. Brody, “Functional, but not anatomical, separation of “what” and “when” in prefrontal cortex,” The Journal of Neuroscience, vol. 30, no. 1, pp. 350–360, 2010.
[5] M. Stopfer, V. Jayaraman, and G. Laurent, “Intensity versus identity coding in an olfactory system,” Neuron, vol. 39, no. 6, pp. 991–1004, 2003.
[6] M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, “Neural population dynamics during reaching,” Nature, 2012.
[7] W. Brendel, R. Romo, and C. K. Machens, “Demixed principal component analysis,” Advances in Neural Information Processing Systems, vol. 24, pp. 1–9, 2011.
[8] L. Paninski, Y. Ahmadian, D. G. Ferreira, S. Koyama, K. R. Rad, M. Vidne, J. Vogelstein, and W. Wu, “A new look at state-space models for neural data,” Journal of Computational Neuroscience, vol. 29, no. 1-2, pp. 107–126, 2010.
[9] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, “Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity,” Journal of Neurophysiology, vol. 102, no. 1, pp. 614–635, 2009.
[10] B. Petreska, B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, “Dynamical segmentation of single trials from population neural data,” Advances in Neural Information Processing Systems, vol. 24, 2011.
[11] J. E. Kulkarni and L.
Paninski, “Common-input models for multiple neural spike-train data,” Network: Computation in Neural Systems, vol. 18, no. 4, pp. 375–407, 2007.
[12] A. Smith and E. Brown, “Estimating a state-space model from point process observations,” Neural Computation, vol. 15, no. 5, pp. 965–991, 2003.
[13] M. Fazel, H. Hindi, and S. P. Boyd, “A rank minimization heuristic with application to minimum order system approximation,” Proceedings of the American Control Conference, vol. 6, pp. 4734–4739, 2001.
[14] Z. Liu and L. Vandenberghe, “Interior-point method for nuclear norm approximation with application to system identification,” SIAM Journal on Matrix Analysis and Applications, vol. 31, pp. 1235–1256, 2009.
[15] Z. Liu, A. Hansson, and L. Vandenberghe, “Nuclear norm system identification with missing inputs and outputs,” Systems & Control Letters, vol. 62, no. 8, pp. 605–612, 2013.
[16] L. Buesing, J. Macke, and M. Sahani, “Spectral learning of linear dynamics from generalised-linear observations with application to neural population data,” Advances in Neural Information Processing Systems, vol. 25, 2012.
[17] L. Paninski, J. Pillow, and E. Simoncelli, “Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model,” Neural Computation, vol. 16, no. 12, pp. 2533–2561, 2004.
[18] E. Chornoboy, L. Schramm, and A. Karr, “Maximum likelihood identification of neural point process systems,” Biological Cybernetics, vol. 59, no. 4-5, pp. 265–275, 1988.
[19] J. Macke, J. Cunningham, M. Byron, K. Shenoy, and M. Sahani, “Empirical models of spiking in neural populations,” Advances in Neural Information Processing Systems, vol. 24, 2011.
[20] M. Collins, S. Dasgupta, and R. E. Schapire, “A generalization of principal component analysis to the exponential family,” Advances in Neural Information Processing Systems, vol. 14, 2001.
[21] V. Solo and S. A. Pasha, “Point-process principal components analysis via geometric optimization,” Neural Computation, vol.
25, no. 1, pp. 101–122, 2013.
[22] Z. Zhou, X. Li, J. Wright, E. Candes, and Y. Ma, “Stable principal component pursuit,” Proceedings of the IEEE International Symposium on Information Theory, pp. 1518–1522, 2010.
[23] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with Bregman divergences,” The Journal of Machine Learning Research, vol. 6, pp. 1705–1749, 2005.
[24] S. P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[25] P. Van Overschee and B. De Moor, “Subspace identification for linear systems: theory, implementation, applications,” 1996.
[26] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski, “Population decoding of motor cortical activity using a generalized linear model with hidden states,” Journal of Neuroscience Methods, vol. 189, no. 2, pp. 267–280, 2010.
[27] S. Koyama, L. Castellanos Pérez-Bolde, C. R. Shalizi, and R. E. Kass, “Approximate methods for state-space models,” Journal of the American Statistical Association, vol. 105, no. 489, pp. 170–180, 2010.
[28] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, “Spatiotemporal correlations and visual signalling in a complete neuronal population,” Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[29] M. Harrison, “Conditional inference for learning the network structure of cortical microcircuits,” in 2012 Joint Statistical Meeting, (San Diego, CA), 2012.
Causal Inference on Time Series using Restricted Structural Equation Models

Jonas Peters∗ Seminar for Statistics, ETH Zürich, Switzerland peters@math.ethz.ch
Dominik Janzing MPI for Intelligent Systems, Tübingen, Germany janzing@tuebingen.mpg.de
Bernhard Schölkopf MPI for Intelligent Systems, Tübingen, Germany bs@tuebingen.mpg.de

Abstract

Causal inference uses observational data to infer the causal structure of the data generating system. We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we provide general identifiability results. They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial and real data and code is provided.

1 Introduction

We first introduce the problem of causal inference on iid data, that is, in the case with no time structure. Let therefore Xi, i ∈ V, be a set of random variables and let G be a directed acyclic graph (DAG) on V describing the causal relationships between the variables. Given iid samples from the joint distribution of (Xi)i∈V, we aim at estimating the underlying causal structure of the variables Xi, i ∈ V. Constraint- or independence-based methods [e.g.
Spirtes et al., 2000] assume that the joint distribution is Markov, and faithful with respect to G. The PC algorithm, for example, exploits conditional independences for reconstructing the Markov equivalence class of G (some edges remain undirected). We say that the distribution of (Xi)i∈V satisfies a Structural Equation Model [Pearl, 2009] w.r.t. DAG G if for all i ∈ V we can write Xi = fi(PAi, Ni), where PAi are the parents of node i in G. Additionally, we require (Ni)i∈V to be jointly independent. By restricting the function class one can identify the bivariate case: Shimizu et al. [2006] show that if P(X,Y) allows for Y = a · X + NY with NY ⊥⊥ X, then P(X,Y) also allows for X = b · Y + NX with NX ⊥⊥ Y only if (X, NY) are jointly Gaussian (⊥⊥ stands for statistical independence). This idea has led to the extension to nonlinear additive functions f(x, n) = g(x) + n [Hoyer et al., 2009]. Peters et al. [2011b] show how identifiability for two variables generalizes to the multivariate case.

We now turn to the case of time series data. For each i from a finite V, let therefore (Xi t)t∈N be a time series. Xt denotes the vector of time series values at time t. We call the infinite graph that contains each variable Xi t as a node the full time graph. The summary time graph contains all #V components of the time series as vertices and an arrow between Xi and Xj, i ≠ j, if there is an arrow from Xi t−k to Xj t in the full time graph for some k.

(∗ Significant parts of this research were done while Jonas Peters was at the MPI Tübingen.)

We are given a sample (X1, . . . , XT) of a multivariate time series and estimate the true summary time graph. I.i.d. methods are not directly applicable because a common history might introduce complicated dependencies between contemporaneous data Xt and Yt. Nevertheless, several methods dealing with time series data are motivated by the iid setting (Section 2). Many of them encounter similar problems: when the model assumptions are violated (e.g.
in the presence of a confounder) the methods draw false causal conclusions. Furthermore, they do not include nonlinear instantaneous effects. In this work, we extend the Structural Equation Model framework to time series data and call this approach time series models with independent noise (TiMINo). These models include nonlinear and instantaneous effects. They assume Xt to be a function of all direct causes and some noise variable, the collection of which is supposed to be jointly independent. This model formulation comes with substantial benefits: In Section 3 we prove that for TiMINo models the full causal structure can be recovered from the distribution. Section 4 introduces an algorithm (TiMINo causality) that recovers the model structure from a finite sample. It can be equipped with any algorithm for fitting time series. If the data do not satisfy the model assumptions, TiMINo causality remains mostly undecided instead of drawing wrong causal conclusions. Section 5 deals with time series that have been shifted by different (unknown) time delays. Experiments on simulated and real data sets are shown in Section 6.

2 Existing methods

Granger causality [Granger, 1969] (G-causality for the remainder of the article) is based on the following idea: Xi does not Granger cause Xj if including the past of Xi does not help in predicting Xj t given the past of all other time series Xk, k ≠ i. In principle, “all other” means all other information in the world. In practice, one is limited to Xk, k ∈ V. The phrase “does not help” is translated into a significance test assuming a multivariate time series model. If the data follow the assumed model, e.g. the VAR model below, G-causality is sometimes interpreted as testing whether Xi t−h, h > 0 is independent of Xj t given Xk t−h, k ∈ V \ {i}, h > 0 [see Florens and Mouchart, 1982, Eichler, 2011, Chu and Glymour, 2008, Quinn et al., 2011, and ANLTSM below].
Linear G-causality considers a VAR model: Xt = Σ_{τ=1}^p A(τ) Xt−τ + Nt, where Xt and Nt are vectors and the A(τ) are matrices. For checking whether Xi G-causes Xj one fits a full VAR model Mfull to Xt and a restricted VAR model Mrestr to Xt that predicts Xi t without using Xj (setting the corresponding coefficients of Xj to zero for all 1 ≤ τ ≤ p). One tests whether the reduction of the residual sum of squares (RSS) of Xi t is significant by using the following test statistic:

T := [(RSSrestr − RSSfull)/(pfull − prestr)] / [RSSfull/(N − pfull)],

where pfull and prestr are the number of parameters in the respective models. For the significance test we use T ∼ F(pfull − prestr, N − pfull). G-causality has been extended to nonlinear G-causality [e.g. Chen et al., 2004, Ancona et al., 2004]. In this paper we focus on an extension for the bivariate case proposed by Bell et al. [1996]. It is based on generalized additive models (gams) [Hastie and Tibshirani, 1990]: Xi t = Σ_{τ=1}^p Σ_{j=1}^n fi,j,τ(Xj t−τ) + Ni t, where Nt is a #V-dimensional noise vector. Bell et al. [1996] utilize the same F statistic as above using estimated degrees of freedom.

Following Bell et al. [1996], Chu and Glymour [2008] introduce additive nonlinear time series models (ANLTSM for short) for performing relaxed conditional independence tests: If including one variable, e.g. X1 t−1, into a model for X2 t that already includes X2 t−2, X2 t−1, and X1 t−2 does not improve the predictability of X2 t, then X1 t−1 is said to be independent of X2 t given X2 t−2, X2 t−1, X1 t−2 (if the maximal time lag is 2). Chu and Glymour [2008] propose a method based on constraint-based methods like FCI [Spirtes et al., 2000] in order to infer the causal structure exploiting those conditional independence statements. The instantaneous effects are assumed to be linear and the confounders linear and instantaneous. TS-LiNGAM [Hyvärinen et al., 2008] is based on LiNGAM [Shimizu et al., 2006] from the iid setting.
It allows for instantaneous effects and assumes all relationships to be linear. These approaches encounter some methodological problems.

Instantaneous effects: G-causality cannot deal with instantaneous effects. E.g., when Xt is causing Yt, including either of the two time series helps for predicting the other, and G-causality infers X →Y and Y →X. ANLTSM and TS-LiNGAM only allow for linear instantaneous effects. Theorem 1 shows that the summary time graph may still be identifiable when the instantaneous effects are linear and the variables are jointly Gaussian. TS-LiNGAM does not work in these situations.

Confounders: G-causality might fail when there is a confounder between Xt and Yt+1, say. The path between Xt and Yt+1 cannot be blocked by conditioning on any observed variables; G-causality infers X →Y. We will see empirically that TiMINo remains undecided instead; Entner and Hoyer [2010] and Janzing et al. [2009] provide (partial) results for the iid setting. ANLTSM does not allow for nonlinear confounders or confounders with time structure, and TS-LiNGAM may fail, too (Exp. 1).

Robustness: Theorem 1 (ii) shows that performing general conditional independence tests suffices. The conditioning sets, however, are too large and the tests are performed under a simple model (e.g. VAR). If the model is misspecified, one may draw wrong conclusions without noticing (e.g. Exp. 3). For TiMINo (defined below), Lemma 1 shows that after fitting and checking the model by using unconditional independence tests, the difficult conditional independences have been checked implicitly. A model check is not new [e.g. Hoyer et al., 2009, Entner and Hoyer, 2010] but is thus an effective tool. We can equip bivariate G-causality with a test for cross-correlations; this is not straightforward for multivariate G-causality. Furthermore, using cross-correlation as an independence test does not always suffice (see Section 2).
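The linear G-causality test described above can be sketched in a few lines (our illustration; the function name and simulated example are assumptions, and a real analysis would compare T against an F quantile rather than eyeballing its size):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic T for 'x G-causes y' under a VAR(p): compare full vs. restricted RSS."""
    T = len(y)
    rows = []
    for t in range(p, T):
        # intercept, own lags y_{t-1..t-p}, candidate lags x_{t-1..t-p}
        rows.append(np.concatenate(([1.0], y[t - p:t][::-1], x[t - p:t][::-1])))
    Z = np.array(rows)
    target = y[p:]
    n, k_full = Z.shape
    k_restr = 1 + p  # intercept + own lags only
    beta_f, *_ = np.linalg.lstsq(Z, target, rcond=None)
    rss_f = np.sum((target - Z @ beta_f) ** 2)
    beta_r, *_ = np.linalg.lstsq(Z[:, :k_restr], target, rcond=None)
    rss_r = np.sum((target - Z[:, :k_restr] @ beta_r) ** 2)
    return ((rss_r - rss_f) / (k_full - k_restr)) / (rss_f / (n - k_full))

# X drives Y with a one-step lag: T is large for x -> y and small for y -> x.
rng = np.random.default_rng(1)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()
print(granger_f(x, y), granger_f(y, x))
```

Note that this test only compares residual variances; it says nothing about residual independence, which is the property TiMINo exploits below.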
3 Structural Equation models for time series: TiMINo

Definition 1 Consider a time series Xt = (Xi t)i∈V whose finite dimensional distributions are absolutely continuous w.r.t. a product measure (e.g. there is a pdf or pmf). The time series satisfies a TiMINo if there is a p > 0 and for all i ∈ V there are sets PAi 0 ⊆ X^{V\{i}}, PAi k ⊆ X^V, such that for all t

Xi t = fi((PAi p)t−p, . . . , (PAi 1)t−1, (PAi 0)t, Ni t),   (1)

with Ni t jointly independent over i and t and, for each i, Ni t identically distributed in t. The corresponding full time graph is obtained by drawing arrows from any node that appears in the right-hand side of (1) to Xi t. We require the full time graph to be acyclic. Section 6 shows examples.

Theorem 1 (i) assumes that (1) follows an identifiable functional model class (IFMOC). This means that (I) causal minimality holds, a weak form of faithfulness that assumes a statistical dependence between cause and effect given all other parents [Spirtes et al., 2000]. And (II), all fi come from a function class that is small enough to make the bivariate case identifiable. Peters et al. [2011b] give a precise definition. Important examples include nonlinear functions with additive Gaussian noise and linear functions with additive non-Gaussian noise. Due to space constraints, proofs are provided in the appendix. In the one-dimensional linear case, model (1) is time-reversible if and only if the noise is normally distributed [Peters et al., 2009].

Theorem 1 Suppose that Xt can be represented as a TiMINo (1) with PA(Xi t) = ∪_{k=0}^p (PAi k)t−k being the direct causes of Xi t and that one of the following holds:

(i) Equations (1) come from an IFMOC (e.g. nonlinear functions fi with additive Gaussian noise Ni t or linear functions fi with additive non-Gaussian noise Ni t). The summary time graph can contain cycles.

(ii) Each component exhibits a time structure (PA(Xi t) contains at least one Xi t−k), the joint distribution is faithful w.r.t.
the full time graph, and the summary time graph is acyclic.

Then the full time graph can be recovered from the joint distribution of Xt. In particular, the true causal summary time graph is identifiable. (Neither of the conditions (i) and (ii) implies the other.)

Many function classes satisfy (i) [Peters et al., 2013]. To estimate fi from data (E[Xi t | Xt−p, . . . , Xt−1] for additive noise) we require stationarity and/or α-mixing, or geometric ergodicity [e.g. Chu and Glymour, 2008]. Condition (ii) shows how time structure simplifies the causal inference problem. For iid data the true graph is not identifiable in the linear Gaussian case; with time structure it is. We believe that condition (ii) is more difficult to verify in practice; faithfulness is not required for (i). In (ii), the acyclicity prevents the full time graph from being fully connected up to order p.

4 A practical method: TiMINo causality

The algorithm for TiMINo causality is based on the theoretical finding in Theorem 1. It takes the time series data as input and outputs either a DAG that estimates the summary time graph or remains undecided. It tries to fit a TiMINo model to the data and outputs the corresponding graph. If no model with independent residuals is found, it outputs “I do not know”. This becomes intractable for a time series with many components; for time series without feedback loops, we adapt a method for additive noise models without time structure suggested by Mooij et al. [2009] that avoids enumerating all DAGs. Algorithm 1 shows the modified version. As reported by Mooij et al. [2009], the time complexity is O(d² · f(n, d) · t(n, d)), where d is the number of time series, n the sample size, and f(n, d) and t(n, d) the complexity of the user-specified regression method and independence test, respectively. Peters et al. [2013] discuss the algorithm's correctness.
We present our choices but do not claim their optimality; any other fitting method and independence test can be used, too.

Algorithm 1 TiMINo causality
1: Input: Samples from a d-dimensional time series of length T: (X1, . . . , XT), maximal order p
2: S := (1, . . . , d)
3: repeat
4:   for k in S do
5:     Fit TiMINo for Xk t using Xk t−p, . . . , Xk t−1 and Xi t−p, . . . , Xi t−1, Xi t for i ∈ S \ {k}
6:     Test whether the residuals are independent of Xi, i ∈ S.
7:   end for
8:   Choose k∗ to be the k with the weakest dependence. (If there is no k with independence, break and output: “I do not know - bad model fit”.)
9:   S := S \ {k∗}; pa(k∗) := S
10: until length(S) = 1
11: For all k remove all parents that are not required to obtain independent residuals.
12: Output: (pa(1), . . . , pa(d))

Depending on the assumed model class, TiMINo causality has to be provided with a fitting method. Here, we chose the R functions ar for VAR fitting (fi(p1, . . . , pr, n) = ai,1 · p1 + . . . + ai,r · pr + n), gam for generalized additive models (fi(p1, . . . , pr, n) = fi,1(p1) + . . . + fi,r(pr) + n) [e.g. Bell et al., 1996] and gptk for GP regression (fi(p1, . . . , pr, n) = fi(p1, . . . , pr) + n). We call the methods TiMINo-linear, TiMINo-gam and TiMINo-GP, respectively. For the first two, AIC determines the order of the process. All fitting methods are used in a “standard way”. For gam we used the built-in nonparametric smoothing splines. For the GP we used a zero mean, a squared exponential covariance function and a Gaussian likelihood. The hyper-parameters are automatically chosen by marginal likelihood optimization. Code is available online.

To test for independence between a residual time series Nk t and another time series Xi t, i ∈ S, we shift the latter time series up to the maximal order ±p (but at least up to ±4); for each of those combinations we perform HSIC [Gretton et al., 2008], an independence test for iid data. One could also use a test based on cross-correlation that can be derived from Thm 11.2.3.
in [Brockwell and Davis, 1991]. This is related to what is done in transfer function modeling [e.g. §13.1 in Brockwell and Davis, 1991], which is restricted to two time series and linear functions. As opposed to the iid setting, testing for cross-correlation is often enough in order to reject a wrong model. Only Experiments 1 and 5 describe situations in which cross-correlations fail. To reduce the running time one can use cross-correlation to determine the graph structure and use HSIC as a final model check. For HSIC we used a Gaussian kernel; as in [Gretton et al., 2008], the bandwidth is chosen such that the median distance of the input data leads to an exponent of one. Testing for non-vanishing autocorrelations in the residuals is not included yet.

If the model assumptions only hold in some parts of the summary time graph, we can still try to discover parts of the causal structure. Our code package contains this option. We obtained positive results on simulated data but there is no corresponding identifiability statement.

Our method has some potential weaknesses. It can happen that one is able to fit a model only in the wrong direction. This, however, requires an “unnatural” fine tuning of the functions [Janzing and Steudel, 2010] and is relevant only when there are time series without time structure or the data are non-faithful (see Theorem 1). The null hypothesis of the independence test represents independence, although the scientific discovery of a causal relationship should rather be the alternative hypothesis. This fact may lead to wrong causal conclusions (instead of “I do not know”) on small data sets. The effect is strengthened by the Bonferroni correction of the HSIC-based independence test; one may require modifications for a high number of time series components. For large sample sizes, even the smallest differences between the true data generating process and the model may lead to rejected independence tests [discussed by Peters et al., 2011a].
5 TiMINo for Shifted Time Series

In some applications, we observe the components of the time series with varying time delay. Instead of Xi t we are then working with ˜Xi t = Xi t−ℓ, with 0 ≤ ℓ ≤ k. E.g., in functional magnetic resonance imaging, brain activity is measured through an increased blood flow in the corresponding area. It has been reported that these data often suffer from different time delays [e.g. Buxton et al., 1998, Smith et al., 2011]. Given the (shifted) measurements ˜Xi t, we therefore have to cope with causal relationships that go backward in time. This is only resolved when going back to the unobserved true data Xi t. Measures like Granger causality will fail in these situations. This does not necessarily have to be the case, however. The structure still remains identifiable even if we observe ˜Xi t instead of Xi t (the following theorem generalizes the second part of Theorem 1 and is proved accordingly)1:

Theorem 2 Assume condition (ii) from Theorem 1 with ˜Xi t = Xi t−ℓ, where 0 ≤ ℓ ≤ k are unknown time delays. Then, the full time graph of ˜Xt is identifiable from the joint distribution of ˜Xt. In particular, the summary time graphs of ˜Xt and Xt are identical and therefore identifiable.

As opposed to Theorem 1, we cannot identify the full time graph of Xt. It may not be possible, for example, to distinguish between a lag two effect from X1 to X2 and a corresponding lag one effect with a shifted time series X2. The method for recovering the network structure stays almost the same as the one for non-shifted time series; only line 5 of Algorithm 1 has to be updated: we additionally include Xi t+ℓ for 0 ≤ ℓ ≤ k for all i ∈ S \ {k}. While TiMINo exploits an asymmetry between cause and effect emerging from restricted structural equations, G-causality exploits the asymmetry of time. The latter asymmetry is broken when considering shifted time series.
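Before turning to the experiments, the sink search of Algorithm 1 can be made concrete in a heavily simplified sketch (ours: linear least-squares fits in place of gam/GP regression, and a maximum cross-correlation score in place of the HSIC test; all function names are assumptions):

```python
import numpy as np

def fit_residuals(data, k, S, p=1):
    """Regress X^k_t on its own lags plus lags and instantaneous values of the others in S."""
    T = data.shape[0]
    cols = [np.ones(T - p)]
    for lag in range(1, p + 1):
        cols.append(data[p - lag:T - lag, k])
    for i in S:
        if i != k:
            cols.append(data[p:, i])  # instantaneous effect
            for lag in range(1, p + 1):
                cols.append(data[p - lag:T - lag, i])
    Z = np.column_stack(cols)
    y = data[p:, k]
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

def dependence(res, series, max_shift=4):
    """Crude stand-in for HSIC: largest |cross-correlation| over small shifts."""
    n = len(res)
    series = series[-n:]
    best = 0.0
    for h in range(max_shift + 1):
        best = max(best, abs(np.corrcoef(res[h:], series[:n - h])[0, 1]),
                   abs(np.corrcoef(res[:n - h], series[h:])[0, 1]))
    return best

def timino_order(data, p=1):
    """Repeatedly remove the variable whose residuals look most independent (the sink)."""
    S = list(range(data.shape[1]))
    sinks = []
    while len(S) > 1:
        scores = {k: max(dependence(fit_residuals(data, k, S, p), data[p:, i])
                         for i in S if i != k) for k in S}
        k_star = min(scores, key=scores.get)
        sinks.append(k_star)
        S.remove(k_star)
    return S + sinks[::-1]  # causal order, roots first

# X -> Y example: the sink search should place X before Y in the causal order.
rng = np.random.default_rng(2)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.4 * rng.standard_normal()
print(timino_order(np.column_stack([x, y])))
```

Unlike the full algorithm, this sketch never outputs “I do not know”; adding that behavior amounts to thresholding the winning dependence score, as in line 8 of Algorithm 1.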
6 Experiments

6.1 Artificial Data

We always included instantaneous effects, fitted models up to order p = 2 or p = 6 and set α = 0.05.

Experiment 1: Confounder with time lag. We simulate 100 data sets (length 1000) from Zt = a · Zt−1 + NZ,t, Xt = 0.6 · Xt−1 + 0.5 · Zt−1 + NX,t, Yt = 0.6 · Yt−1 + 0.5 · Zt−2 + NY,t, with a between 0 and 0.95 and the three noise series N·,t iid ∼ 0.4 · N(0, 1). Here, Z is a hidden common cause for X and Y. For all a, Xt contains information about Zt−1 and Yt+1 (see Figure 1); G-causality and TS-LiNGAM wrongly infer X →Y. For large a, Yt contains additional information about Xt+1, which leads to the wrong arrow Y →X. TiMINo causality does not decide for any a. The nonlinear methods perform very similarly (not shown). For a = 0, a cross-correlation test is not enough to reject X →Y. Further, all methods fail for a = 0 and Gaussian noise. (Similar results for a non-linear confounder.)

Experiment 2: Linear, Gaussian with instantaneous effects. We sample 100 data sets (length 2000) from Xt = A1 · Xt−1 + NX,t, Wt = A2 · Wt−1 + A3 · Xt + NW,t, Yt = A4 · Yt−1 + A5 · Wt−1 + NY,t, Zt = A6 · Zt−1 + A7 · Wt + A8 · Yt−1 + NZ,t, with N·,t ∼ 0.4 · N(0, 1) and the Ai iid from U([−0.8, −0.2] ∪ [0.2, 0.8]). We regard the graph containing X →W →Y →Z and W →Z as correct. TS-LiNGAM and G-causality are not able to recover the true structure (see Table 1). We obtain similar results for non-linear instantaneous interactions.

Experiment 3: Nonlinear, non-Gaussian without instantaneous effects. We simulate 100 data sets (length 500) from Xt = 0.8Xt−1 + 0.3NX,t, Yt = 0.4Yt−1 + (Xt−1 − 1)² + 0.3NY,t, Zt = 0.4Zt−1 + 0.5 cos(Yt−1) + sin(Yt−1) + 0.3NZ,t, with N·,t ∼ U([−0.5, 0.5]) (similar results for other noise distributions, e.g. exponential). Thus, X →Y →Z is the ground truth. Nonlinear G-causality fails since the implementation is only pairwise and it thus always infers an effect from X to Z. Linear G-causality cannot remove the nonlinear effect from Xt−2 to Zt by using Yt−1.
Also TiMINo-linear assumes a wrong model but does not make any decision. TiMINo-gam and TiMINo-GP work well on this data set (Table 2). This specific choice of parameters shows that a significant difference in performance is possible. For other parameters (e.g. less impact of the nonlinearity), G-causality and TS-LiNGAM still assume a wrong model but make fewer mistakes.

1 We believe that a corresponding statement for condition (i) holds, too.

Figure 1: Exp. 1: Part of the causal full time graph with hidden common cause Z (top left). TiMINo causality does not decide (top right), whereas G-causality and TS-LiNGAM wrongly infer causal connections between X and Y (bottom).

Table 1: Exp. 2: Gaussian data and linear instantaneous effects: only TiMINo mostly discovers the correct DAG.

DAG       G-caus. (linear)  TiMINo (linear)  TS-LiNGAM
correct   13%               83%              19%
wrong     87%               7%               81%
no dec.   0%                10%              0%

Figure 2: Exp. 4: Proportion of correct and incorrect answers versus length of the time series. TiMINo-GP works reliably for long time series; TiMINo-linear and TiMINo-gam mostly remain undecided.

Table 2: Exp. 3: Since the data are nonlinear, linear G-causality and TS-LiNGAM give wrong answers, TiMINo-lin does not decide. Nonlinear G-causality fails because it analyzes the causal structure between pairs of time series.

DAG       Granger-lin  Granger-nonlin  TiMINo-lin  TiMINo-gam  TiMINo-GP  TS-LiNGAM
correct   69%          0%              0%          95%         94%        12%
wrong     31%          100%            0%          1%          1%         88%
no dec.   0%           0%              100%        4%          5%         0%

Experiment 4: Non-additive interaction.
We simulate 100 data sets with different lengths from Xt = 0.2 · Xt−1 + 0.9NX,t, Yt = −0.5 + exp(−(Xt−1 + Xt−2)²) + 0.1NY,t, with N·,t ∼ N(0, 1). Figure 2 shows that TiMINo-linear and TiMINo-gam remain mainly undecided, whereas TiMINo-GP performs well. For small sample sizes, one observes two effects: GP regression does not obtain accurate estimates for the residuals, these estimates are not independent, and thus TiMINo-GP remains more often undecided. Also, TiMINo-gam gives more correct answers than one would expect, due to more type II errors. Linear G-causality and TS-LiNGAM give more than 90% incorrect answers, but non-linear G-causality is most often correct (not shown). Bad model assumptions do not always lead to incorrect causal conclusions.

Experiment 5: Non-linear Dependence of Residuals. In Experiment 1, TiMINo equipped with a cross-correlation test inferred a causal edge, although there was none. The opposite is also possible: Xt = −0.5 · Xt−1 + NX,t, Yt = −0.5 · Yt−1 + (Xt−1)² + NY,t, with N·,t ∼ 0.4 · N(0, 1) (length 1000). TiMINo-gam with cross-correlation infers no causal link between X and Y, whereas TiMINo-gam with HSIC correctly identifies X →Y.

Experiment 6: Shifted Time Series. We simulate 100 random DAGs with #V = 3 nodes by choosing a random ordering of the nodes and including edges with a probability of 0.6. The structural equations are additive (gam). Each component is of the form f(x) = a · max(x, −0.1) + b · sign(x)·√|x|, with a, b iid from U([−0.5, −0.2] ∪ [0.2, 0.5]). We sample time series (length 1000) from Gaussian noise and observe the sink node time series with a time delay of three. This makes all traditional methods inapplicable. The performance of linear G-causality, for example, drops from an average Structural Hamming Distance (SHD) of 0.38 without time delay to 1.73 with time delay. TiMINo-gam causality recognizes the wrong model assumption. The SHD increases from 0.13 (17 undecided cases) to 0.71 (79 undecided cases).
Adjusting for a time delay (Section 5) yields an SHD of 0.70 but many more decisions (18 undecided cases). Although it is possible to adjust for time delays, the procedure enlarges the model space, which makes rejecting all wrong models more difficult. Already #V = 5 leads to worse average SHD: G-causality: 4.5, TiMINo-gam: 1.5 (92 undecided cases) and TiMINo-gam with time delay adjustment: 2.4 (38 undecided cases).

6.2 Real Data

We fitted up to order 6 and included instantaneous effects. For TiMINo, “correct” means that TiMINo-gam is correct and TiMINo-linear is correct or undecided. TiMINo-GP always remains undecided because there are too few data points to fit such a general model. Again, α is set to 0.05.

Experiment 7: Gas Furnace. [Box et al., 2008, length 296] Xt describes the input gas rate and Yt the output CO2. We regard X →Y as being true. TS-LiNGAM, G-causality, TiMINo-lin and TiMINo-gam correctly infer X →Y. Disregarding time information leads to a wrong causal conclusion: The method described by Hoyer et al. [2009] leads to a p-value of 4.8% in the correct and 9.1% in the false direction.

Experiment 8: Old Faithful. [Azzalini and Bowman, 1990, length 194] Xt contains the duration of an eruption and Yt the time interval to the next eruption of the Old Faithful geyser. We regard X →Y as the ground truth. Although the time intervals [t, t+1] do not have the same length for all t, we model the data as two time series. TS-LiNGAM and TiMINo give correct answers, whereas linear G-causality infers X →Y, and nonlinear G-causality infers Y →X.

Experiment 9: Abalone (no time structure). The abalone data set [Asuncion and Newman, 2007] contains (among others that lead to similar results) age Xt and diameter Yt of a certain shellfish. If we model 1000 randomly chosen samples as time series, G-causality (both linear and nonlinear) infers no causal relation, as expected. TS-LiNGAM wrongly infers Y →X, which is probably due to the nonlinear relationship.
TiMINo gives the correct result. Experiment 10: Dairy (confounder). We consider 10 years of weekly prices for butter X_t and cheddar cheese Y_t (length 522, http://future.aae.wisc.edu/tab/prices.html). We expect their strong correlation to be due to the (hidden) milk price M_t: X ← M → Y. TiMINo does not decide, whereas TS-LiNGAM and G-causality wrongly infer X → Y. This may be due to different time lags of the confounder (cheese has longer storing and maturing times than butter). Experiment 11: Temperature in House. We placed temperature sensors in six rooms (1 - Shed, 2 - Outside, 3 - Kitchen Boiler, 4 - Living Room, 5 - WC, 6 - Bathroom) of a house in the Black Forest and recorded the temperature on an hourly basis (length 7265). This house is not inhabited for most of the year and lacks central heating; the few electric radiators start only if the temperature drops close to freezing. TiMINo does not decide, since the model leads to dependent residuals. Although we do not provide any theory for the following steps, we analyze the model leading to the “least dependent” residuals by setting the test level α to zero. TiMINo causality then outputs a causal ordering of the variables. We applied TiMINo-lin and TiMINo-gam to the data sets using lags up to twelve (half a day) and report the proportion of runs in which node i precedes node j (entry (i, j) of the following matrix):
0     0.25  0.83  1     1     1
0.75  0     0.83  1     1     1
0.17  0.17  0     0.75  0.33  0.33
0     0     0.25  0     0     0
0     0     0.67  1     0     0
0     0     0.67  1     1     0
This procedure reveals a sensible causal structure (we arbitrarily refer to entries larger than 0.5 as causation). 2 (outside) causes all other readings, and none of the other temperatures causes 2. 1 (shed) causes all other readings except for 2. This is wrong, but not surprising, since the shed's temperature is rather close to the outside temperature. 4 (living room) does not cause any other reading, but every other reading does cause it (the living room is the only room without any heating).
The links 5 → 3 and 6 → 3 appear spurious and come with numbers close to 0.5. These might be erroneous; however, they might also be due to the fact that sensor 3 sits on top of the kitchen boiler, which acts as a heat reservoir that delays temperature changes. The link 6 → 5 comes with a large number, but it is plausible: unlike the WC, the bathroom has a window and is thus affected directly by the outside temperature, causing fast regulation of its radiator, which is placed on a thin wooden wall facing the WC. The phase slope index [Nolte et al., 2008] performed well in Exp. 7; in all other experiments it either gave wrong results or did not decide. Due to space constraints we omit details about this method. We did not find any code for ANLTSM. 7 Conclusions and Future Work This paper shows how causal inference on time series benefits from the framework of Structural Equation Models. The identifiability statement is more general than existing results. The algorithm is based on unconditional independence tests and is applicable to multivariate, linear, nonlinear and instantaneous interactions. It contains the option of remaining undecided. While methods like Granger causality are built on the asymmetry of time direction, TiMINo additionally takes into account the identifiability emerging from restricted structural equation models. This leads to a straightforward way of dealing with (unknown) time delays in the different time series. Although an extensive evaluation on real data sets is still required, we believe that our results emphasize the potential use of causal inference methods. They may provide guidance for future interventional experiments. As future work, one may use heteroscedastic models [Chen et al., 2012] and systematically preprocess the data (removing trends, periodicities, etc.). This may reduce the number of cases where TiMINo causality is undecided. TiMINo causality evaluates a model fit by checking the independence of the residuals.
As suggested in Mooij et al. [2009] and Yamada and Sugiyama [2010], one may use the independence of the residuals as a criterion for the fitting process, or at least for order selection. 8 Appendix Lemma 1 (Markov Condition for TiMINo). If X_t = (X^i_t)_{i∈V} satisfies a TiMINo model, each variable X^i_t is conditionally independent of each of its non-descendants given its parents. Proof. With S := PA(X^i_t) = ⋃_{k=0}^{p} (PA^i_k)_{t−k} and Eq. (1) we get X^i_t |_{S=s} = f_i(s, N^i_t) for an s with p(s) > 0. Any non-descendant of X^i_t is a function of the noise variables of its ancestors and is thus independent of X^i_t given S = s. This is the only time we assume t ∈ N in this paper. □ Proof of Theorem 1. Suppose that X_t allows for two TiMINo representations that lead to different full time graphs G and G′. (i) Assume that G and G′ do not differ in the instantaneous effects: PA^i_0 (in G) = PA^i_0 (in G′) for all i. W.l.o.g., there are some k > 0 and an edge X^1_{t−k} → X^2_t, say, that is in G but not in G′. From G′ and Lemma 1 we have that X^1_{t−k} ⊥⊥ X^2_t | S, where S = ({X^i_{t−l}, 1 ≤ l ≤ p, i ∈ V} ∪ ND_t) \ {X^1_{t−k}, X^2_t}, and ND_t are all X^i_t that are non-descendants (w.r.t. instantaneous effects) of X^2_t. Applied to G, causal minimality leads to a contradiction: X^1_{t−k} is not independent of X^2_t given S. Now, let G and G′ differ in the instantaneous effects and choose S = {X^i_{t−l}, 1 ≤ l ≤ p, i ∈ V}. For each s and i we have X^i_t |_{S=s} = f_i(s, (P̃A^i_0)_t), where P̃A^i_0 are all instantaneous parents of X^i_t conditioned on S = s. All X^i_t |_{S=s} together with the instantaneous effects describe two different structures of an IFMOC. This contradicts the identifiability results by Peters et al. [2011b]. (ii) Because of Lemma 1 and faithfulness, G and G′ differ only in the instantaneous effects. But each instantaneous arrow X^i_t → X^j_t forms a v-structure together with X^j_{t−k} → X^j_t; X^j_{t−k} cannot be connected with X^i_t, since this would introduce a cycle in the summary time graph.
□ Proof of Theorem 2. Two full time graphs G and G′ for X̃_t can differ only in the directions of edges between time series. Assume X^i_t → X^j_{t+k} in G and X^i_t ← X^j_{t+k} in G′. Choose the largest such k. Then there is a v-structure X^i_{t−ℓ} → X^i_t ← X^j_{t+k} for some ℓ. A connection between X^i_{t−ℓ} and X^j_{t+k} would lead to a pair as above with a larger k. □ References
N. Ancona, D. Marinazzo, and S. Stramaglia. Radial basis function approach to nonlinear Granger causality of time series. Phys. Rev. E, 70(5):056221, 2004.
A. Asuncion and D. J. Newman. UCI repository. http://archive.ics.uci.edu/ml/, 2007.
A. Azzalini and A. W. Bowman. A look at some data on the Old Faithful Geyser. Applied Statistics, 39(3):357–365, 1990.
D. Bell, J. Kay, and J. Malley. A non-parametric approach to non-linear causality testing. Economics Letters, 51(1):7–18, 1996.
G. E. P. Box, G. M. Jenkins, and G. C. Reinsel. Time Series Analysis: Forecasting and Control. Wiley Series in Probability and Statistics. John Wiley, 2008.
P. J. Brockwell and R. A. Davis. Time Series: Theory and Methods. Springer, 2nd edition, 1991.
R. B. Buxton, E. C. Wong, and L. R. Frank. Dynamics of blood flow and oxygenation changes during brain activation: The balloon model. Magnetic Resonance in Medicine, 39(6):855–864, 1998.
Y. Chen, G. Rangarajan, J. Feng, and M. Ding. Analyzing multiple nonlinear time series with extended Granger causality. Physics Letters A, 324, 2004.
Z. Chen, K. Zhang, and L. Chan. Causal discovery with scale-mixture model for spatiotemporal variance dependencies. In NIPS 25, 2012.
T. Chu and C. Glymour. Search for additive nonlinear time series causal models. Journal of Machine Learning Research, 9:967–991, 2008.
M. Eichler. Graphical modelling of multivariate time series. Probability Theory and Related Fields, 2011.
D. Entner and P. Hoyer. Discovering unconfounded causal relationships using linear non-Gaussian models. In JSAI-isAI Workshops, 2010.
J. P. Florens and M. Mouchart. A note on noncausality. Econometrica, 50(3):583–591, 1982.
C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3):424–438, July 1969.
A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In NIPS 20, 2008.
T. J. Hastie and R. J. Tibshirani. Generalized Additive Models. London: Chapman & Hall, 1990.
P. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In NIPS 21, 2009.
A. Hyvärinen, S. Shimizu, and P. Hoyer. Causal modelling combining instantaneous and lagged effects: an identifiable model based on non-Gaussianity. In ICML 25, 2008.
D. Janzing and B. Steudel. Justifying additive-noise-model based causal discovery via algorithmic information theory. Open Systems and Information Dynamics, 17:189–212, 2010.
D. Janzing, J. Peters, J. M. Mooij, and B. Schölkopf. Identifying confounders using additive noise models. In UAI 25, 2009.
J. Mooij, D. Janzing, J. Peters, and B. Schölkopf. Regression by dependence minimization and its application to causal inference. In ICML 26, 2009.
G. Nolte, A. Ziehe, V. Nikulin, A. Schlögl, N. Krämer, T. Brismar, and K.-R. Müller. Robustly estimating the flow direction of information in complex physical systems. Physical Review Letters, 100, 2008.
J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge Univ. Press, 2nd edition, 2009.
J. Peters, D. Janzing, A. Gretton, and B. Schölkopf. Detecting the direction of causal time series. In ICML 26, 2009.
J. Peters, D. Janzing, and B. Schölkopf. Causal inference on discrete data using additive noise models. IEEE Trans. Pattern Analysis and Machine Intelligence, 33(12):2436–2450, 2011a.
J. Peters, J. Mooij, D. Janzing, and B. Schölkopf. Identifiability of causal graphs using functional models. In UAI 27, 2011b.
J. Peters, J. Mooij, D. Janzing, and B. Schölkopf. Causal discovery with continuous additive noise models, 2013. arXiv:1309.6779.
C. Quinn, T. Coleman, N. Kiyavash, and N. Hatsopoulos. Estimating the directed information to infer causal relationships in ensemble neural spike train recordings. Journal of Computational Neuroscience, 30(1):17–44, 2011.
S. Shimizu, P. Hoyer, A. Hyvärinen, and A. J. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and M. W. Woolrich. Network modelling methods for FMRI. NeuroImage, 54(2):875–891, 2011.
P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.
M. Yamada and M. Sugiyama. Dependence minimizing regression with model selection for non-linear causal inference under non-Gaussian noise. In AAAI, 2010.
Better Approximation and Faster Algorithm Using the Proximal Average Yaoliang Yu Department of Computing Science, University of Alberta, Edmonton AB T6G 2E8, Canada yaoliang@cs.ualberta.ca Abstract It is a common practice to approximate “complicated” functions with more friendly ones. In large-scale machine learning applications, nonsmooth losses/regularizers that entail great computational challenges are usually approximated by smooth functions. We re-examine this powerful methodology and point out a nonsmooth approximation which simply pretends the linearity of the proximal map. The new approximation is justified using a recent convex analysis tool— proximal average, and yields a novel proximal gradient algorithm that is strictly better than the one based on smoothing, without incurring any extra overhead. Numerical experiments conducted on two important applications, overlapping group lasso and graph-guided fused lasso, corroborate the theoretical claims. 1 Introduction In many scientific areas, an important methodology that has withstood the test of time is the approximation of “complicated” functions by those that are easier to handle. For instance, Taylor’s expansion in calculus [1], essentially a polynomial approximation of differentiable functions, has fundamentally changed analysis, and mathematics more broadly. Approximations are also ubiquitous in optimization algorithms, e.g. various gradient-type algorithms approximate the objective function with a quadratic upper bound. In some (if not all) cases, there are multiple ways to make the approximation, and one usually has this freedom of choice. It is perhaps not hard to convince oneself that there is no approximation that would work best in all scenarios. And one would probably also agree that a specific form of approximation should be favored if it well suits our ultimate goal. 
Despite all this common sense, in optimization algorithms smooth approximations are still dominant, bypassing some recent advances on optimizing nonsmooth functions [2, 3]. Part of the reason, we believe, is the lack of new technical tools. We consider the composite minimization problem where the objective consists of a smooth loss function and a sum of nonsmooth functions. Such problems have received increasing attention due to the rise of structured sparsity [4], notably the overlapping group lasso [5], the graph-guided fused lasso [6] and some others. These structured regularizers, although they greatly enhance our modeling capability, introduce significant new computational challenges as well. Popular gradient-type algorithms dealing with such composite problems include the generic subgradient method [7], (accelerated) proximal gradient (APG) [2, 3], and the smoothed accelerated proximal gradient (S-APG) [8]. The subgradient method is applicable to any nonsmooth function, although its convergence rate is rather slow. APG, being a recent advance, can handle simple functions [9], but for more complicated structured regularizers an inner iterative procedure is needed, resulting in an overall convergence rate that could be as slow as the subgradient method [10]. Lastly, S-APG simply runs APG on a smooth approximation of the original objective, resulting in a much improved convergence rate. Our work is inspired by the recent advances on nonsmooth optimization [2, 3], whose building block is the proximal map of the nonsmooth function. This proximal map is available in closed form for simple functions but can be quite expensive for more complicated functions, such as the sum of nonsmooth functions we consider here. A key observation we make is that oftentimes the proximal map of each individual summand can be easily computed; therefore, a bold idea is to simply use the sum of proximal maps, pretending that the proximal map is a linear operator.
Somewhat surprisingly, this naive choice, when combined with APG, results in a novel proximal algorithm that is strictly better than S-APG, while keeping the per-step complexity unchanged. We justify our method via a new tool from convex analysis, the proximal average [11]. In essence, instead of smoothing the nonsmooth function, we use a nonsmooth approximation whose proximal map is cheap to evaluate; after all, this is all we need to run APG. We formally state our problem in Section 2, along with the proposed algorithm. After recalling the relevant tools from convex analysis in Section 3, we provide the theoretical justification of our method in Section 4. Related works are discussed in Section 5. We test the proposed algorithm in Section 6 and conclude in Section 7. 2 Problem Formulation We are interested in solving the following composite minimization problem: min_{x∈R^d} ℓ(x) + f̄(x), where f̄(x) = Σ_{k=1}^K α_k f_k(x). (1) Here ℓ is convex with an L0-Lipschitz continuous gradient w.r.t. the Euclidean norm ∥·∥, and α_k ≥ 0, Σ_k α_k = 1. The usual regularization constant that balances the two terms in (1) is absorbed into the loss ℓ. For the functions f_k, we assume Assumption 1. Each f_k is convex and M_k-Lipschitz continuous w.r.t. the Euclidean norm ∥·∥. The abbreviation M² = Σ_{k=1}^K α_k M_k² is adopted throughout. We are interested in the general case where the functions f_k need not be differentiable. As mentioned in the introduction, a generic scheme that solves (1) is the subgradient method [7], each step of which requires merely an arbitrary subgradient of the objective. With a suitable stepsize, the subgradient method converges¹ in at most O(1/ε²) steps, where ε > 0 is the desired accuracy. Although general, the subgradient method is exceedingly slow, making it unsuitable for many practical applications.
Another recent algorithm for solving (1) is the (accelerated) proximal gradient (APG) [2, 3], each iteration of which needs to compute the proximal map of the nonsmooth part f̄ in (1): P^{1/L0}_{f̄}(x) = argmin_y (L0/2)∥x − y∥² + f̄(y). (Recall that L0 is the Lipschitz constant of the gradient of the smooth part ℓ in (1).) Provided that the proximal map can be computed in constant time, it can be shown that APG converges within O(1/√ε) complexity, significantly better than the subgradient method. For some simple functions the proximal map indeed is available in closed form; see [9] for a nice survey. However, for more complicated functions such as the one we consider here, the proximal map itself is expensive to compute, and an inner iterative subroutine is required. Somewhat disappointingly, recent analysis has shown that such a two-loop procedure can be as slow as the subgradient method [10]. Yet another approach, popularized by Nesterov [8], is to approximate each nonsmooth component f_k with a smooth function and then run APG. By carefully balancing the approximation and the convergence requirement of APG, the smoothed accelerated proximal gradient (S-APG) proposed in [8] converges in at most O(√(1/ε² + 1/ε)) steps, again much better than the subgradient method. The main point of this paper is to further improve on S-APG, in a perhaps surprisingly simple way. The key assumption that we will exploit is the following: Assumption 2. Each proximal map P^µ_{f_k} can be computed “easily” for any µ > 0. ¹In this paper we satisfy ourselves with convergence in terms of function values, although with additional assumptions/effort it is possible to argue for convergence in terms of the iterates.
Algorithm 1: PA-APG.
1: Initialize x_0 = y_1, µ, η_1 = 1.
2: for t = 1, 2, . . . do
3:   z_t = y_t − µ∇ℓ(y_t),
4:   x_t = Σ_k α_k · P^µ_{f_k}(z_t),
5:   η_{t+1} = (1 + √(1 + 4η_t²))/2,
6:   y_{t+1} = x_t + ((η_t − 1)/η_{t+1})(x_t − x_{t−1}).
7: end for
Algorithm 2: PA-PG.
1: Initialize x_0, µ.
2: for t = 1, 2, . . . do
3:   z_t = x_{t−1} − µ∇ℓ(x_{t−1}),
4:   x_t = Σ_k α_k · P^µ_{f_k}(z_t).
5: end for
We prefer to leave the exact meaning of “easily” unspecified, but roughly speaking, the proximal map should be no more expensive than computing the gradient of the smooth part ℓ, so that it does not become the bottleneck. Both Assumption 1 and Assumption 2 are satisfied in many important applications (examples will follow). As will also become clear later, these assumptions are exactly those needed by S-APG. Unfortunately, in general there is no known efficient way to reduce the proximal map of the average f̄ to the proximal maps of its individual components f_k; therefore the fast scheme APG is not readily applicable. The main difficulty, of course, is the nonlinearity of the proximal map P^µ_f when treated as an operator on the function f. Despite this fact, we will “naively” pretend that the proximal map is linear and use P^µ_{f̄} ≈ Σ_{k=1}^K α_k P^µ_{f_k}. (2) Under this approximation, the fast scheme APG can be applied. We give one particular realization (PA-APG) in Algorithm 1, based on FISTA [2]. A simpler (though slower) version (PA-PG), based on ISTA [2], is also provided in Algorithm 2. Clearly both algorithms are easily parallelizable when K is large. We remark that any other variation of APG, e.g. [8], is equally applicable. Of course, when K = 1, our algorithm reduces to the corresponding APG scheme. At this point, one might be suspicious about the usefulness of the “naive” approximation in (2). Before addressing this well-deserved question, let us first point out two important applications where Assumption 1 and Assumption 2 are naturally satisfied. Example 1 (Overlapping group lasso, [5]). In this example, f_k(x) = ∥x_{g_k}∥, where g_k is a group (subset) of variables and x_g denotes a copy of x with all variables not contained in the group g set to 0.
This group regularizer has been proven quite useful in high-dimensional statistics, with its capability of selecting meaningful groups of features [5]. In the general case where the groups overlap as needed, P^µ_{f̄} cannot be computed easily. Clearly each f_k is convex and 1-Lipschitz continuous w.r.t. ∥·∥, i.e., M_k = 1 in Assumption 1. Moreover, the proximal map P^µ_{f_k} is simply a re-scaling of the variables in group g_k, that is, [P^µ_{f_k}(x)]_j = x_j for j ∉ g_k and [P^µ_{f_k}(x)]_j = (1 − µ/∥x_{g_k}∥)_+ · x_j for j ∈ g_k, (3) where (λ)_+ = max{λ, 0}. Therefore, both of our assumptions are met. Example 2 (Graph-guided fused lasso, [6]). This example is an enhanced version of the fused lasso [12], with some graph structure exploited to improve feature selection in biostatistics applications [6]. Specifically, given some graph whose nodes correspond to the feature variables, we let f_{ij}(x) = |x_i − x_j| for every edge (i, j) ∈ E. For a general graph, the proximal map of the regularizer f̄ = Σ_{(i,j)∈E} α_{ij} f_{ij}, with α_{ij} ≥ 0 and Σ_{(i,j)∈E} α_{ij} = 1, is not easily computable. As above, each f_{ij} is 1-Lipschitz continuous w.r.t. the Euclidean norm. Moreover, the proximal map P^µ_{f_{ij}} is easy to compute: [P^µ_{f_{ij}}(x)]_s = x_s for s ∉ {i, j}, while x_i and x_j move toward each other, [P^µ_{f_{ij}}(x)]_i = x_i − sign(x_i − x_j) min{µ, |x_i − x_j|/2} and [P^µ_{f_{ij}}(x)]_j = x_j + sign(x_i − x_j) min{µ, |x_i − x_j|/2}. (4) Again, both our assumptions are satisfied. Note that in both examples we could have incorporated weights into the component functions f_k or f_{ij}, which amounts to changing α_k or α_{ij} accordingly. We also remark that there are other applications that fall under our consideration, but for illustration purposes we shall content ourselves with the above two examples. More conveniently, both examples have been tried with S-APG [13], and thus constitute a natural benchmark for our new algorithm. 3 Technical Tools To justify our new algorithm, we need a few technical tools from convex analysis [14]. Let our domain H be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ∥·∥.
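The componentwise proximal maps (3) and (4) are both one-liners; a minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def prox_group(x, g, mu):
    """Proximal map of f_k(x) = ||x_g|| (Eq. 3): rescale the coordinates
    in group g toward zero, leave the rest untouched."""
    y = x.copy()
    norm = np.linalg.norm(x[g])
    scale = max(1.0 - mu / norm, 0.0) if norm > 0 else 0.0
    y[g] = scale * x[g]
    return y

def prox_fused_pair(x, i, j, mu):
    """Proximal map of f_ij(x) = |x_i - x_j| (Eq. 4): move x_i and x_j
    toward each other by min(mu, |x_i - x_j| / 2)."""
    y = x.copy()
    step = np.sign(x[i] - x[j]) * min(mu, abs(x[i] - x[j]) / 2.0)
    y[i] -= step
    y[j] += step
    return y
```

For example, `prox_group(np.array([3.0, 4.0, 1.0]), [0, 1], 1.0)` scales the first two coordinates by 1 − 1/5 = 0.8 and leaves the third untouched.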
Denote by Γ0 the set of all lower semicontinuous proper convex functions f : H → R ∪ {∞}. It is well known that the Fenchel conjugate f*(y) = sup_x ⟨x, y⟩ − f(x) is a bijection and involution on Γ0 (i.e., (f*)* = f). For convenience, throughout we let q = (1/2)∥·∥² (q for “quadratic”). Note that q is the only function which coincides with its Fenchel conjugate. Another convention that we borrow from convex analysis is to write (fµ)(x) = µ f(µ⁻¹x) for µ > 0. We easily verify (µf)* = f*µ and also (fµ)* = µf*. For any f ∈ Γ0, we define its Moreau envelope (with parameter µ > 0) [14, 15] M^µ_f(x) = min_y (1/(2µ))∥x − y∥² + f(y), (5) and correspondingly the proximal map P^µ_f(x) = argmin_y (1/(2µ))∥x − y∥² + f(y). (6) Since f is closed convex and ∥·∥² is strongly convex, the proximal map is well-defined and single-valued. As mentioned before, the proximal map is the key component of fast schemes such as APG. We summarize some nice properties of the Moreau envelope and the proximal map as: Proposition 1. Let µ, λ > 0, f ∈ Γ0, and let Id be the identity map; then i) M^µ_f ∈ Γ0 and (M^µ_f)* = f* + µq; ii) M^µ_f ≤ f, inf_x M^µ_f(x) = inf_x f(x), and argmin_x M^µ_f(x) = argmin_x f(x); iii) M^µ_f is differentiable with ∇M^µ_f = (1/µ)(Id − P^µ_f); iv) M^µ_{λf} = λ M^{λµ}_f and P^µ_{λf} = P^{λµ}_f = (P^µ_{fλ⁻¹})λ; v) M^λ_{M^µ_f} = M^{λ+µ}_f and P^λ_{M^µ_f} = (µ/(λ+µ)) Id + (λ/(λ+µ)) P^{λ+µ}_f; vi) µ M^µ_f + (M^{1/µ}_{f*})µ = q and P^µ_f + (P^{1/µ}_{f*})µ = Id. Here i) is the well-known duality between infimal convolution and summation; ii), albeit trivial, is the driving force behind the proximal point algorithm [16]; iii) justifies the “niceness” of the Moreau envelope and connects it with the proximal map; iv) and v) follow from simple algebra; and lastly vi), known as Moreau's identity [15], played an important role in the early development of convex analysis. We remind the reader that (M^µ_f)* is in general different from M^µ_{f*}. Fix µ > 0. Let SC_µ ⊆ Γ0 denote the class of µ-strongly convex functions, that is, functions f such that f − µq is convex.
Similarly, let SS_µ ⊆ Γ0 denote the class of finite-valued functions whose gradient is µ-Lipschitz continuous (w.r.t. the norm ∥·∥). A well-known duality between strong convexity and smoothness is that, for f ∈ Γ0, we have f ∈ SC_µ iff f* ∈ SS_{1/µ}, cf. [17, Theorem 18.15]. Based on this duality, we have the next result, which turns out to be critical. (Proof in Appendix A.) Proposition 2. Fix µ > 0. The Moreau envelope map M^µ : Γ0 → SS_{1/µ} that sends f ∈ Γ0 to M^µ_f is bijective, increasing, and concave on any convex subset of Γ0 (under the pointwise order).
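Moreau's identity (item vi of Proposition 1) can be sanity-checked numerically for f(x) = |x|: its proximal map is soft-thresholding, and its conjugate is the indicator of [−1, 1], whose proximal map (for any parameter) is the projection onto that interval. A small sketch, assuming NumPy:

```python
import numpy as np

def prox_abs(x, mu):
    # soft-thresholding: proximal map of f(x) = |x|
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def prox_conj(x):
    # f*(y) is the indicator of [-1, 1]; its proximal map is the projection
    return np.clip(x, -1.0, 1.0)

mu = 0.7
x = np.linspace(-3.0, 3.0, 101)
# vi) of Proposition 1: P^mu_f(x) + mu * P^{1/mu}_{f*}(x / mu) = x,
# where the second term is the scaled function (P^{1/mu}_{f*}) mu.
lhs = prox_abs(x, mu) + mu * prox_conj(x / mu)
assert np.allclose(lhs, x)
```

The two cases check out analytically as well: for |x| ≤ µ the sum is 0 + µ·(x/µ) = x, and for x > µ it is (x − µ) + µ·1 = x.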
Interestingly, using the properties we summarized in Proposition 1, one can show that the Fenchel conjugate of the proximal average, denoted as (Aµ)∗, enjoys a similar property [11]: h M1/µ (Aµ)∗ i µ = q −µMµ Aµ = q −µ K X k=1 αkMµ fk = K X k=1 αk(q −µMµ fk) = K X k=1 αk[(M1/µ f ∗ k )µ] = " K X k=1 αkM1/µ f ∗ k # µ, that is, M1/µ (Aµ f,α)∗= PK k=1 αkM1/µ f ∗ k = M1/µ A1/µ f∗,α , therefore by the injective property established in Proposition 2: (Aµ f,α)∗= A1/µ f ∗,α. (8) From its definition it is also possible to derive an explicit formula for the proximal average (although for our purpose only the existence is needed): Aµ f,α = K X k=1 αkMµ fk ∗ −µq !∗ = K X k=1 αkM1/µ f ∗ k ∗ −qµ, (9) where the second equality is obtained by conjugating (8) and applying the first equality to the conjugate. By the concavity and monotonicity of Mµ, we have the inequality Mµ ¯ f ≥ K X k=1 αkMµ fk = Mµ Aµ ⇐⇒¯f ≥Aµ. (10) The above results (after Definition 1) are due to [11], although our treatment is slightly different. It is well-known that as µ →0, Mµ f →f pointwise [14], which, under the Lipschitz assumption, can be strengthened to uniform convergence (Proof in Appendix B): Proposition 3. Under Assumption 1 we have 0 ≤¯f −Mµ Aµ ≤µM 2 2 . For the proximal average, [11] showed that Aµ →¯f pointwise, which again can be strengthened to uniform convergence (proof follows from (10) and Proposition 3 since Aµ ≥Mµ Aµ): Proposition 4. Under Assumption 1 we have 0 ≤¯f −Aµ ≤µM 2 2 . As it turns out, S-APG approximates the nonsmooth function ¯f with the smooth function Mµ Aµ while our algorithm operates on the nonsmooth approximation Aµ (note that it can be shown that Aµ is smooth iff some component fi is smooth). By (10) and ii) in Proposition 1 we have Mµ Aµ ≤Aµ ≤¯f, (11) 5 −10 −5 0 5 10 0 2 4 6 8 10 α = 0.5, µ =10 f1 f2 ¯f M η ¯f Aη −10 −5 0 5 10 0 2 4 6 8 10 α = 0.5, µ =5 f1 f2 ¯f M η ¯f Aη −10 −5 0 5 10 0 2 4 6 8 10 α = 0.5, µ =1 f1 f2 ¯f M η ¯f Aη Figure 1: See Example 3 for context. 
As predicted, M^µ_{A^µ} ≤ A^µ ≤ f̄. Observe that the proximal average A^µ remains nondifferentiable at 0, while M^µ_{A^µ} is smooth everywhere. For x ≥ 0, f1 = f2 = f̄ = A^µ (the red circled line); thus the proximal average A^µ is a strictly tighter approximation than smoothing. When µ is small (right panel), f̄ ≈ M^µ_{A^µ} ≈ A^µ. This means that the proximal average A^µ is a better under-approximation of f̄ than M^µ_{A^µ}. Let us compare the proximal average A^µ with the smooth approximation M^µ_{A^µ} on a 1-D example. Example 3. Let f1(x) = |x| and f2(x) = max{x, 0}. Clearly both are 1-Lipschitz continuous. Moreover, P^µ_{f1}(x) = sign(x)(|x| − µ)_+, P^µ_{f2}(x) = (x − µ)_+ + x − (x)_+, M^µ_{f1}(x) = x²/(2µ) if |x| ≤ µ and |x| − µ/2 otherwise, and M^µ_{f2}(x) = 0 if x ≤ 0, x²/(2µ) if 0 ≤ x ≤ µ, and x − µ/2 otherwise. Finally, using (9) we obtain (with α1 = α, α2 = 1 − α) A^µ(x) = x if x ≥ 0; (α/(1−α)) · x²/(2µ) if (α − 1)µ ≤ x ≤ 0; and −αx − (1 − α)αµ/2 if x ≤ (α − 1)µ. Figure 1 depicts the case α = 0.5 with different values of the smoothing parameter µ. 4 Theoretical Justification Given our development in the previous section, it is now clear that our proposed algorithm aims at solving the approximation min_x ℓ(x) + A^µ(x). (12) The next important piece is to show how a careful choice of µ leads to a strictly better convergence rate than S-APG. Recall that using APG to solve (12) requires computing the following proximal map in each iteration: P^{1/L0}_{A^µ}(x) = argmin_y (L0/2)∥x − y∥² + A^µ(y), which, unfortunately, is not yet amenable to efficient computation, due to the mismatch of the constants 1/L0 and µ (recall that in the decomposition (7) the superscript and subscript must both be µ). In general, there is no known explicit formula that would reduce P^{1/L0}_f to P^µ_f for different positive constants L0 and µ [17, p. 338]; see also iv) in Proposition 1. Our fix is almost trivial: if necessary, we use a bigger Lipschitz constant L0 = 1/µ so that we can compute the proximal map easily. This is indeed legitimate, since L0-Lipschitz implies L-Lipschitz for any L ≥ L0.
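Before moving on, the closed form for A^µ in Example 3 above can be sanity-checked against Definition 1: the Moreau envelope of A^µ, computed by brute-force minimization over a grid, should match the convex combination of the Moreau envelopes of f1 and f2. A sketch assuming NumPy (the grid sizes and tolerance are ours):

```python
import numpy as np

alpha, mu = 0.5, 1.0
f1 = lambda y: np.abs(y)            # f1(x) = |x|
f2 = lambda y: np.maximum(y, 0.0)   # f2(x) = max{x, 0}

def A(y):
    # piecewise closed form of the proximal average from Example 3
    return np.where(y >= 0, y,
           np.where(y >= (alpha - 1) * mu,
                    (alpha / (1 - alpha)) * y**2 / (2 * mu),
                    -alpha * y - (1 - alpha) * alpha * mu / 2))

def moreau(f, x, grid):
    # brute-force Moreau envelope: M^mu_f(x) = min_y (x - y)^2 / (2 mu) + f(y)
    vals = (x[:, None] - grid[None, :])**2 / (2 * mu) + f(grid)[None, :]
    return np.min(vals, axis=1)

x = np.linspace(-3.0, 3.0, 61)
grid = np.linspace(-6.0, 6.0, 4001)
lhs = moreau(A, x, grid)                 # M^mu of the proximal average
rhs = alpha * moreau(f1, x, grid) + (1 - alpha) * moreau(f2, x, grid)
assert np.max(np.abs(lhs - rhs)) < 1e-2  # Definition 1, up to grid error
```

The residual is only the grid discretization error; refining the grid drives it toward zero.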
Said differently, all we need is to tune down the stepsize a little bit in APG. We state formally the convergence property of our algorithm as follows (Proof in Appendix C): Theorem 1. Fix the accuracy ε > 0. Under Assumption 1 and the choice µ = min{1/L0, 2ε/M²}, after at most √(2/(µε)) · ∥x0 − x∥ steps, the output of Algorithm 1, say x̃, satisfies ℓ(x̃) + f̄(x̃) ≤ ℓ(x) + f̄(x) + 2ε. The same guarantee holds for Algorithm 2 after at most (1/(2µε)) ∥x0 − x∥² steps. Note that if we could reduce P^{1/L0}_{A^µ} efficiently to P^µ_{A^µ}, we would end up with the optimal (overall) rate O(√(1/ε)), even though we approximate the nonsmooth function f̄ by the proximal average A^µ. In other words, the approximation itself does not lead to an inferior rate; it is our inability to (efficiently) relate proximal maps that leads to the sacrifice in convergence rate. 5 Discussions To ease our discussion of related works, let us first point out a fact that is not always explicitly recognized: S-APG essentially relies on approximating the nonsmooth function f̄ with M^µ_{A^µ}. Indeed, consider first the case K = 1. The smoothing idea introduced in [8] posits the superficial max-structure assumption, that is, f(x) = max_{y∈C} ⟨x, y⟩ − h(y), where C is some bounded convex set and h ∈ Γ0. As is well known (and easily verified from the definition), f ∈ Γ0 is M-Lipschitz continuous (w.r.t. the norm ∥·∥) iff dom f* ⊆ B_{∥·∥}(0, M), the ball centered at the origin with radius M. Thus the function f ∈ Γ0 admits the max-structure iff it is Lipschitz continuous, i.e., satisfies our Assumption 1, in which case h = f* and C = dom f*. [8] proceeded to add some “distance” function d to obtain the approximation f_µ(x) = max_{y∈C} ⟨x, y⟩ − f*(y) − µd(y). For simplicity, we will only consider d = q, thus f_µ = (f* + µq)* = M^µ_f. The other assumption of S-APG [8] is that f_µ and the maximizer in its expression can be easily computed, which is precisely our Assumption 2.
Finally, for the general case where f̄ is an average of K nonsmooth functions, the smoothing technique is applied component by component, i.e., f̄ is approximated by M^µ_{A^µ}. For comparison, recall that S-APG finds a 2ε-accurate solution in at most O(√(L0 + M²/(2ε)) · √(1/ε)) steps, since the Lipschitz constant of the gradient of ℓ + M^µ_{A^µ} is upper bounded by L0 + M²/(2ε) (under the choice of µ in Theorem 1). This is strictly worse than the complexity O(√(max{L0, M²/(2ε)}) · √(1/ε)) of our approach. In other words, we have managed to remove the secondary term in the complexity bound of S-APG. We emphasize that this strict improvement is obtained under exactly the same assumptions and with an algorithm as simple as (if not simpler than) S-APG. In some sense it is quite remarkable that the seemingly “naive” approximation that pretends the linearity of the proximal map not only can be justified but also leads to a strictly better result. Let us further explain how the improvement is possible. As mentioned, S-APG approximates f̄ with the smooth function M^µ_{A^µ}. This smooth approximation is beneficial if our capability is limited to smooth functions. Put differently, S-APG implicitly treats applying fast gradient algorithms as the ultimate goal. However, the recent advances on nonsmooth optimization have broadened the range of fast schemes: it is not smoothness but the proximal map that enables fast convergence. Just as APG improves upon the subgradient method, our approach, whose ultimate goal is to enable efficient computation of the proximal map, improves upon S-APG. Another lesson we wish to point out is that unnecessary “over-smoothing”, as in S-APG, does hurt performance, since it always increases the Lipschitz constant. To summarize, smoothing is not free, and it should be used only when truly needed.
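The per-step simplicity claimed above is easy to see in code: each PA-PG step is one gradient evaluation plus K cheap proximal maps, averaged. A hedged sketch of Algorithm 2 on a toy quadratic-plus-two-norms problem (the toy data, step size and helper names are ours, not from the paper):

```python
import numpy as np

def pa_pg(grad_loss, proxes, alphas, x0, mu, n_iter=500):
    """PA-PG (Algorithm 2): gradient step on the smooth loss, then the
    *average* of the component proximal maps, pretending prox is linear."""
    x = x0.copy()
    for _ in range(n_iter):
        z = x - mu * grad_loss(x)
        x = sum(a * P(z, mu) for a, P in zip(alphas, proxes))
    return x

# Toy instance: l(x) = 0.5 * ||x - c||^2 (so L0 = 1),
# f1 = ||.||_1 (sqrt(d)-Lipschitz), f2 = ||.||_2 (1-Lipschitz).
c = np.array([2.0, -0.5, 0.1])
grad_loss = lambda x: x - c
prox_l1 = lambda x, m: np.sign(x) * np.maximum(np.abs(x) - m, 0.0)

def prox_l2(x, m):
    n = np.linalg.norm(x)
    return max(1.0 - m / n, 0.0) * x if n > 0 else x

# mu <= 1/L0, as required after enlarging the Lipschitz constant to 1/mu
x = pa_pg(grad_loss, [prox_l1, prox_l2], [0.5, 0.5], np.zeros(3), mu=0.5)
```

The returned `x` approximately minimizes ℓ + A^µ, and by Proposition 4 its objective under ℓ + f̄ is within µM²/2 of that.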
Lastly, we note that our algorithm shares some similarity with forward-backward splitting procedures and alternating direction methods [9, 18, 19], although a detailed examination will not be given here. Due to space limits, we refer further extensions and improvements to [20, Chapter 3].

6 Experiments

We compare the proposed algorithm with S-APG on two important problems: overlapping group lasso and graph-guided fused lasso. See Example 1 and Example 2 for details about the nonsmooth function f̄. We note that S-APG has been demonstrated to have superior performance on both problems in [13]; therefore we concentrate on comparing with it. Bear in mind that the purpose of our experiment is to verify the theoretical improvement discussed in Section 5. We are not interested in fine-tuning parameters here (despite its practical importance); thus, for a fair comparison, we use the same desired accuracy ϵ, Lipschitz constant L0, and other parameters for all methods. Since both our method and S-APG have the same per-step complexity, we simply run them for a maximum number of iterations (after which saturation is observed) and report all intermediate objective values.

Figure 2: Objective value vs. iteration on overlapping group lasso (three panels: 2ϵ = 2/L0, 1/L0, 0.5/L0; methods: PA-PG, S-PG, PA-APG, S-APG).

Figure 3: Objective value vs. iteration on graph-guided fused lasso (same three accuracy settings and methods).

Overlapping Group Lasso: Following [13], we generate the data as follows. We set ℓ(x) = (1/(2λK)) ∥Ax − b∥², where A ∈ R^{n×d} whose entries are sampled from i.i.d.
normal distributions, x_j = (−1)^j exp(−(j − 1)/100), and b = Ax + ξ with the noise ξ sampled from a zero-mean, unit-variance normal distribution. Finally, the groups in the regularizer f̄ are defined as {{1, . . . , 100}, {91, . . . , 190}, . . . , {d − 99, . . . , d}}, where d = 90K + 10. That is, there are K groups, each containing 100 variables, and adjacent groups overlap by 10 consecutive variables. We adopt the uniform weight α_k = 1/K and set λ = K/5. Figure 2 shows the results for n = 5000 and K = 50, with three different accuracy parameters. For completeness, we also include results for the non-accelerated versions (PA-PG and S-PG). Clearly, the accelerated algorithms are much faster than their non-accelerated cousins. Observe that our algorithms (PA-APG and PA-PG) converge consistently faster than S-APG and S-PG, respectively, with a big margin in the favorable case (middle panel). Again we emphasize that this improvement is achieved without any overhead.

Graph-guided Fused Lasso: We generate ℓ similarly to the above. Following [13], the graph edges E are obtained by thresholding the correlation matrix. The case n = 5000, d = 1000, λ = 15 is shown in Figure 3, under three different desired accuracies. Again, we observe that the accelerated algorithms are faster than the non-accelerated versions and that our algorithms consistently converge faster.

7 Conclusions

We have considered the composite minimization problem which consists of a smooth loss and a sum of nonsmooth regularizers. Different from smoothing, we considered a seemingly naive nonsmooth approximation which simply pretends linearity of the proximal map. Based on the proximal average, a new tool from convex analysis, we proved that the new approximation leads to a novel algorithm that strictly improves the state of the art. Experiments on both overlapping group lasso and graph-guided fused lasso verified the superiority of the proposed method.
An interesting question arising from this work, and under our current investigation, is in what sense a given approximation is optimal. We also plan to apply our algorithm to other practical problems.

Acknowledgement

The author thanks Bob Williamson and Xinhua Zhang from NICTA, Canberra, for their hospitality during the author's visit when this work was performed; Warren Hare and Yves Lucet from UBC Okanagan for drawing his attention to the proximal average; and the reviewers for their valuable comments.

References

[1] Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill, 3rd edition, 1976.
[2] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[3] Yurii Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, Series B, 140:125–161, 2013.
[4] Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Structured sparsity through convex optimization. Statistical Science, 27(4):450–468, 2012.
[5] Peng Zhao, Guilherme Rocha, and Bin Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468–3497, 2009.
[6] Seyoung Kim and Eric P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):1–18, 2009.
[7] Naum Z. Shor. Minimization Methods for Non-Differentiable Functions. Springer, 1985.
[8] Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[9] Patrick L. Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185–212. Springer, 2011.
[10] Silvia Villa, Saverio Salzo, Luca Baldassarre, and Alessandro Verri. Accelerated and inexact forward-backward algorithms. SIAM Journal on Optimization, 23(3):1607–1633, 2013.
[11] Heinz H. Bauschke, Rafal Goebel, Yves Lucet, and Xianfu Wang. The proximal average: Basic theory. SIAM Journal on Optimization, 19(2):766–785, 2008.
[12] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67:91–108, 2005.
[13] Xi Chen, Qihan Lin, Seyoung Kim, Jaime G. Carbonell, and Eric P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719–752, 2012.
[14] Ralph Tyrrell Rockafellar and Roger J-B Wets. Variational Analysis. Springer, 1998.
[15] Jean J. Moreau. Proximité et dualité dans un espace Hilbertien. Bulletin de la Société Mathématique de France, 93:273–299, 1965.
[16] Ralph Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.
[17] Heinz H. Bauschke and Patrick L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 1st edition, 2011.
[18] Hua Ouyang, Niao He, Long Q. Tran, and Alexander Gray. Stochastic alternating direction method of multipliers. In International Conference on Machine Learning, 2013.
[19] Taiji Suzuki. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In International Conference on Machine Learning, 2013.
[20] Yaoliang Yu. Fast Gradient Algorithms for Structured Sparsity. PhD thesis, University of Alberta, 2013.
Robust Low Rank Kernel Embeddings of Multivariate Distributions

Le Song, Bo Dai
College of Computing, Georgia Institute of Technology
lsong@cc.gatech.edu, bodai@gatech.edu

Abstract

Kernel embedding of distributions has led to many recent advances in machine learning. However, latent and low rank structures prevalent in real world distributions have rarely been taken into account in this setting. Furthermore, no prior work in the kernel embedding literature has addressed the issue of robust embedding when the latent and low rank information are misspecified. In this paper, we propose a hierarchical low rank decomposition of kernel embeddings which can exploit such low rank structures in data while being robust to model misspecification. We also illustrate with empirical evidence that the estimated low rank embeddings lead to improved performance in density estimation.

1 Introduction

Many applications of machine learning, ranging from computer vision to computational biology, require the analysis of large volumes of high-dimensional continuous-valued measurements. Complex statistical features are commonplace, including multi-modality, skewness, and rich dependency structures. Kernel embedding of distributions is an effective framework to address challenging problems in this regime [1, 2]. Its key idea is to implicitly map distributions into potentially infinite dimensional feature spaces using kernels, such that subsequent comparison and manipulation of these distributions can be achieved via feature space operations (e.g., inner product, distance, projection and spectral analysis). This new framework has led to many recent advances in machine learning, such as the kernel independence test [3] and kernel belief propagation [4]. However, algorithms designed with kernel embeddings have rarely taken into account latent and low rank structures prevalent in high dimensional data arising from various applications such as gene expression analysis.
While this information has been extensively exploited in other learning contexts such as graphical models and collaborative filtering, its use in kernel embeddings remains scarce and challenging. Intuitively, these intrinsically low dimensional structures of the data should reduce the effective number of parameters in kernel embeddings, and allow us to obtain a better estimator when facing high dimensional problems. As a demonstration of the above intuition, we illustrate the behavior of low rank kernel embeddings (which we will explain later in more detail) when applied to density estimation (Figure 1). 100 data points are sampled i.i.d. from a mixture of 2 spherical Gaussians, where the latent variable is the cluster indicator. The fitted density based on an ordinary kernel density estimator has quite different contours from the ground truth (Figure 1(b)), while those provided by low rank embeddings appear to be much closer to the ground truth (Figure 1(c)). Essentially, the low rank approximation step endows kernel embeddings with an additional mechanism to smooth the estimator, which can be beneficial when the number of data points is small and there are clusters in the data. In our later, more systematic experiments, we show that low rank embeddings can lead to density estimators which significantly improve over alternative approaches in terms of held-out likelihood. While there are a handful of exceptions [5, 6] in the kernel embedding literature which have exploited latent and low rank information, these algorithms are not robust in the sense that, when such information is misspecified, no performance guarantee can be provided and these algorithms can fail drastically.
The hierarchical low rank kernel embeddings we propose in this paper can be considered as a kernel generalization of the discrete valued tree-structured latent variable models studied in [7].

Figure 1: We draw 100 samples from a mixture of 2 spherical Gaussians with equal mixing weights. (a) The contour plot for the ground truth density, (b) for the ordinary kernel density estimator (KDE), (c) for the low rank KDE. We used cross-validation to find the best kernel bandwidth for both the KDE and the low rank KDE. The latter produces a density which is visibly closer to the ground truth; in terms of integrated squared error, it is smaller than that of the KDE (0.0092 vs. 0.012).

The objective of the current paper is to address previous limitations of kernel embeddings as applied to graphical models and make them more practically useful. Furthermore, we will provide both theoretical and empirical support for the new approach. Another key contribution of the paper is a novel view of kernel embedding of multivariate distributions as infinite dimensional higher order tensors, and the low rank structure of these tensors in the presence of latent variables. This novel view allows us to introduce modern multi-linear algebra and tensor decomposition tools to address challenging problems at the interface between kernel methods and latent variable models. We believe our work will play a synergistic role in bridging together largely separate areas in machine learning research, including kernel methods, latent variable models, and tensor data analysis. In the remainder of the paper, we will first present the tensor view of kernel embeddings of multivariate distributions and its low rank structure in the presence of latent variables.
Then we will present our algorithm for hierarchical low rank decomposition of kernel embeddings by making use of a sequence of nested kernel singular value decompositions. Last, we will provide both theoretical and empirical support for our proposed approach.

2 Kernel Embeddings of Distributions

We will focus on continuous domains, and denote by X a random variable with domain Ω and density p(X). The instantiations of X are denoted by lower case characters, x. A reproducing kernel Hilbert space (RKHS) F on Ω with a kernel k(x, x′) is a Hilbert space of functions f : Ω → R with inner product ⟨·, ·⟩_F. Its element k(x, ·) satisfies the reproducing property: ⟨f(·), k(x, ·)⟩_F = f(x), and consequently ⟨k(x, ·), k(x′, ·)⟩_F = k(x, x′), meaning that we can view the evaluation of a function f at any point x ∈ Ω as an inner product. Alternatively, k(x, ·) can be viewed as an implicit feature map φ(x), where k(x, x′) = ⟨φ(x), φ(x′)⟩_F. For simplicity of notation, we assume that the domains of all variables are the same and that the same kernel function is applied to all variables.

A kernel embedding represents a density by its expected features, i.e., µ_X := E_X[φ(X)] = ∫_Ω φ(x) p(x) dx, or a point in a potentially infinite-dimensional and implicit feature space of a kernel [8, 1, 2]. The embedding µ_X has the property that the expectation of any RKHS function f ∈ F can be evaluated as an inner product in F: ⟨µ_X, f⟩_F := E_X[f(X)].

Kernel embeddings can be readily generalized to the joint density of d variables, X1, . . . , Xd, using the dth order tensor product feature space F^d. In this feature space, the feature map is defined as ⊗_{i=1}^d φ(x_i) := φ(x1) ⊗ φ(x2) ⊗ · · · ⊗ φ(x_d), and the inner product in this space satisfies ⟨⊗_{i=1}^d φ(x_i), ⊗_{i=1}^d φ(x′_i)⟩_{F^d} = ∏_{i=1}^d ⟨φ(x_i), φ(x′_i)⟩_F = ∏_{i=1}^d k(x_i, x′_i). Then we can embed a joint density p(X1, . . . , Xd) into the tensor product feature space F^d by

  C_{X1:d} := E_{X1:d}[⊗_{i=1}^d φ(X_i)] = ∫_{Ω^d} (⊗_{i=1}^d φ(x_i)) p(x1, . . . , x_d) ∏_{i=1}^d dx_i,   (1)

where we use X1:d to denote the set of variables {X1, . . . , Xd}.

Kernel embeddings can also be generalized to conditional densities p(X|z) [9]:

  µ_{X|z} := E_{X|z}[φ(X)] = ∫_Ω φ(x) p(x|z) dx.   (2)

Given this embedding, the conditional expectation of a function f ∈ F can be computed as E_{X|z}[f(X)] = ⟨f, µ_{X|z}⟩_F. Unlike ordinary embeddings, an embedding of a conditional distribution is not a single element in the RKHS, but instead sweeps out a family of points in the RKHS, each indexed by a fixed value z of the conditioning variable Z. It is only by fixing Z to a particular value z that we obtain a single RKHS element, µ_{X|z} ∈ F. In other words, a conditional embedding is an operator, denoted C_{X|Z}, which takes as input a z and outputs an embedding, i.e., µ_{X|z} = C_{X|Z} φ(z). Likewise, kernel embeddings of conditional distributions can also be generalized to joint distributions of d variables.

We will represent an observation from a discrete variable Z taking r possible values using the standard basis in R^r (the one-of-r representation). That is, when z takes the i-th value, the i-th dimension of the vector z is 1 and the other dimensions are 0. For instance, when r = 3, Z can take the three possible values (1, 0, 0)⊤, (0, 1, 0)⊤ and (0, 0, 1)⊤. In this case, we let φ(Z) = Z and use the linear kernel k(Z, Z′) = Z⊤Z. Then the conditional embedding operator reduces to a separate embedding µ_{X|z} for each conditional density p(X|z). Conceptually, we can concatenate these µ_{X|z} for different values of z in columns: C_{X|Z} := (µ_{X|z=(1,0,0)⊤}, µ_{X|z=(0,1,0)⊤}, µ_{X|z=(0,0,1)⊤}). The operation C_{X|Z} φ(z) essentially picks out the corresponding embedding (column).

3 Kernel Embeddings as Infinite Dimensional Higher Order Tensors

The above kernel embedding C_{X1:d} can also be viewed as a multi-linear operator (tensor) of order d mapping from F × · · · × F to R. (For a generic introduction to tensors and tensor notation, please see [10].)
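Empirically, the embedding µ_X is estimated by the sample average of feature maps, and expectations of RKHS functions become weighted sums of kernel evaluations. A small sketch with a Gaussian RBF kernel (the bandwidth, distribution, and test function are illustrative choices, not from the paper):

```python
import numpy as np

def rbf(x, y, sigma=0.5):
    # Gaussian RBF kernel k(x, y) on scalars (or arrays, elementwise).
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=0.3, size=500)     # samples from p(X)

# Empirical embedding: mu_hat = (1/n) sum_i phi(x_i).  For the RKHS
# function f = k(x0, .), <mu_hat, f> = (1/n) sum_i k(x_i, x0),
# which estimates E_X[f(X)] = E_X[k(X, x0)].
x0 = 1.0
estimate = np.mean(rbf(X, x0))

# Monte Carlo ground truth for E[k(X, x0)] with many more samples.
truth = np.mean(rbf(rng.normal(1.0, 0.3, 200_000), x0))
```

The same sample-average construction extends to the joint embedding C̄_{X1:d} used later in Section 6.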
The operator is linear in each argument (mode) when fixing the other arguments. Furthermore, the application of the operator to a set of elements {f_i ∈ F}_{i=1}^d can be defined using the inner product from the tensor product feature space, i.e.,

  C_{X1:d} •1 f1 •2 · · · •d fd := ⟨C_{X1:d}, ⊗_{i=1}^d f_i⟩_{F^d} = E_{X1:d}[∏_{i=1}^d ⟨φ(X_i), f_i⟩_F],   (3)

where •_i means applying f_i to the i-th argument of C_{X1:d}. Furthermore, we can define the generalized Frobenius norm ∥·∥• of C_{X1:d} by ∥C_{X1:d}∥•² = Σ_{i1=1}^∞ · · · Σ_{id=1}^∞ (C_{X1:d} •1 e_{i1} •2 · · · •d e_{id})², using an orthonormal basis {e_i}_{i=1}^∞ ⊂ F. We can also define the inner product for the space of such operators with ∥C_{X1:d}∥• < ∞ as

  ⟨C_{X1:d}, C̃_{X1:d}⟩• = Σ_{i1=1}^∞ · · · Σ_{id=1}^∞ (C_{X1:d} •1 e_{i1} •2 · · · •d e_{id}) (C̃_{X1:d} •1 e_{i1} •2 · · · •d e_{id}).   (4)

When C_{X1:d} has the form E_{X1:d}[⊗_{i=1}^d φ(X_i)], the above inner product reduces to E_{X1:d}[C̃_{X1:d} •1 φ(X1) •2 · · · •d φ(Xd)]. In this paper, the ordering of the tensor modes is not essential, so we simply label them using the corresponding random variables.

We can reshape a higher order tensor into a lower order tensor by partitioning its modes into several disjoint groups. For instance, let I1 = {X1, . . . , Xs} be the set of modes corresponding to the first s variables and I2 = {X_{s+1}, . . . , Xd}. Similarly to the Matlab reshape function, we can obtain a 2nd order tensor by

  C_{I1;I2} = reshape(C_{X1:d}, I1, I2) : F^s × F^{d−s} → R.   (5)

In the reverse direction, we can also reshape a lower order tensor into a higher order one by further partitioning a certain mode of the tensor. For instance, we can partition I1 into I′1 = {X1, . . . , Xt} and I″1 = {X_{t+1}, . . . , Xs}, and turn C_{I1;I2} into a 3rd order tensor by

  C_{I′1;I″1;I2} = reshape(C_{I1;I2}, I′1, I″1, I2) : F^t × F^{s−t} × F^{d−s} → R.   (6)

Note that given an orthonormal basis {e_i}_{i=1}^∞ ⊂ F, we can readily obtain an orthonormal basis for, e.g., F^t, as {e_{i1} ⊗ · · · ⊗ e_{it}}_{i1,...,it=1}^∞, and hence define the generalized Frobenius norm for C_{I1;I2} and C_{I′1;I″1;I2}.
This also implies that the generalized Frobenius norms are the same for all these reshaped tensors, i.e., ∥C_{X1:d}∥• = ∥C_{I1;I2}∥• = ∥C_{I′1;I″1;I2}∥•.

Figure 2: Three latent variable models with different tree topologies: (a) X1 ⊥ X2 | Z; (b) X1:2 ⊥ X3:4 | Z1:2; (c) caterpillar tree (hidden Markov model).

The 2nd order tensor C_{I1;I2} can also be viewed as the cross-covariance operator between the two sets of variables I1 and I2. In this case, we can essentially use notation and operations for matrices. For instance, we can perform a singular value decomposition C_{I1;I2} = Σ_{i=1}^∞ s_i (u_i ⊗ v_i), where the s_i ∈ R are ordered in nonincreasing manner, and {u_i}_{i=1}^∞ ⊂ F^s and {v_i}_{i=1}^∞ ⊂ F^{d−s} are singular vectors. The rank of C_{I1;I2} is the smallest r such that s_i = 0 for i ≥ r. In this case, we define U_r = (u1, u2, . . . , u_r), V_r = (v1, v2, . . . , v_r) and S_r = diag(s1, s2, . . . , s_r), and denote the rank-r approximation by C_{I1;I2} = U_r S_r V_r⊤. Finally, a 1st order tensor, reshape(C_{X1:d}, {X1:d}, ∅), is simply a vector, for which we will use vector notation.

4 Low Rank Kernel Embeddings Induced by Latent Variables

In the presence of latent variables, the kernel embedding C_{X1:d} will be low rank. For example, the two observed variables X1 and X2 in the example of Figure 1 are conditionally independent given the latent cluster indicator variable Z. That is, the joint density factorizes as p(X1, X2) = Σ_z p(z) p(X1|z) p(X2|z) (see Figure 2(a) for the graphical model). Throughout the paper, we assume that z is discrete and takes r possible values. Then the embedding C_{X1X2} of p(X1, X2) has rank at most r. Let z be represented as the standard basis in R^r. Then

  C_{X1X2} = E_Z[(C_{X1|Z} Z) ⊗ (C_{X2|Z} Z)] = C_{X1|Z} E_Z[Z ⊗ Z] (C_{X2|Z})⊤,   (7)

where E_Z[Z ⊗ Z] is an r × r matrix, hence restricting the rank of C_{X1X2} to be at most r. In our second example, four observed variables are connected via two latent variables Z1 and Z2, each taking r possible values.
The conditional independence structure implies that the density p(X1, X2, X3, X4) factorizes as Σ_{z1,z2} p(X1|z1) p(X2|z1) p(z1, z2) p(X3|z2) p(X4|z2) (see Figure 2(b) for the graphical model). Reshaping its kernel embedding C_{X1:4}, we obtain C_{X1:2;X3:4} = reshape(C_{X1:4}, {X1:2}, {X3:4}), which factorizes as

  E_{X1:2|Z1}[φ(X1) ⊗ φ(X2)] E_{Z1Z2}[Z1 ⊗ Z2] (E_{X3:4|Z2}[φ(X3) ⊗ φ(X4)])⊤,   (8)

where E_{Z1Z2}[Z1 ⊗ Z2] is an r × r matrix. Hence the intrinsic "rank" of the reshaped embedding is only r, although the original kernel embedding C_{X1:4} is a 4th order tensor with infinite dimensions.

In general, for a latent variable model p(X1, . . . , Xd) whose conditional independence structure is a tree T, various reshapings of its kernel embedding C_{X1:d} according to edges in the tree will be low rank. More specifically, each edge in the latent tree corresponds to a pair of latent variables (Zs, Zt) (or an observed and a hidden variable (Xs, Zt)) and induces a partition of the observed variables into two groups, I1 and I2. One can imagine splitting the latent tree into two subtrees by cutting the edge: one group of variables resides in the first subtree, and the other group in the second subtree. If we reshape the tensor according to this partitioning, then:

Theorem 1. Assume that all observed variables are leaves in the latent tree structure, and all latent variables take r possible values. Then rank(C_{I1;I2}) ≤ r.

Proof. Due to the conditional independence structure induced by the latent tree, p(X1, . . . , Xd) = Σ_{zs} Σ_{zt} p(I1|zs) p(zs, zt) p(I2|zt). Then its embedding can be written as

  C_{I1;I2} = C_{I1|Zs} E_{ZsZt}[Zs ⊗ Zt] (C_{I2|Zt})⊤,   (9)

where C_{I1|Zs} and C_{I2|Zt} are the conditional embedding operators for p(I1|zs) and p(I2|zt), respectively. Since E_{ZsZt}[Zs ⊗ Zt] is an r × r matrix, rank(C_{I1;I2}) ≤ r.

Theorem 1 implies that, given a latent tree model, we obtain a collection of low rank reshapings {C_{I1;I2}} of the kernel embedding C_{X1:d}, each corresponding to an edge (Zs, Zt) of the tree.
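Theorem 1 can be checked numerically in a finite-dimensional surrogate: replace the infinite feature map φ with an explicit m-dimensional one and build C_{X1X2} = C_{X1|Z} E[Z ⊗ Z] C_{X2|Z}⊤ as in eq. (7); the resulting cross-covariance matrix has rank at most r regardless of m. A sketch (the conditional embeddings are random matrices, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 50, 3                       # feature dimension, number of latent states

C_x1_given_z = rng.standard_normal((m, r))   # columns play the role of mu_{X1|z}
C_x2_given_z = rng.standard_normal((m, r))   # columns play the role of mu_{X2|z}
pz = rng.dirichlet(np.ones(r))               # latent distribution p(z)
Ezz = np.diag(pz)                            # E[Z Z^T] for one-of-r coded Z

# Embedding of the joint as in eq. (7):
C_x1x2 = C_x1_given_z @ Ezz @ C_x2_given_z.T

rank = np.linalg.matrix_rank(C_x1x2)
```

Note that E[Z ⊗ Z] is diagonal for one-of-r coded Z, since Z Z⊤ = e_i e_i⊤ when Z takes the i-th value.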
We will denote by H(T, r) the class of kernel embeddings C_{X1:d} whose various reshapings according to the latent tree T have rank at most r.¹ We will also write C_{X1:d} ∈ H(T, r) to indicate such a relation. In practice, the latent tree model assumption may be misspecified for a joint density p(X1, . . . , Xd), and consequently the various reshapings of its kernel embedding C_{X1:d} are only approximately low rank. In this case, we instead impose a (potentially misspecified) latent structure T and a fixed rank r on the data and obtain an approximate low rank decomposition of the kernel embedding. The goal is to obtain a low rank embedding C̃_{X1:d} ∈ H(T, r) while at the same time ensuring that ∥C̃_{X1:d} − C_{X1:d}∥• is small. In the following, we present such a decomposition algorithm.

5 Low Rank Decomposition of Kernel Embeddings

For simplicity of exposition, we focus on the case where the latent tree structure T has a caterpillar shape (Figure 2(c)). This decomposition can be viewed as a kernel generalization of the hierarchical tensor decomposition in [11, 12, 7]. The decomposition proceeds by reshaping the kernel embedding C_{X1:d} according to the first edge (Z1, Z2), resulting in A1 := C_{X1;X2:d}. Then we perform a rank r approximation of it, A1 ≈ U_r S_r V_r⊤. This yields the first intermediate tensor G1 = U_r; we then reshape S_r V_r⊤ and recursively decompose it. We note that Algorithm 1 contains only pseudo code and is not implementable in practice, since the kernel embedding to decompose can have infinite dimensions. We will design a practical kernel algorithm in the next section.

Algorithm 1: Low Rank Decomposition of Kernel Embeddings
In: a kernel embedding C_{X1:d}, the caterpillar tree T, and the desired rank r.
Out: a low rank embedding C̃_{X1:d} ∈ H(T, r) as intermediate tensors {G1, . . . , Gd}.
1: A1 = reshape(C_{X1:d}, {X1}, {X2:d}) according to tree T.
2: A1 ≈ U_r S_r V_r⊤, approximating A1 using its r leading singular vectors.
3: G1 = U_r, and B1 = S_r V_r⊤.
G1 can be viewed as a model with two variables, X1 and Z1, and B1 as a new caterpillar tree model T1 with variable X1 removed from T.
4: for j = 2, . . . , d − 1 do
5:   Aj = reshape(B_{j−1}, {Z_{j−1}, Xj}, {X_{j+1:d}}) according to tree T_{j−1}.
6:   Aj ≈ U_r S_r V_r⊤, approximating Aj using its r leading singular vectors.
7:   Gj = reshape(U_r, {Z_{j−1}}, {Xj}, {Zj}), and Bj = S_r V_r⊤. Gj can be viewed as a model with three variables, Xj, Zj and Z_{j−1}, and Bj as a new caterpillar tree model Tj with variables Z_{j−1} and Xj removed from T_{j−1}.
8: end for
9: Gd = B_{d−1}

Once we finish the decomposition, we obtain the low rank representation of the kernel embedding as a set of intermediate tensors {G1, . . . , Gd}. In particular, we can think of G1 as a second order tensor with dimensions ∞ × r, Gd as a second order tensor with dimensions r × ∞, and Gj for 2 ≤ j ≤ d − 1 as a third order tensor with dimensions r × ∞ × r. Then we can apply the low rank kernel embedding C̃_{X1:d} to a set of elements {f_i ∈ F}_{i=1}^d as follows:

  C̃_{X1:d} •1 f1 •2 · · · •d fd = (G1 •1 f1)⊤ (G2 •2 f2) · · · (G_{d−1} •2 f_{d−1}) (Gd •2 fd).

Based on the above decomposition, one can obtain a low rank density estimate by p̃(X1, . . . , Xd) = C̃_{X1:d} •1 φ(X1) •2 · · · •d φ(Xd). We can also compute the difference between C̃_{X1:d} and the operator C_{X1:d} using the generalized Frobenius norm ∥C̃_{X1:d} − C_{X1:d}∥•.

6 Kernel Algorithm

In practice, we are only provided with a finite number of samples {(x^i_1, . . . , x^i_d)}_{i=1}^n drawn i.i.d. from p(X1, . . . , Xd), and we want to obtain an empirical low rank decomposition of the kernel embedding. In this case, we perform a low rank decomposition of the empirical kernel embedding C̄_{X1:d} = (1/n) Σ_{i=1}^n ⊗_{j=1}^d φ(x^i_j). Although the empirical kernel embedding still has infinite dimensions, we show that we can carry out the decomposition using just the kernel matrices. Let us denote the kernel matrix for dimension j of the data by Kj, where j ∈ {1, . . . , d}.
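In finite dimensions, the chain of intermediate tensors {G1, . . . , Gd} is a tensor-train style factorization, and applying the embedding to {f_i} is a sequence of small matrix-vector products, as in the chain-of-products formula above. A minimal sketch with explicit m-dimensional features and random cores (for illustration; the full-tensor check is only feasible because m is tiny):

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 4, 2                        # feature dimension and rank, kept tiny

G1 = rng.standard_normal((m, r))   # first core (infinite dim -> m here)
G2 = rng.standard_normal((r, m, r))
G3 = rng.standard_normal((r, m, r))
G4 = rng.standard_normal((r, m))   # last core
fs = [rng.standard_normal(m) for _ in range(4)]   # elements f_1, ..., f_4

def apply_low_rank(fs):
    # (G1 .1 f1)^T (G2 .2 f2) (G3 .2 f3) (G4 .2 f4): each step is a
    # small r-vector times r x r matrix product.
    v = G1.T @ fs[0]
    v = v @ np.einsum('amb,m->ab', G2, fs[1])
    v = v @ np.einsum('amb,m->ab', G3, fs[2])
    return float(v @ (G4 @ fs[3]))

# Sanity check: reconstruct the full 4th order tensor and contract it directly.
T = np.einsum('ia,ajb,bkc,cl->ijkl', G1, G2, G3, G4)
direct = float(np.einsum('ijkl,i,j,k,l->', T, *fs))
val = apply_low_rank(fs)
```

The chained evaluation costs O(d (m r + r²)) per query instead of O(m^d) for the full tensor.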
The (i, i′)-th entry of Kj is K^{ii′}_j = k(x^i_j, x^{i′}_j). Alternatively, one can think of implicitly forming the feature matrix Φj = (φ(x¹_j), . . . , φ(xⁿ_j)), with corresponding kernel matrix Kj = Φj⊤ Φj. Furthermore, we denote the tensor feature matrix formed from dimensions j + 1 to d of the data by Ψj = (⊗_{j′=j+1}^d φ(x¹_{j′}), . . . , ⊗_{j′=j+1}^d φ(xⁿ_{j′})). The corresponding kernel matrix is Lj = Ψj⊤ Ψj, with (i, i′)-th entry L^{ii′}_j = ∏_{j′=j+1}^d k(x^i_{j′}, x^{i′}_{j′}).

¹ One can readily generalize this notation to decompositions where different reshapings have different ranks.

Steps 1–3 in Algorithm 1. The key building block of the algorithm is a kernel singular value decomposition (Algorithm 2), which we explain in more detail using the example in step 2 of Algorithm 1. Using the implicitly defined feature matrices, A1 can be expressed as A1 = (1/n) Φ1 Ψ1⊤. For the low rank approximation A1 ≈ U_r S_r V_r⊤ using singular value decomposition, the leading r singular vectors U_r = (u1, . . . , u_r) lie in the span of Φ1, i.e., U_r = Φ1 (β1, . . . , β_r) with β ∈ R^n. Then we can transform the singular value decomposition problem for an infinite dimensional matrix into a generalized eigenvalue problem involving kernel matrices:

  A1 A1⊤ u = λ u  ⇔  (1/n²) Φ1 Ψ1⊤ Ψ1 Φ1⊤ Φ1 β = λ Φ1 β  ⇔  (1/n²) K1 L1 K1 β = λ K1 β.

Let the Cholesky decomposition of K1 be R⊤R. Then the generalized eigenvalue problem can be solved by redefining β̃ = Rβ and solving the ordinary eigenvalue problem

  (1/n²) R L1 R⊤ β̃ = λ β̃,  and obtaining β = R† β̃.   (10)

The resulting singular vectors satisfy u_l⊤ u_{l′} = β_l⊤ Φ1⊤ Φ1 β_{l′} = β_l⊤ K1 β_{l′} = β̃_l⊤ β̃_{l′} = δ_{ll′}. Then we can obtain B1 := S_r V_r⊤ = U_r⊤ A1 by projecting the columns of A1 onto the singular vectors U_r:

  B1 = (1/n) (β1, . . . , β_r)⊤ Φ1⊤ Φ1 Ψ1⊤ = (1/n) (β1, . . . , β_r)⊤ K1 Ψ1⊤ =: (γ1, . . . , γn) Ψ1⊤,   (11)

where γ ∈ R^r can be treated as the reduced r-dimensional feature representation of each feature-mapped data point φ(x^i_1).
Then we have the first intermediate tensor G1 = U_r = Φ1 (β1, . . . , β_r) =: Φ1 (θ1, . . . , θn)⊤, where θ ∈ R^r. The kernel singular value decomposition can then be carried out recursively on the reshaped tensor B1.

Steps 5–7 in Algorithm 1. When j = 2, we first reshape B1 = S_r V_r⊤ to obtain A2 = (1/n) Φ̃2 Ψ2⊤, where Φ̃2 = (γ1 ⊗ φ(x¹_2), . . . , γn ⊗ φ(xⁿ_2)). Then we carry out a similar singular value decomposition as before, and obtain U_r = Φ̃2 (β1, . . . , β_r) =: Φ̃2 (θ1, . . . , θn)⊤. Then we have the second operator G2 = Σ_{i=1}^n γ_i ⊗ φ(x^i_2) ⊗ θ_i. Last, we define B2 := S_r V_r⊤ = U_r⊤ A2 as

  B2 = (1/n) (β1, . . . , β_r)⊤ Φ̃2⊤ Φ̃2 Ψ2⊤ = (1/n) (β1, . . . , β_r)⊤ (Γ ◦ K2) Ψ2⊤ =: (1/n) (γ1, . . . , γn) Ψ2⊤,   (12)

and carry out the recursive decomposition further. The result of the algorithm is an empirical low rank kernel embedding, Ĉ_{X1:d}, represented as a collection of intermediate tensors {G1, . . . , Gd}. The overall algorithm is summarized in Algorithm 3. More details about the derivation can be found in Appendix A.

The application of the set of intermediate tensors {G1, . . . , Gd} to a set of elements {f_i ∈ F} can be expressed as kernel operations. For instance, we can obtain a density estimate by p̂(x1, . . . , xd) = Ĉ_{X1:d} •1 φ(x1) •2 · · · •d φ(xd) = Σ_{z1,...,zd} g1(x1, z1) g2(z1, x2, z2) · · · gd(z_{d−1}, xd), where (see Appendix A for more details)

  g1(x1, z1) = G1 •1 φ(x1) •2 z1 = Σ_{i=1}^n (z1⊤ θ_i) k(x^i_1, x1),   (13)
  gj(z_{j−1}, xj, zj) = Gj •1 z_{j−1} •2 φ(xj) •3 zj = Σ_{i=1}^n (z_{j−1}⊤ γ_i) k(x^i_j, xj) (zj⊤ θ_i),   (14)
  gd(z_{d−1}, xd) = Gd •1 z_{d−1} •2 φ(xd) = Σ_{i=1}^n (z_{d−1}⊤ γ_i) k(x^i_d, xd).   (15)

In the above formulas, each term is a weighted combination of kernel functions, and the weighting is determined by the kernel singular value decomposition and the values of the latent variables {zj}.
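The generalized eigenvalue step in eq. (10) can be sketched directly in numpy: form K and L, Cholesky-factor K (a small diagonal jitter is added for numerical safety, an implementation choice not from the paper), solve the ordinary eigenproblem for R L R⊤, and map back via the pseudo-inverse. Explicit finite-dimensional features are used below so the implicit SVD can be compared with an ordinary one:

```python
import numpy as np

def kernel_svd(K, L, r, n, jitter=1e-8):
    """Leading r solutions of (1/n^2) K L K beta = lambda K beta
    via the Cholesky trick of eq. (10).  The jitter term is a
    numerical-safety assumption, not part of the paper."""
    R = np.linalg.cholesky(K + jitter * np.eye(len(K))).T   # K ~= R^T R
    M = (R @ L @ R.T) / n ** 2
    w, Btil = np.linalg.eigh(M)                  # ascending eigenvalues
    Btil = Btil[:, np.argsort(w)[::-1][:r]]      # keep the leading r
    return np.linalg.pinv(R) @ Btil              # beta = R^dagger beta_tilde

# Toy check with explicit features Phi1, Psi1 (random, for illustration).
rng = np.random.default_rng(0)
n_pts = 30
Phi1 = rng.standard_normal((5, n_pts))
Psi1 = rng.standard_normal((6, n_pts))
K1, L1 = Phi1.T @ Phi1, Psi1.T @ Psi1            # the two kernel matrices
betas = kernel_svd(K1, L1, r=2, n=n_pts)
U = Phi1 @ betas                                 # recovered singular vectors
```

The recovered U should be orthonormal and span the leading left singular subspace of A1 = (1/n) Φ1 Ψ1⊤, matching the derivation above.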
7 Performance Guarantees As we mentioned in the introduction, the imposed latent structure used in the low rank decomposition of kernel embeddings may be misspecified, and the decomposition of empirical embeddings may suffer from sampling error. In this section, we provide finite guarantee for Algorithm 3 even when the latent structures are misspecified. More specifically, we will bound, in terms of the gen6 Algorithm 2 KernelSVD(K, L, r) Out: A collection of vectors (θ1, . . . , θn) 1: Perform Cholesky decomposition K = R⊤R 2: Solve eigen decomposition 1 n2 RLR⊤eβ = λ eβ, and keep the leading r eigen vectors ( eβ1, . . . , eβr) 3: Compute β1 = R† eβ1, . . . , βr = R† eβr, and reorgnaize (θ1, . . . , θn)⊤= (β1, . . . , βr) Algorithm 3 Kernel Low Rank Decomposition of Empirical Embedding ¯CX1:d In: A sample (xi 1, . . . , xi d) n i=1, desired rank r, a query point (x1, . . . , xd) Out: A low rank embedding bCX1:d ∈H(T , r) as intermediate operators {G1, . . . , Gd} 1: Ld = 11⊤ 2: for j = d, d −1, . . . , 1 do 3: Compute matrix Kj with Kii′ j = k(xi j, xi′ j ); furthermore, if j < d, then Lj = Lj+1 ◦Kj+1 4: end for 5: (θ1, . . . , θn) = KernelSVD(K1, L1, r) 6: G1 = Φ1(θ1, . . . , θn)⊤, and compute (γ1, . . . , γn) = (θ1, . . . , θn)K1 7: for j = 2, . . . , d −1 do 8: Γ = (γ1, . . . , γn)⊤(γ1, . . . , γn), and compute (θ1, . . . , θn) = KernelSVD(Ki ◦Γ, Li, r) 9: Gj = Pn i=1 γi ⊗φ(xi j) ⊗θi, and compute (γ1, . . . , γn) = (θ1, . . . , θn)Ki 10: end for 11: Gd = (γ1, . . . , γn)Φ⊤ d eralized Frobenius norm ∥CX1:d −bCX1:d∥•, the difference between the true kernel embeddings and the low rank kernel embeddings estimated from a set of n i.i.d. samples (xi 1, . . . , xi d) n i=1. 
First we observe that the difference can be decomposed into two terms:
$$\|\mathcal{C}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\bullet \;\leq\; \underbrace{\|\mathcal{C}_{X_{1:d}} - \widetilde{\mathcal{C}}_{X_{1:d}}\|_\bullet}_{E_1:\ \text{model error}} \;+\; \underbrace{\|\widetilde{\mathcal{C}}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\bullet}_{E_2:\ \text{estimation error}} \quad (16)$$
where the first term is due to the fact that the latent structures may be misspecified, while the second term is due to estimation from a finite number of data points. We bound these two sources of error separately (the proof is deferred to Appendix B).

Theorem 2 Suppose each reshaping $\mathcal{C}_{I_1;I_2}$ of $\mathcal{C}_{X_{1:d}}$ according to an edge in the latent tree structure has a rank $r$ approximation $U_r S_r V_r^\top$ with error $\|\mathcal{C}_{I_1;I_2} - U_r S_r V_r^\top\|_\bullet \leq \epsilon$. Then the low rank decomposition $\widetilde{\mathcal{C}}_{X_{1:d}}$ from Algorithm 1 satisfies $\|\mathcal{C}_{X_{1:d}} - \widetilde{\mathcal{C}}_{X_{1:d}}\|_\bullet \leq \sqrt{d-1}\,\epsilon$.

Although previous work [5, 6] has also used hierarchical decompositions for kernel embeddings, those decompositions make the strong assumption that the latent tree models are correctly specified. When the models are misspecified, those algorithms have no guarantees whatsoever, and may fail drastically, as we show in later experiments. In contrast, the decomposition we propose here is robust in the sense that even when the latent tree structure is misspecified, we can still provide an approximation guarantee for the algorithm. Furthermore, when the latent tree structure is correctly specified and the rank $r$ is correct, then $\mathcal{C}_{I_1;I_2}$ has rank $r$, hence $\epsilon = 0$ and our decomposition algorithm does not incur any modeling error.

Next, we provide a bound for the estimation error. The estimation error arises from decomposing the empirical estimate $\bar{\mathcal{C}}_{X_{1:d}}$ of the kernel embedding, and the error can accumulate as we combine the intermediate tensors $\{\mathcal{G}_1, \dots, \mathcal{G}_d\}$ to form the final low rank kernel embedding. More specifically, we have the following bound (the proof is deferred to Appendix C).

Theorem 3 Suppose the $r$-th singular value of each reshaping $\mathcal{C}_{I_1;I_2}$ of $\mathcal{C}_{X_{1:d}}$ according to an edge in the latent tree structure is lower bounded by $\lambda$. Then with probability at least $1 - \delta$,
$$\|\widetilde{\mathcal{C}}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\bullet \;\leq\; \frac{(1+\lambda)^{d-2}}{\lambda^{d-2}} \|\mathcal{C}_{X_{1:d}} - \bar{\mathcal{C}}_{X_{1:d}}\|_\bullet \;\leq\; \frac{(1+\lambda)^{d-2}\, c}{\lambda^{d-2}\sqrt{n}},$$
with some constant $c$ associated with the kernel and the probability $\delta$.

From the above theorem, we can see that the smaller the $r$-th singular value, the more difficult it is to estimate the low rank kernel embedding. Although the bound grows exponentially in $1/\lambda^{d-2}$, in our experiments we did not observe such exponential degradation of performance even on relatively high dimensional datasets.

8 Experiments

Besides the synthetic dataset shown in Figure 1, where low rank kernel embedding leads to a significant improvement in terms of estimating the density, we also experimented with real world datasets from the UCI data repository. We take 11 datasets with varying dimensions and numbers of data points; the attributes of the datasets are continuous-valued. We whiten the data and compare low rank kernel embeddings (Low Rank) obtained from Algorithm 3 to 3 other alternatives for continuous density estimation, namely, a mixture of Gaussians with full covariance matrices, the ordinary kernel density estimator (KDE), and the kernel spectral algorithm for latent trees (Spectral) [6]. We use the Gaussian kernel $k(x, x') = \frac{1}{\sqrt{2\pi}s} \exp(-\|x - x'\|^2/(2s^2))$ for KDE, Spectral and our method (Low Rank). We split each dataset into 10 subsets, and use nested cross-validation based on held-out likelihood to choose hyperparameters: the kernel parameter $s$ for KDE, Spectral and Low Rank ($\{2^{-3}, 2^{-2}, 2^{-1}, 1, 2, 4, 8\}$ times the median pairwise distance), the rank parameter $r$ for Spectral and Low Rank (ranging from 2 to 30), and the number of components in the Gaussian mixture (ranging from 2 to $\frac{\#\,\text{Sample}}{30}$). For both Spectral and Low Rank, we use the caterpillar tree in Figure 2(c) as the structure for the latent variable model. From Table 1, we can see that low rank kernel embeddings provide the best or comparable held-out negative log-likelihood across the datasets we experimented with.
In some datasets, low rank kernel embeddings lead to a drastic improvement over the alternatives; for instance, on the "sonar" and "yeast" datasets the improvement is dramatic. The Spectral approach sometimes performs even worse than the simpler baselines. This makes sense, since the caterpillar tree supplied to the algorithm may be far from the true structure, and Spectral is not robust to model misspecification. The Spectral algorithm also caused numerical problems in practice. In contrast, our method (Low Rank) uses the same latent structure but achieves much more robust results.

Table 1: Negative log-likelihood on held-out data (lower is better).

Data Set     # Sample  Dim.  Gaussian mixture  KDE           Spectral       Low rank
australian   690       14    17.97±0.26        18.32±0.64    33.50±2.17     15.88±0.11
bupa         345       6     8.17±0.30         8.36±0.17     25.01±0.66     7.57±0.14
german       1000      24    31.14±0.41        30.57±0.15    28.40±11.64    22.89±0.26
heart        270       13    17.72±0.23        18.23±0.18    21.50±2.39     16.95±0.13
ionosphere   351       34    47.60±1.77        43.53±1.25    54.91±1.35     35.84±1.00
pima         768       8     11.78±0.04        10.38±0.19    31.42±2.40     10.07±0.11
parkinsons   195       22    30.13±0.24        30.65±0.66    33.20±0.70     28.19±0.37
sonar        208       60    107.06±1.36       96.17±0.27    89.26±2.75     57.96±2.67
wpbc         198       33    50.75±1.11        49.48±0.64    48.66±2.56     40.78±0.86
wine         178       13    19.59±0.14        19.56±0.56    19.25±0.58     18.67±0.17
yeast        208       79    146.11±5.36       137.15±1.80   76.58±2.24     72.67±4.05

9 Discussion and Conclusion

In this paper, we presented a robust kernel embedding algorithm which can make use of the low rank structure of the data, and provided both theoretical and empirical support for it. However, a number of issues deserve further research. First, the algorithm requires a sequence of kernel singular value decompositions, which can be computationally intensive for high dimensional and large datasets. Developing efficient algorithms that retain the theoretical guarantees is interesting future research.
Second, the statistical analysis could be sharpened. For the moment, the analysis does not suggest that the estimator obtained by our algorithm is better than ordinary KDE. Third, it will be interesting empirical work to explore other applications of low rank kernel embeddings, such as kernel two-sample tests, kernel independence tests and kernel belief propagation.

References
[1] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proceedings of the International Conference on Algorithmic Learning Theory, volume 4754, pages 13–31. Springer, 2007.
[2] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In Proc. Annual Conf. Computational Learning Theory, pages 111–122, 2008.
[3] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, pages 585–592, Cambridge, MA, 2008. MIT Press.
[4] L. Song, A. Gretton, D. Bickson, Y. Low, and C. Guestrin. Kernel belief propagation. In Proc. Intl. Conference on Artificial Intelligence and Statistics, volume 10 of JMLR Workshop and Conference Proceedings, 2011.
[5] L. Song, B. Boots, S. Siddiqi, G. Gordon, and A. J. Smola. Hilbert space embeddings of hidden Markov models. In International Conference on Machine Learning, 2010.
[6] L. Song, A. Parikh, and E. P. Xing. Kernel embeddings of latent tree graphical models. In Advances in Neural Information Processing Systems, volume 25, 2011.
[7] L. Song, M. Ishteva, H. Park, A. Parikh, and E. Xing. Hierarchical tensor decomposition of latent tree graphical models. In International Conference on Machine Learning (ICML), 2013.
[8] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[9] L. Song, J. Huang, A. J. Smola, and K. Fukumizu.
Hilbert space embeddings of conditional distributions. In Proceedings of the International Conference on Machine Learning, 2009.
[10] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[11] L. Grasedyck. Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications, 31(4):2029–2054, 2010.
[12] I. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.
[13] L. Rosasco, M. Belkin, and E. D. Vito. On learning with integral operators. Journal of Machine Learning Research, 11:905–934, 2010.
Learning the Local Statistics of Optical Flow

Dan Rosenbaum1, Daniel Zoran2, Yair Weiss1,2
1 CSE, 2 ELSC, Hebrew University of Jerusalem
{danrsm,daniez,yweiss}@cs.huji.ac.il

Abstract

Motivated by recent progress in natural image statistics, we use newly available datasets with ground truth optical flow to learn the local statistics of optical flow and compare the learned models to prior models assumed by computer vision researchers. We find that a Gaussian mixture model (GMM) with 64 components provides a significantly better model for local flow statistics when compared to commonly used models. We investigate the source of the GMM's success and show it is related to an explicit representation of flow boundaries. We also learn a model that jointly models the local intensity pattern and the local optical flow. In accordance with the assumptions often made in computer vision, the model learns that flow boundaries are more likely at intensity boundaries. However, when evaluated on a large dataset, this dependency is very weak and the benefit of conditioning flow estimation on the local intensity pattern is marginal.

1 Introduction

Figure 1: Samples of frames and flows from the new flow databases (MPI Sintel and KITTI). We leverage these newly available resources to learn the statistics of optical flow and compare this to assumptions used by computer vision researchers.

The study of natural image statistics is a longstanding research topic with both scientific and engineering interest. Recent progress in this field has been achieved by approaches that systematically compare different models of natural images with respect to numerical criteria such as log likelihood on held-out data or coding efficiency [1, 10, 14]. Interestingly, the best models in terms of log likelihood, when used as priors in image restoration tasks, also yield state-of-the-art performance [14]. Many problems in computer vision require good priors.
A notable example is the computation of optical flow: a vector at every pixel that corresponds to the two-dimensional projection of the motion at that pixel. Since local motion information is often ambiguous, nearly all optical flow estimation algorithms work by minimizing a cost function that has two terms: a local data term and a "prior" term (see, e.g., [13, 11] for some recent reviews). Given the success in image restoration tasks, where learned priors give state-of-the-art performance, one might expect a similar story in optical flow estimation. However, with the notable exception of [9] (which served as a motivating example for this work and is discussed below), there have been very few attempts to learn priors for optical flow by modeling local statistics. Instead, the state-of-the-art methods still use priors that were formulated by computer vision researchers. In fact, two of the top performing methods in modern optical flow benchmarks use a hand-defined smoothness constraint that was suggested over 20 years ago [6, 2]. One big difference between image statistics and flow statistics is the availability of ground truth data. Whereas for modeling image statistics one merely needs a collection of photographs (so that the amount of data is essentially unlimited these days), for modeling flow statistics one needs to obtain the ground truth motion of the points in the scene. In the past, the lack of ground truth data did not allow for learning an optical flow prior from examples. In the last two years, however, two ground truth datasets have become available. The Sintel dataset (figure 1) consists of a thousand pairs of frames from a highly realistic computer graphics film with a wide variety of locations and motion types. Although it is synthetic, the work in [3] convincingly shows that both in terms of image statistics and in terms of flow statistics, the synthetic frames are highly similar to real scenes.
The KITTI dataset (figure 1) consists of frames taken from a vehicle driving in a European city [5]. The vehicle was equipped with accurate range finders as well as accurate localization of its own motion, and the combination of these two sources allows computing optical flow for points that are stationary in the world. Although this is real data, it is sparse (only about 50% of the pixels have ground truth flow). In this paper we leverage the availability of ground truth datasets to learn explicit statistical models of optical flow. We compare our learned model to the assumptions made by computer vision algorithms for estimating flow. We find that a Gaussian mixture model with 64 components provides a significantly better model for local flow statistics when compared to commonly used models. We investigate the source of the GMM's success and show that it is related to an explicit representation of flow boundaries. We also learn a model that jointly models the local intensity pattern and the local optical flow. In accordance with the assumptions often made in computer vision, the model learns that flow boundaries are more likely at intensity boundaries. However, when evaluated on a large dataset, this dependency is very weak and the benefit of conditioning flow estimation on the local intensity pattern is marginal.

1.1 Priors for optical flow

One of the earliest methods for optical flow that is still used in applications is the celebrated Lucas-Kanade algorithm [7]. It overcomes the local ambiguity of motion analysis by assuming that the optical flow is constant within a small image patch, and finds this constant motion by least-squares estimation. Another early algorithm that is still widely used is the method of Horn and Schunck [6]. It finds the optical flow by minimizing a cost function that has a data term and a "smoothness" term.
Denoting by u the horizontal flow and v the vertical flow, the smoothness term is of the form
$$J_{HS} = \sum_{x,y} u_x^2 + u_y^2 + v_x^2 + v_y^2$$
where $u_x, u_y$ are the spatial derivatives of the horizontal flow u and $v_x, v_y$ are the spatial derivatives of the vertical flow v. When combined with modern optimization methods, this algorithm is often among the top performing methods on modern benchmarks [11, 5]. Rather than using a quadratic smoothness term, many authors have advocated using more robust terms that are less sensitive to outliers in smoothness. Thus the Black and Anandan [2] algorithm uses
$$J_{BA} = \sum_{x,y} \rho(u_x) + \rho(u_y) + \rho(v_x) + \rho(v_y)$$
where $\rho(t)$ is a function that grows more slowly than a quadratic. Popular choices for $\rho$ include the Lorentzian, the truncated quadratic and the absolute value $\rho(x) = |x|$ (or a differentiable approximation to it, $\rho(x) = \sqrt{\epsilon + x^2}$) [11]. Both the Lorentzian and the absolute value robust smoothness terms were shown to outperform quadratic smoothness in [11], and the absolute value was the better of the two. Several authors have also suggested that the smoothness term be based on the local intensity pattern, since motion discontinuities are more likely to occur at intensity boundaries. Ren [8] modified the weights in the Lucas-Kanade least-squares estimation so that pixels on different sides of an intensity boundary get lower weights. In the context of Horn and Schunck, several authors suggest attaching weights to the horizontal and vertical flow derivatives, where the weights have an inverse relationship with the image derivatives: large image derivatives lead to low weight on the flow smoothness (see [13] and references within for different variations on this idea). Perhaps the simplest such regularizer is of the form
$$J_{HSI} = \sum_{x,y} w(I_x)(u_x^2 + v_x^2) + w(I_y)(u_y^2 + v_y^2) \quad (1)$$
As we discuss below, this prior can be seen as a Gaussian prior on the flow that is conditioned on the intensity.
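To make the two smoothness terms concrete, here is a small NumPy sketch that evaluates $J_{HS}$ and a Charbonnier-style robust version of $J_{BA}$ (with $\rho(x) = \sqrt{\epsilon + x^2}$) using forward differences; the helper name and array sizes are our own:

```python
import numpy as np

def smoothness_energies(u, v, eps=1e-3):
    """Horn-Schunck (quadratic) and robust (Charbonnier) smoothness
    energies of a flow field (u, v), using forward differences."""
    ux, uy = np.diff(u, axis=1), np.diff(u, axis=0)
    vx, vy = np.diff(v, axis=1), np.diff(v, axis=0)
    quad = lambda g: np.sum(g**2)                 # rho(t) = t^2
    rho = lambda g: np.sum(np.sqrt(eps + g**2))   # differentiable |t| surrogate
    J_HS = quad(ux) + quad(uy) + quad(vx) + quad(vy)
    J_BA = rho(ux) + rho(uy) + rho(vx) + rho(vy)
    return J_HS, J_BA
```

On a perfectly constant flow both derivative fields vanish, so $J_{HS} = 0$, while a sharp motion boundary is penalized quadratically by $J_{HS}$ but only (approximately) linearly by the robust term.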
In contrast to all the previously discussed priors, Roth and Black [9] suggested learning a prior from a dataset. They used a training set of optical flow obtained by simulating the motion of a camera in natural range images. The prior learned by their system was similar to a robust smoothness prior, but the filters were not local derivatives but rather more random-looking high-pass filters. They did not observe a significant improvement in performance when using these filters, and standard derivative filters are still used in most smoothness-based methods. Given the large number of suggested priors, a natural question to ask is: what is the best prior to use? One way to answer this question is to use these priors as the basis for an optical flow estimation algorithm and see which algorithm gives the best performance. Although such an approach is certainly informative, it is difficult to get a definitive answer from it. For example, Sun et al. [11] reported that adding a non-local smoothness term to a robust smoothness prior significantly improved results on the Middlebury benchmark, while Geiger et al. [5] reported that this term decreased performance on the KITTI benchmark. Perhaps the main difficulty with this approach is that the prior is only one part of an optical flow estimation algorithm. It is always combined with a non-convex likelihood term and optimized using a nonlinear optimization algorithm. Often the parameters of the optimization have a very large influence on the performance of the algorithm. In this paper we take an alternative approach. Motivated by recent advances in natural image statistics and the availability of new datasets, we compare different priors in terms of (1) log likelihood on held-out data and (2) inference performance with tractable posteriors. Our results allow us to rigorously compare different prior assumptions.
2 Comparing priors as density models

In order to compare different prior models as density models, we generate a training set and a test set of optical flow patches from the ground truth databases. Denote by f a single vector that concatenates all the optical flow in a patch (e.g., if we consider 8 × 8 patches, f is a vector of length 128 where the first 64 components denote u and the last 64 components denote v). Given a prior probability model Pr(f; θ), we use the training set to estimate the free parameters θ of the model, and then measure the log likelihood of held-out patches from the test set. From Sintel, we divided the pairs of frames for which ground truth is available into 708 pairs used for training and 333 pairs used for testing. The data is divided into scenes, and we made sure that different scenes are used in training and testing. We created a second test set from the KITTI dataset by choosing a subset of patches for which full ground truth flow was available. Since we only consider full patches, this set is smaller and hence we use it only for testing, not for training. The priors we compared are:

• Lucas and Kanade. This algorithm is equivalent to the assumption that the observed flow is generated by a constant $(u_0, v_0)$ that is corrupted by IID Gaussian noise. If we also assume that $u_0, v_0$ have a zero mean Gaussian distribution, Pr(f) is a zero mean multidimensional Gaussian with covariance $\sigma_p^2 O O^\top + \sigma_n^2 I$, where $O$ is a binary 128 × 2 matrix, $\sigma_p$ is the standard deviation of $u_0, v_0$, and $\sigma_n$ is the standard deviation of the noise.

• Horn and Schunck. By exponentiating $-J_{HS}$ we see that Pr(f; θ) is a multidimensional Gaussian with inverse covariance (precision) matrix $\lambda D^\top D$, where $D$ is a 256 × 128 derivative matrix that computes the derivatives of the flow field at each pixel and $\lambda$ is the weight given to the prior relative to the data term. This precision matrix is not positive definite, so we use $\lambda D^\top D + \epsilon I$ and determine $\lambda, \epsilon$ using maximum likelihood.
• L1. We exponentiate $-J_{BA}$ and obtain a multidimensional Laplace distribution. As in Horn and Schunck, this distribution is not normalizable, so we multiply it by an IID Laplacian prior on each component with variance $1/\epsilon$. This again gives two free parameters $(\lambda, \epsilon)$, which we find using maximum likelihood. Unlike the Gaussian case, the ML parameters and the normalization constant cannot be computed in closed form, and we use Hamiltonian annealed importance sampling [10].

• Gaussian mixture models (GMM). Motivated by the success of GMMs in modeling natural image statistics [14], we use the training set to estimate GMM priors for optical flow. Each mixture component is a multidimensional Gaussian with a full covariance matrix and zero mean, and we vary the number of components between 1 and 64. We train the GMM using the standard expectation-maximization (EM) algorithm with mini-batches. Even with a few mixture components, the GMM has far more free parameters than the previous models, but note that we measure success on held-out patches, so that models that overfit are penalized.

The summary of our results is shown in figure 2, which gives the mean log likelihood on the Sintel test set. One interesting observation is that the local statistics validate some assumptions commonly used by computer vision researchers. For example, the Horn and Schunck smoothness prior is as good as the optimal Gaussian prior (GMM1) even though it uses only local first derivatives. Also, the robust prior (L1) is much better than Horn and Schunck. However, as the number of Gaussians increases, the GMM becomes significantly better than a robust prior on local derivatives. A closer inspection of our results is shown in figure 3. Each panel shows the histogram of log likelihood of held-out patches: the more the histogram is shifted to the right, the better the performance.
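As a concrete sketch of how one of these priors is scored on held-out data, the Lucas-Kanade model above is just a structured zero-mean Gaussian; here is a minimal version (the parameter values are hypothetical stand-ins, not the ones fit in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
P = 8                      # patch side; f = [u; v] has length 2*P*P
dim = 2 * P * P

# O selects the constant motion: f = O @ [u0, v0] + noise.
O = np.zeros((dim, 2))
O[:P*P, 0] = 1.0           # all u entries share u0
O[P*P:, 1] = 1.0           # all v entries share v0
sigma_p, sigma_n = 11.6, 1.0                       # hypothetical values
Sigma_LK = sigma_p**2 * O @ O.T + sigma_n**2 * np.eye(dim)

def gaussian_loglik(f, Sigma):
    """Zero-mean Gaussian log-density via slogdet/solve (no explicit inverse)."""
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (f @ np.linalg.solve(Sigma, f)
                   + logdet + len(f) * np.log(2 * np.pi))
```

A constant-motion patch lies in the high-variance subspace spanned by O and therefore scores much higher under this prior than a random patch of the same norm.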
It can be seen that the GMM is indeed much better than the other priors, including cases where the test set is taken from KITTI (rather than Sintel) and when the patch size is 12 × 12 rather than 8 × 8.

Figure 2: Mean log likelihood of the different models for 8 × 8 patches extracted from held-out data from Sintel. The GMM outperforms the models that are assumed by computer vision researchers.

2.1 Comparing models using tractable inference

A second way of comparing the models is by their ability to restore corrupted patches of optical flow. We are not claiming that optical flow restoration is a real-world application (although using priors to "fill in" holes in optical flow is quite common, e.g. [12, 8]). Rather, we use it because, for the models we are discussing, inference can be done either in closed form or by convex optimization, so we would expect that better priors lead to better performance. We perform two flow restoration tasks. In "flow denoising" we take the ground truth flow and add IID Gaussian noise to all flow vectors. In "flow inpainting" we add a small amount of noise to all flow vectors and a very large amount of noise to some of the flow vectors (essentially meaning that those flow vectors are not observed).

Figure 3: Histograms of log-likelihood of different models on the KITTI and Sintel test sets with two different patch sizes (8 × 8 and 12 × 12). As can be seen, the GMM outperforms the other models in all four cases.
For the Gaussian models and the GMM models, the Bayesian least squares (BLS) estimator of f given y can be computed in closed form. For the Laplacian model, we use MAP estimation, which leads to a convex optimization problem. Since MAP may be suboptimal in this case, we optimize the parameters λ, ϵ for MAP inference performance. Results are shown in figures 4 and 5. The standard deviation of the ground truth flow is approximately 11.6 pixels, and we add noise with standard deviations of 10, 20 and 30 pixels. Consistent with the log likelihood results, L1 outperforms the Gaussian methods but is outperformed by the GMM. For small noise values the difference between L1 and the GMM is small, but as the amount of noise increases, L1 becomes similar in performance to the Gaussian methods and is much worse than the GMM.

3 The secret of the GMM

We now take a deeper look at how the GMM models optical flow patches. The first (and not surprising) thing we found is that the covariance matrices learned by the model are block diagonal (so that the u and v components are independent given the assignment to a particular component). More insight can be gained by considering the GMM as a local subspace model: a patch generated by component k is a linear combination of the eigenvectors of the k-th covariance. The coefficients of the linear combination have energy that decays with the eigenvalue, so each patch can be well approximated by the leading eigenvectors of the corresponding covariance. Unlike global subspace models, different subspace models can be used for different patches, and during inference with the model one can infer which local subspace is most likely to have generated the patch. Figure 6 shows the dominant leading eigenvectors of all 32 covariance matrices in the GMM32 model: the eigenvectors of u are followed by the eigenvectors of v. The number of eigenvectors displayed in each row is set so that they capture 99% of the variance in that component.
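The closed-form BLS estimate for a zero-mean Gaussian prior is the posterior mean $\Sigma(\Sigma + \sigma^2 I)^{-1} y$; a minimal sketch on a 1-D stand-in signal (the prior covariance here is an illustrative RBF covariance, not one of the learned flow models):

```python
import numpy as np

rng = np.random.default_rng(3)

def bls_denoise(y, Sigma, sigma_noise):
    """Posterior-mean (BLS) estimate of f from y = f + N(0, sigma^2 I),
    under the zero-mean Gaussian prior f ~ N(0, Sigma)."""
    d = len(y)
    return Sigma @ np.linalg.solve(Sigma + sigma_noise**2 * np.eye(d), y)

# Tiny demonstration with a smoothness-like prior on a 1-D signal.
d = 32
idx = np.arange(d)
Sigma = np.exp(-0.5 * (idx[:, None] - idx[None, :])**2 / 4.0**2)
f = np.linalg.cholesky(Sigma + 1e-9 * np.eye(d)) @ rng.standard_normal(d)
y = f + 2.0 * rng.standard_normal(d)
f_hat = bls_denoise(y, Sigma, 2.0)
```

Because $\Sigma(\Sigma + \sigma^2 I)^{-1}$ has all eigenvalues strictly below one, the estimate shrinks the observation toward the smooth subspace favored by the prior, which typically reduces the mean squared error substantially.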
The rows are organized by decreasing mixing weight. The right-hand half of each row shows (u, v) patches sampled from that Gaussian.

Figure 4: Denoising with different noise values (σ = 10, 20, 30) and inpainting with different hole sizes (2 × 2, 4 × 4, 6 × 6).

Figure 5: Visualizing denoising performance (σ = 30).

It can be seen that the first 10 components or so model very smooth patches (in fact the samples appear to be completely flat). A closer examination of the eigenvalues shows that these ten components correspond to smooth motions of different speeds. This can also be seen by comparing the v samples on the top row, which are close to gray, with those in the next two rows, which are much closer to black or white (since the models are zero mean, black and white are equally likely for any component). As can be seen in the figure, almost all the energy in the first components is captured by uniform motions. Thus these components are very similar to a non-local smoothness assumption like the one suggested in [11]: they not only assume that derivatives are small but that the entire 8 × 8 patch is constant. However, unlike the suggestion in [11] to enforce non-local smoothness by applying a median filter at all pixels, the GMM only applies non-local smoothness at the subset of patches that are inferred to be generated by such components. As we go down in the figure towards rarer components,
we see that the components no longer model flat patches but rather motion boundaries. This can be seen both in the samples (the right half of each row) and in the leading eigenvectors (shown on the left), each of which controls one side of a boundary. For example, the bottom row of the figure illustrates a component that seems to generate primarily diagonal motion boundaries. Interestingly, such local subspace models of optical flow have also been suggested by Fleet et al. [4]. They used synthetic models of moving occlusion boundaries and bars to learn linear subspace models of the flow. The GMM supports their intuition that learning separate linear subspace models for flat patches vs. motion boundaries is a good idea. However, unlike the work of Fleet et al., the separation into "flat" vs. "motion boundary" was learned in an unsupervised fashion directly from the data.

Figure 6: The leading eigenvectors and patch samples (u and v) of the GMM components. The GMM is better because it explicitly models edges and flat patches separately.

4 A joint model for optical flow and intensity

As mentioned in the introduction, many authors have suggested modifying the smoothness assumption by conditioning it on the local intensity pattern and giving a higher penalty to motion discontinuities in the absence of intensity discontinuities. We therefore ask: does conditioning on the local intensity give better log likelihood on held-out flow patches? Does it give better performance in tractable inference tasks? We evaluated two flow models that are conditioned on the local intensity pattern. The first is a conditional Gaussian (eq. 1) with exponential weights, i.e. $w(I_x) = \exp(-I_x^2/\sigma^2)$, where the variance parameter $\sigma^2$ is optimized to maximize performance. The second is a Gaussian mixture model that simultaneously models both intensity and flow.
The simultaneous GMM we use includes a 200-component GMM for the intensity together with a 64-component GMM for the flow. We allow a dependence between the hidden variable of the intensity GMM and that of the flow GMM. This is equivalent to a hidden Markov model (HMM) with two hidden variables: one represents the intensity component and one represents the flow component (figure 7). We learn the HMM using the EM algorithm. Initialization is given by independent GMMs learned for the intensity (we actually use the one learned by [14], which is available on their website) and for the flow. The intensity GMM is not changed during learning. Conditioned on the intensity pattern, the flow distribution is still a GMM with 64 components (as in the previous section), but the mixing weights depend on the intensity. Given these two conditional models, we now ask: will the conditional models give better performance than the unconditional ones? The answer, shown in figure 7, was surprising (to us). Conditioning on the intensity gives basically zero improvement in log likelihood and a slight improvement in flow denoising only for very large amounts of noise. Note that for all models shown in this figure, the denoised estimate is the Bayesian least squares (BLS) estimate, and is optimal given the learned models. To investigate this effect, we examine the transition matrix between the intensity components and the flow components (figure 8). If intensity and flow were independent, we would expect all rows of the transition matrix to be the same. If an intensity boundary always led to a flow boundary, we would expect the bottom rows of the matrix to have only one nonzero element. By examining the learned transition matrix, we find that while there is a dependency structure, it is not very strong. Regardless of whether the intensity component corresponds to a boundary or not, the most likely flow components are flat.
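The conditioning mechanism examined here can be sketched as a marginalization: the flow mixing weights given a patch are $\pi(h_{\text{flow}}) = \sum_{h_{\text{int}}} p(h_{\text{int}} \mid I)\, T[h_{\text{int}}, h_{\text{flow}}]$, where $T$ is the transition matrix. A small sketch with random stand-ins for the intensity responsibilities and the transition matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n_int, n_flow = 200, 64    # intensity / flow component counts

# T[a, b] = p(flow component b | intensity component a); rows sum to 1.
T = rng.random((n_int, n_flow))
T /= T.sum(axis=1, keepdims=True)

def conditional_mixing_weights(resp_intensity, T):
    """Flow mixing weights conditioned on an intensity patch, obtained by
    marginalizing the intensity component under its posterior responsibilities."""
    return resp_intensity @ T

resp = rng.random(n_int)
resp /= resp.sum()                     # stand-in for intensity GMM posteriors
w = conditional_mixing_weights(resp, T)
```

If all rows of T are equal, conditioning has no effect on the flow distribution, which is approximately the regime the learned model turns out to be in.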
When there is an intensity boundary, a flow boundary in the same orientation becomes more likely. However, even though it is more likely than in the unconditioned case, it is still less likely than the flat components. To rule out that this effect is due to a local optimum found by EM, we conducted additional experiments in which the emission probabilities were held fixed to the GMMs learned independently for flow and intensity, and each patch in the training set was assigned one intensity and one flow component. We then estimated the joint distribution over flow and intensity components by simply counting relative frequencies in the training set. The results were nearly identical to those found by EM. In summary, while our learned model supports the standard intuition that motion boundaries are more likely at intensity boundaries, it suggests that when dealing with a large dataset with high variability, there is very little benefit (if any) in conditioning flow models on the local intensity.

Figure 7: The hidden Markov model we use to jointly model intensity and flow, and its evaluation (held-out likelihood; denoising at σ = 90). Both log likelihood and inference evaluations show almost no improvement from conditioning flow on intensity.

Figure 8: Left: the transition matrix learned by the HMM. Right: comparing rows of the matrix to the unconditional mixing weights. Conditioned on an intensity boundary, motion boundaries become more likely but are still less likely than a flat motion.

5 Discussion

Optical flow has been an active area of research for over 30 years in computer vision, with many methods based on assumed priors over flow fields.
In this paper, we have leveraged the availability of large ground truth databases to learn priors from data and compare our learned models to the assumptions typically made by computer vision researchers. We find that many of the assumptions are actually supported by the statistics (e.g. the Horn and Schunck model is close to the optimal Gaussian model, robust models are better, intensity discontinuities make motion discontinuities more likely). However, a learned GMM model with 64 components significantly outperforms the standard models used in computer vision, primarily because it explicitly distinguishes between flat patches and boundary patches and then uses a different form of nonlocal smoothness for the different cases.
Acknowledgments
Supported by the Israeli Science Foundation, Intel ICRI-CI and the Gatsby Foundation.
References
[1] M. Bethge. Factorial coding of natural images: how effective are linear models in removing higher-order dependencies? 23(6):1253–1268, June 2006.
[2] Michael J. Black and P. Anandan. A framework for the robust estimation of optical flow. In ICCV, pages 231–236, 1993.
[3] Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. A naturalistic open source movie for optical flow evaluation. In ECCV (6), pages 611–625, 2012.
[4] David J. Fleet, Michael J. Black, Yaser Yacoob, and Allan D. Jepson. Design and use of linear models for image motion analysis. International Journal of Computer Vision, 36(3):171–193, 2000.
[5] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, pages 3354–3361, 2012.
[6] Berthold K. P. Horn and Brian G. Schunck. Determining optical flow. Artificial Intelligence, 17(1):185–203, 1981.
[7] Bruce D. Lucas, Takeo Kanade, et al. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, 1981.
[8] Xiaofeng Ren.
Local grouping for optical flow. In CVPR, 2008.
[9] Stefan Roth and Michael J. Black. On the spatial statistics of optical flow. International Journal of Computer Vision, 74(1):33–50, 2007.
[10] J. Sohl-Dickstein and B. J. Culpepper. Hamiltonian annealed importance sampling for partition function estimation. 2011.
[11] Deqing Sun, Stefan Roth, and Michael J. Black. Secrets of optical flow estimation and their principles. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2432–2439. IEEE, 2010.
[12] Li Xu, Zhenlong Dai, and Jiaya Jia. Scale invariant optical flow. In Computer Vision–ECCV 2012, pages 385–399. Springer, 2012.
[13] Henning Zimmer, Andrés Bruhn, and Joachim Weickert. Optic flow in harmony. International Journal of Computer Vision, 93(3):368–388, 2011.
[14] Daniel Zoran and Yair Weiss. Natural images, Gaussian mixtures and dead leaves. In NIPS, pages 1745–1753, 2012.
Fast Algorithms for Gaussian Noise Invariant Independent Component Analysis James Voss Ohio State University Computer Science and Engineering, 2015 Neil Avenue, Dreese Labs 586. Columbus, OH 43210 vossj@cse.ohio-state.edu Luis Rademacher Ohio State University Computer Science and Engineering, 2015 Neil Avenue, Dreese Labs 495. Columbus, OH 43210 lrademac@cse.ohio-state.edu Mikhail Belkin Ohio State University Computer Science and Engineering, 2015 Neil Avenue, Dreese Labs 597. Columbus, OH 43210 mbelkin@cse.ohio-state.edu Abstract The performance of standard algorithms for Independent Component Analysis quickly deteriorates under the addition of Gaussian noise. This is partially due to a common first step that typically consists of whitening, i.e., applying Principal Component Analysis (PCA) and rescaling the components to have identity covariance, which is not invariant under Gaussian noise. In our paper we develop the first practical algorithm for Independent Component Analysis that is provably invariant under Gaussian noise. The two main contributions of this work are as follows: 1. We develop and implement an efficient, Gaussian noise invariant decorrelation (quasi-orthogonalization) algorithm using Hessians of the cumulant functions. 2. We propose a very simple and efficient fixed-point GI-ICA (Gradient Iteration ICA) algorithm, which is compatible with quasi-orthogonalization, as well as with the usual PCA-based whitening in the noiseless case. The algorithm is based on a special form of gradient iteration (different from gradient descent). We provide an analysis of our algorithm demonstrating fast convergence following from the basic properties of cumulants. We also present a number of experimental comparisons with the existing methods, showing superior results on noisy data and very competitive performance in the noiseless case. 
1 Introduction and Related Works
In the Blind Signal Separation setting, it is assumed that observed data is drawn from an unknown distribution. The goal is to recover the latent signals under some appropriate structural assumption. A prototypical setting is the so-called cocktail party problem: in a room, there are d people speaking simultaneously and d microphones, with each microphone capturing a superposition of the voices. The objective is to recover the speech of each individual speaker. The simplest modeling assumption is to consider each speaker as producing a signal that is a random variable independent of the others, and to take the superposition to be a linear transformation independent of time. This leads to the following formalization: we observe samples from a random vector x distributed according to the equation x = As + b + η, where A is a linear mixing matrix, b ∈ R^d is a constant vector, s is a latent random vector with independent coordinates, and η is an unknown random noise independent of s. For simplicity, we assume A ∈ R^{d×d} is square and of full rank. The latent components of s are viewed as containing the information describing the makeup of the observed signal (voices of individual speakers in the cocktail party setting). The goal of Independent Component Analysis is to approximate the matrix A in order to recover the latent signal s. In practice, most methods ignore the noise term, leaving the simpler problem of recovering the mixing matrix A when x = As is observed. Arguably the two most widely used ICA algorithms are FastICA [13] and JADE [6]. Both of these algorithms are based on a two step process: (1) The data is centered and whitened, that is, made to have identity covariance matrix. This is typically done using principal component analysis (PCA) and rescaling the appropriate components. In the noiseless case this procedure orthogonalizes and rescales the independent components and thus recovers A up to an unknown orthogonal matrix R.
(2) Recover the orthogonal matrix R. Most practical ICA algorithms differ only in the second step. In FastICA, various objective functions are used to perform a projection pursuit style algorithm which recovers the columns of R one at a time. JADE uses a fourth-cumulant based technique to simultaneously recover all columns of R. Step 1 of ICA is affected by the addition of Gaussian noise. Even if the noise is white (has a scalar times identity covariance matrix), the PCA-based whitening procedure can no longer guarantee the whitening of the underlying independent components. Hence, the second step of the process is no longer justified. This failure may be even more significant if the noise is not white, which is likely to be the case in many practical situations. Recent theoretical developments (see [2] and [3]) consider the case where the noise η is an arbitrary (not necessarily white) additive Gaussian variable drawn independently from s. In [2], it was observed that certain cumulant-based techniques for ICA can still be applied for the second step if the underlying signals can be orthogonalized.¹ Orthogonalization of the latent signals (quasi-orthogonalization) is a significantly less restrictive condition as it does not force the underlying signal to have identity covariance (as in whitening in the noiseless case). In the noisy setting, the usual PCA cannot achieve quasi-orthogonalization as it will whiten the mixed signal, but not the underlying components. In [3], we show how quasi-orthogonalization can be achieved in a noise-invariant way through a method based on the fourth-order cumulant tensor. However, a direct implementation of that method requires estimating the full fourth-order cumulant tensor, which is computationally challenging even in relatively low dimensions.
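The non-invariance of whitening can be checked directly at the population level, with no sampling. The sketch below is our illustration (an arbitrary 3×3 mixing matrix, not the authors' code): it computes the Gram matrix of the whitened latent directions W A, which is the identity in the noiseless case but not once non-white Gaussian noise enters the covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(d, d))                 # arbitrary full-rank mixing matrix

def whitened_gram(noise_cov):
    # Population covariance of x = A s + eta, with Cov(s) = I.
    sigma = A @ A.T + noise_cov
    vals, vecs = np.linalg.eigh(sigma)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # PCA whitening, W = Sigma^(-1/2)
    G = W @ A                                   # latent directions after whitening
    return G.T @ G                              # identity iff components orthogonalized

gram_clean = whitened_gram(np.zeros((d, d)))            # noiseless case
gram_noisy = whitened_gram(np.diag([0.0, 0.0, 4.0]))    # non-white Gaussian noise
```

For square invertible A the noiseless Gram matrix is A^T (A A^T)^{-1} A = I exactly; with noise it becomes A^T (A A^T + Σ_η)^{-1} A, which is no longer the identity, so the whitened components are not orthogonalized.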
In this paper we derive a practical version of that algorithm based on directional Hessians of the fourth univariate cumulant, thus reducing the complexity dependence on the data dimensionality from d^4 to d^3, and also allowing for a fully vectorized implementation. We also develop a fast and very simple gradient iteration (not to be confused with gradient descent) algorithm, GI-ICA, which is compatible with the quasi-orthogonalization step and can be shown to have convergence of order r − 1 when implemented using a univariate cumulant of order r. For the cumulant of order four, commonly used in practical applications, we obtain cubic convergence. We show how these convergence rates follow directly from the properties of the cumulants, which sheds some light on the somewhat surprising cubic convergence seen in fourth-order based ICA methods [13, 18, 22]. The update step has complexity O(Nd) where N is the number of samples, giving a total algorithmic complexity of O(Nd^3) for step 1 and O(Nd^2 t) for step 2, where t is the number of iterations for convergence in the gradient iteration. Interestingly, while the techniques are quite different, our gradient iteration algorithm turns out to be closely related to FastICA in the noiseless setting, in the case when the data is whitened and the cumulants of order three or four are used. Thus, GI-ICA can be viewed as a generalization (and a conceptual simplification) of FastICA for more general quasi-orthogonalized data. We present experimental results showing superior performance in the case of data contaminated by Gaussian noise and very competitive performance for clean data. We also note that the GI-ICA algorithms are fast in practice, allowing us to process (decorrelate and detect the independent components) 100 000 points in dimension 5 in well under a second on a standard desktop computer. (Footnote 1: This process of orthogonalizing the latent signals was called quasi-whitening in [2] and later in [3]. However, this conflicts with the definition of quasi-whitening given in [12], which requires the latent signals to be whitened. To avoid the confusion we will use the term quasi-orthogonalization for the process of orthogonalizing the latent signals.) Our Matlab implementation of GI-ICA is available for download at http://sourceforge.net/projects/giica/. Finally, we observe that our method is partially compatible with the robust cumulants introduced in [20]. We briefly discuss how GI-ICA can be extended using these noise-robust techniques for ICA to reduce the impact of sparse noise. The paper is organized as follows. In section 2, we discuss the relevant properties of cumulants, and discuss results from prior work which allows for the quasi-orthogonalization of signals with non-zero fourth cumulant. In section 3, we discuss the connection between the fourth-order cumulant tensor method for quasi-orthogonalization discussed in section 2 with Hessian-based techniques seen in [2] and [11]. We use this connection to create a more computationally efficient and practically implementable version of the quasi-orthogonalization algorithm discussed in section 2. In section 4, we discuss new, fast, projection-pursuit style algorithms for the second step of ICA which are compatible with quasi-orthogonalization. In order to simplify the presentation, all algorithms are stated in an abstract form as if we have exact knowledge of required distribution parameters. Section 5 discusses the estimators of required distribution parameters to be used in practice. Section 6 discusses numerical experiments demonstrating the applicability of our techniques.
Related Work. The name Independent Component Analysis refers to a broad range of algorithms addressing the blind signal separation problem as well as its variants and extensions.
There is an extensive literature on ICA in the signal processing and machine learning communities due to its applicability to a variety of important practical situations. For a comprehensive introduction see the books [8, 14]. In this paper we develop techniques for dealing with noisy data by introducing new and more efficient techniques for quasi-orthogonalization and subsequent component recovery. The quasi-orthogonalization step was introduced in [2], where the authors proposed an algorithm for the case when the fourth cumulants of all independent components are of the same sign. A general algorithm with complete theoretical analysis was provided in [3]. That algorithm required estimating the full fourth-order cumulant tensor. We note that Hessian based techniques for ICA were used in [21, 2, 11], with [11] and [2] using the Hessian of the fourth-order cumulant. The papers [21] and [11] proposed interesting randomized one-step noise-robust ICA algorithms based on the cumulant generating function and the fourth cumulant, respectively, in primarily theoretical settings. The gradient iteration algorithm we propose is closely related to the work [18], which provides a gradient-based algorithm derived from the fourth moment with cubic convergence to learn an unknown parallelepiped in a cryptographic setting. For the special case of the fourth cumulant, the idea of gradient iteration has appeared in the context of FastICA with a different justification, see e.g. [16, Equation 11 and Theorem 2]. We also note the work [12], which develops methods for Gaussian noise-invariant ICA under the assumption that the noise parameters are known. Finally, there are several papers that considered the problem of performing PCA in a noisy framework. [5] gives a provably robust algorithm for PCA under a sparse noise model. [4] performs PCA robust to white Gaussian noise, and [9] performs PCA robust to white Gaussian noise and sparse noise.
2 Using Cumulants to Orthogonalize the Independent Components
Properties of Cumulants: Cumulants are similar to moments and can be expressed in terms of certain polynomials of the moments. However, cumulants have additional properties which allow independent random variables to be algebraically separated. We will be interested in the fourth order multivariate cumulants, and univariate cumulants of arbitrary order. Denote by Q_x the fourth order cumulant tensor for the random vector x. So, (Q_x)_{ijkl} is the cross-cumulant between the random variables x_i, x_j, x_k, and x_l, which we alternatively denote as Cum(x_i, x_j, x_k, x_l). Cumulant tensors are symmetric, i.e. (Q_x)_{ijkl} is invariant under permutations of indices. Multivariate cumulants have the following properties (written in the case of fourth order cumulants):
1. (Multilinearity) Cum(αx_i, x_j, x_k, x_l) = α Cum(x_i, x_j, x_k, x_l) for random vector x and scalar α. If y is a random variable, then Cum(x_i + y, x_j, x_k, x_l) = Cum(x_i, x_j, x_k, x_l) + Cum(y, x_j, x_k, x_l).
2. (Independence) If x_i and x_j are independent random variables, then Cum(x_i, x_j, x_k, x_l) = 0. When x and y are independent, Q_{x+y} = Q_x + Q_y.
3. (Vanishing Gaussian) Cumulants of order 3 and above are zero for Gaussian random variables.
The first order cumulant is the mean, and the second order multivariate cumulant is the covariance matrix. We will denote by κ_r(x) the order-r univariate cumulant, which is equivalent to the cross-cumulant of x with itself r times: κ_r(x) := Cum(x, x, . . . , x) (where x appears r times). Univariate r-cumulants are additive for independent random variables, i.e. κ_r(x + y) = κ_r(x) + κ_r(y), and homogeneous of degree r, i.e. κ_r(αx) = α^r κ_r(x).
Quasi-Orthogonalization Using Cumulant Tensors. Recalling our original notation, x = As + b + η gives the generative ICA model. We define an operation of fourth-order tensors on matrices: For Q ∈ R^{d×d×d×d} and M ∈ R^{d×d}, Q(M) is the matrix such that Q(M)_{ij} := Σ_{k=1}^d Σ_{l=1}^d Q_{ijkl} M_{lk}.
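The additivity and homogeneity properties above can be checked exactly on finite discrete distributions using rational arithmetic. The helper functions below are our own illustrative sketch (not from the paper); κ_4 is computed from raw moments via κ_4 = m_4 − 4μm_3 − 3m_2^2 + 12μ^2 m_2 − 6μ^4.

```python
from fractions import Fraction
from itertools import product

def kappa4(pmf):
    # Fourth cumulant of a finite pmf {value: probability}, computed exactly.
    m = [sum(p * Fraction(v) ** k for v, p in pmf.items()) for k in range(5)]
    mu = m[1]
    return m[4] - 4*mu*m[3] - 3*m[2]**2 + 12*mu**2*m[2] - 6*mu**4

def convolve(p, q):
    # pmf of x + y for independent x ~ p, y ~ q.
    out = {}
    for (a, pa), (b, qb) in product(p.items(), q.items()):
        out[a + b] = out.get(a + b, Fraction(0)) + pa * qb
    return out

coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}    # Bernoulli(1/2): kappa_4 = -1/8
die = {v: Fraction(1, 6) for v in range(1, 7)}   # fair six-sided die

# Additivity for independent variables: kappa_4(x + y) = kappa_4(x) + kappa_4(y).
assert kappa4(convolve(coin, die)) == kappa4(coin) + kappa4(die)

# Degree-4 homogeneity: kappa_4(3x) = 3**4 * kappa_4(x).
assert kappa4({3 * v: p for v, p in coin.items()}) == 81 * kappa4(coin)
```

Because everything is exact rational arithmetic, both identities hold with equality rather than up to sampling error.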
We can use this operation to orthogonalize the latent random signals.
Definition 2.1. A matrix W is called a quasi-orthogonalization matrix if there exists an orthogonal matrix R and a nonsingular diagonal matrix D such that WA = RD.
We will need the following results from [3]. Here we use A_q to denote the qth column of A.
Lemma 2.2. Let M ∈ R^{d×d} be an arbitrary matrix. Then, Q_x(M) = ADA^T where D is a diagonal matrix with entries d_{qq} = κ_4(s_q) A_q^T M A_q.
Theorem 2.3. Suppose that each component of s has non-zero fourth cumulant. Let M = Q_x(I), and let C = Q_x(M^{-1}). Then C = ADA^T where D is a diagonal matrix with entries d_{qq} = 1/‖A_q‖_2^2. In particular, C is positive definite, and for any factorization BB^T of C, B^{-1} is a quasi-orthogonalization matrix.
3 Quasi-Orthogonalization using Cumulant Hessians
We have seen in Theorem 2.3 a tensor-based method which can be used to quasi-orthogonalize observed data. However, this method naïvely requires the estimation of O(d^4) terms from data. There is a connection between the cumulant Hessian-based techniques used in ICA [2, 11] and the tensor-based technique for quasi-orthogonalization described in Theorem 2.3 that allows the tensor method to be rewritten using a series of Hessian operations. We make this connection precise below. The Hessian version requires only O(d^3) terms to be estimated from data and simplifies the computation to consist of matrix and vector operations. Let H_u denote the Hessian operator with respect to a vector u ∈ R^d. The following lemma connects Hessian methods with our tensor-matrix operation (a special case is discussed in [2, Section 2.1]).
Lemma 3.1. H_u(κ_4(u^T x)) = ADA^T where d_{qq} = 12 (u^T A_q)^2 κ_4(s_q).
In Lemma 3.1, the diagonal entries can be rewritten as d_{qq} = 12 κ_4(s_q) (A_q^T (uu^T) A_q). By comparing with Lemma 2.2, we see that applying Q_x against a symmetric, rank one matrix uu^T can be rewritten in terms of the Hessian operations: Q_x(uu^T) = (1/12) H_u(κ_4(u^T x)).
This formula extends to arbitrary symmetric matrices by the following Lemma.
Lemma 3.2. Let M be a symmetric matrix with eigendecomposition UΛU^T such that U = (u_1, u_2, . . . , u_d) and Λ = diag(λ_1, λ_2, . . . , λ_d). Then, Q_x(M) = (1/12) Σ_{i=1}^d λ_i H_{u_i} κ_4(u_i^T x).
The matrices I and M^{-1} in Theorem 2.3 are symmetric. As such, the tensor-based method for quasi-orthogonalization can be rewritten using Hessian operations. This is done in Algorithm 1.
Algorithm 1 Hessian-based algorithm to generate a quasi-orthogonalization matrix.
1: function FindQuasiOrthogonalizationMatrix(x)
2: Let M = (1/12) Σ_{i=1}^d H_u κ_4(u^T x)|_{u=e_i}. See Equation (4) for the estimator.
3: Let UΛU^T give the eigendecomposition of M^{-1}.
4: Let C = Σ_{i=1}^d λ_i H_u κ_4(u^T x)|_{u=U_i}. See Equation (4) for the estimator.
5: Factorize C as BB^T.
6: return B^{-1}
7: end function
4 Gradient Iteration ICA
In the preceding sections, we discussed techniques to quasi-orthogonalize data. For this section, we will assume that quasi-orthogonalization is accomplished, and discuss deflationary approaches that can quickly recover the directions of the independent components. Let W be a quasi-orthogonalization matrix. Then, define y := Wx = WAs + Wη. Note that since η is Gaussian noise, so is Wη. There exists a rotation matrix R and a diagonal matrix D such that WA = RD. Let s̃ := Ds. The coordinates of s̃ are still independent random variables. Gaussian noise makes recovering the scaling matrix D impossible. We aim to recover the rotation matrix R.
To see why recovery of D is impossible, we note that a white Gaussian random variable η_1 has independent components. It is impossible to distinguish between the case where η_1 is part of the signal, i.e. WA(s + η_1) + Wη, and the case where Aη_1 is part of the additive Gaussian noise, i.e. WAs + W(Aη_1 + η), when s, η_1, and η are drawn independently. In the noise-free ICA setting, the latent signal is typically assumed to have identity covariance, placing the scaling information in the columns of A.
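Algorithm 1 can be sketched numerically as follows. This is a rough illustration of ours, not the authors' Matlab release: it uses simple biased moment averages in place of the unbiased k-statistic Hessian estimator of Equation (4), and the sources, mixing matrix, and noise covariance below are arbitrary choices. The check at the end is that W A has approximately orthogonal columns even under non-white Gaussian noise.

```python
import numpy as np

def hessian_k4(x, u):
    # Sample estimate of H_u kappa_4(u . x) for centered x (biased version of
    # Equation (4); the O(1/N) correction factors are dropped for brevity).
    p = x @ u
    n = x.shape[0]
    second = x.T @ x / n                        # E[x x^T]
    fourth = (x * (p ** 2)[:, None]).T @ x / n  # E[(u.x)^2 x x^T]
    g = x.T @ p / n                             # E[(u.x) x]
    return 12.0 * (fourth - np.mean(p ** 2) * second - 2.0 * np.outer(g, g))

def quasi_orth_matrix(x):
    # Algorithm 1: M = Q_x(I), C = Q_x(M^{-1}) via Lemma 3.2, C = B B^T, W = B^{-1}.
    x = x - x.mean(axis=0)
    d = x.shape[1]
    M = sum(hessian_k4(x, e) for e in np.eye(d)) / 12.0
    lam, U = np.linalg.eigh(np.linalg.inv(M))
    C = sum(l * hessian_k4(x, u) for l, u in zip(lam, U.T)) / 12.0
    B = np.linalg.cholesky((C + C.T) / 2.0)     # C is positive definite in theory
    return np.linalg.inv(B)

rng = np.random.default_rng(1)
n, d = 1_000_000, 3
s = np.column_stack([rng.laplace(scale=2 ** -0.5, size=n),   # unit-variance sources
                     rng.exponential(size=n) - 1.0,          # with nonzero kappa_4
                     rng.uniform(-3 ** 0.5, 3 ** 0.5, size=n)])
A = rng.normal(size=(d, d))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns, for readability
x = s @ A.T + rng.normal(size=(n, d)) * np.array([0.2, 0.3, 0.4])  # non-white noise

W = quasi_orth_matrix(x)
K = (W @ A).T @ (W @ A)       # should be (RD)^T (RD) = D^2, i.e. nearly diagonal
off = np.abs(K - np.diag(np.diag(K))).max() / np.diag(K).min()
```

Here `off` measures the largest off-diagonal entry of (WA)^T(WA) relative to its diagonal; at this sample size it stays small, which is exactly the quasi-orthogonalization property WA = RD, despite the non-white Gaussian noise.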
The presence of additive Gaussian noise makes recovery of the scaling information impossible since the latent signals become ill-defined. Following the idea popularized in FastICA, we will discuss a deflationary technique to recover the columns of R one at a time.
Fast Recovery of a Single Independent Component. In the deflationary approach, a function f is fixed that acts upon a directional vector u ∈ R^d. Based on some criterion (typically maximization or minimization of f), an iterative optimization step is performed until convergence. This technique was popularized in FastICA, which is considered fast for the following reasons:
1. As an approximate Newton method, FastICA requires computation of ∇_u f and a quick-to-compute estimate of (H_u(f))^{-1} at each iterative step. Due to the estimate, the computation runs in O(Nd) time, where N is the number of samples.
2. The iterative step in FastICA has local quadratic order convergence using arbitrary functions, and global cubic-order convergence when using the fourth cumulant [13].
We note that cubic convergence rates are not unique to FastICA and have been seen using gradient descent (with the correct step-size) when choosing f as the fourth moment [18]. Our proposed deflationary algorithm will be comparable with FastICA in terms of computational complexity, and the iterative step will take on a conceptually simpler form as it only relies on ∇_u κ_r. We provide a derivation of fast convergence rates that relies entirely on the properties of cumulants. As cumulants are invariant with respect to the additive Gaussian noise, the proposed methods will be admissible for both standard and noisy ICA. While cumulants are essentially unique with the additivity and homogeneity properties [17] when no restrictions are made on the probability space, the preprocessing step of ICA gives additional structure (like orthogonality and centering), providing additional admissible functions.
In particular, [20] designs “robust cumulants” which are only minimally affected by sparse noise. Welling's robust cumulants have versions of the additivity and homogeneity properties, and are consistent with our update step. For this reason, we will state our results in greater generality. Let G be a function of univariate random variables that satisfies the additivity, degree-r (r ≥ 3) homogeneity, and (for the noisy case) the vanishing Gaussians properties of cumulants. Then for a generic choice of input vector v, Algorithm 2 will demonstrate order r − 1 convergence. In particular, if G is κ_3, then we obtain quadratic convergence; and if G is κ_4, we obtain cubic convergence. Lemma 4.1 helps explain why this is true.
Lemma 4.1. ∇_v G(v · y) = r Σ_{i=1}^d (v · R_i)^{r−1} G(s̃_i) R_i.
If we consider what is happening in the basis of the columns of R, then up to some multiplicative constant, each coordinate is raised to the r − 1 power and then renormalized during each step of Algorithm 2. This ultimately leads to the order r − 1 convergence.
Theorem 4.2. If for a unit vector input v to Algorithm 2, h = argmax_i |(v · R_i)^{r−2} G(s̃_i)| has a unique answer, then v has order r − 1 convergence to R_h up to sign. In particular, if the following conditions are met: (1) There exists a coordinate random variable s_i of s such that G(s_i) ≠ 0. (2) v inputted into Algorithm 2 is chosen uniformly at random from the unit sphere S^{d−1}. Then Algorithm 2 converges to a column of R (up to sign) almost surely, and convergence is of order r − 1.
Algorithm 2 A fast algorithm to recover a single column of R when v is drawn generically from the unit sphere. Equations (2) and (3) provide k-statistic based estimates of ∇_v κ_3 and ∇_v κ_4, which can be used as practical choices of ∇_v G on real data.
1: function GI-ICA(v, y)
2: repeat
3: v ← ∇_v G(v^T y)
4: v ← v/‖v‖_2
5: until Convergence
return v
6: end function
Algorithm 3 Algorithm for ICA in the presence of Gaussian noise. Ã recovers A up to column order and scaling.
R^T W is the demixing matrix for the observed random vector x.
function GaussianRobustICA(G, x)
  W = FindQuasiOrthogonalizationMatrix(x)
  y = Wx
  R_columns = ∅
  for i = 1 to d do
    Draw v from S^{d−1} ∩ span(R_columns)^⊥ uniformly at random.
    R_columns = R_columns ∪ {GI-ICA(v, y)}
  end for
  Construct a matrix R using the elements of R_columns as columns.
  s̃ = R^T y
  Ã = (R^T W)^{−1}
  return Ã, s̃
end function
By convergence up to sign, we include the possibility that v oscillates between R_h and −R_h on alternating steps. This can occur if G(s̃_i) < 0 and r is odd. Due to space limitations, the proof is omitted.
Recovering all Independent Components. As a corollary to Theorem 4.2 we get:
Corollary 4.3. Suppose R_1, R_2, . . . , R_k are known for some k < d. Suppose there exists i > k such that G(s_i) ≠ 0. If v is drawn uniformly at random from S^{d−1} ∩ span(R_1, . . . , R_k)^⊥, where S^{d−1} denotes the unit sphere in R^d, then Algorithm 2 with input v converges to a new column of R almost surely.
Since the indexing of R is arbitrary, Corollary 4.3 gives a solution to noisy ICA, in Algorithm 3. In practice (not required by the theory), it may be better to enforce orthogonality between the columns of R, by orthogonalizing v against previously found columns of R at the end of each step in Algorithm 2. We expect the fourth or third cumulant function will typically be chosen for G.
5 Time Complexity Analysis and Estimation of Cumulants
Implementing Algorithms 1 and 2 requires the estimation of functions from data. We will limit our discussion to estimation of the third and fourth cumulants, as lower order cumulants are more statistically stable to estimate than higher order cumulants. κ_3 is useful in Algorithm 2 for non-symmetric distributions. However, since κ_3(s_i) = 0 whenever s_i is a symmetric distribution, it is plausible that κ_3 would not recover all columns of R. When s is suspected of being symmetric, it is prudent to use κ_4 for G.
Alternatively, one can fall back to κ_4 from κ_3 when κ_3 is detected to be near 0. Denote by z^{(1)}, z^{(2)}, . . . , z^{(N)} the observed samples of a random variable z. Given a sample, each cumulant can be estimated in an unbiased fashion by its k-statistic. Denote by k_r(z^{(i)}) the k-statistic sample estimate of κ_r(z). Letting m_r(z^{(i)}) := (1/N) Σ_{i=1}^N (z^{(i)} − z̄)^r give the rth sample central moment, then
k_3(z^{(i)}) := N^2 m_3(z^{(i)}) / ((N − 1)(N − 2)),
k_4(z^{(i)}) := N^2 [(N + 1) m_4(z^{(i)}) − 3(N − 1) m_2(z^{(i)})^2] / ((N − 1)(N − 2)(N − 3))
gives the third and fourth k-statistics [15]. However, we are interested in estimating the gradients (for Algorithm 2) and Hessians (for Algorithm 1) of the cumulants rather than the cumulants themselves. The following Lemma shows how to obtain unbiased estimates:
Lemma 5.1. Let z be a d-dimensional random vector with finite moments up to order r. Let z^{(i)} be an iid sample of z. Let α ∈ N^d be a multi-index. Then ∂_u^α k_r(u · z^{(i)}) is an unbiased estimate for ∂_u^α κ_r(u · z).
If we mean-subtract (via the sample mean) all observed random variables, then the resulting estimates are:
∇_u k_3(u · y) = 3N/((N − 1)(N − 2)) Σ_{i=1}^N (u · y^{(i)})^2 y^{(i)}   (2)
∇_u k_4(u · y) = N^2/((N − 1)(N − 2)(N − 3)) { 4 ((N + 1)/N) Σ_{i=1}^N (u · y^{(i)})^3 y^{(i)} − 12 ((N − 1)/N^2) (Σ_{i=1}^N (u · y^{(i)})^2)(Σ_{i=1}^N (u · y^{(i)}) y^{(i)}) }   (3)
H_u k_4(u · x) = 12N^2/((N − 1)(N − 2)(N − 3)) { ((N + 1)/N) Σ_{i=1}^N (u · x^{(i)})^2 (x x^T)^{(i)} − ((N − 1)/N^2) (Σ_{i=1}^N (u · x^{(i)})^2)(Σ_{i=1}^N (x x^T)^{(i)}) − 2 ((N − 2)/N^2) (Σ_{i=1}^N (u · x^{(i)}) x^{(i)})(Σ_{i=1}^N (u · x^{(i)}) x^{(i)})^T }   (4)
Using (4) to estimate H_u κ_4(u^T x) from data when implementing Algorithm 1, the resulting quasi-orthogonalization algorithm runs in O(Nd^3) time. Using (2) or (3) to estimate ∇_v G(v^T y) (with G chosen to be κ_3 or κ_4 respectively) when implementing Algorithm 2 gives an update step that runs in O(Nd) time. If t bounds the number of iterations to convergence in Algorithm 2, then O(Nd^2 t) steps are required to recover all columns of R once quasi-orthogonalization has been achieved.
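To make the pieces concrete, here is a small end-to-end sketch of the gradient iteration (Algorithm 2 with the orthogonality-enforcing variant mentioned above) on whitened, noiseless data. It is our illustration, not the released code: it uses the simplified biased form of the ∇_u κ_4 estimate rather than the exact k-statistic of Equation (3), and the synthetic sources are arbitrary choices.

```python
import numpy as np

def grad_k4(y, v):
    # Biased sample version of grad_v kappa_4(v . y) for centered y:
    # 4 E[(v.y)^3 y] - 12 E[(v.y)^2] E[(v.y) y]   (cf. Equation (3)).
    p = y @ v
    n = len(p)
    return 4.0 * (y.T @ p ** 3) / n - 12.0 * np.mean(p ** 2) * (y.T @ p) / n

def gi_ica(y, iters=100, seed=0):
    # Deflationary GI-ICA: recover the columns of R one at a time, enforcing
    # orthogonality against previously found columns at each step.
    rng = np.random.default_rng(seed)
    d = y.shape[1]
    R = np.zeros((d, 0))
    for _ in range(d):
        v = rng.normal(size=d)
        for _ in range(iters):
            v = grad_k4(y, v)
            v -= R @ (R.T @ v)          # project out found columns
            v /= np.linalg.norm(v)
        R = np.column_stack([R, v])
    return R

rng = np.random.default_rng(3)
n, d = 200_000, 3
s = np.column_stack([rng.laplace(scale=2 ** -0.5, size=n),   # unit-variance sources
                     rng.exponential(size=n) - 1.0,
                     rng.uniform(-3 ** 0.5, 3 ** 0.5, size=n)])
A = rng.normal(size=(d, d))
x = s @ A.T                              # noiseless model, so whitening suffices

xc = x - x.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(xc.T))
W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # PCA whitening; W A approximates a rotation R
y = xc @ W.T

R_hat = gi_ica(y)
corr = np.abs(R_hat.T @ (W @ A))         # near a permutation matrix on success
```

Each column converges in a handful of iterations in practice (the cubic rate), and `corr` has one entry near 1 per row and column, i.e. the columns of R are recovered up to sign and order.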
6 Simulation Results
In Figure 1, we compare our algorithms to the baselines JADE [7] and versions of FastICA [10], using the code made available by the authors. Except for the choice of the contrast function for FastICA, the baselines were run using default settings. All tests were done using artificially generated data. In implementing our algorithms (available at [19]), we opted to enforce orthogonality during the update step of Algorithm 2 with previously found columns of R. In Figure 1, comparison on five distributions indicates that each of the independent coordinates was generated from a distinct distribution among the Laplace distribution, the Bernoulli distribution with parameter 0.5, the t-distribution with 5 degrees of freedom, the exponential distribution, and the continuous uniform distribution. Most of these distributions are symmetric, making GI-κ_3 inadmissible. When generating data for the ICA algorithm, we generate a random mixing matrix A with condition number 10 (minimum singular value 1 and maximum singular value 10), and intermediate singular values chosen uniformly at random. The noise magnitude indicates the strength of an additive white Gaussian noise. We define 100% noise magnitude to mean variance 10, with 25% noise and 50% noise indicating variances 2.5 and 5 respectively. Performance was measured using the Amari Index introduced in [1]. Let B̂ denote the approximate demixing matrix returned by an ICA algorithm, and let M = B̂A. Then, the Amari index is given by: E := Σ_{i=1}^n (Σ_{j=1}^n |m_{ij}| / max_k |m_{ik}| − 1) + Σ_{j=1}^n (Σ_{i=1}^n |m_{ij}| / max_k |m_{kj}| − 1). The Amari index takes on values between 0 and the dimensionality d. It can be roughly viewed as the distance of M from the nearest scaled permutation matrix PD (where P is a permutation matrix and D is a diagonal matrix). From the noiseless data, we see that quasi-orthogonalization requires more data than whitening in order to provide accurate results.
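The Amari index as defined above is straightforward to implement (our sketch, not the authors' evaluation code); it is zero exactly when B̂A is a scaled permutation matrix.

```python
import numpy as np

def amari_index(M):
    # E = sum_i (sum_j |m_ij| / max_k |m_ik| - 1) + sum_j (sum_i |m_ij| / max_k |m_kj| - 1)
    M = np.abs(np.asarray(M, dtype=float))
    rows = (M / M.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    cols = (M / M.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return rows.sum() + cols.sum()

# A scaled permutation matrix scores exactly zero ...
perfect = np.diag([2.0, -3.0, 1.0])[:, [2, 0, 1]]
# ... while a completely mixed-up (all-ones) matrix scores much higher.
worst = np.ones((3, 3))
```

For example, `amari_index(perfect)` is 0.0, since every row and column of |M| has a single nonzero entry that is also its maximum.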
Once sufficient data is provided, all fourth order methods (GI-κ_4, JADE, and κ_4-FastICA) perform comparably. The difference between GI-κ_4 and κ_4-FastICA is not statistically significant over 50 runs with 100 000 samples.
Figure 1 (six panels of Amari index versus number of samples, for d = 5 and d = 10 with noiseless data, 25% noise magnitude, and 50% noise magnitude; curves for GI-κ_4 (white), GI-κ_4 (quasi-orthogonal), κ_4-FastICA, log cosh-FastICA, and JADE): Comparison of ICA algorithms under various levels of noise. White and quasi-orthogonal refer to the choice of the first step of ICA. All baseline algorithms use whitening. Reported Amari indices denote the mean Amari index over 50 runs on different draws of both A and the data. d gives the data dimensionality, with two copies of each distribution used when d = 10.
We note that GI-κ_4 under whitening and κ_4-FastICA have the same update step (up to a slightly different choice of estimators), with GI-κ_4 differing to allow for quasi-orthogonalization.
Where provided, the error bars give a 2σ confidence interval on the mean Amari index. In all cases, error bars for our algorithms are provided, and error bars for the baseline algorithms are provided when they do not hinder readability. It is clear that all algorithms degrade with the addition of Gaussian noise. However, GI-κ4 under quasi-orthogonalization degrades far less when given sufficient samples. For this reason, the quasi-orthogonalized GI-κ4 outperforms all other algorithms (given sufficient samples), including log cosh-FastICA, which performs best in the noiseless case. Contrasting the performance of GI-κ4 under whitening with itself under quasi-orthogonalization, it is clear that quasi-orthogonalization is necessary to be robust to Gaussian noise. Run times were reasonably fast. For 100 000 samples on the varied distributions (d = 5) with 50% Gaussian noise magnitude, GI-κ4 (including the orthogonalization step) had an average running time of 0.19 seconds using PCA whitening, and 0.23 seconds under quasi-orthogonalization, on a standard desktop with an i7-2600 3.4 GHz CPU and 16 GB RAM. The corresponding average numbers of iterations to convergence per independent component (at 0.0001 error) were 4.16 and 4.08. In the following table, we report the mean number of steps to convergence (per independent component) over the 50 runs for the 50% noise distribution (d = 5); once sufficiently many samples are taken, the number of steps to convergence becomes remarkably small.

Number of data pts        500     1000   5000   10000  50000  100000
whitening+GI-κ4           11.76   5.92   4.99   4.59   4.35   4.16
quasi-orth.+GI-κ4         213.92  65.95  4.48   4.36   4.06   4.08

Acknowledgments
This work was supported by NSF grant IIS 1117707.
References
[1] S. Amari, A. Cichocki, H. H. Yang, et al. A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems, pages 757–763, 1996. [2] S. Arora, R. Ge, A.
Moitra, and S. Sachdeva. Provable ICA with unknown Gaussian noise, with implications for Gaussian mixtures and autoencoders. In NIPS, pages 2384–2392, 2012. [3] M. Belkin, L. Rademacher, and J. Voss. Blind signal separation in the presence of Gaussian noise. In JMLR W&CP, volume 30: COLT, pages 270–287, 2013. [4] C. M. Bishop. Variational principal components. Proc. Ninth Int. Conf. on Artificial Neural Networks (ICANN), 1:509–514, 1999. [5] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? CoRR, abs/0912.3599, 2009. [6] J. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. In Radar and Signal Processing, IEE Proceedings F, volume 140, pages 362–370. IET, 1993. [7] J.-F. Cardoso and A. Souloumiac. Matlab JADE for real-valued data v 1.8. http://perso.telecom-paristech.fr/~cardoso/Algo/Jade/jadeR.m, 2005. [Online; accessed 8-May-2013]. [8] P. Comon and C. Jutten, editors. Handbook of Blind Source Separation. Academic Press, 2010. [9] X. Ding, L. He, and L. Carin. Bayesian robust principal component analysis. Image Processing, IEEE Transactions on, 20(12):3419–3430, 2011. [10] H. Gävert, J. Hurri, J. Särelä, and A. Hyvärinen. Matlab FastICA v 2.5. http://research.ics.aalto.fi/ica/fastica/code/dlcode.shtml, 2005. [Online; accessed 1-May-2013]. [11] D. Hsu and S. M. Kakade. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. In ITCS, pages 11–20, 2013. [12] A. Hyvärinen. Independent component analysis in the presence of Gaussian noise by maximizing joint likelihood. Neurocomputing, 22(1-3):49–67, 1998. [13] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999. [14] A. Hyvärinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411–430, 2000. [15] J. F. Kenney and E. S. Keeping. Mathematics of Statistics, part 2. van Nostrand, 1962. [16] H.
Li and T. Adali. A class of complex ICA algorithms based on the kurtosis cost function. IEEE Transactions on Neural Networks, 19(3):408–420, 2008. [17] L. Mattner. What are cumulants. Documenta Mathematica, 4:601–622, 1999. [18] P. Q. Nguyen and O. Regev. Learning a parallelepiped: Cryptanalysis of GGH and NTRU signatures. J. Cryptology, 22(2):139–160, 2009. [19] J. Voss, L. Rademacher, and M. Belkin. Matlab GI-ICA implementation. http://sourceforge.net/projects/giica/, 2013. [Online]. [20] M. Welling. Robust higher order statistics. In Tenth International Workshop on Artificial Intelligence and Statistics, pages 405–412, 2005. [21] A. Yeredor. Blind source separation via the second characteristic function. Signal Processing, 80(5):897–902, 2000. [22] V. Zarzoso and P. Comon. How fast is FastICA. EUSIPCO, 2006.
Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization
Julien Mairal
LEAR Project-Team - INRIA Grenoble
julien.mairal@inria.fr
Abstract
Majorization-minimization algorithms consist of iteratively minimizing a majorizing surrogate of an objective function. Because of its simplicity and its wide applicability, this principle has been very popular in statistics and in signal processing. In this paper, we intend to make this principle scalable. We introduce a stochastic majorization-minimization scheme which is able to deal with large-scale or possibly infinite data sets. When applied to convex optimization problems under suitable assumptions, we show that it achieves an expected convergence rate of O(1/√n) after n iterations, and of O(1/n) for strongly convex functions. Equally important, our scheme almost surely converges to stationary points for a large class of non-convex problems. We develop several efficient algorithms based on our framework. First, we propose a new stochastic proximal gradient method, which experimentally matches state-of-the-art solvers for large-scale ℓ1-logistic regression. Second, we develop an online DC programming algorithm for non-convex sparse estimation. Finally, we demonstrate the effectiveness of our approach for solving large-scale structured matrix factorization problems.
1 Introduction
Majorization-minimization [15] is a simple optimization principle for minimizing an objective function. It consists of iteratively minimizing a surrogate that upper-bounds the objective, thus monotonically driving the objective function value downhill. This idea is used in many existing procedures. For instance, the expectation-maximization (EM) algorithm (see [5, 21]) builds a surrogate for a likelihood model by using Jensen's inequality.
Other approaches can also be interpreted under the majorization-minimization point of view, such as DC programming [8], where "DC" stands for difference of convex functions, variational Bayes techniques [28], or proximal algorithms [1,23,29]. In this paper, we propose a stochastic majorization-minimization algorithm, which is suitable for solving large-scale problems arising in machine learning and signal processing. More precisely, we address the minimization of an expected cost—that is, an objective function that can be represented by an expectation over a data distribution. For such objectives, online techniques based on stochastic approximations have proven to be particularly efficient, and have drawn a lot of attention in machine learning, statistics, and optimization [3–6,9–12,14,16,17,19,22,24–26,30]. Our scheme follows this line of research. It consists of iteratively building a surrogate of the expected cost when only a single data point is observed at each iteration; this data point is used to update the surrogate, which in turn is minimized to obtain a new estimate. Some previous works are closely related to this scheme: the online EM algorithm for latent data models [5,21] and the online matrix factorization technique of [19] involve for instance surrogate functions updated in a similar fashion. Compared to these two approaches, our method is targeted to more general optimization problems. Another related work is the incremental majorization-minimization algorithm of [18] for finite training sets; it was indeed shown to be efficient for solving machine learning problems where storing dense information about the past iterates can be afforded.
Concretely, this incremental scheme requires storing O(pn) values, where p is the variable size and n is the size of the training set. This issue was our main motivation for proposing a stochastic scheme with a memory load independent of n, thus allowing us to possibly deal with infinite data sets, or a huge variable size p. We study the convergence properties of our algorithm when the surrogates are strongly convex and chosen among the class of first-order surrogate functions introduced in [18], which consist of approximating the possibly non-smooth objective up to a smooth error. When the objective is convex, we obtain expected convergence rates that are asymptotically optimal, or close to optimal [14,22]. More precisely, the convergence rate is of order O(1/√n) in a finite horizon setting, and O(1/n) for a strongly convex objective in an infinite horizon setting. Our second analysis shows that for non-convex problems, our method almost surely converges to a set of stationary points under suitable assumptions. We believe that this result is as valuable as convergence rates for convex optimization. To the best of our knowledge, the literature on stochastic non-convex optimization is rather scarce, and we are only aware of convergence results in more restricted settings than ours—see for instance [3] for the stochastic gradient descent algorithm, [5] for online EM, [19] for online matrix factorization, or [9], which provides stronger guarantees, but for unconstrained smooth problems. We develop several efficient algorithms based on our framework. The first one is a new stochastic proximal gradient method for composite or constrained optimization. This algorithm is related to a long series of work in the convex optimization literature [6,10,12,14,16,22,25,30], and we demonstrate that it performs as well as state-of-the-art solvers for large-scale ℓ1-logistic regression [7].
The second one is an online DC programming technique, which we demonstrate to be better than batch alternatives for large-scale non-convex sparse estimation [8]. Finally, we show that our scheme can efficiently address structured sparse matrix factorization problems in an online fashion, and offers new possibilities to [13,19], such as the use of various loss or regularization functions. This paper is organized as follows: Section 2 introduces first-order surrogate functions for batch optimization; Section 3 is devoted to our stochastic approach and its convergence analysis; Section 4 presents several applications and numerical experiments; and Section 5 concludes the paper.
2 Optimization with First-Order Surrogate Functions
Throughout the paper, we are interested in the minimization of a continuous function f: R^p → R:

min_{θ∈Θ} f(θ),   (1)

where Θ ⊆ R^p is a convex set. The majorization-minimization principle consists of computing a majorizing surrogate g_n of f at iteration n and updating the current estimate by θ_n ∈ arg min_{θ∈Θ} g_n(θ). The success of such a scheme depends on how well the surrogates approximate f. In this paper, we consider a particular class of surrogate functions introduced in [18] and defined as follows:
Definition 2.1 (Strongly Convex First-Order Surrogate Functions). Let κ be in Θ. We denote by S_{L,ρ}(f, κ) the set of ρ-strongly convex functions g such that g ≥ f, g(κ) = f(κ), the approximation error g − f is differentiable, and the gradient ∇(g − f) is L-Lipschitz continuous. We call the functions g in S_{L,ρ}(f, κ) "first-order surrogate functions".
Among the first-order surrogate functions presented in [18], we should mention the following ones:
• Lipschitz Gradient Surrogates. When f is differentiable and ∇f is L-Lipschitz, f admits the following surrogate g in S_{2L,L}(f, κ):

g: θ ↦ f(κ) + ∇f(κ)ᵀ(θ − κ) + (L/2)‖θ − κ‖²₂.

When f is convex, g is in S_{L,L}(f, κ), and when f is µ-strongly convex, g is in S_{L−µ,L}(f, κ).
Minimizing g amounts to performing a classical gradient descent step θ ← κ − (1/L)∇f(κ).
• Proximal Gradient Surrogates. Assume that f splits into f = f1 + f2, where f1 is differentiable, ∇f1 is L-Lipschitz, and f2 is convex. Then, the function g below is in S_{2L,L}(f, κ):

g: θ ↦ f1(κ) + ∇f1(κ)ᵀ(θ − κ) + (L/2)‖θ − κ‖²₂ + f2(θ).

When f1 is convex, g is in S_{L,L}(f, κ). If f1 is µ-strongly convex, g is in S_{L−µ,L}(f, κ). Minimizing g amounts to a proximal gradient step [1,23,29]: θ ← arg min_θ (1/2)‖κ − (1/L)∇f1(κ) − θ‖²₂ + (1/L) f2(θ).
• DC Programming Surrogates. Assume that f = f1 + f2, where f2 is concave and differentiable, ∇f2 is L2-Lipschitz, and g1 is in S_{L1,ρ1}(f1, κ). Then, the following function g is a surrogate in S_{L1+L2,ρ1}(f, κ):

g: θ ↦ g1(θ) + f2(κ) + ∇f2(κ)ᵀ(θ − κ).

When f1 is convex, f1 + f2 is a difference of convex functions, leading to a DC program [8].
(Footnote: to alleviate the memory issue of the incremental scheme of [18], it is possible to cut the dataset into η mini-batches, reducing the memory load to O(pη), which remains cumbersome when p is very large.)
With the definition of first-order surrogates and a basic "batch" algorithm in hand, we now introduce our main contribution: a stochastic scheme for solving large-scale problems.
3 Stochastic Optimization
As pointed out in [4], one is usually not interested in the minimization of an empirical cost on a finite training set, but instead in minimizing an expected cost. Thus, we assume from now on that f has the form of an expectation:

min_{θ∈Θ} [ f(θ) ≜ E_x[ℓ(x, θ)] ],   (2)

where x from some set X represents a data point, which is drawn according to some unknown distribution, and ℓ is a continuous loss function. As often done in the literature [22], we assume that the expectations are well defined and finite valued; we also assume that f is bounded below. We present our approach for tackling (2) in Algorithm 1. At each iteration, we draw a training point x_n, assuming that these points are i.i.d. samples from the data distribution.
Note that in practice, since it is often difficult to obtain true i.i.d. samples, the points x_n are computed by cycling on a randomly permuted training set [4]. Then, we choose a surrogate g_n for the function θ ↦ ℓ(x_n, θ), and we use it to update a function ḡ_n that behaves as an approximate surrogate for the expected cost f. The function ḡ_n is in fact a weighted average of previously computed surrogates, and involves a sequence of weights (w_n)_{n≥1} that will be discussed later. Then, we minimize ḡ_n, and obtain a new estimate θ_n. For convex problems, we also propose to use averaging schemes, denoted by "option 2" and "option 3" in Alg. 1. Averaging is a classical technique for improving convergence rates in convex optimization [10,22], for reasons that are clear in the convergence proofs.

Algorithm 1 Stochastic Majorization-Minimization Scheme
input: θ0 ∈ Θ (initial estimate); N (number of iterations); (w_n)_{n≥1}, weights in (0, 1];
1: initialize the approximate surrogate: ḡ0: θ ↦ (ρ/2)‖θ − θ0‖²₂; θ̄0 = θ0; θ̂0 = θ0;
2: for n = 1, …, N do
3:   draw a training point x_n; define f_n: θ ↦ ℓ(x_n, θ);
4:   choose a surrogate function g_n in S_{L,ρ}(f_n, θ_{n−1});
5:   update the approximate surrogate: ḡ_n = (1 − w_n)ḡ_{n−1} + w_n g_n;
6:   update the current estimate: θ_n ∈ arg min_{θ∈Θ} ḡ_n(θ);
7:   for option 2, update the averaged iterate: θ̄_n ≜ (1 − w_{n+1}) θ̄_{n−1} + w_{n+1} θ_n;
8:   for option 3, update the averaged iterate: θ̂_n ≜ ((1 − w_{n+1}) θ̂_{n−1} + w_{n+1} θ_n) / (Σ_{k=1}^{n+1} w_k);
9: end for
output (option 1): θ_N (current estimate, no averaging);
output (option 2): θ̄_N (first averaging scheme);
output (option 3): θ̂_N (second averaging scheme).

We remark that Algorithm 1 is only practical when the functions ḡ_n can be parameterized with a small number of variables, and when they can be easily minimized over Θ. Concrete examples are discussed in Section 4. Before that, we proceed with the convergence analysis.
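The surrogate-averaging loop can be sketched in a few lines when each surrogate is summarized by parameters that mix linearly (an assumption made for this toy; with w_1 = 1 the initial quadratic ḡ_0 drops out). This is an illustrative reading of Algorithm 1 with "option 1", not the author's implementation:

```python
def stochastic_mm(draw_point, surrogate_of, argmin_of, theta0, weights, n_iter):
    """Sketch of Algorithm 1 (option 1, no averaging).
    surrogate_of(x, theta): parameters of the surrogate g_n anchored at theta.
    argmin_of(params):      minimizer of the surrogate with those parameters.
    The averaged surrogate g_bar is kept as a convex combination of params."""
    theta, params = theta0, None
    for n in range(1, n_iter + 1):
        x = draw_point()
        g_n = surrogate_of(x, theta)
        w = weights(n)
        params = g_n if params is None else (1 - w) * params + w * g_n
        theta = argmin_of(params)
    return theta

# Toy instance: f_n(t) = 0.5*(t - x_n)^2, with the surrogate equal to f_n
# itself, so the averaged surrogate is 0.5*(t - c)^2 with c the weighted
# mean of the x_n; with w_n = 1/n, theta_n tracks the running mean.
xs = iter([1.0, 2.0, 3.0, 4.0])
theta = stochastic_mm(draw_point=lambda: next(xs),
                      surrogate_of=lambda x, th: x,
                      argmin_of=lambda c: c,
                      theta0=0.0, weights=lambda n: 1.0 / n, n_iter=4)
print(theta)  # -> 2.5
```

In real uses of the scheme, `surrogate_of` would return, e.g., the gradient and anchor of a proximal gradient surrogate, and `argmin_of` a proximal step.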
3.1 Convergence Analysis - Convex Case
First, we study the case of convex functions f_n: θ ↦ ℓ(x_n, θ), and make the following assumption:
(A) for all θ in Θ, the functions f_n are R-Lipschitz continuous.
Note that for convex functions, this is equivalent to saying that the subgradients of f_n are uniformly bounded by R. Assumption (A) is classical in the stochastic optimization literature [22]. Our first result shows that with the averaging scheme corresponding to "option 2" in Alg. 1, we obtain an expected convergence rate that makes explicit the role of the weight sequence (w_n)_{n≥1}.
Proposition 3.1 (Convergence Rate). When the functions f_n are convex, under assumption (A), and when ρ = L, we have

E[f(θ̄_{n−1}) − f*] ≤ (L‖θ* − θ0‖²₂ + (R²/L) Σ_{k=1}^n w_k²) / (2 Σ_{k=1}^n w_k) for all n ≥ 1,   (3)

where θ̄_{n−1} is defined in Algorithm 1, θ* is a minimizer of f on Θ, and f* ≜ f(θ*).
Such a rate is similar to that of stochastic gradient descent with averaging; see [22] for example. Note that the constraint ρ = L here is compatible with the proximal gradient surrogate. From Proposition 3.1, it is easy to obtain a O(1/√n) bound for a finite horizon—that is, when the total number of iterations n is known in advance. When n is fixed, such a bound can indeed be obtained by plugging constant weights w_k = γ/√n for all k ≤ n into Eq. (3). Note that the upper bound O(1/√n) cannot be improved in general without making further assumptions on the objective function [22]. The next corollary shows that in an infinite horizon setting and with decreasing weights, we lose a logarithmic factor compared to an optimal convergence rate [14,22] of O(1/√n).
Corollary 3.1 (Convergence Rate - Infinite Horizon - Decreasing Weights). Let us make the same assumptions as in Proposition 3.1 and choose the weights w_n = γ/√n. Then,

E[f(θ̄_{n−1}) − f*] ≤ L‖θ* − θ0‖²₂ / (2γ√n) + R²γ(1 + log(n)) / (2L√n), ∀n ≥ 2.

Our analysis suggests using weights of the form O(1/√n).
In practice, we have found that choosing w_n = √((n0 + 1)/(n0 + n)) performs well, where n0 is tuned on a subsample of the training set.
3.2 Convergence Analysis - Strongly Convex Case
In this section, we introduce an additional assumption:
(B) the functions f_n are µ-strongly convex.
We show that our method achieves a rate O(1/n), which is optimal up to a multiplicative constant for strongly convex functions (see [14,22]).
Proposition 3.2 (Convergence Rate). Under assumptions (A) and (B), set ρ = L + µ, and define β ≜ µ/ρ and w_n ≜ (1 + β)/(1 + βn). Then,

E[f(θ̂_{n−1}) − f*] + (ρ/2) E[‖θ* − θ_n‖²₂] ≤ max(2R²/µ, ρ‖θ* − θ0‖²₂) / (βn + 1) for all n ≥ 1,

where θ̂_n is defined in Algorithm 1, when choosing the averaging scheme called "option 3".
The averaging scheme is slightly different from that of the previous section, and the weights decrease at a different speed. Again, this rate applies to the proximal gradient surrogates, which satisfy the constraint ρ = L + µ. In the next section, we analyze our scheme in a non-convex setting.
3.3 Convergence Analysis - Non-Convex Case
Convergence results for non-convex problems are by nature weak, and difficult to obtain for stochastic optimization [4,9]. In such a context, proving convergence to a global (or local) minimum is out of reach, and classical analyses study instead asymptotic stationary point conditions, which involve directional derivatives (see [2,18]). Concretely, we introduce the following assumptions:
(C) Θ and the support X of the data are compact;
(D) the functions f_n are uniformly bounded by some constant M;
(E) the weights w_n are non-increasing, w1 = 1, Σ_{n≥1} w_n = +∞, and Σ_{n≥1} w_n² √n < +∞;
(F) the directional derivatives ∇f_n(θ, θ′ − θ) and ∇f(θ, θ′ − θ) exist for all θ and θ′ in Θ.
Assumptions (C) and (D) combined with (A) are useful because they allow us to use some uniform convergence results from the theory of empirical processes [27].
In a nutshell, these assumptions ensure that the function class {x ↦ ℓ(x, θ): θ ∈ Θ} is "simple enough", such that a uniform law of large numbers applies. Assumption (E) is more technical: it resembles classical conditions used for proving the convergence of stochastic gradient descent algorithms, usually stating that the weights w_n should be the summand of a diverging sum while the sum of the w_n² should be finite; the constraint Σ_{n≥1} w_n² √n < +∞ is slightly stronger. Finally, (F) is a mild assumption, which is useful to characterize the stationary points of the problem. A classical necessary first-order condition [2] for θ to be a local minimum of f is indeed to have ∇f(θ, θ′ − θ) non-negative for all θ′ in Θ. We call such points θ the stationary points of the function f. The next proposition is a generalization of a convergence result obtained in [19] in the context of sparse matrix factorization.
Proposition 3.3 (Non-Convex Analysis - Almost Sure Convergence). Under assumptions (A), (C), (D), and (E), (f(θ_n))_{n≥0} converges with probability one. Under assumption (F), we also have that

lim inf_{n→+∞} inf_{θ∈Θ} ∇f̄_n(θ_n, θ − θ_n) / ‖θ − θ_n‖₂ ≥ 0,

where the function f̄_n is a weighted empirical risk recursively defined as f̄_n = (1 − w_n) f̄_{n−1} + w_n f_n. It can be shown that f̄_n uniformly converges to f.
Even though f̄_n converges uniformly to the expected cost f, Proposition 3.3 does not imply that the limit points of (θ_n)_{n≥1} are stationary points of f. We obtain such a guarantee when the surrogates are parameterized, an assumption always satisfied when Algorithm 1 is used in practice.
Proposition 3.4 (Non-Convex Analysis - Parameterized Surrogates). Let us make the same assumptions as in Proposition 3.3, and let us assume that the functions ḡ_n are parameterized by some variables κ_n living in a compact set K of R^d. In other words, ḡ_n can be written as g_{κ_n}, with κ_n in K. Suppose there exists a constant K > 0 such that |g_κ(θ) − g_{κ′}(θ)| ≤ K‖κ − κ′‖₂ for all θ in Θ and κ, κ′ in K.
Then, every limit point θ∞ of the sequence (θ_n)_{n≥1} is a stationary point of f—that is, for all θ in Θ, ∇f(θ∞, θ − θ∞) ≥ 0.
Finally, we show that our non-convex convergence analysis can be extended beyond first-order surrogate functions—that is, when g_n does not satisfy exactly Definition 2.1. This is possible when the objective has a particular partially separable structure, as shown in the next proposition. This extension was motivated by the non-convex sparse estimation formulation of Section 4, where such a structure appears.
Proposition 3.5 (Non-Convex Analysis - Partially Separable Extension). Assume that the functions f_n split into f_n(θ) = f_{0,n}(θ) + Σ_{k=1}^K f_{k,n}(γ_k(θ)), where the functions γ_k: R^p → R are convex and R-Lipschitz, and the f_{k,n} are non-decreasing for k ≥ 1. Consider g_{0,n} in S_{L0,ρ1}(f_{0,n}, θ_{n−1}), and some non-decreasing functions g_{k,n} in S_{Lk,0}(f_{k,n}, γ_k(θ_{n−1})). Instead of choosing g_n in S_{L,ρ}(f_n, θ_{n−1}) in Alg. 1, replace it by g_n ≜ θ ↦ g_{0,n}(θ) + Σ_{k=1}^K g_{k,n}(γ_k(θ)). Then, Propositions 3.3 and 3.4 still hold.
4 Applications and Experimental Validation
In this section, we introduce different applications, and provide numerical experiments. A C++/Matlab implementation is available in the software package SPAMS [19] (http://spams-devel.gforge.inria.fr/). All experiments were performed on a single core of a 2GHz Intel CPU with 64GB of RAM.
4.1 Stochastic Proximal Gradient Descent Algorithm
Our first application is a stochastic proximal gradient descent method, which we call SMM (Stochastic Majorization-Minimization), for solving problems of the form:

min_{θ∈Θ} E_x[ℓ(x, θ)] + ψ(θ),   (4)

where ψ is a convex deterministic regularization function, and the functions θ ↦ ℓ(x, θ) are differentiable with L-Lipschitz continuous gradients. We can thus use the proximal gradient surrogate presented in Section 2. Assume that a weight sequence (w_n)_{n≥1} is chosen such that w1 = 1.
By defining some other weights w_n^i recursively as w_n^i ≜ (1 − w_n) w_{n−1}^i for i < n and w_n^n ≜ w_n, our scheme yields the update rule:

θ_n ← arg min_{θ∈Θ} Σ_{i=1}^n w_n^i [∇f_i(θ_{i−1})ᵀθ + (L/2)‖θ − θ_{i−1}‖²₂] + ψ(θ).   (SMM)

Our algorithm is related to FOBOS [6], to SMIDAS [25], and to the truncated gradient method [16] (when ψ is the ℓ1-norm). These three algorithms use indeed the following update rule:

θ_n ← arg min_{θ∈Θ} ∇f_n(θ_{n−1})ᵀθ + (1/(2η_n))‖θ − θ_{n−1}‖²₂ + ψ(θ).   (FOBOS)

Another related scheme is the regularized dual averaging (RDA) of [30], which can be written as

θ_n ← arg min_{θ∈Θ} (1/n) Σ_{i=1}^n ∇f_i(θ_{i−1})ᵀθ + (1/(2η_n))‖θ‖²₂ + ψ(θ).   (RDA)

Compared to these approaches, our scheme includes a weighted average of previously seen gradients, and a weighted average of the past iterates. Some links can also be drawn with approaches such as the "approximate follow the leader" algorithm of [10] and other works [12,14]. We now evaluate the performance of our method for ℓ1-logistic regression. In summary, the datasets consist of pairs (y_i, x_i)_{i=1}^N, where the y_i are in {−1, +1}, and the x_i are in R^p with unit ℓ2-norm. The function ψ in (4) is the ℓ1-norm: ψ(θ) ≜ λ‖θ‖₁, where λ is a regularization parameter; the functions f_i are logistic losses: f_i(θ) ≜ log(1 + e^{−y_i x_iᵀθ}). One part of each dataset is devoted to training, and another part to testing. We used weights of the form w_n ≜ √((n0 + 1)/(n + n0)), where n0 is automatically adjusted at the beginning of each experiment by performing one pass on 5% of the training data. We implemented SMM in C++ and exploited the sparseness of the datasets, such that each update has a computational complexity of the order O(s), where s is the number of non-zero entries of ∇f_n(θ_{n−1}); such an implementation is non-trivial but proved to be very efficient. We consider three datasets described in the table below.
rcv1 and webspam are obtained from the 2008 Pascal large-scale learning challenge (http://largescale.ml.tu-berlin.de). kdd2010 is available from the LIBSVM website (http://www.csie.ntu.edu.tw/~cjlin/libsvm/).

name      Ntr (train)   Nte (test)   p            density (%)   size (GB)
rcv1      781 265       23 149       47 152       0.161         0.95
webspam   250 000       100 000      16 091 143   0.023         14.95
kdd2010   10 000 000    9 264 097    28 875 157   10^{-4}       4.8

We compare our implementation with state-of-the-art publicly available solvers: the batch algorithm FISTA of [1] implemented in the C++ SPAMS toolbox, and LIBLINEAR v1.93 [7]. LIBLINEAR is based on a working-set algorithm and, to the best of our knowledge, is one of the most efficient available solvers for ℓ1-logistic regression with sparse datasets. Because p is large, the incremental majorization-minimization method of [18] could not run for memory reasons. We run every method for 1, 2, 3, 4, 5, 10 and 25 epochs (passes over the training set), for three regularization regimes, respectively yielding solutions with approximately 100, 1 000 and 10 000 non-zero coefficients. We report results for the medium regularization in Figure 1 and provide the rest as supplemental material. FISTA is not represented in this figure since it required more than 25 epochs to achieve reasonable values. Our conclusion is that SMM often provides a reasonable solution after one epoch, and outperforms LIBLINEAR in the low-precision regime. For high-precision regimes, LIBLINEAR should be preferred. Such a conclusion is often obtained when comparing batch and stochastic algorithms [4], but matching the performance of LIBLINEAR is very challenging.
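Since the weights w_n^i in the (SMM) update sum to one, minimizing the averaged surrogate for ψ = λ‖·‖₁ amounts to soft-thresholding a weighted running average of the gradient steps θ_{i−1} − ∇f_i(θ_{i−1})/L. The sketch below is our reading of that update, not the SPAMS implementation:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smm_l1(grad, L, lam, theta0, weights, data):
    """Sketch of the (SMM) update for psi = lam * ||.||_1: keep the
    weighted average z of the gradient steps theta_{i-1} - grad_i/L
    and soft-threshold it at lam/L to get the new iterate."""
    theta, z = np.asarray(theta0, dtype=float), None
    for n, x in enumerate(data, start=1):
        step = theta - grad(x, theta) / L
        w = weights(n)
        z = step if z is None else (1.0 - w) * z + w * step
        theta = soft_threshold(z, lam / L)
    return theta

# Toy check: f_i(theta) = 0.5*||theta - x_i||^2 (L = 1). With lam = 0 and
# weights w_n = 1/n, theta_n is the running mean of the x_i.
data = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
theta = smm_l1(lambda x, t: t - x, L=1.0, lam=0.0,
               theta0=np.zeros(1), weights=lambda n: 1.0 / n, data=data)
print(theta)  # -> [2.5]
```

With a non-zero λ, the same loop produces sparse iterates, which is what makes the O(s) sparse implementation mentioned above possible.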
Figure 1: Comparison between LIBLINEAR and SMM for the medium regularization regime (objective on the training and testing sets, as a function of epochs and of training time, for the datasets rcv1, webspam, and kddb).
4.2 Online DC Programming for Non-Convex Sparse Estimation
We now consider the same experimental setting as in the previous section, but with a non-convex regularizer ψ: θ ↦ λ Σ_{j=1}^p log(|θ[j]| + ε), where θ[j] is the j-th entry of θ. A classical way of minimizing the regularized empirical cost (1/N) Σ_{i=1}^N f_i(θ) + ψ(θ) is to resort to DC programming, which consists of solving a sequence of reweighted-ℓ1 problems [8]. A current estimate θ_{n−1} is updated as a solution of

min_{θ∈Θ} (1/N) Σ_{i=1}^N f_i(θ) + λ Σ_{j=1}^p η_j |θ[j]|, where η_j ≜ 1/(|θ_{n−1}[j]| + ε).

In contrast to this "batch" methodology, we can use our framework to address the problem online. At iteration n of Algorithm 1, we define the function g_n according to Proposition 3.5:

g_n: θ ↦ f_n(θ_{n−1}) + ∇f_n(θ_{n−1})ᵀ(θ − θ_{n−1}) + (L/2)‖θ − θ_{n−1}‖²₂ + λ Σ_{j=1}^p |θ[j]| / (|θ_{n−1}[j]| + ε).

We compare our online DC programming algorithm against the batch one, and report the results in Figure 2, with ε set to 0.01.
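Minimizing one such surrogate g_n is a gradient step followed by soft-thresholding with per-coordinate thresholds λ/(L(|θ_{n−1}[j]| + ε)). The following sketch shows a single surrogate minimization (illustrative only; Algorithm 1 additionally averages surrogates across iterations):

```python
import numpy as np

def dc_surrogate_step(theta, grad_f, L, lam, eps):
    """Minimize one surrogate g_n of Section 4.2: linearize the concave
    log-penalty at theta (reweighted-l1 weights eta_j = lam/(|theta_j|+eps)),
    take a gradient step on f_n, then soft-threshold coordinate-wise."""
    eta = lam / (np.abs(theta) + eps)   # small coefficients penalized harder
    z = theta - grad_f(theta) / L
    return np.sign(z) * np.maximum(np.abs(z) - eta / L, 0.0)

# With a zero gradient, the step just shrinks each coordinate by eta/L:
theta = np.array([2.0, 0.0])
out = dc_surrogate_step(theta, lambda t: np.zeros_like(t),
                        L=1.0, lam=1.0, eps=1.0)
# coordinate 0 shrinks by 1/3; coordinate 1 stays at zero
```

The per-coordinate thresholds make the penalty behave like a gentler ℓ1 on large coefficients and a harsher one near zero, which is the intended sparsity-promoting effect of the log penalty.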
We conclude that the batch reweighted-ℓ1 algorithm always converges after 2 or 3 weight updates, but suffers from local minima issues. The stochastic algorithm exhibits a slower convergence, but provides significantly better solutions. Whether or not there are good theoretical reasons for this fact remains to be investigated. Note that it would have been more rigorous to choose a bounded set Θ, which is required by Proposition 3.5. In practice, this turns out not to be necessary for our method to work well; the iterates θ_n have indeed remained in a bounded set.
Figure 2: Comparison between batch and online DC programming, with medium regularization for the datasets rcv1 and webspam (objective on the train and test sets as a function of iterations/epochs). Additional plots are provided in the supplemental material. Note that each iteration in the batch setting can perform several epochs (passes over training data).
4.3 Online Structured Sparse Coding
In this section, we show that we can bring new functionalities to existing matrix factorization techniques [13,19]. We are given a large collection of signals (x_i)_{i=1}^N in R^m, and we want to find a
In the online learning approach of [19], the only way to regularize D is to use a constraint set, on which we need to be able to project efficiently; this is unfortunately not always possible. In the matrix factorization framework of [13], it is argued that some applications can benefit from a structured penalty ϕ, but the approach of [13] is not easily amenable to stochastic optimization. Our approach makes it possible by using the proximal gradient surrogate gn : D 7→ℓ(xn, Dn−1) + Tr ∇Dℓ(xn, Dn−1)⊤(D −Dn−1) + L 2 ∥D −Dn−1∥2 F + ϕ(D). (5) It is indeed possible to show that D 7→ℓ(xn, D) is differentiable, and its gradient is Lipschitz continuous with a constant L that can be explicitly computed [18,19]. We now design a proof-of-concept experiment. We consider a set of N =400 000 whitened natural image patches xn of size m = 20 × 20 pixels. We visualize some elements from a dictionary D trained by SPAMS [19] on the left of Figure 3; the dictionary elements are almost sparse, but have some residual noise among the small coefficients. Following [13], we propose to use a regularization function ϕ encouraging neighbor pixels to be set to zero together, thus leading to a sparse structured dictionary. We consider the collection G of all groups of variables corresponding to squares of 4 neighbor pixels in {1, . . . , m}. Then, we define ϕ(D) ≜γ1 PK j=1 P g∈G maxk∈g |dj[k]|+γ2∥D∥2 F, where dj is the j-th column of D. The penalty ϕ is a structured sparsity-inducing penalty that encourages groups of variables g to be set to zero together [13]. Its proximal operator can be computed efficiently [20], and it is thus easy to use the surrogates (5). We set λ1 = 0.15 and λ2 = 0.01; after trying a few values for γ1 and γ2 at a reasonable computational cost, we obtain dictionaries with the desired regularization effect, as shown in Figure 3. Learning one dictionary of size K = 256 took a few minutes when performing one pass on the training data with mini-batches of size 100. 
This experiment demonstrates that our approach is more flexible and general than [13] and [19]. Note that it is possible to show that when γ2 is large enough, the iterates Dn necessarily remain in a bounded set, and thus our convergence analysis presented in Section 3.3 applies to this experiment.
Figure 3: Left: Two visualizations of 25 elements from a larger dictionary obtained by the toolbox SPAMS [19]; the second view amplifies the small coefficients. Right: the corresponding views of the dictionary elements obtained by our approach after initialization with the dictionary on the left.
5 Conclusion In this paper, we have introduced a stochastic majorization-minimization algorithm that gracefully scales to millions of training samples. We have shown that it has strong theoretical properties and some practical value in the context of machine learning. We have derived from our framework several new algorithms, which have been shown to match or outperform the state of the art for solving large-scale convex problems, and to open up new possibilities for non-convex ones. In the future, we would like to study surrogate functions that can exploit the curvature of the objective function, which we believe is a crucial issue for dealing with badly conditioned datasets.
Acknowledgments This work was supported by the Gargantua project (program Mastodons - CNRS).
References
[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1):183–202, 2009.
[2] J.M. Borwein and A.S. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2006.
[3] L. Bottou. Online algorithms and stochastic approximations. In D. Saad, editor, Online Learning and Neural Networks. 1998.
[4] L. Bottou and O. Bousquet. The trade-offs of large scale learning. In Adv. NIPS, 2008.
[5] O. Cappé and E. Moulines. On-line expectation–maximization algorithm for latent data models. J. Roy. Stat. Soc. B, 71(3):593–613, 2009.
[6] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res., 10:2899–2934, 2009.
[7] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res., 9:1871–1874, 2008.
[8] G. Gasso, A. Rakotomamonjy, and S. Canu. Recovering sparse signals with non-convex penalties and DC programming. IEEE T. Signal Process., 57(12):4686–4698, 2009.
[9] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. Technical report, 2013.
[10] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Mach. Learn., 69(2-3):169–192, 2007.
[11] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proc. COLT, 2011.
[12] C. Hu, J. Kwok, and W. Pan. Accelerated gradient methods for stochastic optimization and online learning. In Adv. NIPS, 2009.
[13] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In Proc. AISTATS, 2010.
[14] G. Lan. An optimal method for stochastic composite optimization. Math. Program., 133:365–397, 2012.
[15] K. Lange, D.R. Hunter, and I. Yang. Optimization transfer using surrogate objective functions. J. Comput. Graph. Stat., 9(1):1–20, 2000.
[16] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. J. Mach. Learn. Res., 10:777–801, 2009.
[17] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Adv. NIPS, 2012.
[18] J. Mairal. Optimization with first-order surrogate functions. In Proc. ICML, 2013. arXiv:1305.3120.
[19] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res., 11:19–60, 2010.
[20] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. In Adv. NIPS, 2010.
[21] R.M. Neal and G.E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, 1998.
[22] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optimiz., 19(4):1574–1609, 2009.
[23] Y. Nesterov. Gradient methods for minimizing composite objective functions. Technical report, CORE Discussion Paper, 2007.
[24] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717v1, 2012.
[25] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In Proc. COLT, 2009.
[26] S. Shalev-Shwartz and A. Tewari. Stochastic methods for ℓ1 regularized loss minimization. In Proc. ICML, 2009.
[27] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[28] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1–305, 2008.
[29] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE T. Signal Process., 57(7):2479–2493, 2009.
[30] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. J. Mach. Learn. Res., 11:2543–2596, 2010.
Two-Stream Convolutional Networks for Action Recognition in Videos Karen Simonyan Andrew Zisserman Visual Geometry Group, University of Oxford {karen,az}@robots.ox.ac.uk Abstract We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification. 1 Introduction Recognition of human actions in videos is a challenging task which has received a significant amount of attention in the research community [11, 14, 17, 26]. Compared to still image classification, the temporal component of videos provides an additional (and important) clue for recognition, as a number of actions can be reliably recognised based on the motion information. Additionally, video provides natural data augmentation (jittering) for single image (video frame) classification. In this work, we aim at extending deep Convolutional Networks (ConvNets) [19], a state-of-the-art still image representation [15], to action recognition in video data.
This task has recently been addressed in [14] by using stacked video frames as input to the network, but the results were significantly worse than those of the best hand-crafted shallow representations [20, 26]. We investigate a different architecture based on two separate recognition streams (spatial and temporal), which are then combined by late fusion. The spatial stream performs action recognition from still video frames, whilst the temporal stream is trained to recognise action from motion in the form of dense optical flow. Both streams are implemented as ConvNets. Decoupling the spatial and temporal nets also allows us to exploit the availability of large amounts of annotated image data by pre-training the spatial net on the ImageNet challenge dataset [1]. Our proposed architecture is related to the two-streams hypothesis [9], according to which the human visual cortex contains two pathways: the ventral stream (which performs object recognition) and the dorsal stream (which recognises motion); though we do not investigate this connection any further here. The rest of the paper is organised as follows. In Sect. 1.1 we review the related work on action recognition using both shallow and deep architectures. In Sect. 2 we introduce the two-stream architecture and specify the Spatial ConvNet. Sect. 3 introduces the Temporal ConvNet and in particular how it generalizes the previous architectures reviewed in Sect. 1.1. A multi-task learning framework is developed in Sect. 4 in order to allow effortless combination of training data over multiple datasets. Implementation details are given in Sect. 5, and the performance is evaluated in Sect. 6 and compared to the state of the art.
Our experiments on two challenging datasets (UCF101 [24] and HMDB-51 [16]) show that the two recognition streams are complementary, and our deep architecture significantly outperforms that of [14] and is competitive with the state-of-the-art shallow representations [20, 21, 26] in spite of being trained on relatively small datasets. 1.1 Related work Video recognition research has been largely driven by the advances in image recognition methods, which were often adapted and extended to deal with video data. A large family of video action recognition methods is based on shallow high-dimensional encodings of local spatio-temporal features. For instance, the algorithm of [17] consists in detecting sparse spatio-temporal interest points, which are then described using local spatio-temporal features: Histogram of Oriented Gradients (HOG) [7] and Histogram of Optical Flow (HOF). The features are then encoded into the Bag Of Features (BoF) representation, which is pooled over several spatio-temporal grids (similarly to spatial pyramid pooling) and combined with an SVM classifier. In a later work [28], it was shown that dense sampling of local features outperforms sparse interest points. Instead of computing local video features over spatio-temporal cuboids, state-of-the-art shallow video representations [20, 21, 26] make use of dense point trajectories. The approach, first introduced in [29], consists in adjusting local descriptor support regions, so that they follow dense trajectories, computed using optical flow. The best performance in the trajectory-based pipeline was achieved by the Motion Boundary Histogram (MBH) [8], which is a gradient-based feature, separately computed on the horizontal and vertical components of optical flow. A combination of several features was shown to further boost the accuracy.
Recent improvements of trajectory-based hand-crafted representations include compensation of global (camera) motion [10, 16, 26], and the use of the Fisher vector encoding [22] (in [26]) or its deeper variant [23] (in [21]). There have also been a number of attempts to develop a deep architecture for video recognition. In the majority of these works, the input to the network is a stack of consecutive video frames, so the model is expected to implicitly learn spatio-temporal motion-dependent features in the first layers, which can be a difficult task. In [11], an HMAX architecture for video recognition was proposed with pre-defined spatio-temporal filters in the first layer. Later, it was combined [16] with a spatial HMAX model, thus forming spatial (ventral-like) and temporal (dorsal-like) recognition streams. Unlike our work, however, the streams were implemented as hand-crafted and rather shallow (3-layer) HMAX models. In [4, 18, 25], a convolutional RBM and ISA were used for unsupervised learning of spatio-temporal features, which were then plugged into a discriminative model for action classification. Discriminative end-to-end learning of video ConvNets has been addressed in [12] and, more recently, in [14], who compared several ConvNet architectures for action recognition. Training was carried out on a very large Sports-1M dataset, comprising 1.1M YouTube videos of sports activities. Interestingly, [14] found that a network, operating on individual video frames, performs similarly to the networks, whose input is a stack of frames. This might indicate that the learnt spatio-temporal features do not capture the motion well. The learnt representation, finetuned on the UCF-101 dataset, turned out to be 20% less accurate than hand-crafted state-of-the-art trajectory-based representation [20, 27].
Our temporal stream ConvNet operates on multiple-frame dense optical flow, which is typically computed in an energy minimisation framework by solving for a displacement field (typically at multiple image scales). We used a popular method of [2], which formulates the energy based on constancy assumptions for intensity and its gradient, as well as smoothness of the displacement field. Recently, [30] proposed an image patch matching scheme, which is reminiscent of deep ConvNets, but does not incorporate learning.
2 Two-stream architecture for video recognition Video can naturally be decomposed into spatial and temporal components. The spatial part, in the form of individual frame appearance, carries information about scenes and objects depicted in the video. The temporal part, in the form of motion across the frames, conveys the movement of the observer (the camera) and the objects. We devise our video recognition architecture accordingly, dividing it into two streams, as shown in Fig. 1. Each stream is implemented using a deep ConvNet, softmax scores of which are combined by late fusion. We consider two fusion methods: averaging and training a multi-class linear SVM [6] on stacked L2-normalised softmax scores as features.
Figure 1: Two-stream architecture for video classification. [Both streams share a similar layout: conv1 (7×7×96, stride 2, norm., 2×2 pool), conv2 (5×5×256, stride 2, 2×2 pool), conv3–conv5 (3×3×512, stride 1), 2×2 pool, full6 (4096, dropout), full7 (2048, dropout), softmax; the temporal net omits the second normalisation layer. The spatial stream takes a single frame, the temporal stream multi-frame optical flow, and the two softmax outputs are combined by class score fusion.]
Spatial stream ConvNet operates on individual video frames, effectively performing action recognition from still images. The static appearance by itself is a useful clue, since some actions are strongly associated with particular objects. In fact, as will be shown in Sect. 6, action classification from still frames (the spatial recognition stream) is fairly competitive on its own. Since a spatial ConvNet is essentially an image classification architecture, we can build upon the recent advances in large-scale image recognition methods [15], and pre-train the network on a large image classification dataset, such as the ImageNet challenge dataset. The details are presented in Sect. 5. Next, we describe the temporal stream ConvNet, which exploits motion and significantly improves accuracy.
3 Optical flow ConvNets In this section, we describe a ConvNet model, which forms the temporal recognition stream of our architecture (Sect. 2). Unlike the ConvNet models, reviewed in Sect. 1.1, the input to our model is formed by stacking optical flow displacement fields between several consecutive frames. Such input explicitly describes the motion between video frames, which makes the recognition easier, as the network does not need to estimate motion implicitly. We consider several variations of the optical flow-based input, which we describe below.
Figure 2: Optical flow. (a),(b): a pair of consecutive video frames with the area around a moving hand outlined with a cyan rectangle. (c): a close-up of dense optical flow in the outlined area; (d): horizontal component dx of the displacement vector field (higher intensity corresponds to positive values, lower intensity to negative values). (e): vertical component dy. Note how (d) and (e) highlight the moving hand and bow.
The input to a ConvNet contains multiple flows (Sect. 3.1). 3.1 ConvNet input configurations Optical flow stacking. A dense optical flow can be seen as a set of displacement vector fields d_t between the pairs of consecutive frames t and t + 1. By d_t(u, v) we denote the displacement vector at the point (u, v) in frame t, which moves the point to the corresponding point in the following frame t + 1. The horizontal and vertical components of the vector field, d^x_t and d^y_t, can be seen as image channels (shown in Fig. 2), well suited to recognition using a convolutional network. To represent the motion across a sequence of frames, we stack the flow channels d^{x,y}_t of L consecutive frames to form a total of 2L input channels. More formally, let w and h be the width and height of a video; a ConvNet input volume I_τ ∈ R^{w×h×2L} for an arbitrary frame τ is then constructed as follows:
I_τ(u, v, 2k−1) = d^x_{τ+k−1}(u, v),
I_τ(u, v, 2k) = d^y_{τ+k−1}(u, v),  u = [1; w], v = [1; h], k = [1; L].  (1)
For an arbitrary point (u, v), the channels I_τ(u, v, c), c = [1; 2L], encode the motion at that point over a sequence of L frames (as illustrated in Fig. 3, left).
Trajectory stacking. An alternative motion representation, inspired by the trajectory-based descriptors [29], replaces the optical flow, sampled at the same locations across several frames, with the flow sampled along the motion trajectories. In this case, the input volume I_τ, corresponding to a frame τ, takes the following form:
I_τ(u, v, 2k−1) = d^x_{τ+k−1}(p_k),
I_τ(u, v, 2k) = d^y_{τ+k−1}(p_k),  u = [1; w], v = [1; h], k = [1; L],  (2)
where p_k is the k-th point along the trajectory, which starts at the location (u, v) in the frame τ and is defined by the following recurrence relation:
p_1 = (u, v); p_k = p_{k−1} + d_{τ+k−2}(p_{k−1}), k > 1.
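Equation (1) can be sketched in a few lines (0-indexed arrays and illustrative names; the paper indexes frames from 1 and writes the volume as w × h × 2L, whereas this sketch stores it as (h, w, 2L)):

```python
import numpy as np

def stack_flow(flows_x, flows_y, tau, L):
    """Build the 2L-channel input volume of Eq. (1). flows_x[t] / flows_y[t]
    are the (h, w) horizontal / vertical displacement fields between frames
    t and t+1."""
    channels = []
    for k in range(L):
        channels.append(flows_x[tau + k])  # channel 2k-1 in the paper's 1-indexing
        channels.append(flows_y[tau + k])  # channel 2k
    return np.stack(channels, axis=-1)
```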
Compared to the input volume representation (1), where the channels I_τ(u, v, c) store the displacement vectors at the locations (u, v), the input volume (2) stores the vectors sampled at the locations p_k along the trajectory (as illustrated in Fig. 3, right).
Figure 3: ConvNet input derivation from the multi-frame optical flow. Left: optical flow stacking (1) samples the displacement vectors d at the same location in multiple frames. Right: trajectory stacking (2) samples the vectors along the trajectory. The frames and the corresponding displacement vectors are shown with the same colour.
Bi-directional optical flow. Optical flow representations (1) and (2) deal with the forward optical flow, i.e. the displacement field d_t of the frame t specifies the location of its pixels in the following frame t + 1. It is natural to consider an extension to a bi-directional optical flow, which can be obtained by computing an additional set of displacement fields in the opposite direction. We then construct an input volume I_τ by stacking L/2 forward flows between frames τ and τ + L/2 and L/2 backward flows between frames τ − L/2 and τ. The input I_τ thus has the same number of channels (2L) as before. The flows can be represented using either of the two methods (1) and (2).
Mean flow subtraction. It is generally beneficial to perform zero-centering of the network input, as it allows the model to better exploit the rectification non-linearities. In our case, the displacement vector field components can take on both positive and negative values, and are naturally centered in the sense that, across a large variety of motions, the movement in one direction is as probable as the movement in the opposite one. However, given a pair of frames, the optical flow between them can be dominated by a particular displacement, e.g. caused by the camera movement.
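Trajectory stacking (2), with its recurrence p_k = p_{k−1} + d_{τ+k−2}(p_{k−1}), might look like the following sketch (nearest-pixel rounding and border clamping are our assumptions; sub-pixel interpolation is also possible):

```python
import numpy as np

def stack_flow_trajectory(flows_x, flows_y, tau, L):
    """Trajectory stacking, Eq. (2): starting from p_1 = (u, v), follow the
    trajectory and sample the flow along it. flows_* are 0-indexed lists of
    (h, w) displacement-component arrays."""
    h, w = flows_x[tau].shape
    I = np.zeros((h, w, 2 * L))
    for v in range(h):
        for u in range(w):
            py, px = v, u  # current trajectory point p_k
            for k in range(L):
                dx = flows_x[tau + k][py, px]
                dy = flows_y[tau + k][py, px]
                I[v, u, 2 * k] = dx
                I[v, u, 2 * k + 1] = dy
                # Advance along the trajectory, clamped to the image.
                px = min(max(int(round(px + dx)), 0), w - 1)
                py = min(max(int(round(py + dy)), 0), h - 1)
    return I
```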
The importance of camera motion compensation has been previously highlighted in [10, 26], where a global motion component was estimated and subtracted from the dense flow. In our case, we consider a simpler approach: from each displacement field d we subtract its mean vector. Architecture. Above we have described different ways of combining multiple optical flow displacement fields into a single volume I_τ ∈ R^{w×h×2L}. Considering that a ConvNet requires a fixed-size input, we sample a 224 × 224 × 2L sub-volume from I_τ and pass it to the net as input. The hidden layer configuration remains largely the same as that used in the spatial net, and is illustrated in Fig. 1. Testing is similar to the spatial ConvNet, and is described in detail in Sect. 5. 3.2 Relation of the temporal ConvNet architecture to previous representations In this section, we put our temporal ConvNet architecture in the context of prior art, drawing connections to the video representations, reviewed in Sect. 1.1. Methods based on feature encodings [17, 29] typically combine several spatio-temporal local features. Such features are computed from the optical flow and are thus generalised by our temporal ConvNet. Indeed, the HOF and MBH local descriptors are based on the histograms of orientations of optical flow or its gradient, which can be obtained from the displacement field input (1) using a single convolutional layer (containing 4 orientation-sensitive filters), followed by the rectification and pooling layers. The kinematic features of [10] (divergence, curl and shear) are also computed from the optical flow gradient, and, again, can be captured by our convolutional model. Finally, the trajectory feature [29] is computed by stacking the displacement vectors along the trajectory, which corresponds to the trajectory stacking (2). In the supplementary material we visualise the convolutional filters, learnt in the first layer of the temporal network.
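The claim above, that HOF-like descriptors emerge from one layer of orientation-sensitive filters followed by rectification and pooling, can be sketched numerically (we use 1×1 filters w = (cos θ, sin θ); the number of orientations and the pooling cell size are illustrative choices of ours):

```python
import numpy as np

def orientation_responses(dx, dy, n_orient=4, pool=8):
    """Coarse orientation histograms of the flow, akin to HOF bins, from one
    linear layer of orientation-sensitive filters + ReLU + sum-pooling."""
    thetas = 2.0 * np.pi * np.arange(n_orient) / n_orient
    # One 1x1 filter per orientation applied to the (dx, dy) channels.
    resp = np.stack([np.cos(t) * dx + np.sin(t) * dy for t in thetas], axis=-1)
    resp = np.maximum(resp, 0.0)  # rectification (ReLU)
    h, w, _ = resp.shape
    resp = resp[: h - h % pool, : w - w % pool]
    # Sum-pool over non-overlapping pool x pool cells.
    return resp.reshape(h // pool, pool, w // pool, pool, n_orient).sum(axis=(1, 3))
```

A uniform rightward flow, for instance, fires only in the 0-degree bin while the opposite bin is rectified away.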
This provides further evidence that our representation generalises hand-crafted features. As far as the deep networks are concerned, a two-stream video classification architecture of [16] contains two HMAX models which are hand-crafted and less deep than our discriminatively trained ConvNets, which can be seen as a learnable generalisation of HMAX. The convolutional models of [12, 14] do not decouple spatial and temporal recognition streams, and rely on the motion-sensitive convolutional filters, learnt from the data. In our case, motion is explicitly represented using the optical flow displacement field, computed based on the assumptions of constancy of the intensity and smoothness of the flow. Incorporating such assumptions into a ConvNet framework might be able to boost the performance of end-to-end ConvNet-based methods, and is an interesting direction for future research. 4 Multi-task learning Unlike the spatial stream ConvNet, which can be pre-trained on a large still image classification dataset (such as ImageNet), the temporal ConvNet needs to be trained on video data – and the available datasets for video action classification are still rather small. In our experiments (Sect. 6), training is performed on the UCF-101 and HMDB-51 datasets, which have only 9.5K and 3.7K videos, respectively. To decrease over-fitting, one could consider combining the two datasets into one; this, however, is not straightforward due to the intersection between the sets of classes. One option (which we evaluate later) is to only add the images from the classes, which do not appear in the original dataset. This, however, requires manual search for such classes and limits the amount of additional training data. A more principled way of combining several datasets is based on multi-task learning [5]. Its aim is to learn a (video) representation, which is applicable not only to the task in question (such as HMDB-51 classification), but also to other tasks (e.g. UCF-101 classification).
Additional tasks act as a regulariser, and allow for the exploitation of additional training data. In our case, a ConvNet architecture is modified so that it has two softmax classification layers on top of the last fully-connected layer: one softmax layer computes HMDB-51 classification scores, the other one – the UCF-101 scores. Each of the layers is equipped with its own loss function, which operates only on the videos, coming from the respective dataset. The overall training loss is computed as the sum of the individual tasks’ losses, and the network weight derivatives can be found by back-propagation. 5 Implementation details ConvNets configuration. The layer configuration of our spatial and temporal ConvNets is schematically shown in Fig. 1. It corresponds to the CNN-M-2048 architecture of [3] and is similar to the network of [31]. All hidden weight layers use the rectification (ReLU) activation function; max-pooling is performed over 3×3 spatial windows with stride 2; local response normalisation uses the same settings as [15]. Training. The training procedure can be seen as an adaptation of that of [15] to video frames, and is generally the same for both spatial and temporal nets. The network weights are learnt using the mini-batch stochastic gradient descent with momentum (set to 0.9). At each iteration, a mini-batch of 256 samples is constructed by sampling 256 training videos (uniformly across the classes), from each of which a single frame is randomly selected. In spatial net training, a 224 × 224 sub-image is randomly cropped from the selected frame; it then undergoes random horizontal flipping and RGB jittering. The videos are rescaled beforehand, so that the smallest side of the frame equals 256. The only difference between spatial and temporal ConvNet configurations is that we removed the second normalisation layer from the latter to reduce memory consumption.
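The multi-task objective of Sect. 4, two softmax heads over a shared representation whose per-dataset losses are summed, can be sketched as follows (the weight matrices, the dataset-id encoding, and all names are our illustrative choices, not the paper's code):

```python
import numpy as np

def multitask_loss(feats, W_hmdb, W_ucf, labels, dataset_ids):
    """Summed softmax cross-entropy over two classification heads; each
    sample contributes only to its own dataset's head (dataset_ids: 0 for
    HMDB-51, 1 for UCF-101)."""
    def softmax_xent(logits, y):
        logits = logits - logits.max(axis=1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y].sum()

    total = 0.0
    for head_id, W in ((0, W_hmdb), (1, W_ucf)):
        mask = dataset_ids == head_id
        if mask.any():
            total += softmax_xent(feats[mask] @ W, labels[mask])
    return total
```

Because the two head losses are independent sums over disjoint samples, the gradient with respect to the shared features decomposes in the same way, which is what makes plain back-propagation applicable.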
We note that unlike [15], the sub-image is sampled from the whole frame, not just its 256 × 256 center. In the temporal net training, we compute an optical flow volume I for the selected training frame as described in Sect. 3. From that volume, a fixed-size 224 × 224 × 2L input is randomly cropped and flipped. The learning rate is initially set to 10^-2, and then decreased according to a fixed schedule, which is kept the same for all training sets. Namely, when training a ConvNet from scratch, the rate is changed to 10^-3 after 50K iterations, then to 10^-4 after 70K iterations, and training is stopped after 80K iterations. In the fine-tuning scenario, the rate is changed to 10^-3 after 14K iterations, and training stopped after 20K iterations. Testing. At test time, given a video, we sample a fixed number of frames (25 in our experiments) with equal temporal spacing between them. From each of the frames we then obtain 10 ConvNet inputs [15] by cropping and flipping four corners and the center of the frame. The class scores for the whole video are then obtained by averaging the scores across the sampled frames and crops therein. Pre-training on ImageNet ILSVRC-2012. When pre-training the spatial ConvNet, we use the same training and test data augmentation as described above (cropping, flipping, RGB jittering). This yields 13.5% top-5 error on the ILSVRC-2012 validation set, which compares favourably to 16.0% reported in [31] for a similar network. We believe that the main reason for the improvement is sampling of ConvNet inputs from the whole image, rather than just its center. Multi-GPU training. Our implementation is derived from the publicly available Caffe toolbox [13], but contains a number of significant modifications, including parallel training on multiple GPUs installed in a single system. We exploit the data parallelism, and split each SGD batch across several GPUs.
Training a single temporal ConvNet takes 1 day on a system with 4 NVIDIA Titan cards, which constitutes a 3.2 times speed-up over single-GPU training. Optical flow is computed using the off-the-shelf GPU implementation of [2] from the OpenCV toolbox. In spite of the fast computation time (0.06s for a pair of frames), it would still introduce a bottleneck if done on-the-fly, so we pre-computed the flow before training. To avoid storing the displacement fields as floats, the horizontal and vertical components of the flow were linearly rescaled to a [0, 255] range and compressed using JPEG (after decompression, the flow is rescaled back to its original range). This reduced the flow size for the UCF-101 dataset from 1.5TB to 27GB. 6 Evaluation Datasets and evaluation protocol. The evaluation is performed on UCF-101 [24] and HMDB-51 [16] action recognition benchmarks, which are among the largest available annotated video datasets. UCF-101 contains 13K videos (180 frames/video on average), annotated into 101 action classes; HMDB-51 includes 6.8K videos of 51 actions. The evaluation protocol is the same for both datasets: the organisers provide three splits into training and test data, and the performance is measured by the mean classification accuracy across the splits. Each UCF-101 split contains 9.5K training videos; an HMDB-51 split contains 3.7K training videos. We begin by comparing different architectures on the first split of the UCF-101 dataset. For comparison with the state of the art, we follow the standard evaluation protocol and report the average accuracy over three splits on both UCF-101 and HMDB-51. Spatial ConvNets. First, we measure the performance of the spatial stream ConvNet. Three scenarios are considered: (i) training from scratch on UCF-101, (ii) pre-training on ILSVRC-2012 followed by fine-tuning on UCF-101, (iii) keeping the pre-trained network fixed and only training the last (classification) layer.
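The flow compression scheme above, linear rescaling of each displacement component to [0, 255] before JPEG storage, can be sketched as follows (rounding to the nearest level is our assumption; JPEG itself adds further loss on top of this quantisation):

```python
import numpy as np

def quantize_flow(d):
    """Linearly rescale one flow component to [0, 255] for 8-bit storage,
    returning the range needed to undo the mapping."""
    lo, hi = float(d.min()), float(d.max())
    q = np.round(255.0 * (d - lo) / max(hi - lo, 1e-12)).astype(np.uint8)
    return q, lo, hi

def dequantize_flow(q, lo, hi):
    """Map the stored bytes back to the original displacement range."""
    return lo + (hi - lo) * q.astype(np.float64) / 255.0
```

The round-trip error is bounded by half a quantisation level, i.e. (hi − lo)/510 per component, before any JPEG loss.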
For each of the settings, we experiment with setting the dropout regularisation ratio to 0.5 or to 0.9. From the results, presented in Table 1a, it is clear that training the ConvNet solely on the UCF-101 dataset leads to over-fitting (even with high dropout), and is inferior to pre-training on the large ILSVRC-2012 dataset. Interestingly, fine-tuning the whole network gives only marginal improvement over training the last layer only. In the latter setting, higher dropout over-regularises learning and leads to worse accuracy. In the following experiments we opted for training the last layer on top of a pre-trained ConvNet.
Table 1: Individual ConvNets accuracy on UCF-101 (split 1).
(a) Spatial ConvNet (accuracy with dropout ratio 0.5 / 0.9):
From scratch: 42.5% / 52.3%
Pre-trained + fine-tuning: 70.8% / 72.8%
Pre-trained + last layer: 72.7% / 59.9%
(b) Temporal ConvNet (accuracy with mean subtraction off / on):
Single-frame optical flow (L = 1): 73.9%
Optical flow stacking (1), L = 5: 80.4%
Optical flow stacking (1), L = 10: 79.9% / 81.0%
Trajectory stacking (2), L = 10: 79.6% / 80.2%
Optical flow stacking (1), L = 10, bi-directional: 81.2%
[Footnote 1: Very recently, [14] released the Sports-1M dataset of 1.1M automatically annotated YouTube sports videos. Processing a dataset of such scale is very challenging, and we plan to address it in future work.]
Temporal ConvNets. Having evaluated spatial ConvNet variants, we now turn to the temporal ConvNet architectures, and assess the effect of the input configurations, described in Sect. 3.1. In particular, we measure the effect of: using multiple (L = {5, 10}) stacked optical flows; trajectory stacking; mean displacement subtraction; using the bi-directional optical flow. The architectures are trained on the UCF-101 dataset from scratch, so we used an aggressive dropout ratio of 0.9 to help improve generalisation. The results are shown in Table 1b. First, we can conclude that stacking multiple (L > 1) displacement fields in the input is highly beneficial, as it provides the network with long-term motion information, which is more discriminative than the flow between a pair of frames (the L = 1 setting). Increasing the number of input flows from 5 to 10 leads to a smaller improvement, so we kept L fixed to 10 in the following experiments. Second, we find that mean subtraction is helpful, as it reduces the effect of global motion between the frames. We use it in the following experiments as default. The difference between different stacking techniques is marginal; it turns out that optical flow stacking performs better than trajectory stacking, and using the bi-directional optical flow is only slightly better than a uni-directional forward flow. Finally, we note that temporal ConvNets significantly outperform the spatial ConvNets (Table 1a), which confirms the importance of motion information for action recognition. We also implemented the “slow fusion” architecture of [14], which amounts to applying a ConvNet to a stack of RGB frames (11 frames in our case). When trained from scratch on UCF-101, it achieved 56.4% accuracy, which is better than a single-frame architecture trained from scratch (52.3%), but is still far off the network trained from scratch on optical flow. This shows that while multi-frame information is important, it is also important to present it to a ConvNet in an appropriate manner. Multi-task learning of temporal ConvNets. Training temporal ConvNets on UCF-101 is challenging due to the small size of the training set. An even bigger challenge is to train the ConvNet on HMDB-51, where each training split is 2.6 times smaller than that of UCF-101.
Here we evaluate different options for increasing the effective training set size of HMDB-51: (i) fine-tuning a temporal network pre-trained on UCF-101; (ii) adding 78 classes from UCF-101, which are manually selected so that there is no intersection between these classes and the native HMDB-51 classes; (iii) using the multi-task formulation (Sect. 4) to learn a video representation, shared between the UCF-101 and HMDB-51 classification tasks. The results are reported in Table 2. As expected, it is beneficial to utilise full (all splits combined) UCF-101 data for training (either explicitly by borrowing images, or implicitly by pre-training). Multi-task learning performs the best, as it allows the training procedure to exploit all available training data.

Table 2: Temporal ConvNet accuracy on HMDB-51 (split 1 with additional training data).

Training setting                                       Accuracy
Training on HMDB-51 without additional data            46.6%
Fine-tuning a ConvNet, pre-trained on UCF-101          49.0%
Training on HMDB-51 with classes added from UCF-101    52.8%
Multi-task learning on HMDB-51 and UCF-101             55.4%

We have also experimented with multi-task learning on the UCF-101 dataset, by training a network to classify both the full HMDB-51 data (all splits combined) and the UCF-101 data (a single split). On the first split of UCF-101, the accuracy was measured to be 81.5%, which improves on the 81.0% achieved using the same settings, but without the additional HMDB classification task (Table 1b).

Two-stream ConvNets. Here we evaluate the complete two-stream model, which combines the two recognition streams. One way of combining the networks would be to train a joint stack of fully-connected layers on top of the full6 or full7 layers of the two nets. This, however, was not feasible in our case due to over-fitting. We therefore fused the softmax scores using either averaging or a linear SVM.
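Fusion by averaging the softmax scores, the simpler of the two fusion schemes just mentioned, amounts to one line; the sketch below is our own illustration with made-up scores, not the paper's outputs (the SVM variant would instead train a linear classifier on the stacked scores).

```python
import numpy as np

def fuse_by_averaging(spatial_scores, temporal_scores):
    """Average per-class softmax scores of the two streams.

    Both inputs: (num_videos, num_classes) softmax outputs.
    """
    return 0.5 * (spatial_scores + temporal_scores)

# made-up scores for two videos over three classes
spatial = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
temporal = np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
fused = fuse_by_averaging(spatial, temporal)
print(fused.argmax(axis=1))  # [1 2]
```

Because each stream's scores already sum to one per video, the fused scores remain a valid distribution; the final class prediction is the arg-max of the fused scores.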
From Table 3 we conclude that: (i) the temporal and spatial recognition streams are complementary, as their fusion significantly improves on both (6% over the temporal net and 14% over the spatial net); (ii) SVM-based fusion of softmax scores outperforms fusion by averaging; (iii) using bi-directional flow is not beneficial in the case of ConvNet fusion; (iv) the temporal ConvNet trained using multi-task learning performs the best, both alone and when fused with a spatial net.

Table 3: Two-stream ConvNet accuracy on UCF-101 (split 1).

Spatial ConvNet            Temporal ConvNet              Fusion Method   Accuracy
Pre-trained + last layer   bi-directional                averaging       85.6%
Pre-trained + last layer   uni-directional               averaging       85.9%
Pre-trained + last layer   uni-directional, multi-task   averaging       86.2%
Pre-trained + last layer   uni-directional, multi-task   SVM             87.0%

Comparison with the state of the art. We conclude the experimental evaluation with a comparison against the state of the art on three splits of UCF-101 and HMDB-51. For that we used a spatial net, pre-trained on ILSVRC, with the last layer trained on UCF or HMDB. The temporal net was trained on UCF and HMDB using multi-task learning, and its input was computed using uni-directional optical flow stacking with mean subtraction. The softmax scores of the two nets were combined using averaging or an SVM. As can be seen from Table 4, both our spatial and temporal nets alone outperform the deep architectures of [14, 16] by a large margin. The combination of the two nets further improves the results (in line with the single-split experiments above), and is comparable to the very recent state-of-the-art hand-crafted models [20, 21, 26]. Table 4: Mean accuracy (over three splits) on UCF-101 and HMDB-51.
Method                                                                 UCF-101   HMDB-51
Improved dense trajectories (IDT) [26, 27]                             85.9%     57.2%
IDT with higher-dimensional encodings [20]                             87.9%     61.1%
IDT with stacked Fisher encoding [21] (based on Deep Fisher Net [23])  -         66.8%
Spatio-temporal HMAX network [11, 16]                                  -         22.8%
“Slow fusion” spatio-temporal ConvNet [14]                             65.4%     -
Spatial stream ConvNet                                                 73.0%     40.5%
Temporal stream ConvNet                                                83.7%     54.6%
Two-stream model (fusion by averaging)                                 86.9%     58.0%
Two-stream model (fusion by SVM)                                       88.0%     59.4%

7 Conclusions and directions for improvement

We proposed a deep video classification model with competitive performance, which incorporates separate spatial and temporal recognition streams based on ConvNets. Currently it appears that training a temporal ConvNet on optical flow (as here) is significantly better than training on raw stacked frames [14]. The latter is probably too challenging, and might require architectural changes (for example, a combination with the deep matching approach of [30]). Despite using optical flow as input, our temporal model does not require significant hand-crafting, since the flow is computed using a method based on the generic assumptions of constancy and smoothness. As we have shown, extra training data is beneficial for our temporal ConvNet, so we are planning to train it on large video datasets, such as the recently released collection of [14]. This, however, poses a significant challenge on its own due to the gigantic amount of training data (multiple TBs). There still remain some essential ingredients of the state-of-the-art shallow representation [26] which are missing in our current architecture. The most prominent one is local feature pooling over spatio-temporal tubes, centered at the trajectories. Even though the input (2) captures the optical flow along the trajectories, the spatial pooling in our network does not take the trajectories into account.
Another potential area of improvement is explicit handling of camera motion, which in our case is compensated for by mean displacement subtraction.

Acknowledgements

This work was supported by ERC grant VisRec no. 228180. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.

References

[1] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge (ILSVRC), 2010. URL http://www.image-net.org/challenges/LSVRC/2010/.
[2] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In Proc. ECCV, pages 25–36, 2004.
[3] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In Proc. BMVC, 2014.
[4] B. Chen, J. A. Ting, B. Marlin, and N. de Freitas. Deep learning of invariant spatio-temporal features from video. In NIPS Deep Learning and Unsupervised Feature Learning Workshop, 2010.
[5] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proc. ICML, pages 160–167, 2008.
[6] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, 2:265–292, 2001.
[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. CVPR, volume 2, pages 886–893, 2005.
[8] N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. In Proc. ECCV, pages 428–441, 2006.
[9] M. A. Goodale and A. D. Milner. Separate visual pathways for perception and action. Trends in Neurosciences, 15(1):20–25, 1992.
[10] M. Jain, H. Jegou, and P. Bouthemy. Better exploiting motion for better action recognition. In Proc. CVPR, pages 2555–2562, 2013.
[11] H. Jhuang, T. Serre, L. Wolf, and T. Poggio. A biologically inspired system for action recognition. In Proc. ICCV, pages 1–8, 2007.
[12] S. Ji, W. Xu, M.
Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE PAMI, 35(1):221–231, 2013.
[13] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proc. CVPR, 2014.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[16] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. In Proc. ICCV, pages 2556–2563, 2011.
[17] I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Proc. CVPR, 2008.
[18] Q. V. Le, W. Y. Zou, S. Y. Yeung, and A. Y. Ng. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In Proc. CVPR, pages 3361–3368, 2011.
[19] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[20] X. Peng, L. Wang, X. Wang, and Y. Qiao. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. CoRR, abs/1405.4506, 2014.
[21] X. Peng, C. Zou, Y. Qiao, and Q. Peng. Action recognition with stacked Fisher vectors. In Proc. ECCV, pages 581–595, 2014.
[22] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
[23] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep Fisher networks for large-scale image classification. In NIPS, 2013.
[24] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. CoRR, abs/1212.0402, 2012.
[25] G. W. Taylor, R. Fergus, Y.
LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In Proc. ECCV, pages 140–153, 2010.
[26] H. Wang and C. Schmid. Action recognition with improved trajectories. In Proc. ICCV, pages 3551–3558, 2013.
[27] H. Wang and C. Schmid. LEAR-INRIA submission for the THUMOS workshop. In ICCV Workshop on Action Recognition with a Large Number of Classes, 2013.
[28] H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In Proc. BMVC, pages 1–11, 2009.
[29] H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In Proc. CVPR, pages 3169–3176, 2011.
[30] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In Proc. ICCV, pages 1385–1392, 2013.
[31] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013.
Compressive Sensing of Signals from a GMM with Sparse Precision Matrices

Jianbo Yang¹  Xuejun Liao¹  Minhua Chen²  Lawrence Carin¹
¹Department of Electrical and Computer Engineering, Duke University
²Department of Statistics & Department of Computer Science, University of Chicago
{jianbo.yang, xjliao, lcarin}@duke.edu, dukemeeting@gmail.com

Abstract

This paper is concerned with compressive sensing of signals drawn from a Gaussian mixture model (GMM) with sparse precision matrices. Previous work has shown: (i) a signal drawn from a given GMM can be perfectly reconstructed from r noise-free measurements if the (dominant) rank of each covariance matrix is less than r; (ii) a sparse Gaussian graphical model can be efficiently estimated from fully-observed training signals using graphical lasso. This paper addresses a problem more challenging than both (i) and (ii), by assuming that the GMM is unknown and each signal is only observed through incomplete linear measurements. Under these challenging assumptions, we develop a hierarchical Bayesian method to simultaneously estimate the GMM and recover the signals using solely the incomplete measurements and a Bayesian shrinkage prior that promotes sparsity of the Gaussian precision matrices. In addition, we provide theoretical performance bounds to relate the reconstruction error to the number of signals for which measurements are available, the sparsity level of precision matrices, and the “incompleteness” of measurements. The proposed method is demonstrated extensively on compressive sensing of imagery and video, and the results with simulated and hardware-acquired real measurements show significant performance improvement over state-of-the-art methods.

1 Introduction

Gaussian mixture models (GMMs) [1, 2, 3] have become a popular signal model for compressive sensing [4, 5] of imagery and video, partly because the information domain in these problems can be decomposed into subdomains known as pixel/voxel patches [3, 6].
A GMM employs a Gaussian precision matrix to capture the statistical relations between local pixels/voxels within a patch, and meanwhile captures the global statistics between patches using its clustering mechanism. Compressive sensing (CS) of signals drawn from a GMM admits closed-form minimum mean squared error (MMSE) reconstruction from linear measurements. Recent theoretical analysis in [7] shows that, given a sensing matrix with entries i.i.d. drawn from a zero-mean, fixed-variance Gaussian distribution or a Bernoulli distribution with parameter 0.5, if the GMM is known and the (dominant) rank of each covariance matrix is less than r, each signal can be perfectly reconstructed from r noise-free measurements. Though this is a much less stringent reconstruction condition than that prescribed by standard restricted-isometry-property (RIP) bounds, it relies on the assumption of knowing the exact GMM. If a sufficient number of fully observed signals are available beforehand, one can use maximum likelihood (ML) estimators to train a GMM [8, 9, 7, 1, 10] for use in reconstructing the signals in question. Unfortunately, finding an accurate GMM a priori is usually a challenge in practice, because it is difficult to obtain training signals that match the statistics of the interrogated signals. Recent work [2] on GMM-based methods proposes to solve this problem by estimating the Gaussian components based on measurements of the signals under interrogation, without resorting to any fully-observed signals to train a model in advance. The method of [2] has two drawbacks: (i) it estimates full dense Gaussian covariance matrices, with the number of free parameters to be estimated growing quadratically with the signal dimensionality n; (ii) it does not have performance guarantees, because all previous theoretical results, including those in [7], assume the GMM is given and thus are no longer applicable to the method of [2]. This paper addresses these two issues.
First, we effectively reduce the number of GMM parameters by restricting the GMM to have sparse precision matrices with group sparsity patterns, making the GMM a mixture of group-sparse Gaussian graphical models. The group sparsity is motivated by the Markov random field (MRF) property of natural images and video [11, 12, 13]. Instead of having n² parameters for each Gaussian component as in [2], we have only n + s parameters, where s is the number of nonzero off-diagonals of the precision matrix. We develop a variational maximum-marginal-likelihood estimator (variational MMLE) to simultaneously estimate the GMM and reconstruct the signals, with a Bayesian shrinkage prior used to promote sparsity of the Gaussian precision matrices. Our variational MMLE maximizes the marginal likelihood of the GMM given only the linear measurements, with the unknown signals treated as random variables and integrated out of the likelihood. A key step of the variational MMLE is using Bayesian graphical lasso to re-estimate the sparse Gaussian precision matrices based on a posteriori signal samples conditional on the linear measurements. Second, we provide theoretical performance bounds under the assumption that the GMM is not exactly known. Assuming the GMM has sparse precision matrices, our theoretical results relate the signal reconstruction error to the number of signals for which measurements are available, the sparsity level of the precision matrices, and the “incompleteness” of measurements, where the last is defined as the uncertainty (variance) of a signal given its linear measurements. In the experiments, we present reconstruction results of the proposed method on both simulated measurements and real measurements acquired by actual hardware [6]. The proposed method outperforms state-of-the-art CS reconstruction algorithms by significant margins. Notations.
Let $\mathcal{N}(x|\mu, \Omega^{-1})$ denote a Gaussian density over $x$ with mean $\mu$ and precision matrix $\Omega$, $\|M\|_F$ denote the Frobenius norm of a matrix $M$, $\|M\|_{\max}$ denote the largest entry of $M$ in magnitude, $\mathrm{tr}(M)$ denote the trace of $M$, $\Omega_0 = \Sigma_0^{-1}$ denote the true precision matrix (i.e., the inverse of the true covariance matrix $\Sigma_0$), and $\Omega^*$ denote the estimate of $\Omega_0$ by the proposed model. Herein, the eigenvalues of $\Sigma_0$ are assumed to be bounded in a constant interval $[\tau_1, \tau_2] \subset (0, \infty)$, to guarantee the existence of $\Omega_0$. For functions $f(x)$ and $g(x)$, we write $f(x) \asymp g(x)$ when $f(x) = O(g(x))$ and $g(x) = O(f(x))$ hold simultaneously.

2 Learning a GMM of Unknown Signals from Linear Measurements

2.1 Signal Reconstruction with a Given GMM

The linear measurement of an unknown signal $x \in \mathbb{R}^n$ can be written as $y = \Phi x + \epsilon$, where $\Phi \in \mathbb{R}^{m \times n}$ is a sensing matrix and $\epsilon \in \mathbb{R}^m$ denotes measurement noise (we are interested in $m < n$). Assuming $\epsilon \sim \mathcal{N}(\epsilon|0, R)$, one has $p(y|x) = \mathcal{N}(y|\Phi x, R)$. We further assume $R$ to be a scaled identity matrix, $R = \kappa^{-1} I$, so the noise is white Gaussian. If $x$ is governed by a GMM, i.e., $p(x) = \sum_{z=1}^{K} \pi^{(z)} \mathcal{N}(x|\mu^{(z)}, \Omega^{(z)-1})$, one may obtain

$p(y, x, z) = \pi^{(z)} \mathcal{N}(y|\Phi x, R)\, \mathcal{N}(x|\mu^{(z)}, \Omega^{(z)-1})$,
$p(y) = \sum_{z=1}^{K} \pi^{(z)} \mathcal{N}(y|\Phi \mu^{(z)},\ R + \Phi \Omega^{(z)-1} \Phi')$,
$p(x, z|y) = \rho^{(z)} \mathcal{N}(x|\eta^{(z)}, C^{(z)})$,   (1)

where

$C^{(z)} = \big(\Phi' R^{-1} \Phi + \Omega^{(z)}\big)^{-1}$, $\quad \eta^{(z)} = \mu^{(z)} + C^{(z)} \Phi' R^{-1} (y - \Phi \mu^{(z)})$,
$\rho^{(z)} = \dfrac{\pi^{(z)} \mathcal{N}(y|\Phi \mu^{(z)},\ R + \Phi \Omega^{(z)-1} \Phi')}{\sum_{l=1}^{K} \pi^{(l)} \mathcal{N}(y|\Phi \mu^{(l)},\ R + \Phi \Omega^{(l)-1} \Phi')}$.   (2)

When the GMM is exactly known, the signal is reconstructed analytically as the conditional mean,

$\hat{x} \triangleq E(x|y) = \sum_{z=1}^{K} \rho^{(z)} \eta^{(z)}$.   (3)

It has been shown in [7] that, if the (dominant) rank of each Gaussian covariance matrix is less than $r$, the signal can be perfectly reconstructed from only $r$ measurements in the low-noise regime.
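The closed-form MMSE reconstruction (1)–(3) is straightforward to implement. The sketch below is our own illustration (toy sizes, our own function name), using a single near-low-rank Gaussian component and noise-free measurements, where per [7] a number of measurements exceeding the dominant rank suffices for accurate recovery.

```python
import numpy as np

def gmm_mmse_reconstruct(y, Phi, pis, mus, Omegas, kappa=1e6):
    """MMSE estimate (3) of x from y = Phi x + eps under a known GMM.

    pis: (K,) weights; mus: list of (n,) means; Omegas: list of (n, n)
    precision matrices; noise covariance R = kappa^{-1} I.
    """
    m, _ = Phi.shape
    R = np.eye(m) / kappa
    Rinv = kappa * np.eye(m)
    log_w, etas = [], []
    for pi_z, mu, Om in zip(pis, mus, Omegas):
        Sy = R + Phi @ np.linalg.inv(Om) @ Phi.T        # cov of y given z
        resid = y - Phi @ mu
        _, logdet = np.linalg.slogdet(Sy)
        log_w.append(np.log(pi_z) - 0.5 * (logdet + resid @ np.linalg.solve(Sy, resid)))
        C = np.linalg.inv(Phi.T @ Rinv @ Phi + Om)      # posterior covariance C^(z)
        etas.append(mu + C @ Phi.T @ Rinv @ resid)      # posterior mean eta^(z)
    log_w = np.asarray(log_w)
    rho = np.exp(log_w - log_w.max())
    rho /= rho.sum()                                    # responsibilities rho^(z)
    return np.sum(rho[:, None] * np.asarray(etas), axis=0)

# toy check: one near-rank-2 component, three noise-free measurements
rng = np.random.default_rng(0)
n, r, m = 6, 2, 3
U = rng.normal(size=(n, r))
Omega = np.linalg.inv(U @ U.T + 1e-6 * np.eye(n))   # near-rank-2 covariance
x_true = U @ rng.normal(size=r)
Phi = rng.normal(size=(m, n))
x_hat = gmm_mmse_reconstruct(Phi @ x_true, Phi, [1.0], [np.zeros(n)], [Omega])
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small
```

The component weights are normalized in log space for numerical stability; with K = 1 the mixture reduces to ordinary Gaussian (Wiener-style) posterior-mean reconstruction.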
2.2 Restriction of the GMM to a mixture of Gaussian Markov Random Fields A Markov random field (MRF), also known as an undirected graphical model, provides a graphical representation of the joint probability distribution over multiple random variables, by considering the conditional dependences among the variables [11, 12, 13]. In image analysis, each node of an MRF corresponds to a pixel of the image in question, and an edge between two nodes is often modeled by a potential function to characterize the conditional dependence between the associated pixels. Because of the local smoothness structure of images, the edges of an MRF are usually chosen based on a pairwise neighborhood structure: each pixel only has edge connections with its neighbors. The widely used scheme is that each pixel only has edge connections with its four immediate neighboring pixels to the left, right, top and bottom [11]. Therefore, an MRF for image representation is an undirected graph with only a limited number of edges between its nodes. Generally, learning and inference of an MRF are nontrivial, due to the nonlinearity and nonconvexity of the potential functions [14]. A popular special case of MRF is the Gaussian Markov random field (GMRF) which is an MRF with a multivariate Gaussian distribution over node variables. The best-known advantage of a GMRF is its simplicity of learning and inference, because of the nice properties of a multivariate Gaussian distribution. According to Hammersley-Clifford’s theorem [15], the conditional dependence of the node variables in a GMRF is encoded in the precision matrix. As mentioned before, an MRF is sparse for image analysis problems, on account of the neighborhood structure in the pixel domain. Therefore, the multivariate Gaussian distribution associated with a GMRF has a sparse precision matrix. This property of a GMRF in image analysis is demonstrated in Section 1 of the Supplementary Material. 
Inspired by the GMRF interpretation, we place a shrinkage prior on each precision matrix to promote sparsity when estimating the GMM. The Laplacian shrinkage prior used in [16] is chosen, but other shrinkage priors [17] could also be used. Specifically, we impose a Laplacian shrinkage prior on the off-diagonal elements of each of the $K$ precision matrices,

$p(\Omega^{(k)}) = \prod_{i=1}^{n} \prod_{j<i} \frac{\sqrt{\tau^{(k)} \gamma^{(k)}_{ij}}}{2} \exp\!\big(-\sqrt{\tau^{(k)} \gamma^{(k)}_{ij}}\, |\omega^{(k)}_{ij}|\big), \quad \forall k = 1, \ldots, K,$   (4)

with the symmetry constraints $\omega^{(k)}_{ij} = \omega^{(k)}_{ji}$. In (4), $\tau^{(k)} > 0$ is a “global” scaling parameter for all the elements of $\{\omega^{(k)}_{ij} \mid i = 1, \ldots, n,\ j < i\}$ and is generally fixed to one [18], and $\gamma^{(k)}_{ij}$ is a “local” weight for the element $\omega^{(k)}_{ij}$. With the Laplacian prior (4), many off-diagonal elements of $\Omega^{(k)}$ are encouraged to be close to zero. However, in the inference procedure, the Laplacian shrinkage prior (4) is inconvenient due to the lack of analytic updating expressions. This issue is overcome by using an equivalent scale-mixture-of-normals representation [16] of (4), as shown below:

$\frac{\sqrt{\tau^{(k)} \gamma^{(k)}_{ij}}}{2} \exp\!\big(-\sqrt{\tau^{(k)} \gamma^{(k)}_{ij}}\, |\omega^{(k)}_{ij}|\big) = \int \mathcal{N}\big(\omega^{(k)}_{ij} \,\big|\, 0,\ \tau^{(k)-1} \alpha^{(k)-1}_{ij}\big)\, \mathrm{InvGa}\big(\alpha^{(k)}_{ij} \,\big|\, 1, \tfrac{\gamma^{(k)}_{ij}}{2}\big)\, d\alpha^{(k)}_{ij}$   (5)

where $\alpha^{(k)}_{ij}$ is an augmented variable drawn from an inverse gamma distribution. Further, one may place a gamma prior on $\gamma^{(k)}_{ij}$. Then, a draw of the precision matrix may be represented by

$\Omega^{(k)} \sim \prod_{i=1}^{n} \prod_{j<i} \mathcal{N}\big(\omega^{(k)}_{ij} \,\big|\, 0,\ \tau^{(k)-1} \alpha^{(k)-1}_{ij}\big), \quad \alpha^{(k)}_{ij} \sim \mathrm{InvGa}\big(\alpha^{(k)}_{ij} \,\big|\, 1, \tfrac{\gamma^{(k)}_{ij}}{2}\big), \quad \gamma^{(k)}_{ij} \sim \mathrm{Ga}(\gamma^{(k)}_{ij} \mid a_0, b_0)$   (6)

where $a_0, b_0$ are hyperparameters. Suppose $\{x_i\}_{i=1}^{N}$ are samples drawn from $\mathcal{N}(x|0, \Omega^{(k)-1})$ and $S$ denotes the empirical covariance matrix $\frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})'$, where $\bar{x}$ is the empirical mean of $\{x_i\}_{i=1}^{N}$. If the elements of $\Omega^{(k)}$ are drawn as in (6), the logarithm of the joint likelihood can be expressed as

$\log p(\{x_i\}_{i=1}^{N}, \Omega^{(k)}) \propto \frac{N}{2} \bigg( \log\det(\Omega^{(k)}) - \mathrm{tr}(S \Omega^{(k)}) - \sum_{i=1}^{n} \sum_{j<i} \frac{2}{N} \sqrt{\tau^{(k)} \gamma^{(k)}_{ij}}\, |\omega^{(k)}_{ij}| \bigg).$
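The scale-mixture identity (5) can be checked by Monte Carlo: drawing $\alpha$ from the inverse-gamma mixing distribution and then $\omega$ from the conditional normal should reproduce a Laplace variable with rate $\sqrt{\tau\gamma}$, whose mean absolute value is $1/\sqrt{\tau\gamma}$. A small numerical sketch (illustrative values of $\tau$ and $\gamma$, our own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
tau, gamma, n_draws = 1.0, 4.0, 200_000   # illustrative values

# alpha ~ InvGa(1, gamma/2)  <=>  1/alpha ~ Gamma(shape=1, scale=2/gamma)
inv_alpha = rng.gamma(shape=1.0, scale=2.0 / gamma, size=n_draws)
omega = rng.normal(0.0, np.sqrt(inv_alpha / tau))   # N(0, tau^{-1} alpha^{-1})

# Laplace with rate b = sqrt(tau * gamma) has E|omega| = 1/b
print(np.mean(np.abs(omega)), 1.0 / np.sqrt(tau * gamma))
```

The two printed values should agree to Monte Carlo accuracy, illustrating why the augmented $(\omega, \alpha)$ representation yields conjugate (analytically updatable) conditionals while marginally encoding the Laplacian prior.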
(7)

From the optimization perspective, the maximum a posteriori (MAP) estimation of $\Omega^{(k)}$ in (7) is known as the adaptive graphical lasso problem [18].

2.3 Group sparsity based on banding patterns

The Bayesian adaptive graphical lasso described above assumes the precision matrix is sparse, and the same Laplacian prior is imposed on all off-diagonal elements of the precision matrix without any discrimination. However, the aforementioned neighborhood structure of image pixels implies that the entries of the precision matrix corresponding to pairs of neighboring pixels tend to have significant values. This is consistent with the observations from the demonstration in Section 1 of the Supplementary Material: (i) the bands scattered along a few lines above or below the main diagonal are constituted by the entries with significant values in the precision matrix; (ii) the entries in the bands correspond to the pairwise neighborhood structure of the graph, since vectorization of an image patch is performed by stacking all columns of pixels in a patch on top of each other; (iii) the existence of multiple bands in some Gaussian components reveals that, besides the four immediate neighboring pixels, other indirect neighboring pixels may also lead to non-negligible conditional dependence, though the entries in the associated bands have relatively smaller values. Inspired by these banding patterns, we categorize the elements of the set $\{\omega^{(k)}_{ij}\}_{i=1,\,j<i}^{n}$ into two groups, $\{\omega^{(k)}_{ij} \mid (i,j) \in L_1\}$ and $\{\omega^{(k)}_{ij} \mid (i,j) \in L_2\}$, where $L_1$ denotes the set of indices corresponding to the elements in the bands and $L_2$ the set of indices for the elements not in the bands. For the elements in the group $\{\omega^{(k)}_{ij} \mid (i,j) \in L_2\}$, the Laplacian prior is used to encourage a sparse precision matrix. For the elements in the group $\{\omega^{(k)}_{ij} \mid (i,j) \in L_1\}$, sparsity is not desired, so a normal prior with gamma hyperparameters is used instead.
Accordingly, the expressions in (6) can be replaced by

$\Omega^{(k)} \sim \prod_{i=1}^{n} \prod_{j<i} \mathcal{N}\big(\omega^{(k)}_{ij} \,\big|\, 0,\ \tau^{(k)-1} \alpha^{(k)-1}_{ij}\big),$
$\alpha^{(k)}_{ij} \sim \mathrm{Ga}(\alpha^{(k)}_{ij} \mid c_0, d_0)$ if $(i,j) \in L_1$;
$\alpha^{(k)}_{ij} \sim \mathrm{InvGa}\big(\alpha^{(k)}_{ij} \,\big|\, 1, \tfrac{\gamma^{(k)}_{ij}}{2}\big),\ \gamma^{(k)}_{ij} \sim \mathrm{Ga}(\gamma^{(k)}_{ij} \mid a_0, b_0)$ if $(i,j) \in L_2$.   (8)

With the prior distribution of $\Omega^{(k)}$ in (6) replaced with that in (8), the joint log-likelihood in (7) changes to

$\log p(\{x_i\}_{i=1}^{N}, \Omega^{(k)}) \propto \frac{N}{2} \bigg( \log\det(\Omega^{(k)}) - \mathrm{tr}(S\Omega^{(k)}) - \sum_{(i,j)\in L_1} \frac{2}{N}\, \tau^{(k)} \alpha^{(k)}_{ij} \|\omega^{(k)}_{ij}\|^2 - \sum_{(i,j)\in L_2} \frac{2}{N} \sqrt{\tau^{(k)} \gamma^{(k)}_{ij}}\, |\omega^{(k)}_{ij}| \bigg).$   (9)

To the best of our knowledge, the MAP estimation of $\Omega^{(k)}$ in (9) has not been studied in the family of graphical lasso or its variants from the optimization perspective.

2.4 Hierarchical Bayesian model and inference

We consider the collective compressive sensing of signals $X = \{x_i \in \mathbb{R}^n\}_{i=1}^{N}$ drawn from an unknown GMM. The noisy linear measurements of $X$ are given by $Y = \{y_i \in \mathbb{R}^m : y_i = \Phi_i x_i + \epsilon_i\}_{i=1}^{N}$. We allow the sensing matrices to be signal-dependent for generality (i.e., $\Phi_i$ depends on the signal index $i$). The unification of signal reconstruction with a given GMM (Section 2.1) and GMRF learning with fully-observed training signals (Section 2.2) leads to the following Bayesian model:

$y_i \mid x_i \sim \mathcal{N}(y_i \mid \Phi_i x_i, \kappa^{-1} I), \quad x_i \sim \sum_{z=1}^{K} \pi^{(z)} \mathcal{N}(x_i \mid \mu^{(z)}, \Omega^{(z)-1}), \quad \kappa \sim \mathrm{Ga}(\kappa \mid e_0, f_0),$   (10)

$\Omega^{(k)} \sim \prod_{i=1}^{n} \prod_{j<i} \mathcal{N}\big(\omega^{(k)}_{ij} \,\big|\, 0, \tau^{(k)-1} \alpha^{(k)-1}_{ij}\big), \quad \alpha^{(k)}_{ij} \sim \mathrm{InvGa}\big(\alpha^{(k)}_{ij} \,\big|\, 1, \tfrac{\gamma^{(k)}_{ij}}{2}\big), \quad \gamma^{(k)}_{ij} \sim \mathrm{Ga}(\gamma^{(k)}_{ij} \mid a_0, b_0).$   (11)

The expressions in (11) may be replaced by (8) if group sparsity is imposed on the precision matrix. In addition to the precision matrices, we further place the following standard priors on the other parameters of the GMM to make the proposed model a full hierarchical Bayesian model,

$\mu^{(k)} \sim \mathcal{N}\big(\mu^{(k)} \,\big|\, m_0, (\beta_0 \Omega^{(k)})^{-1}\big), \quad \pi \sim \mathrm{Dirichlet}(\pi^{(1)}, \ldots, \pi^{(K)} \mid \alpha_0),$   (12)

where $m_0$, $\alpha_0$ and $\beta_0$ are hyperparameters. We next develop the inference procedure for the proposed hierarchical Bayesian model.
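As an illustration of the band group used in (8), the pairs $L_1$ for a small $h \times w$ patch can be generated from the 4-neighbour structure; under column-major vectorisation, vertical neighbours land on the off-diagonal at offset 1 and horizontal neighbours at offset $h$, which is exactly the banding pattern discussed in Sect. 2.3. The helper below is our own sketch, not code from the paper:

```python
def neighbor_groups(h, w):
    """Split the off-diagonal index pairs of an (h*w)-pixel patch into the
    band group L1 (4-connected neighbour pairs, column-major vectorisation)
    and its complement L2, as used by the group-sparse prior (8)."""
    n = h * w
    idx = lambda r, c: c * h + r          # stack columns on top of each other
    L1 = set()
    for c in range(w):
        for r in range(h):
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= rr < h and 0 <= cc < w:
                    L1.add((idx(r, c), idx(rr, cc)))
    L2 = {(i, j) for i in range(n) for j in range(n) if i != j} - L1
    return L1, L2

L1, L2 = neighbor_groups(4, 4)
print(len(L1), len(L2))  # 48 192
```

For a 4 × 4 patch there are 24 undirected 4-neighbour edges, hence 48 ordered pairs in L1 out of 240 ordered off-diagonal pairs; every pair in L1 sits at index offset 1 or h, i.e. on the bands.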
Let the symbols $Z, \mu, \Omega, \pi, \alpha, \gamma$ denote the sets $\{z_i\}, \{\mu^{(k)}\}, \{\Omega^{(k)}\}, \{\pi^{(k)}\}, \{\alpha^{(k)}\}, \{\gamma^{(k)}\}$, respectively. The marginalized likelihood function is written as $L(\Theta) = \ln \int p(Y, \Pi, \Theta)\, d\Pi$, where $\Pi \triangleq \{X, Z, \alpha, \gamma\}$ and $\Theta \triangleq \{\mu, \Omega, \pi, \kappa\}$ denote the set of latent variables and the set of model parameters, respectively. An expectation-maximization (EM) algorithm [19] could be used to find the optimal $\Theta$ by alternating the following two steps:

• E-step: Find $p(\Pi|Y, \Theta^*)$ with $\Theta^*$ computed at the M-step, and obtain the expected complete log-likelihood $E_\Pi(\ln p(Y, \Pi, \Theta^*))$.
• M-step: Find an improved estimate of $\Theta^*$ by maximizing the expected complete log-likelihood obtained at the E-step.

However, it is intractable to compute the exact posterior $p(\Pi|Y, \Theta)$ at the E-step. We develop a variational inference approach to overcome this intractability. Based on mean field theory [20], we approximate the posterior distribution $p(\Pi|Y, \Theta)$ by a proposal distribution $q(\Pi)$ that factorizes over the variables as follows:

$q(\Pi) = q(X, Z, \alpha, \gamma) = q(X, Z)\, q(\alpha)\, q(\gamma).$   (13)

Then, we find an optimal distribution $q(\Pi)$ that minimizes the Kullback–Leibler (KL) divergence $\mathrm{KL}(q(\Pi)\,\|\,p(\Pi|Y, \Theta)) = \int q(\Pi) \ln \frac{q(\Pi)}{p(\Pi|Y,\Theta)}\, d\Pi$, or equivalently, maximizes the evidence lower bound (ELBO) of the log-marginal data likelihood [21], denoted by $F(q(\Pi), \Theta)$:

$\ln p(Y, \Theta) = \ln \int q(\Pi)\, \frac{p(Y, \Pi, \Theta)}{q(\Pi)}\, d\Pi \;\geq\; \int q(\Pi) \ln \frac{p(Y, \Pi, \Theta)}{q(\Pi)}\, d\Pi \;\triangleq\; F(q(\Pi), \Theta),$   (14)

where the inequality holds by Jensen's inequality. With the above approximation, the entire algorithm becomes a variational EM algorithm that iterates between the following VE-step and VM-step until convergence:

• VE-step: Find the optimal posterior distribution $q^*(\Pi)$ that maximizes $F(q(\Pi), \Theta^*)$ with $\Theta^*$ computed at the VM-step.
• VM-step: Find the optimal $\Theta^*$ that maximizes $F(q^*(\Pi), \Theta)$ with $q^*(\Pi)$ computed at the VE-step.

The full update equations of the variational EM algorithm are given in Section 2 of the Supplementary Material.
3 Theoretical Analysis

The proposed hierarchical Bayesian model unifies the task of signal recovery and the task of estimating the mixture of GMRFs, with the common goal of maximizing the ELBO of the log-marginal likelihood of the measurements. This section provides a theoretical analysis to further reveal the mutual influence between these two tasks (Theorems 1 and 2), and establishes a theoretical performance bound (Theorem 3) relating the reconstruction error to the number of signals being measured, the sparsity level of the precision matrices, and the “incompleteness” of measurements. The proofs of these theorems are presented in Sections 3–5 of the Supplementary Material. For convenience, we consider the single-Gaussian case, so the superscript $(k)$ is omitted in the sequel. We begin with the definitions and assumptions used in the theorems.

Definition 3.1 Let $\tilde{x}_i$ and $\hat{x}_i$ be the signals estimated from measurement $y_i$, using the true precision matrix $\Omega_0$ and the estimated precision matrix $\Omega^*$ respectively, according to (3):

$\hat{x}_i = \mu + \big(\Omega_0 + \Phi_i' R^{-1} \Phi_i\big)^{-1} \Phi_i' R^{-1} (y_i - \Phi_i \mu) = \mu + C_i \Phi_i' R^{-1} (y_i - \Phi_i \mu),$
$\tilde{x}_i = \mu + \big(\Omega_0 + \Delta + \Phi_i' R^{-1} \Phi_i\big)^{-1} \Phi_i' R^{-1} (y_i - \Phi_i \mu) = \mu + \big(C_i^{-1} + \Delta\big)^{-1} \Phi_i' R^{-1} (y_i - \Phi_i \mu).$

Assuming $y_i \in \mathbb{R}^r$ is noise-free and the (dominant) rank of $\Omega_0$ is less than $r$, one obtains $\hat{x}_i$ as the true signal $x_i$ [7], i.e., $\hat{x}_i = x_i$. Then the reconstruction error of $\tilde{x}_i$ is $\|\delta_i\|_2$, where $\delta_i = \tilde{x}_i - \hat{x}_i$.

Definition 3.2 The estimation error of $\Omega^*$ is defined as $\|\Delta\|_F$, where $\Delta = \Omega^* - \Omega_0$. At each VM-step of the variational EM algorithm developed in Section 2.4, $\Omega^*$ is updated based on the empirical covariance matrix $\Sigma_{em}$ computed from $\{\tilde{x}_i\}$, i.e.,

$\Sigma_{em} = \frac{1}{N}\sum_{i=1}^{N} \tilde{x}_i \tilde{x}_i' + \frac{1}{N}\sum_{i=1}^{N} C_i = \underbrace{\frac{1}{N}\sum_{i=1}^{N} \hat{x}_i \hat{x}_i'}_{\Sigma^0_{em}} + \underbrace{\frac{1}{N}\sum_{i=1}^{N} \big(2\hat{x}_i \delta_i' + \delta_i \delta_i' + C_i\big)}_{\Sigma_{de}},$   (15)

where $\{\hat{x}_i\}$ and $\{\tilde{x}_i\}$ are both considered to have zero mean, as one can always center the signals with respect to their means [2].
Definition 3.3 The deviation of the empirical matrix from $\Sigma^0_{em}$ is defined as $\Sigma_{de} = \Sigma_{em} - \Sigma^0_{em}$ according to (15), and we use $\bar{\sigma}_{de} \triangleq \|\Sigma_{de}\|_{\max}$ to measure this deviation. Considering that the developed variational EM algorithm can converge to a local minimum, we assume $\bar{\sigma}_{de} \leq c\sqrt{\log n / N}$ for a constant $c > 0$.¹

3.1 Theoretical results

Theorem 1 Assuming $\|C_i\|_F \|\Delta\|_F < 1$, the reconstruction error of the $i$-th signal is upper bounded as
$\|\delta_i\|_2 \leq \dfrac{\|C_i\|_F \|\Delta\|_F}{1 - \|C_i\|_F \|\Delta\|_F}\, \|\hat{x}_i\|_2.$

Theorem 1 establishes the error bound of signal recovery in terms of $\Delta$. In this theorem, $\Omega^*$ can be obtained by any GMRF estimation method, including [1, 2] and the proposed method. Let $\underline{\eta} = \min_{(i,j)\in S^c} \frac{\sqrt{\tau \gamma_{ij}}}{N}$, $\bar{\eta} = \max_{(i,j)\in S} \frac{\sqrt{\tau \gamma_{ij}}}{N}$, $S = \{(i,j) : \omega_{ij} \neq 0,\ i \neq j\}$, $S^c = \{(i,j) : \omega_{ij} = 0,\ i \neq j\}$, and let the cardinality of $S$ be $s$. The following theorem establishes an upper bound on $\|\Delta\|_F$ accounting for $\Sigma_{de}$.

Theorem 2 Given the empirical covariance matrix $\Sigma_{em}$, if $\underline{\eta}, \bar{\eta} \asymp \sqrt{\log n / N} + \bar{\sigma}_{de}$, then
$\|\Delta\|_F = O_p\big\{\sqrt{(n+s)\log n / N} + \sqrt{n+s}\; \bar{\sigma}_{de}\big\}.$

Note that the standard graphical lasso and its variants [18, 23] assume the true signal samples $\{x_i\}$ are fully observed when estimating $\Omega^*$, so they correspond to the simple case $\bar{\sigma}_{de} = 0$. Loh and Wainwright [22, Corollary 5] also provide an upper bound on $\|\Delta\|_F$ that takes $\Sigma_{de}$ into account. However, they assume $\Sigma^0_{em}$ is attainable and the proof of their corollary relies on their proposed GMRF estimation algorithm, so the theoretical result in [22] cannot be used here. Let $\epsilon_0 = \frac{1}{N}\sum_{i=1}^{N} \|\hat{x}_i - \mu\|_2$, $\upsilon = \frac{1}{N}\sum_{i=1}^{N} \mathrm{tr}(C_i)$, $\delta_{\max} = \sup_i \|\delta_i\|_2$, $\hat{x}_{\max} = \sup_i \|\hat{x}_i\|_2$, and $\xi = \max_i \|C_i\|_F$. A combination of Theorems 1 and 2 leads to the following theorem, which relates the error bound of signal reconstruction to the number of partially-observed signals (observed through incomplete linear measurements), the sparsity level of the precision matrices, and the uncertainty of signal reconstruction (i.e., $\upsilon$ and $\xi$), which represents the “incompleteness” of the measurements.
Theorem 3 Given the empirical covariance matrix $\Sigma_{em}$, if $\underline{\eta}, \bar{\eta} \asymp \sqrt{\log n / N} + \bar{\sigma}_{de}$ and $\xi \|\Delta\|_F < \zeta$, where $\zeta$ is a constant and $(1-\zeta)/\sqrt{n+s} > M \epsilon_0 (\delta_{\max} + 2\hat{x}_{\max})\, \xi$, with $M$ an appropriate constant making $\|\Delta\|_F \leq M\sqrt{(n+s)\log n / N} + M\sqrt{n+s}\; \bar{\sigma}_{de}$ hold with high probability, then we obtain

$\frac{1}{N}\sum_{i=1}^{N} \|\tilde{x}_i - \hat{x}_i\|_2 \;\leq\; \frac{\sqrt{(\log n)/N} + \upsilon}{(1-\zeta)/\sqrt{n+s} - M\epsilon_0(\delta_{\max} + 2\hat{x}_{\max})\xi}\; M \epsilon_0 \xi.$

From Theorem 3, we find that when the number of partially-observed signals $N$ tends to infinity and the uncertainty of signal reconstruction $\mathrm{tr}(C_i)$ tends to zero for all $i$, the average reconstruction error $\frac{1}{N}\sum_{i=1}^{N} \|\tilde{x}_i - \hat{x}_i\|_2$ is close to zero with high probability.

4 Experiments

The performance of the proposed methods is evaluated on the problems of compressive sensing (CS) of imagery and high-speed video.² For convenience, the proposed method is termed Sparse-GMM when using the non-group sparsity described in Section 2.2, and Sparse-GMM(G) when using the group sparsity described in Section 2.3. For Sparse-GMM(G), we construct the two groups $L_1$ and $L_2$ as follows: $L_1 = \{(i,j) : \text{pixel } i \text{ is one of the four immediate neighbors, in the spatial domain, of pixel } j,\ i \neq j\}$ and $L_2 = \{(i,j) : i, j = 1, 2, \cdots, n,\ i \neq j\} \setminus L_1$. The proposed methods are compared with state-of-the-art methods, including: a GMM pre-trained from training patches (GMM-TP) [7, 8], a piecewise linear estimator (PLE) [2], generalized alternating projection (GAP) [24], two-step iterative shrinkage/thresholding (TwIST) [25], and KSVD-OMP [26]. For the proposed methods, the hyperparameters of the scale mixture of Gaussians are set as $\sqrt{a_0/b_0/N} \approx 300$, $c_0 = d_0 = 10^{-6}$, the hyperparameter $\alpha_0$ of the Dirichlet prior is set to a vector with all elements equal to one, the hyperparameters of the mean of each Gaussian component are set as $\beta_0 = 1$, and $m_0$ is set to the mean of the initialization of $\{\hat{x}_i\}_{i=1}^{N}$.

¹ A similar assumption is made in expression (3.13) of [22].
² The complete results can be found at the website: https://sites.google.com/site/nipssgmm/.
We fixed $\kappa = 10^{-6}$ for the proposed methods, GMM-TP and PLE. The number of dictionary elements in KSVD is set to the best value in {64, 128, 256, 512}. TwIST adopts the total-variation (TV) norm, and the TwIST results reported here represent the best among different settings of the regularization parameter in the range $[10^{-4}, 1]$. In GAP, the spatial transform is chosen between DCT and wavelets and the one with the best result is reported, while the temporal transform for video is fixed to DCT.

4.1 Simulated measurements

Compressive sensing of still images. Following the single pixel camera [27], an image $x_i$ is projected onto the rows of a random sensing matrix $\Phi_i \in \mathbb{R}^{m \times n}$ to obtain the compressive measurements $y_i$, for $i = 1, \ldots, N$. Each sensing matrix $\Phi_i$ has elements drawn from a uniform distribution on $[0, 1]$. The USPS handwritten digits dataset³ and the face dataset [28] are used in this experiment. From each dataset, we randomly select 300 images, and each image is resized to 12 × 12. Eight settings of CS ratios are adopted, with $m/n \in \{0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40\}$. Since the signal $x_i$ in the single pixel camera represents an entire image, which generally has unique statistics, it is infeasible to find suitable training data in practice. Therefore, GMM-TP and KSVD-OMP are not included in this experiment.⁴ For PLE, Sparse-GMM and Sparse-GMM(G), the minimum-norm estimates from the measurements, $\hat{x}_i = \arg\min_x \{\|x\|_2^2 : \Phi_i x = y_i\} = \Phi_i'(\Phi_i\Phi_i')^{-1} y_i$, $i = 1, \ldots, N$, are used to initialize the GMM. The number of GMM components $K$ in PLE, Sparse-GMM, and Sparse-GMM(G) is tuned over the range 2–10 based on the Bayesian information criterion (BIC).
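The minimum-norm initialization $\hat{x}_i = \Phi_i'(\Phi_i\Phi_i')^{-1} y_i$ is a one-liner in NumPy; the sketch below (with arbitrary toy dimensions, not the paper's data) checks that the estimate is consistent with the measurements and coincides with the pseudoinverse solution:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 29, 144                       # toy sizes, e.g. m/n ~ 0.2 for a 12x12 image
Phi = rng.uniform(0.0, 1.0, (m, n))  # sensing matrix with uniform [0, 1] entries
x_true = rng.standard_normal(n)
y = Phi @ x_true

# Minimum-norm estimate: x_hat = Phi' (Phi Phi')^{-1} y
x_hat = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

assert np.allclose(Phi @ x_hat, y)                      # consistent with the measurements
assert np.allclose(x_hat, np.linalg.pinv(Phi) @ y)      # equals the pseudoinverse solution
assert np.linalg.norm(x_hat) <= np.linalg.norm(x_true)  # least norm among all consistent x
```

The last assertion holds by construction: $\hat{x}$ has minimum norm among all $x$ with $\Phi x = y$, and the true signal is one such $x$.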
[Figure 1 plots per-frame PSNR curves; the legend reports average PSNR on the NBA video: GAP (23.72), TwIST (24.81), GMM-TP (24.47), KSVD-OMP (22.37), PLE (25.35), Sparse-GMM (27.3), Sparse-GMM(G) (28.05).]

Figure 1: A comparison of reconstruction performance, in terms of PSNR, among different methods for CS of imagery on the USPS handwritten digits (left) and face (middle) datasets, and for CS of video on the NBA game dataset (right), with the average PSNR over frames shown in brackets.

Compressive sensing of high-speed video. Following the Coded Aperture Compressive Temporal Imaging (CACTI) system [6], each frame of the video to be reconstructed is encoded with a shifted binary mask, designed by randomly drawing values from {0, 1} at every pixel location, with a probability of 0.5 of drawing 1. Each signal $x_i$ represents the vectorization of $T$ consecutive spatial frames, obtained by first vectorizing each frame into a column and then stacking the resulting $T$ columns on top of each other. The measurement is formed as $y_i = \Phi_i x_i$, where $\Phi_i = [\Phi_{i,1}, \ldots, \Phi_{i,T}]$ and $\Phi_{i,t}$ is a diagonal matrix whose diagonal is the mask applied to the $t$-th frame. A video containing NBA game scenes is used in the experiment. It has 32 frames, each of size 256 × 256, and $T$ is set to 8. For GMM-TP, KSVD-OMP, PLE, Sparse-GMM and Sparse-GMM(G), we partition each 256 × 256 measurement frame into a set of 64 × 64 blocks; each block is treated as if it were a small frame and is processed independently of the other blocks.⁵ The patch size is 4 × 4 × $T$.
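The CACTI measurement model $y_i = \Phi_i x_i$ with $\Phi_i = [\Phi_{i,1}, \ldots, \Phi_{i,T}]$ amounts to summing masked frames. A toy sketch (small spatial size and $T$, chosen only for illustration) verifies the dense-matrix form against the per-frame mask form:

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 8     # toy spatial size (256x256 in the paper)
T = 4         # frames coded per snapshot (T = 8 in the paper)
frames = rng.standard_normal((T, H * W))              # row t: vectorized frame t
masks = rng.integers(0, 2, (T, H * W)).astype(float)  # binary masks, P(1) = 0.5

# Coded snapshot: y = sum_t diag(mask_t) frame_t
y = (masks * frames).sum(axis=0)

# Equivalent dense form: Phi = [diag(m_1), ..., diag(m_T)], x = stacked frames
Phi = np.hstack([np.diag(m) for m in masks])
x = frames.reshape(-1)
assert Phi.shape == (H * W, T * H * W)
assert np.allclose(Phi @ x, y)
```

The dense $\Phi_i$ is never materialized in practice (it is huge and sparse); the mask form is the natural implementation.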
Since each block is only 64 × 64, a small number of GMM components is sufficient to capture its statistics, and we find the results are robust to $K$ as long as $2 \le K \le 5$ for PLE, Sparse-GMM and Sparse-GMM(G). Following [8, 26], we use the patches of a randomly-selected video containing traffic scenes⁶, which are irrelevant to the NBA game, as training data to learn a GMM with 20 components for GMM-TP, and we use it to initialize PLE, Sparse-GMM, and Sparse-GMM(G). The same training data are used to learn the dictionaries for KSVD-OMP.

³It is downloaded from http://cs.nyu.edu/~roweis/data.html.
⁴The results of other settings can be found at https://sites.google.com/site/nipssgmm/.
⁵This subimage processing strategy has also been used in [2].

[Figure 2 shows precision-matrix heatmaps (magnitudes scaled to [0, 1]) for PLE, Sparse-GMM, and Sparse-GMM(G).]

Figure 2: Plots of an example precision matrix (in magnitude) learned by different GMM methods on the Face dataset with m/n = 0.4. The figure is best viewed electronically. The magnitudes in each precision matrix are scaled to the range [0, 1].

Results. From the results shown in Figure 1, we observe that the proposed methods, especially Sparse-GMM(G), outperform the other methods by significant margins in all considered settings. The better performance of Sparse-GMM(G) over Sparse-GMM validates the advantage of modeling group sparsity. Figure 2 shows an example precision matrix of one of the $K$ Gaussian components learned by PLE, Sparse-GMM, and Sparse-GMM(G) on the face dataset. From this figure, we can see that Sparse-GMM and Sparse-GMM(G) exhibit much clearer group sparsity than PLE, demonstrating the benefits of using group sparsity constructed from the banding patterns.
4.2 Real measurements

[Figure 3 shows the raw coded measurement and the 14 frames (#1–#14) reconstructed by GMM-TP, TwIST, GAP, PLE, KSVD-OMP, Sparse-GMM, and Sparse-GMM(G).]

Figure 3: Reconstructed 256 × 256 × T images by different methods from the "raw measurement" acquired from CACTI with T = 14. The regions in the red boxes are enlarged and shown at the bottom right for better comparison.

We demonstrate the efficacy of the proposed methods on CS of video, with the measurements acquired by the actual hardware of the CACTI camera [6]. A letter is placed on the blades of a chopper wheel that rotates at an angular velocity of 15 blades per second. The training data are obtained from videos of a chopper wheel rotating at several orientations, positions and velocities. These training videos are captured by a regular camcorder at frame rates different from the high-speed frame rate achieved by CACTI reconstruction. Other settings of the methods are the same as in the experiments on simulated data. The reconstruction results are shown in Figure 3: Sparse-GMM(G) generally yields sharper reconstructed frames with fewer ghosting artifacts than the other methods.

5 Conclusions

The success of compressive sensing of signals from a GMM depends strongly on the quality of the estimator of the unknown GMM. In this paper, we have developed a hierarchical Bayesian method to simultaneously estimate the GMM and recover the signals, based on only incomplete linear measurements and a Bayesian shrinkage prior for promoting sparsity of the Gaussian precision matrices.
In addition, we have obtained theoretical results under the challenging assumption that the underlying GMM is unknown and has to be estimated from measurements that contain only incomplete information about the signals. Our results substantially extend the previous theoretical results in [7], which assume the GMM is exactly known. The experimental results with both simulated and hardware-acquired measurements show that the proposed method significantly outperforms state-of-the-art methods.

Acknowledgement The research reported here was funded in part by ARO, DARPA, DOE, NGA and ONR.

⁶The results for the training videos containing general scenes can be found at the aforementioned website.

References

[1] M. Chen, J. Silva, J. Paisley, C. Wang, D. Dunson, and L. Carin, "Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds," IEEE Trans. on Signal Processing, 2010.
[2] G. Yu, G. Sapiro, and S. Mallat, "Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity," IEEE Trans. on Image Processing, 2012.
[3] G. Yu and G. Sapiro, "Statistical compressed sensing of Gaussian mixture models," IEEE Trans. on Signal Processing, 2011.
[4] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. on Inform. Theory, 2006.
[5] D. L. Donoho, "Compressed sensing," IEEE Trans. on Inform. Theory, 2006.
[6] P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, "Coded aperture compressive temporal imaging," Optics Express, 2013.
[7] F. Renna, R. Calderbank, L. Carin, and M. Rodrigues, "Reconstruction of signals drawn from a Gaussian mixture via noisy compressive measurements," IEEE Trans. on Signal Processing, 2014.
[8] J. Yang, X. Yuan, X. Liao, P. Llull, G. Sapiro, D. J. Brady, and L.
Carin, "Video compressive sensing using Gaussian mixture models," IEEE Trans. on Image Processing, vol. 23, no. 11, pp. 4863–4878, 2014.
[9] ——, "Gaussian mixture model for video compressive sensing," ICIP, pp. 19–23, 2013.
[10] D. Zoran and Y. Weiss, "From learning models of natural image patches to whole image restoration," in ICCV, 2011.
[11] S. Roth and M. J. Black, "Fields of experts," Int. J. Comput. Vision, 2009.
[12] F. Heitz and P. Bouthemy, "Multimodal estimation of discontinuous optical flow using Markov random fields," IEEE Trans. Pattern Anal. Mach. Intell., 1993.
[13] V. Cevher, P. Indyk, L. Carin, and R. Baraniuk, "Sparse signal recovery and acquisition with graphical models," IEEE Signal Processing Magazine, 2010.
[14] M. Tappen, C. Liu, E. Adelson, and W. Freeman, "Learning Gaussian conditional random fields for low-level vision," in CVPR, 2007.
[15] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, 2005.
[16] T. Park and G. Casella, "The Bayesian lasso," Journal of the American Statistical Association, 2008.
[17] N. G. Polson and J. G. Scott, "Shrink globally, act locally: Sparse Bayesian regularization and prediction," Bayesian Statistics, 2010.
[18] J. Fan, Y. Feng, and Y. Wu, "Network exploration via the adaptive lasso and SCAD penalties," Ann. Appl. Stat., 2009.
[19] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society: Series B, 1977.
[20] G. Parisi, Statistical Field Theory. Addison-Wesley, 1998.
[21] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," Machine Learning, 1999.
[22] P.-L. Loh and M. J. Wainwright, "High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity," Ann. Statist., 2012.
[23] J. Friedman, T. Hastie, and R.
Tibshirani, "Sparse inverse covariance estimation with the graphical lasso," Biostatistics, 2008.
[24] X. Liao, H. Li, and L. Carin, "Generalized alternating projection for weighted-ℓ2,1 minimization with applications to model-based compressive sensing," SIAM Journal on Imaging Sciences, 2014.
[25] J. Bioucas-Dias and M. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. on Image Processing, 2007.
[26] Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, "Video from a single coded exposure photograph using a learned over-complete dictionary," ICCV, 2011.
[27] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Processing Magazine, 2008.
[28] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, 2000.
Learning Deep Features for Scene Recognition using Places Database Bolei Zhou1, Agata Lapedriza1,3, Jianxiong Xiao2, Antonio Torralba1, and Aude Oliva1 1Massachusetts Institute of Technology 2Princeton University 3Universitat Oberta de Catalunya Abstract Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers’ responses allows us to show differences in the internal representations of object-centric and scene-centric networks. 1 Introduction Understanding the world in a single glance is one of the most accomplished feats of the human brain: it takes only a few tens of milliseconds to recognize the category of an object or environment, emphasizing an important role of feedforward processing in visual recognition. One of the mechanisms subtending efficient human visual recognition is our capacity to learn and remember a diverse set of places and exemplars [11]; by sampling the world several times per second, our neural architecture constantly registers new inputs even for a very short time, reaching an exposure to millions of natural images within just a year. 
How much would an artificial system have to learn before reaching the scene recognition abilities of a human being? Besides the exposure to a dense and rich variety of natural images, one important property of the primate brain is its hierarchical organization in layers of increasing processing complexity, an architecture that has inspired Convolutional Neural Networks or CNNs [2, 14]. These architectures, together with recent large databases (e.g., ImageNet [3]), have obtained astonishing performance on object classification tasks [12, 5, 20]. However, the baseline performance reached by these networks on scene classification tasks is within the range of performance based on hand-designed features and sophisticated classifiers [24, 21, 4]. Here, we show that one of the reasons for this discrepancy is that the higher-level features learned by object-centric versus scene-centric CNNs are different: iconic images of objects do not contain the richness and diversity of visual information that pictures of scenes and environments provide for learning to recognize them. Here we introduce Places, a scene-centric image dataset 60 times larger than the SUN database [24]. With this database and a standard CNN architecture, we establish new baselines of accuracy on various scene datasets (Scene15 [17, 13], MIT Indoor67 [19], the SUN database [24], and the SUN Attribute database [18]), significantly outperforming the results obtained by the deep features from the same network architecture trained on ImageNet.¹ The paper is organized as follows: in Section 2 we introduce the Places database and describe the collection procedure. In Section 3 we compare Places with two other large image datasets: SUN [24] and ImageNet [3]. We perform experiments on Amazon Mechanical Turk (AMT) to compare these three datasets in terms of density and diversity. In Section 4 we show new scene classification performance when training deep features from millions of labeled scene images.
Finally, we visualize the units' responses at different layers of the CNNs, demonstrating that an object-centric network (using ImageNet [12]) and a scene-centric network (using Places) learn different features.

2 Places Database

The first benchmark for scene classification was the Scene15 database [13], based on [17]. This dataset contains only 15 scene categories with a few hundred images per class, and current classifiers are saturating on it, nearing human performance at 95%. The MIT Indoor67 database [19] has 67 categories of indoor places. The SUN database [24] was introduced to provide a wide coverage of scene categories. It is composed of 397 categories containing more than 100 images per category. Despite those efforts, all these scene-centric datasets are small in comparison with current object datasets such as ImageNet (note that ImageNet also contains scene categories, but in a very small proportion, as shown in Fig. 2). Complementary to ImageNet (mostly object-centric), we present here a scene-centric database, which we term the Places database. As of now, Places contains more than 7 million images from 476 place categories, making it the largest image database of scenes and places so far, and the first scene-centric database competitive enough to train algorithms that require huge amounts of data, such as CNNs.

2.1 Building the Places Database

Since the SUN database [24] has a rich scene taxonomy, the Places database inherits its list of scene categories. To generate image-URL queries, 696 common adjectives (messy, spare, sunny, desolate, etc.), manually selected from a list of popular adjectives in English, are combined with each scene category name and sent to three image search engines (Google Images, Bing Images, and Flickr). Adding adjectives to the queries allows us to download a larger number of images than what is available in ImageNet and to increase the diversity of visual appearances.
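The adjective–category query generation described above is a simple Cartesian product; a sketch with a handful of hypothetical entries standing in for the 696 adjectives and 476 categories:

```python
from itertools import product

# Hypothetical samples standing in for the full adjective and category lists.
adjectives = ["messy", "spare", "sunny", "desolate"]
categories = ["kitchen", "bedroom", "coast", "forest path"]

# One search-engine query per adjective-category combination.
queries = [f"{adj} {cat}" for adj, cat in product(adjectives, categories)]
assert len(queries) == 16
assert queries[0] == "messy kitchen"
```

At full scale this yields on the order of 696 × 476 ≈ 330,000 distinct queries, which explains how the crawl reaches tens of millions of candidate URLs.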
We then remove duplicated URLs and download the raw images with unique URLs. To date, more than 40 million images have been downloaded. Only color images of 200×200 pixels or larger are kept. PCA-based duplicate removal is conducted within each scene category in the Places database and against the same scene category in the SUN database, which ensures that Places and SUN do not contain the same images, allowing us to combine the two datasets. The images that survive this initial selection are sent to Amazon Mechanical Turk for two rounds of individual image annotation. For a given category name, its definition, as in [24], is shown at the top of a screen, with a question like "Is this a living room scene?" A single image at a time is shown centered in a large window, and workers are asked to press a Yes or No key. For the first round of labeling, the default answer is set to No, requiring the worker to actively pick out the positive images. The positive images resulting from the first round are sent for a second round of annotation, in which the default answer is set to Yes (to pick out the remaining negative images). In each HIT (one assignment per worker), 750 downloaded images are included for annotation, and an additional 30 positive samples and 30 negative samples with ground truth from the SUN database are randomly injected as controls. Valid HITs kept for further analysis require an accuracy of 90% or higher on these control images. After the two rounds of annotation, and as this paper is published, 7,076,580 images from 476 scene categories are included in the Places database. Fig. 1 shows image samples obtained with some of the adjectives used in the queries.
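The control-image check used to validate HITs can be sketched as follows (function and variable names are hypothetical; only the 30+30 control injection and the 90% threshold come from the text):

```python
def hit_is_valid(answers, controls, threshold=0.9):
    """Accept a HIT only if the worker's accuracy on the injected ground-truth
    control images meets the threshold (90% in the paper).

    answers:  dict image_id -> worker's yes/no answer
    controls: dict image_id -> ground-truth yes/no for the control images
    """
    correct = sum(answers[img] == truth for img, truth in controls.items())
    return correct / len(controls) >= threshold

# 30 positive + 30 negative controls, as injected into each HIT.
controls = {f"ctrl{i}": (i % 2 == 0) for i in range(60)}
careful_worker = dict(controls)                # answers every control correctly
lazy_worker = {img: True for img in controls}  # always presses "Yes"
assert hit_is_valid(careful_worker, controls)
assert not hit_is_valid(lazy_worker, controls)  # 50% on controls -> rejected
```

Balancing positive and negative controls is what catches always-Yes or always-No workers: either strategy caps their control accuracy at 50%, well below the 90% bar.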
¹The database and pre-trained networks are available at http://places.csail.mit.edu

[Figure 1 shows sample images grouped by queried adjective: stylish/messy/wooded kitchen, sunny/rocky/misty coast, teenage/romantic/spare bedroom, wintering/greener/darkest forest path.]

Figure 1: Image samples from the scene categories grouped by their queried adjectives.

[Figure 2 is a log-scale bar chart (100 to 100,000 images) of the number of images per category in Places, ImageNet, and SUN over the 205 shared categories.]

Figure 2: Comparison of the number of images per scene category in three databases.

We made two subsets of Places that will be used across the paper as benchmarks. The first one is Places 205, comprising the 205 categories with more than 5,000 images. Fig. 2 compares the number of images in Places 205 with ImageNet and SUN. Note that ImageNet has only 128 of the 205 categories, while SUN contains all of them (we call this set SUN 205; it has at least 50 images per category). The second subset of Places used in this paper is Places 88. It contains the 88 categories in common with ImageNet for which ImageNet has at least 1,000 images. We call the corresponding subsets SUN 88 and ImageNet 88.

3 Comparing Scene-centric Databases

Despite the importance of benchmarks and training datasets in computer vision, comparing datasets is still an open problem. Even datasets covering the same visual classes have notable differences, providing different generalization performance when used to train a classifier [23]. Beyond the number of images and categories, there are aspects that are important but difficult to quantify, like the variability in camera poses, in decoration styles, or in the objects that appear in the scene.
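The Places 205 and Places 88 subsets defined above are simple threshold filters over per-category image counts; a sketch with hypothetical counts standing in for the real statistics:

```python
# Hypothetical per-category counts standing in for the real Places/ImageNet statistics.
places_counts = {"bedroom": 12000, "coast": 7400, "pulpit": 3100, "kitchen": 15000}
imagenet_counts = {"bedroom": 1500, "coast": 800, "kitchen": 2400}

# Places 205: categories with more than 5,000 Places images.
places205 = {c for c, n in places_counts.items() if n > 5000}
# Places 88: categories shared with ImageNet that have at least 1,000 ImageNet images.
places88 = {c for c in places205 if imagenet_counts.get(c, 0) >= 1000}

assert places205 == {"bedroom", "coast", "kitchen"}
assert places88 == {"bedroom", "kitchen"}
```

The same pattern, applied to the full category statistics, yields the 205- and 88-category benchmark subsets used throughout the paper.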
Although the quality of a database will be task dependent, it is reasonable to assume that a good database should be dense (with a high degree of data concentration) and diverse (it should include high variability of appearances and viewpoints). Both quantities, density and diversity, are hard to estimate in image sets, as they assume some notion of similarity between images which, in general, is not well defined. Two images of scenes can be considered similar if they contain similar objects, the objects are in similar spatial configurations and poses, and they have similar decoration styles. However, this notion is loose and subjective, so it is hard to answer the question "are these two images similar?" For this reason, we define relative measures for comparing datasets in terms of density and diversity that only require ranking similarities. In this section we compare the densities and diversities of SUN, ImageNet and Places using these relative measures.

3.1 Relative Density and Diversity

Density is a measure of data concentration. We assume that, in an image set, high density is equivalent to images having, in general, similar neighbors. Given two databases A and B, relative density aims to measure which of the two sets has the most similar nearest neighbors. Let $a_1$ be a random image from set A and $b_1$ one from set B, and take their respective nearest neighbors in each set, $a_2$ from A and $b_2$ from B. If A is denser than B, then it is more likely that $a_1$ and $a_2$ are closer to each other than $b_1$ and $b_2$ are. From this idea we define the relative density as $\mathrm{Den}_B(A) = p(d(a_1, a_2) < d(b_1, b_2))$, where $d(a_1, a_2)$ is a distance measure between two images (small distance implies high similarity). With this definition of relative density, A is denser than B if, and only if, $\mathrm{Den}_B(A) > \mathrm{Den}_A(B)$.
This definition can be extended to an arbitrary number of datasets $A_1, \ldots, A_N$:
$$\mathrm{Den}_{A_2,\ldots,A_N}(A_1) = p\big(d(a_{11}, a_{12}) < \min_{i=2:N} d(a_{i1}, a_{i2})\big) \quad (1)$$
where $a_{i1} \in A_i$ are randomly selected and $a_{i2} \in A_i$ are near neighbors of their respective $a_{i1}$.

The quality of a dataset cannot be measured by its density alone. Imagine, for instance, a dataset composed of 100,000 images all taken within the same bedroom. This dataset would have a very high density but a very low diversity, as all the images would look very similar. An ideal dataset, expected to generalize well, should have high diversity as well. There are several measures of diversity, most of them frequently used in biology to characterize the richness of an ecosystem (see [9] for a review). In this section, we use a measure inspired by the Simpson index of diversity [22]. The Simpson index measures the probability that two random individuals from an ecosystem belong to the same species. It is a measure of how well distributed the individuals are across the different species in an ecosystem, and it is related to the entropy of the distribution. Extending this measure to evaluate the diversity of images within a category is non-trivial if there are no annotations of sub-categories. For this reason, we propose to measure the relative diversity of image datasets A and B based on the following idea: if set A is more diverse than set B, then two random images from set B are more likely to be visually similar than two random samples from A. The diversity of A with respect to B can then be defined as $\mathrm{Div}_B(A) = 1 - p(d(a_1, a_2) < d(b_1, b_2))$, where $a_1, a_2 \in A$ and $b_1, b_2 \in B$ are randomly selected. With this definition of relative diversity, A is more diverse than B if, and only if, $\mathrm{Div}_B(A) > \mathrm{Div}_A(B)$. For an arbitrary number of datasets $A_1, \ldots, A_N$:
$$\mathrm{Div}_{A_2,\ldots,A_N}(A_1) = 1 - p\big(d(a_{11}, a_{12}) < \min_{i=2:N} d(a_{i1}, a_{i2})\big) \quad (2)$$
where $a_{i1}, a_{i2} \in A_i$ are randomly selected.
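Both relative measures are probabilities over ranked distances, so they can be estimated by Monte Carlo given any image descriptors and distance. The sketch below uses Euclidean distance on synthetic feature vectors; the sampling loop is an assumed implementation, not the paper's code:

```python
import numpy as np

def relative_density(A, B, trials=2000, seed=0):
    """Monte Carlo estimate of Den_B(A) = p(d(a1, a2) < d(b1, b2)), where a2 is
    the nearest neighbor of a random a1 in A (likewise b2 for b1 in B).
    A, B: (num_images, feature_dim) arrays; Euclidean distance assumed."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        ia, ib = rng.integers(len(A)), rng.integers(len(B))
        da = np.delete(np.linalg.norm(A - A[ia], axis=1), ia).min()
        db = np.delete(np.linalg.norm(B - B[ib], axis=1), ib).min()
        wins += da < db
    return wins / trials

def relative_diversity(A, B, trials=2000, seed=0):
    """Div_B(A) = 1 - p(d(a1, a2) < d(b1, b2)) for randomly drawn pairs."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        i, j = rng.choice(len(A), 2, replace=False)
        k, l = rng.choice(len(B), 2, replace=False)
        wins += np.linalg.norm(A[i] - A[j]) < np.linalg.norm(B[k] - B[l])
    return 1 - wins / trials

rng = np.random.default_rng(3)
tight = rng.standard_normal((200, 16)) * 0.1  # concentrated set: dense, low diversity
spread = rng.standard_normal((200, 16))       # dispersed set: sparser, more diverse
assert relative_density(tight, spread) > 0.5    # the tight set is denser
assert relative_diversity(spread, tight) > 0.5  # the spread set is more diverse
```

The paper replaces the distance oracle with human similarity judgments on AMT, which is why only the ranking of distances, never their absolute values, enters the definitions.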
3.2 Experimental Results

We measured the relative densities and diversities between SUN, ImageNet and Places using AMT. Both measures used the same experimental interface: workers were presented with different pairs of images and had to select the pair containing the most similar images. We observed that different annotators are consistent in deciding whether one pair of images is more similar than another. In these experiments, the only difference between estimating density and diversity is how the pairs are generated. For the diversity experiment, the pairs are randomly sampled from each database. Each trial is composed of 4 pairs from each database, giving a total of 12 pairs to choose from. We used 4 pairs per database to increase the chances of finding a similar pair and to avoid users having to skip trials. AMT workers had to select the most similar pair on each trial. We ran 40 trials per category and two observers per trial, for the 88 categories in common between the ImageNet, SUN and Places databases. Fig. 3a shows some examples of pairs from one of the diversity experiments; the pair selected by AMT workers as more similar is highlighted.

For the density experiments, we selected pairs that were more likely to be visually similar. This would require first finding the true nearest neighbor of each image, which would be experimentally costly. Instead, we used visual similarity as measured by the Euclidean distance between the Gist descriptors [17] of two images. Each pair of images was composed of one randomly selected image and its 5th nearest neighbor using Gist (we ignored the first 4 neighbors to avoid near duplicates, which would give a false sense of high density). In this case we also show 12 pairs of images at each trial, but run 25 trials per category instead of 40 to avoid duplicate queries. Fig. 3b shows some examples of pairs from one of the density experiments, again with the selected pair highlighted. Notice that in the density experiment (where we computed neighbors) the pairs look, in general, more similar than in the diversity experiment.

Fig. 3c shows a scatter plot of relative diversity vs. relative density for all 88 categories and the three databases. The crossing point of the two black lines indicates where all the results would fall if the datasets were identical in terms of diversity and density. The figure also shows the average density and diversity over all categories for each dataset. In terms of density, the three datasets are, on average, very similar. However, there is a larger variation in terms of diversity, showing Places to be the most diverse of the three datasets.

Figure 3: a) Examples of pairs for the diversity experiment. b) Examples of pairs for the density experiment. c) Scatter plot of relative diversity vs. relative density per category and dataset.

[Figure 4 plots classification accuracy vs. number of training samples per category; bracketed legend values give accuracy with all training data. Test on SUN 88: train on Places 88 [69.5], SUN 88 [63.3], ImageNet 88 [62.8]. Test on ImageNet 88: train on ImageNet 88 [65.6], Places 88 [60.3], SUN 88 [49.2]. Test on Places 88: train on Places 88 [54.2], ImageNet 88 [44.6], SUN 88 [37.0].]

Figure 4: Cross-dataset generalization of training on the 88 common scenes between Places, SUN and ImageNet, then testing on the 88 common scenes from: a) SUN, b) ImageNet and c) Places.
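The density-experiment pairs described above (a random image paired with its 5th nearest neighbor, skipping the 4 closest to avoid near-duplicates) can be sketched with any descriptor standing in for Gist:

```python
import numpy as np

def fifth_neighbor_pair(descriptors, rng):
    """Pair a random image with its 5th nearest neighbor (Euclidean distance on
    the descriptors, standing in for Gist), skipping the 4 closest neighbors."""
    i = rng.integers(len(descriptors))
    d = np.linalg.norm(descriptors - descriptors[i], axis=1)
    order = np.argsort(d)   # order[0] is i itself (distance zero)
    return i, order[5]      # skip self and the 4 nearest neighbors

rng = np.random.default_rng(4)
gist = rng.standard_normal((50, 512))   # hypothetical 512-d Gist descriptors
i, j = fifth_neighbor_pair(gist, rng)
assert i != j
assert j == np.argsort(np.linalg.norm(gist - gist[i], axis=1))[5]
```

Skipping the nearest few neighbors is the cheap proxy the paper uses for duplicate suppression; exact duplicates would otherwise make every dataset look artificially dense.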
The average relative diversity on each dataset is 0.83 for Places, 0.67 for ImageNet and 0.50 for SUN. In the experiment, users selected pairs from the SUN database as the closest to each other 50% of the time, while pairs from the Places database were judged to be the most similar on only 17% of the trials. The categories with the largest variation in diversity across the three datasets are playground, veranda and waiting room.

3.3 Cross Dataset Generalization

As discussed in [23], training and testing across different datasets generally results in a drop of performance due to the dataset bias problem. In this case, the bias between datasets is due, among other factors, to the differences in density and diversity between the three datasets. Fig. 4 shows the classification results obtained from training and testing on different permutations of the 3 datasets. For these results we use the features extracted from a pre-trained ImageNet-CNN and a linear SVM. In all three cases, training and testing on the same dataset provides the best performance for a fixed number of training examples. As the Places database is very large, it achieves the best performance on two of the test sets when all the training data is used. In the next section we will show that a CNN trained using the Places database achieves a significant improvement on scene-centered benchmarks over a network trained using ImageNet.

Table 1: Classification accuracy on the test set of Places 205 and the test set of SUN 205.

                                Places 205   SUN 205
    Places-CNN                     50.0%      66.2%
    ImageNet-CNN feature+SVM       40.8%      49.6%

4 Training Neural Network for Scene Recognition and Deep Features

Deep convolutional neural networks have obtained impressive classification performance on the ImageNet benchmark [12].
For the training of Places-CNN, we randomly select 2,448,873 images from 205 categories of Places (referred to as Places 205) as the training set, with a minimum of 5,000 and a maximum of 15,000 images per category. The validation set contains 100 images per category and the test set contains 200 images per category (a total of 41,000 images). Places-CNN is trained using the Caffe package on an NVIDIA Tesla K40 GPU. It took about 6 days to finish 300,000 iterations of training. The network architecture of Places-CNN is the same as the one used in the Caffe reference network [10]. The Caffe reference network, which is trained on 1.2 million images of ImageNet (ILSVRC 2012), has approximately the same architecture as the network proposed by [12]. We refer to the Caffe reference network as ImageNet-CNN in the following comparison experiments.

4.1 Visualization of the Deep Features

By visualizing the responses of the units at various network layers, we can better understand the differences between ImageNet-CNN and Places-CNN, given that they share the same architecture. Fig. 5 visualizes the learned representations of the units at the Conv 1, Pool 2, Pool 5, and FC 7 layers of the two networks. Whereas Conv 1 units can be directly visualized (they capture oriented edges and opponent colors in both networks), we use the mean image method to visualize the units of the higher layers: we first combine the test set of ImageNet LSVRC2012 (100,000 images) and SUN397 (108,754 images) as the input for both networks; then we sort all these images based on the activation response of each unit at each layer; finally we average the top 100 images with the largest responses for each unit as a kind of receptive field (RF) visualization of that unit. To compare the units from the two networks, Fig. 5 displays the mean images sorted by their first principal component.
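The mean-image method just described can be sketched as follows; `images` and `activations` are hypothetical arrays standing in for the combined ImageNet/SUN test images and a layer's unit responses.

```python
import numpy as np

def mean_image_rf(images, activations, top_k=100):
    """Approximate each unit's receptive field by averaging the top_k
    images that activate it most strongly (mean-image visualization).
    images: (n_images, H, W, 3); activations: (n_units, n_images)."""
    n_units = activations.shape[0]
    rfs = np.empty((n_units,) + images.shape[1:])
    for u in range(n_units):
        top = np.argsort(activations[u])[::-1][:top_k]  # strongest responses first
        rfs[u] = images[top].mean(axis=0)
    return rfs
```

Sorting all units' mean images by the first principal component of the resulting `rfs` then yields a figure like Fig. 5.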
Despite the simplicity of the method, the units in both networks exhibit many differences starting from Pool 2. From Pool 2 to Pool 5 and FC 7, the units in ImageNet-CNN gradually develop RFs that look like object blobs, while the units in Places-CNN develop more RFs that look like landscapes with spatial structure. These learned unit structures are closely related to the differences in the training data. In future work, it will be fascinating to relate the similarities and differences of the RFs at different layers of the object-centric and scene-centric networks to the known object-centered and scene-centered neural cortical pathways identified in the human brain (for a review, see [16]). In the next section we show that these two networks, differing only in their training sets, yield very different performance on a variety of recognition benchmarks.

4.2 Results on Places 205 and SUN 205

After the Places-CNN is trained, we use the final layer output (soft-max) of the network to classify the images in the test sets of Places 205 and SUN 205. The classification results are listed in Table 1. As a baseline comparison, we show the results of a linear SVM trained on ImageNet-CNN features of 5,000 images per category in Places 205 and 50 images per category in SUN 205 respectively. Places-CNN performs much better. We further compute the performance of Places-CNN in terms of the top-5 error rate (a test sample is counted as misclassified if the ground-truth label is not among the top 5 labels predicted by the model). The top-5 error rate on the test set of Places 205 is 18.9%, while the top-5 error rate on the test set of SUN 205 is 8.1%.

4.3 Generic Deep Features for Visual Recognition

We use the responses from the trained CNN as generic features for visual recognition tasks. Responses from the higher-level layers of a CNN have proven to be effective generic features with state-of-the-art performance on various image datasets [5, 20].
Thus we evaluate the performance of the deep features from the Places-CNN on the following scene and object benchmarks: SUN397 [24], MIT Indoor67 [19], Scene15 [13], SUN Attribute [18], Caltech101 [7], Caltech256 [8], Stanford Action40 [25], and UIUC Event8 [15]. All the experiments follow the standards in those papers.² As a comparison, we evaluate the performance of the deep features from the ImageNet-CNN on those same benchmarks. Places-CNN and ImageNet-CNN have exactly the same network architecture, but they are trained on scene-centric data and object-centric data respectively. We use the deep features from the response of the Fully Connected Layer (FC) 7 of the CNNs, which is the final fully connected layer before the class predictions are produced. There is only a minor difference between the features of the FC 7 and FC 6 layers [5]. The deep feature for each image is a 4096-dimensional vector.

[Figure 5: Visualization of the units' receptive fields at different layers for the ImageNet-CNN and Places-CNN. Conv 1 contains 96 filters. The Pool 2 feature map is 13×13×256; the Pool 5 feature map is 6×6×256; the FC 7 feature map is 4096×1. A subset of units at each layer is shown.]

Table 2: Classification accuracy/precision on scene-centric and object-centric databases for the Places-CNN and ImageNet-CNN features. The classifier in all the experiments is a linear SVM with the same parameters for the two features.

                            SUN397        MIT Indoor67   Scene15       SUN Attribute
    Places-CNN feature      54.32±0.14    68.24          90.19±0.34    91.29
    ImageNet-CNN feature    42.61±0.16    56.79          84.23±0.37    89.85

                            Caltech101    Caltech256     Action40      Event8
    Places-CNN feature      65.18±0.88    45.59±0.31     42.86±0.25    94.12±0.99
    ImageNet-CNN feature    87.22±0.92    67.23±0.27     54.92±0.33    94.42±0.76

Table 2 summarizes the classification accuracy on various datasets for the ImageNet-CNN feature and the Places-CNN feature.
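The evaluation protocol above (fixed deep features plus a linear SVM with identical parameters) can be sketched as follows. To keep this sketch dependency-free we use a minimal subgradient-descent solver for the soft-margin SVM objective as a stand-in for the LIBLINEAR solver [6] used in the paper, and random feature matrices in place of real FC 7 responses.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, epochs=200, lr=0.01):
    """Minimal subgradient descent on the soft-margin SVM objective
    (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b)).
    A stand-in for LIBLINEAR, not the paper's actual solver."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                                  # margin violators
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

rng = np.random.default_rng(0)
dim = 4096                                  # FC 7 features are 4096-dimensional
y_tr = rng.choice([-1.0, 1.0], 200)
X_tr = rng.normal(size=(200, dim)) + y_tr[:, None]   # synthetic "deep features"
y_te = rng.choice([-1.0, 1.0], 50)
X_te = rng.normal(size=(50, dim)) + y_te[:, None]

w, b = train_linear_svm(X_tr, y_tr)
acc = np.mean(np.sign(X_te @ w + b) == y_te)
```

In the paper's experiments, one such classifier per category is trained on the extracted features, with the same fixed parameters for both the Places-CNN and ImageNet-CNN features so that the comparison isolates the feature quality.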
Fig. 6 plots the classification accuracy for different visual features on the SUN397 database and the SUN Attribute dataset. The classifier is a linear SVM with the same default parameters for the two deep features (C=1) [6]. The Places-CNN feature shows impressive performance on scene classification benchmarks, outperforming the current state-of-the-art methods for SUN397 (47.20% [21]) and for MIT Indoor67 (66.87% [4]). On the other hand, the ImageNet-CNN feature shows better performance on object-related databases. Importantly, our comparison shows that Places-CNN and ImageNet-CNN have complementary strengths on scene-centric tasks and object-centric tasks, as expected from the benchmark datasets used to train these networks.

² Detailed experimental setups are included in the supplementary materials.

[Figure 6: Classification accuracy on the SUN397 dataset and average precision on the SUN Attribute dataset with increasing numbers of training samples for the ImageNet-CNN and Places-CNN features. Results for other hand-designed features/kernels are taken from [24] and [18] respectively. SUN397: Places-CNN [54.3], ImageNet-CNN [42.6], Combined kernel [37.5], HoG2x2 [26.3], DenseSIFT [23.5], Texton [21.6], Gist [16.3], LBP [14.7]. SUN Attribute: Places-CNN [0.912], ImageNet-CNN [0.898], Combined kernel [0.879], HoG2x2 [0.848], Self-similarity [0.820], Gist [0.799], Geometric Color Hist [0.783].]

Furthermore, we follow the same experimental setting of train and test splits as [1] to fine-tune Places-CNN on SUN397: the fine-tuned Places-CNN achieves an accuracy of 56.2%, compared to the accuracy of 52.2% achieved by the fine-tuned ImageNet-CNN in [1]. Note that the final output of the fine-tuned CNN is directly used to predict the scene category. Additionally, we train a Hybrid-CNN by combining the training set of Places-CNN and the training set of ImageNet-CNN. We remove the overlapping scene categories from the training set of ImageNet; the training set of Hybrid-CNN then has 3.5 million images from 1183 categories. Hybrid-CNN is trained over 700,000 iterations, under the same network architecture as Places-CNN and ImageNet-CNN. The accuracy on the validation set is 52.3%. We evaluate the deep feature (FC 7) from Hybrid-CNN on the benchmarks shown in Table 3. Combining the two datasets yields an additional increase in performance on a few benchmarks.

Table 3: Classification accuracy/precision on various databases for the Hybrid-CNN feature. Numbers in bold indicate results that outperform the ImageNet-CNN feature or Places-CNN feature.

                  SUN397       MIT Indoor67   Scene15      SUN Attribute   Caltech101   Caltech256   Action40     Event8
    Hybrid-CNN    53.86±0.21   70.80          91.59±0.48   91.56           84.79±0.66   65.06±0.25   55.28±0.64   94.22±0.78

5 Conclusion

Deep convolutional neural networks are designed to benefit and learn from massive amounts of data. We introduce a new benchmark with millions of labeled images, the Places database, designed to represent places and scenes found in the real world. We introduce a novel measure of density and diversity, and show the usefulness of these quantitative measures for estimating dataset biases and comparing different datasets. We demonstrate that object-centric and scene-centric neural networks differ in their internal representations, by introducing a simple visualization of the receptive fields of CNN units.
Finally, we achieve state-of-the-art performance using our deep features on all the current scene benchmarks.

Acknowledgements. Thanks to Aditya Khosla for valuable discussions. This work is supported by the National Science Foundation under Grant No. 1016862 to A.O., ONR MURI N000141010933 to A.T., as well as the MIT Big Data Initiative at CSAIL, Google and Xerox Awards, a hardware donation from NVIDIA Corporation to A.O. and A.T., Intel and Google awards to J.X., and grant TIN2012-38187-C03-02 to A.L. This work is also supported by the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory, contract FA8650-12-C-7211 to A.T. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.

References

[1] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In Proc. ECCV, 2014.
[2] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proc. CVPR, 2009.
[4] C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode seeking. In Advances in Neural Information Processing Systems, 2013.
[5] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. 2014.
[6] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. 2008.
[7] L. Fei-Fei, R. Fergus, and P. Perona.
Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 2007.
[8] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007.
[9] C. Heip, P. Herman, and K. Soetaert. Indices of diversity and evenness. Oceanis, 1998.
[10] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
[11] T. Konkle, T. F. Brady, G. A. Alvarez, and A. Oliva. Scene memory is more detailed than you think: The role of categories in visual long-term memory. Psychological Science, 2010.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proc. CVPR, 2006.
[14] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
[15] L.-J. Li and L. Fei-Fei. What, where and who? Classifying events by scene and object recognition. In Proc. ICCV, 2007.
[16] A. Oliva. Scene perception (chapter 51). The New Visual Neurosciences, 2013.
[17] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int'l Journal of Computer Vision, 2001.
[18] G. Patterson and J. Hays. SUN attribute database: Discovering, annotating, and recognizing scene attributes. In Proc. CVPR, 2012.
[19] A. Quattoni and A. Torralba. Recognizing indoor scenes. In Proc. CVPR, 2009.
[20] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. arXiv preprint arXiv:1403.6382, 2014.
[21] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek.
Image classification with the Fisher vector: Theory and practice. Int'l Journal of Computer Vision, 2013.
[22] E. H. Simpson. Measurement of diversity. Nature, 1949.
[23] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Proc. CVPR, 2011.
[24] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Proc. CVPR, 2010.
[25] B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning bases of action attributes and parts. In Proc. ICCV, 2011.
Efficient Sampling for Learning Sparse Additive Models in High Dimensions

Hemant Tyagi, ETH Zürich, htyagi@inf.ethz.ch
Andreas Krause, ETH Zürich, krausea@ethz.ch
Bernd Gärtner, ETH Zürich, gaertner@inf.ethz.ch

Abstract

We consider the problem of learning sparse additive models, i.e., functions of the form $f(x) = \sum_{l \in S} \phi_l(x_l)$, $x \in \mathbb{R}^d$, from point queries of f. Here S is an unknown subset of coordinate variables with $|S| = k \ll d$. Assuming the $\phi_l$'s to be smooth, we propose a set of points at which to sample f and an efficient randomized algorithm that recovers a uniform approximation to each unknown $\phi_l$. We provide a rigorous theoretical analysis of our scheme along with sample complexity bounds. Our algorithm utilizes recent results from compressive sensing theory along with a novel convex quadratic program for recovering robust uniform approximations to univariate functions from point queries corrupted with arbitrary bounded noise. Lastly, we theoretically analyze the impact of noise, either arbitrary but bounded or stochastic, on the performance of our algorithm.

1 Introduction

Several problems in science and engineering require estimating a real-valued, non-linear (and often non-convex) function f defined on a compact subset of $\mathbb{R}^d$ in high dimensions. This challenge arises, e.g., when characterizing complex engineered or natural (e.g., biological) systems [1, 2, 3]. The numerical solution of such problems involves learning the unknown f from point evaluations $(x_i, f(x_i))_{i=1}^{n}$. Unfortunately, if the only assumption on f is mere smoothness, then the problem is in general intractable. For instance, it is well known [4] that if f is $C^s$-smooth then $n = \Omega((1/\delta)^{d/s})$ samples are needed to uniformly approximate f within error $0 < \delta < 1$. This exponential dependence on d is referred to as the curse of dimensionality.
Fortunately, many functions arising in practice are much better behaved in the sense that they are intrinsically low-dimensional, i.e., they depend on only a small subset of the d variables. Estimating such functions has received much attention and has led to a considerable amount of theory along with algorithms that do not suffer from the curse of dimensionality (cf., [5, 6, 7, 8]). Here we focus on the problem of learning one such class of functions, assuming f possesses the sparse additive structure:
$$f(x_1, x_2, \ldots, x_d) = \sum_{l \in S} \phi_l(x_l); \quad S \subset \{1, \ldots, d\}, \; |S| = k \ll d. \tag{1.1}$$
Functions of the form (1.1) are referred to as sparse additive models (SPAMs) and generalize sparse linear models, to which they reduce if each $\phi_l$ is linear. The problem of estimating SPAMs has received considerable attention in the regression setting (cf., [9, 10, 11] and references within), where $(x_i, f(x_i))_{i=1}^{n}$ are typically i.i.d. samples from some unknown probability measure $\mathbb{P}$. This setting, however, does not consider the possibility of sampling f at specifically chosen points, tailored to the additive structure of f. In this paper, we propose a strategy for querying f, together with an efficient recovery algorithm, with much stronger guarantees than known in the regression setting. In particular, we provide the first results guaranteeing uniformly accurate recovery of each individual component $\phi_l$ of the SPAM. This can be crucial in applications where the goal is not merely to approximate f, but to gain insight into its structure.

Related work. SPAMs have been studied extensively in the regression setting, with observations being corrupted by random noise. [9] proposed the COSSO method, which is an extension of the Lasso to the reproducing kernel Hilbert space (RKHS) setting. A similar extension was considered in [10]. In [12], the authors propose a least squares method regularized with smoothness, with each $\phi_l$ lying in an RKHS, and derive error rates for estimating f in the $L_2(\mathbb{P})$ norm.¹
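As a concrete illustration of (1.1), consider a SPAM in d = 100 dimensions with active set S = {3, 7}. The component functions below are our own illustrative choices, not from the paper.

```python
import numpy as np

d = 100
S = [3, 7]  # unknown active coordinates, k = 2 << d

# Illustrative smooth univariate components, centered on [-1, 1]
phis = {3: lambda t: np.sin(np.pi * t),   # odd, so its integral over [-1, 1] is 0
        7: lambda t: t ** 3}              # also centered

def f(x):
    """Sparse additive model: f depends only on the coordinates in S."""
    return sum(phis[l](x[l]) for l in S)

x = np.zeros(d)
x[3], x[7] = 0.5, -1.0
print(f(x))  # sin(pi/2) + (-1)^3 = 0.0
```

Evaluating such an f reveals nothing directly about which of the 100 coordinates matter; identifying S and the individual components from point queries is exactly the problem addressed below.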
[13, 14] propose methods based on a least squares loss regularized with sparsity and smoothness constraints. [13] proves consistency of its method in terms of mean squared risk, while [14] derives error rates for estimating f in the empirical $L_2(\mathbb{P}_n)$ norm.¹ [11] considers the setting where each $\phi_l$ lies in an RKHS. They propose a convex program for estimating f and derive error rates in the $L_2(\mathbb{P})$ and $L_2(\mathbb{P}_n)$ norms. Furthermore, they establish the minimax optimality of their method for the $L_2(\mathbb{P})$ norm. For instance, they derive an error rate of $O(k \log d / n + k n^{-2s/(2s+1)})$ in the $L_2(\mathbb{P})$ norm for estimating $C^s$-smooth SPAMs. An estimator similar to the one in [11] was also considered by [15]. They derive similar error rates to [11], albeit under stronger assumptions on f.

There is further related work in approximation theory, where it is assumed that f can be sampled at a desired set of points. [5] considers a setting more general than (1.1), with f simply assumed to depend on an unknown subset of $k \ll d$ coordinate variables. They construct a set of sampling points of size $O(c^k \log d)$ for some constant $c > 0$, and present an algorithm that recovers a uniform approximation² to f. This model is generalized in [8], with f assumed to be of the form $f(x) = g(Ax)$ for unknown $A \in \mathbb{R}^{k \times d}$; each row of A is assumed to be sparse. [7] generalizes this further by removing the sparsity assumption on A. While the methods of [5, 8, 7] could be employed for learning SPAMs, their sampling sets would be of size exponential in k, and hence sub-optimal. Furthermore, while these methods derive uniform approximations to f, they are unable to recover the individual $\phi_l$'s.

Our contributions. Our contributions are threefold:

1. We propose an efficient algorithm that queries f at $O(k \log d)$ locations and recovers: (i) the active set S, along with (ii) a uniform approximation to each $\phi_l$, $l \in S$.
In contrast, the existing error bounds in the statistics community [11, 12, 15] are in the much weaker $L_2(\mathbb{P})$ sense. Furthermore, the existing theory in both statistics and approximation theory provides explicit error bounds for recovering f, not the individual $\phi_l$'s.

2. An important component of our algorithm is a novel convex quadratic program for estimating an unknown univariate function from point queries corrupted with arbitrary bounded noise. We derive rigorous error bounds for this program in the $L_\infty$ norm that demonstrate the robustness of the returned solution. We also explicitly demonstrate the effect of noise, sampling density and the curvature of the function on the solution.

3. We theoretically analyze the impact of additive noise in the point queries on the performance of our algorithm, for two noise models: arbitrary bounded noise and stochastic (i.i.d.) noise. In particular, for additive Gaussian noise, we show that our algorithm recovers a robust uniform approximation to each $\phi_l$ with at most $O(k^3 (\log d)^2)$ point queries of f. We also provide simulation results that validate our theoretical findings.

2 Problem statement

For any function g we denote its p-th derivative by $g^{(p)}$ when p is large; otherwise we use the appropriate number of prime symbols. $\|g\|_{L_\infty[a,b]}$ denotes the $L_\infty$ norm of g on [a, b]. For a vector x we denote its $\ell_q$ norm, $1 \le q \le \infty$, by $\|x\|_q$. We consider approximating functions $f : \mathbb{R}^d \to \mathbb{R}$ from point queries. In particular, for some unknown active set $S \subset \{1, \ldots, d\}$ with $|S| = k \ll d$, we assume f to be of the additive form $f(x_1, \ldots, x_d) = \sum_{l \in S} \phi_l(x_l)$. Here $\phi_l : \mathbb{R} \to \mathbb{R}$ are the individual univariate components of the model. Our goal is to query f at suitably chosen points in its domain in order to recover an estimate $\phi_{est,l}$ of $\phi_l$ on a compact subset $\Omega \subset \mathbb{R}$ for each $l \in S$. We measure the approximation error in the $L_\infty$ norm.
For simplicity, we assume that $\Omega = [-1, 1]$, meaning that we guarantee an upper bound on $\|\phi_{est,l} - \phi_l\|_{L_\infty[-1,1]}$ for $l \in S$. Furthermore, we assume that we can query f from a slight enlargement $[-(1+r), (1+r)]^d$ of $[-1, 1]^d$ for³ some small $r > 0$. As will be seen later, the enlargement r can be made arbitrarily close to 0.

¹ $\|f\|_{L_2(\mathbb{P})}^2 = \int |f(x)|^2 \, d\mathbb{P}(x)$ and $\|f\|_{L_2(\mathbb{P}_n)}^2 = \frac{1}{n} \sum_i f^2(x_i)$.
² This means in the $L_\infty$ norm.

We now list our main assumptions for this problem.

1. Each $\phi_l$ is assumed to be sufficiently smooth. In particular, we assume $\phi_l \in C^5[-(1+r), (1+r)]$, where $C^5$ denotes five times continuous differentiability. Since $[-(1+r), (1+r)]$ is compact, this implies that there exist constants $B_1, \ldots, B_5 \ge 0$ so that
$$\max_{l \in S} \|\phi_l^{(p)}\|_{L_\infty[-(1+r),(1+r)]} \le B_p; \quad p = 1, \ldots, 5. \tag{2.1}$$

2. We assume each $\phi_l$ to be centered in the interval $[-1, 1]$, i.e. $\int_{-1}^{1} \phi_l(t) \, dt = 0$ for $l \in S$. Such a condition is necessary for unique identification of the $\phi_l$'s. Otherwise one could simply replace each $\phi_l$ with $\phi_l + a_l$ for $a_l \in \mathbb{R}$ with $\sum_l a_l = 0$, and unique identification would not be possible.

3. We require that for each $\phi_l$ there exists a connected interval $I_l \subseteq [-1, 1]$ with $\mu(I_l) \ge \delta$ such that $|\phi_l'(x)| \ge D$ for all $x \in I_l$. Here $\mu(I)$ denotes the Lebesgue measure of I, and $\delta, D > 0$ are constants assumed to be known to the algorithm. This assumption essentially enables us to detect the active set S. If, say, $\phi_l'$ were zero or close to zero throughout $[-1, 1]$ for some $l \in S$, then due to Assumption 2 this would imply that $\phi_l$ itself is zero or close to zero.

We remark that it suffices to use estimates of our problem parameters instead of exact values. In particular, we can use upper bounds for k and $B_p$, $p = 1, \ldots, 5$, and lower bounds for the parameters D and $\delta$. Our methods and the results stated in the coming sections remain unchanged.

3 Our sampling scheme and algorithm

In this section, we first motivate and describe our sampling scheme for querying f.
We then outline our algorithm and explain the intuition behind its different stages. Consider the Taylor expansion of f at a point $\xi \in \mathbb{R}^d$ along the direction $v \in \mathbb{R}^d$ with step size $\epsilon > 0$. For any $C^p$-smooth f with $p \ge 2$, we obtain, with $\zeta = \xi + \theta v$ for some $0 < \theta < \epsilon$:
$$\frac{f(\xi + \epsilon v) - f(\xi)}{\epsilon} = \langle v, \nabla f(\xi) \rangle + \frac{1}{2} \epsilon \, v^T \nabla^2 f(\zeta) v. \tag{3.1}$$
Note that (3.1) can be interpreted as a noisy linear measurement of $\nabla f(\xi)$ with measurement vector v, the noise being the Taylor remainder term. Importantly, due to the sparse additive form of f, we have $\phi_l \equiv 0$ for $l \notin S$, implying that $\nabla f(\xi) = [\phi_1'(\xi_1) \; \phi_2'(\xi_2) \; \ldots \; \phi_d'(\xi_d)]$ is at most k-sparse. Hence (3.1) actually represents a noisy linear measurement of the k-sparse vector $\nabla f(\xi)$. For any fixed $\xi$, we know from compressive sensing (CS) [16, 17] that $\nabla f(\xi)$ can be recovered (with high probability) using few random linear measurements.⁴ This motivates the following sets of points with which we query f, as illustrated in Figure 1. For integers $m_x, m_v > 0$ we define
$$\mathcal{X} := \left\{ \xi_i = \frac{i}{m_x} (1, 1, \ldots, 1)^T \in \mathbb{R}^d : i = -m_x, \ldots, m_x \right\}, \tag{3.2}$$
$$\mathcal{V} := \left\{ v_j \in \mathbb{R}^d : v_{j,l} = \pm \tfrac{1}{\sqrt{m_v}} \text{ w.p. } 1/2 \text{ each}; \; j = 1, \ldots, m_v, \; l = 1, \ldots, d \right\}. \tag{3.3}$$
Using (3.1) at each $\xi_i \in \mathcal{X}$ and $v_j \in \mathcal{V}$, for $i = -m_x, \ldots, m_x$ and $j = 1, \ldots, m_v$, leads to
$$\underbrace{\frac{f(\xi_i + \epsilon v_j) - f(\xi_i)}{\epsilon}}_{y_{i,j}} = \langle v_j, \underbrace{\nabla f(\xi_i)}_{x_i} \rangle + \underbrace{\frac{1}{2} \epsilon \, v_j^T \nabla^2 f(\zeta_{i,j}) v_j}_{n_{i,j}}, \tag{3.4}$$

³ In case $f : [a, b]^d \to \mathbb{R}$, we can define $g : [-1, 1]^d \to \mathbb{R}$ where $g(x) = f(\frac{b-a}{2} x + \frac{b+a}{2}) = \sum_{l \in S} \tilde{\phi}_l(x_l)$ with $\tilde{\phi}_l(x_l) = \phi_l(\frac{b-a}{2} x_l + \frac{b+a}{2})$. We then sample g from within $[-(1+r), (1+r)]^d$ for some small $r > 0$ by querying f, and estimate $\tilde{\phi}_l$ in $[-1, 1]$, which in turn gives an estimate of $\phi_l$ in $[a, b]$.
⁴ Estimating sparse gradients via compressive sensing has been considered previously by Fornasier et al. [8], albeit for a substantially different function class than ours. Hence their sampling scheme differs considerably from ours, and is not tailored for learning SPAMs.
where $x_i = \nabla f(\xi_i) = [\phi_1'(i/m_x) \; \phi_2'(i/m_x) \; \ldots \; \phi_d'(i/m_x)]$ is k-sparse. Let us denote $V = [v_1 \ldots v_{m_v}]^T$, $y_i = [y_{i,1} \ldots y_{i,m_v}]$ and $n_i = [n_{i,1} \ldots n_{i,m_v}]$. Then for each i, we can write (3.4) in the succinct form
$$y_i = V x_i + n_i. \tag{3.5}$$
Here $V \in \mathbb{R}^{m_v \times d}$ represents the linear measurement matrix, $y_i \in \mathbb{R}^{m_v}$ denotes the measurement vector at $\xi_i$, and $n_i$ represents "noise" on account of the non-linearity of f. Note that we query f at $|\mathcal{X}|(|\mathcal{V}| + 1) = (2m_x + 1)(m_v + 1)$ points. Given $y_i$ and V, we can recover a robust approximation to $x_i$ via $\ell_1$ minimization [16, 17]. On account of the structure of $\nabla f$, we thus recover noisy estimates of $\phi_l'$ at equispaced points along the interval $[-1, 1]$. We are now in a position to formally present our algorithm for learning SPAMs.

[Figure 1: The points $\xi_i \in \mathcal{X}$ (blue disks) and $\xi_i + \epsilon v_j$ (red arrows) for $v_j \in \mathcal{V}$, lying along the diagonal from $(-1, -1, \ldots, -1)$ to $(1, 1, \ldots, 1)$.]

Our algorithm for learning SPAMs. The steps involved in our learning scheme are outlined in Algorithm 1. Steps 1-4 constitute the CS-based recovery stage, wherein we use the aforementioned sampling sets to formulate our problem as a CS one. Step 4 involves a simple thresholding procedure in which an appropriate threshold $\tau$ is employed to recover the unknown active set S. In Section 4 we provide precise conditions on our sampling parameters which guarantee exact recovery, i.e. $\hat{S} = S$. Step 5 leverages a convex quadratic program (P), which uses the noisy estimates of $\phi_l'(i/m_x)$, i.e., $\hat{x}_{i,l}$ for each $l \in \hat{S}$ and $i = -m_x, \ldots, m_x$, to return a cubic spline estimate $\tilde{\phi}_l'$. This program and its theoretical properties are explained in Section 4. Finally, in Step 6 we derive our final estimate $\phi_{est,l}$ via piecewise integration of $\tilde{\phi}_l'$ for each $l \in \hat{S}$. Hence our final estimate of $\phi_l$ is a spline of degree 4. The performance of Algorithm 1 for recovering S and the individual $\phi_l$'s is presented in Theorem 1, which is also our first main result. All proofs are deferred to the appendix.
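The sampling sets (3.2)-(3.3) and the finite-difference measurements (3.4)-(3.5) can be sketched as follows, using an illustrative SPAM of our own choosing (all names and values here are for the sketch only).

```python
import numpy as np

rng = np.random.default_rng(0)
d, m_x, m_v, eps = 20, 10, 12, 1e-3   # k = 2 active coordinates below

S = [3, 7]
def f(x):  # illustrative SPAM with smooth components
    return np.sin(np.pi * x[3]) + x[7] ** 3

# Sampling set X: 2*m_x + 1 points along the diagonal of [-1, 1]^d   (3.2)
xis = [(i / m_x) * np.ones(d) for i in range(-m_x, m_x + 1)]

# Measurement matrix V: Rademacher entries scaled by 1/sqrt(m_v)     (3.3)
V = rng.choice([-1.0, 1.0], size=(m_v, d)) / np.sqrt(m_v)

# Finite-difference measurements y_{i,j}, i.e. y_i = V x_i + n_i     (3.4)-(3.5)
Y = np.array([[(f(xi + eps * v) - f(xi)) / eps for v in V] for xi in xis])
```

Each row `Y[i]` is then a set of `m_v` noisy linear measurements of the k-sparse gradient at `xis[i]`, ready for the $\ell_1$ decoding step of Algorithm 1.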
Algorithm 1 Algorithm for learning the $\phi_l$ in the SPAM $f(x) = \sum_{l \in S} \phi_l(x_l)$

1: Choose $m_x$, $m_v$ and construct the sampling sets $\mathcal{X}$ and $\mathcal{V}$ as in (3.2), (3.3).
2: Choose a step size $\epsilon > 0$. Query f at $f(\xi_i)$, $f(\xi_i + \epsilon v_j)$ for $i = -m_x, \ldots, m_x$ and $j = 1, \ldots, m_v$.
3: Construct $y_i$ where $y_{i,j} = \frac{f(\xi_i + \epsilon v_j) - f(\xi_i)}{\epsilon}$ for $i = -m_x, \ldots, m_x$ and $j = 1, \ldots, m_v$.
4: Set $\hat{x}_i := \operatorname{argmin}_{y_i = Vz} \|z\|_1$. For $\tau > 0$, compute $\hat{S} = \bigcup_{i=-m_x}^{m_x} \{l \in \{1, \ldots, d\} : |\hat{x}_{i,l}| > \tau\}$.
5: For each $l \in \hat{S}$, run (P) as defined in Section 4 using $(\hat{x}_{i,l})_{i=-m_x}^{m_x}$, $\tau$ and some smoothing parameter $\gamma \ge 0$, to obtain $\tilde{\phi}_l'$.
6: For each $l \in \hat{S}$, set $\phi_{est,l}$ to be the piecewise integral of $\tilde{\phi}_l'$ as explained in Section 4.

Theorem 1. There exist constants $C, C_1 > 0$ such that if $m_x \ge 1/\delta$, $m_v \ge C_1 k \log d$, $0 < \epsilon < \frac{D \sqrt{m_v}}{C k B_2}$ and $\tau = \frac{C \epsilon k B_2}{2 \sqrt{m_v}}$, then with high probability $\hat{S} = S$, and for any $\gamma \ge 0$ the estimate $\phi_{est,l}$ returned by Algorithm 1 satisfies, for each $l \in S$:
$$\|\phi_{est,l} - \phi_l\|_{L_\infty[-1,1]} \le 59(1+\gamma) \frac{C \epsilon k B_2}{\sqrt{m_v}} + \frac{87}{64 m_x^4} \|\phi_l^{(5)}\|_{L_\infty[-1,1]}. \tag{3.6}$$

Recall that $k, B_2, D, \delta$ are the problem parameters introduced in Section 2, while $\epsilon$ is the step size parameter from (3.4). We see that with $O(k \log d)$ point queries of f and with $\epsilon < \frac{D \sqrt{m_v}}{C k B_2}$, the active set is recovered exactly. The error bound (3.6) holds for all such choices of $\epsilon$. It is a sum of two terms: the first arises during the estimation of $\nabla f$ in the CS stage, while the second is the interpolation error bound for interpolating $\phi_l'$ from its samples in the noise-free setting. We note that our point queries lie in $[-(1 + \epsilon/\sqrt{m_v}), (1 + \epsilon/\sqrt{m_v})]^d$. Under the stated condition on $\epsilon$ in Theorem 1 we have $\epsilon/\sqrt{m_v} < \frac{D}{C k B_2}$, which can be made arbitrarily close to zero by choosing an appropriately small $\epsilon$. Hence we sample from only a small enlargement of $[-1, 1]^d$.

4 Analyzing the algorithm

We now describe and analyze in more detail the individual stages of Algorithm 1. We first analyze Steps 1-4, which constitute the compressive sensing (CS) based recovery stage.
Next, we analyze Step 5, where we also introduce our convex quadratic program. Lastly, we analyze Step 6, where we derive our final estimate $\phi_{est,l}$.

Compressive sensing-based recovery stage. This stage of Algorithm 1 involves solving a sequence of linear programs to recover estimates of $x_i = [\phi_1'(i/m_x) \ldots \phi_d'(i/m_x)]$ for $i = -m_x, \ldots, m_x$. We note that the measurements $y_i$ are noisy linear measurements of $x_i$, with the noise being arbitrary and bounded. For such a noise model, it is known that $\ell_1$ minimization results in robust recovery of the sparse signal [18]. Using this result in our setting allows us to quantify the recovery error $\|\hat{x}_i - x_i\|_2$, as specified in Lemma 1.

Lemma 1. There exist constants $c_3' \ge 1$ and $C, c_1' > 0$ such that for $m_v$ satisfying $c_3' k \log d < m_v < d / (\log 6)^2$, we have with probability at least $1 - e^{-c_1' m_v} - e^{-\sqrt{m_v d}}$ that $\hat{x}_i$ satisfies $\|\hat{x}_i - x_i\|_2 \le \frac{C \epsilon k B_2}{2 \sqrt{m_v}}$ for all $i = -m_x, \ldots, m_x$. Furthermore, given that this holds and that $m_x \ge 1/\delta$ is satisfied, we then have for any $\epsilon < \frac{D \sqrt{m_v}}{C k B_2}$ that the choice $\tau = \frac{C \epsilon k B_2}{2 \sqrt{m_v}}$ implies $\hat{S} = S$.

Thus, upon using $\ell_1$ minimization based decoding at the $2m_x + 1$ points, we recover robust estimates $\hat{x}_i$ of $x_i$, which immediately gives us estimates $\hat{\phi}_l'(i/m_x) = \hat{x}_{i,l}$ of $\phi_l'(i/m_x)$ for $i = -m_x, \ldots, m_x$ and $l = 1, \ldots, d$. In order to recover the active set S, we first note that the spacing between consecutive samples in $\mathcal{X}$ is $1/m_x$. Therefore the condition $m_x \ge 1/\delta$ implies, on account of Assumption 3, that the sample spacing is fine enough to ensure that for each $l \in S$ there exists a sample i for which $|\phi_l'(i/m_x)| \ge D$ holds. The stated choice of the step size $\epsilon$ essentially guarantees, for all $l \notin S$ and all i, that $\hat{\phi}_l'(i/m_x)$ lies within a sufficiently small neighborhood of the origin, in turn enabling detection of the active set. Therefore, after this stage of Algorithm 1, we have at hand the active set S along with the estimates $(\hat{\phi}_l'(i/m_x))_{i=-m_x}^{m_x}$ for each $l \in S$.
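The $\ell_1$ decoding in Step 4 (basis pursuit, $\min \|z\|_1$ s.t. $Vz = y$) can be cast as a linear program by splitting $z = u - w$ with $u, w \ge 0$. A minimal sketch using SciPy's LP solver (our own illustration, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(V, y):
    """Basis pursuit: argmin ||z||_1 subject to V z = y, as an LP.
    Split z = u - w with u, w >= 0 and minimize sum(u) + sum(w)."""
    m, d = V.shape
    c = np.ones(2 * d)
    A_eq = np.hstack([V, -V])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    uw = res.x
    return uw[:d] - uw[d:]

# Tiny demo: decode measurements of a 1-sparse vector. With enough
# random measurements, x_hat typically recovers x_true exactly.
rng = np.random.default_rng(1)
d, m = 12, 6
V = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)
x_true = np.zeros(d); x_true[4] = 2.0
x_hat = l1_decode(V, V @ x_true)
```

In Algorithm 1 this decoding is repeated once per sampling point $\xi_i$, and the recovered coordinates are then thresholded at $\tau$ to form $\hat{S}$.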
Furthermore, it is easy to see that $|\hat{\phi}'_l(i/m_x) - \phi'_l(i/m_x)| \le \tau = \frac{C \epsilon k B_2}{2\sqrt{m_v}}$ for all $l \in S$ and all $i$.

Robust estimation via cubic splines. Our aim now is to recover a smooth, robust estimate of $\phi'_l$ using the noisy samples $(\hat{\phi}'_l(i/m_x))_{i=-m_x}^{m_x}$. Note that the noise here is arbitrary and bounded by $\tau = \frac{C \epsilon k B_2}{2\sqrt{m_v}}$. To this end we choose to use cubic splines as our estimates, which are essentially piecewise cubic polynomials that are $C^2$ smooth [19]. There is a considerable amount of literature in the statistics community devoted to the problem of estimating univariate functions from noisy samples via cubic splines (cf. [20, 21, 22, 23]), albeit under the setting of random noise. Cubic splines have also been studied extensively in the approximation-theoretic setting for interpolating samples (cf. [19, 24, 25]).

We introduce our solution to this problem in a more general setting. Consider a smooth function $g : [t_1, t_2] \to \mathbb{R}$ and a uniform mesh⁵ $Q : t_1 = x_0 < x_1 < \dots < x_{n-1} < x_n = t_2$ with $x_i - x_{i-1} = h$. We have at hand noisy samples $\hat{g}_i = g(x_i) + e_i$, with the noise $e_i$ being arbitrary and bounded: $|e_i| \le \tau$. In the noiseless scenario the problem would be one of interpolation, for which a popular class of cubic splines are the "not-a-knot" cubic splines [24]. These achieve optimal $O(h^4)$ error rates for $C^4$ smooth $g$ without using any higher-order information about $g$ as boundary conditions. Let $\mathcal{H}_2[t_1, t_2]$ denote the space of cubic splines defined on $[t_1, t_2]$ w.r.t. $Q$. We then propose finding the cubic spline estimate as a solution of the following convex optimization problem (in the $4n$ coefficients of the $n$ cubic polynomials) for some parameter $\gamma \ge 0$:
$$(P) \quad \min_{L \in \mathcal{H}_2[t_1, t_2]} \int_{t_1}^{t_2} L''(x)^2 \, dx \qquad (4.1)$$
$$\text{s.t.} \quad \hat{g}_i - \gamma\tau \le L(x_i) \le \hat{g}_i + \gamma\tau; \quad i = 0, \dots, n, \qquad (4.2)$$
$$L'''(x_1^-) = L'''(x_1^+), \qquad L'''(x_{n-1}^-) = L'''(x_{n-1}^+). \qquad (4.3)$$

⁵We consider uniform meshes for clarity of exposition. The results in this section can be easily generalized to non-uniform meshes.
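A discrete analogue of (P) can be sketched in a few lines: instead of optimizing over cubic-spline coefficients, we optimize the fitted values directly, with squared second differences standing in for the total-curvature integral and the same $\pm\gamma\tau$ box constraints. This is an illustrative simplification (a bound-constrained quadratic program), not the paper's spline program:

```python
import numpy as np
from scipy.optimize import minimize

def smooth_fit(g_noisy, tau, gamma):
    """Discrete analogue of (P): minimize the sum of squared second
    differences of the fitted values L_i (a surrogate for the curvature
    integral) subject to the box constraints |L_i - g_i| <= gamma * tau."""
    n = len(g_noisy)
    D2 = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
    obj = lambda L: np.sum((D2 @ L) ** 2)
    jac = lambda L: 2.0 * D2.T @ (D2 @ L)
    bounds = [(g - gamma * tau, g + gamma * tau) for g in g_noisy]
    res = minimize(obj, x0=np.asarray(g_noisy), jac=jac,
                   method="L-BFGS-B", bounds=bounds)
    return res.x

# If the noise stays inside the band, a straight line (zero curvature)
# is feasible, so the solver should drive the objective to (near) zero.
x = np.linspace(-1.0, 1.0, 21)
rng = np.random.default_rng(1)
tau = 0.1
g_noisy = (2.0 * x + 1.0) + rng.uniform(-tau, tau, size=x.size)
L_hat = smooth_fit(g_noisy, tau, gamma=2.0)
```

This mirrors the qualitative behavior discussed below: for large $\gamma\tau$ the feasible band is wide enough for a line to fit, and the returned solution flattens accordingly.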
Note that (P) is a convex QP with linear constraints. The objective function can be verified to be a positive definite quadratic form in the spline coefficients⁶. Specifically, the objective measures the total curvature of a feasible cubic spline in $[t_1, t_2]$. Each of the constraints (4.2)-(4.3), along with the implicit continuity constraints of $L^{(p)}$, $p = 0, 1, 2$, at the interior points of $Q$, are linear equalities/inequalities in the coefficients of the piecewise cubic polynomials. (4.3) refers to the not-a-knot boundary conditions [24], which are also linear equalities in the spline coefficients. These conditions imply that $L'''$ is continuous⁷ at the knots $x_1, x_{n-1}$. Thus, (P) searches amongst the space of all not-a-knot cubic splines such that $L(x_i)$ lies within a $\pm\gamma\tau$ interval of $\hat{g}_i$, and returns the smoothest solution, i.e., the one with the least total curvature. The parameter $\gamma \ge 0$ controls the degree of smoothness of the solution. Clearly, $\gamma = 0$ implies interpolating the noisy samples $(\hat{g}_i)_{i=0}^{n}$. As $\gamma$ increases, the search interval $[\hat{g}_i - \gamma\tau,\, \hat{g}_i + \gamma\tau]$ becomes larger for all $i$, leading to smoother feasible cubic splines. The following theorem formally describes the estimation properties of (P) and is also our second main result.

Theorem 2. For $g \in C^4[t_1, t_2]$, let $L^* : [t_1, t_2] \to \mathbb{R}$ be a solution of (P) for some parameter $\gamma \ge 0$. We then have that
$$\|L^* - g\|_\infty \le \frac{118(1+\gamma)}{3}\,\tau + \frac{29}{64}\,h^4\,\|g^{(4)}\|_\infty. \qquad (4.4)$$

We show in the appendix that if $\int_{t_1}^{t_2} (L^{*\prime\prime}(x))^2\, dx > 0$, then $L^*$ is unique. Note that the error bound (4.4) is a sum of two terms. The first term is proportional to the external noise bound $\tau$, indicating that the solution is robust to noise. The second term is the error that would arise even if the perturbation were absent, i.e., $\tau = 0$. Intuitively, if $\gamma\tau$ is large enough, then we would expect the solution returned by (P) to be a line.
Indeed, a larger value of $\gamma\tau$ would imply a larger search interval in (4.2), which, if sufficiently large, would allow a line (which has zero curvature) to lie in the feasible region. More formally, we show in the appendix the sufficient conditions $\tau = \Omega\!\left(\frac{n^{1/2}\,\|g''\|_\infty}{\gamma - 1}\right)$, $\gamma > 1$, which, if satisfied, imply that the solution returned by (P) is a line. This indicates that if either $n$ is small or $g$ has small curvature, then moderately large values of $\tau$ and/or $\gamma$ will cause the solution returned by (P) to be a line. If an estimate of $\|g''\|_\infty$ is available, then one could, for instance, use the upper bound $1 + O(n^{1/2}\,\|g''\|_\infty/\tau)$ to restrict the range of values of $\gamma$ within which (P) is used.

Theorem 2 has the following corollary for the estimation of $C^4$ smooth $\phi'_l$ in the interval $[-1, 1]$. The proof simply involves replacing $g$ with $\phi'_l$, $n + 1$ with $2m_x + 1$, $h$ with $1/m_x$ and $\tau$ with $\frac{C\epsilon k B_2}{2\sqrt{m_v}}$. As the perturbation $\tau$ is directly proportional to the step size $\epsilon$, we show in the appendix that if additionally $\epsilon = \Omega\!\left(\frac{\sqrt{m_x m_v}\,\|\phi'''_l\|_\infty}{\gamma - 1}\right)$, $\gamma > 1$, holds, then the corresponding estimate $\tilde{\phi}'_l$ will be a line.

Corollary 1. Let (P) be employed for each $l \in S$ using the noisy samples $\{\hat{\phi}'_l(i/m_x)\}_{i=-m_x}^{m_x}$, and with step size $\epsilon$ satisfying $0 < \epsilon < \frac{D\sqrt{m_v}}{C k B_2}$. Denoting by $\tilde{\phi}'_l$ the corresponding solution returned by (P), we then have for any $\gamma \ge 0$ that
$$\|\tilde{\phi}'_l - \phi'_l\|_{L_\infty[-1,1]} \le \frac{59(1+\gamma)}{3}\,\frac{C\epsilon k B_2}{\sqrt{m_v}} + \frac{29}{64\, m_x^4}\,\|\phi_l^{(5)}\|_{L_\infty[-1,1]}. \qquad (4.5)$$

The final estimate. We now derive the final estimate $\phi_{\mathrm{est},l}$ of $\phi_l$ for each $l \in S$. Denote by $x_0(=-1) < x_1 < \dots < x_{2m_x - 1} < x_{2m_x}(=1)$ our equispaced set of points on $[-1, 1]$. Since $\tilde{\phi}'_l : [-1, 1] \to \mathbb{R}$ returned by (P) is a cubic spline, we have $\tilde{\phi}'_l(x) = \tilde{\phi}'_{l,i}(x)$ for $x \in [x_i, x_{i+1}]$, where $\tilde{\phi}'_{l,i}$ is a polynomial of degree at most 3. We then define $\phi_{\mathrm{est},l}(x) := \tilde{\phi}_{l,i}(x) + F_i$ for $x \in [x_i, x_{i+1}]$ and $i = 0, \dots, 2m_x - 1$. Here $\tilde{\phi}_{l,i}$ is an antiderivative of $\tilde{\phi}'_{l,i}$ and the $F_i$'s are constants of integration. Denoting $F_0 = F$, we have that $\phi_{\mathrm{est},l}$ is continuous at $x_1, \dots, x_{2m_x - 1}$ for
$$F_i = \tilde{\phi}_{l,0}(x_1) + \sum_{j=1}^{i-1}\bigl(\tilde{\phi}_{l,j}(x_{j+1}) - \tilde{\phi}_{l,j}(x_j)\bigr) - \tilde{\phi}_{l,i}(x_i) + F = F'_i + F; \quad 1 \le i \le 2m_x - 1.$$
Hence, by denoting $\psi_{l,i}(\cdot) := \tilde{\phi}_{l,i}(\cdot) + F'_i$, we obtain $\phi_{\mathrm{est},l}(\cdot) = \psi_l(\cdot) + F$, where $\psi_l(x) = \psi_{l,i}(x)$ for $x \in [x_i, x_{i+1}]$. Now, on account of Assumption 2, we require $\phi_{\mathrm{est},l}$ to also be centered, implying $F = -\frac{1}{2}\int_{-1}^{1} \psi_l(x)\,dx$. Hence we output our final estimate of $\phi_l$ as
$$\phi_{\mathrm{est},l}(x) := \psi_l(x) - \frac{1}{2}\int_{-1}^{1} \psi_l(x)\,dx; \quad x \in [-1, 1]. \qquad (4.6)$$
Since $\phi_{\mathrm{est},l}$ is by construction continuous in $[-1, 1]$, is a piecewise combination of polynomials of degree at most 4, and since $\phi'_{\mathrm{est},l}$ is a cubic spline, $\phi_{\mathrm{est},l}$ is a spline function of order 4. Lastly, we show in the proof of Theorem 1 that $\|\phi_{\mathrm{est},l} - \phi_l\|_{L_\infty[-1,1]} \le 3\,\|\tilde{\phi}'_l - \phi'_l\|_{L_\infty[-1,1]}$ holds. Using Corollary 1, this provides us with the error bounds stated in Theorem 1.

⁶Shown in the appendix. ⁷$f(x^-) = \lim_{h \to 0^-} f(x+h)$ and $f(x^+) = \lim_{h \to 0^+} f(x+h)$ denote left and right hand limits, respectively.

5 Impact of noise on performance of our algorithm

Our third main contribution involves analyzing the more realistic scenario in which the point queries are corrupted with additive external noise $z'$. Thus, querying $f$ in Step 2 of Algorithm 1 results in the noisy values $f(\xi_i) + z'_i$ and $f(\xi_i + \epsilon v_j) + z'_{i,j}$, respectively. This changes (3.5) to the noisy linear system $\mathbf{y}_i = \mathbf{V}\mathbf{x}_i + \mathbf{n}_i + \mathbf{z}_i$, where $z_{i,j} = (z'_{i,j} - z'_i)/\epsilon$ for $i = -m_x, \dots, m_x$ and $j = 1, \dots, m_v$. Notice that the external noise gets scaled by $1/\epsilon$, while $|n_{i,j}|$ scales linearly with $\epsilon$.

Arbitrary bounded noise. In this model the external noise is arbitrary but bounded, so that $|z'_i|, |z'_{i,j}| < \kappa$ for all $i, j$. It can be verified, along the lines of the proof of Lemma 1, that $\|\mathbf{n}_i + \mathbf{z}_i\|_2 \le \sqrt{m_v}\left(\frac{2\kappa}{\epsilon} + \frac{\epsilon k B_2}{2 m_v}\right)$. Observe that, unlike the noiseless setting, $\epsilon$ cannot be made arbitrarily close to 0, as this would blow up the impact of the external noise.
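The piecewise-integration-and-centering step has a simple numerical counterpart. The sketch below (an illustration on a grid, not the paper's exact spline arithmetic) builds a trapezoidal antiderivative of an estimated derivative and then subtracts half its integral so that the result is centered over $[-1, 1]$:

```python
import numpy as np

def centered_antiderivative(x, dphi):
    """Trapezoidal antiderivative of the estimated derivative samples dphi
    on the grid x, shifted by F = -(1/2) * integral so that the result is
    centered over [-1, 1] (mirroring the centering step in the text)."""
    incr = 0.5 * (dphi[1:] + dphi[:-1]) * np.diff(x)   # trapezoid increments
    psi = np.concatenate([[0.0], np.cumsum(incr)])     # psi(x_i) = int_{-1}^{x_i} dphi
    return psi - 0.5 * np.trapz(psi, x)

# Example: derivative 2x integrates (from -1) to x^2 - 1; centering over
# [-1, 1] then yields x^2 - 1/3, which has zero mean on the interval.
x = np.linspace(-1.0, 1.0, 2001)
phi_est = centered_antiderivative(x, 2.0 * x)
```

The centering constant is computed from the antiderivative itself, exactly as in the construction of the final estimate above.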
The following theorem shows that if $\kappa$ is small relative to $D^2 < |\phi'_l(x)|^2$, $\forall x \in I_l$, $l \in S$, then⁸ there exists an interval for choosing $\epsilon$ within which Algorithm 1 recovers the active set $S$ exactly. This condition has the natural interpretation that if the signal-to-'external noise' ratio in $I_l$ is sufficiently large, then $S$ can be detected exactly.

Theorem 3. There exist constants $C, C_1 > 0$ such that if $\kappa < D^2/(16 C^2 k B_2)$, $m_x \ge 1/\delta$, and $m_v \ge C_1 k \log d$ hold, then for any $\epsilon \in \frac{D\sqrt{m_v}}{2 C k B_2}\,[1 - A,\, 1 + A]$, where $A := \sqrt{1 - (16 C^2 k B_2 \kappa)/D^2}$, and $\tau = \frac{2\sqrt{m_v}\,\kappa}{\epsilon} + \frac{C\epsilon k B_2}{2\sqrt{m_v}}$, we have in Algorithm 1, with high probability, that $\hat{S} = S$, and for any $\gamma \ge 0$, for each $l \in S$:
$$\|\phi_{\mathrm{est},l} - \phi_l\|_{L_\infty[-1,1]} \le 59(1+\gamma)\left[\frac{4 C\sqrt{m_v}\,\kappa}{\epsilon} + \frac{C \epsilon k B_2}{\sqrt{m_v}}\right] + \frac{87}{64\, m_x^4}\,\|\phi_l^{(5)}\|_{L_\infty[-1,1]}. \qquad (5.1)$$

Stochastic noise. In this model the external noise is assumed to be i.i.d. Gaussian, so that $z'_i, z'_{i,j} \sim \mathcal{N}(0, \sigma^2)$ i.i.d. for all $i, j$. In this setting we consider resampling $f$ at each query point $N$ times and then averaging the noisy samples, in order to reduce $\sigma$. Given this, we now have $z'_i, z'_{i,j} \sim \mathcal{N}(0, \sigma^2/N)$ i.i.d. for all $i, j$. Using standard tail bounds for Gaussians, we can show that for any $\kappa > 0$, if $N$ is chosen large enough, then $|z'_i - z'_{i,j}| \le 2\kappa$ for all $i, j$ with high probability. Hence the external noise would be bounded with high probability, and the analysis for Theorem 3 can be used in a straightforward manner. Of course, an advantage that we have in this setting is that $\kappa$ can be chosen to be arbitrarily close to zero by choosing a correspondingly large value of $N$. We state all this formally in the form of the following theorem.

Theorem 4.
There exist constants $C, C_1 > 0$ such that for $\kappa < D^2/(16 C^2 k B_2)$, $m_x \ge 1/\delta$, and $m_v \ge C_1 k \log d$, if we re-sample each query in Step 2 of Algorithm 1
$$N > \frac{\sigma^2}{\kappa^2}\,\log\!\left(\frac{\sqrt{2}\,\sigma\,|\mathcal{X}|\,|\mathcal{V}|}{\kappa\, p}\right)$$
times for $0 < p < 1$ and average the values, then for any $\epsilon \in \frac{D\sqrt{m_v}}{2 C k B_2}\,[1 - A,\, 1 + A]$, where $A := \sqrt{1 - (16 C^2 k B_2 \kappa)/D^2}$, and $\tau = \frac{2\sqrt{m_v}\,\kappa}{\epsilon} + \frac{C\epsilon k B_2}{2\sqrt{m_v}}$, we have in Algorithm 1, with probability at least $1 - p - o(1)$, that $\hat{S} = S$, and for any $\gamma \ge 0$, for each $l \in S$:
$$\|\phi_{\mathrm{est},l} - \phi_l\|_{L_\infty[-1,1]} \le 59(1+\gamma)\left[\frac{4 C\sqrt{m_v}\,\kappa}{\epsilon} + \frac{C \epsilon k B_2}{\sqrt{m_v}}\right] + \frac{87}{64\, m_x^4}\,\|\phi_l^{(5)}\|_{L_\infty[-1,1]}. \qquad (5.2)$$

⁸$I_l$ is the "critical" interval defined in Assumption 3 for detecting $l \in S$.

Note that we now query $f$ a total of $N |\mathcal{X}| (|\mathcal{V}| + 1)$ times. Also, $|\mathcal{X}| = 2m_x + 1 = \Theta(1)$ and $\kappa = O(k^{-1})$, as $D, C, B_2, \delta$ are constants. Hence the choice $|\mathcal{V}| = O(k \log d)$ gives us $N = O(k^2 \log(p^{-1} k^2 \log d))$ and leads to an overall query complexity of $O(k^3 \log d\, \log(p^{-1} k^2 \log d))$ when the samples are corrupted with additive Gaussian noise. Choosing $p = O(d^{-c})$ for any constant $c > 0$ gives us a sample complexity of $O(k^3 (\log d)^2)$ and ensures that the result holds with high probability. The $o(1)$ term goes to zero exponentially fast as $d \to \infty$.

Simulation results. We now provide simulation results on synthetic data to support our theoretical findings. We consider the noisy setting, with the point queries corrupted by Gaussian noise. For $d = 1000$, $k = 4$ and $S = \{2, 105, 424, 782\}$, consider $f : \mathbb{R}^d \to \mathbb{R}$ where $f = \phi_2(x_2) + \phi_{105}(x_{105}) + \phi_{424}(x_{424}) + \phi_{782}(x_{782})$ with $\phi_2(x) = \sin(\pi x)$, $\phi_{105}(x) = \exp(-2x)$, $\phi_{424}(x) = (1/3)\cos^3(\pi x) + 0.8 x^2$ and $\phi_{782}(x) = 0.5 x^4 - x^2 + 0.8 x$. We choose $\delta = 0.3$, $D = 0.2$, which can be verified to be valid parameters for the above $\phi_l$'s. Furthermore, we choose $m_x = \lceil 2/\delta \rceil = 7$ and $m_v = \lceil 2 k \log d \rceil = 56$ to satisfy the conditions of Theorem 4. Next, we choose the constants $C = 0.2$, $B_2 = 35$ and $\kappa = 0.95\,\frac{D^2}{16 C^2 k B_2} = 4.24 \times 10^{-4}$, as required by Theorem 4. For the choice $\epsilon = \frac{D\sqrt{m_v}}{2 C k B_2} = 0.0267$, we then query $f$ at $(2m_x + 1)(m_v + 1) = 855$ points.
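The resample-and-average step in the stochastic noise model is straightforward to sketch: averaging $N$ i.i.d. $\mathcal{N}(0, \sigma^2)$ draws reduces the effective noise standard deviation to $\sigma/\sqrt{N}$ (the function and values below are illustrative, not those of the experiment):

```python
import numpy as np

def averaged_query(f, x, sigma, N, rng):
    """Query f(x) N times under i.i.d. N(0, sigma^2) additive noise and
    average the results; the effective noise is then N(0, sigma^2 / N)."""
    return float(np.mean(f(x) + sigma * rng.standard_normal(N)))

rng = np.random.default_rng(2)
f = lambda x: np.sin(np.pi * x)          # hypothetical univariate component
val = averaged_query(f, 0.25, sigma=0.01, N=10_000, rng=rng)
```

With $N = 10{,}000$ the standard deviation of the averaged value drops from $10^{-2}$ to $10^{-4}$, which is the mechanism that lets $\kappa$ be driven arbitrarily close to zero in Theorem 4.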
The function values are corrupted with Gaussian noise $\mathcal{N}(0, \sigma^2/N)$ for $\sigma = 0.01$ and $N = 100$. This is equivalent to resampling and averaging the point queries $N$ times. Importantly, the sufficient condition on $N$ stated in Theorem 4 is $\lceil \frac{\sigma^2}{\kappa^2}\log(\frac{\sqrt{2}\,\sigma |\mathcal{X}| |\mathcal{V}|}{\kappa p}) \rceil = 6974$ for $p = 0.1$; thus we consider a significantly undersampled regime. Lastly, we select the threshold $\tau = \frac{2\sqrt{m_v}\,\kappa}{\epsilon} + \frac{C\epsilon k B_2}{2\sqrt{m_v}} = 0.2875$ as stated by Theorem 4, and employ Algorithm 1 for different values of the smoothing parameter $\gamma$.

Figure 2: Estimates $\phi_{\mathrm{est},l}$ of $\phi_l$ (black) for $\gamma = 0.3$ (red), $\gamma = 1$ (blue) and $\gamma = 5$ (green). Panels (a)-(d) show the estimates of $\phi_2$, $\phi_{105}$, $\phi_{424}$ and $\phi_{782}$, respectively.

The results are shown in Figure 2. Over 10 independent runs of the algorithm, we observed that $S$ was recovered exactly each time. Furthermore, we see from Figure 2 that the recovery is quite accurate for $\gamma = 0.3$. For $\gamma = 1$ we notice that the search interval $\gamma\tau = 0.2875$ becomes large enough to cause the estimates $\phi_{\mathrm{est},424}, \phi_{\mathrm{est},782}$ to become relatively smoother. For $\gamma = 5$, the search interval $\gamma\tau = 1.4375$ becomes wide enough for a line to fit in the feasible region for $\phi'_{424}, \phi'_{782}$; this results in $\phi_{\mathrm{est},424}, \phi_{\mathrm{est},782}$ being quadratic functions. In the case of $\phi'_2, \phi'_{105}$, the search interval is not sufficiently wide for a line to lie in the feasible region, even for $\gamma = 5$; however, the estimates $\phi_{\mathrm{est},2}, \phi_{\mathrm{est},105}$ become relatively smoother, as expected.

6 Conclusion

We proposed an efficient sampling scheme for learning SPAMs. In particular, we showed that with only a few queries, we can derive uniform approximations to each underlying univariate function of the SPAM.
A crucial component of our approach is a novel convex QP for robust estimation of univariate functions via cubic splines, from samples corrupted with arbitrary bounded noise. Lastly, we showed how our algorithm can handle noisy point queries for both (i) arbitrary bounded and (ii) i.i.d. Gaussian noise models. An important direction for future work would be to determine the optimality of our sampling bounds by deriving corresponding lower bounds on the sample complexity. Acknowledgments. This research was supported in part by SNSF grant 200021 137528 and a Microsoft Research Faculty Fellowship. 8 References [1] Th. Muller-Gronbach and K. Ritter. Minimal errors for strong and weak approximation of stochastic differential equations. Monte Carlo and Quasi-Monte Carlo Methods, pages 53–82, 2008. [2] M.H. Maathuis, M. Kalisch, and P. B¨uhlmann. Estimating high-dimensional intervention effects from observational data. The Annals of Statistics, 37(6A):3133–3164, 2009. [3] M.J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. Information Theory, IEEE Transactions on, 55(12):5728–5741, 2009. [4] J.F. Traub, G.W. Wasilkowski, and H. Wozniakowski. Information-Based Complexity. Academic Press, New York, 1988. [5] R. DeVore, G. Petrova, and P. Wojtaszczyk. Approximation of functions of few variables in high dimensions. Constr. Approx., 33:125–143, 2011. [6] A. Cohen, I. Daubechies, R.A. DeVore, G. Kerkyacharian, and D. Picard. Capturing ridge functions in high dimensions from point queries. Constr. Approx., pages 1–19, 2011. [7] H. Tyagi and V. Cevher. Active learning of multi-index function models. Advances in Neural Information Processing Systems 25, pages 1475–1483, 2012. [8] M. Fornasier, K. Schnass, and J. Vyb´ıral. Learning functions of few arbitrary linear parameters in high dimensions. Foundations of Computational Mathematics, 12(2):229–262, 2012. [9] Y. Lin and H.H. Zhang. 
Component selection and smoothing in multivariate nonparametric regression. The Annals of Statistics, 34(5):2272–2297, 2006. [10] M. Yuan. Nonnegative garrote component selection in functional anova models. In AISTATS, volume 2, pages 660–666, 2007. [11] G. Raskutti, M.J. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. J. Mach. Learn. Res., 13(1):389–427, 2012. [12] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines. In COLT, pages 229–238, 2008. [13] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5):1009–1030, 2009. [14] L. Meier, S. Van De Geer, and P. B¨uhlmann. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779–3821, 2009. [15] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. The Annals of Statistics, 38(6):3660–3695, 2010. [16] E.J. Cand`es, J.K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006. [17] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289– 1306, 2006. [18] P. Wojtaszczyk. ℓ1 minimization with noisy data. SIAM Journal on Numerical Analysis, 50(2):458–467, 2012. [19] J.H. Ahlberg, E.N. Nilson, and J.L. Walsh. The theory of splines and their applications. Academic Press (New York), 1967. [20] I.J. Schoenberg. Spline functions and the problem of graduation. Proceedings of the National Academy of Sciences, 52(4):947–950, 1964. [21] C.M. Reinsch. Smoothing by spline functions. Numer. Math, 10:177–183, 1967. [22] G. Wahba. Smoothing noisy data with spline functions. Numerische Mathematik, 24(5):383– 393, 1975. [23] P. Craven and G. Wahba. Smoothing noisy data with spline functions. Numerische Mathematik, 31(4):377–403, 1978. [24] C. de Boor. 
A practical guide to splines. Springer Verlag (New York), 1978. [25] C.A. Hall and W.W. Meyer. Optimal error bounds for cubic spline interpolation. Journal of Approximation Theory, 16(2):105 – 122, 1976. 9
A framework for studying synaptic plasticity with neural spike train data Scott W. Linderman Harvard University Cambridge, MA 02138 swl@seas.harvard.edu Christopher H. Stock Harvard College Cambridge, MA 02138 cstock@post.harvard.edu Ryan P. Adams Harvard University Cambridge, MA 02138 rpa@seas.harvard.edu Abstract Learning and memory in the brain are implemented by complex, time-varying changes in neural circuitry. The computational rules according to which synaptic weights change over time are the subject of much research, and are not precisely understood. Until recently, limitations in experimental methods have made it challenging to test hypotheses about synaptic plasticity on a large scale. However, as such data become available and these barriers are lifted, it becomes necessary to develop analysis techniques to validate plasticity models. Here, we present a highly extensible framework for modeling arbitrary synaptic plasticity rules on spike train data in populations of interconnected neurons. We treat synaptic weights as a (potentially nonlinear) dynamical system embedded in a fullyBayesian generalized linear model (GLM). In addition, we provide an algorithm for inferring synaptic weight trajectories alongside the parameters of the GLM and of the learning rules. Using this method, we perform model comparison of two proposed variants of the well-known spike-timing-dependent plasticity (STDP) rule, where nonlinear effects play a substantial role. On synthetic data generated from the biophysical simulator NEURON, we show that we can recover the weight trajectories, the pattern of connectivity, and the underlying learning rules. 1 Introduction Synaptic plasticity is believed to be the fundamental building block of learning and memory in the brain. Its study is of crucial importance to understanding the activity and function of neural circuits. 
With innovations in neural recording technology providing access to the simultaneous activity of increasingly large populations of neurons, statistical models are promising tools for formulating and testing hypotheses about the dynamics of synaptic connectivity. Advances in optical techniques [1, 2], for example, have made it possible to simultaneously record from and stimulate large populations of synaptically connected neurons. Armed with statistical tools capable of inferring time-varying synaptic connectivity, neuroscientists could test competing models of synaptic plasticity, discover new learning rules at the monosynaptic and network level, investigate the effects of disease on synaptic plasticity, and potentially design stimuli to modify neural networks. Despite the popularity of GLMs for spike data, relatively little work has attempted to model the time-varying nature of neural interactions. Here we model interaction weights as a dynamical system governed by parametric synaptic plasticity rules. To perform inference in this model, we use particle Markov chain Monte Carlo (pMCMC) [3], a recently developed inference technique for complex time series. We use this new modeling framework to examine the problem of using recorded data to distinguish between proposed variants of spike-timing-dependent plasticity (STDP) learning rules.

Figure 1: A simple network of four sparsely connected neurons whose synaptic weights are changing over time. Here, the neurons have inhibitory self-connections to mimic refractory effects, and are connected via a chain of excitatory synapses, as indicated by the nonzero entries $A_{1\to 2}$, $A_{2\to 3}$, and $A_{3\to 4}$. The corresponding weights of these synapses are strengthening over time (darker entries in $W$), leading to larger impulse responses in the firing rates and a greater number of induced post-synaptic spikes (black dots), as shown below.
2 Related Work

The GLM is a probabilistic model that considers spike trains to be realizations from a point process with conditional rate $\lambda(t)$ [4, 5]. From a biophysical perspective, we interpret this rate as a nonlinear function of the cell's membrane potential. When the membrane potential exceeds the spiking threshold potential of the cell, $\lambda(t)$ rises to reflect the rate of the cell's spiking, and when the membrane potential decreases below the spiking threshold, $\lambda(t)$ decays to zero. The membrane potential is modeled as the sum of three terms: a linear function of the stimulus $I(t)$, for example a low-pass filtered input current; the sum of excitatory and inhibitory PSPs induced by presynaptic neurons; and a constant background rate. In a network of $N$ neurons, let $S_n = \{s_{n,m}\}_{m=1}^{M_n} \subset [0, T]$ be the set of observed spike times for neuron $n$, where $T$ is the duration of the recording and $M_n$ is the number of spikes. The conditional firing rate of a neuron $n$ can be written
$$\lambda_n(t) = g\left(b_n + \int_0^t \mathbf{k}_n(t - \tau) \cdot I(\tau)\, d\tau + \sum_{n'=1}^{N} \sum_{m=1}^{M_{n'}} h_{n'\to n}(t - s_{n',m})\, \mathbb{I}[s_{n',m} < t]\right), \qquad (1)$$
where $b_n$ is the background rate, the second term is a convolution of the (potentially vector-valued) stimulus with a linear stimulus filter $\mathbf{k}_n(\Delta t)$, and the third is a linear summation of impulse responses $h_{n'\to n}(\Delta t)$, which preceding spikes on neuron $n'$ induce on the membrane potential of neuron $n$. Finally, the rectifying nonlinearity $g : \mathbb{R} \to \mathbb{R}_+$ converts this linear function of stimulus and spike history into a nonnegative rate. While the spiking threshold potential is not explicitly modeled in this framework, it is implicitly inferred in the amplitude of the impulse responses. From this semi-biophysical perspective it is clear that one shortcoming of the standard GLM is that it does not account for time-varying connectivity, despite decades of research showing that changes in synaptic weight occur over a variety of time scales and are the basis of many fundamental cognitive processes.
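A discrete-time sketch of the conditional rate in Equation 1 (omitting the stimulus term and using a hypothetical exponential impulse response; not the authors' implementation) looks as follows:

```python
import numpy as np

def conditional_rate(b, presyn, t_grid, g=np.exp):
    """lambda(t) = g(b + sum over presynaptic spikes of the impulse
    response evaluated at the elapsed time t - s); cf. Equation 1 with
    the stimulus convolution dropped for brevity."""
    lam = np.full(t_grid.shape, float(b))
    for spike_times, h in presyn:        # one (spike list, filter) per neuron
        for s in spike_times:
            dt = t_grid - s
            # only spikes preceding t contribute (the indicator in Eq. 1)
            lam += np.where(dt > 0.0, h(np.maximum(dt, 0.0)), 0.0)
    return g(lam)

# One presynaptic neuron spiking at t = 1 with a decaying exponential PSP.
h = lambda dt: 0.5 * np.exp(-dt)         # hypothetical impulse response
t = np.array([0.5, 2.0])
rate = conditional_rate(b=0.0, presyn=[([1.0], h)], t_grid=t)
```

Before the presynaptic spike the rate is just $g(b)$; after it, the rate is transiently elevated and then decays back, which is exactly the behavior the impulse-response term encodes.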
This absence is due, in part, to the fact that this direct biophysical interpretation is not warranted in most traditional experimental regimes, e.g., in multi-electrode array (MEA) recordings where electrodes are relatively far apart. However, as high resolution optical recordings grow in popularity, this assumption must be revisited; this is a central motivation for the present model. There have been a few efforts to incorporate dynamics into the GLM. Stevenson and Koerding [6] extended the GLM to take inter-spike intervals as covariates and formulated a generalized bilinear model for weights. Eldawlatly et al. [7] modeled the time-varying parameters of a GLM using a dynamic Bayesian network (DBN). However, neither of these approaches accommodates the breadth of synaptic plasticity rules present in the literature. For example, parametric STDP models with hard bounds on the synaptic weight are not congruent with the convex optimization techniques used by [6], nor are they naturally expressed in a DBN. Here we model time-varying synaptic weights as a potentially nonlinear dynamical system and perform inference using particle MCMC. Nonstationary, or time-varying, models of synaptic weights have also been studied outside the context of GLMs. For example, Petreska et al. [8] applied hidden switching linear dynamical systems models to neural recordings. This approach has many merits, especially in traditional MEA recordings where synaptic connections are less likely and nonlinear dynamics are not necessarily warranted. Outside the realm of computational neuroscience and spike train analysis, there exist a number of dynamic statistical models, such as West et al. [9], which explored dynamic generalized linear models.
However, the types of models we are interested in for studying synaptic plasticity are characterized by domain-specific transition models and sparsity structure, and until recently the tools for effectively performing inference in these models have been limited.

3 A Sparse Time-Varying Generalized Linear Model

In order to capture the time-varying nature of synaptic weights, we extend the standard GLM by first factoring the impulse responses in the firing rate of Equation 1 into a product of three terms:
$$h_{n'\to n}(\Delta t, t) \equiv A_{n'\to n}\, W_{n'\to n}(t)\, r_{n'\to n}(\Delta t). \qquad (2)$$
Here, $A_{n'\to n} \in \{0, 1\}$ is a binary random variable indicating the presence of a direct synapse from neuron $n'$ to neuron $n$, $W_{n'\to n}(t) : [0, T] \to \mathbb{R}$ is a nonstationary synaptic "weight" trajectory associated with the synapse, and $r_{n'\to n}(\Delta t)$ is a nonnegative, normalized impulse response, i.e., $\int_0^\infty r_{n'\to n}(\tau)\, d\tau = 1$. Requiring $r_{n'\to n}(\Delta t)$ to be normalized gives meaning to the synaptic weights: otherwise $W$ would only be defined up to a scaling factor. For simplicity, we assume $r(\Delta t)$ does not change over time; that is, only the amplitude and not the duration of the PSPs is time-varying. This restriction could be relaxed in future work.

As is often done in GLMs, we model the normalized impulse responses as a linear combination of basis functions. In order to enforce the normalization of $r(\cdot)$, however, we use a convex combination of normalized, nonnegative basis functions. That is,
$$r_{n'\to n}(\Delta t) \equiv \sum_{b=1}^{B} \beta_b^{(n'\to n)}\, r_b(\Delta t), \quad \text{where } \int_0^\infty r_b(\tau)\, d\tau = 1\ \ \forall b, \quad \text{and} \quad \sum_{b=1}^{B} \beta_b^{(n'\to n)} = 1\ \ \forall n, n'.$$
The same approach is used to model the stimulus filters $\mathbf{k}_n(\Delta t)$, but without the normalization and non-negativity constraints. The binary random variables $A_{n'\to n}$, which can be collected into an $N \times N$ binary matrix $A$, model the connectivity of the synaptic network. Similarly, the collection of weight trajectories $\{\{W_{n'\to n}(t)\}\}_{n',n}$, which we will collectively refer to as $W(t)$, model the time-varying synaptic weights.
This factorization is often called a spike-and-slab prior [10], and it allows us to separate our prior beliefs about the structure of the synaptic network from those about the evolution of synaptic weights. For example, in the most general case we might leverage a variety of random network models [11] as prior distributions for $A$, but here we limit ourselves to the simplest network model, the Erdős-Rényi model. Under this model, each $A_{n'\to n}$ is an independent, identically distributed Bernoulli random variable with sparsity parameter $\rho$.

Figure 1 illustrates how the adjacency matrix and the time-varying weights are integrated into the GLM. Here, a four-neuron network is connected via a chain of excitatory synapses, and the synapses strengthen over time due to an STDP rule. This is evidenced by the increasing amplitude of the impulse responses in the firing rates. With larger synaptic weights comes an increased probability of postsynaptic spikes, shown as black dots in the figure. In order to model the dynamics of the time-varying synaptic weights, we turn to a rich literature on synaptic plasticity and learning rules.

3.1 Learning rules for time-varying synaptic weights

Decades of research on synapses and learning rules have yielded a plethora of models for the evolution of synaptic weights [12]. In most cases, this evolution can be written as a dynamical system,
$$\frac{dW(t)}{dt} = \ell\bigl(W(t), \{s_{n,m} : s_{n,m} < t\}\bigr) + \epsilon(W(t), t),$$
where $\ell$ is a potentially nonlinear learning rule that determines how synaptic weights change as a function of previous spiking. This framework encompasses rate-based rules such as the Oja rule [13] and timing-based rules such as STDP and its variants. The additive noise $\epsilon(W(t), t)$ need not be Gaussian, and many models require truncated noise distributions. Following biological intuition, many common learning rules factor into a product of simpler functions.
For example, STDP (defined below) updates each synapse independently, such that $dW_{n'\to n}(t)/dt$ depends only on $W_{n'\to n}(t)$ and the presynaptic spike history $S_n^{<t} = \{s_{n,m} : s_{n,m} < t\}$. Biologically speaking, this means that plasticity is local to the synapse. More sophisticated rules allow dependencies among the columns of $W$. For example, the incoming weights to neuron $n$ may depend upon one another through normalization, as in the Oja rule [13], which scales synapse strength according to the total strength of incoming synapses.

Extensive research in the last fifteen years has identified the relative spike timing between the pre- and postsynaptic neurons as a key component of synaptic plasticity, among other factors such as mean firing rate and dendritic depolarization [14]. STDP is therefore one of the most prominent learning rules in the literature today, with a number of proposed variants based on cell type and biological plausibility. In the experiments to follow, we will make use of two of these proposed variants. First, consider the canonical STDP rule with a "double-exponential" function parameterized by $\tau_-$, $\tau_+$, $A_-$, and $A_+$ [15], in which the effect of a given pair of presynaptic and postsynaptic spikes on a weight may be written:
$$\ell(W_{n'\to n}(t), S_{n'}, S_n) = \mathbb{I}[t \in S_n]\, \ell_+(S_{n'}; A_+, \tau_+) - \mathbb{I}[t \in S_{n'}]\, \ell_-(S_n; A_-, \tau_-), \qquad (3)$$
$$\ell_+(S_{n'}; A_+, \tau_+) = \sum_{s_{n',m} \in S_{n'}^{<t}} A_+\, e^{-(t - s_{n',m})/\tau_+}, \qquad \ell_-(S_n; A_-, \tau_-) = \sum_{s_{n,m} \in S_n^{<t}} A_-\, e^{-(t - s_{n,m})/\tau_-}.$$
This rule states that weight changes only occur at the time of pre- or postsynaptic spikes, and that the magnitude of the change is a nonlinear function of interspike intervals. A slightly more complicated model, known as the multiplicative STDP rule, extends this by bounding the weights above and below by $W_{\max}$ and $W_{\min}$, respectively [16]. Then the magnitude of the weight update is scaled by the distance from the threshold:
$$\ell(W_{n'\to n}(t), S_{n'}, S_n) = \mathbb{I}[t \in S_n]\, \tilde{\ell}_+(S_{n'}; A_+, \tau_+)\,(W_{\max} - W_{n'\to n}(t)) - \mathbb{I}[t \in S_{n'}]\, \tilde{\ell}_-(S_n; A_-, \tau_-)\,(W_{n'\to n}(t) - W_{\min}). \qquad (4)$$
Here, by setting $\tilde{\ell}_\pm = \min(\ell_\pm, 1)$, we enforce that the synaptic weights always fall within $[W_{\min}, W_{\max}]$. With this rule, it often makes sense to set $W_{\min}$ to zero. Similarly, we can construct an additive, bounded model which is identical to the standard additive STDP model except that weights are thresholded at a minimum and maximum value. In this model, the weight never exceeds its set lower and upper bounds, but unlike the multiplicative STDP rule, the proposed weight update is independent of the current weight except at the boundaries. Likewise, whereas with the canonical STDP model it is sensible to use Gaussian noise for $\epsilon(t)$, in the bounded multiplicative model we use truncated Gaussian noise to respect the hard upper and lower bounds on the weights. Note that this noise is dependent upon the current weight $W_{n'\to n}(t)$.

The nonlinear nature of this rule, which arises from the multiplicative interactions among the parameters $\theta_\ell = \{A_+, \tau_+, A_-, \tau_-, W_{\min}, W_{\max}\}$, combined with the potentially non-Gaussian noise models, poses substantial challenges for inference. However, the computational cost of these detailed models is counterbalanced by dramatic expansions in the flexibility of the model and the incorporation of a priori knowledge of synaptic plasticity. These learning models can be interpreted as strong regularizers of models that would otherwise be highly underdetermined, as there are $N^2$ weight trajectories and only $N$ spike trains. In the next section we will leverage powerful new techniques for Bayesian inference in order to capitalize on these expressive models of synaptic plasticity.

4 Inference via particle MCMC

The traditional approach to inference in the standard GLM is penalized maximum likelihood estimation.
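The two bounded STDP variants can be sketched by replaying a merged pre/post spike train and applying the update at each spike. The parameter values below are illustrative, the negative exponents implement the usual decaying STDP window, and the additive noise term is omitted for clarity:

```python
import numpy as np

def stdp_final_weight(pre, post, w0, A_plus=0.1, A_minus=0.1,
                      tau_plus=0.02, tau_minus=0.02,
                      w_min=0.0, w_max=1.0, multiplicative=True):
    """Replay pre/post spikes in time order. At a postsynaptic spike the
    weight is potentiated by the exponentially decayed sum over earlier
    presynaptic spikes; at a presynaptic spike it is depressed
    analogously. The multiplicative variant scales each update by the
    distance to the corresponding bound (cf. Eq. 4)."""
    events = sorted([(t, +1) for t in post] + [(t, -1) for t in pre])
    w = float(w0)
    for t, kind in events:
        if kind == +1:   # postsynaptic spike -> potentiation
            drive = sum(A_plus * np.exp(-(t - s) / tau_plus) for s in pre if s < t)
            dw = min(drive, 1.0) * (w_max - w) if multiplicative else drive
        else:            # presynaptic spike -> depression
            drive = sum(A_minus * np.exp(-(t - s) / tau_minus) for s in post if s < t)
            dw = -min(drive, 1.0) * (w - w_min) if multiplicative else -drive
        w = float(np.clip(w + dw, w_min, w_max))
    return w

w_pot = stdp_final_weight(pre=[0.000], post=[0.005], w0=0.5)  # pre before post
w_dep = stdp_final_weight(pre=[0.005], post=[0.000], w0=0.5)  # post before pre
```

Pre-before-post pairings potentiate the synapse and post-before-pre pairings depress it, while the multiplicative scaling keeps the weight strictly inside $[W_{\min}, W_{\max}]$.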
The log likelihood of a single conditional Poisson process is well known to be

L(λ_n(t); {S_n}_{n=1}^N, I(t)) = −∫_0^T λ_n(t) dt + Σ_{m=1}^{M_n} log λ_n(s_{n,m}),   (5)

and the log likelihood of a population of non-interacting spike trains is simply the sum of the log likelihoods of the individual neurons. The likelihood depends upon the parameters θ_GLM = {b_n, k_n, {h_{n'→n}(Δt)}_{n'=1}^N} through the definition of the rate function given in Equation 1. For some link functions g, the log likelihood is a concave function of θ_GLM, and the MLE can be found using efficient optimization techniques. Certain dynamical models, namely linear Gaussian latent state space models, also support efficient inference via point process filtering techniques [17]. Due to the potentially nonlinear and non-Gaussian nature of STDP, these existing techniques are not applicable here. Instead we use particle MCMC [3], a powerful technique for inference in time series. Particle MCMC samples the posterior distribution over the weight trajectories W(t), the adjacency matrix A, and the model parameters θ_GLM and θ_ℓ, given the observed spike trains, by combining particle filtering with MCMC. We represent the conditional distribution over weight trajectories with a set of discrete particles. Let the instantaneous weights at (discretized) time t be represented by a set of P particles {W_t^{(p)}}_{p=1}^P. The particles live in R^{N×N} and are assigned normalized particle weights¹ ω_p, which approximate the true distribution via Pr(W_t) ≈ Σ_{p=1}^P ω_p δ_{W_t^{(p)}}(W_t). Particle filtering infers a distribution over weight trajectories by iteratively propagating forward in time and reweighting according to how well the new samples explain the data. For each particle W_t^{(p)} at time t, we propagate forward one time step using the learning rule to obtain a particle W_{t+1}^{(p)}. Then, using Equation 5, we evaluate the log likelihood of the spikes that occurred in the window [t, t + 1) and update the particle weights.
Since some of these particles may have very low weights, after each step we resample the particles. After the T-th time step we are left with a set of weight trajectories {(W_0^{(p)}, ..., W_T^{(p)})}_{p=1}^P, each associated with a particle weight ω_p. Particle filtering only yields a distribution over weight trajectories, and it implicitly assumes that the other parameters have been specified. Particle MCMC provides a broader inference algorithm for both the weights and the other parameters. The idea is to interleave conditional particle filtering steps, which sample the weight trajectory given the current model parameters and the previously sampled weights, with traditional Gibbs updates, which sample the model parameters given the current weight trajectory. This combination leaves the stationary distribution of the Markov chain invariant and allows joint inference over weights and parameters. Gibbs updates for the remaining model parameters, including those of the learning rule, are described in the supplementary material.

Collapsed sampling of A and W(t): In addition to sampling weight trajectories and model parameters, particle MCMC approximates the marginal likelihood of entries in the adjacency matrix A, integrating out the corresponding weight trajectory. Up to a constant, we have

Pr(A_{n'→n} | S, θ_ℓ, θ_GLM, A_{¬n'→n}, W_{¬n'→n}(t))
  = ∫_0^T ∫_{−∞}^{∞} p(A_{n'→n}, W_{n'→n}(t) | S, θ_ℓ, θ_GLM, A_{¬n'→n}, W_{¬n'→n}(t)) dW_{n'→n}(t) dt
  ≈ [ Π_{t=1}^T (1/P) Σ_{p=1}^P ω̂_t^{(p)} ] Pr(A_{n'→n}),

where ¬n'→n indicates all entries except n'→n, and the particle weights ω̂_t^{(p)} are obtained by running a particle filter for each assignment of A_{n'→n}. This allows us to jointly sample A_{n'→n} and W_{n'→n}(t) by first sampling A_{n'→n} and then W_{n'→n}(t) given A_{n'→n}. By marginalizing out the weight trajectory, our algorithm is able to explore the space of adjacency matrices more efficiently. We capitalize on a number of other opportunities for computational efficiency as well.
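The propagate-reweight-resample loop described above can be sketched as a generic bootstrap particle filter. This is a simplified stand-in for the paper's conditional particle filter with ancestor sampling: the `propagate` and `log_likelihood` callbacks, and the multinomial resampling scheme, are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def bootstrap_particle_filter(propagate, log_likelihood, W0, T, rng=None):
    """Bootstrap particle filter over synaptic weight trajectories (sketch).

    propagate(w, t, rng) -> w'    : one step of the stochastic learning rule
    log_likelihood(w, t) -> float : log p(spikes in [t, t+1) | weights w)
    W0                            : array of P initial particles
    Returns (trajectories of shape (T+1, P, ...), an estimate of the log
    marginal likelihood of the spikes).
    """
    rng = np.random.default_rng(rng)
    particles = np.array(W0, dtype=float)
    traj = [particles.copy()]
    log_Z = 0.0
    for t in range(T):
        # 1. Propagate each particle one step with the learning rule.
        particles = np.stack([propagate(w, t, rng) for w in particles])
        # 2. Reweight by how well each particle explains the spikes.
        log_w = np.array([log_likelihood(w, t) for w in particles])
        m = log_w.max()
        w = np.exp(log_w - m)
        log_Z += m + np.log(w.mean())   # running marginal-likelihood estimate
        # 3. Multinomial resampling to fight weight degeneracy.
        idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
        particles = particles[idx]
        traj = [a[idx] for a in traj]   # resample whole ancestries
        traj.append(particles.copy())
    return np.stack(traj), log_Z
```

A particle Gibbs sampler would alternate such filtering sweeps (conditioned on a retained trajectory) with parameter updates; the running product of mean weights is what the collapsed sampling of A above uses as its marginal-likelihood estimate.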
For example, if the learning rule factors into independent updates for each W_{n'→n}(t), then we can update each synapse's weight trajectory separately and reduce the particles to one-dimensional objects. In our implementation, we also make use of a pMCMC variant with ancestor sampling [18] that significantly improves convergence. Any distribution may be used to propagate the particles forward; using the learning rule is simply the easiest to implement and understand. We have omitted a number of details in this description; for a thorough overview of particle MCMC, the reader should consult [3, 18].

¹Note that the particle weights are not the same as the synaptic weights.

Figure 2: We fit time-varying weight trajectories to spike trains simulated from a GLM with two neurons undergoing no plasticity (top row), an additive, unbounded STDP rule (middle row), and a multiplicative, saturating STDP rule (bottom row). We fit the first 50 seconds with four different models: MAP estimation for an L1-regularized GLM, and fully Bayesian inference for static, additive STDP, and multiplicative STDP learning rules. In all cases, the correct model yields the highest predictive log likelihood on the final 10 seconds of the dataset.

5 Evaluation

We evaluated our technique with two types of synthetic data. First, we generated data from our model, with known ground truth. Second, we used the well-known simulator NEURON to simulate driven, interconnected populations of neurons undergoing synaptic plasticity. For comparison, we show how the sparse, time-varying GLM compares to a standard GLM with a group LASSO prior on the impulse response coefficients, for which we can perform efficient MAP estimation.

5.1 GLM-based simulations

As a proof of concept, we study a single synapse undergoing a variety of synaptic plasticity rules and generating spikes according to a GLM. The neurons also have inhibitory self-connections to mimic refractory effects.
We tested three synaptic plasticity mechanisms: a static synapse (i.e., no plasticity), the unbounded, additive STDP rule of Equation 3, and the bounded, multiplicative STDP rule of Equation 4. For each learning rule, we simulated 60 seconds of spiking activity at 1 kHz temporal resolution, updating the synaptic weights every 1 s. The baseline firing rates were normally distributed with mean 20 Hz and standard deviation 5 Hz. Correlations in the spike timing led to changes in the synaptic weight trajectories that we could detect with our inference algorithm. Figure 2 shows the true and inferred weight trajectories, the inferred learning rules, and the predictive log likelihood on ten seconds of held-out data for each of the three ground-truth learning rules.

Figure 3: Evaluation of synapse detection on a 60 second spike train from a network of 10 neurons undergoing synaptic plasticity with a saturating, additive STDP rule, simulated with NEURON. The sparse, time-varying GLM with an additive rule outperforms the fully Bayesian model with static weights, MAP estimation with L1 regularization, and simple thresholding of the cross-correlation matrix.

When the underlying weights are static (top row of Figure 2), MAP estimation and static learning rules do an excellent job of detecting the true weight, whereas the two time-varying models must compensate, either by setting the learning rule as close to zero as possible, as the additive STDP model does, or by setting the threshold such that the weight trajectory is nearly constant, as the multiplicative model does. Note that the scales of the additive and multiplicative learning rules are not directly comparable, since the weight updates in the multiplicative case are modulated by how close the weight is to the threshold. When the underlying weights vary (middle and bottom rows), the static models must compromise with an intermediate weight.
Though the STDP models are both able to capture the qualitative trends, the correct model yields a better fit and better predictive power in both cases. In terms of computational cost, our approach is clearly more expensive than alternative approaches based on MAP estimation or MLE. We developed a parallel implementation of our algorithm to capitalize on conditional independencies across neurons, i.e. for the additive and multiplicative STDP rules we can sample the weights W ∗→n independently of the weights W ∗→n′. On the two neuron examples we achieve upward of 2 iterations per second (sampling all variables in the model), and we run our model for 1000 iterations. Convergence of the Markov chain is assessed by analyzing the log posterior of the samples, and typically stabilizes after a few hundred iterations. As we scale to networks of ten neurons, our running time quickly increases to roughly 20 seconds per iteration, which is mostly dominated by slice sampling the learning rule parameters. In order to evaluate the conditional probability of a learning rule parameter, we need to sample the weight trajectories for each synapse. Though these running times are nontrivial, they are not prohibitive for networks that are realistically obtainable for optical study of synaptic plasticity. 5.2 Biophysical simulations Using the biophysical simulator NEURON, we performed two experiments. First, we considered a network of 10 sparsely interconnected neurons (28 excitatory synapses) undergoing synaptic plasticity according to an additive STDP rule. Each neuron was driven independently by a hidden population of 13 excitatory neurons and 5 inhibitory neurons connected to the visible neuron with probability 0.8 and fixed synaptic weights averaging 3.0 mV. The visible synapses were initialized close to 6.0 mV and allowed to vary between 0.0 and 10.5 mV. The synaptic delay was fixed at 1.0 ms for all synapses. This yielded a mean firing rate of 10 Hz among visible neurons. 
Synaptic weights were recorded every 1.0 ms. These parameters were chosen to demonstrate interesting variations in synaptic strength; as we transition to biological applications it will be necessary to evaluate the sensitivity of the model to these parameters and the appropriate regimes for the circuits under study. We began by investigating whether the model is able to accurately identify synapses from spikes, or whether it is confounded by spurious correlations. Figure 3 shows that our approach identifies the 28 excitatory synapses in our network, as measured by ROC curve (additive STDP AUC = 0.99), and outperforms static models and cross-correlation. In the sparse, time-varying GLM, the probability of an edge is measured by the mean of A under the posterior, whereas in the standard GLM with MAP estimation, the likelihood of an edge is measured by the area under the impulse response.

Figure 4: Analogously to Figure 2, a sparse, time-varying GLM can capture the weight trajectories and learning rules from spike trains simulated by NEURON. Here an excitatory synapse undergoes additive STDP with a hard upper bound on the excitatory postsynaptic current. The weight trajectory inferred by our model with the same parametric form of the learning rule matches almost exactly, whereas the static models must compromise in order to capture the early and late stages of the data, and the multiplicative weight exhibits qualitatively different trajectories. Nevertheless, in terms of predictive log likelihood, we do not have enough information to correctly determine the underlying learning rule. Potential solutions are discussed in the main text.

Looking into the synapses that are detected by the time-varying model and missed by the static model, we find an interesting pattern. The improved performance comes from synapses that decay in strength over the recording period. Three examples of these synaptic weight trajectories are shown in the right panel of Figure 3.
The time-varying model assigns over 90% probability to each of the three synapses, whereas the static model infers less than a 40% probability for each synapse. Finally, we investigated our model's ability to distinguish various learning rules by looking at a single synapse, analogous to the experiment performed on data from the GLM. Figure 4 shows the results for a weight trajectory of a synapse under additive STDP with a strict threshold on the excitatory postsynaptic current. The time-varying GLM with an additive model captures the same trajectory, as shown in the left panel. The GLM weights have been linearly rescaled to align with the true weights, which are measured in millivolts. Furthermore, the inferred additive STDP learning rule, in particular the time constants and relative amplitudes, perfectly matches the true learning rule. These results demonstrate that a sparse, time-varying GLM is capable of discovering synaptic weight trajectories, but in terms of predictive likelihood we still have insufficient evidence to distinguish additive and multiplicative STDP rules. By the end of the training period, the weights have saturated at a level that almost surely induces postsynaptic spikes. At this point, we cannot distinguish two learning rules that have both reached saturation. This motivates further studies that leverage this probabilistic model in an optimal experimental design framework, similar to recent work by Shababo et al. [19], in order to conclusively test hypotheses about synaptic plasticity.

6 Discussion

Motivated by the advent of optical tools for interrogating networks of synaptically connected neurons, which make it possible to study synaptic plasticity in novel ways, we have extended the GLM to model a sparse, time-varying synaptic network, and we have introduced a fully Bayesian inference algorithm built upon particle MCMC.
Our initial results suggest that it is possible to infer weight trajectories for a variety of biologically plausible learning rules. A number of interesting questions remain as we look to apply these methods to biological recordings. We have assumed access to precise spike times, though extracting spike times from optical recordings poses inferential challenges of its own. Solutions like those of Vogelstein et al. [20] could be incorporated into our probabilistic model. Computationally, particle MCMC could be replaced with stochastic EM to achieve improved performance [18], and optimal experimental design could aid in the exploration of stimuli to distinguish between learning rules. Beyond these direct extensions, this work opens up the potential to infer latent state spaces with potentially nonlinear dynamics and non-Gaussian noise, and to infer learning rules at the synaptic or even the network level.

Acknowledgments

This work was partially funded by DARPA YFA N66001-12-1-4219 and NSF IIS1421780. S.W.L. was supported by an NDSEG fellowship and by the NSF Center for Brains, Minds, and Machines.

References

[1] Adam M Packer, Darcy S Peterka, Jan J Hirtz, Rohit Prakash, Karl Deisseroth, and Rafael Yuste. Two-photon optogenetics of dendritic spines and neural circuits. Nature Methods, 9(12):1202–1205, 2012.
[2] Daniel R Hochbaum, Yongxin Zhao, Samouil L Farhi, Nathan Klapoetke, Christopher A Werley, Vikrant Kapoor, Peng Zou, Joel M Kralj, Dougal Maclaurin, Niklas Smedemark-Margulies, et al. All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins. Nature Methods, 2014.
[3] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
[4] Liam Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15(4):243–262, January 2004.
[5] Wilson Truccolo, Uri T. Eden, Matthew R. Fellows, John P. Donoghue, and Emery N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074–1089, 2005.
[6] Ian Stevenson and Konrad Koerding. Inferring spike-timing-dependent plasticity from spike train data. In Advances in Neural Information Processing Systems, pages 2582–2590, 2011.
[7] Seif Eldawlatly, Yang Zhou, Rong Jin, and Karim G Oweiss. On the use of dynamic Bayesian networks in reconstructing functional neuronal networks from spike train ensembles. Neural Computation, 22(1):158–189, 2010.
[8] Biljana Petreska, Byron Yu, John P Cunningham, Gopal Santhanam, Stephen I Ryu, Krishna V Shenoy, and Maneesh Sahani. Dynamical segmentation of single trials from population neural data. In Advances in Neural Information Processing Systems, pages 756–764, 2011.
[9] Mike West, P Jeff Harrison, and Helio S Migon. Dynamic generalized linear models and Bayesian forecasting. Journal of the American Statistical Association, 80(389):73–83, 1985.
[10] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[11] James Robert Lloyd, Peter Orbanz, Zoubin Ghahramani, and Daniel M Roy. Random function priors for exchangeable arrays with applications to graphs and relational data. In Advances in Neural Information Processing Systems, 2012.
[12] Natalia Caporale and Yang Dan. Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience, 31:25–46, 2008.
[13] Erkki Oja. Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15(3):267–273, 1982.
[14] Daniel E Feldman. The spike-timing dependence of plasticity. Neuron, 75(4):556–571, August 2012.
[15] S Song, K D Miller, and L F Abbott.
Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9):919–926, September 2000.
[16] Abigail Morrison, Markus Diesmann, and Wulfram Gerstner. Phenomenological models of synaptic plasticity based on spike timing. Biological Cybernetics, 98(6):459–478, 2008.
[17] Anne C Smith and Emery N Brown. Estimating a state-space model from point process observations. Neural Computation, 15(5):965–991, May 2003.
[18] Fredrik Lindsten, Michael I Jordan, and Thomas B Schön. Ancestor sampling for particle Gibbs. In Advances in Neural Information Processing Systems, pages 2600–2608, 2012.
[19] Ben Shababo, Brooks Paige, Ari Pakman, and Liam Paninski. Bayesian inference and online experimental design for mapping neural microcircuits. In Advances in Neural Information Processing Systems, pages 1304–1312, 2013.
[20] Joshua T Vogelstein, Brendon O Watson, Adam M Packer, Rafael Yuste, Bruno Jedynak, and Liam Paninski. Spike inference from calcium imaging using sequential Monte Carlo methods. Biophysical Journal, 97(2):636–655, 2009.
Real-Time Decoding of an Integrate and Fire Encoder

Shreya Saxena and Munther Dahleh
Department of Electrical Engineering and Computer Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
{ssaxena,dahleh}@mit.edu

Abstract

Neuronal encoding models range from the detailed, biophysically based Hodgkin-Huxley model to the statistical linear time-invariant model, which specifies firing rates in terms of the extrinsic signal. Decoding the former becomes intractable, while the latter does not adequately capture the nonlinearities present in the neuronal encoding system. For practical applications, we wish to record the outputs of neurons, namely spikes, and to decode this signal quickly in order to act on it, for example to drive a prosthetic device. Here, we introduce a causal, real-time decoder of the biophysically based Integrate and Fire encoding neuron model. We show that the upper bound of the real-time reconstruction error decreases polynomially in time, and that the L2 norm of the error is bounded by a constant that depends on the density of the spikes, as well as on the bandwidth and the decay of the input signal. We numerically validate the effect of these parameters on the reconstruction error.

1 Introduction

One of the most detailed and widely accepted models of the neuron is the Hodgkin-Huxley (HH) model [1]. It is a complex nonlinear model comprising four differential equations that govern the membrane potential dynamics as well as the dynamics of the sodium, potassium and calcium currents found in a neuron. In the practical setting, we assume that we are recording multiple neurons using an extracellular electrode, so that the observable postprocessed outputs of each neuron are the time points at which the membrane voltage crosses a threshold, also known as spikes. Even with complete knowledge of the HH model parameters, it is intractable to decode the extrinsic signal applied to the neuron given only the spike times.
Model reduction techniques are accurate in certain regimes [2]; theoretical studies have also guaranteed an input-output equivalence between a multiplicative or additive extrinsic signal applied to the HH model and the same signal applied to an Integrate and Fire (IAF) neuron model with variable thresholds [3]. Consider, for example, a decoder in a brain-machine interface (BMI) device, where the decoded signal drives a prosthetic limb in order to produce movement. Given the complications involved in decoding an extrinsic signal using a realistic neuron model, current practice is to decode using a Kalman filter, which assumes a linear time-invariant (LTI) encoding with the extrinsic signal as the input and the firing rate of the neuron as the output [4–6]. Although extremely tractable for decoding, this approach ignores the nonlinear processing of the extrinsic current by the neuron. Moreover, taking firing rates as the output of the neuron averages out the data and incurs inherent delays in the decoding process. Decoding of spike trains has also been performed using stochastic jump models such as point process models [7, 8], and we are currently exploring relationships between these and our work.

Figure 1: IAF encoder followed by a real-time decoder: the signal f(t) is encoded as spikes {t_i}, from which the decoder produces the estimate f̃_t(t).

We consider a biophysically inspired IAF neuron model with variable thresholds as the encoding model. It has been shown that, given the parameters of the model and the spikes for all time, a bandlimited signal driving the IAF model can be perfectly reconstructed if the spikes are "dense enough" [9–11]. This is a Nyquist-type reconstruction formula. However, for this theory to be applicable in a real-time setting, as in the case of a BMI, we need a causal real-time decoder that estimates the signal at every time t, and an estimate of the time taken for the reconstructed signal to converge to the true signal.
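As a concrete reference point, the IAF encoding used throughout (formalized in Equation 5 below: a spike is emitted whenever the running integral of f since the last spike reaches ±q_i) can be sketched as follows. The constant threshold and the overshoot-preserving reset are simplifying assumptions of this sketch.

```python
import numpy as np

def iaf_encode(f, T, q, dt=1e-4):
    """Ideal IAF encoder sketch: emit a spike whenever the running integral
    of f since the last spike reaches +q or -q (constant threshold q_i = q).

    Returns the spike times (with t_0 = 0 as the reference time) and the
    signed integrals between consecutive spikes.
    """
    spikes, integrals = [0.0], []
    acc = 0.0
    for t in np.arange(0.0, T, dt):
        acc += f(t) * dt                 # running integral of f since last spike
        if abs(acc) >= q:                # threshold crossing -> spike
            spikes.append(t + dt)
            integrals.append(np.sign(acc) * q)
            acc -= np.sign(acc) * q      # reset, keeping the numerical overshoot
    return np.array(spikes), np.array(integrals)
```

For a constant input f(t) = 1 and q = 0.1, this produces roughly one spike every 0.1 time units, with every recorded integral equal to +0.1, matching Equation 5.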
There have also been some approaches to causal reconstruction of a signal encoded by an IAF encoder, such as [12]. However, these do not show convergence of the estimate to the true signal as time progresses. In this paper, we introduce a causal real-time decoder (Figure 1) that, given the parameters of the IAF encoding process, provides an estimate of the signal at every time, without the need to wait for a minimum amount of time before decoding can begin. We show that, under certain conditions on the input signal, the upper bound of the error between the estimated signal and the input signal decreases polynomially in time, leading to perfect reconstruction as t → ∞, or to a bounded error if a finite number of iterations is used. The bounded input, bounded output (BIBO) stability of a decoder is extremely important to analyze for BMI applications. Here, we show that the L2 norm of the error is bounded, with an upper bound that depends on the bandwidth of the signal, the density of the spikes, and the decay of the input signal. We numerically demonstrate the utility of the theory developed here: we first provide example reconstructions using the real-time decoder and compare our results with reconstructions obtained using existing methods, and we then show the dependence of the decoding error on the properties of the input signal. The theory and algorithm presented in this paper can be applied to any system that uses an IAF encoding device, for example in pluviometry. We introduce some preliminary definitions in Section 2 and present our theoretical results in Section 3. We use a model IAF system to numerically simulate the output of an IAF encoder and provide causal real-time reconstruction in Section 4, and we end with conclusions in Section 5.

2 Preliminaries

We first define the subsets of the L2 space that we consider:

L^Ω_2 = { f ∈ L2 | f̂(ω) = 0 ∀ω ∉ [−Ω, Ω] }   (1)

L^Ω_{2,β} = { f | f g_β ∈ L2, f̂(ω) = 0 ∀ω
∉ [−Ω, Ω] }   (2)

where g_β(t) = (1 + |t|)^β and f̂(ω) = (Ff)(ω) is the Fourier transform of f. We consider only signals in L^Ω_{2,β} for β ≥ 0. Next, we define sinc_Ω(t) and the indicator function 1_{[a,b]}(t), both of which play an integral part in the reconstruction of signals:

sinc_Ω(t) = sin(Ωt)/(Ωt) for t ≠ 0, and sinc_Ω(0) = 1   (3)

1_{[a,b]}(t) = 1 for t ∈ [a, b], and 0 otherwise   (4)

Finally, we define the encoding system based on an IAF neuron model; we term this the IAF encoder. We consider this model with variable thresholds in its most general form, which may be useful if it is the result of a model reduction technique such as in [3], or in approaches where ∫_{t_i}^{t_{i+1}} f(τ) dτ can be calculated through other means, such as in [9]. A typical IAF encoder is defined in the following way: given the thresholds {q_i}, with q_i > 0 ∀i, the spikes {t_i} are such that

∫_{t_i}^{t_{i+1}} f(τ) dτ = ±q_i.   (5)

That is, the encoder outputs a spike at time t_{i+1} whenever the integral ∫_{t_i}^{t} f(τ) dτ reaches the threshold q_i or −q_i. We assume that the decoder knows the value of this integral as well as the time at which it was reached. For a physical representation with neurons whose dynamics can faithfully be modeled by IAF neurons, we can imagine two neurons with the same input f: one spikes when the positive threshold is reached, while the other spikes when the negative threshold is reached. The decoder views the activity of both neurons and, with knowledge of the corresponding thresholds, decodes the signal accordingly. We could also restrict ourselves to positive f(t). In order to remain general in the following treatment, we assume knowledge of the integrals {∫_{t_i}^{t_{i+1}} f(τ) dτ} as well as the corresponding spike times {t_i}.

3 Theoretical Results

The following theorem was introduced in [11] and has also been applied to IAF encoders in [10, 13, 14]. We will later use the operators and concepts introduced in this theorem.

Theorem 1.
Perfect reconstruction: Given a sampling set {t_i}_{i∈Z} and the corresponding samples ∫_{t_i}^{t_{i+1}} f(τ) dτ, we can perfectly reconstruct f ∈ L^Ω_2 if sup_{i∈Z}(t_{i+1} − t_i) = δ for some δ < π/Ω. Moreover, f can be reconstructed iteratively in the following way, such that

‖f − f^k‖_2 ≤ (δΩ/π)^{k+1} ‖f‖_2   (6)

and lim_{k→∞} f^k = f in L2:

f^0 = Af   (7)
f^1 = (I − A)f^0 + Af = (I − A)Af + Af   (8)
f^k = (I − A)f^{k−1} + Af = Σ_{n=0}^{k} (I − A)^n Af   (9)

where the operator A is defined as

Af = Σ_{i∈Z} ( ∫_{t_i}^{t_{i+1}} f(τ) dτ ) sinc_Ω(t − s_i)   (10)

with s_i = (t_i + t_{i+1})/2, the midpoint of each pair of spikes.

Proof. Provided in [11].

The above theorem requires an infinite number of spikes in order to start decoding. However, we would like a real-time decoder that outputs a "best guess" at every time t, so that we can act on the estimate of the signal. In this paper we introduce one such decoder; we first provide a high-level description of the real-time decoder, then a recursive algorithm to apply in the practical case, and finally error bounds on its performance.

Real-time decoder: At every time t, the decoder outputs an estimate f̃_t(t) of the input signal, where f̃_t is an estimate of the signal calculated using all the spikes from time 0 to t. Since there is no new information between spikes, this is essentially the same as calculating an estimate f̃_{t_i} after every spike t_i and using this estimate until the next spike, i.e. for t ∈ [t_i, t_{i+1}] (see Figure 2).

Figure 2: A visualization of the decoding process. The original signal f(t) is shown in black and the spikes {t_i} in blue. As each spike t_i arrives, a new estimate f̃_{t_i}(t) of the signal is formed (shown in green), which is modified after the next spike t_{i+1} by the innovation function g_{t_{i+1}}. The output of the decoder, f̃_t(t) = Σ_{i∈Z} f̃_{t_i}(t) 1_{[t_i, t_{i+1})}(t), is shown in red.
We will show that the estimate after each spike, f̃_{t_{i+1}}, can be calculated as the sum of the previous estimate f̃_{t_i} and an innovation g_{t_{i+1}}. This procedure is captured by the algorithm given in Equations 11 and 12.

Recursive algorithm:

f̃^0_{t_{i+1}} = f̃^0_{t_i} + g^0_{t_{i+1}}   (11)

f̃^k_{t_{i+1}} = f̃^k_{t_i} + g^k_{t_{i+1}} = f̃^k_{t_i} + ( g^{k−1}_{t_{i+1}} + g^0_{t_{i+1}} − A_{t_{i+1}} g^{k−1}_{t_{i+1}} )   (12)

Here, f̃^0_{t_0} = 0 and g^0_{t_{i+1}}(t) = ( ∫_{t_i}^{t_{i+1}} f(τ) dτ ) sinc_Ω(t − s_i). We denote f̃_{t_i}(t) = lim_{k→∞} f̃^k_{t_i}(t) and g_{t_{i+1}}(t) = lim_{k→∞} g^k_{t_{i+1}}(t). The operator A_T used in Equation 12 is defined as

A_T f = Σ_{i : |t_i| ≤ T} ( ∫_{t_i}^{t_{i+1}} f(τ) dτ ) sinc_Ω(t − s_i).   (13)

The output of our causal real-time decoder can also be written as f̃_t(t) = Σ_{i∈Z} f̃_{t_i}(t) 1_{[t_i, t_{i+1})}(t). For a decoder that uses a finite number of iterations K at every step, i.e. one that calculates f̃^K_{t_i} after every spike t_i, the decoded signal is f̃^K_t(t) = Σ_{i∈Z} f̃^K_{t_i}(t) 1_{[t_i, t_{i+1})}(t). The functions {f̃^k_{t_i}}_k are stored after every spike t_i and thus do not need to be recomputed when the next spike arrives: when a new spike arrives at t_{i+1}, each f̃^k_{t_i} is modified by adding the innovation function g^k_{t_{i+1}}. Next, we show an upper bound on the error incurred by the decoder.

Theorem 2. Real-time reconstruction: Given a signal f ∈ L^Ω_{2,β} passed through an IAF encoder with known thresholds, and given that the spikes satisfy the minimum density condition sup_{i∈Z}(t_{i+1} − t_i) = δ for some δ < π/Ω, we can construct a causal real-time decoder that reconstructs a function f̃_t(t) using the recursive algorithm in Equations 11 and 12 such that

|f(t) − f̃_t(t)| ≤ c/(1 − δΩ/π) ‖f‖_{2,β} (1 + t)^{−β}   (14)
It shows that the upper bound of the real-time reconstruction error using the decoding algorithm in Equations 11 and 12, decreases polynomially as a function of time. This implies that the approximation ˜ft(t) becomes more and more accurate with the passage of time, and moreover, we can calculate the exact amount of time we would need to record to have a given level of accuracy. Given a maximum allowed error ✏, these bounds can provide a combination (t, K) that will ensure |f(t) −˜f K t (t)| ✏if f 2 L⌦ 2,β, and if the density constraint is met. We can further show that the L2 norm of the reconstruction remains bounded with a bounded input (BIBO stability), by bounding the L2 norm of the error between the original signal and the reconstruction. Corollary 1. Bounded L2 norm: The causal decoder provided in Theorem 2, with the same assumptions and in the case of K ! 1, constructs a signal ˜ft(t) s.t. the L2 norm of the error kf −˜ftk2 = qR 1 0 |f(t) −˜ft(t)|2dt is bounded: kf −˜ftk2 c/p2β−1 1−δ⌦ ⇡ kfk2,β where c is the same constant as in Theorem 2. Proof. sZ 1 0 |f(t) −˜ft(t)|2dt v u u t Z 1 0 c 1 −δ⌦ ⇡ !2 kfk2 2,β (1 + t)−2βdt = c/p2β −1 1 −δ⌦ ⇡ kfk2,β (16) Here, the first inequality is due to Theorem 2, and all the constants are as defined in the same. Remark 1: This result also implies that we have a decay in the root-mean-square (RMS) error, i.e. q 1 T R T 0 |f(t) −˜ft(t)|2dt T !1 −−−−! 0. For the case of a finite number of iterations K < 1, the RMS error converges to a non-zero constant - δ⌦ ⇡ .K+1 1+ δ⌦ ⇡ 1−δ⌦ ⇡kfk2. Remark 2: The methods used in Corollary 1 also provide a bound on the error in the weighted L2 norm, i.e. kf −˜fk2,β c/pβ−1 1−δ⌦ ⇡ kfk2,β for β ≥2, which may be a more intuitive form to use for a subsequent stability analysis. 4 Numerical Simulations We simulated signals f(t) of the following form, for t 2 [0, 100], using a stepsize of 10−2. 
f(t) = ( Σ_{k=1}^{50} w_k (sinc_Ω(t − d_k))^β ) / ( Σ_{k=1}^{50} w_k ) . (17)

Here, the w_k's and d_k's were picked uniformly at random from the intervals [0, 1] and [0, 100] respectively. Note that f ∈ L^{βΩ}_{2,β}. All simulations were performed using MATLAB R2014a. For each simulation experiment, at every time t we decoded using only the spikes before time t. We first provide example reconstructions using the Real-Time Decoder for four signals in Figure 3, using constant thresholds, i.e. q_i = q ∀i. We compare our results to those obtained using a Linear Firing Rate (FR) Decoder, i.e. we let the reconstructed signal be a linear function of the number of spikes in the past Δ seconds, Δ being the window size. There is a visible delay in the reconstruction with this decoding approach; moreover, the reconstruction is not as accurate as that of the Real-Time Decoder.

Figure 3: (a,c,e,g) Four example reconstructions using the Real-Time Decoder, for Ω = 0.2π, 0.3π, 0.4π and 0.5π, with the original signal f(t) in solid black and the reconstructed signal f̃_t(t) in dashed red. Here [β, K] = [2, 500] and q_i = 0.01 ∀i. (b,d,f,h) The same signals decoded using a Linear Firing Rate (FR) Decoder with window size Δ = 3 s.
Figure 4: Average error ‖f − f̃_t‖_2 / ‖f‖_{2,β} for 20 different signals while varying different parameters: (a) Ω is varied, [β, δ, K] = [2, π/(2Ω), 500]; (b) δ is varied, [Ω, β, K] = [0.3π, 2, 500]; (c) β is varied, [Ω, δ, K] = [0.3π, 1/(0.3β), 500]; (d) K is varied, [Ω, δ, β] = [0.3π, 5/3, 2].

Next, we show the decay of the real-time error by averaging the error over 20 different input signals, while varying the parameters Ω, β, δ and K (Figure 4). The thresholds q_i were chosen to be constant a priori, but were reduced to satisfy the density constraint wherever necessary. According to Equation 14 (including the effect of the constant c), the error should decrease as Ω is decreased. We see this effect in the simulation study in Figure 4a. For these simulations, we chose δ such that δΩ/π < 1; thus δ was decreasing as Ω increased, but the effect of the increasing Ω dominated in this case. In Figure 4b we see that increasing δ while keeping the bandwidth constant does indeed increase the error; the algorithm is therefore sensitive to the density of the spikes. In this figure, all values of δ satisfy the density constraint δΩ/π < 1. Increasing β has a large effect, as seen in Figure 4c: the error decreases polynomially in β (note the log scale on the y-axis). Although increasing β in our simulations also increased the bandwidth of the signal, the faster decay had a larger effect on the error than the change in bandwidth. In Figure 4d, the effect of increasing K is apparent; however, the error flattens out for large values of K, showing convergence of the algorithm.

5 Conclusions

We provide a real-time decoder to reconstruct a signal f ∈ L^Ω_{2,β} encoded by an IAF encoder.
Under Nyquist-type spike density conditions, we show that the reconstructed signal f̃_t(t) converges to f(t) polynomially in time, or with a fixed error that depends on the computational power used to reconstruct the function. Moreover, the error decreases as the spike density increases, i.e. we get better results if we have more spikes. Decreasing the bandwidth or increasing the decay of the signal both lead to a decrease in the error, as corroborated by the numerical simulations. This decoder also outperforms the linear decoder that acts on the firing rate of the neuron. However, the main utility of this decoder is that it comes with verifiable bounds on the decoding error as we record more spikes. There is a pressing need in the BMI community to consider error bounds while decoding signals from the brain. For example, when the reconstructed signal drives a prosthetic, the decoder and machine are placed in an inherent feedback loop (where the feedback is visual in this case). A stability analysis of this feedback loop includes calculating a bound on the error incurred by the decoding process, which is the first step toward constructing a device that robustly tracks agile maneuvers. In this paper, we provide an upper bound on the error incurred by the real-time decoding process, which can be used along with concepts in robust control theory to provide sufficient conditions on the prosthetic and feedback system in order to ensure stability [15–17].

Acknowledgments

Research supported by the National Science Foundation's Emerging Frontiers in Research and Innovation Grant (1137237).

References

[1] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, p. 500, 1952.
[2] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[3] A. A.
Lazar, "Population encoding with Hodgkin–Huxley neurons," IEEE Transactions on Information Theory, vol. 56, no. 2, pp. 821–837, 2010.
[4] J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, and M. A. Nicolelis, "Learning to control a brain–machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, p. e42, 2003.
[5] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Brain-machine interface: Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[6] W. Wu, J. E. Kulkarni, N. G. Hatsopoulos, and L. Paninski, "Neural decoding of hand motion using a linear state-space model with hidden states," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 17, no. 4, pp. 370–378, 2009.
[7] E. N. Brown, L. M. Frank, D. Tang, M. C. Quirk, and M. A. Wilson, "A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells," The Journal of Neuroscience, vol. 18, no. 18, pp. 7411–7425, 1998.
[8] U. T. Eden, L. M. Frank, R. Barbieri, V. Solo, and E. N. Brown, "Dynamic analysis of neural encoding by point process adaptive filtering," Neural Computation, vol. 16, no. 5, pp. 971–998, 2004.
[9] A. A. Lazar, "Time encoding with an integrate-and-fire neuron with a refractory period," Neurocomputing, vol. 58, pp. 53–58, 2004.
[10] A. A. Lazar and L. T. Tóth, "Time encoding and perfect recovery of bandlimited signals," Proceedings of the ICASSP, vol. 3, pp. 709–712, 2003.
[11] H. G. Feichtinger and K. Gröchenig, "Theory and practice of irregular sampling," Wavelets: Mathematics and Applications, pp. 305–363, 1994.
[12] H. G. Feichtinger, J. C. Príncipe, J. L. Romero, A. S. Alvarado, and G. A.
Velasco, "Approximate reconstruction of bandlimited functions for the integrate and fire sampler," Advances in Computational Mathematics, vol. 36, no. 1, pp. 67–78, 2012.
[13] A. A. Lazar and L. T. Tóth, "Perfect recovery and sensitivity analysis of time encoded bandlimited signals," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 10, pp. 2060–2073, 2004.
[14] D. Gontier and M. Vetterli, "Sampling based on timing: Time encoding machines on shift-invariant subspaces," Applied and Computational Harmonic Analysis, vol. 36, no. 1, pp. 63–78, 2014.
[15] S. V. Sarma and M. A. Dahleh, "Remote control over noisy communication channels: A first-order example," IEEE Transactions on Automatic Control, vol. 52, no. 2, pp. 284–289, 2007.
[16] ——, "Signal reconstruction in the presence of finite-rate measurements: finite-horizon control applications," International Journal of Robust and Nonlinear Control, vol. 20, no. 1, pp. 41–58, 2010.
[17] S. Saxena and M. A. Dahleh, "Analyzing the effect of an integrate and fire encoder and decoder in feedback," Proceedings of the 53rd IEEE Conference on Decision and Control (CDC), 2014.
Parallel Direction Method of Multipliers

Huahua Wang, Arindam Banerjee, Zhi-Quan Luo
University of Minnesota, Twin Cities
{huwang,banerjee}@cs.umn.edu, luozq@umn.edu

Abstract

We consider the problem of minimizing block-separable (non-smooth) convex functions subject to linear constraints. While the Alternating Direction Method of Multipliers (ADMM) for two-block linear constraints has been intensively studied both theoretically and empirically, effective generalizations of ADMM to multiple blocks remain unclear in spite of some preliminary work. In this paper, we propose a parallel randomized block coordinate method named Parallel Direction Method of Multipliers (PDMM) to solve optimization problems with multi-block linear constraints. At each iteration, PDMM randomly updates some blocks in parallel, behaving like parallel randomized block coordinate descent. We establish the global convergence and the iteration complexity for PDMM with constant step size. We also show that PDMM can do randomized block coordinate descent on overlapping blocks. Experimental results show that PDMM performs better than state-of-the-art methods in two applications, robust principal component analysis and overlapping group lasso.

1 Introduction

In this paper, we consider the minimization of block-separable convex functions subject to linear constraints, with the canonical form:

min_{x_j ∈ X_j} f(x) = Σ_{j=1}^J f_j(x_j) , s.t. Ax = Σ_{j=1}^J A^c_j x_j = a , (1)

where the objective function f(x) is a sum of J block-separable (nonsmooth) convex functions, A^c_j ∈ R^{m×n_j} is the j-th column block of A ∈ R^{m×n} where n = Σ_j n_j, x_j ∈ R^{n_j×1} is the j-th block coordinate of x, X_j is a local convex constraint on x_j, and a ∈ R^{m×1}. The canonical form can be extended to handle linear inequalities by introducing slack variables, i.e., writing Ax ≤ a as Ax + z = a, z ≥ 0. A variety of machine learning problems can be cast as the linearly constrained optimization problem (1) [8, 4, 24, 5, 6, 21, 11].
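The slack-variable conversion above (Ax ≤ a rewritten as Ax + z = a with z ≥ 0) just augments A with an identity column block; a minimal sketch (helper name is ours):

```python
import numpy as np

def inequality_to_canonical(A, a):
    """Rewrite Ax <= a as [A  I][x; z] = a with z >= 0 (illustrative helper)."""
    m = A.shape[0]
    A_eq = np.hstack([A, np.eye(m)])   # slack block z gets an identity column block
    return A_eq, a

A = np.array([[1.0, 2.0], [3.0, 4.0]])
a = np.array([1.0, 2.0])
A_eq, a_eq = inequality_to_canonical(A, a)
# any x with Ax <= a corresponds to z = a - Ax >= 0 and A_eq @ [x; z] = a
```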
For example, in robust Principal Component Analysis (RPCA) [5], one attempts to recover a low-rank matrix L and a sparse matrix S from an observation matrix M, i.e., the linear constraint is M = L + S. Further, in the stable version of RPCA [29], a noise matrix Z is taken into consideration, and the linear constraint has three blocks, i.e., M = L + S + Z. Problem (1) can also include composite minimization problems which solve a sum of a loss function and a set of nonsmooth regularization functions. Due to the increasing interest in structured sparsity [1], composite regularizers have become widely used, e.g., overlapping group lasso [28]. As the blocks are overlapping in this class of problems, it is difficult to apply block coordinate descent methods for large-scale problems [16, 18], which assume block separability. By splitting blocks and introducing equality constraints, the composite minimization problem can also be formulated as (1) [2]. A classical approach to solving (1) is to relax the linear constraints using the (augmented) Lagrangian, i.e.,

L_ρ(x, y) = f(x) + ⟨y, Ax − a⟩ + (ρ/2)‖Ax − a‖²_2 , (2)

where ρ ≥ 0 is called the penalty parameter. We call x the primal variable and y the dual variable. (2) usually leads to primal-dual algorithms which update the primal and dual variables alternately. While the dual update is simply dual gradient ascent, the primal update solves a minimization of (2) given y. If ρ = 0, the primal update can be solved in a parallel block coordinate fashion [3, 19], leading to the dual ascent method. While the dual ascent method can achieve massive parallelism, a careful choice of stepsize and some strict conditions are required for convergence, particularly when f is nonsmooth. To achieve better numerical efficiency and convergence behavior than the dual ascent method, it is favorable to set ρ > 0 in (2), which yields the method of multipliers.
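Evaluating the augmented Lagrangian (2) is a one-liner; a small sketch under our own naming (f passed as a callable):

```python
import numpy as np

def augmented_lagrangian(f, A, a, x, y, rho):
    """L_rho(x, y) = f(x) + <y, Ax - a> + (rho/2)*||Ax - a||_2^2  (Eq. 2)."""
    r = A @ x - a                       # constraint residual
    return f(x) + y @ r + 0.5 * rho * (r @ r)
```

With ρ = 0 this reduces to the ordinary Lagrangian used by dual ascent; with ρ > 0 the quadratic term couples the blocks, which is precisely the separability issue discussed next.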
However, (2) is no longer separable, and solving the entire augmented Lagrangian (2) exactly is computationally expensive. In [20], randomized block coordinate descent (RBCD) [16, 18] is used to solve (2) exactly, leading to a double-loop algorithm along with the dual step. More recent results show that (2) can be solved inexactly by sweeping the coordinates just once using the alternating direction method of multipliers (ADMM) [12, 2]. This paper develops a parallel randomized block coordinate variant of ADMM. When J = 2, ADMM has been widely used to solve the augmented Lagrangian (2) in many applications [2]. Encouraged by this success, ADMM has also been extended to problems with multiple blocks [15, 14, 10, 17, 13, 7]. The variants of ADMM can be divided into two main categories. The first category considers Gauss-Seidel ADMM (GSADMM) [15, 14], which solves (2) in a cyclic block coordinate manner. In [13], a back-substitution step was added so that the convergence of ADMM for multiple blocks can be proved. In some cases, it has been shown that ADMM might not converge for multiple blocks [7]. In [14], a block successive upper-bound minimization method of multipliers (BSUMM) is proposed to solve problem (1). The convergence of BSUMM is established under fairly strict conditions: (i) certain local error bounds hold; (ii) the step size is either sufficiently small or decreasing. In general, however, Gauss-Seidel ADMM with multiple blocks is not well understood and its iteration complexity is largely open. The second category considers Jacobian variants of ADMM [26, 10, 17], which solve (2) in a parallel block coordinate fashion. In [26, 17], (1) is solved by using two-block ADMM with splitting variables (sADMM). [10] considers a proximal Jacobian ADMM (PJADMM) obtained by adding proximal terms. A randomized block coordinate variant of ADMM named RBSUMM was proposed in [14]. However, RBSUMM can only randomly update one block.
Moreover, the convergence of RBSUMM is established under the same conditions as BSUMM, and its iteration complexity is unknown. In this paper, we propose a parallel randomized block coordinate method named the parallel direction method of multipliers (PDMM), which randomly picks any number of blocks to update in parallel, behaving like randomized block coordinate descent [16, 18]. Like the dual ascent method, PDMM solves the primal update in a parallel block coordinate fashion even with the augmentation term. Moreover, PDMM inherits the merits of the method of multipliers and can solve a fairly large class of problems, including nonsmooth functions. Technically, PDMM has three aspects which make it distinct from such state-of-the-art methods. First, if the block coordinates of the primal x are solved exactly, PDMM uses a backward step on the dual update so that the dual variable makes conservative progress. Second, the sparsity of A and the number of randomized blocks are taken into consideration to determine the step size of the dual update. Third, PDMM can randomly update an arbitrary number of primal blocks in parallel. Moreover, we show that sADMM and PJADMM are two extreme cases of PDMM; the connection between them through PDMM provides a better understanding of the dual backward step. PDMM can also be used to solve overlapping groups in a randomized block coordinate fashion. Interestingly, the corresponding problem for RBCD [16, 18] with overlapping blocks is still open. We establish global convergence and an O(1/T) iteration complexity for PDMM with constant step size. We evaluate the performance of PDMM in two applications: robust principal component analysis and overlapping group lasso. The rest of the paper is organized as follows: we introduce PDMM in Section 2, establish convergence results in Section 3, evaluate the performance of PDMM in Section 4, and conclude in Section 5.
The technical analysis and detailed proofs are provided in the supplement.

Notations: Assume that A ∈ R^{m×n} is divided into I × J blocks. Let A^r_i ∈ R^{m_i×n} be the i-th row block of A, A^c_j ∈ R^{m×n_j} the j-th column block of A, and A_{ij} ∈ R^{m_i×n_j} the ij-th block of A. Let y_i ∈ R^{m_i×1} be the i-th block of y ∈ R^{m×1}. Let N(i) be the set of nonzero blocks A_{ij} in the i-th row block A^r_i, and d_i = |N(i)| the number of nonzero blocks. Let K̃_i = min{d_i, K}, where K is the number of blocks randomly chosen by PDMM, and let T be the number of iterations.

2 Parallel Direction Method of Multipliers

Consider a direct Jacobi version of ADMM which updates all blocks in parallel:

x^{t+1}_j = argmin_{x_j ∈ X_j} L_ρ(x_j, x^t_{k≠j}, y^t) , (3)
y^{t+1} = y^t + τρ(Ax^{t+1} − a) , (4)

where τ is a shrinkage factor for the step size of the dual gradient ascent update. However, empirical results show that it is almost impossible to make the direct Jacobi updates (3)-(4) converge, even when τ is extremely small. [15, 10] also noticed that the direct Jacobi updates may not converge. To address this problem, we propose a backward step on the dual update. Moreover, instead of updating all blocks, the blocks x_j are updated in a parallel randomized block coordinate fashion. We call the algorithm the Parallel Direction Method of Multipliers (PDMM). At time t, PDMM first randomly selects K blocks, denoted by the set J_t, then executes the following iterates:

x^{t+1}_{j_t} = argmin_{x_{j_t} ∈ X_{j_t}} L_ρ(x_{j_t}, x^t_{k≠j_t}, ŷ^t) + η_{j_t} B_{φ_{j_t}}(x_{j_t}, x^t_{j_t}) , j_t ∈ J_t , (5)
y^{t+1}_i = y^t_i + τ_i ρ (A^r_i x^{t+1} − a_i) , (6)
ŷ^{t+1}_i = y^{t+1}_i − ν_i ρ (A^r_i x^{t+1} − a_i) , (7)

where τ_i > 0, 0 ≤ ν_i < 1, η_{j_t} ≥ 0, and B_{φ_{j_t}}(x_{j_t}, x^t_{j_t}) is a Bregman divergence. Note x^{t+1} = (x^{t+1}_{J_t}, x^t_{k∉J_t}) in (6) and (7). (6) and (7) update all dual blocks. We show that PDMM can also do randomized dual block coordinate ascent in an extended work [25]. Let K̃_i = min{d_i, K}. τ_i and ν_i can take the following values:

τ_i = K / ( K̃_i (2J − K) ) , ν_i = 1 − 1/K̃_i . (8)
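The step-size rule in Eq. 8 can be sketched directly, with the block degrees d_i read off a block-sparsity pattern of A (function name and the boolean-pattern representation are our illustrative choices):

```python
def pdmm_step_sizes(nonzero_pattern, K):
    """tau_i = K / (K~_i (2J - K)) and nu_i = 1 - 1/K~_i with K~_i = min(d_i, K),
    where d_i counts the nonzero blocks in row block i (Eq. 8)."""
    J = len(nonzero_pattern[0])
    taus, nus = [], []
    for row in nonzero_pattern:
        d_i = sum(1 for nz in row if nz)        # degree of dual block y_i
        K_tilde = min(d_i, K)
        taus.append(K / (K_tilde * (2 * J - K)))
        nus.append(1.0 - 1.0 / K_tilde)
    return taus, nus
```

For a single dense row block with J = 3 (the RPCA setting of Section 4), this gives (τ, ν) = (1/5, 0), (1/4, 1/2), (1/3, 2/3) for K = 1, 2, 3.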
In the x_{j_t}-update (5), a Bregman divergence is added so that exact PDMM and its inexact variants can be analyzed in a unified framework [23, 11]. In particular, if η_{j_t} = 0, (5) is an exact update. If η_{j_t} > 0, by choosing a suitable Bregman divergence, (5) can be solved by various inexact updates, often yielding a closed form for the x_{j_t} update (see Section 2.1). To better understand PDMM, we discuss the following three aspects which play roles in choosing τ_i and ν_i: the dual backward step (7), the sparsity of A, and the choice of randomized blocks.

Dual Backward Step: We attribute the failure of the Jacobi updates (3)-(4) to the following observation: (3) can be rewritten as

x^{t+1}_j = argmin_{x_j ∈ X_j} f_j(x_j) + ⟨y^t + ρ(Ax^t − a), A^c_j x_j⟩ + (ρ/2)‖A^c_j(x_j − x^t_j)‖²_2 . (9)

In the primal x_j update, the quadratic penalty term implicitly adds a full gradient ascent step to the dual variable, i.e., y^t + ρ(Ax^t − a), which we call implicit dual ascent. The implicit dual ascent together with the explicit dual ascent (4) may make too aggressive progress on the dual variable, particularly when the number of blocks is large. Based on this observation, we introduce an intermediate variable ŷ^t to replace y^t in (9) so that the implicit dual ascent in (9) makes conservative progress, e.g., ŷ^t + ρ(Ax^t − a) = y^t + (1 − ν)ρ(Ax^t − a), where 0 < ν < 1. ŷ^t is the result of a 'backward step' on the dual variable, i.e., ŷ^t = y^t − νρ(Ax^t − a). Moreover, one can show that τ and ν are also used implicitly when two-block ADMM with splitting variables (sADMM) is applied to (1) [17, 26]. Section 2.2 shows that sADMM is a special case of PDMM; the connection helps in understanding the role of the two parameters τ_i, ν_i in PDMM. Interestingly, the step sizes τ_i and ν_i can be improved by considering the block sparsity of A and the number of random blocks K to be updated.

Sparsity of A: Assume A is divided into I × J blocks.
While the x_j can be updated in parallel, the matrix multiplication Ax in the dual update (4) requires synchronization to gather messages from all block coordinates j_t ∈ J_t. For updating the i-th block of the dual y_i, we need A^r_i x^{t+1} = Σ_{j_t ∈ J_t} A_{i j_t} x^{t+1}_{j_t} + Σ_{k ∉ J_t} A_{ik} x^t_k, which aggregates "messages" from all x_{j_t}. If A_{i j_t} is a block of zeros, there is no "message" from x_{j_t} to y_i. More precisely, A^r_i x^{t+1} = Σ_{j_t ∈ J_t ∩ N(i)} A_{i j_t} x^{t+1}_{j_t} + Σ_{k ∉ J_t} A_{ik} x^t_k, where N(i) denotes the set of nonzero blocks in the i-th row block A^r_i. N(i) can be considered as the set of neighbors of the i-th dual block y_i, and d_i = |N(i)| is the degree of y_i. If A is sparse, d_i can be far smaller than J. According to (8), a small d_i leads to larger step sizes τ_i for the dual update and smaller step sizes for the dual backward step (7). Further, as shown in Section 2.3, when using PDMM with all blocks to solve composite minimization with overlapping blocks, PDMM can use τ_i = 0.5, which is much larger than the 1/J of sADMM.

Randomized Blocks: The number of blocks to be randomly chosen also affects τ_i, ν_i. If one block is chosen at random (K = 1), then ν_i = 0 and τ_i = 1/(2J − 1), and the dual backward step (7) vanishes. As K increases, ν_i increases from 0 to 1 − 1/d_i, and τ_i increases from 1/(2J − 1) to 1/d_i. If all blocks are updated (K = J), then τ_i = 1/d_i and ν_i = 1 − 1/d_i. PDMM need not choose an arbitrary K-combination of the J blocks: the J blocks can be randomly partitioned into J/K groups of K blocks each, and PDMM then randomly picks some groups. A simple scheme is to permute the J blocks and choose K blocks cyclically.

2.1 Inexact PDMM

If η_{j_t} > 0, the extra Bregman divergence term in (5) can serve two purposes. First, choosing a suitable Bregman divergence can lead to an efficient solution for (5).
Second, if η_{j_t} is sufficiently large, the dual update can use a large step size (τ_i = 1) and the backward step (7) can be removed (ν_i = 0), leading to the same updates as PJADMM [10] (see Section 2.2). Given a continuously differentiable and strictly convex function ψ_{j_t}, its Bregman divergence is defined as

B_{ψ_{j_t}}(x_{j_t}, x^t_{j_t}) = ψ_{j_t}(x_{j_t}) − ψ_{j_t}(x^t_{j_t}) − ⟨∇ψ_{j_t}(x^t_{j_t}), x_{j_t} − x^t_{j_t}⟩ , (10)

where ∇ψ_{j_t} denotes the gradient of ψ_{j_t}. Rearranging the terms yields

ψ_{j_t}(x_{j_t}) − B_{ψ_{j_t}}(x_{j_t}, x^t_{j_t}) = ψ_{j_t}(x^t_{j_t}) + ⟨∇ψ_{j_t}(x^t_{j_t}), x_{j_t} − x^t_{j_t}⟩ , (11)

which is exactly the linearization of ψ_{j_t}(x_{j_t}) at x^t_{j_t}. Therefore, if solving (5) exactly is difficult due to some problematic term, we can use the Bregman divergence to linearize that term so that (5) can be solved efficiently. More specifically, in (5) we can choose φ_{j_t} = ϕ_{j_t} − (1/η_{j_t}) ψ_{j_t}, assuming ψ_{j_t} is the problematic term. Using the linearity of the Bregman divergence,

B_{φ_{j_t}}(x_{j_t}, x^t_{j_t}) = B_{ϕ_{j_t}}(x_{j_t}, x^t_{j_t}) − (1/η_{j_t}) B_{ψ_{j_t}}(x_{j_t}, x^t_{j_t}) . (12)

For instance, if f_{j_t} is a logistic function, solving (5) exactly requires an iterative algorithm. Setting ψ_{j_t} = f_{j_t} and ϕ_{j_t} = (1/2)‖·‖²_2 in (12) and plugging into (5) yields

x^{t+1}_{j_t} = argmin_{x_{j_t} ∈ X_{j_t}} ⟨∇f_{j_t}(x^t_{j_t}), x_{j_t}⟩ + ⟨ŷ^t, A^c_{j_t} x_{j_t}⟩ + (ρ/2)‖A^c_{j_t} x_{j_t} + Σ_{k≠j_t} A^c_k x^t_k − a‖²_2 + η_{j_t}‖x_{j_t} − x^t_{j_t}‖²_2 ,

which has a closed-form solution. Similarly, if the quadratic penalty term (ρ/2)‖A^c_{j_t} x_{j_t} + Σ_{k≠j_t} A^c_k x^t_k − a‖²_2 is the problematic term, we can set ψ_{j_t}(x_{j_t}) = (ρ/2)‖A^c_{j_t} x_{j_t}‖²_2; then B_{ψ_{j_t}}(x_{j_t}, x^t_{j_t}) = (ρ/2)‖A^c_{j_t}(x_{j_t} − x^t_{j_t})‖²_2 linearizes the quadratic penalty term. In (12), the nonnegativity of B_{φ_{j_t}} implies that B_{ϕ_{j_t}} ≥ (1/η_{j_t}) B_{ψ_{j_t}}. This condition is satisfied as long as ϕ_{j_t} is more convex than ψ_{j_t}: technically, we assume that ϕ_{j_t} is (σ/η_{j_t})-strongly convex and ψ_{j_t} has Lipschitz continuous gradient with constant σ, as shown in [23].

2.2 Connections to Related Work

Consider the case when all blocks are used in PDMM. There are two other methods which also update all blocks in parallel.
If the primal updates are solved exactly, two-block ADMM with splitting variables (sADMM) is obtained [17, 26]. We show that sADMM is a special case of PDMM with τ_i = 1/J and ν_i = 1 − 1/J (Appendix B in [25]). If the primal updates are solved inexactly, [10] considers a proximal Jacobian ADMM (PJADMM) obtained by adding proximal terms, whose convergence rate improves to o(1/T) given sufficiently large proximal terms. We show that PJADMM [10] is also a special case of PDMM (Appendix C in [25]). sADMM and PJADMM are two extreme cases of PDMM, and the connection between them through PDMM provides a better understanding of the three methods and of the role of the dual backward step. If the primal update is solved exactly, making sufficient progress, the dual update should take a small step, as in sADMM. Conversely, if the primal update makes small progress by adding proximal terms, the dual update can take a full gradient step, as in PJADMM. While sADMM is a direct derivation of ADMM, PJADMM introduces more terms and parameters. Besides PDMM, RBSUMM [14] can also randomly update one block, but its convergence requires certain local error bounds to hold and a decreasing step size, and its iteration complexity is still unknown. In contrast, PDMM converges at a rate of O(1/T) with constant step size.

2.3 Randomized Overlapping Block Coordinate Descent

Consider the composite minimization problem of a sum of a loss function ℓ(w) and composite regularizers g_j(w_j):

min_w ℓ(w) + Σ_{j=1}^L g_j(w_j) , (13)

which involves L overlapping groups w_j ∈ R^{b×1}. Let J = L + 1 and x_J = w. For 1 ≤ j ≤ L, denote x_j = w_j; then x_j = U^T_j x_J, where U_j consists of columns of an identity matrix and extracts the coordinates of x_J belonging to group j. Denote U = [U_1, · · · , U_L] ∈ R^{n×(bL)} and A = [I_{bL}, −U^T], where bL denotes b × L. By letting f_j(x_j) = g_j(w_j) and f_J(x_J) = ℓ(w), (13) can be written as

min_x Σ_{j=1}^J f_j(x_j) s.t. Ax = 0 , (14)

where x = [x_1; · · · ; x_L; x_{L+1}].
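The constraint matrix A = [I_{bL}, −U^T] of (14) is easy to assemble; a small illustrative sketch with a toy instance of our own (two overlapping groups of size 3 over w ∈ R^5):

```python
import numpy as np

def overlap_constraint_matrix(n, groups):
    """Build A = [I_{bL}, -U^T] from Section 2.3 for overlapping groups.

    groups is a list of L index arrays, each of length b, into w in R^n."""
    b, L = len(groups[0]), len(groups)
    U = np.zeros((n, b * L))
    for j, idx in enumerate(groups):
        U[idx, j * b + np.arange(b)] = 1.0   # U_j extracts group j's coordinates
    return np.hstack([np.eye(b * L), -U.T])

groups = [np.arange(0, 3), np.arange(2, 5)]   # hypothetical toy overlap
A = overlap_constraint_matrix(5, groups)
w = np.arange(5.0)
x = np.concatenate([w[groups[0]], w[groups[1]], w])   # x = [x_1; x_2; w]
# A @ x == 0 since each x_j equals U_j^T w
```

Each b-row block of A touches exactly two column blocks (its identity block and the −U^T_j block), which is the d_i = 2 structure exploited next.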
(14) can be solved by PDMM in a randomized block coordinate fashion. In A, each row block of b rows has only two nonzero blocks, i.e., d_i = 2. Therefore, τ_i = K/(2(2J − K)) and ν_i = 0.5. In particular, if K = J, then τ_i = ν_i = 0.5. In contrast, sADMM uses τ_i = 1/J ≪ 0.5 and ν_i = 1 − 1/J > 0.5 when J is large.

Remark 1 (a) ADMM [2] can solve (14), where the equality constraint is x_j = U^T_j x_J. (b) In this setting, Gauss-Seidel ADMM (GSADMM) and BSUMM [14] are the same as ADMM. BSUMM should converge with a constant step size ρ (not necessarily sufficiently small), although the theory of BSUMM does not include this special case.

3 Theoretical Results

We establish the convergence results for PDMM under fairly simple assumptions:

Assumption 1 (1) f_j : R^{n_j} → R ∪ {+∞} are closed, proper, and convex. (2) A KKT point of the Lagrangian (ρ = 0 in (2)) of Problem (1) exists.

Assumption 1 is the same as that required by ADMM [2, 22]. Assume that {x*_j ∈ X_j, y*_i} satisfies the KKT conditions of the Lagrangian (ρ = 0 in (2)), i.e.,

−A^T_j y* ∈ ∂f_j(x*_j) , (15)
Ax* − a = 0 . (16)

During the iterations, (16) is satisfied if Ax^{t+1} = a. Let f′_j(x^{t+1}_j) ∈ ∂f_j(x^{t+1}_j), where ∂f_j is the subdifferential of f_j. For x*_j ∈ X_j, the optimality condition for the x_j update (5) is

⟨f′_j(x^{t+1}_j) + A^c_j [y^t + (1 − ν)ρ(Ax^t − a) + A^c_j(x^{t+1}_j − x^t_j)] + η_j(∇φ_j(x^{t+1}_j) − ∇φ_j(x^t_j)), x^{t+1}_j − x*_j⟩ ≤ 0 .

When Ax^{t+1} = a, y^{t+1} = y^t. If A^c_j(x^{t+1}_j − x^t_j) = 0, then Ax^t − a = 0. When η_j ≥ 0, further assuming B_{φ_j}(x^{t+1}_j, x^t_j) = 0, (15) will be satisfied. Note that x*_j ∈ X_j is always satisfied in (5) in PDMM. Overall, the KKT conditions (15)-(16) are satisfied if the following optimality conditions hold for the iterates:

Ax^{t+1} = a , A^c_j(x^{t+1}_j − x^t_j) = 0 , (17)
B_{φ_j}(x^{t+1}_j, x^t_j) = 0 . (18)

The above optimality conditions are sufficient for the KKT conditions. (17) are the optimality conditions for exact PDMM; (18) is needed only when η_j > 0.
Let z_{ij} = A_{ij} x_j ∈ R^{m_i×1}, z^r_i = [z^T_{i1}, · · · , z^T_{iJ}]^T ∈ R^{m_i J×1}, and z = [(z^r_1)^T, · · · , (z^r_I)^T]^T ∈ R^{Jm×1}. Define the residual of the optimality conditions (17)-(18) as

R(x^{t+1}) = (ρ/2)‖z^{t+1} − z^t‖²_{P_t} + (ρ/2) Σ_{i=1}^I β_i ‖A^r_i x^{t+1} − a_i‖²_2 + Σ_{j=1}^J η_j B_{φ_j}(x^{t+1}_j, x^t_j) , (19)

where P_t is some positive semi-definite matrix and β_i = K/(J K̃_i). If R(x^{t+1}) → 0, (17)-(18) will be satisfied and thus PDMM converges to the KKT point {x*, y*}. Define the current iterate v^t = (x^t_j, y^t_i) and h(v*, v^t) as a distance from v^t to a KKT point v* = (x*_j ∈ X_j, y*_i):

h(v*, v^t) = (K/J) Σ_{i=1}^I (1/(2τ_i ρ))‖y*_i − y^{t−1}_i‖²_2 + L̃_ρ(x^t, y^t) + (ρ/2)‖z* − z^t‖²_Q + Σ_{j=1}^J η_j B_{φ_j}(x*_j, x^t_j) , (20)

where Q is a positive semi-definite matrix and, with γ_i = 2(J − K)/(K̃_i(2J − K)) + 1/d_i − K/(J K̃_i),

L̃_ρ(x^t, y^t) = f(x^t) − f(x*) + Σ_{i=1}^I [ ⟨y^t_i, A^r_i x^t − a_i⟩ + ((γ_i − τ_i)ρ/2)‖A^r_i x^t − a_i‖²_2 ] . (21)

The following lemma shows that h(v*, v^t) ≥ 0.

Lemma 1 Let v^t = (x^t_j, y^t_i) be generated by PDMM (5)-(7) and h(v*, v^t) be defined in (20). Setting ν_i = 1 − 1/K̃_i and τ_i = K/(K̃_i(2J − K)), we have

h(v*, v^t) ≥ (ρ/2) Σ_{i=1}^I ζ_i ‖A^r_i x^t − a_i‖²_2 + (ρ/2)‖z* − z^t‖²_Q + Σ_{j=1}^J η_j B_{φ_j}(x*_j, x^t_j) ≥ 0 , (22)

where ζ_i = (J − K)/(K̃_i(2J − K)) + 1/d_i − K/(J K̃_i) ≥ 0. Moreover, if h(v*, v^t) = 0, then A^r_i x^t = a_i, z^t = z*, and B_{φ_j}(x*_j, x^t_j) = 0; thus (15)-(16) are satisfied.

In PDMM, y^{t+1} depends on x^{t+1}, which in turn depends on J_t; x^t and y^t are independent of J_t. x^t depends on the observed realizations of the random variable ξ_{t−1} = {J_1, · · · , J_{t−1}}. The following theorem shows that h(v*, v^t) decreases monotonically, and thus establishes the global convergence of PDMM.

Theorem 1 (Global Convergence) Let v^t = (x^t_j, y^t_i) be generated by PDMM (5)-(7) and v* = (x*_j ∈ X_j, y*_i) be a KKT point satisfying (15)-(16). Setting ν_i = 1 − 1/K̃_i and τ_i = K/(K̃_i(2J − K)), we have

0 ≤ E_{ξ_t} h(v*, v^{t+1}) ≤ E_{ξ_{t−1}} h(v*, v^t) , E_{ξ_t} R(x^{t+1}) → 0 . (23)

The following theorem establishes the iteration complexity of PDMM in an ergodic sense.
Theorem 2 (Iteration Complexity) Let (x^t_j, y^t_i) be generated by PDMM (5)-(7) and let x̄_T = (1/T) Σ_{t=1}^T x^t. Setting ν_i = 1 − 1/K̃_i and τ_i = K/(K̃_i(2J − K)), we have

E f(x̄_T) − f(x*) ≤ (J/K) { Σ_{i=1}^I (1/(2β_i ρ))‖y*_i‖²_2 + L̃_ρ(x¹, y¹) + (ρ/2)‖z* − z¹‖²_Q + Σ_{j=1}^J η_j B_{φ_j}(x*_j, x¹_j) } / T ,

E Σ_{i=1}^I β_i ‖A^r_i x̄_T − a_i‖²_2 ≤ 2 h(v*, v⁰) / (ρ T) ,

where β_i = K/(J K̃_i), Q is a positive semi-definite matrix, and the expectation is over J_t.

Figure 1: Comparison of the convergence of PDMM with ADMM methods in RPCA (residual vs. time, residual vs. iterations, and objective vs. time, for PDMM1-3, GSADMM, RBSUMM and sADMM).

Table 1: The best results of PDMM with tuned parameters τ_i, ν_i in RPCA.

Method | time (s) | iterations | residual (×10⁻⁵) | objective (log)
PDMM1 | 118.83 | 40 | 3.60 | 8.07
PDMM2 | 137.46 | 34 | 5.51 | 8.07
PDMM3 | 147.82 | 31 | 6.54 | 8.07
GSADMM | 163.09 | 28 | 6.84 | 8.07
RBSUMM | 206.96 | 141 | 8.55 | 8.07
sADMM | 731.51 | 139 | 9.73 | 8.07

Remark 2 PDMM converges at the same rate as ADMM and its variants. In Theorem 2, PDMM achieves the fastest convergence by setting J = K = 1, τ_i = 1, ν_i = 0, i.e., the entire matrix A is considered as a single block, in which case PDMM reduces to the method of multipliers. In this case, however, the resulting subproblem may be difficult to solve, as discussed in Section 1. Therefore, the number of blocks in PDMM depends on the trade-off between the number of subproblems and how efficiently each subproblem can be solved.

4 Experimental Results

In this section, we evaluate the performance of PDMM in solving robust principal component analysis (RPCA) and overlapping group lasso [28]. We compare PDMM with ADMM [2] or GSADMM (no theoretical guarantee), sADMM [17, 26], and RBSUMM [14]. Note that GSADMM includes BSUMM [14].
All experiments are implemented in Matlab and run sequentially. We run each experiment 10 times and report the average results. The stopping criterion is either that the residual is smaller than 10⁻⁴ or that the number of iterations exceeds 2000.

RPCA: RPCA is used to obtain a low-rank and sparse decomposition of a given matrix A corrupted by noise [5, 17]:

min (1/2)‖X₁‖²_F + γ₂‖X₂‖₁ + γ₃‖X₃‖_*   s.t.   A = X₁ + X₂ + X₃,   (24)

where A ∈ ℝ^{m×n}, X₁ is a noise matrix, X₂ is a sparse matrix and X₃ is a low-rank matrix. A = L + S + V is generated in the same way as in [17]. In this experiment, m = 1000, n = 5000 and the rank is 100. The number appended to PDMM denotes the number of blocks (K) chosen in PDMM; e.g., PDMM1 randomly updates one block. Figure 1 compares the convergence of PDMM with the ADMM methods. In PDMM, ρ = 1 and τ_i, ν_i are chosen according to (8), i.e., (τ_i, ν_i) = {(1/5, 0), (1/4, 1/2), (1/3, 1/3)} for PDMM1, PDMM2 and PDMM3, respectively. We choose the 'best' results for GSADMM (ρ = 1), RBSUMM (ρ = 1, α = 11ρ/(√t + 10)) and sADMM (ρ = 1). The PDMM variants perform better than RBSUMM and sADMM. Note that the publicly available code of sADMM does not have a dual update, i.e., τ_i = 0; sADMM would be the same as PDMM3 if τ_i = 1/3. Since τ_i = 0, sADMM is the slowest algorithm. Without tuning the parameters of PDMM, GSADMM converges faster than PDMM; note, however, that PDMM can run in parallel while GSADMM only runs sequentially. PDMM3 is faster than the two randomized versions of PDMM since the cost of the extra iterations in PDMM1 and PDMM2 surpasses the savings at each iteration. Of the two randomized one-block coordinate methods, PDMM1 converges faster than RBSUMM in terms of both the number of iterations and runtime.

The effect of τ_i, ν_i: We tuned the parameters τ_i, ν_i in the PDMM variants. The three randomized methods (RBSUMM, PDMM1 and PDMM2) choose the blocks cyclically instead of randomly. Table 1 compares the 'best' results of PDMM with the other ADMM methods.
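The three penalties in (24) each admit a cheap proximal map, which is what makes block-wise splitting attractive here. The sketch below is an illustrative numpy rendering (not the paper's Matlab code) of those three proximal operators: quadratic shrinkage for the noise term, entrywise soft-thresholding for the ℓ₁ term, and singular-value thresholding for the nuclear norm.

```python
import numpy as np

def prox_noise(V, t):
    """Prox of t * (1/2)||X||_F^2: simple shrinkage toward zero."""
    return V / (1.0 + t)

def prox_sparse(V, t):
    """Prox of t * ||X||_1: entrywise soft-thresholding."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def prox_lowrank(V, t):
    """Prox of t * ||X||_*: singular-value thresholding."""
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
```

Any of the compared splitting methods alternates such prox evaluations with a multiplier update; the per-block cost is dominated by the SVD in the low-rank prox.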
¹http://www.stanford.edu/~boyd/papers/prox_algs/matrix_decomp.html

In PDMM, (τ_i, ν_i) = {(1/2, 0), (1/3, 1/2), (1/2, 1/2)}. GSADMM converges with the smallest number of iterations, but the PDMM variants can converge faster than GSADMM in terms of runtime. The computation per iteration in GSADMM is slightly higher than in PDMM3 because GSADMM updates the sum X₁ + X₂ + X₃ whereas PDMM3 can reuse the sum. Therefore, if the numbers of iterations of the two methods are close, PDMM3 can be faster than GSADMM. PDMM1 and PDMM2 can be faster than PDMM3. By updating just one block, PDMM1 is the fastest algorithm and achieves the lowest residual.

Overlapping Group Lasso: We consider solving the overlapping group lasso problem [28]:

min_w (1/(2Lλ))‖Aw − b‖²₂ + Σ_{g∈G} d_g ‖w_g‖₂,   (25)

where A ∈ ℝ^{m×n}, w ∈ ℝ^n, and w_g ∈ ℝ^b is the vector of the overlapping group indexed by g; d_g is a positive weight for group g ∈ G. As shown in Section 2.3, (25) can be rewritten in the form (14). The data are generated in the same way as in [27, 9]: the elements of A are sampled from a normal distribution, b = Ax + ε with noise ε sampled from a normal distribution, and x_j = (−1)^j exp(−(j − 1)/100). In this experiment, m = 5000, the number of groups is L = 100, and d_g = 1/L, λ = L/5 in (25). The size of each group is 100 and the overlap is 10. The total number of blocks in PDMM and sADMM is J = 101. τ_i, ν_i in PDMM are computed according to (8).

Figure 2: Comparison of the convergence of PDMM and other methods in overlapping group lasso (objective vs. time, objective vs. iterations, and residual vs. time for varying K).

In Figure 2, the first two plots show the convergence of the objective in terms of the number of iterations and time. PDMM uses all 101 blocks and is the fastest algorithm. ADMM is the same as GSADMM in this problem, but is slower than PDMM.
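The synthetic setup just described (overlapping groups of length 100 with overlap 10, Gaussian A and noise, decaying alternating-sign signal) can be sketched as follows. This is a hedged numpy illustration, not the authors' Matlab code, and the reading of the loss scaling as 1/(2Lλ) follows the typeset form of (25).

```python
import numpy as np

def make_group_lasso_data(m, L, size=100, overlap=10, seed=0):
    """Generate (A, b, x, groups): L overlapping groups of length `size`,
    consecutive groups sharing `overlap` coordinates."""
    rng = np.random.default_rng(seed)
    starts = [g * (size - overlap) for g in range(L)]
    n = starts[-1] + size
    A = rng.standard_normal((m, n))
    j = np.arange(1, n + 1)
    x = (-1.0) ** j * np.exp(-(j - 1) / 100.0)   # x_j = (-1)^j exp(-(j-1)/100)
    b = A @ x + rng.standard_normal(m)
    groups = [np.arange(s, s + size) for s in starts]
    return A, b, x, groups

def objective(A, b, w, groups, d, lam, L):
    """Objective of (25): (1/(2 L lam)) ||Aw - b||^2 + sum_g d_g ||w_g||_2."""
    fit = 0.5 / (L * lam) * np.sum((A @ w - b) ** 2)
    return fit + sum(dg * np.linalg.norm(w[g]) for dg, g in zip(d, groups))
```

With the paper's settings (size 100, overlap 10, L = 100) this yields n = 9010 coordinates and J = 101 blocks after the variable-splitting reformulation.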
Since sADMM does not consider the sparsity, it uses τ_i = 1/(J+1), ν_i = 1 − 1/(J+1), leading to slow convergence. The two accelerated methods, PA-APG [27] and S-APG [9], are slower than PDMM and ADMM.

The effect of K: The third plot in Figure 2 shows PDMM with different numbers of blocks K. Although the complexity of each iteration is lowest when K = 1, PDMM then takes many more iterations than in the other cases and thus takes the longest time. As K increases, PDMM converges faster and faster. When K = 20, the runtime is already the same as when using all blocks. When K > 21, PDMM takes less time to converge than when using all blocks. The runtime of PDMM decreases as K increases from 21 to 61, but the speedup from 61 to 81 is negligible. We tried different sets of parameters for RBSUMM (step size ρ(i² + 1)/(i + t) with 0 ≤ i ≤ 5 and ρ = 0.01, 0.1, 1, or a sufficiently small step size) but could not observe convergence of the objective within 5000 iterations; the results are therefore not included here.

5 Conclusions

We proposed a randomized block coordinate variant of ADMM named the Parallel Direction Method of Multipliers (PDMM) to solve the class of problems of minimizing block-separable convex functions subject to linear constraints. PDMM considers the sparsity and the number of blocks to be updated when setting the step size. We show that two existing Jacobian ADMM methods are special cases of PDMM. We also use PDMM to solve overlapping block problems. The global convergence and the iteration complexity are established with constant step size. Experiments on robust PCA and overlapping group lasso show that PDMM is faster than existing methods.

Acknowledgment H. W. and A. B. acknowledge the support of NSF via IIS-1447566, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, IIS-0916750, and NASA grant NNX12AQ39A. H. W. acknowledges the support of a DDF (2013-2014) from the University of Minnesota. A. B. acknowledges support from IBM and Yahoo. Z.-Q.
Luo is supported in part by the US AFOSR via grant number FA9550-12-1-0340 and the National Science Foundation via grant number DMS-1015346.

References
[1] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] T. Cai, W. Liu, and X. Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106:594–607, 2011.
[5] E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58:1–37, 2011.
[6] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. Annals of Statistics, 40:1935–1967, 2012.
[7] C. Chen, B. He, Y. Ye, and X. Yuan. The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Preprint, 2013.
[8] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43:129–159, 2001.
[9] X. Chen, Q. Lin, S. Kim, J. G. Carbonell, and E. P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6:719–752, 2012.
[10] W. Deng, M. Lai, Z. Peng, and W. Yin. Parallel multi-block ADMM with o(1/k) convergence. ArXiv, 2014.
[11] Q. Fu, H. Wang, and A. Banerjee. Bethe-ADMM for tree decomposition based parallel MAP inference. In UAI, 2013.
[12] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximations.
Computers and Mathematics with Applications, 2:17–40, 1976.
[13] B. He, M. Tao, and X. Yuan. Alternating direction method with Gaussian back substitution for separable convex programming. SIAM Journal on Optimization, pages 313–340, 2012.
[14] M. Hong, T. Chang, X. Wang, M. Razaviyayn, S. Ma, and Z. Luo. A block successive upper bound minimization method of multipliers for linearly constrained convex optimization. Preprint, 2013.
[15] M. Hong and Z. Luo. On the linear convergence of the alternating direction method of multipliers. ArXiv, 2012.
[16] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[17] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1:123–231, 2014.
[18] P. Richtarik and M. Takac. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 2012.
[19] N. Z. Shor. Minimization Methods for Non-Differentiable Functions. Springer-Verlag, 1985.
[20] R. Tappenden, P. Richtarik, and B. Buke. Separable approximations and decomposition methods for the augmented Lagrangian. Preprint, 2013.
[21] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1–305, 2008.
[22] H. Wang and A. Banerjee. Online alternating direction method. In ICML, 2012.
[23] H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. In NIPS, 2014.
[24] H. Wang, A. Banerjee, C. Hsieh, P. Ravikumar, and I. Dhillon. Large scale distributed sparse precision estimation. In NIPS, 2013.
[25] H. Wang, A. Banerjee, and Z. Luo. Parallel direction method of multipliers. ArXiv, 2014.
[26] X. Wang, M. Hong, S. Ma, and Z. Luo. Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers. Preprint, 2013.
[27] Y. Yu.
Better approximation and faster algorithm using the proximal average. In NIPS, 2012.
[28] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37:3468–3497, 2009.
[29] Z. Zhou, X. Li, J. Wright, E. Candes, and Y. Ma. Stable principal component pursuit. In IEEE International Symposium on Information Theory, 2010.
Spectral Methods for Supervised Topic Models Yining Wang† Jun Zhu‡ †Machine Learning Department, Carnegie Mellon University, yiningwa@cs.cmu.edu ‡Dept. of Comp. Sci. & Tech.; Tsinghua National TNList Lab; State Key Lab of Intell. Tech. & Sys., Tsinghua University, dcszj@mail.tsinghua.edu.cn Abstract Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on either variational approximation or Monte Carlo sampling. This paper presents a novel spectral decomposition algorithm to recover the parameters of supervised latent Dirichlet allocation (sLDA) models. The Spectral-sLDA algorithm is provably correct and computationally efficient. We prove a sample complexity bound and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on a diverse range of synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the algorithm. 1 Introduction Topic modeling offers a suite of useful tools that automatically learn the latent semantic structure of a large collection of documents. Latent Dirichlet allocation (LDA) [9] represents one of the most popular topic models. The vanilla LDA is an unsupervised model built on input contents of documents. In many applications side information is available apart from raw contents, e.g., user-provided rating scores of an online review text. Such side signal usually provides additional information to reveal the underlying structures of the documents in study. There have been extensive studies on developing topic models that incorporate various side information, e.g., by treating it as supervision. 
Some representative models are supervised LDA (sLDA) [8] that captures a real-valued regression response for each document, multiclass sLDA [21] that learns with discrete classification responses, discriminative LDA (DiscLDA) [14] that incorporates classification response via discriminative linear transformations on topic mixing vectors, and MedLDA [22, 23] that employs a max-margin criterion to learn discriminative latent topic representations. Topic models are typically learned by finding maximum likelihood estimates (MLE) through local search or sampling methods [12, 18, 19], which may suffer from local optima. Much recent progress has been made on developing spectral decomposition [1, 2, 3] and nonnegative matrix factorization (NMF) [4, 5, 6, 7] methods to infer latent topic-word distributions. Instead of finding MLE estimates, which is a known NP-hard problem [6], these methods assume that the documents are i.i.d. sampled from a topic model, and attempt to recover the underlying model parameters. Compared to local search and sampling algorithms, these methods enjoy the advantage of being provably effective. In fact, sample complexity bounds have been proved to show that given a sufficiently large collection of documents, these algorithms can recover the model parameters accurately with a high probability. Although spectral decomposition (as well as NMF) methods have achieved increasing success in recovering latent variable models, their applicability is quite limited. For example, previous work has mainly focused on unsupervised latent variable models, leaving the broad family of supervised models (e.g., sLDA) largely unexplored. The only exception is [10] which presents a spectral method for mixtures of regression models, quite different from sLDA. Such ignorance is not a coincidence as supervised models impose new technical challenges. 
For instance, a direct application of previous techniques [1, 2] on sLDA cannot handle regression models with duplicate entries. In addition, the sample complexity bound gets much worse if we try to match entries in regression models with their corresponding topic vectors. On the practical side, few quantitative experimental results (if any at all) are available for spectral decomposition based methods on LDA models. In this paper, we extend the applicability of spectral learning methods by presenting a novel spectral decomposition algorithm to recover the parameters of sLDA models from empirical low-order moments estimated from the data. We provide a sample complexity bound and analyze the identifiability conditions. A key step in our algorithm is a power update step that recovers the regression model in sLDA. The method uses a newly designed empirical moment to recover regression model entries directly from the data and reconstructed topic distributions. It places no constraints on the underlying regression model and does not increase the sample complexity much. We also provide thorough experiments on both synthetic and real-world datasets to demonstrate the practical effectiveness of our proposed algorithm. By combining our spectral recovery algorithm with a Gibbs sampling procedure, we show superior performance in terms of language modeling, prediction accuracy and running time compared to traditional inference algorithms.

2 Preliminaries

We first overview the basics of sLDA, orthogonal tensor decomposition, and the notations to be used.

2.1 Supervised LDA

Latent Dirichlet allocation (LDA) [9] is a generative model for topic modeling of text documents. It assumes k different topics with topic-word distributions µ₁, · · · , µ_k ∈ Δ^{V−1}, where V is the vocabulary size and Δ^{V−1} denotes the probability simplex of a V-dimensional random vector. For a document, LDA models a topic mixing vector h ∈ Δ^{k−1} as a probability distribution over the k topics.
A conjugate Dirichlet prior with parameter α is imposed on the topic mixing vectors. A bag-of-words model is then adopted, which generates each word in the document based on h and the topic-word vectors µ. Supervised latent Dirichlet allocation (sLDA) [8] incorporates an extra response variable y ∈ ℝ for each document. The response variable is modeled by a linear regression model η ∈ ℝ^k on either the topic mixing vector h or the average topic assignment vector z̄, where z̄_i = (1/m) Σ_j 1[z_j = i] with m the number of words in the document. The noise is assumed to be Gaussian with zero mean and variance σ². Fig. 1 shows the graph structure of the two sLDA variants mentioned above. Although previous work has mainly focused on model (b), which is convenient for Gibbs sampling and variational inference, we consider model (a) because it considerably simplifies our spectral algorithm and analysis. One may assume that whenever a document is not too short, the empirical distribution of its word topic assignments should be close to the document's topic mixing vector. Such a scheme was adopted to learn sparse topic coding models [24], and has demonstrated promising results in practice.

2.2 High-order tensor product and orthogonal tensor decomposition

A real p-th order tensor A ∈ ⊗_{i=1}^p ℝ^{n_i} belongs to the tensor product of the Euclidean spaces ℝ^{n_i}. Generally we assume n₁ = n₂ = · · · = n_p = n, and we can identify each coordinate of A by a p-tuple (i₁, · · · , i_p), where i₁, · · · , i_p ∈ [n]. For instance, a p-th order tensor is a vector when p = 1 and a matrix when p = 2. We can also view a p-th order tensor A as a multilinear mapping. For A ∈ ⊗^p ℝ^n and matrices X₁, · · · , X_p ∈ ℝ^{n×m}, the mapping A(X₁, · · · , X_p) is a p-th order tensor in ⊗^p ℝ^m, with [A(X₁, · · · , X_p)]_{i₁,··· ,i_p} ≜ Σ_{j₁,··· ,j_p ∈ [n]} A_{j₁,··· ,j_p} [X₁]_{j₁,i₁} [X₂]_{j₂,i₂} · · · [X_p]_{j_p,i_p}. Consider some concrete examples of such a multilinear mapping. When A, X₁, X₂ are matrices, we have A(X₁, X₂) = X₁^⊤ A X₂.
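The multilinear mapping just defined is a one-line tensor contraction. The sketch below (an illustrative numpy rendering, not from the paper) specializes it to second- and third-order tensors via einsum, matching the coordinate formula above.

```python
import numpy as np

def multilinear2(A, X1, X2):
    """Matrix case: A(X1, X2) = X1^T A X2."""
    return X1.T @ A @ X2

def multilinear3(A, X1, X2, X3):
    """[A(X1, X2, X3)]_{abc} = sum_{jkl} A_{jkl} [X1]_{ja} [X2]_{kb} [X3]_{lc}."""
    return np.einsum('jkl,ja,kb,lc->abc', A, X1, X2, X3)
```

For example, whitening a third-order tensor, written M₃(W, W, W) in the text, is `multilinear3(M3, W, W, W)`.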
Similarly, when A is a matrix and x is a vector, A(I, x) = Ax. An orthogonal tensor decomposition of a tensor A ∈ ⊗^p ℝ^n is a collection of orthonormal vectors {v_i}_{i=1}^k and scalars {λ_i}_{i=1}^k such that A = Σ_{i=1}^k λ_i v_i^{⊗p}. Without loss of generality, we assume the λ_i are nonnegative when p is odd. Although orthogonal tensor decomposition in the matrix case can be done efficiently by singular value decomposition (SVD), it has several delicate issues in higher-order tensor spaces [2]. For instance, tensors may not have unique decompositions, and an orthogonal decomposition may not exist for every symmetric tensor [2]. Such issues are further complicated when only noisy estimates of the desired tensors are available. For these reasons, we need more advanced techniques to handle high-order tensors.

Figure 1: Plate notations for two variants of sLDA: (a) y_d = η^⊤ h_d + ε_d; (b) y_d = η^⊤ z̄_d + ε_d.

In this paper, we will apply the robust tensor power method [2] to recover robust eigenvalues and eigenvectors of an (estimated) third-order tensor. The algorithm recovers eigenvalues and eigenvectors up to an absolute error ε, while running in polynomial time with respect to the tensor dimension and log(1/ε). Further details and analysis of the robust tensor power method are presented in Appendix A.2 and [2].

2.3 Notations

Throughout, we use v^{⊗p} ≜ v ⊗ v ⊗ · · · ⊗ v to denote the p-th order tensor generated by a vector v. We use ‖v‖ = √(Σ_i v_i²) to denote the Euclidean norm of a vector v, ‖M‖ to denote the spectral norm of a matrix M, and ‖T‖ to denote the operator norm of a high-order tensor. ‖M‖_F = √(Σ_{i,j} M_{ij}²) denotes the Frobenius norm of a matrix. We use an indicator vector x ∈ ℝ^V to represent a word in a document; e.g., for the i-th word in the vocabulary, x_i = 1 and x_j = 0 for all j ≠ i.
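Putting Secs. 2.1 and 2.3 together, one draw from sLDA variant (a) can be sketched as follows. This is a hypothetical numpy helper for intuition only; the symbols α, O, η, σ are as in the text, and words are returned as vocabulary ids (the indicator vector for word w is the w-th standard basis vector of ℝ^V).

```python
import numpy as np

def sample_slda_doc(alpha, O, eta, sigma, m, rng):
    """Draw one document and response from sLDA variant (a): y = eta^T h + noise."""
    V, k = O.shape
    h = rng.dirichlet(alpha)                 # topic mixing vector, h in the simplex
    z = rng.choice(k, size=m, p=h)           # per-word topic assignments
    words = np.array([rng.choice(V, p=O[:, zi]) for zi in z])  # word ids
    y = eta @ h + rng.normal(0.0, sigma)     # Gaussian regression response
    return words, y
```

Variant (b) would instead regress on the empirical assignment vector z̄; for long documents the two nearly coincide, which is the approximation argued for above.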
We also use O ≜ (µ₁, µ₂, · · · , µ_k) ∈ ℝ^{V×k} to denote the topic distribution matrix, and Õ ≜ (µ̃₁, µ̃₂, · · · , µ̃_k) to denote the canonical version of O, where µ̃_i = √(α_i/(α₀(α₀+1))) µ_i with α₀ = Σ_{i=1}^k α_i.

3 Spectral Parameter Recovery

We now present a novel spectral parameter recovery algorithm for sLDA. The algorithm consists of two key components: the orthogonal tensor decomposition of observable moments to recover the topic distribution matrix O, and a power update method to recover the linear regression model η. We elaborate on these techniques and give a rigorous theoretical analysis in the following sections.

3.1 Moments of observable variables

Our spectral decomposition method recovers the topic distribution matrix O and the linear regression model η by manipulating moments of observable variables. In Definition 1, we define a list of moments on random variables from the underlying sLDA model.

Definition 1. We define the following moments of observable variables:

M₁ = E[x₁],   M₂ = E[x₁ ⊗ x₂] − α₀/(α₀+1) M₁ ⊗ M₁,   (1)

M₃ = E[x₁ ⊗ x₂ ⊗ x₃] − α₀/(α₀+2) (E[x₁ ⊗ x₂ ⊗ M₁] + E[x₁ ⊗ M₁ ⊗ x₂] + E[M₁ ⊗ x₁ ⊗ x₂]) + 2α₀²/((α₀+1)(α₀+2)) M₁ ⊗ M₁ ⊗ M₁,   (2)

M_y = E[y x₁ ⊗ x₂] − α₀/(α₀+2) (E[y] E[x₁ ⊗ x₂] + E[x₁] ⊗ E[y x₂] + E[y x₁] ⊗ E[x₂]) + 2α₀²/((α₀+1)(α₀+2)) E[y] M₁ ⊗ M₁.   (3)

Note that the moments M₁, M₂ and M₃ were also defined and used in previous work [1, 2] for parameter recovery in LDA models. For the sLDA model, we need to define a new moment M_y in order to recover the linear regression model η. The moments are based on observable variables in the sense that they can be estimated from i.i.d. sampled documents. For instance, M₁ can be estimated by computing the empirical distribution of all words, and M₂ can be estimated using M₁ and word co-occurrence frequencies.
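As a minimal illustration of such plug-in estimation, the sketch below forms M₁ and M₂ of Definition 1 from only the first two words of each document. This is a simplification for clarity, not the paper's estimator, which may average over all word pairs.

```python
import numpy as np

def empirical_M1_M2(docs, V, alpha0):
    """Plug-in estimates of M1 and M2 (Definition 1).

    docs: list of word-id sequences, each of length >= 2.
    """
    N = len(docs)
    M1 = np.zeros(V)
    E12 = np.zeros((V, V))          # empirical E[x1 (x) x2]
    for d in docs:
        M1[d[0]] += 1.0 / N
        E12[d[0], d[1]] += 1.0 / N
    M2 = E12 - alpha0 / (alpha0 + 1.0) * np.outer(M1, M1)
    return M1, M2
```

The third-order moment M̂₃ is assembled analogously from word triples, with the correction terms of (2).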
Though the moments in the above forms look complicated, we can apply elementary calculations based on the conditional independence structure of sLDA to significantly simplify them and, more importantly, to connect them with the model parameters to be recovered, as summarized in Proposition 1. The proof is deferred to Appendix B.

Proposition 1. The moments can be expressed using the model parameters as:

M₂ = 1/(α₀(α₀+1)) Σ_{i=1}^k α_i µ_i ⊗ µ_i,   M₃ = 2/(α₀(α₀+1)(α₀+2)) Σ_{i=1}^k α_i µ_i ⊗ µ_i ⊗ µ_i,   (4)

M_y = 2/(α₀(α₀+1)(α₀+2)) Σ_{i=1}^k α_i η_i µ_i ⊗ µ_i.   (5)

3.2 Simultaneous diagonalization

Proposition 1 shows that the moments in Definition 1 are all weighted sums of tensor products of {µ_i}_{i=1}^k from the underlying sLDA model. One idea to reconstruct {µ_i}_{i=1}^k is to perform simultaneous diagonalization on tensors of different orders. The idea has been used in a number of recent developments of spectral methods for latent variable models [1, 2, 10]. Specifically, we first whiten the second-order tensor M₂ by finding a matrix W ∈ ℝ^{V×k} such that W^⊤ M₂ W = I_k. This whitening procedure is possible whenever the topic distribution vectors {µ_i}_{i=1}^k are linearly independent (and hence M₂ has rank k). The whitening procedure and the linear independence assumption also imply that {Wµ_i}_{i=1}^k are orthogonal vectors (see Appendix A.2 for details), and they can subsequently be recovered by performing an orthogonal tensor decomposition on the simultaneously whitened third-order tensor M₃(W, W, W). Finally, by multiplying by the pseudo-inverse W⁺ of the whitening matrix, we obtain the topic distribution vectors {µ_i}_{i=1}^k. It should be noted that Jennrich's algorithm [13, 15, 17] could recover {µ_i}_{i=1}^k directly from the third-order tensor M₃ alone when {µ_i}_{i=1}^k are linearly independent. However, we still adopt the above simultaneous diagonalization framework because the intermediate vectors {Wµ_i}_{i=1}^k play a vital role in the recovery procedure of the linear regression model η.
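One standard way to build W (one of several valid choices, shown here as an illustrative numpy sketch rather than the paper's implementation) is from the top-k eigenpairs of the positive semi-definite matrix M₂:

```python
import numpy as np

def whitening_matrix(M2, k):
    """Return W in R^{V x k} with W^T M2 W = I_k, via a rank-k eigendecomposition."""
    vals, vecs = np.linalg.eigh(M2)
    idx = np.argsort(vals)[::-1][:k]         # top-k eigenvalues of PSD M2
    return vecs[:, idx] / np.sqrt(vals[idx]) # U_k diag(s_k)^{-1/2}
```

With U_k the top eigenvectors and s_k the eigenvalues, W = U_k diag(s_k)^{−1/2} satisfies W^⊤M₂W = I_k exactly when M₂ has rank at least k, which is the linear independence condition above.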
3.3 The power update method

Although the linear regression model η can be recovered in a similar manner by performing simultaneous diagonalization on M₂ and M_y, such a method has several disadvantages, thereby calling for novel solutions. First, after obtaining the entry values {η_i}_{i=1}^k, we need to match them to the previously recovered topic distributions {µ_i}_{i=1}^k. This can be done easily when we have access to the true moments, but becomes difficult when only estimates of the observable tensors are available, because the estimated moments may not share the same singular vectors due to sampling noise. A more serious problem is that when η has duplicate entries, the orthogonal decomposition of M_y is no longer unique. Though a randomized strategy similar to the one used in [1] might solve the problem, it could substantially increase the sample complexity [2] and render the algorithm impractical. We develop a power update method to resolve the above difficulties. Specifically, after obtaining the whitened (orthonormal) vectors {v_i} ≜ {c_i · Wµ_i}¹, we recover the entry η_i of the linear regression model directly by computing the power update v_i^⊤ M_y(W, W) v_i. In this way, the matching problem is automatically solved, because we know which topic distribution vector µ_i is used when recovering η_i. Furthermore, the singular values (corresponding to the entries of η) do not need to be distinct, because we do not use any uniqueness properties of the SVD of M_y(W, W). As a result, our proposed algorithm works for any linear model η.

3.4 Parameter recovery algorithm

An outline of our parameter recovery algorithm for sLDA (Spectral-sLDA) is given in Alg. 1. First, empirical estimates of the observable moments in Definition 1 are computed from the given documents. The simultaneous diagonalization method is then used to reconstruct the topic distribution matrix O and its prior parameter α.
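The power update of Sec. 3.3 is a one-liner once W and the whitened eigenvectors are in hand. In this toy sketch (illustrative numpy; exact moments rather than empirical estimates, so recovery is exact), M_y is built from (5) with µ_i = e_i, and the entries of η come back exactly:

```python
import numpy as np

def power_update_eta(My, W, vs, alpha0):
    """eta_i = (alpha0 + 2)/2 * v_i^T My(W, W) v_i, with My(W, W) = W^T My W."""
    MyWW = W.T @ My @ W
    return np.array([(alpha0 + 2) / 2.0 * v @ MyWW @ v for v in vs])
```

Because each η_i is read off against its own v_i (and hence its own µ_i), no matching step between {η_i} and {µ_i} is needed, and duplicate entries in η cause no trouble.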
After obtaining O = (µ₁, · · · , µ_k), we use the power update method introduced in the previous section to recover the linear regression model η. Alg. 1 admits three hyper-parameters: α₀, L and T. α₀ is defined as the sum of all entries of the prior parameter α. Following the conventions in [1, 2], we assume that α₀ is known a priori and use this value to perform parameter estimation. It should be noted that this is a mild assumption, as in practice usually a homogeneous vector α is assumed and the entire vector is known [20]. The L and T parameters control the number of iterations in the robust tensor power method. In general, the robust tensor power method runs in O(k³LT) time. To ensure sufficient recovery accuracy,
Such time and space complexities are clearly prohibitive for real applications, where the vocabulary usually contains tens of thousands of terms. However, we can employ a trick similar as in [11] to speed up the moment computation. We first note that only the whitened tensor c M3(c W, c W, c W) is needed in our algorithm, which only takes O(k3) storage. Another observation is that the most difficult term in c M3 can be written as Pr i=1 ciui,1 ⊗ui,2 ⊗ui,3, where r is proportional to N and ui,· contains at most M non-zero entries. This allows us to compute c M3(c W, c W, c W) in O(NMk) time by computing Pr i=1 ci(W ⊤ui,1) ⊗(W ⊤ui,2) ⊗(W ⊤ui,3). Appendix B.2 provides more details about this speed-up trick. The overall time complexity is O(NM(M + k2) + V 2 + k3LT) and the space complexity is O(V 2 + k3). 4 Sample Complexity Analysis We now analyze the sample complexity of Alg. 1 in order to achieve ε-error with a high probability. For clarity, we focus on presenting the main results, while deferring the proof details to Appendix A, including the proofs of important lemmas that are needed for the main theorem. Theorem 1. Let σ1( eO) and σk( eO) be the largest and the smallest singular values of the canonical topic distribution matrix eO. Define λmin ≜ 2 α0+2 q α0(α0+1) αmax and λmax ≜ 2 α0+2 q α0(α0+1) αmin with αmax and αmin the largest and the smallest entries of α. Suppose bµ, bα and bη are the outputs of Algorithm 1, and L is at least a linear function of k. Fix δ ∈(0, 1). For any small error-tolerance parameter ε > 0, if Algorithm 1 is run with parameter T = Ω(log(k) + log log(λmax/ε)) on N i.i.d. 
sampled documents (each containing at least 3 words) with N ≥ max(n₁, n₂, n₃), where

n₁ = C₁ · (1 + √log(6/δ))² · α₀²(α₀+1)²/α_min,

n₂ = C₂ · (1 + √log(15/δ))²/(ε² σ_k(Õ)⁴) · max{(‖η‖ + Φ⁻¹(δ/60σ))², α_max² σ₁(Õ)²},

n₃ = C₃ · (1 + √log(9/δ))²/σ_k(Õ)¹⁰ · max{1/ε², k²/λ_min²},

and C₁, C₂ and C₃ are universal constants, then with probability at least 1 − δ, there exists a permutation π : [k] → [k] such that for every topic i, the following holds:

1. |α_i − α̂_{π(i)}| ≤ (4α₀(α₀+1)(λ_max + 5ε))/((α₀+2)² λ_min² (λ_min − 5ε)²) · 5ε, if λ_min > 5ε;

2. ‖µ_i − µ̂_{π(i)}‖ ≤ 3σ₁(Õ) (8α_max/λ_min + 5(α₀+2)/2 + 1) ε;

3. |η_i − η̂_{π(i)}| ≤ (‖η‖/λ_min + (α₀+2)) ε.

Figure 2: Reconstruction errors of Alg. 1. X axis denotes the training size. Error bars denote the standard deviations measured on 3 independent trials under each setting.

In brief, the proof is based on matrix perturbation lemmas (see Appendix A.1) and an analysis of the orthogonal tensor decomposition methods (including SVD and the robust tensor power method) performed on inaccurate tensor estimations (see Appendix A.2). The sample complexity lower bound consists of three terms, n₁ to n₃. The n₃ term comes from the sample complexity bound for the robust tensor power method [2]; the (‖η‖ + Φ⁻¹(δ/60σ))² term in n₂ characterizes the recovery accuracy for the linear regression model η, and the α_max² σ₁(Õ)² term arises when we try to recover the topic distribution vectors µ; finally, the term n₁ is required so that some technical conditions are met. The n₁ term does not depend on either k or σ_k(Õ), and can largely be neglected in practice. An important implication of Theorem 1 is that it provides a sufficient condition for a supervised LDA model to be identifiable, as shown in Remark 1.
To some extent, Remark 1 is the best identifiability result possible under our inference framework, because it makes no restriction on the linear regression model η, and the linear independence assumption is unavoidable without making further assumptions on the topic distribution matrix O.

Remark 1. Given a sufficiently large number of i.i.d. sampled documents with at least 3 words per document, a supervised LDA model M = (α, µ, η) is identifiable if α₀ = Σ_{i=1}^k α_i is known and {µ_i}_{i=1}^k are linearly independent.

We also remark on the indirect quantities appearing in Theorem 1 (e.g., σ_k(Õ)) and give a simplified sample complexity bound for some special cases; these can be found in Appendix A.4.

5 Experiments

5.1 Datasets description and algorithm implementation details

We perform experiments on both synthetic and real-world datasets. The synthetic data are generated in a similar manner as in [22], with a fixed vocabulary of size V = 500. We generate the topic distribution matrix O by first sampling each entry from a uniform distribution and then normalizing every column of O. The linear regression model η is sampled from a standard Gaussian distribution. The prior parameter α is assumed to be homogeneous, i.e., α = (1/k, · · · , 1/k). Documents and response variables are then generated from the sLDA model specified in Sec. 2.1. For real-world data, we use the large-scale dataset built on Amazon movie reviews [16] to demonstrate the practical effectiveness of our algorithm. The dataset contains 7,911,684 movie reviews written by 889,176 users from Aug 1997 to Oct 2012. Each movie review is accompanied by a score from 1 to 5 indicating how much the user likes a particular movie. The median number of words per review is 101. A vocabulary with V = 5,000 terms is built by selecting high-frequency words. We also pre-process the dataset by shifting the review scores so that they have zero mean. Both Gibbs sampling for the sLDA model in Fig.
1 (b) and the proposed spectral recovery algorithm are implemented in C++. For our spectral algorithm, the hyperparameters L and T are set to 100, which is sufficiently large for all settings in our experiments. Since Alg. 1 can only recover the topic model itself, we use Gibbs sampling to iteratively sample topic mixing vectors h and topic assignments z for each word in order to perform prediction on a held-out dataset. 5.2 Convergence of reconstructed model parameters We demonstrate how the sLDA model reconstructed by Alg. 1 converges to the underlying true model when more observations are available. Fig. 2 presents the 1-norm reconstruction errors of α, η and µ. The number of topics k is set to 20 and the number of words per document (i.e., M) is set to 250 and 500. Figure 3: Mean square errors and negative per-word log-likelihood of Alg. 1 and Gibbs-sLDA, for k = 20 and k = 50. Each document contains M = 500 words. The X axis denotes the training size (×10³). Figure 4: pR2 scores and negative per-word log-likelihood for α ∈ {0.01, 0.1, 1.0} (methods: Gibbs-sLDA, Spec-sLDA, Hybrid). The X axis indicates the number of topics. Error bars indicate the standard deviation of 5-fold cross-validation.
Since Spectral-sLDA can only recover topic distributions up to a permutation over [k], a minimum-weight graph matching was computed between O and Ô to find an optimal permutation. Fig. 2 shows that the reconstruction errors for all the parameters go down rapidly as we obtain more documents. Furthermore, though Theorem 1 does not involve the number of words per document, the simulation results demonstrate a significant improvement when more words are observed in each document, which nicely complements the theoretical analysis. 5.3 Prediction accuracy and per-word likelihood We compare the prediction accuracy and per-word likelihood of Spectral-sLDA and Gibbs-sLDA on both synthetic and real-world datasets. On the synthetic dataset, the regression error is measured by the mean square error (MSE), and the per-word log-likelihood is defined as log₂ p(w|h, O) = log₂ Σ_{k=1}^K p(w|z = k, O) p(z = k|h). The hyper-parameters used in our Gibbs sampling implementation are the same as the ones used to generate the datasets. Fig. 3 shows that Spectral-sLDA consistently outperforms Gibbs-sLDA. Our algorithm also enjoys the advantage of being less variable, as indicated by the curves and error bars. Moreover, when the number of training documents is sufficiently large, the performance of the reconstructed model is very close to the underlying true model², which implies that Alg. 1 can correctly identify an sLDA model from its observations, therefore supporting our theory. We also test both algorithms on the large-scale Amazon movie review dataset. The quality of the prediction is assessed with predictive R2 (pR2) [8], a normalized version of MSE, defined as pR2 ≜ 1 − (Σ_i (yi − ŷi)²)/(Σ_i (yi − ȳ)²), where ŷi is the estimate, yi is the truth, and ȳ is the average true value. We report the results under various settings of α and k in Fig. 4, with the σ hyper-parameter of Gibbs-sLDA selected via cross-validation on a smaller subset of documents.
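The permutation-matching step described above can be sketched in a few lines. This is an illustrative sketch rather than the authors' implementation: it assumes the topic matrices are V×k numpy arrays with topics as columns, and uses scipy's Hungarian-algorithm solver (`linear_sum_assignment`) as the minimum-weight matcher, with a 1-norm column distance as the cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_topics(O_true, O_hat):
    """Find the permutation pi minimizing the total 1-norm
    distance between matched columns of O_true and O_hat."""
    k = O_true.shape[1]
    # cost[i, j] = ||mu_i - mu_hat_j||_1
    cost = np.array([[np.abs(O_true[:, i] - O_hat[:, j]).sum()
                      for j in range(k)] for i in range(k)])
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return cols, cost[rows, cols].sum()        # pi and total matching cost

# toy check: a shuffled copy should be matched back exactly
rng = np.random.default_rng(0)
O = rng.random((5, 3)); O /= O.sum(axis=0)
perm = np.array([2, 0, 1])
pi, err = align_topics(O, O[:, perm])
print(pi, err)
```

Here the matched permutation recovers a shuffled copy exactly, so the total matching cost is zero.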
Apart from Gibbs-sLDA and Spectral-sLDA, we also test the performance of a hybrid algorithm which performs Gibbs sampling using models reconstructed by Spectral-sLDA as initializations. Fig. 4 shows that in general Spectral-sLDA does not perform as well as Gibbs sampling. One possible reason is that real-world datasets are not exact i.i.d. samples from an underlying sLDA model. However, a significant improvement can be observed when the Gibbs sampler is initialized with models reconstructed by Spectral-sLDA instead of random initializations. This is because Spectral-sLDA helps avoid the local optimum problem of local search methods like Gibbs sampling. Similar improvements for spectral methods were also observed in previous papers [10]. ² Due to the randomness in the data generating process, the true model has a non-zero prediction error.

Table 1: Training time of Gibbs-sLDA and Spectral-sLDA, measured in minutes. k is the number of topics and n is the number of documents used in training.

                          k = 10                            k = 50
n (×10⁴)       1     5     10    50    100      1     5     10    50     100
Gibbs-sLDA    0.6   3.0   6.0   30.5  61.1     2.9   14.3  28.2  145.4  281.8
Spec-sLDA     1.5   1.6   1.7   2.9   4.3      3.1   3.6   4.3   9.5    16.2

Table 2: Prediction accuracy and per-word log-likelihood of Gibbs-sLDA and the hybrid algorithm. The initialization solution is obtained by running Alg. 1 on a collection of 1 million documents, while n is the number of documents used in Gibbs sampling. k = 8 topics are used. Standard deviations are given in parentheses.

                          predictive R2                  Negative per-word log-likelihood
log10 n         3       4       5       6         3       4       5       6
Gibbs-sLDA     0.00    0.04    0.11    0.14      7.72    7.55    7.45    7.42
              (0.01)  (0.02)  (0.02)  (0.01)    (0.01)  (0.01)  (0.01)  (0.01)
Hybrid         0.02    0.17    0.18    0.18      7.70    7.49    7.40    7.36
              (0.01)  (0.03)  (0.03)  (0.03)    (0.01)  (0.02)  (0.01)  (0.01)

Note that for k > 8 the performance of Spectral-sLDA significantly deteriorates. This phenomenon can be explained by the nature of Spectral-sLDA itself: one crucial step in Alg.
1 is to whiten the empirical moment M̂2, which is only possible when the underlying topic matrix O has full rank. For the Amazon movie review dataset, it is impossible to whiten M̂2 when the underlying model contains more than 8 topics. This interesting observation shows that the Spectral-sLDA algorithm can be used for model selection, avoiding the overfitting caused by using too many topics. 5.4 Time efficiency The proposed spectral recovery algorithm is very time efficient because it avoids the time-consuming iterative steps of traditional inference and sampling methods. Furthermore, empirical moment computation, the most time-consuming part of Alg. 1, consists of only elementary operations and can be easily optimized. Table 1 compares the training time of Gibbs-sLDA and Spectral-sLDA and shows that our proposed algorithm is over 15 times faster than Gibbs sampling, especially for large document collections. Although both algorithms are implemented in a single-threaded manner, Spectral-sLDA is very easy to parallelize because, unlike iterative local search methods, the moment computation step in Alg. 1 does not require much communication or synchronization. There might be concerns about the claimed time efficiency, however, because significant performance improvements could only be observed when Spectral-sLDA is used together with Gibbs-sLDA, and the Gibbs sampling step might slow down the entire procedure. To see why this is not the case, we show in Table 2 that in order to obtain high-quality models and predictions, only a very small collection of documents is needed after the model reconstruction of Alg. 1. In contrast, Gibbs-sLDA with random initialization requires more data to reach reasonable performance.
To get a more intuitive idea of how fast our proposed method is, we combine Tables 1 and 2 to see that by running Spectral-sLDA on 10⁶ documents and then post-processing the reconstructed models using Gibbs sampling on only 10⁴ documents, we obtain a pR2 score of 0.17 in 5.8 minutes, while Gibbs-sLDA takes over an hour to process a million documents with a pR2 score of only 0.14. Similarly, the hybrid method takes only 10 minutes to reach a per-word likelihood comparable to that of the Gibbs sampling algorithm, which requires more than an hour of running time. 6 Conclusion We propose a novel spectral decomposition based method to reconstruct supervised LDA models from labeled documents. Although our work has mainly focused on tensor decomposition based algorithms, it is an interesting open problem whether NMF based methods could also be applied to obtain better sample complexity bounds and superior performance in practice for supervised topic models. Acknowledgement The work was done when Y.W. was at Tsinghua. The work is supported by the National Basic Research Program of China (No. 2013CB329403), National NSF of China (Nos. 61322308, 61332007), and Tsinghua University Initiative Scientific Research Program (No. 20121088071). References [1] A. Anandkumar, D. Foster, D. Hsu, S. Kakade, and Y.-K. Liu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. arXiv:1204.6703, 2012. [2] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv:1210.7559, 2012. [3] A. Anandkumar, D. Hsu, and S. Kakade. A method of moments for mixture models and hidden Markov models. arXiv:1203.0683, 2012. [4] S. Arora, R. Ge, Y. Halpern, D. Mimno, and A. Moitra. A practical algorithm for topic modeling with provable guarantees. In ICML, 2013. [5] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization provably. In STOC, 2012. [6] S. Arora, R. Ge, and A. Moitra.
Learning topic models: going beyond SVD. In FOCS, 2012. [7] V. Bittorf, B. Recht, C. Re, and J. Tropp. Factoring nonnegative matrices with linear programs. In NIPS, 2012. [8] D. Blei and J. McAuliffe. Supervised topic models. In NIPS, 2007. [9] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, (3):993–1022, 2003. [10] A. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. In ICML, 2013. [11] S. Cohen and M. Collins. Tensor decomposition for fast parsing with latent-variable PCFGs. In NIPS, 2012. [12] M. Hoffman, F. R. Bach, and D. M. Blei. Online learning for latent Dirichlet allocation. In NIPS, 2010. [13] J. Kruskal. Three-way arrays: Rank and uniqueness of trilinear decompositions, with applications to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95–138, 1977. [14] S. Lacoste-Julien, F. Sha, and M. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, 2008. [15] S. Leurgans, R. Ross, and R. Abel. A decomposition for three-way arrays. SIAM Journal on Matrix Analysis and Applications, 14(4):1064–1083, 1993. [16] J. McAuley and J. Leskovec. From amateurs to connoisseurs: Modeling the evolution of user expertise through online reviews. In WWW, 2013. [17] A. Moitra. Algorithmic aspects of machine learning. 2014. [18] I. Porteous, D. Newman, A. Ihler, A. Asuncion, P. Smyth, and M. Welling. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In SIGKDD, 2008. [19] R. Redner and H. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2):195–239, 1984. [20] M. Steyvers and T. Griffiths. Latent semantic analysis: a road to meaning, chapter Probabilistic topic models. Lawrence Erlbaum, 2007. [21] C. Wang, D. Blei, and F.-F. Li. Simultaneous image classification and annotation. In CVPR, 2009. [22] J. Zhu, A. Ahmed, and E. Xing.
MedLDA: Maximum margin supervised topic models. Journal of Machine Learning Research, (13):2237–2278, 2012. [23] J. Zhu, N. Chen, H. Perkins, and B. Zhang. Gibbs max-margin topic models with data augmentation. Journal of Machine Learning Research, (15):1073–1110, 2014. [24] J. Zhu and E. Xing. Sparse topic coding. In UAI, 2011.
Exclusive Feature Learning on Arbitrary Structures via ℓ1,2-norm Deguang Kong1, Ryohei Fujimaki2, Ji Liu3, Feiping Nie1, Chris Ding1 1 Dept. of Computer Science, University of Texas Arlington, TX, 76019; 2 NEC Laboratories America, Cupertino, CA, 95014; 3 Dept. of Computer Science, University of Rochester, Rochester, NY, 14627 Email: doogkong@gmail.com, rfujimaki@nec-labs.com, jliu@cs.rochester.edu, feipingnie@gmail.com, chqding@uta.edu Abstract Group LASSO is widely used to enforce structural sparsity, achieving sparsity at the inter-group level. In this paper, we propose a new formulation called “exclusive group LASSO”, which brings out sparsity at the intra-group level in the context of feature selection. The proposed exclusive group LASSO is applicable to arbitrary feature structures, whether overlapping or non-overlapping. We provide an analysis of the properties of exclusive group LASSO, and propose an effective iteratively re-weighted algorithm to solve the corresponding optimization problem, with rigorous convergence analysis. We show applications of exclusive group LASSO to uncorrelated feature selection. Extensive experiments on both synthetic and real-world datasets validate the proposed method. 1 Introduction Structured-sparsity-inducing regularization terms [1, 8] have recently been widely used for feature learning, due to the inherent sparse structure of real-world data. Both theoretical and empirical studies have demonstrated the power of structured sparsity for feature learning, e.g., Lasso [24], group LASSO [29], exclusive LASSO [31], fused LASSO [25], and generalized LASSO [22]. By trading off the regularization term against the loss function, the sparsity-induced optimization problem is expected to fit the data with better statistical properties.
Moreover, the results obtained from sparse learning are easier to interpret, which gives insights into many practical applications, such as gene-expression analysis [9], human activity recognition [14], electronic medical records analysis [30], etc. Motivation Of all the above sparse learning methods, group LASSO [29] is known to enforce sparsity on variables at an inter-group level, where variables from different groups compete to survive. Our work is motivated by a simple observation: in practice, not only do features from different groups compete to survive (i.e., group LASSO), but features within a seemingly cohesive group also compete with each other. The winner features in a group are set to large values, while the loser features are set to zero. Therefore, this leads to sparsity at the intra-group level. To make a distinction from standard LASSO and group LASSO, we call it the “exclusive group LASSO” regularizer. In the “exclusive group LASSO” regularizer, intra-group sparsity is achieved via the ℓ1 norm, while inter-group non-sparsity is achieved via the ℓ2 norm. Essentially, standard group LASSO achieves sparsity via the ℓ2,1 norm, while the proposed exclusive group LASSO achieves sparsity via the ℓ1,2 norm. An example of exclusive group LASSO is shown in Fig. 1 via Eq.(2). The significant difference from standard LASSO is that it encourages similar features in different groups to co-exist (LASSO usually allows only one of them to survive). Overall, the exclusive group LASSO regularization encourages intra-group competition but discourages inter-group competition. Figure 1: Illustration of the differences between group LASSO and exclusive group LASSO. Group setting: G1 = {1, 2}, G2 = {3, 4}, G3 = {5, 6, 7}. The group LASSO solution of Eq.(3) at λ = 2 using the least square loss is: w = [0.0337; 0.0891; 0; 0; −0.2532; 0.043; 0.015].
exclusive group LASSO solution of Eq.(2) at λ = 10 is: w = [0.0749; 0; 0; −0.0713; −0.1888; 0; 0]. Clearly, group LASSO introduces sparsity at an inter-group level, whereas exclusive LASSO enforces sparsity at an intra-group level. We note that “exclusive LASSO” was first used in [31] for multi-task learning. Our “exclusive group LASSO” work, however, differs clearly from [31]: (1) we give a clear physical intuition of “exclusive group LASSO”, which leads to sparsity at an intra-group level (Eq. 2), whereas [31] focuses on the “exclusive LASSO” problem in a multi-task setting; (2) we target a general “group” setting that allows arbitrary group structures and can be easily extended to multi-task/multi-label learning. The main contributions of this paper include: (1) we propose a new formulation of “exclusive group LASSO” with a clear physical meaning, which allows arbitrary structures on the feature space; (2) we propose an effective iteratively re-weighted algorithm to tackle the non-smooth “exclusive group LASSO” term with rigorous convergence guarantees. Moreover, an effective algorithm is proposed to handle both the non-smooth ℓ1 and exclusive group LASSO terms (Lemma 4.1); (3) the proposed approach is validated via experiments on both synthetic and real data sets, specifically for uncorrelated feature selection problems. Notation Throughout the paper, matrices are written as boldface uppercase, vectors are written as boldface lowercase, and scalars are denoted by lower-case letters (a, b). n is the number of data points, p is the dimension of the data, and K is the number of classes in a dataset. For any vector w ∈ ℜp, the ℓq norm of w is ∥w∥q = (Σ_{i=1}^p |wi|^q)^{1/q} for q ∈ (0, ∞). A group of variables is a subset g ⊂ {1, 2, · · · , p}. Thus, the set of possible groups is the power set of {1, 2, · · · , p}: P({1, 2, · · · , p}). Gg ∈ P({1, 2, · · · , p}) denotes the variable set of group g, which is known in advance depending on the application.
If two groups have one or more overlapping variables, we say that they are overlapped. For any group variable wGg ∈ ℜp, only the entries in the group g are preserved, which are the same as those in w, while the other entries are set to zero. For example, if Gg = {1, 2, 4}, wGg = [w1, w2, 0, w4, 0, · · · , 0], then ∥wGg∥2 = √(w1² + w2² + w4²). Let supp(w) ⊂ {1, 2, · · · , p} be the set of indices i with wi ≠ 0, and zero(w) ⊂ {1, 2, · · · , p} the set of indices i with wi = 0. Clearly, zero(w) = {1, 2, · · · , p} \ supp(w). Let ∇f(w) be the gradient of f at w ∈ ℜp, for any differentiable function f : ℜp → ℜ. 2 Exclusive group LASSO Let G be a group set; the exclusive group LASSO penalty is defined as: ∀w ∈ ℜp, Ω_Eg^G(w) = Σ_{g∈G} ∥wGg∥1². (1) When the groups g form a partition of the set of variables, Ω_Eg^G is an ℓ1/ℓ2 norm penalty. An ℓ2 norm is enforced across different groups, while within each group an ℓ1 norm sums over the intra-group variables. Minimizing such a convex risk function often leads to a solution in which some entries in a group are zero. For example, for a group Gg = {1, 2, 4}, there exists a solution w such that w1 = 0, w2 ≠ 0, w4 ≠ 0. A concrete example is shown in Fig. 1, in which we solve: min_{w∈ℜp} J1(w), J1(w) = f(w) + λ Ω_Eg^G(w), (2) using the least square loss function f(w) = ∥y − Xᵀw∥2². Figure 2: (a-b): Geometric shape of Ω(w) ≤ 1 in R³. (a) non-overlapping exclusive group LASSO: Ω(w) = (|w1| + |w2|)² + |w3|²; (b) overlapping exclusive group LASSO: Ω(w) = (|w1| + |w2|)² + (|w2| + |w3|)²; (c) feature correlation matrix R on the dataset House (506 data points, 14 variables). Rij indicates the feature correlation between features i and j. Red colors indicate large values, while blue colors indicate small values. As compared to the standard group LASSO [29] solution of Eq.(3),
f(w) + λ Σ_g ∥wGg∥2. (3) We observe that group LASSO introduces sparsity at an inter-group level, whereas exclusive LASSO enforces sparsity at an intra-group level. Analysis of exclusive group LASSO For each group g, the feature indices u ∈ supp(g) will be non-zero. Let vg ∈ ℜp be a variable which preserves the values of the non-zero indices for group g. Considering all groups, for the optimization variable w we have supp(w) = ∪_g supp(vg). (1) For the non-overlapping case, the different groups form a partition of the feature set {1, 2, · · · , p}, and there exists a unique decomposition w = Σ_g vg. Since there are no common elements between any two different groups Gu and Gv, i.e., supp(wGu) ∩ supp(wGv) = ∅, it is easy to see: vg = wGg, ∀g ∈ G. (2) However, for overlapping groups, there could be element sets I ⊂ (Gu ∩ Gv), and therefore different groups Gu and Gv may have opposite effects when optimizing the features in the set I. For a feature i ∈ I, it is prone to take different values if optimized separately, i.e., (wGu)i ≠ (wGv)i. For example, with Gu = [1, 2], Gv = [2, 3], group u may require w2 = 0 while group v may require w2 ≠ 0. Thus, there are many possible combinations of feature values, and this leads to: Ω_Eg^G = inf_{Σ_g vg = w} Σ_g ∥vg∥1². Further, if some groups are overlapped, the final zero set will be a subset of the intersection over all groups: zero(w) ⊂ ∩_g zero(vg). Illustration of the geometric shape of exclusive LASSO Figure 2 shows the geometric shape of both norms in R³ with different group settings, where in (a): G1 = [1, 2], G2 = [3]; and in (b): G1 = [1, 2], G2 = [2, 3]. For the non-overlapping case, variables w1, w2 usually cannot be zero simultaneously. In contrast, for the overlapping case, variable w2 cannot be zero unless both groups G1 and G2 require w2 = 0. Properties of exclusive LASSO The regularization term of Eq.(1) is a convex formulation. If ∪_{g∈G} g = {1, 2, · · · , p}, then Ω_E^G := √(Ω_Eg^G) is a norm. See Appendix for proofs.
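The penalty of Eq.(1) is simple to evaluate directly. The snippet below is an illustrative sketch (not the authors' code): groups are given as 0-based index lists, and the weight vector reuses the exclusive group LASSO solution quoted in the Fig. 1 caption.

```python
import numpy as np

def exclusive_group_lasso(w, groups):
    """Eq.(1): sum over groups of the squared l1 norm of w
    restricted to each group; groups may overlap."""
    return sum(np.abs(w[g]).sum() ** 2 for g in groups)

# toy example with the Fig. 1 grouping G1={1,2}, G2={3,4}, G3={5,6,7}
w = np.array([0.0749, 0.0, 0.0, -0.0713, -0.1888, 0.0, 0.0])
groups = [[0, 1], [2, 3], [4, 5, 6]]   # 0-based indices
print(exclusive_group_lasso(w, groups))
```

Note how each group contributes the square of its total ℓ1 mass, so spreading weight within a group is penalized more than concentrating it on one feature.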
3 An effective algorithm for solving the Ω_Eg^G regularizer The challenge of solving Eq. (1) is to tackle the exclusive group LASSO term, where f(w) can be any convex loss function w.r.t. w. It is generally felt that the exclusive group LASSO term is much more difficult to solve than the standard LASSO term (shrinkage thresholding). Existing algorithms formulate it as a quadratic programming problem [19], which can be solved by the interior point method or the active set method. However, the computational cost is high, which limits its use in practice. Recently, a primal-dual algorithm [27] was proposed to solve a similar problem, which casts the non-smooth problem into a min-max problem. However, that algorithm is a gradient descent type method and converges slowly. Moreover, it is designed for the multi-task learning problem, and cannot be applied directly to the exclusive group LASSO problem with arbitrary structures. In the following, we first derive a very efficient yet simple algorithm. Moreover, the proposed algorithm is generic: it allows arbitrary structure on the feature space, irrespective of specific feature structures [10], e.g., linear structure [28], tree structure [15], graph structure [7], etc. Theoretical analysis guarantees the convergence of the algorithm. Moreover, the algorithm is easy to implement and ready to use in practice. Key idea The idea of the proposed algorithm is to find an auxiliary function for Eq.(1) which can be easily solved. Then the updating rules for w are derived. Finally, we prove that the solution is exactly the optimal solution we are seeking for the original problem. Since it is a convex problem, the optimal solution is the global optimal solution. Procedure Instead of directly optimizing Eq.
(1), we propose to optimize the following objective (the reasons will be seen immediately below), i.e., J2(w) = f(w) + λwᵀFw, (4) where F ∈ ℜ^{p×p} is a diagonal matrix which encodes the exclusive group information, and whose diagonal elements are given by¹ Fii = Σ_g (IGg)i ∥wGg∥1 / |wi|. (5) Let IGg ∈ {0, 1}^{p×1} be the group index indicator for group g ∈ G. For example, if group G1 is {1, 2}, then IG1 = [1, 1, 0, · · · , 0]. Thus the group variable wGg can be explicitly expressed as wGg = diag(IGg) × w. Note that the computation of F depends on w; thus the minimization over w depends on F. In the following, we propose an efficient iteratively re-weighted algorithm to find the global optimal solution for w, where in each iteration, w is updated along the gradient descent direction. This process is iterated until the algorithm converges. Taking the derivative of Eq.(4) w.r.t. w and setting ∂J2/∂w = 0, we have ∇wf(w) + 2λFw = 0. (6) The complete algorithm is then: (1) update wt via Eq.(6); (2) update Ft via Eq.(5). The two steps are iterated until the algorithm converges. We can prove that the obtained solution is exactly the global optimal solution of Eq.(1). 3.1 Convergence Analysis In the following, we prove the convergence of the algorithm. Theorem 3.1. Under the updating rule of Eq. (6), J1(wt+1) − J1(wt) ≤ 0. The proof is provided in Appendix. Discussion We note that a reweighted strategy [26] was also used in solving problems like the zero-norm of the parameters of linear models. However, it cannot be directly used to solve the “exclusive group LASSO” problem proposed in this paper, and cannot handle arbitrary structures on the feature space. 4 Uncorrelated feature learning via exclusive group LASSO Motivation It is known that in Lasso-type (including elastic net) [24, 32] variable selection, variable correlations are not taken into account. Therefore, some strongly correlated variables tend to be in or out of the model together. However, in practice, feature variables are often correlated.
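For the least square loss f(w) = ∥y − Xᵀw∥2², the stationarity condition (6) reads 2XXᵀw − 2Xy + 2λFw = 0, i.e., (XXᵀ + λF)w = Xy, so each iteration solves a reweighted ridge-like linear system. The sketch below is a hedged illustration of this iteratively re-weighted scheme on hypothetical toy data (the ϵ-regularized Fii of footnote 1 handles wi = 0; lam, eps, iteration count and the data sizes are assumptions, not values from the paper).

```python
import numpy as np

def exclusive_irls(X, y, groups, lam=1.0, n_iter=50, eps=1e-8):
    """Iteratively re-weighted solver for
    min_w ||y - X.T w||^2 + lam * sum_g ||w_Gg||_1^2  (Eq. 2).
    X: p x n data matrix; groups: lists of 0-based feature indices."""
    p = X.shape[0]
    w = np.linalg.solve(X @ X.T + lam * np.eye(p), X @ y)  # ridge init
    for _ in range(n_iter):
        # Eq.(5) with the epsilon regularization of footnote 1
        F = np.zeros(p)
        for g in groups:
            F[g] += np.abs(w[g]).sum() / np.sqrt(w[g] ** 2 + eps)
        # Eq.(6) for least squares: (X X^T + lam * diag(F)) w = X y
        w = np.linalg.solve(X @ X.T + lam * np.diag(F), X @ y)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 200))
w_true = np.array([1.0, 0.0, -1.0, 0.0, 0.5, 0.0])
y = X.T @ w_true + 0.01 * rng.standard_normal(200)
w = exclusive_irls(X, y, groups=[[0, 1], [2, 3], [4, 5]], lam=0.5)
print(np.round(w, 2))
```

On this toy problem the recovered w is close to w_true, with the true-zero coordinates driven toward zero because their reweighting terms Fii grow as |wi| shrinks.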
See an example on the housing dataset [4] with 506 samples and 14 attributes. Although there are only 14 attributes, feature 5 is highly correlated with features 6, 7, 11, 12, etc. Moreover, the strongly correlated variables may share similar properties, with overlapping or redundant information. Especially when the number of selected variables is limited, more discriminative information with minimal correlation is desirable for prediction or classification purposes. Therefore, it is natural to eliminate the correlations in the feature learning process. ¹ When wi = 0, Fii is related to the subgradient of w w.r.t. wi. However, we cannot set Fii = 0, otherwise the derived algorithm cannot be guaranteed to converge. We can regularize Fii = Σ_g (IGg)i ∥wGg∥1 / √(wi² + ϵ); then the derived algorithm can be proved to minimize the regularized Σ_g ∥(w + ϵ)Gg∥1². It is easy to see that the regularized exclusive ℓ1 norm of w approximates the exclusive ℓ1 norm of w as ϵ → 0+.

Table 1: Characteristics of datasets

Dataset      # data   # dimension   domain
isolet       1560     617           UCI
ionosphere   351      34            UCI
mnist(0,1)   3125     784           image
Leuml        72       3571          biology

Formulation The above observations motivate our work on uncorrelated feature learning via exclusive group LASSO. We consider the variable selection problem based on LASSO-type optimization, where we make the selected variables as uncorrelated as possible. To be exact, we propose to optimize the following objective: min_{w∈ℜp} f(w) + α∥w∥1 + β Σ_g ∥wGg∥1², (7) where f(w) is a loss function involving the class predictor y ∈ ℜn and the data matrix X = [x1, x2, · · · , xn] ∈ ℜ^{p×n}, ∥wGg∥1² is the exclusive group LASSO term encoding feature correlation information, and α and β are tuning parameters that balance the plain LASSO term and the exclusive group LASSO term. The core part of Eq.(7) is to use the exclusive group LASSO regularizer to eliminate correlated features, which cannot be done by plain LASSO.
Let the feature correlation matrix be R = (Rst) ∈ ℜ^{p×p}; clearly R = Rᵀ, and Rst represents the correlation between features s and t, i.e., Rst = |Σ_i Xsi Xti| / (√(Σ_i Xsi²) √(Σ_i Xti²)), Rst > θ. (8) To make the selected features as uncorrelated as possible, for any two features s, t, if their correlation Rst > θ, we put them in an exclusive group. Therefore, only one of the features can survive. For example, in Fig. 2(c), if we use θ = 0.93 as a threshold, we generate the following exclusive group LASSO term: Σ_g ∥wGg∥1² = (|w3| + |w10|)² + (|w5| + |w6|)² + (|w5| + |w7|)² + (|w5| + |w11|)² + (|w6| + |w11|)² + (|w6| + |w12|)² + (|w6| + |w14|)² + (|w7| + |w11|)². (9) Algorithm Solving Eq.(7) is a convex optimization problem, because all three involved terms are convex. This also indicates that there exists a unique global solution. Eq.(7) can be efficiently solved via the accelerated proximal gradient (FISTA) method [17, 2], irrespective of the loss function used in the minimization of the empirical risk. Thus solving Eq.(7) is transformed into solving: min_{w∈ℜp} (1/2)∥w − a∥2² + α∥w∥1 + β Σ_g ∥wGg∥1², (10) where a = wt − (1/Lt)∇f(wt), which involves the current value wt and the step size Lt. The challenge of solving Eq.(10) is that it involves two non-smooth terms. Fortunately, the following lemma relates the optimal solution of Eq.(10) to that of Eq.(11), whose solution has been discussed in §3: min_{w∈ℜp} (1/2)∥w − u∥2² + β Σ_g ∥wGg∥1². (11) Lemma 4.1. The optimal solution to Eq.(10) is the optimal solution to Eq.(11), where u = arg min_x (1/2)∥x − a∥2² + α∥x∥1 = sgn(a)(|a| − α)+, (12) and sgn(·), SGN(·) are operators defined componentwise: if v > 0, sgn(v) = 1, SGN(v) = {1}; if v = 0, sgn(v) = 0, SGN(v) = [−1, 1]; if v < 0, sgn(v) = −1, SGN(v) = {−1}. The proof is provided in Appendix.
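The group-construction rule of Eq.(8) and the soft-thresholding operator of Eq.(12) can be sketched as follows. This is an illustrative sketch under assumptions (pairwise groups of size two, un-centered correlations as written in Eq. 8, toy data), not the paper's code.

```python
import numpy as np

def correlation_groups(X, theta):
    """Eq.(8): absolute cosine correlation between feature rows of the
    p x n matrix X; each pair with R[s, t] > theta forms an exclusive group."""
    norms = np.sqrt((X ** 2).sum(axis=1))
    R = np.abs(X @ X.T) / np.outer(norms, norms)
    p = X.shape[0]
    return [[s, t] for s in range(p) for t in range(s + 1, p) if R[s, t] > theta]

def soft_threshold(a, alpha):
    """Eq.(12): u = sgn(a) * (|a| - alpha)_+ (the l1 proximal operator)."""
    return np.sign(a) * np.maximum(np.abs(a) - alpha, 0.0)

# toy data: feature 2 is a near-copy of feature 0, so they share a group
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 300))
X[2] = X[0] + 0.05 * rng.standard_normal(300)
print(correlation_groups(X, theta=0.9))
print(soft_threshold(np.array([0.8, -0.3, 0.1]), alpha=0.25))
```

Per Lemma 4.1, composing these two pieces (soft-threshold first, then the exclusive prox of §3) evaluates the full proximal step of Eq.(10).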
Figure 3: (a-d): Feature selection results on synthetic datasets using (a, b) linear structure; (c, d) hub structure. Evaluation metrics: RMSE, MAE. x-axis: number of selected features. y-axis: RMSE or MAE error in log scale. (e-h): Classification accuracy using SVM (linear kernel) with different numbers of selected features on four datasets ((e) isolet, (f) ionosphere, (g) mnist(0,1), (h) leuml). Compared methods: Exclusive LASSO of Eq.(7), LASSO, ReliefF [21], F-statistics [3]. x-axis: number of selected features; y-axis: classification accuracy. 5 Experiment Results To validate the effectiveness of our method, we first conduct experiments using Eq.(7) on two synthetic datasets, and then show experiments on real-world datasets. 5.1 Synthetic datasets (1) Linear-correlated features.
Let data X^1 = [x^1_1, x^1_2, · · · , x^1_n] ∈ ℜ^{p×n}, X^2 = [x^2_1, x^2_2, · · · , x^2_n] ∈ ℜ^{p×n}, where each data point x^1_i ∼ N(0_{p×1}, I_{p×p}), x^2_i ∼ N(0_{p×1}, I_{p×p}), and I is the identity matrix. We generate a group of p features which is a linear combination of the features in X^1 and X^2, i.e., X^3 = 0.5(X^1 + X^2) + ϵ, ϵ ∼ N(−0.1e, 0.1 I_{p×p}). Construct the data matrix X = [X^1; X^2; X^3]; clearly, X ∈ ℜ^{3p×n}. Features in dimensions [2p + 1, 3p] are highly correlated with features in dimensions [1, p] and [p + 1, 2p]. Let w^1 ∈ ℜp, where each w^1_i ∼ Uniform(−0.5, 0.5), and w^2 ∈ ℜp, where each w^2_i ∼ Uniform(−0.5, 0.5). Let w̃ = [w^1; w^2; 0_{p×1}]. We generate the predictor y ∈ ℜn as y = w̃ᵀX + ϵy, where (ϵy)_i ∼ N(0, 0.1). We solve Eq.(7) using the current y and X with the least square loss. The group settings are: (i, p + i, 2p + i), for 1 ≤ i ≤ p. We compare the computed w∗ against the ground truth solution w̃ and the plain LASSO solution (i.e., β = 0 in Eq. 7). We use the root mean square error (RMSE) and mean absolute error (MAE) to evaluate the differences between the values predicted by a model and the values actually observed. We generate n = 1000 data points, with p = [120, 140, · · · , 220, 240], and do 5-fold cross validation. Generalization errors in RMSE and MAE are shown in Figures 3(a) and 3(b). Clearly, our approach outperforms the standard LASSO solution and exactly recovers the true features. (2) Correlated features on a hub structure. Let data X = [X^1; X^2; · · · ; X^B] ∈ ℜ^{q×n}, where each block X^b = [X^b_{1:}; X^b_{2:}; · · · ; X^b_{p:}] ∈ ℜ^{p×n}, 1 ≤ b ≤ B, q = p × B. In each block, for each data point 1 ≤ i ≤ n, X^b_{1i} = (1/B) Σ_{2≤j≤p} X^b_{ji} + (1/B) z_i + ϵ^b_i, where X^b_{ji} ∼ N(0, 1), z_i ∼ N(0, 1) and ϵ^b_i ∼ Uniform(−0.1, 0.1). Let w^1, w^2, · · · , w^B ∈ ℜp, where w^b = [w^b_1 0]ᵀ, with w^b_1 ∼ Uniform(−0.5, 0.5). Let w̃ = [w^1; w^2; · · · ; w^B]. We generate the predictor y ∈ ℜn as y = w̃ᵀX + ϵy, where (ϵy)_i ∼ N(0, 0.1). The group settings are: ((b − 1) × p + 1, · · · , b × p), for 1 ≤ b ≤ B.
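The linear-correlated construction above can be sketched as follows. This is an illustrative sketch with assumed small sizes (not the authors' script); the stated noise distributions N(−0.1e, 0.1I) and N(0, 0.1) are read here as having variance 0.1.

```python
import numpy as np

def make_linear_correlated(n=1000, p=120, seed=0):
    """Synthetic data of Sec. 5.1(1): X3 is a noisy average of X1 and X2,
    so dimensions [2p+1, 3p] are highly correlated with the first 2p."""
    rng = np.random.default_rng(seed)
    X1 = rng.standard_normal((p, n))
    X2 = rng.standard_normal((p, n))
    eps = rng.normal(-0.1, np.sqrt(0.1), size=(p, n))   # N(-0.1e, 0.1 I)
    X3 = 0.5 * (X1 + X2) + eps
    X = np.vstack([X1, X2, X3])                         # (3p) x n
    w = np.concatenate([rng.uniform(-0.5, 0.5, 2 * p), np.zeros(p)])
    y = w @ X + rng.normal(0, np.sqrt(0.1), n)          # (eps_y)_i ~ N(0, 0.1)
    groups = [[i, p + i, 2 * p + i] for i in range(p)]  # (i, p+i, 2p+i)
    return X, y, w, groups

X, y, w, groups = make_linear_correlated(n=200, p=10)
print(X.shape, y.shape, len(groups))
```

Each group ties together the i-th feature of X1, X2 and X3, so the exclusive penalty lets the informative copies survive while zeroing the redundant X3 dimension.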
We generate n = 1000 data points with B = 10 and varied p = [20, 21, ..., 24, 25], and perform 5-fold cross-validation. The generalization errors in RMSE and MAE are shown in Figs. 3(c) and 3(d). Clearly, our approach outperforms the standard LASSO solution and recovers the exact features.

5.2 Real-world datasets

To validate the effectiveness of the proposed method, we perform feature selection via the proposed uncorrelated feature learning framework of Eq. (7) on 4 datasets (shown in Table 1): 2 UCI datasets, isolet [6] and ionosphere [5]; 1 image dataset, mnist restricted to the digits "0" and "1" [16]; and 1 biology dataset, Leuml [13]. We perform classification tasks on these datasets. The compared methods include the proposed method of Eq. (7) (shown as Exclusive), plain LASSO, ReliefF [21], and F-statistics [3]. We use logistic regression as the loss function in our method and in the plain LASSO method. In our method, the parameters α, β are tuned to select different numbers of features. The exclusive LASSO groups are set according to feature correlations (i.e., the threshold θ is set to 0.90 in Eq. (8)). After the specified number of features is selected, we feed them into a support vector machine (SVM) with a linear kernel; classification results with different numbers of selected features are shown in Fig. 3. The experimental results indicate the better performance of our method as compared to plain LASSO. Moreover, our method is also generally better than two other popular feature selection methods, ReliefF and F-statistics. The results further confirm our intuition: eliminating correlated features is helpful for feature selection and thus improves classification performance. Because ℓ_{1,∞} [20], ℓ_{2,1} [12, 18], and non-convex feature learning via the ℓ_{p,∞} operator [11] (0 < p < 1) are designed for multi-task or multi-label feature learning, we do not compare against these methods.
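The correlation-based group construction (threshold θ = 0.90) can be sketched as below. Since Eq. (8) is not reproduced in this excerpt, the greedy rule here is our own simplified stand-in, not the paper's exact definition.

```python
import numpy as np

def correlation_groups(X, theta=0.90):
    """Greedily group features (rows of X) whose absolute Pearson
    correlation with a seed feature exceeds theta.

    Simplified stand-in for the paper's Eq. (8), which is not shown here.
    """
    d = X.shape[0]
    C = np.abs(np.corrcoef(X))          # d x d absolute correlation matrix
    unassigned = set(range(d))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = sorted(j for j in unassigned if j == seed or C[seed, j] > theta)
        unassigned -= set(group)
        groups.append(group)
    return groups
```

Features uncorrelated with everything end up in singleton groups, so every feature belongs to exactly one group; an intra-group sparsity penalty then forces correlated features to compete with each other.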
Further, we list the mean and variance of the classification accuracy of the different algorithms in the following table, using 50% of all the features. The compared methods are: (1) LASSO (L1); (2) plain exclusive LASSO (α = 0 in Eq. (7)); (3) exclusive group LASSO (α > 0 in Eq. (7)).

dataset       # of features   LASSO          plain exclusive   exclusive group LASSO
isolet        308             81.75 ± 0.49   82.05 ± 0.50      83.24 ± 0.23
ionosphere    17              85.10 ± 0.27   85.21 ± 0.31      87.28 ± 0.42
mnist (0,1)   392             92.35 ± 0.13   93.07 ± 0.20      94.51 ± 0.19
leuml         1785            95.10 ± 0.31   95.67 ± 0.24      97.70 ± 0.27

The above results indicate that the advantage of our method (exclusive group LASSO) over plain LASSO comes from the exclusive LASSO term. The results also suggest that plain exclusive LASSO performs very similarly to LASSO, whereas exclusive group LASSO (α > 0 in Eq. (7)) performs clearly better than both standard LASSO and plain exclusive LASSO (a 1%–4% performance improvement). The exclusive LASSO regularizer eliminates correlated and redundant features. We show the running times of plain exclusive LASSO and exclusive group LASSO (α > 0 in Eq. (7)) in the following table. We ran the algorithms on a desktop with an Intel i5-3317 CPU, 1.70 GHz, and 8 GB RAM.

dataset       plain exclusive (sec)   exclusive group LASSO (sec)
isolet        47.24                   51.93
ionosphere    22.75                   24.18
mnist (0,1)   123.45                  126.51
leuml         142.19                  144.08

These results indicate that the computational cost of exclusive group LASSO is only slightly higher than that of plain exclusive LASSO. The reason is that the solution to exclusive group LASSO is given by a simple thresholding of the plain exclusive LASSO result. This further confirms our theoretical analysis in Lemma 4.1.

6 Conclusion

In this paper, we propose a new formulation called "exclusive group LASSO" to enforce sparsity for features at an intra-group level.
We investigate its properties and propose an effective algorithm with a rigorous convergence analysis. We show applications to uncorrelated feature selection, which indicate the good performance of the proposed method. Our work can be easily extended to multi-task or multi-label learning.

Acknowledgement The majority of the work was done during the internship of the first author at NEC Laboratories America, Cupertino, CA.

7 Appendix

Proof that Ω^G_E is a valid norm: Note that if Ω^G_E(w) = 0, then w = 0, and for any scalar a, Ω^G_E(aw) = |a| Ω^G_E(w). This proves that the zero property and absolute homogeneity hold. Next we consider the triangle inequality. Consider w, w̃ ∈ ℜ^p. Let {v^g} and {ṽ^g} be optimal decompositions of w and w̃, such that Ω^G_E(w) = sqrt(Σ_g ‖v^g‖_1²) and Ω^G_E(w̃) = sqrt(Σ_g ‖ṽ^g‖_1²). Since {v^g + ṽ^g} is a decomposition of w + w̃, we have:¹

Ω^G_E(w + w̃) ≤ sqrt(Σ_g ‖v^g + ṽ^g‖_1²) ≤ sqrt(Σ_g ‖v^g‖_1²) + sqrt(Σ_g ‖ṽ^g‖_1²) = Ω^G_E(w) + Ω^G_E(w̃). □

To prove Theorem 3.1, we need two lemmas.

Lemma 6.1. Under the updating rule of Eq. (6), J_2(w^{t+1}) < J_2(w^t).

Lemma 6.2. Under the updating rule of Eq. (6),

J_1(w^{t+1}) − J_1(w^t) ≤ J_2(w^{t+1}) − J_2(w^t).   (13)

Proof of Theorem 3.1. From Lemma 6.1 and Lemma 6.2, it is easy to see that J_1(w^{t+1}) − J_1(w^t) ≤ 0. This completes the proof. □

Proof of Lemma 6.1. Eq. (4) is a convex function, and the optimal solution of Eq. (6) is obtained by setting the derivative ∂J_2/∂w = 0; thus the obtained w^* is the global optimal solution, and J_2(w^{t+1}) < J_2(w^t). □

Before the proof of Lemma 6.2, we need the following proposition.

Proposition 6.3. w^T F w = Σ_{g=1}^G (‖w_{G_g}‖_1)².

Proof of Lemma 6.2. Let Δ = LHS − RHS of Eq. (13). We have

Δ = Σ_g ‖w^{t+1}_{G_g}‖_1² − Σ_{i,g} (I_{G_g})_i (‖w^t_{G_g}‖_1 / |w^t_i|) (w^{t+1}_i)² + Σ_{i,g} (I_{G_g})_i (‖w^t_{G_g}‖_1 / |w^t_i|) (w^t_i)² − Σ_g ‖w^t_{G_g}‖_1²   (14)

  = Σ_g ‖w^{t+1}_{G_g}‖_1² − Σ_{i,g} (I_{G_g})_i (‖w^t_{G_g}‖_1 / |w^t_i|) (w^{t+1}_i)²
  = Σ_g [ (Σ_{i∈G_g} |w^{t+1}_i|)² − (Σ_{i∈G_g} |w^t_i|)(Σ_{i∈G_g} (w^{t+1}_i)² / |w^t_i|) ]   (15)

  = Σ_g [ (Σ_{i∈G_g} a_i b_i)² − (Σ_{i∈G_g} a_i²)(Σ_{i∈G_g} b_i²) ] ≤ 0,   (16)

where a_i = |w^{t+1}_i| / sqrt(|w^t_i|) and b_i = sqrt(|w^t_i|).
Due to Proposition 6.3, Eq. (14) is equivalent to Eq. (15). Eq. (16) holds due to the Cauchy–Schwarz inequality [23]: for any scalars a_i, b_i, (Σ_i a_i b_i)² ≤ (Σ_i a_i²)(Σ_i b_i²). □

Proof of Lemma 4.1. For notational simplicity, let Ω^G_{Eg}(w) = Σ_g ‖w_{G_g}‖_1². Let w^* be the optimal solution to Eq. (11); then we have

0 ∈ w^* − u + β ∂Ω^G_{Eg}(w^*).   (17)

We wish to prove that w^* is also the global optimal solution to Eq. (10), i.e.,

0 ∈ w^* − a + α SGN(w^*) + β ∂Ω^G_{Eg}(w^*).   (18)

First, from Eq. (12), we have 0 ∈ u − a + α SGN(u), which leads to u ∈ a − α SGN(u). According to the definition of Ω^G_{Eg}(w), from Eq. (11) it is easy to verify that (1) if u_i = 0, then w_i = 0; and (2) if u_i ≠ 0, then sign(w_i) = sign(u_i) and 0 ≤ |w_i| ≤ |u_i|. This indicates that SGN(u) ⊂ SGN(w), and thus

u ∈ a − α SGN(w).   (19)

Putting Eqs. (17) and (19) together exactly recovers Eq. (18), which completes the proof.

¹The derivation uses the Cauchy–Schwarz inequality [23]: for any scalars a_g, b_g, (Σ_g a_g b_g)² ≤ (Σ_g a_g²)(Σ_g b_g²). Setting a_g = ‖v^g‖_1 and b_g = ‖ṽ^g‖_1 gives the inequality.

References

[1] F. Bach. Structured sparsity and convex optimization. In ICPRAM, 2012.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1):183–202, 2009.
[3] J. D. F. Habbema and J. Hermans. Selection of variables in discriminant analysis by F-statistic and error rate. Technometrics, 1977.
[4] Housing. http://archive.ics.uci.edu/ml/datasets/Housing.
[5] Ionosphere. http://archive.ics.uci.edu/ml/datasets/Ionosphere.
[6] Isolet. http://archive.ics.uci.edu/ml/datasets/ISOLET.
[7] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In ICML, page 55, 2009.
[8] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Journal of Machine Learning Research, 12:2777–2824, 2011.
[9] S. Ji, L. Yuan, Y. Li, Z. Zhou, S. Kumar, and J. Ye. Drosophila gene expression pattern annotation using sparse features and term-term interactions. In KDD, pages 407–416, 2009.
[10] D. Kong and C. H. Q. Ding. Efficient algorithms for selecting features with arbitrary group constraints via group lasso. In ICDM, pages 379–388, 2013.
[11] D. Kong and C. H. Q. Ding. Non-convex feature learning via ℓ_{p,∞} operator. In AAAI, pages 1918–1924, 2014.
[12] D. Kong, C. H. Q. Ding, and H. Huang. Robust nonnegative matrix factorization using ℓ_{2,1}-norm. In CIKM, pages 673–682, 2011.
[13] Leuml. http://www.stat.duke.edu/courses/Spring01/sta293b/datasets.html.
[14] J. Liu, R. Fujimaki, and J. Ye. Forward-backward greedy algorithms for general convex smooth functions over a cardinality constraint. In ICML, 2014.
[15] J. Liu and J. Ye. Moreau-Yosida regularization for grouped tree structure learning. In NIPS, pages 1459–1467, 2010.
[16] MNIST. http://yann.lecun.com/exdb/mnist/.
[17] Y. Nesterov. Gradient methods for minimizing composite objective function. ECORE Discussion Paper, 2007.
[18] F. Nie, H. Huang, X. Cai, and C. H. Q. Ding. Efficient and robust feature selection via joint ℓ_{2,1}-norms minimization. In NIPS, pages 1813–1821, 2010.
[19] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, Berlin, New York, 2006.
[20] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for ℓ_{1,∞} regularization. In ICML, page 108, 2009.
[21] M. Robnik-Sikonja and I. Kononenko. Theoretical and empirical analysis of ReliefF and RReliefF. Machine Learning, 53(1-2):23–69, 2003.
[22] V. Roth. The generalized lasso. IEEE Transactions on Neural Networks, 15(1):16–28, 2004.
[23] J. M. Steele. The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. MAA Problem Book Series. Cambridge University Press, Cambridge, New York, NY, 2004.
[24] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1994.
[25] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society, Series B, pages 91–108, 2005.
[26] J. Weston, A. Elisseeff, B. Schölkopf, and P. Kaelbling. Use of the zero-norm with linear models and kernel methods. Journal of Machine Learning Research, 3:1439–1461, 2003.
[27] T. Yang, R. Jin, M. Mahdavi, and S. Zhu. An efficient primal-dual prox method for non-smooth optimization. CoRR, abs/1201.5283, 2012.
[28] L. Yuan, J. Liu, and J. Ye. Efficient methods for overlapping group lasso. In NIPS, pages 352–360, 2011.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2006.
[30] J. Zhou, F. Wang, J. Hu, and J. Ye. From micro to macro: data driven phenotyping by densification of longitudinal electronic medical records. In KDD, pages 135–144, 2014.
[31] Y. Zhou, R. Jin, and S. C. H. Hoi. Exclusive lasso for multi-task feature selection. Journal of Machine Learning Research - Proceedings Track, 9:988–995, 2010.
[32] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.
Provable Non-convex Robust PCA

Praneeth Netrapalli^{1*}  U N Niranjan^{2*}  Sujay Sanghavi^3  Animashree Anandkumar^2  Prateek Jain^4
^1Microsoft Research, Cambridge MA. ^2The University of California at Irvine. ^3The University of Texas at Austin. ^4Microsoft Research, India.

Abstract

We propose a new method for robust PCA – the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices and onto the set of sparse matrices; each projection is non-convex but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix under the same conditions that are required by existing methods (which are based on convex optimization). For an m×n input matrix (m ≤ n), our method has a running time of O(r²mn) per iteration and needs O(log(1/ε)) iterations to reach an accuracy of ε. This is close to the running time of simple PCA via the power method, which requires O(rmn) per iteration and O(log(1/ε)) iterations. In contrast, the existing methods for robust PCA, which are based on convex optimization, have O(m²n) complexity per iteration and take O(1/ε) iterations, i.e., exponentially more iterations for the same accuracy. Experiments on both synthetic and real data establish the improved speed and accuracy of our method over existing convex implementations.

Keywords: Robust PCA, matrix decomposition, non-convex methods, alternating projections.

1 Introduction

Principal component analysis (PCA) is a common procedure for preprocessing and denoising, in which a low-rank approximation to the input matrix (such as the covariance matrix) is computed. Although PCA is simple to implement via eigen-decomposition, it is sensitive to the presence of outliers, since it attempts to "force fit" the outliers to the low-rank approximation.
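As a toy illustration of this sensitivity (our own example, not from the paper), a single gross outlier can hijack the top singular vector of an otherwise rank-1 matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
L = 5.0 * np.outer(u, v)       # clean rank-1 signal, sigma_1 = 5

M = L.copy()
M[0, 0] += 50.0                # one gross corruption dominates the spectrum

def top_left_singular_vector(A):
    U, _, _ = np.linalg.svd(A)
    return U[:, 0]

# Alignment |<u_hat, u>| of the estimated top direction with the truth.
align_clean = abs(top_left_singular_vector(L) @ u)
align_corrupt = abs(top_left_singular_vector(M) @ u)
```

On the clean matrix the top singular vector is exactly u (alignment 1), while after the single corruption it swings toward the outlier coordinate, which is what robust PCA is designed to prevent.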
To overcome this, the notion of robust PCA is employed, where the goal is to remove sparse corruptions from an input matrix and obtain a low-rank approximation. Robust PCA has been employed in a wide range of applications, including background modeling [LHGT04], 3d reconstruction [MZYM11], robust topic modeling [Shi13], and community detection [CSX12]. Concretely, robust PCA refers to the following problem: given an input matrix M = L* + S*, the goal is to decompose it into a sparse matrix S* and a low-rank matrix L*. The seminal works of [CSPW11, CLMW11] showed that this problem can be provably solved via convex relaxation methods, under some natural conditions on the low-rank and sparse components. While the theory is elegant, in practice convex techniques are expensive to run at large scale and have poor convergence rates. Concretely, for decomposing an m×n matrix, say with m ≤ n, the best specialized implementations (typically first-order methods) have a per-iteration complexity of O(m²n) and require O(1/ε) iterations to achieve an error of ε. In contrast, the usual PCA, which carries out a rank-r approximation of the input matrix, has O(rmn) complexity per iteration – drastically smaller when r is much smaller than m and n. Moreover, PCA requires exponentially fewer iterations for convergence: an ε accuracy is achieved with only O(log(1/ε)) iterations (assuming a constant gap in the singular values). In this paper, we design a non-convex algorithm which is the "best of both worlds" and bridges the gap between (the usual) PCA and convex methods for robust PCA. Our method has low computational complexity similar to PCA (i.e., similar scaling costs and convergence rates), and at the same time has provable global convergence guarantees, similar to the convex methods. Proving global convergence for non-convex methods is an exciting recent development in machine learning.

*Part of this work was done while the first author was interning at Microsoft Research, India.
Non-convex alternating minimization techniques have recently shown success in many settings such as matrix completion [Kes12, JNS13, Har13], phase retrieval [NJS13], dictionary learning [AAJ+13], tensor decompositions for unsupervised learning [AGH+12], and so on. Our current work on the analysis of non-convex methods for robust PCA is an important addition to this growing list.

1.1 Summary of Contributions

We propose a simple, intuitive algorithm for robust PCA with a low per-iteration cost and a fast convergence rate. We prove tight guarantees for the recovery of the sparse and low-rank components, which match those for the convex methods. In the process, we derive novel matrix perturbation bounds for matrices subject to sparse perturbations. Our experiments reveal significant gains in terms of speed-ups over the convex relaxation techniques, especially as we scale the size of the input matrices. Our method consists of simple alternating (non-convex) projections onto low-rank and sparse matrices. For an m×n matrix, our method has a running time of O(r²mn log(1/ε)), where r is the rank of the low-rank component L*. Thus, our method has a linear convergence rate, i.e., it requires O(log(1/ε)) iterations to achieve an error of ε. When the rank r is small, this nearly matches the complexity of PCA, which is O(rmn log(1/ε)). We prove recovery of the sparse and low-rank components under a set of requirements which are tight and match those for the convex techniques (up to constant factors). In particular, under the deterministic sparsity model, where each row and each column of the sparse matrix S* has at most an α fraction of non-zeros, we require that α = O(1/(µ²r)), where µ is the incoherence factor (see Section 3). In addition to strong theoretical guarantees, in practice our method enjoys significant advantages over the state-of-the-art solver for (1), viz., the inexact augmented Lagrange multiplier (IALM) method [CLMW11].
Our method outperforms IALM in all instances, in terms of the running time needed to achieve a fixed level of accuracy, as we vary the sparsity levels, incoherence, and rank. In addition, on a real dataset involving the standard task of foreground-background separation [CLMW11], our method is significantly faster and provides visually better separation.

Overview of our techniques: Our proof technique involves establishing error contraction with each projection onto the sets of low-rank and sparse matrices. We first describe the proof ideas when L* is rank one. The first projection step is a hard thresholding procedure on the input matrix M to remove large entries; we then perform a rank-1 projection of the residual to obtain L^(1). Standard matrix perturbation results (such as Davis-Kahan) provide ℓ2 error bounds between the singular vectors of L^(1) and L*. However, these bounds do not suffice for establishing the correctness of our method. Since the next step in our method involves hard thresholding of the residual M − L^(1), we require element-wise error bounds on our low-rank estimate. Inspired by the approach of Erdős et al. [EKYY13], who obtain similar element-wise bounds for the eigenvectors of sparse Erdős–Rényi graphs, we derive these bounds by exploiting the fixed-point characterization of the eigenvectors.¹ A Taylor series expansion reveals that bounding the perturbation between the estimated and the true eigenvectors reduces to bounding walks in a graph whose adjacency matrix corresponds to (a subgraph of) the sparse component S*. We then show that if the graph is sparse enough, this perturbation can be controlled, and thus the next thresholding step results in further error contraction. We use an induction argument to show that the sparse estimate is always contained in the true support of S*, and that there is an error contraction in each step.
For the case where L* has rank r > 1, our algorithm proceeds in several stages, in which we progressively compute higher-rank projections that alternate with the hard thresholding steps. In stage k = 1, 2, ..., r, we compute rank-k projections, and show that after a sufficient number of alternating projections, we reduce the error to the level of the (k+1)-th singular value of L*, using arguments similar to the rank-1 case. We then proceed to rank-(k+1) projections alternating with hard thresholding. This stage-wise procedure is needed for ill-conditioned matrices, since we cannot hope to recover the lower eigenvectors in the beginning, when there are large perturbations. Thus, we establish global convergence guarantees for our proposed non-convex robust PCA method.

¹If the input matrix M is not symmetric, we embed it in a symmetric matrix and consider the eigenvectors of the corresponding matrix.

1.2 Related Work

Guaranteed methods for robust PCA have received a lot of attention in the past few years, starting from the seminal works of [CSPW11, CLMW11], which showed recovery of an incoherent low-rank matrix L* through the following convex relaxation method:

Conv-RPCA:  min_{L,S} ‖L‖_* + λ‖S‖_1,  s.t.  M = L + S,   (1)

where ‖L‖_* denotes the nuclear norm of L (the sum of its singular values). A typical solver for this convex program involves projections onto ℓ1 and nuclear norm balls (which are convex sets). Note that the convex method can be viewed as "soft" thresholding in the standard and spectral domains, while our method involves hard thresholding in these domains. [CSPW11] and [CLMW11] consider two different models of sparsity for S*. Chandrasekaran et al. [CSPW11] consider a deterministic sparsity model, where each row and column of the m×n matrix S has at most an α fraction of non-zero entries. For guaranteed recovery, they require α = O(1/(µ²r√n)), where µ is the incoherence level of L* and r is its rank. Hsu et al.
[HKZ11] improve upon this result to obtain guarantees for an optimal sparsity level of α = O(1/(µ²r)). This matches the requirements of our non-convex method for exact recovery. Note that when the rank r = O(1), this allows for a constant fraction of corrupted entries. Candès et al. [CLMW11] consider a different model with random sparsity and additional incoherence constraints, viz., they require ‖UV^T‖_∞ < µ√r/n. Note that our assumption of incoherence, viz., ‖U^(i)‖ < µ√(r/n), only yields ‖UV^T‖_∞ < µ²r/n. The additional assumption enables [CLMW11] to prove exact recovery with a constant fraction of corrupted entries, even when L* is nearly full-rank. We note that removing the ‖UV^T‖_∞ condition for robust PCA would imply solving the planted clique problem when the clique size is less than √n [Che13]. Thus, our recovery guarantees are tight up to constants without these additional assumptions. A number of works have considered modified models under the robust PCA framework, e.g., [ANW12, XCS12]. For instance, Agarwal et al. [ANW12] relax the incoherence assumption to a weaker "diffusivity" assumption, which bounds the magnitude of the entries in the low-rank part, but incurs an additional approximation error. Xu et al. [XCS12] impose a special sparsity structure in which a column can be either non-zero or fully zero. In terms of state-of-the-art specialized solvers, [CLMW11] implements the inexact augmented Lagrangian multiplier (IALM) method and provides guidelines for parameter tuning. Other related methods, such as the multi-block alternating directions method of multipliers (ADMM), have also been considered for robust PCA, e.g., [WHML13]. Recently, a multi-step multi-block stochastic ADMM method was analyzed for this problem [SAJ14]; it requires 1/ε iterations to achieve an error of ε. In addition, its convergence rate is tight in terms of the scaling with respect to the problem size (m, n) and the sparsity and rank parameters, under random noise models.
There is only one other work that considers a non-convex method for robust PCA [KC12]. However, their result holds only in significantly more restrictive settings and does not cover the deterministic sparsity assumption that we study. Moreover, the projection step in their method can have an arbitrarily large rank, so the running time is still O(m²n), which is the same as the convex methods. In contrast, we have an improved running time of O(r²mn).

2 Algorithm

In this section, we present our algorithm for the robust PCA problem. The robust PCA problem can be formulated as the following optimization problem: find L, S such that ‖M − L − S‖_F ≤ ε (where ε is the desired reconstruction error) and

1. L lies in the set of low-rank matrices,
2. S lies in the set of sparse matrices.

Figure 1: Illustration of alternating projections. The goal is to find a matrix L* which lies in the intersection of two sets: L = {set of rank-r matrices} and S_M = {M − S, where S is a sparse matrix}. Intuitively, our algorithm alternately projects onto the above two non-convex sets, while appropriately relaxing the rank and the sparsity levels.

A natural algorithm for the above problem is to iteratively project M − L onto the set of sparse matrices to update S, and then project M − S onto the set of low-rank matrices to update L. Alternatively, one can view the problem as that of finding a matrix L in the intersection of the following two sets: (a) L = {set of rank-r matrices}; (b) S_M = {M − S, where S is a sparse matrix}. These projections can be done efficiently even though the sets are non-convex: hard thresholding (HT) is employed for projections onto sparse matrices, and the singular value decomposition (SVD) is used for projections onto low-rank matrices.

Rank-1 case: We first describe our algorithm for the special case when L* is rank 1. Our algorithm performs an initial hard thresholding to remove very large entries from the input M. Note that if we performed the projection onto rank-1 matrices without the initial hard thresholding, we would not make any progress, since it would be subject to large perturbations. We alternate between computing the rank-1 projection of M − S and performing hard thresholding on M − L to remove entries exceeding a certain threshold. This threshold is gradually decreased as the iterations proceed, and the algorithm is run for a certain number of iterations (which depends on the desired reconstruction error).

General rank case: When L* has rank r > 1, a naive extension of our algorithm consists of alternating projections onto rank-r matrices and sparse matrices. However, such a method performs poorly on ill-conditioned matrices. This is because, after the initial thresholding of the input matrix M, the sparse corruptions in the residual are of the order of the top singular value (with the choice of threshold specified in the algorithm). When the lower singular values are much smaller, the corresponding singular vectors are subject to relatively large perturbations, and thus we cannot make progress in improving the reconstruction error. To alleviate the dependence on the condition number, we propose an algorithm that proceeds in stages. In the kth stage, the algorithm alternates between rank-k projections and hard thresholding for a certain number of iterations. We run the algorithm for r stages, where r is the rank of L*. Intuitively, through this procedure we recover the lower singular values only after the input matrix has been sufficiently denoised, i.e., after sparse corruptions at the desired level have been removed. Figure 1 shows a pictorial representation of the alternating projections in different stages.

Parameters: The only real parameter to the algorithm is β, used in thresholding, which represents the "spikiness" of L*. That is, if the user expects L* to be "spiky" and the sparse part to be heavily diffused, then a higher value of β can be provided.
In our implementation, we found that selecting β aggressively helped speed up recovery of our algorithm. In particular, we selected β = 1/√n. Complexity: The complexity of each iteration within a single stage is O(kmn), since it involves calculating the rank-k approximation3 of an m×n matrix (done e.g. via vanilla PCA). The number of iterations in each stage is O (log (1/ϵ)) and there are at most r stages. Thus the overall complexity of the entire algorithm is then O(r2mn log(1/ϵ)). This is drastically lower than the best known bound of O m2n/ϵ on the number of iterations required by convex methods, and just a factor r away from the complexity of vanilla PCA. 3Note that we only require a rank-k approximation of the matrix rather than the actual singular vectors. Thus, the computational complexity has no dependence on the gap between the singular values. 4 Algorithm 1 (bL, bS) = AltProj(M, ϵ, r, β): Non-convex Alternating Projections based Robust PCA 1: Input: Matrix M ∈Rm×n, convergence criterion ϵ, target rank r, thresholding parameter β. 2: Pk(A) denotes the best rank-k approximation of matrix A. HTζ(A) denotes hard-thresholding, i.e. (HTζ(A))ij = Aij if |Aij| ≥ζ and 0 otherwise. 3: Set initial threshold ζ0 ←βσ1(M). 4: L(0) = 0, S(0) = HTζ0(M −L(0)) 5: for Stage k = 1 to r do 6: for Iteration t = 0 to T = 10 log nβ
M −S(0)
2 /ϵ do 7: Set threshold ζ as ζ = β σk+1(M −S(t)) + 1 2 t σk(M −S(t)) ! (2) 8: L(t+1) = Pk(M −S(t)) 9: S(t+1) = HTζ(M −L(t+1)) 10: end for 11: if βσk+1(L(t+1)) < ϵ 2n then 12: Return: L(T ), S(T ) /* Return rank-k estimate if remaining part has small norm */ 13: else 14: S(0) = S(T ) /* Continue to the next stage */ 15: end if 16: end for 17: Return: L(T ), S(T ) 3 Analysis In this section, we present our main result on the correctness of AltProj. We assume the following conditions: (L1) Rank of L∗is at most r. (L2) L∗is µ-incoherent, i.e., if L∗= U ∗Σ∗(V ∗)⊤is the SVD of L∗, then ∥(U ∗)i∥2 ≤µ√r √m , ∀1 ≤i ≤m and ∥(V ∗)i∥2 ≤µ√r √n , ∀1 ≤i ≤n, where (U ∗)i and (V ∗)i denote the ith rows of U ∗and V ∗respectively. (S1) Each row and column of S have at most α fraction of non-zero entries such that α ≤ 1 512µ2r. Note that in general, it is not possible to have a unique recovery of low-rank and sparse components. For example, if the input matrix M is both sparse and low rank, then there is no unique decomposition (e.g. M = e1e⊤ 1 ). The above conditions ensure uniqueness of the matrix decomposition problem. Additionally, we set the parameter β in Algorithm 1 be set as β = 4µ2r √mn. We now establish that our proposed algorithm recovers the low rank and sparse components under the above conditions. Theorem 1 (Noiseless Recovery). Under conditions (L1), (L2) and S∗, and choice of β as above, the outputs bL and bS of Algorithm 1 satisfy:
bL −L∗
F ≤ϵ,
bS −S∗
∞≤ ϵ √mn, and Supp bS ⊆Supp (S∗) . Remark (tight recovery conditions): Our result is tight up to constants, in terms of allowable sparsity level under the deterministic sparsity model. In other words, if we exceed the sparsity limit imposed in S1, it is possible to construct instances where there is no unique decomposition4. Our 4For instance, consider the n × n matrix which has r copies of the all ones matrix, each of size n r , placed across the diagonal. We see that this matrix has rank r and is incoherent with parameter µ = 1. Note that 5 conditions L1, L2 and S1 also match the conditions required by the convex method for recovery, as established in [HKZ11]. Remark (convergence rate): Our method has a linear rate of convergence, i.e. O(log(1/ϵ)) to achieve an error of ϵ, and hence we provide a strongly polynomial method for robust PCA. In contrast, the best known bound for convex methods for robust PCA is O(1/ϵ) iterations to converge to an ϵ-approximate solution. Theorem 1 provides recovery guarantees assuming that L∗is exactly rank-r. However, in several real-world scenarios, L∗can be nearly rank-r. Our algorithm can handle such situations, where M = L∗+ N ∗+ S∗, with N ∗being an additive noise. Theorem 1 is a special case of the following theorem which provides recovery guarantees when N ∗has small ℓ∞norm. Theorem 2 (Noisy Recovery). Under conditions (L1), (L2) and S∗, and choice of β as in Theorem 1, when the noise ∥N ∗∥∞≤σr(L∗) 100n ,the outputs bL, bS of Algorithm 1 satisfy:
bL −L∗
F ≤ϵ + 2µ2r 7 ∥N ∗∥2 + 8√mn √r ∥N ∗∥∞ ,
bS −S∗
∞≤ ϵ √mn + 2µ2r √mn 7 ∥N ∗∥2 + 8√mn √r ∥N ∗∥∞ , and Supp bS ⊆Supp (S∗) . 3.1 Proof Sketch We now present the key steps in the proof of Theorem 1. A detailed proof is provided in the appendix. Step I: Reduce to the symmetric case, while maintaining incoherence of L∗and sparsity of S∗. Using standard symmetrization arguments, we can reduce the problem to the symmetric case, where all the matrices involved are symmetric. See appendix for details on this step. Step II: Show decay in ∥L −L∗∥∞after projection onto the set of rank-k matrices. The t-th iterate L(t+1) of the k-th stage is given by L(t+1) = Pk(L∗+ S∗−S(t)). Hence, L(t+1) is obtained by using the top principal components of a perturbation of L∗given by L∗+ (S∗−S(t)). The key step in our analysis is to show that when an incoherent and low-rank L∗is perturbed by a sparse matrix S∗−S(t), then ∥L(t+1) −L∗∥∞is small and is much smaller than |S∗−S(t)|∞. The following lemma formalizes the intuition; see the appendix for a detailed proof. Lemma 1. Let L∗, S∗be symmetric and satisfy the assumptions of Theorem 1 and let S(t) and L(t) be the tth iterates of the kth stage of Algorithm 1. Let σ∗ 1, . . . , σ∗ n be the eigenvalues of L∗, s.t., |σ∗ 1| ≥· · · ≥|σ∗ r|. Then, the following holds:
∥L^(t+1) − L*∥_∞ ≤ (2µ²r/n) (σ*_{k+1} + (1/2)^t |σ*_k|),
∥S* − S^(t+1)∥_∞ ≤ (8µ²r/n) (σ*_{k+1} + (1/2)^t |σ*_k|), and Supp(S^(t+1)) ⊆ Supp(S*).

Moreover, the outputs L̂ and Ŝ of Algorithm 1 satisfy:
∥L̂ − L*∥_F ≤ ϵ, ∥Ŝ − S*∥_∞ ≤ ϵ/n, and Supp(Ŝ) ⊆ Supp(S*).

Step III: Show decay in ∥S − S*∥_∞ after projection onto the set of sparse matrices. We next show that if ∥L^(t+1) − L*∥_∞ is much smaller than ∥S^(t) − S*∥_∞, then the iterate S^(t+1) also has a much smaller error (w.r.t. S*) than S^(t). The lemma above formally provides the error bound.

Step IV: Recurse the argument. We have now reduced the ℓ_∞ norm of the sparse part by a factor of half, while maintaining its sparsity. We can now go back to Steps II and III and repeat the arguments for subsequent iterations.

(Footnote 4, continued: a fraction of α = O(1/r) sparse perturbations suffices to erase one of these blocks, making it impossible to recover the matrix.)

Figure 2: Comparison of AltProj and IALM on synthetic datasets. (a) Running time of AltProj and IALM with varying α. (b) Maximum rank of the intermediate iterates of IALM. (c) Running time of AltProj and IALM with varying µ. (d) Running time of AltProj and IALM with varying r.

4 Experiments

We now present an empirical study of our AltProj method. The goal of this study is two-fold: a) establish that our method indeed recovers the low-rank and sparse parts exactly, without significant parameter tuning; b) demonstrate that AltProj is significantly faster than Conv-RPCA (see (1)); we solve Conv-RPCA using the IALM method [CLMW11], a state-of-the-art solver [LCM10]. We implemented our method in Matlab and used a Matlab implementation of the IALM method by [LCM10]. We consider both synthetic experiments and experiments on real data involving the problem of foreground-background separation in a video. Each of our results for synthetic datasets is averaged over 5 runs.
Parameter Setting: Our pseudo-code (Algorithm 1) prescribes the threshold ζ in Step 4, which depends on knowledge of the singular values of the low-rank component L*. Instead, in the experiments, we set the threshold at the (t+1)-th step of the k-th stage as ζ = µ σ_{k+1}(M − S^(t))/√n. For synthetic experiments, we employ the µ used for data generation, and for real-world datasets, we tune µ through cross-validation. We found that this thresholding provides exact recovery while speeding up the computation significantly. We also note that [CLMW11] sets the regularization parameter λ in Conv-RPCA (1) as 1/√n (assuming m ≤ n). However, we found that for problems with large incoherence such a parameter setting does not provide exact recovery. Instead, we set λ = µ/√n in our experiments.

Synthetic datasets: Following the experimental setup of [CLMW11], the low-rank part L* = UV^⊤ is generated using normally distributed U ∈ R^{m×r}, V ∈ R^{n×r}. Similarly, supp(S*) is generated by sampling a uniformly random subset of [m]×[n] of size ∥S*∥₀, and each non-zero S*_ij is drawn i.i.d. from the uniform distribution over [r/(2√(mn)), r/√(mn)]. To increase the incoherence of L*, we randomly zero out rows of U, V and then re-normalize them. There are three key problem parameters for RPCA with a fixed matrix size: a) sparsity of S*; b) incoherence of L*; c) rank of L*. We investigate the performance of both AltProj and IALM by varying each of the three parameters while fixing the others. In our plots (see Figure 2), we report the computational time required by each of the two methods to decompose M into L + S up to a relative error (∥M − L − S∥_F/∥M∥_F) of 10⁻³. Figure 2 shows that AltProj scales significantly better than IALM for increasingly dense S*. We attribute this observation to the fact that as ∥S*∥₀ increases, the problem becomes "harder" and the intermediate iterates of IALM have ranks significantly larger than r.
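The synthetic setup described above is easy to reproduce; a minimal sketch (function name ours), assuming the same Gaussian factors and uniform sparse magnitudes:

```python
import numpy as np

def make_rpca_instance(m, n, r, sparsity, seed=0):
    # Synthetic setup of [CLMW11]: Gaussian low-rank factors plus a uniformly
    # random sparse support with entries in [r/(2*sqrt(mn)), r/sqrt(mn)].
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((m, r)) @ rng.standard_normal((n, r)).T
    S = np.zeros((m, n))
    idx = rng.choice(m * n, size=sparsity, replace=False)
    lo, hi = r / (2 * np.sqrt(m * n)), r / np.sqrt(m * n)
    S.flat[idx] = rng.uniform(lo, hi, size=sparsity)
    return L + S, L, S
```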
Our intuition is confirmed by Figure 2 (b), which shows that when the density (α) of S* is 0.4, the intermediate iterates of IALM can have rank over 500 while the rank of L* is only 5. We observe a similar trend for the other parameters: AltProj scales significantly better than IALM with increasing incoherence parameter µ (Figure 2 (c)) and increasing rank (Figure 2 (d)). See Appendix C for additional plots.

Real-world datasets: Next, we apply our method to the problem of foreground-background (F-B) separation in a video [LHGT04]. The observed matrix M is formed by vectorizing each frame and stacking them column-wise. Intuitively, the background in a video is the static part and hence forms a low-rank component, while the foreground is a dynamic but sparse perturbation. Here, we used two benchmark datasets, named Escalator and Restaurant. The Escalator dataset has 3417 frames at a resolution of 160 × 130. We first applied the standard PCA method for extracting the low-rank part. Figure 3 (b) shows the extracted background from the video. There are several artifacts (shadows of people near the escalator) that are not desirable. In contrast, both IALM and AltProj obtain significantly better F-B separation (see Figure 3 (c), (d)).

Figure 3: Foreground-background separation in the Escalator video. (a): Original image frame. (b): Best rank-10 approximation; time taken is 3.1s. (c): Low-rank frame obtained using AltProj; time taken is 63.2s. (d): Low-rank frame obtained using IALM; time taken is 1688.9s.

Figure 4: Foreground-background separation in the Restaurant video. (a): Original frame from the video. (b): Best rank-10 approximation (using PCA) of the original frame; 2.8s were required to compute the solution. (c): Low-rank part obtained using AltProj; the computational time required by AltProj was 34.9s. (d): Low-rank part obtained using IALM; 693.2s were required by IALM to compute the low-rank + sparse decomposition.
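Forming M from video frames as described above can be sketched as follows (helper names ours); a perfectly static video yields a rank-1 matrix, which is why the background ends up in the low-rank part:

```python
import numpy as np

def frames_to_matrix(frames):
    # Vectorize each frame and stack column-wise: M has shape (H*W, T).
    return np.stack([np.asarray(f, dtype=float).ravel() for f in frames], axis=1)

def matrix_to_frames(M, frame_shape):
    # Undo the stacking: column t of M is frame t.
    return [M[:, t].reshape(frame_shape) for t in range(M.shape[1])]
```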
Interestingly, AltProj removes the steps of the escalator, which are moving and arguably part of the dynamic foreground, while IALM keeps the steps in the background part. Our method is also significantly faster: AltProj takes 63.2s, about 26 times faster than IALM, which takes 1688.9s.

Restaurant dataset: Figure 4 shows the comparison of AltProj and IALM on a subset of the Restaurant dataset, where we consider the last 2055 frames at a resolution of 120 × 160. AltProj was around 19 times faster than IALM. Moreover, the background extraction is visually of better quality (for example, notice the blur near the top-corner counter in the IALM solution). Plot (b) shows the PCA solution, which suffers from a similar blur at the top corner of the image, while the background frame extracted by AltProj does not have any noticeable artifacts.

5 Conclusion

In this work, we proposed a non-convex method for robust PCA, which consists of alternating projections onto low-rank and sparse matrices. We established global convergence of our method under conditions which match those for convex methods, while our method has much faster running times and superior experimental performance. This work opens up a number of interesting questions for future investigation. While we match the convex methods under the deterministic sparsity model, studying the random sparsity model is of interest. Our noisy recovery results assume deterministic noise; improving the results under random noise needs to be investigated. There are many decomposition problems beyond the robust PCA setting, e.g., structured sparsity models, the robust tensor PCA problem, and so on. It would be interesting to establish global convergence for non-convex methods in these settings.

Acknowledgements

AA and UN would like to acknowledge NSF grant CCF-1219234, ONR N00014-14-1-0665, and a Microsoft faculty fellowship.
SS would like to acknowledge NSF grants 1302435, 0954059, 1017525 and DTRA grant HDTRA1-13-1-0024. PJ would like to acknowledge Nikhil Srivastava and Deeparnab Chakrabarty for several insightful discussions during the course of the project.

References

[AAJ+13] A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli, and R. Tandon. Learning sparsely used overcomplete dictionaries via alternating minimization. arXiv:1310.7991, Oct. 2013.
[AGH+12] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor methods for learning latent variable models. arXiv:1210.7559, Oct. 2012.
[ANW12] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. The Annals of Statistics, 40(2):1171–1197, 2012.
[Bha97] Rajendra Bhatia. Matrix Analysis. Springer, 1997.
[Che13] Y. Chen. Incoherence-optimal matrix completion. arXiv e-prints, October 2013.
[CLMW11] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? J. ACM, 58(3):11, 2011.
[CSPW11] Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, and Alan S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
[CSX12] Yudong Chen, Sujay Sanghavi, and Huan Xu. Clustering sparse graphs. In Advances in Neural Information Processing Systems, pages 2204–2212, 2012.
[EKYY13] László Erdős, Antti Knowles, Horng-Tzer Yau, and Jun Yin. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. The Annals of Probability, 2013.
[Har13] Moritz Hardt. On the provable convergence of alternating minimization for matrix completion. arXiv:1312.0925, 2013.
[HKZ11] Daniel Hsu, Sham M. Kakade, and Tong Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 2011.
[JNS13] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, 2013.
[KC12] Anastasios Kyrillidis and Volkan Cevher. Matrix ALPS: Accelerated low rank and sparse matrix reconstruction. In SSP Workshop, 2012.
[Kes12] Raghunandan H. Keshavan. Efficient algorithms for collaborative filtering. PhD thesis, Stanford University, 2012.
[LCM10] Zhouchen Lin, Minming Chen, and Yi Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv:1009.5055, 2010.
[LHGT04] Liyuan Li, Weimin Huang, I. Y.-H. Gu, and Qi Tian. Statistical modeling of complex backgrounds for foreground object detection. IEEE Transactions on Image Processing, 2004.
[MZYM11] Hossein Mobahi, Zihan Zhou, Allen Y. Yang, and Yi Ma. Holistic 3D reconstruction of urban structures from low-rank textures. In ICCV Workshops, pages 593–600, 2011.
[NJS13] Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In NIPS, pages 2796–2804, 2013.
[SAJ14] H. Sedghi, A. Anandkumar, and E. Jonckheere. Guarantees for stochastic ADMM in high dimensions. Preprint, Feb. 2014.
[Shi13] Lei Shi. Sparse additive text models with low rank background. In Advances in Neural Information Processing Systems, pages 172–180, 2013.
[WHML13] X. Wang, M. Hong, S. Ma, and Z. Luo. Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers. arXiv:1308.5294, 2013.
[XCS12] Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. IEEE Transactions on Information Theory, 58(5):3047–3064, 2012.
Expectation-Maximization for Learning Determinantal Point Processes

Jennifer Gillenwater, Computer and Information Science, University of Pennsylvania, jengi@cis.upenn.edu
Alex Kulesza, Computer Science and Engineering, University of Michigan, kulesza@umich.edu
Emily Fox, Statistics, University of Washington, ebfox@stat.washington.edu
Ben Taskar, Computer Science and Engineering, University of Washington, taskar@cs.washington.edu

Abstract

A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a given task, we would like to learn the entries of its kernel matrix by maximizing the log-likelihood of the available data. However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjectured to be NP-hard [1]. Thus, previous work has instead focused on more restricted convex learning settings: learning only a single weight for each row of the kernel matrix [2], or learning weights for a linear combination of DPPs with fixed kernel matrices [3]. In this work we propose a novel algorithm for learning the full kernel matrix. By changing the kernel parameterization from matrix entries to eigenvalues and eigenvectors, and then lower-bounding the likelihood in the manner of expectation-maximization algorithms, we obtain an effective optimization procedure. We test our method on a real-world product recommendation task, and achieve relative gains of up to 16.5% in test log-likelihood compared to the naive approach of maximizing likelihood by projected gradient ascent on the entries of the kernel matrix.

1 Introduction

Subset selection is a core task in many real-world applications.
For example, in product recommendation we typically want to choose a small set of products from a large collection; many other examples of subset selection tasks turn up in domains like document summarization [4, 5], sensor placement [6, 7], image search [3, 8], and auction revenue maximization [9], to name a few. In these applications, a good subset is often one whose individual items are all high-quality, but also all distinct. For instance, recommended products should be popular, but they should also be diverse to increase the chance that a user finds at least one of them interesting. Determinantal point processes (DPPs) offer one way to model this tradeoff; a DPP defines a distribution over all possible subsets of a ground set, and the mass it assigns to any given set is a balanced measure of that set's quality and diversity. Originally discovered as models of fermions [10], DPPs have recently been effectively adapted for a variety of machine learning tasks [8, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 3, 20]. They offer attractive computational properties, including exact and efficient normalization, marginalization, conditioning, and sampling [21]. These properties arise in part from the fact that a DPP can be compactly parameterized by an N × N positive semi-definite matrix L. Unfortunately, though, learning L from example subsets by maximizing likelihood is conjectured to be NP-hard [1, Conjecture 4.1]. While gradient ascent can be applied in an attempt to approximately optimize the likelihood objective, we show later that it requires a projection step that often produces degenerate results. For this reason, in most previous work only partial learning of L has been attempted. [2] showed that the problem of learning a scalar weight for each row of L is a convex optimization problem. This amounts to learning what makes an item high-quality, but does not address the issue of what makes two items similar.
[3] explored a different direction, learning weights for a linear combination of DPPs with fixed Ls. This works well in a limited setting, but requires storing a potentially large set of kernel matrices, and the final distribution is no longer a DPP, which means that many attractive computational properties are lost. [8] proposed as an alternative that one first assume L takes on a particular parametric form, and then sample from the posterior distribution over kernel parameters using Bayesian methods. This overcomes some of the disadvantages of [3]'s L-ensemble method, but does not allow for learning an unconstrained, non-parametric L. The learning method we propose in this paper differs from those of prior work in that it does not assume fixed values or restrictive parameterizations for L, and it exploits the eigendecomposition of L. Many properties of a DPP can be simply characterized in terms of the eigenvalues and eigenvectors of L, and working with this decomposition allows us to develop an expectation-maximization (EM) style optimization algorithm. This algorithm negates the need for the problematic projection step that is required for naive gradient ascent to maintain positive semi-definiteness of L. As the experiments show, a projection step can sometimes lead to learning a nearly diagonal L, which fails to model the negative interactions between items. These interactions are vital, as they lead to the diversity-seeking nature of a DPP. The proposed EM algorithm overcomes this failing, making it more robust to initialization and dataset changes. It is also asymptotically faster than gradient ascent.

2 Background

Formally, a DPP P on a ground set of items Y = {1, . . . , N} is a probability measure on 2^Y, the set of all subsets of Y. For every Y ⊆ Y we have P(Y) ∝ det(L_Y), where L is a positive semi-definite (PSD) matrix. The subscript L_Y ≡ [L_ij]_{i,j∈Y} denotes the restriction of L to the entries indexed by elements of Y, and we have det(L_∅) ≡ 1.
Notice that the restriction to PSD matrices ensures that all principal minors of L are non-negative, so that det(L_Y) ≥ 0 as required for a proper probability distribution. The normalization constant for the distribution can be computed explicitly thanks to the fact that Σ_Y det(L_Y) = det(L + I), where I is the N × N identity matrix. Intuitively, we can think of a diagonal entry L_ii as capturing the quality of item i, while an off-diagonal entry L_ij measures the similarity between items i and j.

An alternative representation of a DPP is given by the marginal kernel: K = L(L + I)⁻¹. The L-K relationship can also be written in terms of their eigendecompositions. L and K share the same eigenvectors v, and an eigenvalue λ_i of K corresponds to an eigenvalue λ_i/(1 − λ_i) of L:

K = Σ_{j=1}^N λ_j v_j v_j^⊤ ⇔ L = Σ_{j=1}^N (λ_j/(1 − λ_j)) v_j v_j^⊤. (1)

Clearly, if L is PSD then K is as well, and the above equations also imply that the eigenvalues of K are further restricted to be ≤ 1. K is called the marginal kernel because, for any set Y ∼ P and for every A ⊆ Y:

P(A ⊆ Y) = det(K_A). (2)

We can also write the exact (non-marginal, normalized) probability of a set Y ∼ P in terms of K:

P(Y) = det(L_Y)/det(L + I) = |det(K − I_Y)|, (3)

where I_Y is the identity matrix with entry (i, i) zeroed for items i ∈ Y [1, Equation 3.69]. In what follows we use the K-based formula for P(Y) and learn the marginal kernel K. This is equivalent to learning L, as Equation (1) can be applied to convert from K to L.

3 Learning algorithms

In our learning setting the input consists of n example subsets, {Y₁, . . . , Yₙ}, where Yᵢ ⊆ {1, . . . , N} for all i. Our goal is to maximize the likelihood of these example sets. We first describe in Section 3.1 a naive optimization procedure: projected gradient ascent on the entries of the marginal matrix K, which will serve as a baseline in our experiments.
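The identities in Equations (1)–(3) are easy to check numerically on a small random kernel; a minimal numpy sketch (function names ours):

```python
import numpy as np

def L_to_K(L):
    # Equation (1): same eigenvectors; L-eigenvalue s maps to K-eigenvalue
    # s / (1 + s), so that K = L (L + I)^{-1}.
    lam, V = np.linalg.eigh(L)
    return (V * (lam / (1 + lam))) @ V.T

def dpp_prob_L(L, Y):
    # Equation (3), left side: det(L_Y) / det(L + I), with det(L_empty) = 1.
    LY = L[np.ix_(Y, Y)]
    num = np.linalg.det(LY) if len(Y) else 1.0
    return num / np.linalg.det(L + np.eye(L.shape[0]))

def dpp_prob_K(K, Y):
    # Equation (3), right side: |det(K - I_Y)|, where I_Y is the identity
    # with entries (i, i) zeroed for i in Y.
    IY = np.eye(K.shape[0])
    IY[Y, Y] = 0.0
    return abs(np.linalg.det(K - IY))
```

Summing dpp_prob_L over all subsets recovers the normalization identity Σ_Y det(L_Y) = det(L + I).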
We then develop an EM method: Section 3.2 changes variables from kernel entries to eigenvalues and eigenvectors (introducing a hidden variable in the process), Section 3.3 applies Jensen's inequality to lower-bound the objective, and Sections 3.4 and 3.5 outline a coordinate ascent procedure on this lower bound.

3.1 Projected gradient ascent

The log-likelihood maximization problem, based on Equation (3), is:

max_K Σ_{i=1}^n log |det(K − I_{Y_i})| s.t. K ⪰ 0, I − K ⪰ 0, (4)

where the first constraint ensures that K is PSD and the second puts an upper limit of 1 on its eigenvalues. Let L(K) represent this log-likelihood objective. Its partial derivative with respect to K is easy to compute by applying a standard matrix derivative rule [22, Equation 57]:

∂L(K)/∂K = Σ_{i=1}^n (K − I_{Y_i})⁻¹. (5)

Thus, projected gradient ascent [23] is a viable, simple optimization technique. Algorithm 1 outlines this method, which we refer to as K-Ascent (KA). The initial K supplied as input to the algorithm can be any PSD matrix with eigenvalues ≤ 1. The first part of the projection step, max(λ, 0), chooses the closest (in Frobenius norm) PSD matrix to Q [24, Equation 1]. The second part, min(λ, 1), caps the eigenvalues at 1. (Notice that only the eigenvalues have to be projected; K remains symmetric after the gradient step, so its eigenvectors are already guaranteed to be real.)

Unfortunately, the projection can take us to a poor local optimum. To see this, consider the case where the starting kernel K is a poor fit to the data. In this case, a large initial step size η will probably be accepted; even though such a step will likely result in the truncation of many eigenvalues at 0, the resulting matrix will still be an improvement over the poor initial K. However, with many zero eigenvalues, the new K will be near-diagonal, and, unfortunately, Equation (5) dictates that if the current K is diagonal, then its gradient is as well.
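The projection used inside the KA loop (clip the eigenvalues of the trial matrix Q to [0, 1]) can be sketched as follows (function name ours):

```python
import numpy as np

def project_K(Q):
    # KA projection step: eigendecompose and clip eigenvalues to [0, 1],
    # giving the closest PSD matrix with spectrum bounded by 1.
    lam, V = np.linalg.eigh(Q)
    return (V * np.clip(lam, 0.0, 1.0)) @ V.T
```

Note that the projection is idempotent: a matrix that already satisfies the constraints is left unchanged.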
Thus, the KA algorithm cannot easily move to any highly non-diagonal matrix. It is possible that employing more complex step-size selection mechanisms could alleviate this problem, but the EM algorithm we develop in the next section negates the need for these entirely.

The EM algorithm also has an advantage in terms of asymptotic runtime. The computational complexity of KA is dominated by the matrix inverses of the L derivative, each of which requires O(N³) operations, and by the eigendecomposition needed for the projection, also O(N³). The overall runtime of KA, assuming T₁ iterations until convergence and an average of T₂ iterations to find a step size, is O(T₁nN³ + T₁T₂N³). As we will show in the following sections, the overall runtime of the EM algorithm is O(T₁nNk² + T₁T₂N³), which can be substantially better than KA's runtime for k ≪ N.

Algorithm 1 K-Ascent (KA)
Input: K, {Y₁, . . . , Yₙ}, c
repeat
  G ← ∂L(K)/∂K (Eq. 5)
  η ← 1
  repeat
    Q ← K + ηG
    Eigendecompose Q into V, λ
    λ ← min(max(λ, 0), 1)
    Q ← V diag(λ) V^⊤
    η ← η/2
  until L(Q) > L(K)
  δ ← L(Q) − L(K)
  K ← Q
until δ < c
Output: K

Algorithm 2 Expectation-Maximization (EM)
Input: K, {Y₁, . . . , Yₙ}, c
Eigendecompose K into V, λ
repeat
  for j = 1, . . . , N do
    λ′_j ← (1/n) Σᵢ p_K(j ∈ J | Yᵢ) (Eq. 19)
  G ← ∂F(V, λ′)/∂V (Eq. 20)
  η ← 1
  repeat
    V′ ← V exp[η(V^⊤G − G^⊤V)]
    η ← η/2
  until L(V′, λ′) > L(V, λ′)
  δ ← F(V′, λ′) − F(V, λ)
  λ ← λ′, V ← V′, η ← 2η
until δ < c
Output: K

3.2 Eigendecomposing

Eigendecomposition is key to many core DPP algorithms such as sampling and marginalization. This is because the eigendecomposition provides an alternative view of the DPP as a generative process, which often leads to more efficient algorithms. Specifically, sampling a set Y can be broken down into a two-step process, the first of which involves generating a hidden variable J ⊆ {1, . . . , N} that codes for a particular set of K's eigenvectors. We review this process below, then exploit it to develop an EM optimization scheme.

Suppose K = V ΛV^⊤ is an eigendecomposition of K. Let V^J denote the submatrix of V containing only the columns corresponding to the indices in a set J ⊆ {1, . . . , N}. Consider the corresponding marginal kernel, with all selected eigenvalues set to 1:

K^{V_J} = Σ_{j∈J} v_j v_j^⊤ = V^J (V^J)^⊤. (6)

Any such kernel whose eigenvalues are all 1 is called an elementary DPP. According to [21, Theorem 7], a DPP with marginal kernel K is a mixture of all 2^N possible elementary DPPs:

P(Y) = Σ_{J⊆{1,...,N}} P^{V_J}(Y) Π_{j∈J} λ_j Π_{j∉J} (1 − λ_j), P^{V_J}(Y) = 1(|Y| = |J|) det(K^{V_J}_Y). (7)

This perspective leads to an efficient DPP sampling algorithm, where a set J is first chosen according to its mixture weight in Equation (7), and then a simple algorithm is used to sample from P^{V_J} [5, Algorithm 1]. In this sense, the index set J is an intermediate hidden variable in the process for generating a sample Y.

We can exploit this hidden variable J to develop an EM algorithm for learning K. Re-writing the data log-likelihood to make the hidden variable explicit:

L(K) = L(Λ, V) = Σ_{i=1}^n log(Σ_J p_K(J, Yᵢ)) = Σ_{i=1}^n log(Σ_J p_K(Yᵢ | J) p_K(J)), where (8)

p_K(J) = Π_{j∈J} λ_j Π_{j∉J} (1 − λ_j), p_K(Yᵢ | J) = 1(|Yᵢ| = |J|) det([V^J (V^J)^⊤]_{Yᵢ}). (9)

These equations follow directly from Equations (6) and (7).

3.3 Lower bounding the objective

We now introduce an auxiliary distribution, q(J | Yᵢ), and deploy it with Jensen's inequality to lower-bound the likelihood objective. This is a standard technique for developing EM schemes for dealing with hidden variables [25]. Proceeding in this direction:

L(V, Λ) = Σ_{i=1}^n log(Σ_J q(J | Yᵢ) p_K(J, Yᵢ)/q(J | Yᵢ)) ≥ Σ_{i=1}^n Σ_J q(J | Yᵢ) log(p_K(J, Yᵢ)/q(J | Yᵢ)) ≡ F(q, V, Λ). (10)

The function F(q, V, Λ) can be expressed in either of the following two forms:

F(q, V, Λ) = Σ_{i=1}^n −KL(q(J | Yᵢ) ∥ p_K(J | Yᵢ)) + L(V, Λ) (11)
           = Σ_{i=1}^n E_q[log p_K(J, Yᵢ)] + H(q), (12)

where H is entropy. Consider optimizing this new objective by coordinate ascent.
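Because p_K(J) in Equation (9) factorizes over indices, the hidden set J is generated by N independent Bernoulli(λ_j) coin flips over the eigenvector indices; a small sketch (function names ours):

```python
import numpy as np

def mixture_weight(lam, J):
    # p_K(J) from Equation (9): index j is included with probability lam[j],
    # excluded with probability 1 - lam[j], independently across j.
    lam = np.asarray(lam, dtype=float)
    inJ = np.zeros(len(lam), dtype=bool)
    inJ[list(J)] = True
    return float(np.prod(np.where(inJ, lam, 1.0 - lam)))

def sample_J(lam, rng):
    # First phase of the two-step DPP sampling process: draw the hidden J.
    return [j for j in range(len(lam)) if rng.random() < lam[j]]
```

Summing mixture_weight over all 2^N subsets gives 1, as required of the mixture in Equation (7).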
From Equation (11) it is clear that, holding V, Λ constant, F is concave in q; this follows from the concavity of KL divergence. Holding q constant in Equation (12) yields the following function:

F(V, Λ) = Σ_{i=1}^n Σ_J q(J | Yᵢ) [log p_K(J) + log p_K(Yᵢ | J)]. (13)

This expression is concave in λ_j, since log is concave. However, it is not concave in V due to the non-convex V^⊤V = I constraint. We describe in Section 3.5 one way to handle this. To summarize, coordinate ascent on F(q, V, Λ) alternates the following "expectation" and "maximization" steps; the first is concave in q, and the second is concave in the eigenvalues:

E-step: min_q Σ_{i=1}^n KL(q(J | Yᵢ) ∥ p_K(J | Yᵢ)) (14)
M-step: max_{V,Λ} Σ_{i=1}^n E_q[log p_K(J, Yᵢ)] s.t. 0 ≤ λ ≤ 1, V^⊤V = I (15)

3.4 E-step

The E-step is easily solved by setting q(J | Yᵢ) = p_K(J | Yᵢ), which minimizes the KL divergence. Interestingly, we can show that this distribution is itself a conditional DPP, and hence can be compactly described by an N × N kernel matrix. Thus, to complete the E-step, we simply need to construct this kernel. Lemma 1 (see the supplement for a proof) gives an explicit formula. Note that q's probability mass is restricted to sets of a particular size k, and hence we call it a k-DPP. A k-DPP is a variant of the DPP that can also be efficiently sampled from and marginalized, via modifications of the standard DPP algorithms. (See the supplement and [3] for more on k-DPPs.)

Lemma 1. At the completion of the E-step, q(J | Yᵢ) with |Yᵢ| = k is a k-DPP with (non-marginal) kernel Q^{Yᵢ}:

Q^{Yᵢ} = R Z^{Yᵢ} R, and q(J | Yᵢ) ∝ 1(|Yᵢ| = |J|) det(Q^{Yᵢ}_J), where (16)
U = V^⊤, Z^{Yᵢ} = U^{Yᵢ}(U^{Yᵢ})^⊤, and R = diag(√(λ/(1 − λ))). (17)

3.5 M-step

The M-step update for the eigenvalues is a closed-form expression with no need for projection. Taking the derivative of Equation (13) with respect to λ_j, setting it equal to zero, and solving for λ_j:

λ_j = (1/n) Σ_{i=1}^n Σ_{J : j∈J} q(J | Yᵢ). (18)

The exponential-sized sum here is impractical, but we can eliminate it.
Recall from Lemma 1 that q(J | Yᵢ) is a k-DPP with kernel Q^{Yᵢ}. Thus, we can use k-DPP marginalization algorithms to efficiently compute the sum over J. More concretely, let V̂ represent the eigenvectors of Q^{Yᵢ}, with v̂_r(j) indicating the j-th element of the r-th eigenvector. Then the marginals are:

Σ_{J : j∈J} q(J | Yᵢ) = q(j ∈ J | Yᵢ) = Σ_{r=1}^N v̂_r(j)², (19)

which allows us to compute the eigenvalue updates in time O(nNk²), for k = maxᵢ |Yᵢ|. (See the supplement for the derivation of Equation (19) and its computational complexity.) Note that this update is self-normalizing, so explicit enforcement of the 0 ≤ λ_j ≤ 1 constraint is unnecessary. There is one small caveat: the Q^{Yᵢ} matrix will be infinite if any λ_j is exactly equal to 1 (due to R in Equation (17)). In practice, we simply tighten the constraint on λ to keep it slightly below 1.

Turning now to the M-step update for the eigenvectors, the derivative of Equation (13) with respect to V involves an exponential-size sum over J similar to that of the eigenvalue derivative. However, the terms of the sum in this case depend on V as well as on q(J | Yᵢ), making it hard to simplify. Yet, for the particular case of the initial gradient, where we have q = p, simplification is possible:

∂F(V, Λ)/∂V = Σ_{i=1}^n 2B^{Yᵢ}(H^{Yᵢ})⁻¹ V_{Yᵢ} R², (20)

where H^{Yᵢ} is the |Yᵢ| × |Yᵢ| matrix V_{Yᵢ} R² V_{Yᵢ}^⊤ and V_{Yᵢ} = (U^{Yᵢ})^⊤. B^{Yᵢ} is an N × |Yᵢ| matrix containing the columns of the N × N identity corresponding to items in Yᵢ; B^{Yᵢ} simply serves to map the gradients with respect to V_{Yᵢ} into the proper positions in V. This formula allows us to compute the eigenvector derivatives in time O(nNk²), where again k = maxᵢ |Yᵢ|. (See the supplement for the derivation of Equation (20) and its computational complexity.) Equation (20) is only valid for the first gradient step, so in practice we do not bother to fully optimize V in each M-step; we simply take a single gradient step on V.
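The E-step kernel construction of Lemma 1 can be sketched and sanity-checked against the unnormalized posterior p_K(J) p_K(Yᵢ | J) from Equation (9); the determinants det(Q^{Yᵢ}_J) match it up to the constant Π_j (1 − λ_j). Function names below are ours:

```python
import numpy as np

def estep_kernel(V, lam, Yi):
    # Lemma 1: Q^{Yi} = R Z^{Yi} R with U = V^T, Z^{Yi} = U^{Yi} (U^{Yi})^T,
    # and R = diag(sqrt(lam / (1 - lam))).  q(J | Yi) is then the k-DPP
    # proportional to det(Q^{Yi}_J) over sets J with |J| = |Yi|.
    lam = np.asarray(lam, dtype=float)
    U = V.T
    UY = U[:, Yi]                  # columns of U indexed by the items in Yi
    Z = UY @ UY.T
    r = np.sqrt(lam / (1.0 - lam))
    return (r[:, None] * Z) * r[None, :]
```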
Ideally we would repeatedly evaluate the M-step objective, Equation (13), with various step sizes to find the optimal one. However, the M-step objective is intractable to evaluate exactly, as it is an expectation with respect to an exponential-size distribution. In practice, we solve this issue by performing an E-step for each trial step size. That is, we update q's distribution to match the updated V and Λ that define p_K, and then determine if the current step size is good by checking for improvement in the likelihood L.

There is also the issue of enforcing the non-convex constraint V^⊤V = I. We could project V to ensure this constraint, but, as previously discussed for eigenvalues, projection steps often lead to poor local optima. Thankfully, for the particular constraint associated with V, more sophisticated update techniques exist: the constraint V^⊤V = I corresponds to optimization over a Stiefel manifold, so the algorithm from [26, Page 326] can be employed. In practice, we simplify this algorithm by neglecting second-order information (the Hessian) and using the fact that the V in our application is full-rank. With these simplifications, the following multiplicative update is all that is needed:

V ← V exp[η(V^⊤ ∂L/∂V − (∂L/∂V)^⊤ V)], (21)

where exp denotes the matrix exponential and η is the step size. Algorithm 2 summarizes the overall EM method. As previously mentioned, assuming T₁ iterations until convergence and an average of T₂ iterations to find a step size, its overall runtime is O(T₁nNk² + T₁T₂N³). The first term in this complexity comes from the eigenvalue updates, Equation (19), and the eigenvector derivative computation, Equation (20). The second term comes from repeatedly computing the Stiefel manifold update of V, Equation (21), during the step size search.

4 Experiments

We test the proposed EM learning method (Algorithm 2) by comparing it to K-Ascent (KA, Algorithm 1)¹. Both methods require a starting marginal kernel K̃.
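A sketch of the update in Equation (21), using scipy.linalg.expm for the matrix exponential (function name ours). Because the exponent η(V^⊤G − G^⊤V) is antisymmetric, its exponential is orthogonal, so the constraint V^⊤V = I is preserved exactly rather than being enforced by projection:

```python
import numpy as np
from scipy.linalg import expm

def stiefel_update(V, G, eta):
    # Equation (21): V <- V expm(eta * (V^T G - G^T V)).
    # The exponent A is antisymmetric, so expm(eta * A) is orthogonal.
    A = V.T @ G - G.T @ V
    return V @ expm(eta * A)
```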
Note that neither EM nor KA can deal well with starting from a kernel with too many zeros. For example, starting from a diagonal kernel, both gradients, Equations (5) and (20), will be diagonal, resulting in no modeling of diversity. Thus, the two initialization options that we explore have non-trivial off-diagonals. The first of these options is relatively naive, while the other incorporates statistics from the data.

For the first initialization type, we use a Wishart distribution with N degrees of freedom and an identity covariance matrix to draw L̃ ∼ W_N(N, I). The Wishart distribution is relatively unassuming: in terms of eigenvectors, it spreads its mass uniformly over all unitary matrices [27]. We make just one simple modification to its output to make it a better fit for practical data: we re-scale the resulting matrix by 1/N so that the corresponding DPP will place a non-trivial amount of probability mass on small sets. (The Wishart's mean is NI, so it tends to over-emphasize larger sets unless we re-scale.) We then convert L̃ to K̃ via Equation (1).

(Footnote 1: Code and data for all experiments can be downloaded from https://code.google.com/p/em-for-dpps)

Figure 1: Relative test log-likelihood differences, 100(EM − KA)/|KA|, using: (a) Wishart initialization in the full-data setting, and (b) moments-matching initialization in the data-poor setting.

For the second initialization type, we employ a form of moment matching. Let mᵢ and mᵢⱼ represent the normalized frequencies of single items and pairs of items in the training data:

mᵢ = (1/n) Σ_{ℓ=1}^n 1(i ∈ Y_ℓ), mᵢⱼ = (1/n) Σ_{ℓ=1}^n 1(i ∈ Y_ℓ ∧ j ∈ Y_ℓ).
(22) Recalling Equation (2), we attempt to match the first and second order moments by choosing K̃ as:

K̃_ii = m_i,   K̃_ij = √(max{K̃_ii K̃_jj − m_ij, 0}).   (23)

To ensure a valid starting kernel, we then project K̃ by clipping its eigenvalues at 0 and 1.

4.1 Baby registry tests

Consider a product recommendation task, where the ground set comprises N products that can be added to a particular category (e.g., toys or safety) in a baby registry. A very simple recommendation system might suggest products that are popular with other consumers; however, this does not account for negative interactions: if a consumer has already chosen a carseat, they most likely will not choose an additional carseat, no matter how popular it is with other consumers. DPPs are ideal for capturing such negative interactions. A learned DPP could be used to populate an initial, basic registry, as well as to provide live updates of product recommendations as a consumer builds their registry.

To test our DPP learning algorithms, we collected a dataset consisting of 29,632 baby registries from Amazon.com, filtering out those listing fewer than 5 or more than 100 products. Amazon characterizes each product in a baby registry as belonging to one of 18 categories, such as "toys" and "safety". For each registry, we created sub-registries by splitting it according to these categories. (A registry with 5 toy items and 10 safety items produces two sub-registries.) For each category, we then filtered down to its top 100 most frequent items, and removed any product that did not occur in at least 100 sub-registries. We discarded categories with N < 25 or fewer than 2N remaining (non-empty) sub-registries for training. The resulting 13 categories have an average inventory size of N = 71 products and an average number of sub-registries n = 8,585. We used 70% of the data for training and 30% for testing.
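The two initializers can be sketched as follows (a sketch, not the authors' code; function names are ours). Equation (1) of the paper is not reproduced in this section, so the sketch assumes it is the standard marginal-kernel relation K = L(L + I)⁻¹; the eigenvalue clipping at the end of `moments_init` implements the projection described above:

```python
import numpy as np

def wishart_init(N, rng):
    """Naive initializer: draw L ~ W_N(N, I), re-scale by 1/N, convert to K.

    Assumes the L -> K conversion is the standard marginal-kernel map
    K = L (L + I)^{-1} (Equation (1) is not shown in this excerpt)."""
    A = rng.standard_normal((N, N))
    L = (A @ A.T) / N  # Wishart(N, I) draw, re-scaled by 1/N
    return L @ np.linalg.inv(L + np.eye(N))

def moments_init(sets, N):
    """Moments-matching initializer, Equations (22)-(23), followed by a
    projection that clips the eigenvalues of K to [0, 1]."""
    n = len(sets)
    m = np.zeros(N)       # singleton frequencies m_i
    M = np.zeros((N, N))  # pair frequencies m_ij
    for Y in sets:
        idx = np.array(sorted(Y))
        m[idx] += 1.0
        M[np.ix_(idx, idx)] += 1.0
    m /= n
    M /= n
    K = np.diag(m)
    for i in range(N):
        for j in range(N):
            if i != j:
                K[i, j] = np.sqrt(max(m[i] * m[j] - M[i, j], 0.0))
    w, U = np.linalg.eigh((K + K.T) / 2)      # K is symmetric by construction
    return (U * np.clip(w, 0.0, 1.0)) @ U.T   # clip eigenvalues to [0, 1]
```

Both functions return an N × N starting kernel K̃ suitable for either EM or KA.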
Note that categories such as "carseats" contain more diverse items than just their namesake; for instance, "carseats" also contains items such as seat back kick protectors and rear-facing baby view mirrors. See the supplement for more dataset details and for quartile numbers for all of the experiments.

Figure 1a shows the relative test log-likelihood differences of EM and KA when starting from a Wishart initialization. These numbers are the medians from 25 trials (draws from the Wishart).

Figure 2: (a) A high-probability set of size k = 10 selected using an EM model for the "safety" category: Graco Sweet Slumber Sound Machine, VTech Comm. Audio Monitor, Boppy Noggin Nest Head Support, Cloud b Twilight Constellation Night Light, Braun ThermoScan Lens Filters, Britax EZ-Cling Sun Shades, TL Care Organic Cotton Mittens, Regalo Easy Step Walk Thru Gate, Aquatopia Bath Thermometer Alarm, Infant Optics Video Monitor. (b) Runtime ratios, KA runtime / EM runtime, per category.

EM gains an average of 3.7%, but has a much greater advantage for some categories than for others. Speculating that EM has more of an advantage when the off-diagonal components of K are truly important (when products exhibit strong negative interactions), we created a matrix M for each category with the true data marginals from Equation (22) as its entries. We then checked the value of

d = (1/N) · ‖M‖_F / ‖diag(M)‖₂.

This value correlates well with the relative gains for EM: the 4 categories for which EM has the largest gains (safety, furniture, carseats, and strollers) all exhibit d > 0.025, while categories such as feeding and gear have d < 0.012.
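Reading d as the single ratio (1/N) · ‖M‖_F / ‖diag(M)‖₂ (our interpretation), it is a one-line computation on the marginal matrix M:

```python
import numpy as np

def diversity_score(M):
    """d = (1/N) * ||M||_F / ||diag(M)||_2 for the marginal matrix M
    (M_ii = m_i, M_ij = m_ij from Equation (22)); larger d means the
    off-diagonal entries carry more relative weight."""
    N = M.shape[0]
    return np.linalg.norm(M, 'fro') / (N * np.linalg.norm(np.diag(M)))
```

For a purely diagonal M the score bottoms out at 1/N, and it grows as off-diagonal (pairwise) marginal mass is added.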
Investigating further, we found that, as foreshadowed in Section 3.1, KA performs particularly poorly in the high-d setting because of its projection step: projection can result in KA learning a near-diagonal matrix. If instead of the Wishart initialization we use the moments-matching initializer, this alleviates KA's projection problem, as it provides a starting point closer to the true kernel. With this initializer, KA and EM have comparable test log-likelihoods (average EM gain of 0.4%). However, the moments-matching initializer is not a perfect fix for the KA algorithm in all settings. For instance, consider a data-poor setting, where for each category we have only n = 2N training examples. In this case, even with the moments-matching initializer EM has a significant edge over KA, as shown in Figure 1b: EM gains an average of 4.5%, with a maximum gain of 16.5% for the safety category.

To give a concrete example of the advantages of EM training, Figure 2a shows a greedy approximation [28, Section 4] to the most-likely ten-item registry in the category "safety", according to a Wishart-initialized EM model. The corresponding KA selection differs from Figure 2a in that it replaces the lens filters and the head support with two additional baby monitors: "Motorola MBP36 Remote Wireless Video Baby Monitor" and "Summer Infant Baby Touch Digital Color Video Monitor". It seems unlikely that many consumers would select three different brands of video monitor.

Having established that EM is more robust than KA, we conclude with an analysis of runtimes. Figure 2b shows the ratio of KA's runtime to EM's for each category. As discussed earlier, EM is asymptotically faster than KA, and we see this borne out in practice even for the moderate values of N and n that occur in our registries dataset: on average, EM is 2.1 times faster than KA.
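The greedy approximation of [28] used to produce Figure 2a can be sketched as follows (our sketch, operating directly on a DPP kernel L). At each step it adds the item that most increases log det(L_Y):

```python
import numpy as np

def greedy_map(L, k):
    """Greedy approximation to the highest-probability size-k set under a
    DPP with kernel L: repeatedly add the item maximizing log det(L_Y)."""
    N = L.shape[0]
    Y = []
    for _ in range(k):
        best_item, best_val = None, -np.inf
        for i in range(N):
            if i in Y:
                continue
            idx = np.ix_(Y + [i], Y + [i])
            sign, logdet = np.linalg.slogdet(L[idx])
            val = logdet if sign > 0 else -np.inf
            if val > best_val:
                best_item, best_val = i, val
        Y.append(best_item)
    return Y
```

On a toy kernel with two near-duplicate items and one distinct item, the greedy selection avoids picking the near-duplicates together, mirroring the video-monitor example above.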
5 Conclusion

We have explored learning DPPs in a setting where the kernel K is not assumed to have fixed values or a restrictive parametric form. By exploiting K's eigendecomposition, we were able to develop a novel EM learning algorithm. On a product recommendation task, we have shown EM to be faster and more robust than the naive approach of maximizing likelihood by projected gradient. In other applications for which modeling negative interactions between items is important, we anticipate that EM will similarly have a significant advantage.

Acknowledgments

This work was supported in part by ONR Grant N00014-10-1-0746.

References

[1] A. Kulesza. Learning with Determinantal Point Processes. PhD thesis, University of Pennsylvania, 2012.
[2] A. Kulesza and B. Taskar. Learning Determinantal Point Processes. In Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
[3] A. Kulesza and B. Taskar. k-DPPs: Fixed-Size Determinantal Point Processes. In International Conference on Machine Learning (ICML), 2011.
[4] H. Lin and J. Bilmes. Learning Mixtures of Submodular Shells with Application to Document Summarization. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[5] A. Kulesza and B. Taskar. Determinantal Point Processes for Machine Learning. Foundations and Trends in Machine Learning, 5(2-3), 2012.
[6] A. Krause, A. Singh, and C. Guestrin. Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms, and Empirical Studies. Journal of Machine Learning Research (JMLR), 9:235-284, 2008.
[7] A. Krause and C. Guestrin. Near-Optimal Non-Myopic Value of Information in Graphical Models. In Conference on Uncertainty in Artificial Intelligence (UAI), 2005.
[8] R. Affandi, E. Fox, R. Adams, and B. Taskar. Learning the Parameters of Determinantal Point Process Kernels. In International Conference on Machine Learning (ICML), 2014.
[9] S. Dughmi, T. Roughgarden, and M. Sundararajan. Revenue Submodularity.
In Electronic Commerce, 2009.
[10] O. Macchi. The Coincidence Approach to Stochastic Point Processes. Advances in Applied Probability, 7(1), 1975.
[11] J. Snoek, R. Zemel, and R. Adams. A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data. In NIPS, 2013.
[12] B. Kang. Fast Determinantal Point Process Sampling with Application to Clustering. In NIPS, 2013.
[13] R. Affandi, E. Fox, and B. Taskar. Approximate Inference in Continuous Determinantal Point Processes. In NIPS, 2013.
[14] A. Shah and Z. Ghahramani. Determinantal Clustering Process: A Nonparametric Bayesian Approach to Kernel Based Semi-Supervised Clustering. In Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
[15] R. Affandi, A. Kulesza, E. Fox, and B. Taskar. Nyström Approximation for Large-Scale Determinantal Processes. In Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[16] J. Gillenwater, A. Kulesza, and B. Taskar. Near-Optimal MAP Inference for Determinantal Point Processes. In NIPS, 2012.
[17] J. Zou and R. Adams. Priors for Diversity in Generative Latent Variable Models. In NIPS, 2013.
[18] R. Affandi, A. Kulesza, and E. Fox. Markov Determinantal Point Processes. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[19] J. Gillenwater, A. Kulesza, and B. Taskar. Discovering Diverse and Salient Threads in Document Collections. In Empirical Methods in Natural Language Processing (EMNLP), 2012.
[20] A. Kulesza and B. Taskar. Structured Determinantal Point Processes. In NIPS, 2010.
[21] J. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal Processes and Independence. Probability Surveys, 3, 2006.
[22] K. Petersen and M. Pedersen. The Matrix Cookbook. Technical report, Technical University of Denmark, 2012.
[23] E. Levitin and B. Polyak. Constrained Minimization Methods. USSR Computational Mathematics and Mathematical Physics, 6(5):1-50, 1966.
[24] D. Henrion and J. Malick.
Projection Methods for Conic Feasibility Problems. Optimization Methods and Software, 26(1):23-46, 2011.
[25] R. Neal and G. Hinton. A New View of the EM Algorithm that Justifies Incremental, Sparse and Other Variants. Learning in Graphical Models, 1998.
[26] A. Edelman, T. Arias, and S. Smith. The Geometry of Algorithms with Orthogonality Constraints. SIAM Journal on Matrix Analysis and Applications (SIMAX), 1998.
[27] A. James. Distributions of Matrix Variates and Latent Roots Derived from Normal Samples. Annals of Mathematical Statistics, 35(2):475-501, 1964.
[28] G. Nemhauser, L. Wolsey, and M. Fisher. An Analysis of Approximations for Maximizing Submodular Set Functions I. Mathematical Programming, 14(1), 1978.
Estimation with Norm Regularization

Arindam Banerjee, Sheng Chen, Farideh Fazayeli, Vidyashankar Sivakumar
Department of Computer Science & Engineering, University of Minnesota, Twin Cities
{banerjee,shengc,farideh,sivakuma}@cs.umn.edu

Abstract

Analysis of non-asymptotic estimation error and structured statistical recovery based on norm regularized regression, such as Lasso, needs to consider four aspects: the norm, the loss function, the design matrix, and the noise model. This paper presents generalizations of such estimation error analysis on all four aspects. We characterize the restricted error set, establish relations between error sets for the constrained and regularized problems, and present an estimation error bound applicable to any norm. Precise characterizations of the bound are presented for a variety of noise models, design matrices, including sub-Gaussian, anisotropic, and dependent samples, and loss functions, including least squares and generalized linear models. Gaussian width, a geometric measure of size of sets, and associated tools play a key role in our generalized analysis.

1 Introduction

Over the past decade, progress has been made in developing non-asymptotic bounds on the estimation error of structured parameters based on norm regularized regression. Such estimators are usually of the form [16, 9, 3]:

θ̂_λn = argmin_{θ∈R^p} L(θ; Z_n) + λ_n R(θ),   (1)

where R(θ) is a suitable norm, L(·) is a suitable loss function, Z_n = {(y_i, X_i)}_{i=1}^n with y_i ∈ R, X_i ∈ R^p is the training set, and λ_n > 0 is a regularization parameter. The optimal parameter θ* is often assumed to be 'structured', usually characterized as having a small value of some norm R(·). Since θ̂_λn is an estimate of the optimal structure θ*, the focus has been on bounding a suitable function of the error vector ∆̂_n = θ̂_λn − θ*, e.g., the L2 norm ‖∆̂_n‖₂.
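To make estimator (1) concrete, here is a minimal proximal-gradient (ISTA) sketch for its Lasso instance (squared loss with R the ℓ1 norm). This is illustrative code, not from the paper; the step size is set to the inverse Lipschitz constant of the gradient:

```python
import numpy as np

def ista_lasso(X, y, lam, steps=500):
    """Solve argmin_theta (1/2n)||y - X theta||_2^2 + lam * ||theta||_1
    by proximal gradient descent (ISTA)."""
    n, p = X.shape
    theta = np.zeros(p)
    t = n / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / n
        z = theta - t * grad
        theta = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-threshold
    return theta
```

On synthetic sparse data one can then report the quantity the paper bounds, ‖∆̂_n‖₂ = ‖θ̂_λn − θ*‖₂.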
To understand the state of the art on non-asymptotic bounds on the estimation error for norm-regularized regression, four aspects of (1) need to be considered: (i) the norm R(θ), (ii) properties of the design matrix X ∈ R^{n×p}, (iii) the loss function L(·), and (iv) the noise model, typically in terms of ω = y − E[y|x]. Most of the literature has focused on a linear model, y = Xθ + ω, and a squared-loss function:

L(θ; Z_n) = (1/n)‖y − Xθ‖₂² = (1/n) Σ_{i=1}^n (y_i − ⟨θ, X_i⟩)².

Early work on such estimators focused on the L1 norm [21, 20, 8], and led to sufficient conditions on the design matrix X, including the restricted isometry property (RIP) and restricted eigenvalue (RE) conditions [2, 9, 13, 3]. While much of the development has focused on isotropic Gaussian design matrices, recent work has extended the analysis for the L1 norm to correlated Gaussian designs [13] as well as anisotropic sub-Gaussian design matrices [14]. Building on such development, [9] presents a unified framework for the case of decomposable norms and also considers generalized linear models (GLMs) for certain norms such as L1. Two key insights are offered in [9]: first, for suitably large λ_n, the error vector ∆̂_n lies in a restricted set, a cone or a star; and second, on the restricted error set, the loss function needs to satisfy restricted strong convexity (RSC), a generalization of the RE condition, for the analysis to work out.

For isotropic Gaussian design matrices, additional progress has been made. [4] considers a constrained estimation formulation for all atomic norms, where the gain condition, equivalent to the RE condition, uses Gordon's inequality [5, 7] and is succinctly represented in terms of the Gaussian width of the intersection of the cone of the error set and a unit ball/sphere. [11] considers three related formulations for generalized Lasso problems, and establishes recovery guarantees based on Gordon's inequality and quantities related to the Gaussian width.
Sharper analysis for recovery has been considered in [1], yielding a precise characterization of phase-transition behavior using quantities related to the Gaussian width. [12] considers a linear programming estimator in a 1-bit compressed sensing setting and, interestingly, the concept of Gaussian width shows up in the analysis. In spite of these advances, most of these results are restricted to isotropic Gaussian design matrices.

In this paper, we consider structured estimation problems with norm regularization, which substantially generalize existing results on all four pertinent aspects: the norm, the design matrix, the loss, and the noise model. The analysis we present applies to all norms. We characterize the structure of the error set for all norms, develop precise relationships between the error sets of the regularized and constrained versions [2], and establish an estimation error bound in Section 2. The bound depends on the regularization parameter λ_n and a certain RSC condition constant κ. In Section 3, for both Gaussian and sub-Gaussian noise ω, we develop suitable characterizations for λ_n in terms of the Gaussian width of the unit norm ball Ω_R = {u | R(u) ≤ 1}. In Section 4, we characterize the RSC condition for any norm, considering two families of design matrices X ∈ R^{n×p}, Gaussian and sub-Gaussian, and three settings for each family: independent isotropic designs; independent anisotropic designs, where the rows are correlated as Σ_{p×p}; and dependent isotropic designs, where the rows are isotropic but the columns are correlated as Γ_{n×n}, implying dependent samples. In Section 5, we show how to extend the analysis to generalized linear models (GLMs) with sub-Gaussian design matrices and any norm. Our analysis techniques are simple and largely uniform across the different types of noise and design matrices. Parts of our analysis are geometric, where Gaussian widths, as a measure of the size of suitable sets, and associated tools play a key role [4, 7].
We also use standard covering arguments, use the Sudakov-Dudley inequality to switch from covering numbers to Gaussian widths [7], and use generic chaining to upper bound 'sub-Gaussian widths' with Gaussian widths [15].

2 Restricted Error Set and Recovery Guarantees

In this section, we give a characterization of the restricted error set E_r in which the error vector ∆̂_n lives, establish clear relationships between the error sets for the regularized and constrained problems, and finally establish upper bounds on the estimation error. The error bound is deterministic, but involves quantities depending on θ*, X, and ω, for which we develop high-probability bounds in Sections 3, 4, and 5.

2.1 The Restricted Error Set and the Error Cone

We start with a characterization of the restricted error set E_r to which ∆̂_n will belong.

Lemma 1 For any β > 1, assuming

λ_n ≥ β R*(∇L(θ*; Z_n)),   (2)

the error vector ∆̂_n = θ̂_λn − θ* belongs to the set

E_r = E_r(θ*, β) = {∆ ∈ R^p : R(θ* + ∆) ≤ R(θ*) + (1/β) R(∆)}.   (3)

The restricted error set E_r need not be convex for general norms. Interestingly, for β = 1, the inequality in (3) is just the triangle inequality, and is satisfied by all ∆. Note that β > 1 restricts the set of ∆ which satisfy the inequality, yielding the restricted error set. In particular, ∆ cannot go in the direction of θ*, i.e., ∆ ≠ αθ* for any α > 0. Further, note that the condition in (2) is similar to that in [9] for β = 2, but the above characterization holds for any norm, not just decomposable norms [9].

While E_r need not be a convex set, we establish a relationship between E_r and C_c, the cone for the constrained problem [4], where

C_c = C_c(θ*) = cone{∆ ∈ R^p | R(θ* + ∆) ≤ R(θ*)}.   (4)

Theorem 1 Let A_r = E_r ∩ ρB_2^p and A_c = C_c ∩ ρB_2^p, where B_2^p = {u | ‖u‖₂ ≤ 1} is the unit ball of the ℓ2 norm and ρ > 0 is any suitable radius.
Then, for any β > 1, we have

w(A_r) ≤ (1 + (2/(β − 1)) · (‖θ*‖₂/ρ)) w(A_c),   (5)

where w(A) denotes the Gaussian width of any set A, given by w(A) = E_g[sup_{a∈A} ⟨a, g⟩], where g is an isotropic Gaussian random vector.

Thus, the Gaussian widths of the error sets of the regularized and constrained problems are closely related. In particular, for ‖θ*‖₂ = 1, with ρ = 1 and β = 2, we have w(A_r) ≤ 3w(A_c). Related observations have been made for the special case of the L1 norm [2], although past work did not provide an explicit characterization in terms of Gaussian widths. The result also suggests that it is possible to move between the error analysis of the regularized and the constrained versions of the estimation problem.

2.2 Recovery Guarantees

In order to establish recovery guarantees, we start by assuming that restricted strong convexity (RSC) is satisfied by the loss function in C_r = cone(E_r), i.e., for any ∆ ∈ C_r, there exists a suitable constant κ so that

δL(∆, θ*) ≜ L(θ* + ∆) − L(θ*) − ⟨∇L(θ*), ∆⟩ ≥ κ‖∆‖₂².   (6)

In Sections 4 and 5, we establish precise forms of the RSC condition for a wide variety of design matrices and loss functions. In order to establish recovery guarantees, we focus on the quantity

F(∆) = L(θ* + ∆) − L(θ*) + λ_n(R(θ* + ∆) − R(θ*)).   (7)

Since θ̂_λn = θ* + ∆̂_n is the estimated parameter, i.e., θ̂_λn is the minimum of the objective, we clearly have F(∆̂_n) ≤ 0, which implies a bound on ‖∆̂_n‖₂. Unlike previous results, the bound can be established without making any additional assumptions on the norm R(θ). We start with the following result, which expresses the upper bound on ‖∆̂_n‖₂ in terms of the gradient of the objective at θ*.

Lemma 2 Assume that the RSC condition is satisfied in C_r by the loss L(·) with parameter κ. With ∆̂_n = θ̂_λn − θ*, for any norm R(·), we have

‖∆̂_n‖₂ ≤ (1/κ)‖∇L(θ*) + λ_n∇R(θ*)‖₂,   (8)

where ∇R(·) is any sub-gradient of the norm R(·).

Note that the right-hand side is simply the L2 norm of the gradient of the objective evaluated at θ*.
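The Gaussian width w(A) = E_g[sup_{a∈A} ⟨a, g⟩] is easy to estimate by Monte Carlo whenever the support function sup_{a∈A} ⟨a, g⟩ is computable. A small sketch (ours, for illustration) for two standard sets:

```python
import numpy as np

def gaussian_width(support_fn, p, trials=2000, seed=0):
    """Monte Carlo estimate of w(A) = E sup_{a in A} <a, g>;
    support_fn(g) must return sup_{a in A} <a, g> for one Gaussian g."""
    rng = np.random.default_rng(seed)
    return float(np.mean([support_fn(g)
                          for g in rng.standard_normal((trials, p))]))

p = 100
# A = unit l2 sphere: sup <a, g> = ||g||_2, so w(A) = E||g||_2, about sqrt(p)
w_sphere = gaussian_width(np.linalg.norm, p)
# A = unit l1 ball: sup <a, g> = ||g||_inf, so w(A) is about sqrt(2 log p)
w_l1ball = gaussian_width(lambda g: np.abs(g).max(), p)
```

For p = 100 the two estimates come out near 10 and near 2.7 respectively; the much smaller width of the ℓ1 ball is what produces the √(log p / n) regularization scale for the Lasso in Table 1.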
For the special case when θ̂_λn = θ*, the gradient of the objective is zero, implying correctly that ‖∆̂_n‖₂ = 0. While the above result provides useful insights about the bound on ‖∆̂_n‖₂, the quantities on the right-hand side depend on θ*, which is unknown. We present another form of the result in terms of quantities such as λ_n, κ, and the norm compatibility constant Ψ(C_r) = sup_{u∈C_r} R(u)/‖u‖₂, which are often easier to compute or bound.

Theorem 2 Assume that the RSC condition is satisfied in C_r by the loss L(·) with parameter κ. With ∆̂_n = θ̂_λn − θ*, for any norm R(·), we have

‖∆̂_n‖₂ ≤ ((1 + β)/β) · (λ_n/κ) · Ψ(C_r).   (9)

The above result is deterministic, but contains λ_n and κ. In Section 3, we give precise characterizations of λ_n, which needs to satisfy (2). In Sections 4 and 5, we characterize the RSC condition constant κ for different losses and a variety of design matrices.

3 Bounds on the Regularization Parameter

Recall that the parameter λ_n needs to satisfy the inequality

λ_n ≥ β R*(∇L(θ*; Z_n)).   (10)

The right-hand side of the inequality has two issues: it depends on θ*, and it is a random variable, since it depends on Z_n. In this section, we characterize E[R*(∇L(θ*; Z_n))] in terms of the Gaussian width of the unit norm ball Ω_R = {u : R(u) ≤ 1}, and also discuss large-deviation bounds around the expectation. For ease of exposition, we present results for the case of squared loss, i.e., L(θ*; Z_n) = (1/2n)‖y − Xθ*‖₂², with the linear model y = Xθ* + ω, where ω can be Gaussian or sub-Gaussian noise. For this setting, ∇L(θ*; Z_n) = (1/n)X⊤(y − Xθ*) = (1/n)X⊤ω. The analysis can be extended to GLMs using the techniques discussed in Section 5.

Gaussian Designs: First, we consider a Gaussian design X, where the x_ij ∼ N(0, 1) are independent, and ω is elementwise independent Gaussian or sub-Gaussian noise.

Theorem 3 Let Ω_R = {u : R(u) ≤ 1}. Then, for Gaussian design X and Gaussian or sub-Gaussian noise ω, for a suitable constant η₀ > 0, we have

E[R*(∇L(θ*; Z_n))] ≤ η₀ w(Ω_R)/√n.
(11) Further, for any τ > 0, for suitable constants η₁, η₂ > 0, with probability at least 1 − η₁ exp(−η₂τ²),

R*(∇L(θ*; Z_n)) ≤ η₀ w(Ω_R)/√n + τ/√n.   (12)

For an anisotropic Gaussian design, i.e., when the rows of X ∈ R^{n×p} have covariance Σ_{p×p}, the above result continues to hold with w(Ω_R) replaced by √(Λ_max(Σ)) w(Ω_R), where Λ_max(Σ) denotes the operator norm (largest eigenvalue). For a correlated isotropic design, i.e., when the columns of X are correlated with covariance Γ_{n×n}, the result continues to hold with w(Ω_R) replaced by √(Λ_max(Γ)) w(Ω_R).

Sub-Gaussian Designs: Recall that for a sub-Gaussian variable x, the sub-Gaussian norm is |||x|||_{ψ₂} = sup_{p≥1} p^{−1/2}(E[|x|^p])^{1/p} [18]. Now, we consider a sub-Gaussian design X, where |||x_ij|||_{ψ₂} ≤ k and the x_ij are i.i.d., and ω is elementwise independent Gaussian or sub-Gaussian noise.

Theorem 4 Let Ω_R = {u : R(u) ≤ 1}. Then, for sub-Gaussian design X and Gaussian or sub-Gaussian noise ω, for a suitable constant η₀ > 0, we have

E[R*(∇L(θ*; Z_n))] ≤ η₀ w(Ω_R)/√n.   (13)

Interestingly, the analysis for the result above involves a 'sub-Gaussian width', which can be upper bounded by a constant times the Gaussian width using generic chaining [15]. Further, one can get Gaussian-like exponential concentration around the expectation for important classes of sub-Gaussian random variables, including bounded random variables [6], and when X_u = ⟨h, u⟩, for any unit vector u, is such that its Malliavin derivative has almost surely bounded norm in L²[0, 1], i.e., ∫₀¹ |D_r X_u|² dr ≤ η [19].

Next, we provide a mechanism for bounding the Gaussian width w(Ω_R) of the unit norm ball in terms of the Gaussian width of a suitable cone, obtained by shifting or translating the norm ball. In particular, the result involves taking any point on the boundary of the unit norm ball, treating it as the origin, and constructing a cone using the norm ball.
Since such a construction can be done with any point on the boundary, the tightest bound is obtained by taking the infimum over all points on the boundary. The motivation for upper bounding the Gaussian width w(Ω_R) of the unit norm ball by the Gaussian width of such a cone is that considerable advances have been made in recent years in upper bounding the Gaussian widths of such cones.

Lemma 3 Let Ω_R = {u : R(u) ≤ 1} be the unit norm ball and Θ_R = {u : R(u) = 1} its boundary. For any θ̃ ∈ Θ_R, let ρ(θ̃) = sup_{θ:R(θ)≤1} ‖θ − θ̃‖₂ be the diameter of Ω_R measured with respect to θ̃, and let G(θ̃) = cone(Ω_R − θ̃) ∩ ρ(θ̃)B_2^p, i.e., the cone of (Ω_R − θ̃) intersected with the ball of radius ρ(θ̃). Then

w(Ω_R) ≤ inf_{θ̃∈Θ_R} w(G(θ̃)).   (14)

4 Least Squares Models: Restricted Eigenvalue Conditions

When the loss function is the squared loss, i.e., L(θ; Z_n) = (1/2n)‖y − Xθ‖₂², the RSC condition (6) becomes equivalent to the restricted eigenvalue (RE) condition [2, 9], i.e., (1/n)‖X∆‖₂² ≥ κ‖∆‖₂², or equivalently ‖X∆‖₂/‖∆‖₂ ≥ √(κn), for any ∆ in the error cone C_r. Since the absolute magnitude of ‖∆‖₂ plays no role in the RE condition, without loss of generality we work with unit vectors u ∈ A = C_r ∩ S^{p−1}, where S^{p−1} is the unit sphere. In this section, we establish RE conditions for a variety of Gaussian and sub-Gaussian design matrices, with isotropic, anisotropic, or dependent rows, i.e., when the samples (rows of X) are correlated. Results for certain types of design matrices and certain norms, especially the L1 norm, have appeared in the literature [2, 13, 14]. Our analysis considers a wider variety of design matrices and establishes RSC conditions for any A ⊆ S^{p−1}, thus corresponding to any norm. Interestingly, the Gaussian width w(A) of A shows up in all the bounds, as a geometric measure of the size of the set A, even for sub-Gaussian design matrices. In fact, all existing RE results implicitly contain the width term, but in a form specific to the chosen norm [13, 14].
The analysis of atomic norms in [4] has the w(A) term explicitly, but that analysis relies on Gordon's inequality [5, 7], which is applicable only to isotropic Gaussian design matrices. The proof technique we use is simple, a standard covering argument, and is largely the same across all the cases considered. A unique aspect of our analysis, used in all the proofs, is a way to go from covering numbers of A to the Gaussian width of A using the Sudakov-Dudley inequality [7]. Our general techniques are in sharp contrast to much of the existing literature on RE conditions, which commonly uses specialized tools such as Gaussian comparison principles [13, 9] and/or specialized analysis geared to a particular norm such as L1 [14].

4.1 Restricted Eigenvalue Conditions: Gaussian Designs

In this section, we focus on the case of Gaussian design matrices X ∈ R^{n×p} and consider three settings: (i) independent isotropic, where the entries are elementwise independent; (ii) independent anisotropic, where the rows X_i are independent but each row has covariance E[X_i X_i⊤] = Σ ∈ R^{p×p}; and (iii) dependent isotropic, where the rows are isotropic but the columns X_j are correlated with E[X_j X_j⊤] = Γ ∈ R^{n×n}. For convenience, we assume E[x_ij²] = 1, noting that the analysis easily extends to the general case E[x_ij²] = σ².

Independent Isotropic Gaussian (IIG) Designs: The IIG setting has been extensively studied in the literature [3, 9]. As discussed in the recent work on atomic norms [4], one can use Gordon's inequality [5, 7] to get RE conditions for the IIG setting.
Our goal in this section is two-fold: first, we present the RE conditions obtained using our simple proof technique and show that they are equivalent, up to constants, to the RE condition obtained using Gordon's inequality, an arguably heavy-duty technique applicable only to the IIG setting; and second, we go over some facets of how we present the results, which will apply to all subsequent RE-style results and give a way to plug κ into the estimation error bound in (9).

Theorem 5 Let the design matrix X ∈ R^{n×p} be elementwise independent and normal, i.e., x_ij ∼ N(0, 1). Then, for any A ⊆ S^{p−1}, any n ≥ 2, and any τ > 0, with probability at least 1 − η₁ exp(−η₂τ²), we have

inf_{u∈A} ‖Xu‖₂ ≥ (1/2)√n − η₀ w(A) − τ,   (15)

where η₀, η₁, η₂ > 0 are absolute constants.

We consider the equivalent result one could obtain by directly using Gordon's inequality [5, 7]:

Theorem 6 Let the design matrix X be elementwise independent and normal, i.e., x_ij ∼ N(0, 1). Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−τ²/2), we have

inf_{u∈A} ‖Xu‖₂ ≥ γ_n − w(A) − τ,   (16)

where γ_n = E[‖h‖₂] > n/√(n+1) is the expected length of a standard Gaussian random vector h in R^n.
For example, one can consider the cone Cr to be intersecting with a sphere ρSp−1 of a different radius ρ, to give Aρ = Cr ∩ρSp−1 so that u ∈Aρ has ∥u∥2 = ρ. For simplicity, let A = A1, i.e., corresponding to ρ = 1. Then, a straightforward extension yields infu∈Aρ ∥Xu∥2 ≥( 1 2 √n −η0w(A) −τ)∥u∥2, with probability at least (1 −η1 exp(−η2τ 2)), since ∥Xu∥2 = ∥X u ∥u∥2 ∥2∥u∥2 and w(A∥u∥2) = ∥u∥2w(A) [4]. Such a scale independence is in fact necessary for the error bound analysis in Section 2. Finally, note that the leading constant 1 2 was a consequence of our choice of ϵ = 1 4 for the ϵ-net covering of A in the proof. One can get other constants, less than 1, with different choices of ϵ, and the constants η0, η1, η2 will change based on this choice. Independent Anisotropic Gaussian (IAG) Designs: We consider a setting where the rows Xi of the design matrix are independent, but each row is sampled from an anisotropic Gaussian distribution, i.e., Xi ∼N(0, Σp×p) where Xi ∈Rp. The setting has been considered in the literature [13] for the special case of L1 norms, and sharp results have been established using Gaussian comparison techniques [7]. We show that equivalent results can be obtained by our simple technique, which does not rely on Gaussian comparisons [7, 9]. Theorem 7 Let the design matrix X be row wise independent and each row Xi ∼N(0, Σp×p). Then, for any A ⊆Sp−1 and any τ > 0, with probability at least 1 −η1 exp(−η2τ 2), we have inf u∈A ∥Xu∥2 ≥1 2 √ν√n −η0 p Λmax(Σ) w(A) −τ , (18) where √ν = infu∈A ∥Σ1/2u∥2, p Λmax(Σ) denotes the largest eigenvalue of Σ1/2 and η0, η1, η2 > 0 are constants. A comparison with the results of [13] is instructive. The leading term √ν appears in [13] as well—we have simply considered infu∈A on both sides, and the result in [13] is for any u with the ∥Σ1/2u∥2 term. The second term in [13] depends on the largest entry in the diagonal of Σ, √log p, and ∥u∥1. These terms are a consequence of the special case analysis for L1 norm. 
In contrast, we consider the general case and simply get the scaled Gaussian width term √(Λ_max(Σ)) w(A).

Dependent Isotropic Gaussian (DIG) Designs: We now consider a setting where the rows of the design matrix X̃ are isotropic Gaussians, but the columns X̃_j are correlated with E[X̃_j X̃_j⊤] = Γ ∈ R^{n×n}. Interestingly, correlation over the columns makes the samples dependent, a scenario which has not yet been widely studied in the literature [22, 10]. We show that our simple technique continues to work in this scenario and gives a rather intuitive result.

Theorem 8 Let X̃ ∈ R^{n×p} be a matrix whose rows X̃_i are isotropic Gaussian random vectors in R^p and whose columns X̃_j are correlated with E[X̃_j X̃_j⊤] = Γ. Then, for any set A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − η₁ exp(−η₂τ²), we have

inf_{u∈A} ‖X̃u‖₂ ≥ (3/4)√(Tr(Γ)) − √(Λ_max(Γ)) (η₀ w(A) + 5/2) − τ,   (19)

where η₀, η₁, η₂ > 0 are constants.

Note that under the assumption E[x_ij²] = 1, Γ is a correlation matrix, implying Tr(Γ) = n and making the sample-size dependence explicit. Intuitively, due to sample correlations, n samples are effectively equivalent to Tr(Γ)/Λ_max(Γ) = n/Λ_max(Γ) independent samples.

4.2 Restricted Eigenvalue Conditions: Sub-Gaussian Designs

In this section, we focus on the case of sub-Gaussian design matrices X ∈ R^{n×p} and consider three settings: (i) independent isotropic, where the rows are independent and isotropic; (ii) independent anisotropic, where the rows X_i are independent but each row has covariance E[X_i X_i⊤] = Σ_{p×p}; and (iii) dependent isotropic, where the rows are isotropic and the columns X_j are correlated with E[X_j X_j⊤] = Γ_{n×n}. For convenience, we assume E[x_ij²] = 1 and a sub-Gaussian norm |||x_ij|||_{ψ₂} ≤ k [18]. In recent work, [17] also considers generalizations of RE conditions to sub-Gaussian designs, although our proof techniques are different.
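RE-style bounds like Theorems 5-8 can be probed numerically. The sketch below (ours) takes random s-sparse unit vectors u as a crude Monte Carlo stand-in for the infimum over A = C_r ∩ S^{p−1}, and checks that ‖Xu‖₂ stays well above a constant fraction of √n even when p ≫ n:

```python
import numpy as np

def min_restricted_norm(X, s, trials=2000, seed=0):
    """Monte Carlo lower envelope of ||Xu||_2 over random s-sparse unit
    vectors u; a sampled (not exact) stand-in for the inf over the cone."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best = np.inf
    for _ in range(trials):
        support = rng.choice(p, size=s, replace=False)
        v = rng.standard_normal(s)
        u = np.zeros(p)
        u[support] = v / np.linalg.norm(v)  # s-sparse unit vector
        best = min(best, np.linalg.norm(X @ u))
    return best

n, p, s = 200, 500, 5
X = np.random.default_rng(1).standard_normal((n, p))
lo = min_restricted_norm(X, s)  # stays well above 0.5 * sqrt(n) here
```

Since random directions are not adversarial, this gives an upper bound on the true infimum, but it illustrates the √n − η₀w(A) scaling: for small s, w(A) grows like √(s log(p/s)), which is tiny relative to √n.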
Independent Isotropic Sub-Gaussian Designs: We start with the setting where the sub-Gaussian design matrix X ∈ R^{n×p} has independent rows Xi and each row is isotropic.

Theorem 9 Let X ∈ R^{n×p} be a design matrix whose rows Xi are independent isotropic sub-Gaussian random vectors in R^p. Then, for any set A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−η1 τ²), we have

inf_{u∈A} ∥Xu∥2 ≥ √n − η0 w(A) − τ , (20)

where η0, η1 > 0 are constants which depend only on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.

Independent Anisotropic Sub-Gaussian Designs: We consider a setting where the rows Xi of the design matrix are independent, but each row is sampled from an anisotropic sub-Gaussian distribution, i.e., |||x_{ij}|||_{ψ2} = k and E[Xi Xi^T] = Σ_{p×p}.

Theorem 10 Let the sub-Gaussian design matrix X be row-wise independent, with each row satisfying E[Xi Xi^T] = Σ ∈ R^{p×p}. Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−η1 τ²), we have

inf_{u∈A} ∥Xu∥2 ≥ √ν √n − η0 √Λmax(Σ) w(A) − τ , (21)

where √ν = inf_{u∈A} ∥Σ^{1/2}u∥2, √Λmax(Σ) denotes the largest eigenvalue of Σ^{1/2}, and η0, η1 > 0 are constants which depend on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.

Note that [14] establishes RE conditions for anisotropic sub-Gaussian designs for the special case of the L1 norm. In contrast, our results are general and stated in terms of the Gaussian width w(A).

Dependent Isotropic Sub-Gaussian Designs: We consider the setting where the sub-Gaussian design matrix X̃ has isotropic sub-Gaussian rows, but the columns X̃j are correlated with E[X̃j X̃j^T] = Γ, implying dependent samples.

Theorem 11 Let X̃ ∈ R^{n×p} be a sub-Gaussian design matrix with isotropic rows and correlated columns with E[X̃j X̃j^T] = Γ ∈ R^{n×n}. Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−η1 τ²), we have

inf_{u∈A} ∥X̃u∥2 ≥ (3/4) √Tr(Γ) − η0 √Λmax(Γ) w(A) − τ , (22)

where η0, η1 are constants which depend on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.
5 Generalized Linear Models: Restricted Strong Convexity

In this section, we consider the setting where the conditional probability distribution of y|x follows an exponential family distribution: p(y|x; θ) = exp{y⟨θ, x⟩ − ψ(⟨θ, x⟩)}, where ψ(·) is the log-partition function. Generalized linear models take the negative likelihood of such conditional distributions as the loss function: L(θ; Zn) = (1/n) Σ_{i=1}^n (ψ(⟨θ, Xi⟩) − ⟨θ, yi Xi⟩). Least squares regression and logistic regression are popular special cases of GLMs. Since ∇ψ(⟨θ, x⟩) = E[y|x], we have ∇L(θ∗; Zn) = (1/n) X^T ω, where ωi = ∇ψ(⟨θ∗, Xi⟩) − yi = E[y|Xi] − yi plays the role of noise. Hence, the analysis in Section 3 can be applied assuming ω is Gaussian or sub-Gaussian. To obtain RSC conditions for GLMs, first note that

δL(θ∗, ∆; Zn) = (1/n) Σ_{i=1}^n ∇²ψ(⟨θ∗, Xi⟩ + γi⟨∆, Xi⟩) ⟨∆, Xi⟩² , (23)

where γi ∈ [0, 1], by the mean value theorem.

Table 1: A summary of the various quantities for the L1 and L∞ norms, with all values correct up to constants.

R(u)     | λn := c1 w(ΩR)/√n | κ := [max{1 − c2 w(A)/√n, 0}]² | Ψ(Cr) | ∥∆̂n∥2 := c3 Ψ(Cr) λn / κ
ℓ1 norm  | O(√(log p / n))   | O(1)                           | √s    | O(√(s log p / n))
ℓ∞ norm  | O(√(p / 2n))      | O(1)                           | 1     | O(√(p / 2n))

Since ψ is of Legendre type, the second derivative ∇²ψ(·) is always positive. Since the RSC condition relies on a non-trivial lower bound for the above quantity, the analysis considers a suitable compact set on which ℓ = ℓψ(T) = min_{|a|≤2T} ∇²ψ(a) is bounded away from zero. Outside this compact set, we only use ∇²ψ(·) > 0. Then,

δL(θ∗, ∆; Zn) ≥ (ℓ/n) Σ_{i=1}^n ⟨Xi, ∆⟩² I[|⟨Xi, θ∗⟩| < T] I[|⟨Xi, ∆⟩| < T] . (24)

We give a characterization of the RSC condition for independent isotropic sub-Gaussian design matrices X ∈ R^{n×p}. The analysis can be suitably generalized to the other design matrices considered in Section 4 using the same techniques. As before, we denote ∆ as u, and consider u ∈ A ⊆ S^{p−1} so that ∥u∥2 = 1. Further, we assume ∥θ∗∥2 ≤ c1 for some constant c1.
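As a sanity check on this GLM setup (our own illustrative instance, using logistic regression with ψ(t) = log(1 + e^t)), the identity ∇L(θ∗; Zn) = (1/n) X^T ω with ωi = ∇ψ(⟨θ∗, Xi⟩) − yi can be verified numerically against finite differences of the loss:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.standard_normal((n, p))
theta_star = rng.standard_normal(p) / np.sqrt(p)

# Logistic GLM: psi(t) = log(1 + e^t), so grad psi(t) = sigmoid(t) = E[y|x]
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
y = rng.binomial(1, sigmoid(X @ theta_star)).astype(float)

# Noise vector omega_i = grad psi(<theta*, X_i>) - y_i = E[y|X_i] - y_i
omega = sigmoid(X @ theta_star) - y
grad = X.T @ omega / n            # claimed value of grad L(theta*; Z^n)

def loss(theta):
    # L(theta; Z^n) = (1/n) sum_i [ psi(<theta, X_i>) - y_i <theta, X_i> ]
    t = X @ theta
    return np.mean(np.logaddexp(0.0, t) - y * t)

# Central finite differences in each coordinate
eps = 1e-6
fd = np.array([(loss(theta_star + eps * np.eye(p)[j])
                - loss(theta_star - eps * np.eye(p)[j])) / (2 * eps)
               for j in range(p)])
print(np.max(np.abs(grad - fd)))   # agreement up to finite-difference error
```

The two gradients agree to numerical precision, confirming that the "noise" ω is exactly the residual between the conditional mean E[y|Xi] and the observation yi.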
Assuming X has sub-Gaussian entries with |||x_{ij}|||_{ψ2} ≤ k, ⟨Xi, θ∗⟩ and ⟨Xi, u⟩ are sub-Gaussian random variables with sub-Gaussian norm at most Ck. Let φ1 = φ1(T; u) = P{|⟨Xi, u⟩| > T} ≤ e · exp(−c2 T²/(C²k²)), and φ2 = φ2(T; θ∗) = P{|⟨Xi, θ∗⟩| > T} ≤ e · exp(−c2 T²/(C²k²)). The result we present is in terms of the constants ℓ = ℓψ(T), φ1 = φ1(T; u) and φ2 = φ2(T; θ∗) for any suitably chosen T.

Theorem 12 Let X ∈ R^{n×p} be a design matrix with independent isotropic sub-Gaussian rows. Then, for any set A ⊆ S^{p−1}, any α ∈ (0, 1), any τ > 0, and any n ≥ (2/(α²(1−φ1−φ2))) (c w²(A) + (c3 (1−φ1−φ2)⁵ c4 / (4k⁴)) (1−α) τ²) for suitable constants c3 and c4, with probability at least 1 − 3 exp(−η1 τ²), we have

inf_{u∈A} √(n δL(θ∗; u, X)) ≥ ℓ √π (√n − η0 w(A) − τ) , (25)

where π = (1 − α)(1 − φ1 − φ2), ℓ = ℓψ(T) = min_{|a| ≤ 2T+K} ∇²ψ(a), and the constants (η0, η1) depend on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.

The form of the result is closely related to the corresponding result for the RE condition on inf_{u∈A} ∥Xu∥2 in Section 4.2. Note that RSC analysis for GLMs was considered in [9] for specific norms, especially L1, whereas our analysis applies to any set A ⊆ S^{p−1}, and hence to any norm. Further, following the same argument structure as in Section 4.2, the analysis for GLMs can be extended to anisotropic and dependent design matrices.

6 Conclusions

The paper presents a general set of results and tools for characterizing non-asymptotic estimation error in norm-regularized regression problems. The analysis holds for any norm, and includes much of the existing literature focused on structured sparsity and related themes as special cases. The work can be viewed as a direct generalization of the results in [9], which presented related results for decomposable norms. Our analysis illustrates the important role that Gaussian widths, as a geometric measure of the size of suitable sets, play in such results. Further, the error sets of the regularized and constrained versions of such problems are shown to be closely related [2].
Going forward, it will be interesting to explore similar generalizations for the semi-parametric and non-parametric settings.

Acknowledgements: We thank the anonymous reviewers for helpful comments and suggestions on related work. We thank Sergey Bobkov, Snigdhansu Chatterjee, and Pradeep Ravikumar for discussions related to the paper. The research was supported by NSF grants IIS-1447566, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, and by NASA grant NNX12AQ39A.

References

[1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: A geometric theory of phase transitions in convex optimization. Inform. Inference, 3(3):224–294, 2013.
[2] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705–1732, 2009.
[3] P. Buhlmann and S. van de Geer. Statistics for High Dimensional Data: Methods, Theory and Applications. Springer Series in Statistics. Springer, 2011.
[4] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[5] Y. Gordon. On Milman's inequality and random subspaces which escape through a mesh in R^n. In Geometric Aspects of Functional Analysis, volume 1317 of Lecture Notes in Mathematics, pages 84–106. Springer, 1988.
[6] M. Ledoux. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs. American Mathematical Society, 2001.
[7] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 2013.
[8] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. The Annals of Statistics, 37(1):246–270, 2009.
[9] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of regularized M-estimators. Statistical Science, 27(4):538–557, December 2012.
[10] S. Negahban and M. J. Wainwright.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39(2):1069–1097, 2011.
[11] S. Oymak, C. Thrampoulidis, and B. Hassibi. The squared-error of generalized Lasso: A precise analysis. arXiv:1311.0830v2, 2013.
[12] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Transactions on Information Theory, 59(1):482–494, 2013.
[13] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241–2259, 2010.
[14] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434–3447, 2013.
[15] M. Talagrand. The Generic Chaining. Springer, 2005.
[16] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[17] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. In Sampling Theory, a Renaissance. (To appear), 2014.
[18] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, chapter 5, pages 210–268. Cambridge University Press, 2012.
[19] A. B. Vizcarra and F. G. Viens. Some applications of the Malliavin calculus to sub-Gaussian and non-sub-Gaussian random fields. In Seminar on Stochastic Analysis, Random Fields and Applications, Progress in Probability, volume 59, pages 363–396. Birkhauser, 2008.
[20] M. J. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55:2183–2202, 2009.
[21] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, November 2006.
[22] S. Zhou.
Gemini: Graph estimation with matrix variate normal instances. The Annals of Statistics, 42(2):532–562, 2014.
Recovery of Coherent Data via Low-Rank Dictionary Pursuit

Guangcan Liu, Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA. gcliu@rutgers.edu
Ping Li, Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA. pingli@rutgers.edu

Abstract

The recently established RPCA [4] method provides a convenient way to restore low-rank matrices from grossly corrupted observations. While elegant in theory and powerful in practice, RPCA is not an ultimate solution to the low-rank matrix recovery problem. Indeed, its performance may not be perfect even when the data are strictly low-rank. This is because RPCA ignores the clustering structures of the data, which are ubiquitous in applications. As the number of clusters grows, the coherence of the data keeps increasing, and accordingly, the recovery performance of RPCA degrades. We show that the challenges raised by coherent data (i.e., data with high coherence) can be alleviated by Low-Rank Representation (LRR) [13], provided that the dictionary in LRR is configured appropriately. More precisely, we mathematically prove that if the dictionary itself is low-rank then LRR is immune to the coherence parameters that increase with the underlying cluster number. This provides an elementary principle for dealing with coherent data and naturally leads to a practical algorithm for obtaining proper dictionaries in unsupervised environments. Experiments on randomly generated matrices and real motion sequences verify our claims. See the full paper at arXiv:1404.4032.

1 Introduction

Nowadays our data are often high-dimensional, massive, and full of gross errors (e.g., corruptions, outliers and missing measurements).
In the presence of gross errors, the classical Principal Component Analysis (PCA) method, which is probably the most widely used tool for data analysis and dimensionality reduction, becomes brittle: a single gross error can render the estimate produced by PCA arbitrarily far from the desired one. As a consequence, it is crucial to develop new statistical tools for robustifying PCA. A variety of methods have been proposed and explored in the literature over several decades, e.g., [2, 3, 4, 8, 9, 10, 11, 12, 24, 13, 16, 19, 25]. One of the most exciting methods is probably the so-called RPCA (Robust Principal Component Analysis) method [4], which was built upon the exploration of the following low-rank matrix recovery problem:

Problem 1 (Low-Rank Matrix Recovery) Suppose we have a data matrix X ∈ R^{m×n} and we know it can be decomposed as

X = L0 + S0, (1.1)

where L0 ∈ R^{m×n} is a low-rank matrix, each column of which is a data point drawn from some low-dimensional subspace, and S0 ∈ R^{m×n} is a sparse matrix supported on Ω ⊆ {1, ..., m} × {1, ..., n}. Except for these mild restrictions, both components are arbitrary. The rank of L0 is unknown, and the support set Ω (i.e., the locations of the nonzero entries of S0) and its cardinality (i.e., the number of nonzero entries of S0) are unknown as well. In particular, the magnitudes of the nonzero entries in S0 may be arbitrarily large. Given X, can we recover both L0 and S0, in a scalable and exact fashion?

Figure 1: Exemplifying the extra structures of low-rank data. Each entry of the data matrix is a grade that a user assigns to a movie. It is often the case that such data are low-rank, as there exist wide correlations among the grades that different users assign to the same movie. Also, such data can exhibit some clustering structure, since the preferences of the same type of users are more similar to each other than to those of users with different gender, personality, culture and education backgrounds.
In summary, such data (1) are often low-rank and (2) exhibit some clustering structure. The theory of RPCA tells us that, very generally, when the low-rank matrix L0 is also incoherent (i.e., has low coherence), both the low-rank and the sparse matrices can be exactly recovered by using the following convex, potentially scalable program:

min_{L,S} ∥L∥∗ + λ∥S∥1, s.t. X = L + S, (1.2)

where ∥·∥∗ is the nuclear norm [7] of a matrix, ∥·∥1 denotes the ℓ1 norm of a matrix seen as a long vector, and λ > 0 is a parameter. Besides its theoretical elegance, RPCA also has good empirical performance in many practical areas, e.g., image processing [26], computer vision [18], radar imaging [1], magnetic resonance imaging [17], etc.

While complete in theory and powerful in practice, RPCA cannot be an ultimate solution to the low-rank matrix recovery Problem 1. Indeed, the method might not produce perfect recovery even when L0 is strictly low-rank. This is because RPCA captures only the low-rankness property, which is not the only property of our data, and essentially ignores the extra structures (beyond low-rankness) widely existing in data: given the low-rankness constraint that the data points (i.e., the column vectors of L0) lie on a low-dimensional subspace, it is unnecessary for the data points to be distributed on the subspace uniformly at random, and it is quite normal for the data to have some extra structures which specify in more detail how the data points are distributed on the subspace. Figure 1 demonstrates a typical example of such extra structures, namely the clustering structures which are ubiquitous in modern applications. Whenever the data exhibit some clustering structure, RPCA is no longer a perfect method: as will be shown in this paper, while the rank of L0 stays fixed and the underlying cluster number grows, the coherence of L0 keeps increasing and thus, arguably, the performance of RPCA drops.
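The convex program (1.2) can be solved by standard proximal splitting; below is a minimal ADMM-style sketch for illustration (the paper does not prescribe a solver, and the penalty parameter, iteration count, and problem sizes in the demo are our own assumptions):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * ||.||_* ."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft thresholding: prox of tau * ||.||_1 ."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(X, lam=None, n_iter=500, tol=1e-7):
    """Sketch of min ||L||_* + lam*||S||_1  s.t. X = L + S  (program (1.2))."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(X).sum())   # heuristic penalty parameter (assumption)
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        R = X - L - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(X):
            break
    return L, S

# Demo: rank-2 L0 plus ~5% gross corruptions (illustrative sizes)
rng = np.random.default_rng(5)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60))
mask = rng.random((60, 60)) < 0.05
S0[mask] = 10.0 * np.sign(rng.standard_normal(mask.sum()))
L, S = rpca(L0 + S0)
rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(f"relative recovery error: {rel_err:.2e}")
```

In this easy regime (very low rank, mild corruption, incoherent L0) the recovery is accurate, consistent with the RPCA theory quoted above; the coherent regimes discussed next are precisely where this starts to fail.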
To better handle coherent data (i.e., cases where L0 has large coherence parameters), a seemingly straightforward idea is to avoid the coherence parameters of L0. However, as explained in [4], the coherence parameters are indeed necessary (if there is no additional condition assumed on the data). This paper further indicates that the coherence parameters are related in nature to some extra structures intrinsically existing in L0 and therefore cannot simply be discarded. Interestingly, we show that it is possible to avoid the coherence parameters by using some additional conditions, which are easy to obey in a supervised environment and can also be approximately achieved in an unsupervised environment. Our study is based on the following convex program, termed Low-Rank Representation (LRR) [13]:

min_{Z,S} ∥Z∥∗ + λ∥S∥1, s.t. X = AZ + S, (1.3)

where A ∈ R^{m×d} is a size-d dictionary matrix constructed in advance (it is not crucial to determine the exact value of d), and λ > 0 is a parameter. Suppose Z∗ is the optimal solution with respect to Z. Then LRR uses AZ∗ to restore L0. LRR falls back to RPCA when A = I (the identity matrix). Furthermore, it can be proved that the recovery produced by LRR is the same as that of RPCA whenever the dictionary A is orthogonal. In order for LRR to avoid the coherence parameters which increase with the cluster number underlying L0, we prove that it is sufficient to construct in advance a dictionary A which is low-rank by itself. This gives a generic prescription for defending against the difficulties raised by coherent data, providing an elementary criterion for learning the dictionary matrix A. Subsequently, we propose a simple and effective algorithm that utilizes the output of RPCA to construct the dictionary in LRR. Our extensive experiments on randomly generated matrices and motion data show promising results.
In summary, the contributions of this paper include the following:

⋄ For the first time, this paper studies the problem of recovering low-rank, coherent (or equivalently, less incoherent) matrices from their corrupted versions. We investigate the physical regime where coherent data arise; for example, the widely existing clustering structures may lead to coherent data. We prove some basic theories for resolving the problem, and also establish a practical algorithm that outperforms RPCA in our experimental study.

⋄ Our studies help reveal the physical meaning of coherence, which is now standard and widely used in the literature, e.g., [2, 3, 4, 25, 15]. We show that the coherence parameters are not merely "assumptions" needed for a proof, but rather quantities that relate in nature to the extra structures (beyond low-rankness) intrinsically existing in L0.

⋄ This paper provides insights regarding the LRR model proposed by [13]. While the special case of A = X has been extensively studied, the LRR model (1.3) with general dictionaries is not yet fully understood. We show that LRR (1.3), equipped with proper dictionaries, can handle coherent data well.

⋄ The idea of replacing L with AZ is essentially related to the spirit of matrix factorization, which has long been explored, e.g., [20, 23]. In that sense, the explorations of this paper help explain why factorization techniques are useful.

2 Summary of Main Notations

Capital letters such as M are used to represent matrices, and accordingly, [M]_{ij} denotes the (i, j)th entry of M. The letters U, V, Ω and their variants (complements, subscripts, etc.) are reserved for left singular vectors, right singular vectors, and support sets, respectively. We shall abuse the notation U (resp. V) to also denote the linear space spanned by the columns of U (resp. V), i.e., the column space (resp. row space).
The projection onto the column space U is denoted by PU and given by PU(M) = U U^T M, and similarly for the row space, PV(M) = M V V^T. We shall also abuse the notation Ω to denote the linear space of matrices supported on Ω. Then PΩ and PΩ⊥ respectively denote the projections onto Ω and Ω^c, such that PΩ + PΩ⊥ = I, where I is the identity operator. The symbol (·)⁺ denotes the Moore-Penrose pseudoinverse of a matrix: M⁺ = V_M Σ_M^{−1} U_M^T for a matrix M with Singular Value Decomposition (SVD) U_M Σ_M V_M^T. (In this paper, SVD always refers to the skinny SVD: for a rank-r matrix M ∈ R^{m×n}, its SVD is of the form U_M Σ_M V_M^T, with U_M ∈ R^{m×r}, Σ_M ∈ R^{r×r} and V_M ∈ R^{n×r}.)

Six different matrix norms are used in this paper. The first three are functions of the singular values: 1) the operator norm (i.e., the largest singular value), denoted by ∥M∥; 2) the Frobenius norm (i.e., the square root of the sum of squared singular values), denoted by ∥M∥F; and 3) the nuclear norm (i.e., the sum of the singular values), denoted by ∥M∥∗. The other three are the ℓ1, ℓ∞ (i.e., sup-norm) and ℓ2,∞ norms of a matrix: ∥M∥1 = Σ_{i,j} |[M]_{ij}|, ∥M∥∞ = max_{i,j} |[M]_{ij}| and ∥M∥2,∞ = max_j √(Σ_i [M]²_{ij}), respectively.

The Greek letter µ and its variants (e.g., subscripts and superscripts) are reserved for the coherence parameters of a matrix. We also reserve two lower case letters, m and n, to denote the data dimension and the number of data points, respectively, and we use the following two symbols throughout this paper: n1 = max(m, n) and n2 = min(m, n).

3 On the Recovery of Coherent Data

In this section, we first investigate the physical regime that gives rise to coherent (or less incoherent) data, and then discuss the problem of recovering coherent data from corrupted observations, providing some basic principles and an algorithm for resolving the problem.
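These definitions translate directly into a few lines of NumPy (an illustrative check of our own, not from the paper); note in particular that when U is taken from the SVD of M itself, PU(M) = M.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(M, full_matrices=False)  # skinny SVD: M = U diag(s) V^T

op    = s.max()                          # operator norm ||M||
fro   = np.sqrt((s**2).sum())            # Frobenius norm ||M||_F
nuc   = s.sum()                          # nuclear norm ||M||_*
l1    = np.abs(M).sum()                  # ||M||_1 (matrix viewed as a long vector)
linf  = np.abs(M).max()                  # ||M||_inf (sup-norm)
l2inf = np.linalg.norm(M, axis=0).max()  # ||M||_{2,inf}: largest column l2 norm

P_U_M  = U @ (U.T @ M)                   # P_U(M): projection onto the column space
M_pinv = Vt.T @ np.diag(1.0 / s) @ U.T   # Moore-Penrose pseudoinverse V Sigma^{-1} U^T
```

For a generic 5 × 4 matrix, `M_pinv` coincides with `np.linalg.pinv(M)` and `P_U_M` equals M, since M's columns lie in their own column space.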
3.1 Coherence Parameters and Their Properties

As the rank function cannot fully capture all the characteristics of L0, it is necessary to define some quantities that measure the effects of various extra structures (beyond low-rankness), such as the clustering structure demonstrated in Figure 1. The coherence parameters defined in [3, 4] are excellent exemplars of such quantities.

3.1.1 Coherence Parameters: µ1, µ2, µ3

For an m × n matrix L0 with rank r0 and SVD L0 = U0 Σ0 V0^T, some important properties can be characterized by three coherence parameters, denoted µ1, µ2 and µ3, respectively. The first coherence parameter, 1 ≤ µ1(L0) ≤ m, which characterizes the column space identified by U0, is defined as

µ1(L0) = (m/r0) max_{1≤i≤m} ∥U0^T ei∥²_2, (3.4)

where ei denotes the ith standard basis vector. The second coherence parameter, 1 ≤ µ2(L0) ≤ n, which characterizes the row space identified by V0, is defined as

µ2(L0) = (n/r0) max_{1≤j≤n} ∥V0^T ej∥²_2. (3.5)

The third coherence parameter, 1 ≤ µ3(L0) ≤ mn, which characterizes the joint space identified by U0 V0^T, is defined as

µ3(L0) = (mn/r0) (∥U0 V0^T∥∞)² = (mn/r0) max_{i,j} |⟨U0^T ei, V0^T ej⟩|². (3.6)

The analysis in RPCA [4] merges the above three parameters into a single one: µ(L0) = max{µ1(L0), µ2(L0), µ3(L0)}. As will be seen later, the behaviors of these three coherence parameters differ from one another, and hence it is more adequate to consider them individually.

3.1.2 The µ2-Phenomenon

According to the analysis in [4], the success condition (regarding L0) of RPCA is

rank(L0) ≤ cr n2 / (µ(L0) (log n1)²), (3.7)

where µ(L0) = max{µ1(L0), µ2(L0), µ3(L0)} and cr > 0 is some numerical constant. Thus, RPCA will be less successful when the coherence parameters are considerably larger. In this subsection, we show that the widely existing clustering structure can enlarge the coherence parameters and, accordingly, degrade the performance of RPCA.
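The definitions (3.4)-(3.6) can be computed directly from the skinny SVD; the helper below is our own illustrative implementation. For the rank-1 matrix used later in Figure 3 (one column of all ones, everything else zero), it returns µ1 = 1 and µ2 = n, matching the text.

```python
import numpy as np

def coherence(L, tol=1e-10):
    """Return (mu1, mu2, mu3) of L, following definitions (3.4)-(3.6)."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    r = max(int((s > tol * s[0]).sum()), 1)       # numerical rank r0
    U, V = U[:, :r], Vt[:r].T
    m, n = L.shape
    mu1 = m / r * (U**2).sum(axis=1).max()        # largest squared row norm of U0
    mu2 = n / r * (V**2).sum(axis=1).max()        # largest squared row norm of V0
    mu3 = m * n / r * (np.abs(U @ V.T).max())**2  # based on ||U0 V0^T||_inf
    return mu1, mu2, mu3

# Rank-1 example (cf. Figure 3): one column of all ones, everything else zero
n_demo = 10
L_demo = np.zeros((n_demo, n_demo))
L_demo[:, 0] = 1.0
mu1, mu2, mu3 = coherence(L_demo)   # mu1 = 1, mu2 = n
print(mu1, mu2, mu3)
```

Since the clustering structure only shapes the row space, it is µ2 (and with it µ3) that this helper shows growing on clustered data, while µ1 stays put.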
Given the restriction that rank(L0) = r0, the data points (i.e., the column vectors of L0) are not necessarily sampled from an r0-dimensional subspace uniformly at random. A more realistic interpretation is to consider the data points as samples from the union of k subspaces (i.e., clusters), where the sum of these multiple subspaces has dimension r0. That is to say, there are multiple "small" subspaces inside one r0-dimensional "large" subspace, as exemplified in Figure 1. Whenever the low-rank matrix L0 also exhibits such clustering behavior, the second coherence parameter µ2(L0) (and so µ3(L0)) will increase with the number of clusters underlying L0, as shown in Figure 2. As the coherence increases, (3.7) suggests that the performance of RPCA will drop, as verified in Figure 2(d). Note here that the variation of µ3 is mainly due to the variation of the row space, which is characterized by µ2. We call the phenomenon shown in Figure 2(b)-(d) the "µ2-phenomenon". Readers can also refer to the full paper to see why the second coherence parameter increases with the cluster number underlying L0.

Interestingly, one may notice that µ1 is invariant to the variation of the cluster number, as can be seen from Figure 2(a). This is because the clustering behavior of the data points can only affect the row space, while µ1 is defined on the column space. Yet, if the row vectors of L0 also possess some clustering structure, µ1 could be large as well. Such data exist widely in text documents, and we leave this as future work.

Figure 2: Exploring the influence of the cluster number, using randomly generated matrices. The size and rank of L0 are fixed to 500 × 500 and 100, respectively. The underlying cluster number varies from 1 to 50.
For the recovery experiments, S0 is fixed as a sparse matrix with 13% nonzero entries. (a) The first coherence parameter µ1(L0) vs. the cluster number. (b) µ2(L0) vs. the cluster number. (c) µ3(L0) vs. the cluster number. (d) The recover error (produced by RPCA) vs. the cluster number. The numbers shown in these figures are averaged over 100 random trials. The recover error is computed as ∥L̂0 − L0∥F / ∥L0∥F, where L̂0 denotes an estimate of L0.

3.2 Avoiding µ2 by LRR

The µ2-phenomenon implies that the second coherence parameter µ2 is related in nature to some intrinsic structures of L0 and thus cannot be eschewed without additional conditions. In the following, we figure out under what conditions the second coherence parameter µ2 (and µ3) can be avoided, so that LRR can handle coherent data well.

Main Result: We show that, when the dictionary A itself is low-rank, LRR is able to avoid µ2. Namely, the following theorem is proved without using µ2. See the full paper for a detailed proof.

Theorem 1 (Noiseless) Let A ∈ R^{m×d} with SVD A = U_A Σ_A V_A^T be a column-wise unit-normed (i.e., ∥A ei∥2 = 1, ∀i) dictionary matrix which satisfies P_{U_A}(U0) = U0 (i.e., U0 is a subspace of U_A). For any 0 < ϵ < 0.5 and some numerical constant ca > 1, if

rank(L0) ≤ rank(A) ≤ ϵ² n2 / (ca µ1(A) log n1) and |Ω| ≤ (0.5 − ϵ) mn, (3.8)

then with probability at least 1 − n1^{−10}, the optimal solution to the LRR problem (1.3) with λ = 1/√n1 is unique and exact, in the sense that Z∗ = A⁺L0 and S∗ = S0, where (Z∗, S∗) is the optimal solution to (1.3).

It is worth noting that the restriction rank(L0) ≤ O(n2/log n1) is looser than that of RPCA (see footnote 3), which requires rank(L0) ≤ O(n2/(log n1)²). The requirement of a column-wise unit-normed dictionary (i.e., ∥A ei∥2 = 1, ∀i) is purely for complying with the parameter setting λ = 1/√n1, which is consistent with RPCA. The condition P_{U_A}(U0) = U0, i.e., U0 is a subspace of U_A, is indispensable if we ask for exact recovery, because P_{U_A}(U0) = U0 is implied by the equality AZ∗ = L0.
This necessary condition, together with the low-rankness condition, provides an elementary criterion for learning the dictionary matrix A in LRR. Figure 3 presents an example which further confirms our main result; that is, LRR is able to avoid µ2 as long as U0 ⊂ U_A and A is low-rank. It is also worth noting that it is unnecessary for A to satisfy U_A = U0, and that LRR is actually tolerant to the "errors" possibly existing in the dictionary.

Footnote 3: In terms of exact recovery, O(n2/log n1) is probably the "finest" bound one could accomplish in theory.

Figure 3: Exemplifying that LRR can avoid µ2. In this experiment, L0 is a 200 × 200 rank-1 matrix with one column being 1 (i.e., a vector of all ones) and everything else being zero. Thus, µ1(L0) = 1 and µ2(L0) = 200. The dictionary is set as A = [1, W], where W is a 200 × p random Gaussian matrix (with varying p). As long as rank(A) = p + 1 ≤ 10, LRR with λ = 0.08 can exactly recover L0 from a grossly corrupted observation matrix X.

The program (1.3) is designed for the case where the uncorrupted observations are noiseless. In reality this assumption is often not true, and all entries of X can be contaminated by a small amount of noise, i.e., X = L0 + S0 + N, where N is a matrix of dense Gaussian noise. In this case, the formulation of LRR (1.3) needs to be modified to

min_{Z,S} ∥Z∥∗ + λ∥S∥1, s.t. ∥X − AZ − S∥F ≤ ε, (3.9)

where ε is a parameter that measures the noise level of the data. In the experiments of this paper, we consistently set ε = 10⁻⁶∥X∥F. In the presence of dense noise, the latent matrices L0 and S0 cannot be exactly restored. Yet we have the following theorem guaranteeing the near-recovery property of the solution produced by the program (3.9):

Theorem 2 (Noisy) Suppose ∥X − L0 − S0∥F ≤ ε. Let A ∈ R^{m×d} with SVD A = U_A Σ_A V_A^T be a column-wise unit-normed dictionary matrix which satisfies P_{U_A}(U0) = U0 (i.e., U0 is a subspace of U_A).
For any 0 < ϵ < 0.35 and some numerical constant ca > 1, if

rank(L0) ≤ rank(A) ≤ ϵ² n2 / (ca µ1(A) log n1) and |Ω| ≤ (0.35 − ϵ) mn, (3.10)

then with probability at least 1 − n1^{−10}, any solution (Z∗, S∗) to (3.9) with λ = 1/√n1 gives a near recovery of (L0, S0), in the sense that ∥AZ∗ − L0∥F ≤ 8√(mn) ε and ∥S∗ − S0∥F ≤ (8√(mn) + 2) ε.

3.3 An Unsupervised Algorithm for Matrix Recovery

To handle coherent (equivalently, less incoherent) data, Theorem 1 suggests that the dictionary matrix A should be low-rank and satisfy U0 ⊂ U_A. In a supervised environment this might not be difficult, as one could potentially use clean, well-processed training data to construct the dictionary. In an unsupervised environment, however, it is challenging to identify a low-rank dictionary that obeys U0 ⊂ U_A. Note that U0 ⊂ U_A can be viewed as supervision information (if A is low-rank). In this paper, we introduce a heuristic algorithm that can work distinctly better than RPCA in an unsupervised environment.

As can be seen from (3.7), RPCA is actually not brittle with respect to coherent data (although its performance is depressed). Based on this observation, we propose a simple algorithm, summarized in Algorithm 1, that achieves a solid improvement over RPCA. Our idea is straightforward: we first obtain an estimate of L0 by using RPCA and then utilize the estimate to construct the dictionary matrix A in LRR. The post-processing steps (Step 2 and Step 3) that slightly modify the solution of RPCA are intended to encourage a well-conditioned dictionary, which is the circumstance that favors LRR. Whenever the recovery produced by RPCA is already exact, Theorem 1 implies that the recovery produced by our Algorithm 1 is exact as well. That is to say, in terms of exactly recovering L0 from a given X, the success probability of our Algorithm 1 is greater than or equal to that of RPCA.
From a computational perspective, Algorithm 1 does not really double the work of RPCA, although there are two convex programs in our algorithm. In fact, according to our simulations, the computational time of Algorithm 1 is usually merely about 1.2 times that of RPCA. The reason is that, as explored by [13], the complexity of solving the LRR problem (1.3) is O(n²rA) (assuming m = n), which is much lower than that of RPCA (which requires O(n³)), provided that the obtained dictionary matrix A is fairly low-rank (i.e., rA is small).

One may have noticed that the procedure of Algorithm 1 could be made iterative, i.e., one could consider ÂZ∗ as a new estimate of L0 and use it to further update the dictionary matrix A, and so on. Nevertheless, we empirically find that such an iterative procedure often converges within two iterations. Hence, for the sake of simplicity, we do not consider iterative strategies in this paper.

Algorithm 1: Matrix Recovery
Input: observed data matrix X ∈ R^{m×n}; adjustable parameter λ.
1. Solve for L̂0 by optimizing the RPCA problem (1.2) with λ = 1/√n1.
2. Estimate the rank of L̂0 by r̂0 = #{i : σi > 10⁻³ σ1}, where σ1, σ2, ..., σ_{n2} are the singular values of L̂0.
3. Form L̃0 as the rank-r̂0 approximation of L̂0. That is, L̃0 = argmin_L ∥L − L̂0∥²F, s.t. rank(L) ≤ r̂0, which is solved by SVD.
4. Construct a dictionary Â from L̃0 by normalizing the column vectors of L̃0: [Â]:,i = [L̃0]:,i / ∥[L̃0]:,i∥2, i = 1, ..., n, where [·]:,i denotes the ith column of a matrix.
5. Solve for Z∗ by optimizing the LRR problem (1.3) with A = Â and λ = 1/√n1.
Output: ÂZ∗.

4 Experiments

4.1 Results on Randomly Generated Matrices

We first verify the effectiveness of our Algorithm 1 on randomly generated matrices.
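Steps 2-4 of Algorithm 1 amount to a rank-truncated SVD followed by column normalization; a minimal sketch of those steps follows (the RPCA solver of Step 1 and the LRR solver of Step 5 are assumed to be available separately, and the demo input below is our own illustrative instance):

```python
import numpy as np

def build_dictionary(L_hat):
    """Steps 2-4 of Algorithm 1: build the dictionary A from an RPCA estimate."""
    U, s, Vt = np.linalg.svd(L_hat, full_matrices=False)
    r = max(int((s > 1e-3 * s[0]).sum()), 1)   # Step 2: rank estimate r0_hat
    L_tilde = (U[:, :r] * s[:r]) @ Vt[:r]      # Step 3: rank-r approximation
    norms = np.linalg.norm(L_tilde, axis=0)    # Step 4: column-wise normalization
    norms[norms == 0] = 1.0                    # guard against all-zero columns
    return L_tilde / norms

# Demo on a noisy low-rank estimate (illustrative)
rng = np.random.default_rng(6)
L_hat = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
L_hat += 1e-6 * rng.standard_normal((50, 40))
A = build_dictionary(L_hat)                    # unit columns, numerical rank 3
```

The output dictionary is low-rank and column-wise unit-normed by construction, which is exactly the configuration Theorems 1 and 2 require of A.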
We generate a collection of 200 × 1000 data matrices according to the model X = P_{Ω⊥}(L0) + P_Ω(S0): Ω is a support set chosen at random; L0 is created by sampling 200 data points from each of 5 randomly generated subspaces; and S0 consists of random values drawn from Bernoulli ±1. The dimension of each subspace varies from 1 to 20 with step size 1, and thus the rank of L0 varies from 5 to 100 with step size 5. The fraction |Ω|/(mn) varies from 2.5% to 50% with step size 2.5%. For each pair of rank and support size (r0, |Ω|), we run 10 trials, resulting in a total of 4000 (20 × 20 × 10) trials.

Figure 4: Algorithm 1 vs. RPCA for the task of recovering randomly generated matrices, both using λ = 1/√n₁. [Panels: success maps over corruption (%) vs. rank(L0)/n₂ for RPCA and for Algorithm 1, and the two success boundaries overlaid.] The curve shown in the third subfigure is the boundary for a method to be successful: the recovery is successful for any pair (r0/n₂, |Ω|/(mn)) that lies below the curve. Here, a success means ‖L̂0 − L0‖_F < 0.05‖L0‖_F, where L̂0 denotes an estimate of L0.

Figure 4 compares our Algorithm 1 to RPCA, both using λ = 1/√n₁. It can be seen that, using the learned dictionary matrix, Algorithm 1 works distinctly better than RPCA. In fact, the success area (i.e., the area of the white region) of our algorithm is 47% wider than that of RPCA. We should also mention that it is possible for RPCA to succeed exactly on coherent (or less incoherent) data, provided that the rank of L0 is low enough and/or S0 is sparse enough. Our algorithm in general improves on RPCA when L0 is moderately low-rank and/or S0 is moderately sparse.

4.2 Results on Corrupted Motion Sequences
We now present our experiment with 11 additional sequences attached to the Hopkins155 [21] database.
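The synthetic-data model of Section 4.1 (a union of random subspaces with Bernoulli ±1 corruptions on a random support) can be sketched as follows; the helper name, default sizes, and seed are our own choices for illustration:

```python
import numpy as np

def make_corrupted_matrix(m=200, n_per=200, k=5, dim=4, frac=0.1, seed=0):
    """Sample X = P_{Omega^c}(L0) + P_Omega(S0): L0 stacks n_per points from each
    of k random dim-dimensional subspaces of R^m, and a random fraction `frac`
    of the entries is overwritten by Bernoulli +/-1 corruptions."""
    rng = np.random.default_rng(seed)
    # L0: each block lies in a random dim-dimensional subspace, so rank(L0) <= k*dim.
    blocks = [rng.standard_normal((m, dim)) @ rng.standard_normal((dim, n_per))
              for _ in range(k)]
    L0 = np.hstack(blocks)
    n = k * n_per
    # Omega: support of the corruptions, chosen uniformly at random.
    omega = rng.random((m, n)) < frac
    S0 = rng.choice([-1.0, 1.0], size=(m, n))
    X = np.where(omega, S0, L0)
    return X, L0, omega
```

Sweeping `dim` and `frac` over grids, and declaring success when ‖L̂0 − L0‖_F < 0.05‖L0‖_F, reproduces the kind of phase-transition map shown in Figure 4.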
In those sequences, about 10% of the entries in the data matrix of trajectories are unobserved (i.e., missing) due to visual occlusion. We replace each missing entry with a value drawn from Bernoulli ±1, resulting in a collection of corrupted trajectory matrices for evaluating the effectiveness of matrix recovery algorithms. We perform subspace clustering on both the corrupted trajectory matrices and their recovered versions, and use the clustering error rates produced by existing subspace clustering methods as the evaluation metric. We consider three state-of-the-art subspace clustering methods: Shape Interaction Matrix (SIM) [5], Low-Rank Representation with A = X [14] (referred to as "LRRx"), and Sparse Subspace Clustering (SSC) [6].

Table 1: Clustering error rates (%) on 11 corrupted motion sequences.
Method | Mean | Median | Maximum | Minimum | Std. | Time (sec.)
SIM | 29.19 | 27.77 | 45.82 | 12.45 | 11.74 | 0.07
RPCA + SIM | 14.82 | 8.38 | 45.78 | 0.97 | 16.23 | 9.96
Algorithm 1 + SIM | 8.74 | 3.09 | 42.61 | 0.23 | 12.95 | 11.64
LRRx | 21.38 | 22.00 | 56.96 | 0.58 | 17.10 | 1.80
RPCA + LRRx | 10.70 | 3.05 | 46.25 | 0.20 | 15.63 | 10.75
Algorithm 1 + LRRx | 7.09 | 3.06 | 32.33 | 0.22 | 10.59 | 12.11
SSC | 22.81 | 20.78 | 58.24 | 1.55 | 18.46 | 3.18
RPCA + SSC | 9.50 | 2.13 | 50.32 | 0.61 | 16.17 | 12.51
Algorithm 1 + SSC | 5.74 | 1.85 | 27.84 | 0.20 | 8.52 | 13.11

Table 1 shows the error rates of the various algorithms. Without the preprocessing of matrix recovery, all the subspace clustering methods fail to accurately categorize the trajectories of the moving objects, producing error rates higher than 20%. This illustrates that it is important for motion segmentation to correct the gross corruptions that may exist in the data matrix of trajectories. By using RPCA (λ = 1/√n₁) to correct the corruptions, the clustering performance of all considered methods improves dramatically. For example, the error rate of SSC is reduced from 22.81% to 9.50%.
By choosing an appropriate dictionary for LRR (λ = 1/√n₁), the error rates can be reduced again, e.g., from 9.50% to 5.74% for SSC, a 40% relative improvement. These results verify the effectiveness of our dictionary learning strategy in realistic environments.

5 Conclusion and Future Work
We have studied the problem of disentangling the low-rank and sparse components of a given data matrix. Whenever the low-rank component exhibits clustering structure, the state-of-the-art RPCA method can be less successful. This is because RPCA prefers incoherent data, which however may be inconsistent with data in the real world: when the number of clusters becomes large, the second and third coherence parameters grow, and hence the performance of RPCA can be degraded. We have shown that the challenges arising from coherent (equivalently, less incoherent) data can be effectively alleviated by learning a suitable dictionary under the LRR framework. Namely, when the dictionary matrix is low-rank and contains information about the ground-truth matrix, LRR can be immune to the coherence parameters that increase with the underlying cluster number. Furthermore, we have established a practical algorithm that outperforms RPCA in our extensive experiments.

The problem of recovering coherent data essentially concerns the robustness of the Generalized PCA (GPCA) [22] problem. Although the classic GPCA problem has been explored for several decades, robust GPCA is new and has not been well studied. The approach proposed in this paper is in a sense preliminary, and it is possible to develop other effective methods for learning the dictionary matrix in LRR and for handling coherent data. We leave these as future work.

Acknowledgement
Guangcan Liu was a Postdoctoral Researcher supported by NSF-DMS0808864, NSF-SES1131848, NSF-EAGER1249316, AFOSR-FA9550-13-1-0137, and ONR-N00014-13-1-0764. Ping Li is also partially supported by NSF-III1360971 and NSF-BIGDATA1419210.
References
[1] Liliana Borcea, Thomas Callaghan, and George Papanicolaou. Synthetic aperture radar imaging and motion estimation via robust principal component analysis. Arxiv, 2012.
[2] Emmanuel Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[3] Emmanuel Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[4] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM, 58(3):1–37, 2011.
[5] Joao Costeira and Takeo Kanade. A multibody factorization method for independently moving objects. International Journal of Computer Vision, 29(3):159–179, 1998.
[6] E. Elhamifar and R. Vidal. Sparse subspace clustering. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 2790–2797, 2009.
[7] M. Fazel. Matrix rank minimization with applications. PhD thesis, 2002.
[8] Martin Fischler and Robert Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[9] R. Gnanadesikan and J. R. Kettenring. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, 28(1):81–124, 1972.
[10] D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[11] Qifa Ke and Takeo Kanade. Robust l1 norm factorization in the presence of outliers and missing data by alternative convex programming. In IEEE Conference on Computer Vision and Pattern Recognition, pages 739–746, 2005.
[12] Fernando De la Torre and Michael J. Black. A framework for robust subspace learning. International Journal of Computer Vision, 54(1-3):117–142, 2003.
[13] Guangcan Liu, Zhouchen Lin, Shuicheng Yan, Ju Sun, Yong Yu, and Yi Ma. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):171–184, 2013.
[14] Guangcan Liu, Zhouchen Lin, and Yong Yu. Robust subspace segmentation by low-rank representation. In International Conference on Machine Learning, pages 663–670, 2010.
[15] Guangcan Liu, Huan Xu, and Shuicheng Yan. Exact subspace segmentation and outlier detection by low-rank representation. Journal of Machine Learning Research - Proceedings Track, 22:703–711, 2012.
[16] Rahul Mazumder, Trevor Hastie, and Robert Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11:2287–2322, 2010.
[17] Ricardo Otazo, Emmanuel Candès, and Daniel K. Sodickson. Low-rank and sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components. Arxiv, 2012.
[18] YiGang Peng, Arvind Ganesh, John Wright, Wenli Xu, and Yi Ma. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2233–2246, 2012.
[19] Mahdi Soltanolkotabi, Ehsan Elhamifar, and Emmanuel Candès. Robust subspace clustering. arXiv:1301.2603, 2013.
[20] Nathan Srebro and Tommi Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In Neural Information Processing Systems, pages 5–27, 2005.
[21] Roberto Tron and Rene Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2007.
[22] Rene Vidal, Yi Ma, and S. Sastry. Generalized Principal Component Analysis. Springer Verlag, 2012.
[23] Markus Weimer, Alexandros Karatzoglou, Quoc V. Le, and Alex J. Smola. CofiRank - maximum margin matrix factorization for collaborative ranking. In Neural Information Processing Systems, 2007.
[24] Huan Xu, Constantine Caramanis, and Shie Mannor. Outlier-robust PCA: The high-dimensional case. IEEE Transactions on Information Theory, 59(1):546–572, 2013.
[25] Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. In Neural Information Processing Systems, 2010.
[26] Zhengdong Zhang, Arvind Ganesh, Xiao Liang, and Yi Ma. TILT: Transform invariant low-rank textures. International Journal of Computer Vision, 99(1):1–24, 2012.
Sparse Random Features Algorithm as Coordinate Descent in Hilbert Space
Ian E.H. Yen 1, Ting-Wei Lin 2, Shou-De Lin 2, Pradeep Ravikumar 1, Inderjit S. Dhillon 1
Department of Computer Science. 1: University of Texas at Austin, 2: National Taiwan University
1: {ianyen,pradeepr,inderjit}@cs.utexas.edu, 2: {b97083,sdlin}@csie.ntu.edu.tw

Abstract
In this paper, we propose a Sparse Random Features algorithm, which learns a sparse non-linear predictor by minimizing an ℓ1-regularized objective function over the Hilbert space induced by a kernel function. By interpreting the algorithm as Randomized Coordinate Descent in an infinite-dimensional space, we show that the proposed approach converges to a solution within ε-precision of that of an exact kernel method by drawing O(1/ε) random features, in contrast to the O(1/ε²) convergence achieved by current Monte-Carlo analyses of Random Features. In our experiments, the Sparse Random Features algorithm obtains a sparse solution that requires less memory and prediction time, while maintaining comparable performance on regression and classification tasks. Moreover, as an approximate solver for the infinite-dimensional ℓ1-regularized problem, the randomized approach also enjoys better convergence guarantees than a Boosting approach in the setting where the greedy Boosting step cannot be performed exactly.

1 Introduction
Kernel methods have become standard for building non-linear models from simple feature representations, and have proven successful in problems ranging across classification, regression, structured prediction, and feature extraction [16, 20]. A caveat, however, is that they are not scalable as the number of training samples increases. In particular, the size of the models produced by kernel methods scales linearly with the number of training samples, even for sparse kernel methods like support vector machines [17]. This makes the corresponding training and prediction computationally prohibitive for large-scale problems.
A line of research has thus been devoted to kernel approximation methods that aim to preserve predictive performance while maintaining computational tractability. Among these, Random Features has attracted considerable recent interest due to its simplicity and efficiency [2, 3, 4, 5, 10, 6]. Since first proposed in [2], and extended by several works [3, 4, 5, 10], the Random Features approach is a sampling-based approximation to the kernel function, where by drawing D features from the distribution induced by the kernel function, one can guarantee uniform convergence of the approximation error at the rate O(1/√D). On the flip side, such a rate of convergence suggests that in order to achieve high precision, one might need a large number of random features, which might lead to model sizes even larger than that of the vanilla kernel method.

One approach to remedy this problem would be to employ feature-selection techniques to prevent the model size from growing linearly with D. A simple way to do so would be to add ℓ1-regularization to the objective function, so that one can simultaneously increase the number of random features D while selecting a compact subset of them with non-zero weight. However, the resulting algorithm cannot be justified by existing analyses of Random Features, since the Representer theorem does not hold for the ℓ1-regularized problem [15, 16]. In other words, since the prediction cannot be expressed as a linear combination of kernel evaluations, a small error in approximating the kernel function does not correspondingly guarantee a small prediction error.

In this paper, we propose a new interpretation of Random Features that justifies its usage with ℓ1-regularization, yielding the Sparse Random Features algorithm.
In particular, we show that the Sparse Random Feature algorithm can be seen as Randomized Coordinate Descent (RCD) in the Hilbert Space induced from the kernel, and by taking D steps of coordinate descent, one can achieve a solution comparable to exact kernel methods within O(1/D) precision in terms of the objective function. Note that the surprising facet of this analysis is that in the finite-dimensional case, the iteration complexity of RCD increases with number of dimensions [18], which would trivially yield a bound going to infinity for our infinite-dimensional problem. In our experiments, the Sparse Random Features algorithm obtains a sparse solution that requires less memory and prediction time, while maintaining comparable performance on regression and classification tasks with various kernels. Note that our technique is complementary to that proposed in [10], which aims to reduce the cost of evaluating and storing basis functions, while our goal is to reduce the number of basis functions in a model. Another interesting aspect of our algorithm is that our infinite-dimensional ℓ1-regularized objective is also considered in the literature of Boosting [7, 8], which can be interpreted as greedy coordinate descent in the infinite-dimensional space. As an approximate solver for the ℓ1-regularized problem, we compare our randomized approach to the boosting approach in theory and also in experiments. As we show, for basis functions that do not allow exact greedy search, a randomized approach enjoys better guarantees. 
2 Problem Setup
We are interested in estimating a prediction function f : X → Y from a training data set D = {(x_n, y_n)}_{n=1}^N, (x_n, y_n) ∈ X × Y, by solving an optimization problem over some Reproducing Kernel Hilbert Space (RKHS) H:

f* = argmin_{f∈H} (λ/2)‖f‖²_H + (1/N) Σ_{n=1}^N L(f(x_n), y_n),   (1)

where L(z, y) is a convex loss function with a Lipschitz-continuous derivative satisfying |L′(z₁, y) − L′(z₂, y)| ≤ β|z₁ − z₂|. This includes several standard loss functions, such as the square loss L(z, y) = ½(z − y)², the square-hinge loss L(z, y) = max(1 − zy, 0)², and the logistic loss L(z, y) = log(1 + exp(−yz)).

2.1 Kernel and Feature Map
There are two ways in practice to specify the space H. One is to specify a positive-definite kernel k(x, y) that encodes similarity between instances, in which case H can be expressed as the completion of the space spanned by {k(x, ·)}_{x∈X}, that is,

H = { f(·) = Σ_{i=1}^K α_i k(x_i, ·) | α_i ∈ R, x_i ∈ X }.

The other is to find an explicit feature map {φ̄_h(x)}_{h∈H}, where each h ∈ H defines a basis function φ̄_h(x) : X → R. The RKHS H can then be defined as

H = { f(·) = ∫_{h∈H} w(h) φ̄_h(·) dh = ⟨w, φ̄(·)⟩_H | ‖f‖²_H < ∞ },   (2)

where w(h) is a weight distribution over the basis {φ_h(x)}_{h∈H}. By Mercer's theorem [1], every positive-definite kernel k(x, y) has a decomposition such that

k(x, y) = ∫_{h∈H} p(h) φ_h(x) φ_h(y) dh = ⟨φ̄(x), φ̄(y)⟩_H,   (3)

where p(h) ≥ 0 and φ̄_h(·) = √p(h) φ_h(·), denoted φ̄ = √p ∘ φ. However, the decomposition is not unique: one can derive multiple decompositions of the same kernel k(x, y) based on different sets of basis functions {φ_h(x)}_{h∈H}. For example, in [2], the Laplacian kernel k(x, y) = exp(−γ‖x − y‖₁) can be decomposed through both the Fourier basis and the Random Binning basis, while in [7], the Laplacian kernel can be obtained by integrating over an infinite number of decision trees. On the other hand, multiple kernels can be derived from the same set of basis functions via different distributions p(h).
For example, in [2, 3], a general decomposition method using Fourier basis functions {φ_ω(x) = cos(ωᵀx)}_{ω∈R^d} was proposed to find a feature map for any shift-invariant kernel of the form k(x − y), where the feature maps (3) of different kernels k(Δ) differ only in the distribution p(ω) obtained from the Fourier transform of k(Δ). Similarly, [5] proposed a decomposition based on a polynomial basis for any dot-product kernel of the form k(⟨x, y⟩).

2.2 Random Features as Monte-Carlo Approximation
The standard kernel method, often referred to as the "kernel trick," solves problem (1) through the Representer Theorem [15, 16], which states that the optimal decision function f* ∈ H lies in the span of the training samples, H_D = { f(·) = Σ_{n=1}^N α_n k(x_n, ·) | α_n ∈ R, (x_n, y_n) ∈ D }, reducing the infinite-dimensional problem (1) to a finite-dimensional problem with N variables {α_n}_{n=1}^N. However, it is known that even for loss functions with dual sparsity (e.g., hinge loss), the number of non-zero α_n increases linearly with the data size [17].

Random Features has been proposed as a kernel approximation method [2, 3, 10, 5], where the Monte-Carlo approximation

k(x_i, x_j) = E_{p(h)}[φ_h(x_i) φ_h(x_j)] ≈ (1/D) Σ_{k=1}^D φ_{h_k}(x_i) φ_{h_k}(x_j) = z(x_i)ᵀ z(x_j)   (4)

is used to approximate (3), so that the solution to (1) can be obtained by

w_RF = argmin_{w∈R^D} (λ/2)‖w‖² + (1/N) Σ_{n=1}^N L(wᵀ z(x_n), y_n).   (5)

The corresponding approximation error

w_RFᵀ z(x) − f*(x) = Σ_{n=1}^N α_n^{RF} z(x_n)ᵀ z(x) − Σ_{n=1}^N α_n* k(x_n, x),   (6)

as proved in [2, Appendix B], can be bounded by ε given D = Ω(1/ε²) random features, which is a direct consequence of the uniform convergence of the sampling approximation (4). Unfortunately, this rate of convergence suggests that to achieve a small approximation error ε, one needs a significant number of random features, and since the model size of (5) grows linearly with D, such an algorithm might not obtain a sparser model than the kernel method.
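For a shift-invariant kernel such as the Gaussian RBF, the Monte-Carlo feature map in (4) can be sketched with random Fourier features. This is a generic illustration, not the paper's code; the bandwidth, sample sizes, and seeds are arbitrary choices of ours:

```python
import numpy as np

def gaussian_rff(X, D, gamma=0.5, seed=0):
    """Random Fourier feature map z(x) such that z(xi) @ z(xj) approximates
    the Gaussian kernel k(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # p(w) is the Fourier transform of the kernel: here N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Z = gaussian_rff(X, D=20000)
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
K_approx = Z @ Z.T
# The entrywise error decays like O(1/sqrt(D)), matching the Monte-Carlo rate.
```

Note how the model size of (5) is D regardless of how many features actually matter, which is the motivation for the ℓ1-regularized variant discussed next.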
On the other hand, the ℓ1-regularized Random Features algorithm we are proposing aims to minimize the loss with a selected subset of random features that grows with neither D nor N. However, (6) does not hold under ℓ1-regularization, and thus one cannot transfer the guarantee from the kernel approximation (4) to the learned decision function.

3 Sparse Random Features as Coordinate Descent
In this section, we present the Sparse Random Features algorithm and analyze its convergence by interpreting it as fully-corrective randomized coordinate descent in a Hilbert space. Given a feature map of orthogonal basis functions {φ̄_h(x) = √p(h) φ_h(x)}_{h∈H}, the optimization problem (1) can be written as the infinite-dimensional optimization problem

min_{w∈H} (λ/2)‖w‖²₂ + (1/N) Σ_{n=1}^N L(⟨w, φ̄(x_n)⟩_H, y_n).   (7)

Instead of directly minimizing (7), the Sparse Random Features algorithm optimizes the related ℓ1-regularized problem defined as

min_{w̄∈H} F(w̄) = λ‖w̄‖₁ + (1/N) Σ_{n=1}^N L(⟨w̄, φ(x_n)⟩_H, y_n),   (8)

where φ̄(x) = √p ∘ φ(x) is replaced by φ(x) and ‖w̄‖₁ is defined as the ℓ1-norm in function space, ‖w̄‖₁ = ∫_{h∈H} |w̄(h)| dh. The whole procedure is depicted in Algorithm 1. At each iteration, we draw R coordinates h₁, h₂, ..., h_R from the distribution p(h), add them into a working set A_t, and minimize (8) with respect to the working set A_t:

min_{w̄(h), h∈A_t} λ Σ_{h∈A_t} |w̄(h)| + (1/N) Σ_{n=1}^N L(Σ_{h∈A_t} w̄(h) φ_h(x_n), y_n).   (9)

At the end of each iteration, the algorithm removes features with zero weight to maintain a compact working set.

Algorithm 1 Sparse Random-Feature Algorithm
Initialize w̄₀ = 0, working set A^(0) = {}, and t = 0.
repeat
1. Sample h₁, h₂, ..., h_R i.i.d. from distribution p(h).
2. Add h₁, h₂, ..., h_R to the set A^(t).
3. Obtain w̄_{t+1} by solving (9).
4. A^(t+1) = A^(t) \ { h | w̄_{t+1}(h) = 0 }.
5. t ← t + 1.
until t = T

3.1 Convergence Analysis
In this section, we analyze the convergence behavior of Algorithm 1. The analysis comprises two parts.
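For the square loss with Fourier coordinates, the outer loop of Algorithm 1 can be sketched as follows. This is a minimal sketch under our own assumptions: we use a simple proximal-gradient (ISTA) routine for the ℓ1 sub-problem (9), and all parameter values and names are ours, not the paper's:

```python
import numpy as np

def ista_lasso(Phi, y, lam, iters=500):
    """Solve min_w (1/2N)||Phi w - y||^2 + lam * ||w||_1 by proximal gradient."""
    N, D = Phi.shape
    w = np.zeros(D)
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 / N)  # 1 / Lipschitz constant
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y) / N
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w

def sparse_rf(X, y, lam=0.01, R=50, T=10, gamma=0.5, seed=0):
    """Sparse Random Features for the square loss with Fourier coordinates."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W, b, w = np.zeros((d, 0)), np.zeros(0), np.zeros(0)
    for _ in range(T):
        # Steps 1-2: draw R new coordinates h ~ p(h) and add them to the working set.
        W = np.hstack([W, rng.normal(scale=np.sqrt(2 * gamma), size=(d, R))])
        b = np.concatenate([b, rng.uniform(0, 2 * np.pi, R)])
        # Step 3: fully corrective step, re-solving the l1 problem on the working set.
        Phi = np.cos(X @ W + b)
        w = ista_lasso(Phi, y, lam)
        # Step 4: drop coordinates whose weight is exactly zero.
        keep = w != 0
        W, b, w = W[:, keep], b[keep], w[keep]
    return W, b, w
```

The soft-thresholding step is what keeps the working set compact: coordinates whose weight is driven to zero are discarded before the next batch of random features is drawn.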
First, we estimate the number of iterations Algorithm 1 takes to produce a solution wt that is at most ϵ away from some arbitrary reference solution wref on the ℓ1-regularized program (8). Then, by taking wref as the optimal solution w∗of (7), we obtain an approximation guarantee for wt with respect to w∗. The proofs for most lemmas and corollaries will be in the appendix. Lemma 1. Suppose loss function L(z, y) has β-Lipschitz-continuous derivative and |ϕh(x)| ≤ B, ∀h ∈H, ∀x ∈X. The loss term Loss( ¯w; ϕ) = 1 N ∑N n=1 L(⟨¯w, ϕ(xn)⟩, yn) in (8) has Loss( ¯w + ηδh; ϕ) −Loss( ¯w; ϕ) ≤ghη + γ 2 η2, where δh = δ(∥x −h∥) is a Dirac function centered at h, and gh = ∇¯wLoss( ¯w; ϕ)(h) is the Frechet derivative of the loss term evaluated at h, and γ = βB2. The above lemma states smoothness of the loss term, which is essential to guarantee descent amount obtained by taking a coordinate descent step. In particular, we aim to express the expected progress made by Algorithm 1 as the proximal-gradient magnitude of ¯F(w) = F(√p ◦w) defined as ¯F(w) = λ∥√p ◦w∥1 + 1 N N ∑ n=1 L(⟨w, ¯ϕ(xn)⟩, yn). (10) . Let g = ∇¯wLoss( ¯w, ϕ), ¯g = ∇wLoss(w, ¯ϕ) be the gradients of loss terms in (8), (10) respectively, and let ρ ∈∂(λ∥¯w∥1). We have following relations between (8) and (10): ¯ρ := √p ◦ρ ∈∂(λ∥√p ◦w∥1), ¯g = √p ◦g, (11) by simple applications of the chain rule. We then analyze the progress made by each iteration of Algorithm 1. Recalling that we used R to denote the number of samples drawn in step 1 of our algorithm, we will first assume R = 1, and then show that same result holds also for R > 1. 4 Theorem 1 (Descent Amount). The expected descent of the iterates of Algorithm 1 satisfies E[F( ¯wt+1)] −F( ¯wt) ≤−γ∥¯ηt∥2 2 , (12) where ¯η is the proximal gradient of (10), that is, ¯η = argmin η λ∥√p ◦(wt + η)∥1 −λ∥√p ◦wt∥1 + ⟨¯g, η⟩+ γ 2 ∥η∥2 (13) and ¯g = ∇wLoss(wt, ¯ϕ) is the derivative of loss term w.r.t. w. Proof. Let gh = ∇¯wLoss( ¯wt, ϕ)(h). 
By Corollary 1, we have F( ¯wt + ηδh) −F( ¯wt) ≤λ| ¯wt(h) + η| −λ| ¯wt(h)| + ghη + γ 2 η2. (14) Minimizing RHS w.r.t. η, the minimizer ηh should satisfy gh + ρh + γηh = 0 (15) for some sub-gradient ρh ∈∂(λ| ¯wt(h) + ηh|). Then by definition of sub-gradient and (15) we have λ| ¯wt(h) + η| −λ| ¯wt(h)| + ghη + γ 2 η2 ≤ρhηh + ghηh + γ 2 η2 h (16) = −γη2 h + γ 2 η2 h = −γ 2 η2 h. (17) Note the equality in (16) holds if ¯wt(h) = 0 or the optimal ηh = 0, which is true for Algorithm 1. Since ¯wt+1 minimizes (9) over a block At containing h, we have F( ¯wt+1) ≤F( ¯wt + ηhδh). Combining (14) and (16), taking expectation over h on both sides, and then we have E[F( ¯wt+1)] −F( ¯wt) ≤−γ 2 E[η2 h] = ∥√p ◦η∥2 = ∥¯η∥2 Then it remains to verify that ¯η = √p ◦η is the proximal gradient (13) of ¯F(wt), which is true since ¯η satisfies the optimality condition of (13) ¯g + ¯ρ + γ¯η = √p ◦(g + ρ + γη) = 0, where first equality is from (11) and the second is from (15). Theorem 2 (Convergence Rate). Given any reference solution wref, the sequence {wt}∞ t=1 satisfies E[ ¯F(wt)] ≤¯F(wref) + 2γ∥wref∥2 k , (18) where k = max{t −c, 0} and c = 2( ¯ F (0)−¯ F (wref )) γ∥wref ∥2 is a constant. Proof. First, the equality actually holds in inequality (16), since for h /∈A(t−1), we have wt(h) = 0, which implies λ|wt(h) + η| −λ|wt(h)| = ρη, ρ ∈∂(λ|wt(h) + η|), and for h ∈At−1 we have ¯ηh = 0, which gives 0 to both LHS and RHS. Therefore, we have −γ 2 ∥¯η∥2 = min η λ∥√p ◦(wt + η)∥1 −λ∥√p ◦wt∥1 + ¯gT η + γ 2 ∥η∥2. (19) Note the minimization in (19) is separable for different coordinates. For h ∈A(t−1), the weight wt(h) is already optimal in the beginning of iteration t, so we have ¯ρh + ¯gh = 0 for some ¯ρh ∈ ∂(| √ p(h)w(h)|). Therefore, ηh = 0, h ∈A(t−1) is optimal both to (| √ p(h)(w(h) + ηh)| + ¯ghηh) and to γ 2 η2 h. 
Set ηh = 0 for the latter, we have −γ 2 ∥¯η∥2 = min η { λ∥√p ◦(wt + η)∥1 −λ∥√p ◦wt∥1 + ⟨¯g, η⟩+ γ 2 ∫ h/∈A(t−1) η2 hdh } ≤ min η { ¯F(wt + η) −¯F(wt) + γ 2 ∫ h/∈A(t−1) η2 hdh } 5 from convexity of ¯F(w). Consider solution of the form η = α(wref −wt), we have −γ 2 ∥¯η∥2 ≤ min α∈[0,1] { ¯F ( wt + α(wref −wt) ) −¯F(wt) + γα2 2 ∫ h/∈A(t−1)(wref(h) −wt(h))2dh } ≤ min α∈[0,1] { ¯F(wt) + α ( ¯F(wref) −¯F(wt) ) −¯F(wt) + γα2 2 ∫ h/∈A(t−1) wref(h)2dh } ≤ min α∈[0,1] { −α ( ¯F(wt) −¯F(wref) ) + γα2 2 ∥wref∥2 } , where the second inequality results from wt(h) = 0, h /∈A(t−1). Minimizing last expression w.r.t. α, we have α∗= min ( ¯ F (wt)−¯ F (wref ) γ∥wref ∥2 , 1 ) and −γ 2 ∥¯η∥2 ≤ { − ( ¯F(wt) −¯F(wref) )2 /(2γ∥wref∥2) , if ¯F(wt) −¯F(wref) < γ∥wref∥2 −γ 2 ∥wref∥2 , o.w. . (20) Note, since the function value { ¯F(wt)}∞ t=1 is non-increasing, only iterations in the beginning fall in second case of (20), and the number of such iterations is at most c = ⌈2( ¯ F (0)−¯ F (wref )) γ∥wref ∥2 ⌉. For t > c, we have E[ ¯F(wt+1)] −¯F(wt) ≤−γ∥¯ηt∥2 2 2 ≤−( ¯F(wt) −¯F(wref))2 2γ∥wref∥2 . (21) The recursion then leads to the result. Note the above bound does not yield useful result if ∥wref∥2 →∞. Fortunately, the optimal solution of our target problem (7) has finite ∥w∗∥2 as long as in (7) λ > 0, so it always give a useful bound when plugged into (18), as following corollary shows. Corollary 1 (Approximation Guarantee). The output of Algorithm 1 satisfies E [ λ∥¯w(D)∥1 + Loss( ¯w(D); ϕ) ] ≤ { λ∥w∗∥2 + Loss(w∗; ¯ϕ) } + 2γ∥w∗∥2 2 D′ (22) with D′ = max{D −c, 0}, where w∗is the optimal solution of problem (7), c is a constant defined in Theorem 2. Then the following two corollaries extend the guarantee (22) to any R ≥1, and a bound holds with high probability. The latter is a direct result of [18,Theorem 1] applied to the recursion (21). Corollary 2. The bound (22) holds for any R ≥1 in Algorithm 1, where if there are T iterations then D = TR. Corollary 3. 
For D ≥ (2γ‖w*‖²/ε)(1 + log(1/ρ)) + 2 − 4/c + c, the output of Algorithm 1 has

λ‖w̄(D)‖₁ + Loss(w̄(D); φ) ≤ { λ‖w*‖₂ + Loss(w*; φ̄) } + ε   (23)

with probability 1 − ρ, where c is as defined in Theorem 2 and w* is the optimal solution of (7).

3.2 Relation to the Kernel Method
Our result (23) states that, for D large enough, the Sparse Random Features algorithm achieves either a loss comparable to that of the vanilla kernel method, or a model complexity (measured in the ℓ1-norm) less than that of the kernel method (measured in the ℓ2-norm). Furthermore, since w* is not the optimal solution of the ℓ1-regularized program (8), it is possible for the LHS of (23) to be much smaller than the RHS. On the other hand, since any w* of finite ℓ2-norm can serve as the reference solution w_ref, the λ used in solving the ℓ1-regularized problem (8) can differ from the λ used in the kernel method. The tightest bound is achieved by minimizing the RHS of (23), which is equivalent to minimizing (7) with some unknown λ̃(λ) due to the difference between ‖w‖₁ and ‖w‖₂². In practice, we can follow a regularization path to find a λ small enough to yield comparable predictive performance while keeping the model as compact as possible. Note that when using a different sampling distribution p(h) from the decomposition (3), our analysis provides a different bound (23) for Randomized Coordinate Descent in Hilbert space. This is in contrast to the analysis in the finite-dimensional case, where RCD with different sampling distributions converges to the same solution [18].

3.3 Relation to the Boosting Method
Boosting is a well-known approach to minimizing infinite-dimensional problems with ℓ1-regularization [8, 9], which in this setting performs greedy coordinate descent on (8). At each iteration t, the algorithm finds the coordinate h^(t) yielding the steepest descent in the loss term,

h^(t) = argmin_{h∈H} (1/N) Σ_{n=1}^N L′_n φ_h(x_n),   (24)

to add into a working set A_t, and then minimizes (8) with respect to A_t.
When the greedy step (24) can be solved exactly, Boosting converges quickly to the optimal solution of (8) [13, 14]. By contrast, randomized coordinate descent can only converge to a sub-optimal solution in finite time when there are infinitely many dimensions. In practice, however, only a very limited class of basis functions allows the greedy step (24) to be performed exactly. For most basis functions (weak learners), such as perceptrons and decision trees, the greedy step (24) can only be solved approximately. In such cases, Boosting might have no convergence guarantee, while the randomized approach is still guaranteed to find a solution comparable to that of the kernel method. In our experiments, we found that randomized coordinate descent performs considerably better than approximate Boosting with perceptron basis functions (weak learners), where, as adopted in the Boosting literature [19, 8], a convex surrogate loss is used to solve (24) approximately.

4 Experiments
In this section, we compare Sparse Random Features (Sparse-RF) to the existing Random Features algorithm (RF) and the kernel method (Kernel) on regression and classification problems, with kernels set to Gaussian RBF, Laplacian RBF [2], and the Perceptron kernel [7]¹. For the Gaussian and Laplacian RBF kernels, we use Fourier basis functions with the corresponding distribution p(h) derived in [2]; for the Perceptron kernel, we use perceptron basis functions with p(h) uniform over the unit sphere, as shown in [7]. For regression, we solve kernel ridge regression (1) and RF regression (6) in closed form as in [10] using Eigen, a standard C++ library for numerical linear algebra. For Sparse-RF, we solve the LASSO sub-problem (9) by a standard RCD algorithm. For classification, we use LIBSVM² as the solver for the kernel method, and use the Newton-CG method and the Coordinate Descent method in LIBLINEAR [12] to solve the RF approximation (6) and the Sparse-RF sub-problem (9), respectively.
We set λ_N = Nλ = 1 for the kernel and RF methods, and for Sparse-RF we choose λ_N ∈ {1, 10, 100, 1000} to give the RMSE (accuracy) closest to that of the RF method, in order to compare sparsity and efficiency. The results are shown in Tables 1 and 2, where the cost of the kernel method grows at least quadratically with the number of training samples. For YearPred, we use D = 5000 to maintain tractability of the RF method. Note that for the Covtype dataset, the ℓ2-norm ‖w*‖₂ from the kernel machine is significantly larger than for the other datasets, so according to (22), a larger number of random features D is required to obtain similar performance, as shown in Figure 1.

In Figure 1, we compare Sparse-RF (randomized coordinate descent) to Boosting (greedy coordinate descent) and to the bound (23) obtained from SVM with the Perceptron kernel and basis function (weak learner). The figure shows that Sparse-RF always converges to a solution comparable to that of the kernel method, while Boosting with approximate greedy steps (using a convex surrogate loss) converges to a higher objective value, due to the bias from the approximation.

Acknowledgement
S.-D. Lin acknowledges the support of Telecommunication Lab., Chunghwa Telecom Co., Ltd via TL-1038201, AOARD via No. FA2386-13-1-4045, Ministry of Science and Technology, National Taiwan University and Intel Co. via MOST102-2911-I-002-001, NTU103R7501, 102-2923-E-002-007-MY2, 102-2221-E-002170, 103-2221-E-002-104-MY2. P.R. acknowledges the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033. This research was also supported by NSF grants CCF-1320746 and CCF-1117055.

¹Data sets for classification can be downloaded from the LIBSVM data set web page, and data sets for regression can be found at the UCI Machine Learning Repository and Ali Rahimi's page for the paper [2].
²We follow the FAQ page of LIBSVM and replace the hinge loss by the square-hinge loss for comparison.

Table 1: Results for Kernel Ridge Regression.
Fields are model size (# of support vectors, # of random features, or # of non-zero weights, respectively), testing RMSE, training time (Ttr), testing prediction time (Tt), and memory usage during training. A '#' marks settings where the kernel method was not run.

CPU (Ntr=6554, Nt=819, d=21)
  Gaussian RBF  | Kernel SV=6554: RMSE 0.038, Ttr 154 s, Tt 2.59 s, Mem 1.36 G | RF D=10000: RMSE 0.037, Ttr 875 s, Tt 6 s, Mem 4.71 G | Sparse-RF NZ=57: RMSE 0.032, Ttr 22 s, Tt 0.04 s, Mem 0.069 G
  Laplacian RBF | Kernel SV=6554: RMSE 0.034, Ttr 157 s, Tt 3.13 s, Mem 1.35 G | RF D=10000: RMSE 0.035, Ttr 803 s, Tt 6.99 s, Mem 4.71 G | Sparse-RF NZ=289: RMSE 0.027, Ttr 43 s, Tt 0.18 s, Mem 0.095 G
  Perceptron    | Kernel SV=6554: RMSE 0.026, Ttr 151 s, Tt 2.48 s, Mem 1.36 G | RF D=10000: RMSE 0.038, Ttr 776 s, Tt 6.37 s, Mem 4.71 G | Sparse-RF NZ=251: RMSE 0.027, Ttr 27 s, Tt 0.13 s, Mem 0.090 G

Census (Ntr=18186, Nt=2273, d=119)
  Gaussian RBF  | Kernel SV=18186: RMSE 0.029, Ttr 2719 s, Tt 74 s, Mem 10 G | RF D=10000: RMSE 0.032, Ttr 1615 s, Tt 80 s, Mem 8.2 G | Sparse-RF NZ=1174: RMSE 0.030, Ttr 229 s, Tt 8.6 s, Mem 0.55 G
  Laplacian RBF | Kernel SV=18186: RMSE 0.146, Ttr 3268 s, Tt 68 s, Mem 10 G | RF D=10000: RMSE 0.168, Ttr 1633 s, Tt 88 s, Mem 8.2 G | Sparse-RF NZ=5269: RMSE 0.179, Ttr 225 s, Tt 38 s, Mem 1.7 G
  Perceptron    | Kernel SV=18186: RMSE 0.010, Ttr 2674 s, Tt 67.45 s, Mem 10 G | RF D=10000: RMSE 0.016, Ttr 1587 s, Tt 76 s, Mem 8.2 G | Sparse-RF NZ=976: RMSE 0.016, Ttr 185 s, Tt 6.7 s, Mem 0.49 G

YearPred (Ntr=463715, Nt=51630, d=90)
  Gaussian RBF  | Kernel: # | RF D=5000: RMSE 0.103, Ttr 7697 s, Tt 697 s, Mem 76.7 G | Sparse-RF NZ=1865: RMSE 0.104, Ttr 1618 s, Tt 97 s, Mem 45.6 G
  Laplacian RBF | Kernel: # | RF D=5000: RMSE 0.286, Ttr 9417 s, Tt 715 s, Mem 76.6 G | Sparse-RF NZ=3739: RMSE 0.273, Ttr 1453 s, Tt 209 s, Mem 54.3 G
  Perceptron    | Kernel: # | RF D=5000: RMSE 0.105, Ttr 8636 s, Tt 688 s, Mem 76.7 G | Sparse-RF NZ=896: RMSE 0.105, Ttr 680 s, Tt 51 s, Mem 38.1 G

Table 2: Results for Kernel Support Vector Machine. Fields are model size (# of support vectors, # of random features, or # of non-zero weights, respectively), testing accuracy, training time (Ttr), testing prediction time (Tt), and memory usage during training.

Cod-RNA (Ntr=59535, Nt=10000, d=8)
  Gaussian RBF  | Kernel SV=14762: Acc 0.966, Ttr 95 s, Tt 15 s, Mem 3.8 G | RF D=10000: Acc 0.964, Ttr 214 s, Tt 56 s, Mem 9.5 G | Sparse-RF NZ=180: Acc 0.964, Ttr 180 s, Tt 0.61 s, Mem 0.66 G
  Laplacian RBF | Kernel SV=13769: Acc 0.971, Ttr 89 s, Tt 15 s, Mem 3.6 G | RF D=10000: Acc 0.969, Ttr 290 s, Tt 46 s, Mem 9.6 G | Sparse-RF NZ=1195: Acc 0.970, Ttr 137 s, Tt 6.41 s, Mem 1.8 G
  Perceptron    | Kernel SV=15201: Acc 0.967, Ttr 57.34 s, Tt 7.01 s, Mem 3.6 G | RF D=10000: Acc 0.964, Ttr 197 s, Tt 71.9 s, Mem 9.6 G | Sparse-RF NZ=1148: Acc 0.963, Ttr 131 s, Tt 3.81 s, Mem 1.4 G

IJCNN (Ntr=127591, Nt=14100, d=22)
  Gaussian RBF  | Kernel SV=16888: Acc 0.991, Ttr 636 s, Tt 34 s, Mem 12 G | RF D=10000: Acc 0.989, Ttr 601 s, Tt 88 s, Mem 20 G | Sparse-RF NZ=1392: Acc 0.989, Ttr 292 s, Tt 11 s, Mem 7.5 G
  Laplacian RBF | Kernel SV=16761: Acc 0.995, Ttr 988 s, Tt 34 s, Mem 12 G | RF D=10000: Acc 0.992, Ttr 379 s, Tt 86 s, Mem 20 G | Sparse-RF NZ=2508: Acc 0.992, Ttr 566 s, Tt 25 s, Mem 9.9 G
  Perceptron    | Kernel SV=26563: Acc 0.991, Ttr 634 s, Tt 16 s, Mem 11 G | RF D=10000: Acc 0.987, Ttr 381 s, Tt 77 s, Mem 20 G | Sparse-RF NZ=1530: Acc 0.988, Ttr 490 s, Tt 11 s, Mem 7.8 G

Covtype (Ntr=464810, Nt=116202, d=54)
  Gaussian RBF  | Kernel SV=335606: Acc 0.849, Ttr 74891 s, Tt 3012 s, Mem 78.5 G | RF D=10000: Acc 0.829, Ttr 9909 s, Tt 735 s, Mem 74.7 G | Sparse-RF NZ=3421: Acc 0.836, Ttr 6273 s, Tt 132 s, Mem 28.1 G
  Laplacian RBF | Kernel SV=224373: Acc 0.954, Ttr 64172 s, Tt 2004 s, Mem 80.8 G | RF D=10000: Acc 0.888, Ttr 10170 s, Tt 635 s, Mem 74.6 G | Sparse-RF NZ=3141: Acc 0.869, Ttr 2788 s, Tt 175 s, Mem 56.5 G
  Perceptron    | Kernel SV=358174: Acc 0.905, Ttr 79010 s, Tt 1774 s, Mem 80.5 G | RF D=10000: Acc 0.835, Ttr 6969 s, Tt 664 s, Mem 74.7 G | Sparse-RF NZ=1401: Acc 0.836, Ttr 1706 s, Tt 70 s, Mem 44.4 G

Figure 1: The ℓ1-regularized objective (8) (top) and error rate (bottom) achieved by Sparse Random Features (randomized coordinate descent) and Boosting (greedy coordinate descent) using the perceptron basis function (weak learner), on Cod-RNA, IJCNN, and Covtype. The dashed line shows the ℓ2-norm plus loss achieved by the kernel method (RHS of (22)) and the corresponding error rate using the Perceptron kernel [7].

References [1] Mercer, J.
Functions of positive and negative type and their connection with the theory of integral equations. Royal Society London, A 209:415-446, 1909.
[2] Rahimi, A. and Recht, B. Random features for large-scale kernel machines. NIPS 20, 2007.
[3] Rahimi, A. and Recht, B. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. NIPS 21, 2008.
[4] Vedaldi, A. and Zisserman, A. Efficient additive kernels via explicit feature maps. In CVPR, 2010.
[5] Kar, P. and Karnick, H. Random feature maps for dot product kernels. In Proceedings of AISTATS'12, pages 583-591, 2012.
[6] Yang, T., Li, Y.-F., Mahdavi, M., Jin, R., and Zhou, Z.-H. Nystrom method vs. random Fourier features: A theoretical and empirical comparison. In Adv. NIPS, 2012.
[7] Lin, H.-T. and Li, L. Support Vector Machinery for Infinite Ensemble Learning. JMLR, 2008.
[8] Rosset, S., Zhu, J., and Hastie, T. Boosting as a regularized path to a maximum margin classifier. JMLR, 2004.
[9] Rosset, S., Swirszcz, G., Srebro, N., and Zhu, J. ℓ1-regularization in infinite dimensional feature spaces. In Learning Theory: 20th Annual Conference on Learning Theory, 2007.
[10] Le, Q., Sarlos, T., and Smola, A. J. Fastfood - approximating kernel expansions in loglinear time. In The 30th International Conference on Machine Learning, 2013.
[11] Chang, C.-C. and Lin, C.-J. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011.
[12] Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., and Lin, C.-J. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874, 2008.
[13] Ratsch, G., Mika, S., and Warmuth, M. K. On the convergence of leveraging. In NIPS, 2001.
[14] Telgarsky, M. The fast convergence of boosting. In NIPS, 2011.
[15] Kimeldorf, G. S. and Wahba, G. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Annals of Mathematical Statistics, 41:495-502, 1970.
[16] Scholkopf, B. and Smola, A. J. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[17] Steinwart, I. and Christmann, A. Support Vector Machines. Springer, 2008.
[18] Richtarik, P. and Takac, M. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Tech. Rep., School of Mathematics, University of Edinburgh, 2011.
[19] Chen, S.-T., Lin, H.-T., and Lu, C.-J. An online boosting algorithm with theoretical justifications. ICML, 2012.
[20] Taskar, B., Guestrin, C., and Koller, D. Max-margin Markov networks. NIPS 16, 2004.
[21] Song, G. et al. Reproducing kernel Banach spaces with the ℓ1 norm. Journal of Applied and Computational Harmonic Analysis, 2011.
Nonparametric Bayesian inference on multivariate exponential families William Vega-Brown, Marek Doniec, and Nicholas Roy Massachusetts Institute of Technology Cambridge, MA 02139 {wrvb, doniec, nickroy}@csail.mit.edu

Abstract

We develop a model by choosing the maximum entropy distribution from the set of models satisfying certain smoothness and independence criteria; we show that inference on this model generalizes local kernel estimation to the context of Bayesian inference on stochastic processes. Our model enables Bayesian inference in contexts where standard techniques like Gaussian process inference are too expensive to apply. Exact inference on our model is possible for any likelihood function from the exponential family. Inference is then highly efficient, requiring only O(log N) time and O(N) space at run time. We demonstrate our algorithm on several problems and show quantifiable improvement in both speed and performance relative to models based on the Gaussian process.

1 Introduction

Many learning problems can be formulated in terms of inference on predictive stochastic models. These models are distributions p(y|x) over possible observation values y drawn from some observation set Y, conditioned on a known input value x from an input set X. The supervised learning problem is then to infer a distribution p(y|x∗, D) over possible observations for some target input x∗, given a sequence of N independent observations D = {(x1, y1), . . . , (xN, yN)}. It is often convenient to associate latent parameters θ ∈ Θ with each input x, where p(y|θ) is a known likelihood function. By inferring a distribution over the target parameters θ∗ associated with x∗, we can infer a distribution over y:

p(y|x∗, D) = ∫_Θ p(y|θ∗) p(θ∗|x∗, D) dθ∗    (1)

For instance, regression problems can be formulated as the inference of an unknown but deterministic underlying function θ(x) given noisy observations, so that p(y|x) = N(y; θ(x), σ²), where σ² is a known noise variance.
If we can specify a joint prior over the parameters corresponding to different inputs, we can infer p(θ∗|x∗, D) using Bayes' rule:

p(θ∗|x∗, D) ∝ ∫_{Θ^N} [ ∏_{i=1}^N p(yi|θi) dθi ] p(θ1:N, θ∗|x∗, x1:N)    (2)

The notation x1:N indicates the sample inputs {x1, . . . , xN}; this model is depicted graphically in figure 1a. Although the choice of likelihood is often straightforward, specifying a prior can be more difficult. Ideally, we want a prior which is expressive, in the sense that it can accurately capture all prior knowledge, and which permits efficient and accurate inference. A powerful motivating example for specifying problems in terms of generative models is the Gaussian process [1], which specifies the prior p(θ1:N|x1:N) as a multivariate Gaussian with a covariance parameterized by x1:N.

Figure 1: (a) Stochastic process; (b) Inference model. Figure 1a models any stochastic process with fully connected latent parameters. Figure 1b is our approximate model, used for inference; we assume that the latent parameters are independent given the target parameters. Shaded nodes are observed.

This prior can express complex and subtle relationships between inputs and observations, and permits efficient exact inference for a Gaussian likelihood with known variance. Extensions exist to perform approximate inference with other likelihood functions [2, 3, 4, 5]. However, the assumptions of the Gaussian process are not the only set of reasonable assumptions, and are not always appropriate. Very large datasets require complex sparsification techniques to be computationally tractable [6]. Likelihood functions with many coupled parameters, such as the parameters of a categorical distribution or of the covariance matrix of a multivariate Gaussian, require the introduction of large numbers of latent variables which must be inferred approximately.
As an example, the generalized Wishart process developed by Wilson and Ghahramani [7] provides a distribution over covariance matrices using a sum of Gaussian processes. Inference of the posterior distribution over the covariance can only be performed approximately: no exact inference procedure is known. Historically, an alternative approach to estimation has been to use local kernel estimation techniques [8, 9, 10], which are often developed from a weighted parameter likelihood of the form p(θ|D) = ∏_i p(yi|θ)^{wi}. Algorithms for determining the maximum likelihood parameters of such a model are easy to implement and very fast in practice; various techniques, such as dual trees [11] or the improved fast Gauss transform [12], allow the computation of kernel estimates in logarithmic or constant time. This choice of model is often principally motivated by the computational convenience of the resulting algorithms. However, it is not clear how to perform Bayesian inference on such models. Most instantiations instead return a point estimate of the parameters. In this paper, we bridge the gap between local kernel estimators and Bayesian inference. Rather than perform approximate inference on an exact generative model, we formulate a simplified model for which we can efficiently perform exact inference. Our simplification is to choose the maximum entropy distribution from the set of models satisfying certain smoothness and independence criteria. We then show that for any likelihood function in the exponential family, our process model has a conjugate prior, which permits us to perform Bayesian inference in closed form. This motivates many of the local kernel estimators from a Bayesian perspective, and generalizes them to new problem domains. We demonstrate the usefulness of this model on multidimensional regression problems with coupled observations and input-dependent noise, a setting which is difficult to model using Gaussian process inference.
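As a concrete instance of the local kernel estimators referenced above, the Nadaraya-Watson estimator [8, 9] is simply a kernel-weighted average of the observed targets. A minimal Python sketch (our illustration; the bandwidth h is a hypothetical choice, not a value from the paper):

```python
import numpy as np

def nadaraya_watson(x_star, X, y, h):
    """Kernel-weighted average of the targets: the classic local estimator
    that this paper recovers as an uninformative-prior limit."""
    k = np.exp(-0.5 * ((X - x_star) / h) ** 2)  # squared-exponential weights
    return np.sum(k * y) / np.sum(k)

X = np.linspace(0.0, 1.0, 201)
y = X ** 2
est = nadaraya_watson(0.5, X, y, h=0.02)
print(est)  # close to 0.25, the true value of x^2 at x = 0.5
```

The paper's contribution is to recover this weighted average as the posterior mean of a full Bayesian model, rather than as a point estimate.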
2 The kernel process model

Given a likelihood function, a generative model can be specified by a prior p(θ1:N, θ∗|x∗, x1:N). For almost all combinations of prior and likelihood, inference is analytically intractable. We relax the requirement that the model be generative, and instead require only that the prior be well-formed for a given query x∗. To facilitate inference, we make the strong assumption that the latent training parameters θ1:N are conditionally independent given the target parameters θ∗:

p(θ1:N, θ∗|x1:N, x∗) = [ ∏_{i=1}^N p(θi|θ∗, xi, x∗) ] p(θ∗|x∗)    (3)

This restricted structure is depicted graphically in figure 1b. Essentially, we assume that interactions between latent parameters are unimportant relative to interactions between the latent and target parameters; this will allow us to build models based on exponential family likelihood functions which permit exact inference. Note that models with this structure will not correspond exactly to probabilistic generative models; the prior distribution over the latent parameters associated with the training inputs varies depending on the target input. Instead of approximating inference on our model, we make our approximation at the stage of model selection; in doing so, we enable fast, exact inference. Note that the class of models with the structure of equation (3) is quite rich, and as we demonstrate in section 3.2, performs well on many problems. We discuss in section 4 the ramifications of this assumption and when it is appropriate. This assumption is closely related to known techniques for sparsifying Gaussian process inference. Quiñonero-Candela and Rasmussen [6] provide a summary of many sparsification techniques, and describe which correspond to generative models. One of the most successful sparsification techniques, the fully independent training conditional approximation of Snelson [13], assumes all training examples are independent given a specified set of inducing inputs.
Our assumption extends this to the case of a single inducing input equal to the target input. Note that by marginalizing the parameters θ1:N, we can directly relate the observations y1:N to the target parameters θ∗. Combining equations (2) and (3),

p(θ∗|x∗, D) ∝ [ ∏_{i=1}^N ∫_Θ p(yi|θi) p(θi|θ∗, xi, x∗) dθi ] p(θ∗|x∗)    (4)

and marginalizing the latent parameters θ1:N, we observe that the posterior factors into a product over likelihoods p(yi|θ∗, xi, x∗) and a prior over θ∗:

= [ ∏_{i=1}^N p(yi|θ∗, xi, x∗) ] p(θ∗|x∗)    (5)

Note that we can equivalently specify either p(θ|θ∗, x, x∗) or p(y|θ∗, x, x∗), without loss of generality. In other words, we can equivalently describe the interaction between input variables either in the likelihood function or in the prior.

2.1 The extended likelihood function

By construction, we know the distribution p(yi|θi). After making the transformation to equation (5), much of the problem of model specification has shifted to specifying the distribution p(yi|θ∗, xi, x∗). We call this distribution the extended likelihood distribution. Intuitively, these distributions should be related; if x∗ = xi, then we expect θi = θ∗ and p(yi|θ∗, xi, x∗) = p(yi|θi). We therefore define the extended likelihood function in terms of the known likelihood p(yi|θi). Typically, we prefer smooth models: we expect similar inputs to lead to a similar distribution over outputs. In the absence of a smoothness constraint, any inference method can perform arbitrarily poorly [14]. However, the notion of smoothness is not well-defined in the context of probability distributions. Denote g(yi) = p(yi|θ∗, xi, x∗) and f(yi) = p(yi|θi). We can formalize a smooth model as one in which the information divergence of the likelihood distribution f from the extended likelihood distribution g is bounded by some function ρ : X × X → R+:

D_KL(g∥f) ≤ ρ(x∗, xi)    (6)

Since the divergence is a premetric, ρ(·, ·) must also satisfy the properties of a premetric: ρ(x, x) = 0 for all x, and ρ(x1, x2) ≥ 0 for all x1, x2.
For example, if X = Rⁿ, we may draw an analogy to Lipschitz continuity and choose ρ(x1, x2) = K∥x1 − x2∥, with K a positive constant. The class of models with bounded divergence has the property that g → f as x′ → x, and it does so smoothly provided ρ(·, ·) is smooth. Note that this bound is a constraint on the possible g, not an objective to be minimized; in particular, we do not minimize the divergence between g and f to develop an approximation, as is common in the approximate inference literature. Note also that this constraint has a straightforward information-theoretic interpretation; ρ(x1, x2) is a bound on the amount of information we would lose if we were to assume an observation y1 were taken at x2 instead of at x1. The assumptions of equations (3) and (6) define a class of models for a given likelihood function, but are insufficient for specifying a well-defined prior. We therefore use the principle of maximum entropy and choose the maximum entropy distribution from among that class. In our attached supporting material, we prove the following.

Theorem 1 The maximum entropy distribution g satisfying D_KL(g∥f) ≤ ρ(x∗, x) has the form

g(y) ∝ f(y)^{k(x∗,x)}    (7)

where k : X × X → [0, 1] is a kernel function which can be uniquely determined from ρ(·, ·) and f(·).

There is an equivalence relationship between the functions k(·, ·) and ρ(·, ·); as either is uniquely determined by the other, it may be more convenient to select a kernel function than a smoothness bound, and doing so implies no loss in generality or correctness. Note that it is neither necessary nor sufficient that the kernel function k(·, ·) be positive definite. It is necessary only that k(x, x) = 1 for all x and that k(x, x′) ∈ [0, 1] for all x, x′. This includes the possibility of asymmetric kernel functions. We discuss in the attached supporting material the mapping between valid kernel functions k(·, ·) and bounding functions ρ(·, ·).
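The full proof is in the supporting material, but a heuristic one-step variational argument (our gloss, not the authors' proof) shows where the power form of (7) comes from:

```latex
% Maximize entropy subject to the divergence bound (6), with Lagrange
% multiplier \lambda \ge 0 for the constraint and \mu for normalization:
\max_g \; -\!\int g \log g
\quad \text{s.t.} \quad \int g \log\tfrac{g}{f} \le \rho, \qquad \int g = 1.
% Pointwise stationarity of the Lagrangian in g(y) gives
-\log g - 1 - \lambda\big(\log g - \log f + 1\big) - \mu = 0
\;\Longrightarrow\; (1+\lambda)\log g = \lambda \log f + \text{const}
\;\Longrightarrow\; g \propto f^{\lambda/(1+\lambda)}.
% Thus g \propto f^{k} with k = \lambda/(1+\lambda) \in [0,1]; a tight bound
% (\rho \to 0) forces \lambda \to \infty and hence k \to 1, i.e. g = f.
```

The remaining work, establishing the one-to-one correspondence between k(·, ·) and ρ(·, ·), is what the theorem defers to the supporting material.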
It follows from equation (7) that the maximum entropy distribution satisfying a bound of ρ(x, x∗) on the divergence of the observation distribution p(y|θ∗, x, x∗) from the known distribution p(y|θ, x, x∗) = p(y|θ) is

p(y|θ∗, x, x∗) ∝ p(y|θ)^{k(x,x∗)}    (8)

By combining equations (5) and (6), we can fully specify a stochastic model with a likelihood p(y|θ), a pointwise marginal prior p(θ|x), and a kernel function k : X × X → [0, 1]. To perform inference, we must evaluate

p(θ|x, D) ∝ [ ∏_{i=1}^N p(yi|θ)^{k(x,xi)} ] p(θ|x)    (9)

This can be done in closed form if we can normalize the terms on the right side of the equality. In certain limiting cases with uninformative priors, our model reduces to known frequentist estimators. For instance, if we employ an uninformative prior p(θ|x) ∝ 1 and choose the maximum-likelihood target parameters θ̂∗ = arg max p(θ∗|x∗, D), we recover the weighted maximum-likelihood estimator, detailed by Wang [15]. If the function k(x, x′) is local, in the sense that it goes to zero if the distance ∥x − x′∥ is large, then choosing maximum likelihood parameter estimates for an uninformative prior gives the locally weighted maximum-likelihood estimator, described in the context of regression by Cleveland [16] and for generalized linear models by Tibshirani and Hastie [10]. However, our result is derived from a Bayesian interpretation of statistics, and we infer a full distribution over the parameters; we are not limited to a point estimate. The distinction is of both academic and practical interest; in addition to providing insight into the meaning of the weighting function and the validity of the inferred parameters, by inferring a posterior distribution we provide a principled way to reason about our knowledge and to insert prior knowledge of the underlying process.

2.2 Kernel inference on the exponential family

Equation (8) is particularly useful if we choose our likelihood model p(y|θ) from the exponential family.
p(y|θ) = h(y) exp( θ⊤T(y) − A(θ) )    (10)

A member of an exponential family remains in the same family when raised to the power k(x, xi). Because every exponential family has a conjugate prior, we may choose our point-wise prior p(θ∗|x∗) to be conjugate to our chosen likelihood. We denote this conjugate prior p_π(χ, ν), where χ and ν are hyperparameters:

p(θ|x∗) = p_π(χ(x∗), ν(x∗)) = f(χ(x∗), ν(x∗)) exp( θ · χ(x∗) − ν(x∗) A(θ) )    (11)

Therefore, our posterior as defined by equation (9) may be evaluated in closed form:

p(θ∗|x∗, D) = p_π( ∑_{i=1}^N k(x∗, xi) T(yi) + χ(x∗),  ∑_{i=1}^N k(x∗, xi) + ν(x∗) )    (12)

The prior predictive distribution p(y|x) is given by

p(y|x) = ∫ p(y|θ) p_π(θ|χ(x∗), ν(x∗)) dθ    (13)
       = h(y) f(χ(x∗), ν(x∗)) / f(χ(x∗) + T(y), ν(x∗) + 1)    (14)

and the posterior predictive distribution is

p(y|x∗, D) = h(y) f( ∑_{i=1}^N k(x∗, xi) T(yi) + χ(x∗), ∑_{i=1}^N k(x∗, xi) + ν(x∗) ) / f( ∑_{i=1}^N k(x∗, xi) T(yi) + χ(x∗) + T(y), ∑_{i=1}^N k(x∗, xi) + ν(x∗) + 1 )    (15)

This is a general formulation of the posterior distribution over the parameters of any likelihood model belonging to the exponential family. Note that given a function k(x∗, x), we may evaluate this posterior without sampling, in time linear in the number of samples. Moreover, for several choices of kernel the relevant sums can be evaluated in sub-linear time; a sum over squared-exponential kernels, for instance, can be evaluated in logarithmic time.

3 Local inference for multivariate Gaussian

We now discuss in detail the application of equation (12) to the case of a multivariate Gaussian likelihood model with unknown mean µ and unknown covariance Σ:

p(y|µ, Σ) = N(y; µ, Σ)    (16)

We present the conjugate prior, posterior, and predictive distributions without derivation; see [17], for example, for a derivation. The conjugate prior for a multivariate Gaussian with unknown mean and covariance is the normal-inverse Wishart distribution, with hyperparameter functions µ0(x∗), Ψ(x∗), ν(x∗), and λ(x∗).
p(µ, Σ|x∗) = N( µ; µ0(x∗), Σ/λ(x∗) ) × W⁻¹( Σ; Ψ(x∗), ν(x∗) )    (17)

The hyperparameter functions have intuitive interpretations: µ0(x∗) is our initial belief of the mean function, while λ(x∗) is our confidence in that belief, with λ(x∗) = 0 indicating no confidence in the region near x∗, and λ(x∗) → ∞ indicating a state of perfect knowledge. Likewise, Ψ(x∗) indicates the expected covariance, and ν(x∗) represents the confidence in that estimate, much like λ. Given a dataset D, we can compute a posterior over the mean and covariance, represented by updated parameters µ0′(x∗), Ψ′(x∗), λ′(x∗), and ν′(x∗):

λ′(x∗) = λ(x∗) + k(x∗)
ν′(x∗) = ν(x∗) + k(x∗)
µ0′(x∗) = ( λ(x∗) µ0(x∗) + k(x∗) ȳ(x∗) ) / ( λ(x∗) + k(x∗) )
Ψ′(x∗) = Ψ(x∗) + S(x∗) + ( λ(x∗) k(x∗) / ( λ(x∗) + k(x∗) ) ) E(x∗)    (18)

where

k(x∗) = ∑_{i=1}^N k(x∗, xi)
ȳ(x∗) = (1 / k(x∗)) ∑_{i=1}^N k(x∗, xi) yi
S(x∗) = ∑_{i=1}^N k(x∗, xi) (yi − ȳ(x∗))(yi − ȳ(x∗))⊤
E(x∗) = (ȳ(x∗) − µ0(x∗))(ȳ(x∗) − µ0(x∗))⊤    (19)

The resulting posterior predictive distribution is a multivariate Student-t distribution:

p(y|x∗) = t_{ν′(x∗)}( µ0′(x∗), ((λ′(x∗) + 1) / (λ′(x∗) ν′(x∗))) Ψ′(x∗) )    (20)

3.1 Special cases

Two special cases of the multivariate Gaussian are worth mentioning. First, a fixed, known covariance Σ(x∗) can be described by taking the limit ν → ∞ with Ψ(x∗) = ν Σ(x∗). The resulting posterior distribution is then

p(µ|x∗, D) = N( µ0′, Σ(x∗)/λ′(x∗) )    (21)

with predictive distribution

p(y|x∗, D) = N( µ0′, ((1 + λ′(x∗)) / λ′(x∗)) Σ(x∗) )    (22)

In the limit as λ goes to 0, when the prior is uninformative, the mean and mode of the predictive distribution approach the Nadaraya-Watson [8, 9] estimate:

µ_NW(x∗) = ∑_{i=1}^N k(x∗, xi) yi / ∑_{i=1}^N k(x∗, xi)    (23)

The complementary case of known mean µ(x∗) and unknown covariance Σ(x∗) is described by the limit λ → ∞. In this case, the posterior distribution is

p(Σ|x∗, D) = W⁻¹( Ψ(x∗) + ∑_{i=1}^N ki (yi − µ(x∗))(yi − µ(x∗))⊤,  ν(x∗) + ∑_{i=1}^N ki )    (24)

with predictive distribution

p(y|x∗) = t_{ν′(x∗)}( µ(x∗), (1/ν′(x∗)) [ Ψ(x∗) + ∑_{i=1}^N ki (yi − µ(x∗))(yi − µ(x∗))⊤ ] )
(25)

In the limit as ν goes to 0, the maximum likelihood covariance estimate is

Σ_ML(x∗) = ∑_{i=1}^N ki (yi − µ(x∗))(yi − µ(x∗))⊤    (26)

which is precisely the result of our prior work [18, 19]. In both cases, our method yields distributions over parameters, rather than point estimates; moreover, the use of Bayesian inference naturally handles the case of limited or no available samples.

3.2 Experimental results

We evaluate our approach on several regression problems, and compare the results with alternative nonparametric Bayesian models. In all experiments, we use the squared-exponential kernel k(x, x′) = exp( −(c/2) ∥x − x′∥² ). This function both meets the requirements of our algorithm and is positive-definite, and thus a suitable covariance function for models based on the Gaussian process. We set the kernel scale c by maximum likelihood for each model. We compare our approach to covariance prediction to the generalized Wishart process (GWP) of [7]. First, we sample a synthetic dataset; the output is a two-dimensional observation set Y = R², where samples are drawn from a zero-mean normal distribution with a covariance that rotates over time:

Σ(t) = [cos t, −sin t; sin t, cos t] [4, 0; 0, 10] [cos t, −sin t; sin t, cos t]⊤    (27)

Second, we predict the covariances of the returns on two currency exchanges (the Euro to US dollar, and the Japanese yen to US dollar) over the past four years. Following Wilson and Ghahramani, we define a return as log(P_{t+1}/P_t), where P_t is the exchange rate on day t. Illustrative results are provided in figure 2. To compare these results quantitatively, one natural measure is the mean of the logarithm of the likelihood of the predicted model given the data:

MLL = (1/N) ∑_{i=1}^N −(1/2) ( yi⊤ Σ̂i⁻¹ yi + log det Σ̂i )    (28)

Here, Σ̂i is the maximum likelihood covariance predicted for the i-th sample. In addition to how well our model describes the available data, we may also be interested in how accurately we recover the distribution used to generate the data.
This is a measure of how closely the inferred ellipses in figure 2 approximate the true covariance ellipses. One measure of the quality of the inferred distribution is the KL divergence of the inferred distribution from the true distribution used to generate the data. Note we cannot evaluate this quantity on the exchange dataset, as we do not know the true distribution. We present both the mean likelihood and the KL divergence of both algorithms, along with running times, in table 1. By both metrics, our algorithm outperforms the GWP by a significant margin; the running time advantage of kernel estimation over the GWP is even more dramatic.

Figure 2: Comparison of covariances predicted by our kernel inverse Wishart process and the generalized Wishart process for the problems described in section 3.2: (a) synthetic periodic data (α = 0.83 to 2.31); (b) exchange data (EUR/USD vs. JPY/USD, 2010/10/29 to 2014/1/10). The true covariance used to generate data is provided for comparison. The samples used are plotted so that the area of each circle is proportional to the weight assigned by the kernel. The kernel inverse Wishart process outperforms the generalized Wishart process, both in terms of the likelihood of the training data and in terms of the divergence of the inferred distribution from the true distribution.
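The synthetic covariance experiment is easy to reproduce in miniature. The sketch below (our illustration; the bandwidth and prior hyperparameters are hypothetical choices) draws data from the rotating covariance (27) and forms the known-mean kernel inverse-Wishart point prediction of (24):

```python
import numpy as np

def sigma_true(t):
    """Rotating covariance of (27)."""
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return R @ np.diag([4.0, 10.0]) @ R.T

def kniw_covariance(t_star, ts, Y, Psi0, nu0, h):
    """Known-mean (mu = 0) kernel inverse-Wishart prediction, cf. (24):
    prior scale plus kernel-weighted scatter, normalized by the updated
    degrees of freedom nu' = nu0 + sum_i k_i."""
    k = np.exp(-0.5 * ((ts - t_star) / h) ** 2)
    S = (k[:, None, None] * (Y[:, :, None] * Y[:, None, :])).sum(axis=0)
    return (Psi0 + S) / (nu0 + k.sum())

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 2.0 * np.pi, 2000)
Y = np.array([rng.multivariate_normal(np.zeros(2), sigma_true(t)) for t in ts])

Sig_hat = kniw_covariance(np.pi / 4, ts, Y, Psi0=np.eye(2), nu0=1.0, h=0.3)
print(np.trace(Sig_hat))  # close to the true, rotation-invariant trace 4 + 10 = 14
```

The estimate is a smooth, sampling-free function of the data, which is the source of the large running-time advantage over the GWP reported in table 1.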
It is important to note that running times are difficult to compare, as they depend heavily on implementation and hardware details; the numbers reported should be considered qualitatively. Both algorithms were implemented in the MATLAB programming language, with the likelihood functions for the GWP implemented in heavily optimized C code in an effort to ensure a fair competition. Despite this, the GWP took over a thousand times longer than our method to generate predictions.

                   ttr (s)   tev (ms)   MLL      DKL(p̂∥p)
  Periodic  kNIW   0.022     0.003      -10.43   0.0138
            GWP    7.08      0.135      -19.79   0.0248
  Exchange  kNIW   0.520     0.020      7.73     —
            GWP    15.7      1.708      7.56     —

Table 1: Comparison of the performance of two models of covariance prediction, based on the time required to make predictions at evaluation, the mean log likelihood, and the KL divergence between the predicted covariance and the ground truth covariance.

We next evaluate our approach on heteroscedastic regression problems. First, we generate 100 samples from the distribution described by Yuan and Wahba [20], which has mean µ(x∗) = 2 exp(−30(x∗ − 0.25)²) + sin(π(x∗)²) and variance σ²(x∗) = exp(2 sin(2πx∗)). Second, we test on the motorcycle dataset of Silverman et al. [21]. We compare our approach to a variety of Gaussian process based regression algorithms, including a standard homoscedastic Gaussian process, the variational heteroscedastic Gaussian process of Lázaro-Gredilla and Titsias [4], and the maximum likelihood heteroscedastic Gaussian process of Quadrianto et al. [22]. All algorithms are implemented in MATLAB, using the authors' own code. Running times are presented with the same caveat as in the previous experiments, and a similar conclusion holds: our method provides results which are as good or better than methods based upon the Gaussian process, and does so in a fraction of the time.
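The Yuan-Wahba benchmark above is simple to generate. The sketch below (our illustration; it uses more samples than the paper's 100 so the check is stable, and the bandwidth h is a hypothetical choice) produces the data and computes the kernel-weighted local mean and variance, i.e. the uninformative-prior limit of the update (18)-(19):

```python
import numpy as np

rng = np.random.default_rng(1)

# Heteroscedastic benchmark of Yuan and Wahba [20]
def mean_fn(x):
    return 2.0 * np.exp(-30.0 * (x - 0.25) ** 2) + np.sin(np.pi * x ** 2)

def var_fn(x):
    return np.exp(2.0 * np.sin(2.0 * np.pi * x))

n = 2000
X = rng.uniform(0.0, 1.0, n)
y = mean_fn(X) + rng.normal(size=n) * np.sqrt(var_fn(X))

def local_moments(x_star, X, y, h=0.05):
    """Kernel-weighted mean and variance at a target input (lambda -> 0 limit)."""
    k = np.exp(-0.5 * ((X - x_star) / h) ** 2)
    m = np.sum(k * y) / np.sum(k)
    v = np.sum(k * (y - m) ** 2) / np.sum(k)
    return m, v

m, v = local_moments(0.25, X, y)
print(m, v)  # near the true mean ~2.20 and true variance e^2 ~ 7.39 at x = 0.25
```

The input-dependent variance estimate is exactly what homoscedastic Gaussian process regression cannot provide without the extensions of [2, 3, 4].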
Figure 3 illustrates the predictions made by our method on the heteroscedastic motorcycle dataset of Silverman. For reference, we provide the distribution generated by the variational heteroscedastic Gaussian process.

Figure 3: Comparison of the distributions inferred using (a) the kernel normal inverse Wishart process (kNIW) and (b) the variational heteroscedastic Gaussian process (VHGP) to model Silverman's motorcycle dataset. Both models capture the time-varying nature of the measurement noise; as is typical, the kernel model is much less smooth and has more local structure than the Gaussian process model. Both models perform well according to most metrics, but the kernel model can be computed in a fraction of the time.

                     ttr (s)   tev (ms)   NMSE     MLL
  Motorcycle  kNIW   0.124     2.95       0.2      -4.04
              GP     0.52      3.52       0.202    -4.51
              VHGP   3.12      7.53       0.202    -4.07
              MLHGP  2.39      5.83       0.204    -4.03
  Periodic    kNIW   0.68      7.94       0.0708   -2.07
              GP     3.41      22         0.0822   -2.56
              VHGP   26.4      54.4       0.0827   -1.85
              MLHGP  38.3      29.1       0.0827   -2.38

Table 2: Comparison of the performance of various models of heteroscedastic processes, based on the time required to train, the time required to make predictions at evaluation, the normalized mean squared error, and the mean log likelihood. Note how the normal-inverse Wishart process obtains performance as good or better than the other algorithms in a fraction of the time.

4 Discussion

We have presented a family of stochastic models which permit exact inference for any likelihood function from the exponential family. Algorithms for performing inference on this model include many local kernel estimators, and extend them to probabilistic contexts. We showed the instantiation of our model for a multivariate Gaussian likelihood; due to lack of space, we do not present others, but the approach is easily extended to tasks like classification and counting.
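The claim that the approach extends to classification can be sketched with the simplest conjugate pair: a Bernoulli likelihood with a Beta prior, where the general update (12) just accumulates kernel-weighted counts (an illustrative sketch with hypothetical hyperparameters and data, not code from the paper):

```python
import numpy as np

def kernel_beta_posterior(x_star, X, y, alpha0, beta0, h):
    """Kernel-weighted Beta posterior for a Bernoulli likelihood, following
    the exponential-family update (12): each label contributes its sufficient
    statistic with weight k(x*, x_i)."""
    k = np.exp(-0.5 * ((X - x_star) / h) ** 2)   # squared-exponential kernel
    alpha = alpha0 + np.sum(k * y)               # weighted positive labels
    beta = beta0 + np.sum(k * (1.0 - y))         # weighted negative labels
    return alpha, beta

X = np.array([0.0, 0.1, 0.9, 1.0])
y = np.array([1.0, 1.0, 0.0, 0.0])
a, b = kernel_beta_posterior(0.05, X, y, alpha0=1.0, beta0=1.0, h=0.2)
print(a / (a + b))  # posterior mean success probability near x* = 0.05
```

A Poisson likelihood with a Gamma prior would give the counting analogue in the same few lines, which is the sense in which exact inference carries over to these tasks.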
The models we develop are built on a strong assumption of independence; this assumption is critical to enabling efficient exact inference. We now explore the costs of this assumption, and when it is inappropriate. First, while the kernel function in our model does not need to be positive definite (or even symmetric), we lose an important degree of flexibility relative to the covariance functions employed in a Gaussian process. Covariance functions can express a number of complex concepts, such as a prior over functions with a specified additive or hierarchical structure [23]; these concepts cannot be easily formulated in terms of smoothness. Second, by neglecting the relationships between latent parameters, we lose the ability to extrapolate trends in the data, meaning that in places where data is sparse we cannot expect good performance. Thus, for a problem like time series forecasting, our approach will likely be unsuccessful. Our approach is suitable in situations where we are likely to see similar inputs many times, which is often the case. Moreover, regardless of the family of models used, extrapolation to regions of sparse data can perform very poorly if the prior does not model the true process well. Our approach is particularly effective when data is readily available but computation is expensive; the gains in efficiency due to the independence assumption allow us to scale to much larger datasets, improving predictive performance with less design effort.

Acknowledgements

This research was funded by the Office of Naval Research under contracts N00014-09-1-1052 and N00014-10-1-0936. The support of Behzad Kamgar-Parsi and Tom McKenna is gratefully acknowledged.

References

[1] C. E. Rasmussen and C. Williams, Gaussian processes for machine learning. Cambridge, MA: MIT Press, Apr. 2006, vol. 14, no. 2. [2] Q. Le, A. Smola, and S. Canu, “Heteroscedastic Gaussian process regression,” in Proc. ICML, 2005, pp. 489–496. [3] K. Kersting, C. Plagemann, P.
Pfaff, and W. Burgard, "Most-Likely Heteroscedastic Gaussian Process Regression," in Proc. ICML, Corvallis, OR, USA, June 2007, pp. 393–400.
[4] M. Lázaro-Gredilla and M. Titsias, "Variational heteroscedastic Gaussian process regression," in Proc. ICML, 2011.
[5] L. Shang and A. B. Chan, "On approximate inference for generalized Gaussian process models," arXiv preprint arXiv:1311.6371, 2013.
[6] J. Quiñonero-Candela and C. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," The Journal of Machine Learning Research, vol. 6, pp. 1939–1959, 2005.
[7] A. Wilson and Z. Ghahramani, "Generalised Wishart processes," in Proc. UAI, 2011, pp. 736–744.
[8] E. Nadaraya, "On estimating regression," Theory of Probability & Its Applications, vol. 9, no. 1, pp. 141–143, 1964.
[9] G. Watson, "Smooth regression analysis," Sankhyā: The Indian Journal of Statistics, Series A, vol. 26, no. 4, pp. 359–372, 1964.
[10] R. Tibshirani and T. Hastie, "Local likelihood estimation," Journal of the American Statistical Association, vol. 82, no. 398, pp. 559–567, 1987.
[11] A. G. Gray and A. W. Moore, "'N-body' problems in statistical learning," in NIPS, 2000, pp. 521–527.
[12] C. Yang, R. Duraiswami, N. A. Gumerov, and L. Davis, "Improved fast Gauss transform and efficient kernel density estimation," in Proc. ICCV, 2003, pp. 664–671.
[13] E. Snelson, "Flexible and efficient Gaussian process models for machine learning," PhD thesis, University of London, 2007.
[14] L. Györfi, M. Kohler, A. Krzyżak, and H. Walk, A Distribution-Free Theory of Nonparametric Regression. New York, NY: Springer, 2002.
[15] S. Wang, "Maximum weighted likelihood estimation," PhD thesis, University of British Columbia, 2001.
[16] W. S. Cleveland, "Robust locally weighted regression and smoothing scatterplots," Journal of the American Statistical Association, vol. 74, no. 368, pp. 829–836, 1979.
[17] K. Murphy, "Conjugate Bayesian analysis of the Gaussian distribution," 2007.
[18] W. Vega-Brown, "Predictive Parameter Estimation for Bayesian Filtering," SM thesis, Massachusetts Institute of Technology, 2013.
[19] W. Vega-Brown and N. Roy, "CELLO-EM: Adaptive Sensor Models without Ground Truth," in Proc. IROS, Tokyo, Japan, 2013.
[20] M. Yuan and G. Wahba, "Doubly penalized likelihood estimator in heteroscedastic regression," Statistics & Probability Letters, vol. 69, no. 1, pp. 11–20, 2004.
[21] B. W. Silverman et al., "Some aspects of the spline smoothing approach to non-parametric regression curve fitting," Journal of the Royal Statistical Society, Series B, vol. 47, no. 1, pp. 1–52, 1985.
[22] N. Quadrianto, K. Kersting, M. Reid, T. Caetano, and W. Buntine, "Most-Likely Heteroscedastic Gaussian Process Regression," in Proc. ICDM, Miami, FL, USA, December 2009.
[23] D. Duvenaud, H. Nickisch, and C. E. Rasmussen, "Additive Gaussian processes," in Advances in Neural Information Processing Systems 24, Granada, Spain, 2011, pp. 226–234.
| 2014 | 111 | 5,194 |
On Communication Cost of Distributed Statistical Estimation and Dimensionality
Ankit Garg, Department of Computer Science, Princeton University, garg@cs.princeton.edu
Tengyu Ma, Department of Computer Science, Princeton University, tengyu@cs.princeton.edu
Huy L. Nguyễn, Simons Institute, UC Berkeley, hlnguyen@cs.princeton.edu

Abstract
We explore the connection between dimensionality and communication cost in distributed learning problems. Specifically, we study the problem of estimating the mean ~θ of an unknown d-dimensional Gaussian distribution in the distributed setting. In this problem, the samples from the unknown distribution are distributed among m different machines. The goal is to estimate the mean ~θ at the optimal minimax rate while communicating as few bits as possible. We show that in this setting, the communication cost scales linearly in the number of dimensions, i.e., one needs to deal with different dimensions individually. Applying this result to previous lower bounds for one dimension in the interactive setting [1] and to our improved bounds for the simultaneous setting, we prove new lower bounds of Ω(md/log(m)) and Ω(md) for the bits of communication needed to achieve the minimax squared loss, in the interactive and simultaneous settings respectively. To complement, we also demonstrate an interactive protocol achieving the minimax squared loss with O(md) bits of communication, which improves upon the simple simultaneous protocol by a logarithmic factor. Given the strong lower bounds in the general setting, we initiate the study of distributed parameter estimation problems with structured parameters. Specifically, when the parameter is promised to be s-sparse, we show a simple thresholding-based protocol that achieves the same squared loss while saving a d/s factor of communication. We conjecture that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to logarithmic factors.
1 Introduction
The last decade has witnessed a tremendous growth in the amount of data involved in machine learning tasks. In many cases, data volume has outgrown the capacity of memory of a single machine and it is increasingly common that learning tasks are performed in a distributed fashion on many machines. Communication has emerged as an important resource and sometimes the bottleneck of the whole system. A lot of recent work is devoted to understanding how to solve problems in a distributed fashion with efficient communication [2, 3, 4, 1, 5]. In this paper, we study the relation between the dimensionality and the communication cost of statistical estimation problems. Most modern statistical problems are characterized by high dimensionality. Thus, it is natural to ask the following meta question: How does the communication cost scale in the dimensionality? We study this question via the problems of estimating parameters of distributions in the distributed setting. For these problems, we answer the question above by providing two complementary results: 1. Lower bound for the general case: If the distribution is a product distribution over the coordinates, then one essentially needs to estimate each dimension of the parameter individually and the information cost (a proxy for communication cost) scales linearly in the number of dimensions. 2. Upper bound for the sparse case: If the true parameter is promised to have low sparsity, then a very simple thresholding estimator gives a better tradeoff between communication cost and mean-square loss. Before getting into the ideas behind these results, we first define the problem more formally. We consider the case when there are m machines, each of which receives n i.i.d. samples from an unknown distribution P (from a family P) over the d-dimensional Euclidean space R^d. These machines need to estimate a parameter θ of the distribution via communicating with each other.
Each machine can do arbitrary computation on its samples and the messages it receives from other machines. We regard communication (the number of bits communicated) as a resource, and therefore we not only want to optimize over the estimation error of the parameters but also the tradeoff between the estimation error and the communication cost of the whole procedure. For simplicity, here we are typically interested in achieving the minimax error (footnote 1) while communicating as few bits as possible. Our main focus is the high dimensional setting where d is very large.

Communication Lower Bound via Direct-Sum Theorem: The key idea for the lower bound is that, when the unknown distribution P = P1 × ··· × Pd is a product distribution over R^d and each coordinate of the parameter θ only depends on the corresponding component of P, we can view the d-dimensional problem as d independent copies of the one dimensional problem. We show that one unfortunately cannot do anything beyond this trivial decomposition, that is, treating each dimension independently and solving d different estimation problems individually. In other words, the communication cost (footnote 2) must be at least d times the cost for the one dimensional problem. We call this theorem the "direct-sum" theorem. To demonstrate our theorem, we focus on the specific case where P is a d dimensional spherical Gaussian distribution with an unknown mean and covariance σ²I_d (footnote 3). The problem is to estimate the mean of P. The work [1] showed a lower bound on the communication cost for this problem when d = 1. Our technique, when applied to their theorem, immediately yields a lower bound equal to d times the lower bound for the one dimension problem for any choice of d. Note that [5] independently achieves the same bound by refining the proof in [1].
In the simultaneous communication setting, where all machines send one message to one machine and this machine needs to figure out the estimation, the work [1] showed that Ω(md/log m) bits of communication are needed to achieve the minimax squared loss. In this paper, we improve this bound to Ω(md), by providing an improved lower bound for the one-dimensional setting and then applying our direct-sum theorem. The direct-sum theorem that we prove heavily uses ideas and tools from recent developments in communication complexity and information complexity. There has been a lot of work on the paradigm of studying communication complexity via the notion of information complexity [6, 7, 8, 9, 10]. Information complexity can be thought of as a proxy for communication complexity that is especially accurate for solving multiple copies of the same problem simultaneously [8]. Proving so-called "direct-sum" results has become a standard tool, namely the fact that the amount of resources required for solving d copies of a problem (with different inputs) in parallel is equal to d times the amount required for one copy. In other words, there is no saving from solving many copies of the same problem in batch, and the trivial solution of solving each of them separately is optimal. Note that this generic statement is certainly NOT true for arbitrary types of tasks and arbitrary types of resources. Actually, even for distributed computing tasks, if the measure of resources is the communication cost instead of the information cost, there exist examples where solving d copies of a certain problem requires less communication than d times the communication required for one copy [11].

Footnote 1: By minimax error we mean the minimum possible error that can be achieved when there is no limit on the communication.
Footnote 2: Technically, information cost, as discussed below.
Footnote 3: Here I_d denotes the d × d identity matrix.
Therefore, a direct-sum theorem, if true, could indeed capture the features and difficulties of the problems. Our result can be viewed as a direct-sum theorem for communication complexity for statistical estimation problems: the amount of communication needed for solving an estimation problem in d dimensions is at least d times the amount of information needed for the same problem in one dimension. The proof technique is directly inspired by the notion of conditional information complexity [7], which was used to prove direct-sum theorems and lower bounds for streaming algorithms. We believe this is a fruitful connection and can lead to more lower bounds in statistical machine learning. To complement the above lower bounds, we also show an interactive protocol that uses a log factor less communication than the simple protocol, under which each machine sends its sample mean and the center takes the average as the estimate. Our protocol demonstrates the additional power of interactive communication and the potential complexity of proving lower bounds for interactive protocols.

Thresholding Algorithm for Sparse Parameter Estimation: In light of the strong lower bounds in the general case, a question suggests itself as a way to get around the impossibility results: Can we do better when the data (parameters) have more structure? We study this question by considering a sparsity structure on the parameter θ. Specifically, we consider the case when the underlying parameter θ is promised to be s-sparse. We provide a simple protocol that achieves the same squared loss O(dσ²/(mn)) as in the general case while using Õ(sm) communication, or achieves the optimal squared loss O(sσ²/(mn)) with communication Õ(dm), or any tradeoff between these cases. We conjecture that this is the best tradeoff up to polylogarithmic factors.
2 Problem Setup, Notations and Preliminaries
Classical Statistical Parameter Estimation: We start by reviewing the classical framework of statistical parameter estimation problems. Let P be a family of distributions over X. Let θ : P → Θ ⊂ R denote a function defined on P. We are given samples X1, ..., Xn from some P ∈ P, and are asked to estimate θ(P). Let θ̂ : X^n → Θ be such an estimator, and θ̂(X1, ..., Xn) the corresponding estimate. Define the squared loss R of the estimator to be

R(θ̂, θ) = E_{θ̂,X}[ ‖θ̂(X1, ..., Xn) − θ(P)‖₂² ]

In the high-dimensional case, let P^d := {~P = P1 × ··· × Pd : Pi ∈ P} be the family of product distributions over X^d. Let ~θ : P^d → Θ^d ⊂ R^d be the d-dimensional function obtained by applying θ point-wise: ~θ(P1 × ··· × Pd) = (θ(P1), ..., θ(Pd)). Throughout this paper, we consider the case where X = R and P = {N(θ, σ²) : θ ∈ [−1, 1]} is the family of Gaussian distributions for some fixed and known σ. Therefore, in the high-dimensional case, P^d = {N(~θ, σ²I_d) : ~θ ∈ [−1, 1]^d} is a collection of spherical Gaussian distributions. We use ˆ~θ to denote the d-dimensional estimator. For clarity, in this paper we always use ~· to indicate a vector in high dimensions.

Distributed Protocols and Parameter Estimation: In this paper, we are interested in the situation where there are m machines and the j-th machine receives n samples ~X^(j,1), ..., ~X^(j,n) ∈ R^d from the distribution ~P = N(~θ, σ²I_d). The machines communicate via a publicly shown blackboard. That is, when a machine writes a message on the blackboard, all other machines can see the content of the message. Following [1], we usually refer to the blackboard as the fusion center or simply center. Note that this model captures both point-to-point communication as well as broadcast communication. Therefore, our lower bounds in this model apply to both the message passing setting and the broadcast setting.
We will say that a protocol is simultaneous if each machine broadcasts a single message based on its input, independently of the other machines ([1] calls such protocols independent). We denote the collection of all the messages written on the blackboard by Y. We will refer to Y as the transcript and note that Y ∈ {0, 1}* is written in bits; the communication cost is defined as the length of Y, denoted by |Y|. In the multi-machine setting, the estimator ˆ~θ only sees the transcript Y, and it maps Y to ˆ~θ(Y) (footnote 4), which is the estimate of ~θ. Let the letter j be reserved for the index of the machine, k for the sample, and i for the dimension. In other words, ~X^(j,k)_i is the i-th coordinate of the k-th sample of machine j. We will use ~X_i as a shorthand for the collection of the i-th coordinates of all the samples: ~X_i = { ~X^(j,k)_i : j ∈ [m], k ∈ [n] }. Also note that [n] is a shorthand for {1, ..., n}. The mean-squared loss of the protocol Π with estimator ˆ~θ is defined as

R((Π, ˆ~θ), ~θ) = sup_~θ E_{~X,Π}[ ‖ˆ~θ(Y) − ~θ‖² ]

and the communication cost of Π is defined as

CC(Π) = sup_~θ E_{~X,Π}[ |Y| ]

The main goal of this paper is to study the tradeoff between R((Π, ˆ~θ), ~θ) and CC(Π).

Proving Minimax Lower Bounds: We follow the standard way to prove minimax lower bounds. We introduce a (product) distribution V_d of ~θ over [−1, 1]^d. Let us define the mean-squared loss with respect to the distribution V_d as

R_{V_d}((Π, ˆ~θ), ~θ) = E_{~θ∼V_d}[ E_{~X,Π}[ ‖ˆ~θ(Y) − ~θ‖² ] ]

It is easy to see that R_{V_d}((Π, ˆ~θ), ~θ) ≤ R((Π, ˆ~θ), ~θ) for any distribution V_d. Therefore, to prove a lower bound on the minimax rate, it suffices to prove a lower bound on the mean-squared loss under any distribution V_d (footnote 5).

Private/Public Randomness: We allow the protocol to use both private and public randomness. Private randomness, denoted by R_priv, refers to the random bits that each machine draws by itself.
Public randomness, denoted by R_pub, is a sequence of random bits shared among all parties before the protocol starts, without being counted toward the total communication. Certainly, allowing these two types of randomness only makes our lower bound stronger, and public randomness is actually only introduced for convenience. Furthermore, as we will see in the proof of Theorem 3.1, the benefit of allowing private randomness is that we can hide information using private randomness when doing the reduction from a one dimensional protocol to a d-dimensional one. The downside is that we require a stronger theorem (one that tolerates private randomness) for the one dimensional lower bound, which is not a problem in our case since the technique in [1] is general enough to handle private randomness.

Information cost: We define the information cost IC(Π) of a protocol Π as the mutual information between the data and the messages communicated, conditioned on the mean ~θ (footnote 6):

IC_{V_d}(Π) = I(~X; Y | ~θ, R_pub)

Private randomness doesn't explicitly appear in the definition of information cost, but it affects it. Note that the information cost is a lower bound on the communication cost:

IC_{V_d}(Π) = I(~X; Y | ~θ, R_pub) ≤ H(Y) ≤ CC(Π)

The first inequality uses the fact that I(U; V | W) ≤ H(V | W) ≤ H(V) holds for any random variables U, V, W, and the second inequality uses Shannon's source coding theorem [13]. We will drop the subscript for the prior V_d of ~θ when it is clear from the context.

Footnote 4: Therefore here ˆ~θ maps {0, 1}* to Θ.
Footnote 5: The standard minimax theorem says that actually sup_{V_d} R_{V_d}((Π, ˆ~θ), ~θ) = R((Π, ˆ~θ), ~θ) under a certain compactness condition on the space of ~θ.
Footnote 6: Note that here we have introduced a distribution for the choice of ~θ, and therefore ~θ is a random variable.
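As a sanity check on these definitions, the simple simultaneous protocol in which every machine sends its d-dimensional sample mean (our own illustration, with quantization ignored, so communication is measured in real numbers rather than bits) achieves squared loss on the order of dσ²/(mn). A quick NumPy simulation:

```python
import numpy as np

def simple_protocol_loss(d, m, n, sigma, rng):
    """One run of the naive simultaneous protocol: each of the m machines
    averages its n samples and sends the result; the center averages the
    m sample means. Returns the squared loss ||theta_hat - theta||^2."""
    theta = rng.uniform(-1, 1, size=d)
    samples = rng.normal(theta, sigma, size=(m, n, d))
    theta_hat = samples.mean(axis=(0, 1))  # average of the machines' sample means
    return np.sum((theta_hat - theta) ** 2)

rng = np.random.default_rng(0)
d, m, n, sigma = 8, 50, 20, 1.0
avg_loss = np.mean([simple_protocol_loss(d, m, n, sigma, rng) for _ in range(200)])
# avg_loss concentrates around d * sigma**2 / (m * n)
```

Since the center effectively sees mn i.i.d. samples per coordinate, each coordinate contributes σ²/(mn) to the loss, giving dσ²/(mn) in total, which is the centralized minimax rate discussed in this paper.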
3 Main Results
3.1 High Dimensional Lower Bound via Direct Sum
Our main theorem roughly states that if one can solve the d-dimensional problem, then one must be able to solve the one dimensional problem with information cost and squared loss reduced by a factor of d. Therefore, a lower bound for the one dimensional problem implies a lower bound for the high dimensional problem, with information cost and squared loss scaled up by a factor of d. We first define our task formally, and then state the theorem that relates the d-dimensional task to the one-dimensional task.

Definition 1. We say a protocol and estimator pair (Π, ˆ~θ) solves the task T(d, m, n, σ², V_d) with information cost C and mean-squared loss R if, for ~θ randomly chosen from V_d, m machines, each of which takes n samples from N(~θ, σ²I_d) as input, can run the protocol Π and get a transcript Y so that the following hold:

R_{V_d}((Π, ˆ~θ), ~θ) = R   (1)
I_{V_d}(~X; Y | ~θ, R_pub) = C   (2)

Theorem 3.1. [Direct-Sum] If (Π, ˆ~θ) solves the task T(d, m, n, σ², V_d) with information cost C and squared loss R, then there exists (Π′, θ̂) that solves the task T(1, m, n, σ², V) with information cost at most 4C/d and squared loss at most 4R/d. Furthermore, if the protocol Π is simultaneous, then the protocol Π′ is also simultaneous.

Remark 1. Note that this theorem doesn't directly prove that the communication cost scales linearly with the dimension, only the information cost. However, for many natural problems, communication cost and information cost are similar in one dimension (e.g., for Gaussian mean estimation), and then this direct-sum theorem can be applied. In this sense it is a very generic tool and is widely used in the communication complexity and streaming algorithms literature.

Corollary 3.1. Suppose (Π, ˆ~θ) estimates the mean of N(~θ, σ²I_d), for all ~θ ∈ [−1, 1]^d, with mean-squared loss R and communication cost B.
Then

R ≥ Ω( min{ d²σ²/(nB log m), dσ²/(n log m), d } )

As a corollary, when σ² ≤ mn, to achieve the mean-squared loss R = dσ²/(mn), the communication cost B is at least Ω(dm/log m). This lower bound is tight up to polylogarithmic factors. In most of the cases, roughly B/m machines sending their sample means to the fusion center, with ˆ~θ simply outputting the mean of the sample means with O(log m) bits of precision, will match the lower bound up to a multiplicative log² m factor (footnote 7).

Footnote 7: When σ is very large and θ is known to be in [−1, 1], ˆ~θ = 0 is a better estimator; that is essentially why the lower bound has not only the first term we desired but also the other two.

3.2 Protocol for the sparse estimation problem
In this section we consider the class of Gaussian distributions with sparse mean: P_s = {N(~θ, σ²I_d) : |~θ|₀ ≤ s, ~θ ∈ R^d}. We provide a protocol that exploits the sparse structure of ~θ.

Protocol 1 (for P_s):
Inputs: Machine j gets samples X^(j,1), ..., X^(j,n) distributed according to N(~θ, σ²I_d), where ~θ ∈ R^d with |~θ|₀ ≤ s.
For each 1 ≤ j ≤ m′ = (Lm log d)/α (where L is a sufficiently large constant), machine j sends its sample mean X̄^(j) = (X^(j,1) + ··· + X^(j,n))/n (with precision O(log m)) to the center.
The fusion center calculates the mean of the sample means X̄ = (X̄^(1) + ··· + X̄^(m′))/m′ and sets ˆ~θ_i = X̄_i if |X̄_i|² ≥ ασ²/(mn), and ˆ~θ_i = 0 otherwise.
Outputs: ˆ~θ

Theorem 3.2. For any P ∈ P_s and any d/s ≥ α ≥ 1, Protocol 1 returns ˆ~θ with mean-squared loss O(αsσ²/(mn)) and communication cost O((dm log m log d)/α). The proof of the theorem is deferred to the supplementary material. Note that when α = 1, we have a protocol with Õ(dm) communication cost and mean-squared loss O(sσ²/(mn)), and when α = d/s, the communication cost is Õ(sm) but the squared loss is O(dσ²/(mn)). Compared to the case without sparse structure, we essentially either replace the d factor in the communication cost by the intrinsic dimension s, or the d factor in the squared loss by s, but not both.
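A minimal NumPy sketch of the estimation step in Protocol 1 follows. This is our own illustration, not the paper's code: for simplicity all m machines participate (i.e., we take m′ = m) and quantization to O(log m) bits is ignored.

```python
import numpy as np

def sparse_threshold_estimate(samples, sigma, alpha):
    """samples has shape (m, n, d): n samples on each of m machines.
    The center averages the machines' sample means, then zeroes out every
    coordinate whose squared magnitude is below alpha * sigma^2 / (m * n)."""
    m, n, d = samples.shape
    xbar = samples.mean(axis=(0, 1))                 # mean of the sample means
    keep = xbar ** 2 >= alpha * sigma ** 2 / (m * n)
    return np.where(keep, xbar, 0.0)
```

With α between 1 and d/s, the threshold trades how aggressively small coordinates are suppressed against the squared loss, matching the two endpoints discussed above.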
3.3 Improved upper bound
The lower bound provided in Section 3.1 is tight only up to polylogarithmic factors. To achieve the centralized minimax rate σ²d/(mn), the best existing upper bound of O(dm log(m)) bits of communication is achieved by the simple protocol that asks each machine to send its sample mean with O(log n) bits of precision. We improve the upper bound to O(dm) using an interactive protocol. Recall that the class of unknown distributions of our model is P^d = {N(~θ, σ²I_d) : ~θ ∈ [−1, 1]^d}.

Theorem 3.3. There is an interactive protocol Π with communication O(md) and an estimator ˆ~θ based on Π which estimates ~θ up to a squared loss of O(dσ²/(mn)).

Remark 2. Our protocol is interactive but not simultaneous, and it is a very interesting question whether the upper bound of O(dm) can be achieved by a simultaneous protocol.

3.4 Improved lower bound for simultaneous protocols
Although we are not able to prove an Ω(dm) lower bound for achieving the centralized minimax rate in the interactive model, the lower bound for the simultaneous case can be improved to Ω(dm). Again, we lower-bound the information cost for the one dimensional problem first and, applying the direct-sum theorem of Section 3.1, we get the d-dimensional lower bound.

Theorem 3.4. Suppose a simultaneous protocol (Π, ˆ~θ) estimates the mean of N(~θ, σ²I_d), for all ~θ ∈ [−1, 1]^d, with mean-squared loss R and communication cost B. Then

R ≥ Ω( min{ d²σ²/(nB), d } )

As a corollary, when σ² ≤ mn, to achieve the mean-squared loss R = dσ²/(mn), the communication cost B is at least Ω(dm).

4 Proof sketches
4.1 Proof sketch of Theorem 3.1 and Corollary 3.1
To prove a lower bound for the d dimensional problem using an existing lower bound for the one dimensional problem, we demonstrate a reduction that uses the (hypothetical) protocol Π for d dimensions to construct a protocol for the one dimensional problem.
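Concretely, the embedding at the heart of this reduction (Protocol 2 below) fills the one-dimensional data into a chosen coordinate i and pads the remaining d − 1 coordinates with freshly drawn Gaussian data. A minimal NumPy sketch of this step, our own illustration: `run_pi` stands for the black-box d-dimensional protocol together with its estimator, returning a d-dimensional estimate.

```python
import numpy as np

def embed_one_dim(run_pi, x, i, d, sigma, rng):
    """x has shape (m, n): n one-dimensional samples on each of m machines.
    The remaining d-1 coordinates are filled with Gaussian data drawn around
    a freshly sampled mean vector; run_pi returns a d-dimensional estimate."""
    theta_rest = rng.uniform(-1.0, 1.0, size=d - 1)               # shared (public) draw
    pad = rng.normal(theta_rest, sigma, size=(*x.shape, d - 1))   # padding samples
    full = np.insert(pad, i, x, axis=2)                           # x goes into coordinate i
    return run_pi(full)[i]                                        # keep the i-th coordinate
```

In the actual reduction the padding mean is chosen with public randomness while the padded samples are drawn privately and independently on each machine; that distinction is what makes the information-cost accounting of the reduction go through.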
For each fixed coordinate i ∈ [d], we design a protocol Π_i for the one-dimensional problem by embedding the one-dimensional problem into the i-th coordinate of the d-dimensional problem. We will show, essentially, that if the machines first collectively choose a coordinate i at random and run protocol Π_i for the one-dimensional problem, then the information cost and mean-squared loss of this protocol will be only a 1/d factor of those of the d-dimensional problem. Therefore, the information cost of the d-dimensional problem is at least d times the information cost of the one-dimensional problem.

Protocol 2 (Π_i):
Inputs: Machine j gets samples X^(j,1), ..., X^(j,n) distributed according to N(θ, σ²), where θ ∼ V.
1. All machines publicly sample θ̆_{−i} distributed according to V_{d−1}.
2. Machine j privately samples X̆^(j,1)_{−i}, ..., X̆^(j,n)_{−i} distributed according to N(θ̆_{−i}, σ²I_{d−1}). Let X̆^(j,k) = (X̆^(j,k)_1, ..., X̆^(j,k)_{i−1}, X^(j,k), X̆^(j,k)_{i+1}, ..., X̆^(j,k)_d).
3. All machines run protocol Π on the data X̆ and get the transcript Y_i. The estimator θ̂_i is θ̂_i(Y_i) = ˆ~θ(Y_i)_i, i.e., the i-th coordinate of the d-dimensional estimator.

In more detail, under protocol Π_i (described formally in Protocol 2) the machines prepare a d-dimensional dataset as follows: First they fill the one-dimensional data that they received into the i-th coordinate of the d-dimensional data. Then the machines choose ~θ_{−i} publicly at random from the distribution V_{d−1}, draw Gaussian random variables from N(~θ_{−i}, σ²I_{d−1}) independently and privately, and fill this data into the other d − 1 coordinates. The machines then simply run the d-dimensional protocol Π on this tailored dataset. Finally the estimator, denoted by θ̂_i, outputs the i-th coordinate of the d-dimensional estimator ˆ~θ. We are interested in the mean-squared loss and information cost of the protocols Π_i that we just designed. The following lemmas relate the Π_i to the original protocol Π.

Lemma 1.
Protocols ⇧i’s satisfy Pd i=1 RV ⇣ (⇧i, ˆ✓i), ✓ ⌘ = RVd ⇣ (⇧, ˆ~✓), ~✓ ⌘ Lemma 2. Protocols ⇧i’s satisfy Pd i=1 ICV(⇧i) ICVd(⇧) Note that the counterpart of Lemma 2 with communication cost won’t be true, and actually the communication cost of each ⇧i is the same as that of ⇧. It turns out doing reduction in communication cost is much harder, and this is part of the reason why we use information cost as a proxy for communication cost when proving lower bound. Also note that the correctness of Lemma 2 heavily relies on the fact that ⇧i draws the redundant data privately independently (see Section 2 and the proof for more discussion on private versus public randomness). By Lemma 1 and Lemma 2 and a Markov argument, there exists an i 2 {1, . . . , d} such that R ⇣ (⇧i, ˆ✓i), ✓ ⌘ 4 d · R ⇣ (⇧, ~✓), ~✓ ⌘ and IC(⇧i) 4 d · IC(⇧) Then the pair (⇧0, ˆ✓) = (⇧i, ˆ✓i) solves the task T(1, m, n, σ2, V) with information cost at most 4C/d and squared loss 4R/d, which proves Theorem 3.1. Corollary 3.1 follows Theorem 3.1 and the following lower bound for one dimensional gaussian mean estimation proved in [1]. We provide complete proofs in the supplementary. 7 Theorem 4.1. [1] Let V be the uniform distribution over {±δ}, where δ2 min ⇣ 1, σ2 log(m) n ⌘ . If (⇧, ˆ✓) solves the task T(1, m, n, σ2, V) with information cost C and squared loss R, then either C ≥⌦ ⇣ σ2 δ2n log(m) ⌘ or R ≥δ2/10. 4.2 Proof sketch of theorem 3.3 The protocol is described in protocol 3 in the supplementary. We only describe the d = 1 case, while for general case we only need to run d protocols individually for each dimension. The central idea is that we maintain an upper bound U and lower bound L for the target mean, and iteratively ask the machines to send their sample means to shrink the interval [L, U]. Initially we only know that ✓2 [−1, 1]. Therefore we set the upper bound U and lower bound L for ✓to be −1 and 1. In the first iteration the machines try to determine whether ✓< 0 or ≥0. 
This is done by letting several machines (precisely, O(log m)/σ² machines) send whether their sample means are < 0 or ≥ 0. If the majority of the sample means are < 0, θ is likely to be < 0. However, when θ is very close to 0, one needs a lot of samples to determine this, and here we only ask O(log m)/σ² machines to send bits about their sample means. Therefore we should be more conservative, and we only update the interval in which θ might lie to [−1, 1/2] when the majority of the sample means are < 0. We repeat this until the interval [L, U] becomes smaller than our target squared loss. In each round, we ask a number of new machines to each send one bit of information about whether their sample mean is larger than (U + L)/2. The number of participating machines is carefully set so that the failure probability p is small. An interesting feature of the protocol is that the target error probability p is chosen differently at each iteration, giving a better balance between the failure probability and the communication cost. The complete description of the protocol and the proof are given in the supplementary material.

4.3 Proof sketch of Theorem 3.4
We use a different prior on the mean, N(0, δ²), instead of the uniform distribution over {−δ, δ} used by [1]. The Gaussian prior allows us to use a strong data processing inequality for jointly Gaussian random variables by [14]. Since we don't have to truncate the Gaussian, we don't lose the factor of log(m) lost by [1].

Theorem 4.2. ([14], Theorem 7) Suppose X and V are jointly Gaussian random variables with correlation ρ. Let Y ↔ X ↔ V be a Markov chain with I(Y; X) ≤ R. Then I(Y; V) ≤ ρ²R.

Now suppose that each machine gets n samples X1, ..., Xn ∼ N(V, σ²), where V has the prior N(0, δ²) on the mean. By an application of Theorem 4.2, we prove that if Y is a B-bit message depending on X1, ..., Xn, then Y has at most (nδ²/σ²) · B bits of information about V.
Using some standard information theory arguments, this converts into the statement that if Y is the transcript of a simultaneous protocol with communication cost B, then it has at most (nδ²/σ²) · B bits of information about V. A lower bound on the communication cost B of a simultaneous protocol estimating the mean θ ∈ [−1, 1] then follows from proving that such a protocol must have Ω(1) bits of information about V. The complete proof is given in the supplementary material.

5 Conclusion
We have lower-bounded the communication cost of estimating the mean of a d-dimensional spherical Gaussian random variable in a distributed fashion. We provided a generic tool called the direct-sum theorem for relating the information cost of the d-dimensional problem to the one-dimensional problem, which may be of use for statistical problems other than Gaussian mean estimation as well. We also initiated the study of distributed estimation of a Gaussian mean with sparse structure. We provided a simple protocol that exploits the sparse structure and conjecture its tradeoff to be optimal:

Conjecture 1. If some protocol estimates the mean for any distribution P ∈ P_s with mean-squared loss R and communication cost C, then C · R ≳ sdσ²/(mn), where we use ≳ to hide log factors and potential corner cases.

References
[1] Yuchen Zhang, John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In NIPS, pages 2328–2336, 2013.
[2] Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. In COLT, pages 26.1–26.22, 2012.
[3] Hal Daumé III, Jeff M. Phillips, Avishek Saha, and Suresh Venkatasubramanian. Protocols for learning classifiers on distributed data. In AISTATS, pages 282–290, 2012.
[4] Hal Daumé III, Jeff M. Phillips, Avishek Saha, and Suresh Venkatasubramanian. Efficient protocols for distributed classification and optimization.
In ALT, pages 154–168, 2012.
[5] John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Yuchen Zhang. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. CoRR, abs/1405.0782, 2014.
[6] Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, and Andrew Chi-Chih Yao. Informational complexity and the direct sum problem for simultaneous message complexity. In FOCS, pages 270–278, 2001.
[7] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity. J. Comput. Syst. Sci., 68(4), 2004.
[8] Mark Braverman and Anup Rao. Information equals amortized communication. In FOCS, pages 748–757, 2011.
[9] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao. How to compress interactive communication. SIAM J. Comput., 42(3):1327–1363, 2013.
[10] Mark Braverman, Faith Ellen, Rotem Oshman, Toniann Pitassi, and Vinod Vaikuntanathan. A tight bound for set disjointness in the message-passing model. In FOCS, pages 668–677, 2013.
[11] Anat Ganor, Gillat Kol, and Ran Raz. Exponential separation of information and communication. Electronic Colloquium on Computational Complexity (ECCC), 21:49, 2014.
[12] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14(1):3321–3363, 2013.
[13] Claude Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.
[14] Elza Erkip and Thomas M. Cover. The efficiency of investment information. IEEE Trans. Inform. Theory, 44, 1998.
A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation Eran Treister∗† Computer Science, Technion, Israel and Earth and Ocean Sciences, UBC Vancouver, BC, V6T 1Z2, Canada eran@cs.technion.ac.il Javier Turek∗ Department of Computer Science Technion, Israel Institute of Technology Technion City, Haifa 32000, Israel javiert@cs.technion.ac.il Abstract The sparse inverse covariance estimation problem arises in many statistical applications in machine learning and signal processing. In this problem, the inverse of a covariance matrix of a multivariate normal distribution is estimated, assuming that it is sparse. An ℓ1 regularized log-determinant optimization problem is typically solved to approximate such matrices. Because of memory limitations, most existing algorithms are unable to handle large scale instances of this problem. In this paper we present a new block-coordinate descent approach for solving the problem for large-scale data sets. Our method treats the sought matrix block-by-block using quadratic approximations, and we show that this approach has advantages over existing methods in several aspects. Numerical experiments on both synthetic and real gene expression data demonstrate that our approach outperforms the existing state of the art methods, especially for large-scale problems. 1 Introduction The multivariate Gaussian (Normal) distribution is ubiquitous in statistical applications in machine learning, signal processing, computational biology, and others. Usually, normally distributed random vectors are denoted by x ∼N (µ, Σ) ∈Rn, where µ∈Rn is the mean, and Σ∈Rn×n is the covariance matrix. Given a set of realizations {xi}m i=1, many such applications require estimating the mean µ, and either the covariance Σ or its inverse Σ−1, which is also called the precision matrix. 
Estimating the inverse of the covariance matrix is useful in many applications [2], as it represents the underlying graph of a Gaussian Markov Random Field (GMRF). Given the samples {x_i}_{i=1}^m, both the mean vector µ and the covariance matrix Σ are often approximated using the standard maximum likelihood estimator (MLE), which leads to

    µ̂ = (1/m) ∑_{i=1}^m x_i   and¹   S ≜ Σ̂_MLE = (1/m) ∑_{i=1}^m (x_i − µ̂)(x_i − µ̂)^T,   (1)

which is also called the empirical covariance matrix. Specifically, according to the MLE, Σ^{−1} is estimated by solving the optimization problem

    min_{A≻0} f(A) ≜ min_{A≻0} −log(det A) + tr(SA),   (2)

which is obtained by applying −log to the probability density function of the Normal distribution. However, if the number of samples is lower than the dimension of the vectors, i.e., m < n, then S in (1) is rank deficient and not invertible, whereas the true Σ is assumed to be positive definite, hence full-rank. Still, when m < n one can estimate the matrix by adding further assumptions. It is well known [5] that if (Σ^{−1})_{ij} = 0, then the random scalar variables in the i-th and j-th entries of x are conditionally independent. Therefore, in this work we adopt the notion of estimating the inverse of the covariance, Σ^{−1}, assuming that it is sparse. (Note that in most cases Σ is dense.) For this purpose, we follow [2, 3, 4] and minimize (2) with a sparsity-promoting ℓ1 prior:

    min_{A≻0} F(A) ≜ min_{A≻0} f(A) + λ∥A∥₁.   (3)

Here, f(A) is the MLE functional defined in (2), ∥A∥₁ ≡ ∑_{i,j} |a_{ij}|, and λ > 0 is a regularization parameter that balances between the sparsity of the solution and the fidelity to the data.

∗The authors contributed equally to this work. †Eran Treister is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship. ¹Equation (1) is the standard MLE estimator; however, sometimes the unbiased estimator is preferred, where m − 1 replaces m in the denominator.
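For intuition, the empirical covariance (1) and the regularized objective (3) take only a few lines of NumPy. This dense sketch is ours (the function names are not from the paper's code), and it deliberately ignores the paper's central concern of never forming dense n × n objects:

```python
import numpy as np

def empirical_covariance(X):
    """MLE estimate S = (1/m) * sum_i (x_i - mu)(x_i - mu)^T for X of shape (m, n)."""
    m = X.shape[0]
    Xc = X - X.mean(axis=0)
    return (Xc.T @ Xc) / m

def objective(A, S, lam):
    """F(A) = -log det(A) + tr(S A) + lam * ||A||_1, for symmetric positive definite A."""
    sign, logdet = np.linalg.slogdet(A)
    assert sign > 0, "A must be positive definite"
    return -logdet + np.trace(S @ A) + lam * np.abs(A).sum()

# With fewer samples than dimensions (m < n), S is rank deficient:
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))          # m = 5 samples, n = 8 dimensions
S = empirical_covariance(X)
print(np.linalg.matrix_rank(S))          # at most m - 1 = 4
```

The rank bound is exactly the m < n degeneracy the text describes, which motivates the sparsity prior in (3).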
The sparsity assumption corresponds to a small number of statistical dependencies between the variables. Problem (3) is also called Covariance Selection [5], and is non-smooth and convex. Many methods were recently developed for solving (3)—see [3, 4, 7, 8, 10, 11, 12, 15, 16] and references therein. The current state-of-the-art methods, [10, 11, 12, 16], involve a “proximal Newton” approach [20], where a quadratic approximation is applied on the smooth part f(A) in (3), leaving the non-smooth ℓ1 term intact, in order to obtain the Newton descent direction. To obtain this, the gradient and Hessian of f(A) are needed and are given by ∇f(A) = S −A−1, ∇2f(A) = A−1 ⊗A−1, (4) where ⊗is the Kronecker product. The gradient in (4) already shows the main difficulty in solving this problem: it contains A−1, the inverse of the sparse matrix A, which may be dense and expensive to compute. The advantage of the proximal Newton approach for this problem is the low overhead: by calculating the A−1 in ∇f(A), we also get the Hessian at the same cost [11, 12, 16]. In this work we aim at solving large scale instances of (3), where n is large, such that O(n2) variables cannot fit in memory. Such problem sizes are required in fMRI [11] and gene expression analysis [9] applications, for example. Large values of n introduce limitations: (a) They preclude storing the full matrix S in (1), and allow us to use only the vectors {xi}m i=1, which are assumed to fit in memory. (b) While the sparse matrix A in (3) fits in memory, its dense inverse does not. Because of this limitation, most of the methods mentioned above cannot be used to solve (3), as they require computing the full gradient of f(A), which is a dense n × n symmetric matrix. The same applies for the blocking strategies of [2, 7], which target the dense covariance matrix itself rather than its inverse, using the dual formulation of (3). 
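The closed-form gradient in (4) is easy to sanity-check with finite differences on a small dense instance; this check is our own sketch, not the paper's code:

```python
import numpy as np

def f(A, S):
    # smooth part of the objective: -log det(A) + tr(S A)
    return -np.linalg.slogdet(A)[1] + np.trace(S @ A)

def grad_f(A, S):
    # closed form from Eq. (4): grad f(A) = S - A^{-1}
    return S - np.linalg.inv(A)

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
S = B @ B.T / n + np.eye(n)      # a synthetic SPD "empirical covariance"
A = 2.0 * np.eye(n)              # current SPD iterate

# central finite-difference check of one gradient entry
eps = 1e-6
E = np.zeros((n, n)); E[1, 2] = 1.0
fd = (f(A + eps * E, S) - f(A - eps * E, S)) / (2 * eps)
print(abs(fd - grad_f(A, S)[1, 2]))   # small: finite differences agree with (4)
```

Note the check already exposes the computational issue the text raises: even this toy `grad_f` has to form A^{-1}.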
One exception is the proximal Newton approach in [11], which was made suitable for large-scale matrices by treating the Newton direction problem in blocks. In this paper, we introduce an iterative Block-Coordinate Descent [20] method for solving largescale instances of (3). We treat the problem in blocks defined as subsets of columns of A. Each block sub-problem is solved by a quadratic approximation, resulting in a descent direction that corresponds only to the variables in the block. Since we consider one sub-problem at a time, we can fully store the gradient and Hessian for the block. In contrast, [11] applies a blocking approach to the full Newton problem, which results in a sparse n×n descent direction. There, all the columns of A−1 are calculated for the gradient and Hessian of the problem for each inner iteration when solving the full Newton problem. Therefore, our method requires less calculations of A−1 than [11], which is the most computationally expensive task in both algorithms. Furthermore, our blocking strategy allows an efficient linesearch procedure, while [11] requires computing a determinant of a sparse n × n matrix. Although our method is of linear order of convergence, it converges in less iterations than [11] in our experiments. Note that the asymptotic convergence of [11] is quadratic only if the exact Newton direction is found at each iteration, which is very costly for large-scale problems. 2 1.1 Newton’s Method for Covariance Selection The proximal Newton approach mentioned earlier is iterative, and at each iteration k, the smooth part of the objective in (3) is approximated by a second order Taylor expansion around the k-th iterate A(k). Then, the Newton direction ∆∗is the solution of an ℓ1 penalized quadratic minimization problem, min ∆ ˜F(A(k) + ∆) = min ∆f(A(k)) + tr(∆(S −W)) + 1 2tr(∆W∆W) + λ∥A(k) + ∆∥1, (5) where W = A(k)−1 is the inverse of the k-th iterate. 
Note that the gradient and Hessian of f(A) in (4) are featured in the second and third terms in (5), respectively, while the first term of (5) is constant and can be ignored. Problem (5) corresponds to the well-known LASSO problem [18], which is popular in machine learning and signal/image processing applications [6]. The methods of [12, 16, 11] apply known LASSO-solvers for treating the Newton direction minimization (5). Once the direction ∆∗is computed, it is added to A(k) employing a linesearch procedure to sufficiently reduce the objective in (3) while ensuring positive definiteness. To this end, the updated iterate is A(k+1) = A(k) +α∗∆∗, and the parameter α∗is obtained using Armijo’s rule [1, 12]. That is, we choose an initial value of α0, and a step size 0 < β < 1, and accordingly define αi = βiα0. We then look for the smallest i ∈N that satisfies the constraint A(k) + αi∆∗≻0, and the condition F(A(k) + αi∆∗) ≤F(A(k)) + αiσ h tr(∆∗(S −W)) + λ∥A(k) + ∆∗∥1 −λ∥A(k)∥1 i . (6) The parameters α0, β, and σ are usually chosen as 1,0.5, and 10−4 respectively. 1.2 Restricting the Updates to Active Sets An additional significant idea of [12] is to restrict the minimization of (5) at each iteration to an “active set” of variables and keep the rest as zeros. The active set of a matrix A is defined as Active(A) = (i, j) : Aij ̸= 0 ∨|(S −A−1)ij| > λ . (7) This set comes from the definition of the sub-gradient of (3). In particular, as A(k) approaches the solution A∗, Active(A(k)) approaches (i, j) : A∗ ij ̸= 0 . As noted in [12, 16], restricting (5) to the variables in Active A(k) reduces the computational complexity: given the matrix W, the Hessian (third) term in (5) can be calculated in O(Kn) operations instead of O(n3), where K = |Active A(k) |. Hence, any method for solving the LASSO problem can be utilized to solve (5) effectively while saving computations by restricting its solution to Active A(k) . 
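The active-set test (7) can be written as a boolean mask; this small dense version is our own illustration — the paper only ever evaluates the condition restricted to a block's rows and columns, never on the full matrix:

```python
import numpy as np

def active_set(A, S, lam):
    """Eq. (7): entry (i, j) is active if A_ij != 0 or |(S - A^{-1})_ij| > lam."""
    G = S - np.linalg.inv(A)          # gradient of the smooth part, Eq. (4)
    return (A != 0) | (np.abs(G) > lam)

rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n))
S = (B @ B.T) / n
A = np.eye(n)                          # a typical positive definite initialization
mask = active_set(A, S, lam=0.5)
print(mask.sum(), "of", n * n, "entries are active")
```

Restricting the LASSO subproblem to `mask` is what turns the Hessian term cost from O(n³) into O(Kn), as noted above.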
Our experiments have verified that restricting the minimization of (5) only to Active A(k) does not significantly increase the number of iterations needed for convergence. 2 Block-Coordinate-Descent for Inverse Covariance (BCD-IC) Estimation In this Section we describe our contribution. To solve problem (3), we apply an iterative BlockCoordinate-Descent approach [20]. At each iteration, we divide the column set {1, ..., n} into blocks. Then we iterate over all blocks, and in turn minimize (3) restricted to the “active” variables of each block, which are determined according to (7). The other matrix entries remain fixed during each update. The matrix A is updated after each block-minimization. We choose our blocks as sets of columns because the portion of the gradient (4) that corresponds to such blocks can be computed as solutions of linear systems. Because the matrix is symmetric, the corresponding rows are updated simultaneously. Figure 1 shows an example of a BCD iteration where the blocks of columns are chosen in sequential order. In practice, the sets of columns can be non-contiguous and vary between the BCD iterations. We elaborate later on how to partition 3 Figure 1: Example of a BCD iteration. The blocks are treated successively. the columns, and on some advantages of this block-partitioning. Partitioning the matrix into small blocks enables our method to solve (3) in high dimensions (up to millions of variables), requiring O(n2/p) additional memory, where p is the number of blocks (that is in addition to the memory needed for storing the iterated solution A(k) itself). 2.1 Block Coordinate Descent Iteration Assume that the set of columns {1, ..., n} is divided into p blocks {Ij}p j=1, where Ij is the set of indices that corresponds to the columns and rows in the j-th block. As mentioned before, in the BCD-IC algorithm we traverse all blocks and update the iterated solution matrix block by block. 
We denote the updated matrix after treating the j-th block at iteration k by A(k) j and the next iterate A(k+1) is defined once the last block is treated, i.e., A(k+1) = A(k) p . To treat each block of (3), we adopt both of the ideas described earlier: we use a quadratic approximation to solve each block, while also restricting the updated entries to the active set. For simplicity of notation in this section, let us denote the updated matrix A(k) j−1, before treating block j at iteration k, by ˜A. To update block j, we change only the entries in the rows/columns in Ij. First, we form and minimize a quadratic approximation of problem (3), restricted to the rows/columns in Ij: min ∆j ˜F( ˜A + ∆j), (8) where ˜F(·) is the quadratic approximation of (3) around ˜A, similarly to (5), and ∆j has non-zero entries only in the rows/columns in Ij. In addition, the non-zeros of ∆j are restricted to Active( ˜A) defined in (7). That is, we restrict the minimization (8) to ActiveIj( ˜A) = Active( ˜A) ∩{(i, k) : i ∈Ij ∨k ∈Ij} , (9) while all other elements are set to zero for the entire treatment of the j-th block. To calculate this set, we check the condition in (7) only in the columns and rows of Ij. To define this active set, and to calculate the gradient (4) for block Ij, we first calculate the columns Ij of ˜A−1, which is the main computational task of our algorithm. To achieve that, we solve |Ij| linear systems, with the canonical vectors el as right-hand-sides for each l ∈Ij, i.e., ( ˜A−1)Ij = ˜A−1EIj. The solution of these linear systems can be achieved in various ways. Direct methods may be applied using the Cholesky factorization, which requires up to O(n3) operations. For large dimensions, iterative methods such as Conjugate Gradients (CG) are usually preferred, because the cost of each iteration is proportional to the number of non-zeros in the sparse matrix. See Section A.4 in the Appendix for details about the computational cost of this part of the algorithm. 
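Concretely, the block's columns of the inverse can be recovered with SciPy's conjugate gradient solver against canonical right-hand sides; the matrix and block below are our own illustrative setup:

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import cg

# A sparse, symmetric, diagonally dominant (hence SPD) test matrix
n = 50
M = sprandom(n, n, density=0.05, random_state=3)
A = (M + M.T + 2 * n * eye(n)).tocsr()

# Columns I_j of A^{-1}: solve A w = e_l for each l in the block
block = [4, 5, 6]
W_block = np.zeros((n, len(block)))
for k, l in enumerate(block):
    e = np.zeros(n); e[l] = 1.0
    w, info = cg(A, e)
    assert info == 0                   # CG converged
    W_block[:, k] = w

print(np.allclose(A @ W_block, np.eye(n)[:, block], atol=1e-4))  # True
```

Each CG iteration touches only the nonzeros of A, which is why iterative solves are preferred over Cholesky at large n.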
2.1.1 Treating a Block-subproblem by Newton’s Method To get the Newton direction for the j-th block, we solve the LASSO problem (8), for which there are many available solvers [22]. We choose the Polak-Ribiere non-linear Conjugate Gradients (NLCG) method of [19] which, together with a diagonal preconditioner, was used to solve this problem in [22, 19]. We describe the NLCG algorithm in Apendix A.1. To use this method, we need to calculate the objective of (8) and its gradient efficiently. The calculation of the objective in (8) is much simpler than the full version in (5), because only blocks of rows/columns are considered. Denoting W = ˜A−1, to compute the objective in (8) and its gradient we need to calculate the matrices W∆jW and S −W only at the entries where ∆j is 4 non-zero (in the rows/columns in Ij). These matrices are symmetric, and hence, only their columns are necessary. This idea applies for the ℓ1 term of the objective in (8) as well. In each iteration of the NLCG method, the main computational task involves calculating W∆jW in the columns of Ij. For that, we reuse the Ij columns of ˜A−1 calculated for obtaining (9), which we denote by WIj. Since we only need the result in the columns Ij, we first notice that (W∆jW)Ij = W∆jWIj, and the product ∆jWIj can be computed efficiently because ∆j is sparse. Computing W(∆jWIj) is another relatively expensive part of our algorithm, and here we exploit the restriction to the Active Set. That is, we only need to compute the entries in (9). For this, we follow the idea of [11] and use the rows (or columns) of W that are represented in (9). Besides the columns Ij of W we also need the “neighborhood” of Ij defined as Nj = i : ∃k /∈Ij : (i, k) ∈ActiveIj(A) . (10) The size of this set will determine the amount of additional columns of W that we need, and therefore we want it to be as small as possible. To achieve that, we define the blocks {Ij} using clustering methods, following [11]. 
We use METIS [13], but other methods may be used instead. The aim of these methods is to partition the indices of the matrix columns/rows into disjoint subsets of relatively small size, such that there are as few as possible non-zero entries outside the diagonal blocks of the matrix that correspond to each subset. In our notation, we aim that the size of Nj will be as small as possible for every block Ij, and that the size of Ij will be small enough. Note that after we compute WNj, we need to actually store and use only |Nj| × |Nj| numbers out of WNj. However, there might be situations where the matrix has a few dense columns, resulting in some sets Nj of size O(n). Computing WNj for those sets is not possible because of memory limitations. We treat this case separately—see Section A.2 in the Appendix for details. For a discussion about the computational cost of this part—see Section A.4 in the Appendix. 2.1.2 Optimizing the Solution in the Newton Direction with Line-search Assume that ∆∗ j is the Newton direction obtained by solving problem (8). Now we seek to update the iterated matrix A(k) j = A(k) j−1 + α∗∆∗ j, where α∗> 0 is obtained by a linesearch procedure similarly to Equation (6). For a general Newton direction matrix ∆∗as in (6), this procedure requires calculating the determinant of an n×n matrix. In [11], this is done by solving n−1 linear systems of decreasing sizes from n −1 to 1. However, since our direction ∆∗ j has a special block structure, we obtain a significantly cheaper linesearch procedure compared to [11], assuming that the blocks Ij are relatively small. First, the trace and ℓ1 terms that are involved in the objective of (3) can be calculated with respect only to the entries in the columns Ij (the rows are taken into account by symmetry). The log det term, however, needs more special care, and is eventually reduced to calculating the determinant of an |Ij| × |Ij| matrix, which becomes cheaper as the block size decreases. 
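The determinant reduction just described rests on the Schur-complement identity of Eq. (12), which is easy to verify numerically on a random SPD matrix; this standalone check is ours:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 7, 3                              # block I_j of size k, permuted to the front
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)              # symmetric positive definite

A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]

# Eq. (12): log det(A) = log det(A22) + log det(A11 - A12 A22^{-1} A21)
schur = A11 - A12 @ np.linalg.solve(A22, A21)
lhs = np.linalg.slogdet(A)[1]
rhs = np.linalg.slogdet(A22)[1] + np.linalg.slogdet(schur)[1]
print(abs(lhs - rhs))                     # essentially zero
```

Only the k × k Schur complement changes during the linesearch, which is exactly why the |I_j| × |I_j| determinant suffices.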
Let us introduce a partitioning of any matrix A into blocks, according to a set of indices I_j ⊆ {1, ..., n}. Assume without loss of generality that the rows and columns of A have been permuted such that the columns/rows with indices in I_j appear first, and let

    A = [ A₁₁  A₁₂ ;  A₂₁  A₂₂ ]   (11)

be a partitioning of A into four blocks. The sub-matrix A₁₁ corresponds to the elements in rows I_j and columns I_j of Ã. According to the Schur complement [17], for any invertible matrix and block-partitioning as above, the following holds:

    log det(A) = log det(A₂₂) + log det(A₁₁ − A₁₂A₂₂⁻¹A₂₁).   (12)

In addition, for any symmetric matrix A the following applies:

    A ≻ 0  ⇔  A₂₂ ≻ 0 and A₁₁ − A₁₂A₂₂⁻¹A₂₁ ≻ 0.   (13)

Using the above notation for Ã and the corresponding partitioning for ∆*_j, we write using (12):

    log det(Ã + α∆_j) = log det(Ã₂₂) + log det(B₀ + αB₁ + α²B₂),   (14)

where B₀ = Ã₁₁ − Ã₁₂Ã₂₂⁻¹Ã₂₁, B₁ = ∆₁₁ − ∆₁₂Ã₂₂⁻¹Ã₂₁ − Ã₁₂Ã₂₂⁻¹∆₂₁, and B₂ = −∆₁₂Ã₂₂⁻¹∆₂₁. (Note that here we replaced ∆*_j by ∆ to ease notation.) Finally, the positive definiteness condition Ã + α*∆*_j ≻ 0 involved in the linesearch (6) is equivalent to B₀ + αB₁ + α²B₂ ≻ 0, assuming that Ã₂₂ ≻ 0, following (13). Throughout the iterations, we always guarantee that our iterated solution matrix Ã remains positive definite by linesearch in every update. This requires that the initialization of the algorithm, A(0), be positive definite. If the set I_j is relatively small, then the matrices B_i in (14) are also small (|I_j| × |I_j|), and we can easily compute the objective F(·) and apply the Armijo rule (6) for ∆*_j. Calculating the matrices B_i in (14) may seem expensive; however, as we show in Appendix A.3, they can be obtained from the previously computed matrices W_{I_j} and W_{N_j} mentioned earlier. Therefore, computing (14) can be achieved in O(|I_j|³) time complexity.

Algorithm: BCD-IC(A(0), {x_i}_{i=1}^m, λ)
for k = 0, 1, 2, ... do
    Calculate clusters of elements {I_j}_{j=1}^p based on A(k).
    % Denote: A(k)_0 = A(k)
    for j = 1, ..., p do
        Compute W_{I_j} = ((A(k)_{j−1})^{−1})_{I_j}.   % solve |I_j| linear systems
        Define Active_{I_j}(A(k)_{j−1}) as in (9), and define the set N_j as in (10).
        Compute W_{N_j} = ((A(k)_{j−1})^{−1})_{N_j}.   % solve |N_j| linear systems
        Find the Newton direction ∆*_j by solving the LASSO problem (8).
        Update the solution: A(k)_j = A(k)_{j−1} + α*∆*_j by linesearch.
    end
    % Denote: A(k+1) = A(k)_p
end
Algorithm 1: Block Coordinate Descent for Inverse Covariance Estimation

3 Convergence Analysis

In this section, we elaborate on the convergence of the BCD-IC algorithm to the global optimum of (3). We base our analysis on [20, 12]. In [20], a general block-coordinate-descent approach is analyzed for minimization problems of the form F(A) = f(A) + λh(A), composed of the sum of a smooth function f(·) and a separable convex function h(·), which in our case are −log det(A) + tr(SA) and ∥A∥₁, respectively. Although this setup fits the functional F(A) in (3), [20] treats the problem in the R^{n×n} domain, while the minimization in (3) is constrained over S^n₊₊—the domain of symmetric positive definite matrices. To overcome this limitation, the authors in [12] extended the analysis in [20] to treat the specific constrained problem (3). In particular, [20, 12] consider block-coordinate-descent methods where in each step t a subset J_t of variables is updated. Then, a Gauss-Seidel condition is necessary to ensure that all variables are updated every T steps:

    ⋃_{l=0,...,T−1} J_{l+t} ⊇ N   ∀t = 1, 2, ...,   (15)

where N is the set of all variables, and T is a fixed number. Similarly to [12], treating each block of columns I_j in the BCD-IC algorithm is equivalent to updating the elements outside the active set Active_{I_j}(A), followed by an update of the elements in Active_{I_j}(A). Therefore, in (15), we set J_{2t} = {(i, l) : i ∈ I_j ∨ l ∈ I_j} \ Active_{I_j}(Ã), J_{2t+1} = Active_{I_j}(Ã), where the step index t corresponds to the block j at iteration k of BCD-IC.
In [12, Lemma 1], it is shown that setting the elements outside the active set for block j to zero satisfies the optimality condition of that step. Therefore, in our algorithm we only need to update the elements in ActiveIj(A). Now, if we were using p fixed blocks containing all the coordinates of A in Algorithm (1) (no clustering is applied), then the Gauss-Seidel condition (15) would be satisfied every T = 2p blocks. When clustering is applied, the block-partitioning {Ij} can change at every activation of the clustering method. Therefore, condition (15) is satisfied at most after T = 4˜p, where ˜p is the maximum number of blocks obtained from all the activations of the clustering algorithm. For completeness, we include in Appendix A.5 the lemmas in [12] and the proof of the following theorem: Theorem 1. In Algorithm 1, the sequence n A(k) j o converges to the global optimum of (3). 4 Numerical Results In this section we demonstrate the efficiency of the BCD-IC method, and compare it with other methods for both small and large scales. For small-scale problems we include QUIC [12], BIGQUIC [11] and G-ISTA [8], which are the state-of-the-art methods at this scale. For large-scale problems, we compare our method only with BIG-QUIC as it is the only feasible method known to us at this scale. For all methods, we use the original code which was provided by the authors— all implemented in C and parallelized (except QUIC which is partially parallelized). Our code for BCD-IC is MATLAB based with several routines in C. All the experiments were run on a machine with 2 Intel Xeon E-2650 2.0GHz processors with 16 cores and 64GB RAM, using Windows 7 OS. As a stopping criterion for BCD-IC, we use the rule as in [11]: ∥gradSF(A(k))∥1 < ϵ∥A(k)∥1, where gradSF(·) is the minimal norm subgradient, defined in Equation (25) in Appendix A.5. For ϵ = 10−2 as we choose, this results in the entries in A(k) being about two digits accurate compared to the true solution Σ−1∗. 
As in [11], we approximate WIj and WNj by using CG, which we stop once the residual drops below 10−5 and 10−4, respectively. For stopping NLCG (Algorithm 2) we use ϵnlcg = 10−4 (see details at the end of Section A.1). We note that for the large-scale test problems, BCD-IC with optimal block size requires less memory than BIG-QUIC. 4.1 Synthetic Experiments We use two synthetic experiments to compare the performance of the methods. First, the random matrix from [14], which is generated to have about 10 non-zeros per row, and to be well-conditioned. We generate matrices of sizes n varying from 5,000 to 160,000, and generate 200 samples for each (m = 200). The values of λ are chosen so that the solution Σ−1∗has approximately 10n non-zeros. BCD-IC is run with block sizes of 64, 96, 128, 256, and 256 for each of the random tests in Table 1, respectively. The second problem is a 2D version of the chain example in [14], which can be represented as the 2D stencil 1 4 h −1 −1 5 −1 −1 i , applied on a square lattice. λ is chosen such that Σ−1∗ has about 5n non-zeros. For these tests, BCD-IC is run with block size of 1024. Table 1 summarizes the results for this test case. The results show that for small-scale problems, G-ISTA is the fastest method and BCD-IC is just behind it. However, from size 20,000 and higher, BCD-IC is the fastest. We could not run QUIC and G-ISTA on problems larger than 20,000 because of memory limitations. The time gap between G-ISTA and both BCD-IC and BIG-QUIC in smallscales can be reduced if their programs receive the matrix S as input instead of the {xi}m i=1. 4.2 Gene Expression Analysis Experiments For the large-scale real-world experiments, we use gene expression datasets that are available at the Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/). 
We use several of the 7 test, n ∥Σ−1∥0 λ ∥Σ−1∗∥0 BCD-IC BIG-QUIC QUIC G-ISTA random 5K 59,138 0.22 63,164 15.3s(3) 19.6s(5) 28.7s(5) 13.6s(7) random 10K 118,794 0.23 139,708 61.8s(3) 73.8s(5) 114s(5) 60.2s(7) random 20K 237,898 0.24 311,932 265s(3) 673s(5) 823s(5) 491s(8) random 40K 475,406 0.26 423,696 729s(4) 2,671s(5) * * random 80K 950,950 0.27 891,268 4,102s(4) 16,764s(5) * * random 160K 1,901,404 0.28 1,852,198 21,296s(4) 25,584s(4) * * 2D 5002 1,248,000 0.30 1,553,698 24,235s(4) 40,530s(4) * * 2D 7082 2,503,488 0.31 3,002,338 130,636s(4) 203,370s(4) * * 2D 10002 4,996,000 0.32 5,684,306 777,947s(4) 1,220,213s(4) * * Table 1: Results for the random and 2D synthetic experiments. ∥Σ−1∥0 and ∥Σ−1∗∥0 denote the number of non-zeros in the true and estimated inverse covariance matrices, respectively. For each run, timings are reported in seconds and number of iterations in parentheses. ‘*’ means that the algorithm ran out of memory. tests reported in [9]. The data is preprocessed to have zero mean and unit variance for each variable (i.e., diag(S) = I). Table 2 shows the datasets as well as the numbers of variables (n) and samples (m) on each. In particular, these datasets have many variables and very few samples (m ≪n). Because of the size of the problems, we ran only BCD-IC and BIG-QUIC for these test cases. For the first three tests in Table 2, λ was chosen so that the solution matrix has about 10n non-zeros. For the fourth test, we choose a relatively high λ = 0.9 since the low number of samples causes the solutions with smaller λ’s to be quite dense. BCD-IC is run with block size of 256 for all the tests in Table 2. We found these datasets to be more challenging than the synthetic experiments above. Still, both algorithms BCD-IC and BIG-QUIC manage to estimate the inverse covariance matrix in reasonable time. As in the synthetic case, BCD-IC outperforms BIG-QUIC in all test cases. 
BCD-IC requires a smaller number of iterations to converge, which translates into shorter timings. Moreover, the average time of each BCD-IC iteration is faster than that of BIG-QUIC. code name Description n m λ ∥Σ−1∗∥0 BCD-IC BIG-QUIC GSE1898 Liver cancer 21, 794 182 0.7 293,845 788.3s (7) 5,079.5s (12) GSE20194 Breast cancer 22, 283 278 0.7 197,953 452.9s (8) 2,810.6s (10) GSE17951 Prostate cancer 54, 675 154 0.78 558,929 1,621.9s (6) 8,229.7s (9) GSE14322 Liver cancer 104, 702 76 0.9 4,973,476 55,314.8s (9) 127,199s (14) Table 2: Gene expression results. ∥Σ−1∗∥0 denotes the number of non-zeros in the estimated covariance matrix. For each run, timings are reported in seconds and number of iterations in parentheses. 5 Conclusions In this work we introduced a Block-Coordinate Descent method for solving the sparse inverse covariance problem. Our method has a relatively low memory footprint, and therefore it is especially attractive for solving large-scale instances of the problem. It solves the problem by iterating and updating the matrix block by block, where each block is chosen as a subset of columns and respective rows. For each block sub-problem, a proximal Newton method is applied, requiring a solution of a LASSO problem to find the descent direction. Because the update is limited to a subset of columns and rows, we are able to store the gradient and Hessian for each block, and enjoy an efficient linesearch procedure. Numerical results show that for medium-to-large scale experiments our algorithm is faster than the state-of-the-art methods, especially when the problem is relatively hard. Acknowledgement: The authors would like to thank Prof. Irad Yavneh for his valuable comments and guidance throughout this work. The research leading to these results has received funding from the European Union’s - Seventh Framework Programme (FP7/2007-2013) under grant agreement no 623212 MC Multiscale Inversion. 8 References [1] L. Armijo. 
Minimization of functions having lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966. [2] O. Banerjee, L. El Ghaoui, and A. d’Aspremont. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. J. of Machine Learning Research, 9:485–516, 2008. [3] O. Banerjee, L. El Ghaoui, A. d’Aspremont, and G. Natsoulis. Convex optimization techniques for fitting sparse gaussian graphical models. In Proceedings of the 23rd ICML, pages 89–96. ACM, 2006. [4] A. d’Aspremont, O. Banerjee, and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and App., 30(1):56–66, 2008. [5] A. P. Dempster. Covariance selection. Biometrics, pages 157–175, 1972. [6] M. Elad. Sparse and redundant representations: from theory to applications in signal and image processing. Springer, 2010. [7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008. [8] D. Guillot, B. Rajaratnam, B. T. Rolfs, A. Maleki, and I. Wong. Iterative thresholding algorithm for sparse inverse covariance estimation. NIPS, Lake Tahoe CA, 2012. [9] J. Honorio and T. S. Jaakkola. Inverse covariance estimation for high-dimensional data in linear time and space: Spectral methods for riccati and sparse models. In Proc. of the 29th Conference on UAI, 2013. [10] Cho-Jui Hsieh, Inderjit Dhillon, Pradeep Ravikumar, and Arindam Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS 25, pages 2339–2347, 2012. [11] Cho-Jui Hsieh, Matyas A Sustik, Inderjit Dhillon, Pradeep Ravikumar, and Russell Poldrack. Big & Quic: Sparse inverse covariance estimation for a million variables. In NIPS 26, pages 3165–3173, 2013. [12] Cho-Jui Hsieh, Matyas A Sustik, Inderjit S Dhillon, and Pradeep D Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. 
In NIPS 24, pages 2330–2338, 2011. [13] George Karypis and Vipin Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 20(1):359–392, 1998. [14] Lu Li and Kim-Chuan Toh. An inexact interior point method for l-1 regularized sparse covariance selection. Mathematical Programming Computation, 2(3-4):291–315, 2010. [15] R. Mazumder and T. Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. The Journal of Machine Learning Research, 13:781–794, 2012. [16] Peder A Olsen, Figen ¨oztoprak, Jorge Nocedal, and Steven J Rennie. Newton-like methods for sparse inverse covariance estimation. In NIPS 25, pages 764–772, 2012. [17] Y. Saad. Iterative methods for sparse linear systems, 2nd edition. SIAM, 2003. [18] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996. [19] E. Treister and I. Yavneh. A multilevel iterated-shrinkage approach to l1 penalized least-squares minimization. Signal Processing, IEEE Transactions on, 60(12):6319–6329, 2012. [20] Paul Tseng and Sangwoon Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1-2):387–423, 2009. [21] Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang. A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. SIAM Sci. Comp., 32(4):1832–1857, 2010. [22] M. Zibulevsky and M. Elad. L1-l2 optimization in signal and image processing. Signal Processing Magazine, IEEE, 27(3):76–88, May 2010. 9
|
2014
|
113
|
5,196
|
Local Decorrelation for Improved Pedestrian Detection

Woonhyun Nam∗ (StradVision, Inc., woonhyun.nam@stradvision.com), Piotr Dollár (Microsoft Research, pdollar@microsoft.com), Joon Hee Han (POSTECH, Republic of Korea, joonhan@postech.ac.kr)

Abstract

Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art.

1 Introduction

In recent years object detectors have undergone an impressive transformation [11, 32, 14]. Nevertheless, boosted detectors remain extraordinarily successful for fast detection of quasi-rigid objects. Such detectors were first proposed by Viola and Jones in their landmark work on efficient sliding window detection that made face detection practical and commercially viable [35].
This initial architecture remains largely intact today: boosting [31, 12] is used to train and combine decision trees and a cascade is employed to allow for fast rejection of negative samples. Details, however, have evolved considerably; in particular, significant progress has been made on the feature representation [6, 9, 2] and cascade architecture [3, 8]. Recent boosted detectors [1, 7] achieve state-of-the-art accuracy on modern benchmarks [10, 22] while retaining computational efficiency.

While boosted detectors have evolved considerably over the past decade, decision trees with orthogonal (single feature) splits – also known as axis-aligned decision trees – remain popular and predominant. A possible explanation for the persistence of orthogonal splits is their efficiency: oblique (multiple feature) splits incur considerable computational cost during both training and detection. Nevertheless, oblique trees can hold considerable advantages. In particular, Menze et al. [23] recently demonstrated that oblique trees used in conjunction with random forests are quite effective given high dimensional data with heavily correlated features.

To achieve similar advantages while avoiding the computational expense of oblique trees, we instead take inspiration from recent work by Hariharan et al. [15] and propose to decorrelate features prior to applying orthogonal trees. To do so we introduce an efficient feature transform that removes correlations in local image neighborhoods (as opposed to decorrelating features globally as in [15]). The result is an overcomplete but locally decorrelated representation that is ideally suited for use with orthogonal trees. In fact, orthogonal trees with our locally decorrelated features require estimation of fewer parameters and actually outperform oblique trees trained over the original features.

∗This research was performed while W.N. was a postdoctoral researcher at POSTECH.
Figure 1: A comparison of boosting of orthogonal and oblique trees on highly correlated data while varying the number (T) and depth (D) of the trees. Observe that orthogonal trees generalize poorly as the topology of the decision boundary is not well aligned to the natural topology of the data.

We evaluate boosted decision tree learning with decorrelated features in the context of pedestrian detection. As our baseline we utilize the aggregated channel features (ACF) detector [7], a popular, top-performing detector for which source code is available online. Coupled with use of deeper trees and a denser sampling of the data, the improvement obtained using our locally decorrelated channel features (LDCF) is substantial. While in the past year the use of deep learning [25], motion features [27], and multi-resolution models [36] has brought the log-average miss rate (MR) down to under 40% on the Caltech Pedestrian Dataset [10], LDCF reduces MR to under 25%. This translates to a nearly tenfold reduction in false positives over the (very recent) state-of-the-art.

The paper is organized as follows. In §2 we review orthogonal and oblique trees and demonstrate that orthogonal trees trained on decorrelated data can be as effective as, or more effective than, oblique trees trained on the original data. We introduce the baseline in §3 and in §4 show that use of oblique trees improves results but at considerable computational expense. Next, in §5, we demonstrate that orthogonal trees trained with locally decorrelated features are efficient and effective. Experiments and results are presented in §6. We begin by briefly reviewing related work.

1.1 Related Work

Pedestrian Detection: Recent work in pedestrian detection includes use of deformable part models and their extensions [11, 36, 26], convolutional nets and deep learning [33, 37, 25], and approaches that focus on optimization and learning [20, 18, 34]. Boosted detectors are also widely used.
In particular, the channel features detectors [9, 1, 2, 7] are a family of conceptually straightforward and efficient detectors based on boosted decision trees computed over multiple feature channels such as color, gradient magnitude, gradient orientation and others. Current top results on the INRIA [6] and Caltech [10] Pedestrian Datasets include instances of the channel features detector with additional mid-level edge features [19] and motion features [27], respectively. Oblique Decision Trees: Typically, decision trees are trained with orthogonal (single feature) splits; however, the extension to oblique (multiple feature) splits is fairly intuitive and well known, see e.g. [24]. In fact, Breiman’s foundational work on random forests [5] experimented with oblique trees. Recently there has been renewed interest in random forests with oblique splits [23, 30] and Marin et al. [20] even applied such a technique to pedestrian detection. Likewise, while typically orthogonal trees are used with boosting [12], oblique trees can easily be used instead. The contribution of this work is not the straightforward coupling of oblique trees with boosting, rather, we propose a local decorrelation transform that eliminates the necessity of oblique splits altogether. Decorrelation: Decorrelation is a common pre-processing step for classification [17, 15]. In recent work, Hariharan et al. [15] proposed an efficient scheme for estimating covariances between HOG features [6] with the goal of replacing linear SVMs with LDA and thus allowing for fast training. Hariharan et al. demonstrated that the global covariance matrix for a detection window can be estimated efficiently as the covariance between two features should depend only on their relative offset. Inspired by [15], we likewise exploit the stationarity of natural image statistics, but instead propose to estimate a local covariance matrix shared across all image patches. 
Next, rather than applying global decorrelation, which would be computationally prohibitive for sliding window detection with a nonlinear classifier¹, we instead propose to apply an efficient local decorrelation transform. The result is an overcomplete representation well suited for use with orthogonal trees.

¹Global decorrelation coupled with a linear classifier is efficient as the two linear operations can be merged.

Figure 2: A comparison of boosting with orthogonal decision trees (T = 5) on transformed data. Orthogonal trees with both decorrelated and PCA-whitened features show improved generalization while ZCA-whitening is ineffective. Decorrelating the features is critical, while scaling is not.

2 Boosted Decision Trees with Correlated Data

Boosting is a simple yet powerful tool for classification and can model complex non-linear functions [31, 12]. The general idea is to train and combine a number of weak learners into a more powerful strong classifier. Decision trees are frequently used as the weak learner in conjunction with boosting, and in particular orthogonal decision trees, that is, trees in which every split is a threshold on a single feature, are especially popular due to their speed and simplicity [35, 7, 1]. The representational power obtained by boosting orthogonal trees is not limited by use of orthogonal splits; however, the number and depth of the trees necessary to fit the data may be large. This can lead to complex decision boundaries and poor generalization, especially given highly correlated features. Figure 1(a)-(c) shows the result of boosted orthogonal trees on correlated data. Observe that the orthogonal trees generalize poorly even as we vary the number and depth of the trees. Decision trees with oblique splits can more effectively model data with correlated features as the topology of the resulting classifier can better match the natural topology of the data [23].
In oblique trees, every split is based on a linear projection of the data z = w⊺x followed by thresholding. The projection w can be sparse (and orthogonal splits are a special case with ∥w∥₀ = 1). While in principle numerous approaches can be used to obtain w, in practice linear discriminant analysis (LDA) is a natural choice for obtaining discriminative splits efficiently [16]. LDA aims to minimize within-class scatter while maximizing between-class scatter. w is computed from class-conditional mean vectors µ+ and µ− and a class-independent covariance matrix Σ as follows:

    w = Σ^(-1)(µ+ − µ−).    (1)

The covariance may be degenerate if the amount or underlying dimension of the data is low; in this case LDA can be regularized by using (1 − ϵ)Σ + ϵI in place of Σ. In Figure 1(d) we apply boosted oblique trees trained with LDA on the same data as before. Observe that the resulting decision boundary better matches the underlying data distribution and shows improved generalization.

The connection between whitening and LDA is well known [15]. Specifically, LDA simplifies to a trivial classification rule on whitened data (data whose covariance is the identity). Let Σ = QΛQ⊺ be the eigendecomposition of Σ, where Q is an orthogonal matrix and Λ is a diagonal matrix of eigenvalues. W = QΛ^(-1/2)Q⊺ = Σ^(-1/2) is known as a whitening matrix because the covariance of x′ = Wx is the identity matrix. Given whitened data and means, LDA can be interpreted as learning the trivial projection w′ = µ′+ − µ′− = Wµ+ − Wµ−, since w′⊺x′ = w′⊺Wx = w⊺x.

Can whitening or a related transform likewise simplify learning of boosted decision trees? Using standard terminology [17], we define the following related transforms: decorrelation (Q⊺), PCA-whitening (Λ^(-1/2)Q⊺), and ZCA-whitening (QΛ^(-1/2)Q⊺). Figure 2 shows the result of boosting orthogonal trees on the variously transformed features, using the same data as before. Observe that with decorrelated and PCA-whitened features orthogonal trees show improved generalization.
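The LDA split direction of Eq. (1) and its whitening interpretation can be checked numerically. The sketch below (NumPy, toy 2-D two-class data, with the ϵ-regularization mentioned above) is illustrative only; the data, seed, and dimensions are arbitrary choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data with a shared, strongly correlated covariance.
Sigma_true = np.array([[2.0, 1.8], [1.8, 2.0]])
L = np.linalg.cholesky(Sigma_true)
pos = rng.standard_normal((2000, 2)) @ L.T + [1.0, 0.0]
neg = rng.standard_normal((2000, 2)) @ L.T - [1.0, 0.0]

mu_pos, mu_neg = pos.mean(0), neg.mean(0)
centered = np.vstack([pos - mu_pos, neg - mu_neg])
Sigma = centered.T @ centered / len(centered)   # class-independent covariance
eps = 0.1
Sigma = (1 - eps) * Sigma + eps * np.eye(2)     # regularized as in the text

# Eq. (1): the oblique split direction.
w = np.linalg.solve(Sigma, mu_pos - mu_neg)

# Whitening view: with W = Q Lambda^(-1/2) Q^T = Sigma^(-1/2), LDA reduces to
# the trivial projection w' = W mu_pos - W mu_neg on whitened data x' = W x.
lam, Q = np.linalg.eigh(Sigma)
W = Q @ np.diag(lam ** -0.5) @ Q.T
w_prime = W @ (mu_pos - mu_neg)

x = rng.standard_normal(2)
assert np.allclose(w_prime @ (W @ x), w @ x)    # w'^T x' == w^T x

# The related transforms: decorrelation (Q^T), PCA-whitening (Lambda^(-1/2) Q^T).
# They differ only by a per-feature scaling, so thresholds on single features
# (orthogonal splits) induce identical partitions of the data.
decorr = centered @ Q
pca = centered @ Q / np.sqrt(lam)
assert np.allclose(pca * np.sqrt(lam), decorr)
```

The final assertion is the numerical counterpart of the observation that orthogonal trees are invariant to per-feature scaling, so decorrelation and PCA-whitening give identical splits.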
In fact, as each split is invariant to scaling of individual features, orthogonal trees with PCA-whitened and decorrelated features give identical results. Decorrelating the features is critical, while scaling is not. The intuition is clear: each split operates on a single feature, which is most effective if the features are decorrelated. Interestingly, the standard ZCA-whitened transform used by LDA is ineffective: while the resulting features are not technically correlated, due to the additional rotation by Q each resulting feature is a linear combination of features obtained by PCA-whitening.

3 Baseline Detector (ACF)

We next briefly review our baseline detector and evaluation benchmark. This will allow us to apply the ideas from §2 to object detection in subsequent sections. In this work we utilize the channel features detectors [9, 7, 1, 2], a family of conceptually straightforward and efficient detectors for which variants have been utilized for diverse tasks such as pedestrian detection [10], sign recognition [22] and edge detection [19]. Specifically, for our experiments we focus on pedestrian detection and employ the aggregate channel features (ACF) variant [7] for which code is available online².

Given an input image, ACF computes several feature channels, where each channel is a per-pixel feature map such that output pixels are computed from corresponding patches of input pixels (thus preserving image layout). We use the same channels as [7]: normalized gradient magnitude (1 channel), histogram of oriented gradients (6 channels), and LUV color channels (3 channels), for a total of 10 channels. We downsample the channels by 2x and features are single pixel lookups in the aggregated channels. Thus, given a h × w detection window, there are h/2 · w/2 · 10 candidate features (channel pixel lookups).
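The channel layout and feature count above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the toolbox implementation: for brevity the gradient-magnitude channel here is unnormalized, and `aggregate` is an assumed helper name.

```python
import numpy as np

def aggregate(channel, shrink=2):
    """Mean-pool a per-pixel channel map into shrink x shrink blocks."""
    h, w = channel.shape
    return channel[:h - h % shrink, :w - w % shrink] \
        .reshape(h // shrink, shrink, w // shrink, shrink).mean(axis=(1, 3))

img = np.random.default_rng(0).random((128, 64))

# One of the 10 channels: gradient magnitude (unnormalized, for brevity).
gy, gx = np.gradient(img)
grad_mag = np.sqrt(gx ** 2 + gy ** 2)

acf = aggregate(grad_mag)       # features are single-pixel lookups in this map
assert acf.shape == (64, 32)    # h/2 x w/2 lookups per channel
assert acf.size * 10 == 20480   # h/2 * w/2 * 10 candidate features per window
```

The last line reproduces the 128 × 64 window feature count used in §4.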
We use RealBoost [12] with multiple rounds of bootstrapping to train and combine 2048 depth-3 decision trees over these features to distinguish object from background. Soft-cascades [3] and an efficient multiscale sliding-window approach are employed. Our baseline uses slightly altered parameters from [7] (RealBoost, deeper trees, and less downsampling); this increases model capacity and benefits our final approach as we report in detail in §6.

Current practice is to use the INRIA Pedestrian Dataset [6] for parameter tuning, with the test set serving as a validation set, see e.g. [20, 2, 9]. We utilize this dataset in much the same way and report full results on the more challenging Caltech Pedestrian Dataset [10]. Following the methodology of [10], we summarize performance using the log-average miss rate (MR) between 10^-2 and 10^0 false positives per image. We repeat all experiments 10 times and report the mean MR and standard error for every result. Due to the use of a log-log scale, even small improvements in (log-average) MR correspond to large reductions in false positives. On INRIA, our (slightly modified) baseline version of ACF scores at 17.3% MR compared to 17.0% MR for the model reported in [7].

4 Detection with Oblique Splits (ACF-LDA)

In this section we modify the ACF detector to enable oblique splits and report the resulting gains. Recall that given input x, at each split of an oblique decision tree we need to compute z = w⊺x for some projection w and threshold the result. For our baseline pedestrian detector, we use 128 × 64 windows where each window is represented by a feature vector x of size 128/2 · 64/2 · 10 = 20480 (see §3). Given the high dimensionality of the input x coupled with the use of thousands of trees in a typical boosted classifier, for efficiency w must be sparse.

Local w: We opt to use w's that correspond to local m×m blocks of pixels.
In other words, we treat x as a h/2 × w/2 × 10 tensor and allow w to operate over any m × m × 1 patch in a single channel of x. Doing so holds multiple advantages. Most importantly, each pixel has strongest correlations to spatially nearby pixels [15]. Since oblique splits are expected to help most when features are strongly correlated, operating over local neighborhoods is a natural choice. In addition, using local w allows for faster lookups due to the locality of adjacent pixels in memory.

Complexity: First, let us consider the complexity of training the oblique splits. Let d = h/2 · w/2 be the window size of a single channel. The number of patches per channel in x is about d, thus naively training a single split means applying LDA d times – once per patch – and keeping the w with lowest error. Instead of computing d independent matrices Σ per channel, for efficiency, we compute a single d × d covariance matrix for the entire window, and reconstruct the individual m² × m² Σ's by fetching appropriate entries from it. A similar trick can be used for the µ's. Computing the window covariance is O(nd²) given n training examples (and could be made faster by omitting unnecessary elements). Inverting each Σ, the bottleneck of computing Eq. (1), is O(dm⁶) but independent of n and thus fairly small as n ≫ m. Finally, computing z = w⊺x over all n training examples and d projections is O(ndm²). Given the high complexity of each step, a naive brute-force approach for training is infeasible.

Speedup: While the weights over training examples change at every boosting iteration and after every tree split, in practice we find it is unnecessary to recompute the projections that frequently.
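The trick of reconstructing each local m² × m² covariance by fetching entries from one window-level covariance can be sketched as follows. This is a NumPy illustration on a toy single-channel window (sizes, seed, and the `local_sigma` helper are assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, m = 8, 6, 3                     # toy single-channel window; patch size m
n = 4000
X = rng.standard_normal((n, H, W))
X += 0.5 * np.roll(X, 1, axis=2)      # make nearby pixels correlated

flat = X.reshape(n, -1)
flat -= flat.mean(0)
Sigma_win = flat.T @ flat / n         # one d x d covariance, d = H * W

def local_sigma(y, x):
    """Fetch the m^2 x m^2 covariance of the patch at (y, x) from Sigma_win."""
    idx = [(y + i) * W + (x + j) for i in range(m) for j in range(m)]
    return Sigma_win[np.ix_(idx, idx)]

# Matches the covariance computed directly from that patch's pixels.
patch = X[:, 2:2 + m, 1:1 + m].reshape(n, -1)
patch -= patch.mean(0)
assert np.allclose(local_sigma(2, 1), patch.T @ patch / n)
```

One O(nd²) pass thus replaces roughly d separate O(nm⁴) covariance estimates, as the complexity analysis above describes.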
Table 1, rows 2-4, shows the results of ACF with oblique splits, updated every T boosting iterations (denoted by ACF-LDA-T). While more frequent updates improve accuracy, ACF-LDA-16 has negligibly higher MR than ACF-LDA-4 but a nearly fourfold reduction in training time (timed using 12 cores). Training the brute force version of ACF-LDA, updated at every iteration and each tree split (7 interior nodes per depth-3 tree), would have taken about 5 · 4 · 7 = 140 hours. For these results we used regularization of ϵ = .1 and patch size of m = 5 (effect of varying m is explored in §6).

               Shared Σ    T     Miss Rate     Training
ACF               –        –     17.3 ± .33      4.93m
ACF-LDA-4         No       4     14.9 ± .37    303.57m
ACF-LDA-16        No      16     15.1 ± .28     78.11m
ACF-LDA-∞         No       ∞     17.0 ± .22      5.82m
ACF-LDA∗-4        Yes      4     14.7 ± .29    194.26m
ACF-LDA∗-16       Yes     16     15.1 ± .12     51.19m
ACF-LDA∗-∞        Yes      ∞     16.4 ± .17      5.79m
LDCF              Yes      –     13.7 ± .15      6.04m

Table 1: A comparison of boosted trees with orthogonal and oblique splits.

Shared Σ: The crux and computational bottleneck of ACF-LDA is the computation and application of a separate covariance Σ at each local neighborhood. In recent work on training linear object detectors using LDA, Hariharan et al. [15] exploited the observation that the statistics of natural images are translationally invariant and therefore the covariance between two features should depend only on their relative offset. Furthermore, as positives are rare, [15] showed that the covariances can be precomputed using natural images. Inspired by these observations, we propose to use a single, fixed covariance Σ shared across all local image neighborhoods. We precompute one Σ per channel and do not allow it to vary spatially or with boosting iteration. Table 1, rows 5-7, shows the results of ACF with oblique splits using fixed Σ, denoted by ACF-LDA∗. As before, the µ's and resulting w are updated every T iterations. As expected, training time is reduced relative to ACF-LDA.

²http://vision.ucsd.edu/~pdollar/toolbox/doc/
Surprisingly, however, accuracy improves as well, presumably due to the implicit regularization effect of using a fixed Σ. This is a powerful result we will exploit further.

Summary: ACF with local oblique splits and a single shared Σ (ACF-LDA∗-4) achieves 14.7% MR compared to 17.3% MR for ACF with orthogonal splits. The 2.6% improvement in log-average MR corresponds to a nearly twofold reduction in false positives but comes at considerable computational cost. In the next section, we propose an alternative, more efficient approach for exploiting the use of a single shared Σ capturing correlations in local neighborhoods.

5 Locally Decorrelated Channel Features (LDCF)

We now have all the necessary ingredients to introduce our approach. We have made the following observations: (1) oblique splits learned with LDA over local m × m patches improve results over orthogonal splits, (2) a single covariance matrix Σ can be shared across all patches per channel, and (3) orthogonal trees with decorrelated features can potentially be used in place of oblique trees. This suggests the following approach: for every m × m patch p in x, we can create a decorrelated representation by computing Q⊺p, where QΛQ⊺ is the eigendecomposition of Σ as before, followed by use of orthogonal trees.

However, such an approach is computationally expensive. First, due to use of overlapping patches, computing Q⊺p for every overlapping patch results in an overcomplete representation with a factor m² increase in feature dimensionality. To reduce dimensionality, we only utilize the top k eigenvectors in Q, resulting in k < m² features per pixel. The intuition is that the top eigenvectors capture the salient neighborhood structure. Our experiments in §6 confirm this: using as few as k = 4 eigenvectors per channel for patches of size m = 5 is sufficient.
As our second speedup, we observe that the projection Q⊺p can be computed by a series of k convolutions between a channel image and each m × m filter reshaped from its corresponding eigenvector (column of Q). This is possible because the covariance matrix Σ is shared across all patches per channel and hence the derived Q is likewise spatially invariant. Decorrelating all 10 channels in an entire feature pyramid for a 640 × 480 image takes about .5 seconds.

Figure 3: Top-left: autocorrelation for each channel. Bottom-left: learned decorrelation filters. Right: visualization of original and decorrelated channels averaged over positive training examples.

In summary, we modify ACF by taking the original 10 channels and applying k = 4 decorrelating (linear) filters per channel. The result is a set of 40 locally decorrelated channel features (LDCF). To further increase efficiency, we downsample the decorrelated channels by a factor of 2x, which has negligible impact on accuracy but reduces feature dimension to the original value. Given the new locally decorrelated channels, all other steps of ACF training and testing are identical. The extra implementation effort is likewise minimal: given the decorrelation filters, a few lines of code suffice to convert ACF into LDCF. To further improve clarity, all source code for LDCF will be released.

Results of the LDCF detector on the INRIA dataset are given in the last row of Table 1. The LDCF detector (which uses orthogonal splits) improves accuracy over ACF with oblique splits by an additional 1% MR. Training time is significantly faster, and indeed, is only ∼1 minute longer than for the original ACF detector. More detailed experiments and results are reported in §6. We conclude by (1) describing the estimation of Σ for each channel, (2) showing various visualizations, and (3) discussing the filters themselves and connections to known filters.
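The equivalence claimed above — projecting every overlapping patch onto an eigenvector equals filtering the channel with that eigenvector reshaped into an m × m kernel — can be verified directly. The sketch below uses a toy random covariance in place of the filters learned from natural images, and SciPy's `correlate2d` for the filtering pass:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
m, k = 5, 4
d = m * m

# A toy shared m^2 x m^2 patch covariance and its top-k eigenvectors.
A = rng.standard_normal((d, d))
lam, Q = np.linalg.eigh(A @ A.T)
top = Q[:, ::-1][:, :k]               # columns: eigenvectors, largest first

img = rng.standard_normal((16, 16))   # one channel image
Hs = Ws = 16 - m + 1                  # number of overlapping patches per axis

for j in range(k):
    f = top[:, j].reshape(m, m)       # eigenvector reshaped into an m x m filter
    # Explicit per-patch projection q_j^T p for every overlapping patch p ...
    proj = np.array([[top[:, j] @ img[y:y + m, x:x + m].ravel()
                      for x in range(Ws)] for y in range(Hs)])
    # ... equals one (valid) cross-correlation pass with the reshaped filter.
    filtered = correlate2d(img, f, mode='valid')
    assert np.allclose(proj, filtered)
```

Spatial invariance of Σ is what makes this work: a single fixed filter bank serves every location in the channel.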
Estimating Σ: We can estimate a spatially constant Σ for each channel using any large collection of natural images. Σ for each channel is represented by a spatial autocorrelation function Σ_{(x,y),(x+∆x,y+∆y)} = C(∆x, ∆y). Given a collection of natural images, we first estimate a separate autocorrelation function for each image and then average the results. Naive computation of the final function is O(np²) but the Wiener–Khinchin theorem reduces the complexity to O(np log p) via the FFT [4], where n and p denote the number of images and pixels per image, respectively.

Visualization: Fig. 3, top-left, illustrates the estimated autocorrelations for each channel. Nearby features are highly correlated and oriented gradients are spatially correlated along their orientation due to curvilinear continuity [15]. Fig. 3, bottom-left, shows the decorrelation filters for each channel obtained by reshaping the largest eigenvectors of Σ. The largest eigenvectors are smoothing filters while the smaller ones resemble increasingly higher-frequency filters. The corresponding eigenvalues decay rapidly and in practice we use the top k = 4 filters. Observe that the decorrelation filters for oriented gradients are aligned to their orientation. Finally, Fig. 3, right, shows original and decorrelated channels averaged over positive training examples.

Discussion: Our decorrelation filters are closely related to sinusoidal, DCT basis, and Gaussian derivative filters. Spatial interactions in natural images are often well described by Markov models [13] and first-order stationary Markov processes are known to have sinusoidal KLT bases [29]. In particular, for the LUV color channels, our filters are similar to the discrete cosine transform (DCT) bases that are often used to approximate the KLT. For oriented gradients, however, the decorrelation filters are no longer well modeled by the DCT bases (note also that our filters are applied densely whereas the DCT typically uses block processing).
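The Wiener–Khinchin shortcut used in the Σ estimation above can be sketched on a single toy image (the paper's estimator additionally averages over many natural images). This NumPy sketch computes the circular autocorrelation two ways and checks they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
img -= img.mean()
n = img.size

# Wiener-Khinchin: the inverse FFT of the power spectrum is the (circular)
# autocorrelation, computed in O(p log p) instead of O(p^2).
power = np.abs(np.fft.fft2(img)) ** 2
C_fft = np.fft.ifft2(power).real / n

# Naive O(p^2) circular autocorrelation for comparison.
C_naive = np.empty_like(img)
for dy in range(16):
    for dx in range(16):
        C_naive[dy, dx] = np.mean(img * np.roll(np.roll(img, -dy, axis=0),
                                                -dx, axis=1))

assert np.allclose(C_fft, C_naive)
```

C_fft[∆y, ∆x] here plays the role of C(∆x, ∆y) for one image; averaging such estimates over a collection of images gives the shared, spatially constant Σ.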
Alternatively, we can interpret our filters as Gaussian derivative filters. Assume that the autocorrelation is modeled by a squared-exponential function C(∆x) = exp(−∆x²/(2l²)), which is fairly reasonable given the estimation results in Fig. 3. In 1D, the kth largest eigenfunction of such an autocorrelation function is a (k − 1)-order Gaussian derivative filter [28]. It is straightforward to extend the result to an anisotropic multivariate case, in which case the eigenfunctions are Gaussian directional derivative filters similar to our filters.

Figure 4: (a-b) Use of k = 4 local decorrelation filters of size m = 5 gives optimal performance. (c) Increasing tree depth while simultaneously enlarging the quantity of data available for training can have a large impact on accuracy (blue stars indicate optimal depth at each sampling interval).

                      description                            # channels   miss rate
1. ACF (modified)     baseline                                  10       17.3 ± .33
2. LDCF small λ       decorrelation w k smallest filters        10k      61.7 ± .28
3. LDCF random        filtering w k random filters              10k      15.6 ± .26
4. LDCF LUV only      decorrelation of LUV channels only        3k + 7   16.2 ± .37
5. LDCF grad only     decorrelation of grad channels only       3 + 7k   14.9 ± .29
6. LDCF constant      decorrelation w constant filters          10k      14.2 ± .34
7. LDCF               proposed approach                         10k      13.7 ± .15

Table 2: Locally decorrelated channels compared to alternate filtering strategies. See text.

6 Experiments

In this section, we demonstrate the effectiveness of locally decorrelated channel features (LDCF) in the context of pedestrian detection. We: (1) study the effect of parameter settings, (2) test variations of our approach, and finally (3) compare our results with the state-of-the-art.

Parameters: LDCF has two parameters: the count and size of the decorrelation filters. Fig. 4(a) and (b) show the results of LDCF on the INRIA dataset while varying the filter count (k) and size (m), respectively. Use of k = 4 decorrelation filters of size m = 5 improves performance by up to ∼4% MR compared to ACF.
Inclusion of additional higher-frequency filters or use of larger filters can cause performance degradation. For all remaining experiments we fix k = 4 and m = 5.

Variations: We test variants of LDCF and report results on INRIA in Table 2. LDCF (row 7) outperforms all variants, including the baseline (1). Filtering the channels with the smallest k eigenvectors (2) or k random filters (3) performs worse. Local decorrelation of only the color channels (4) or only the gradient channels (5) is inferior to decorrelation of all channels. Finally, we test constant decorrelation filters obtained from the intensity channel L that resemble the first k DCT basis filters. Use of unique filters per channel outperforms use of constant filters across all channels (6).

Model Capacity: Use of locally decorrelated features implicitly allows for richer, more effective splitting functions, increasing modeling capacity and generalization ability. Inspired by their success, we explore additional strategies for augmenting model capacity. For the following experiments, we rely solely on the training set of the Caltech Pedestrian Dataset [10]. Of the 71 minute-long training videos (∼128k images), we use every fourth video as validation data and the rest for training. On the validation set, LDCF outperforms ACF by a considerable margin, reducing MR from 46.2% to 41.7%. We first augment model capacity by increasing the number of trees twofold (to 4096) and the sampled negatives fivefold (to 50k). Surprisingly, doing so reduces MR by an additional 4%. Next, we experiment with increasing maximum tree depth while simultaneously enlarging the amount of data available for training. Typically, every 30th image in the Caltech dataset is used for training and testing. Instead, Figure 4(c) shows validation performance of LDCF with different tree depths while varying the training data sampling interval. The impact of maximum depth on performance is quite large.
At a dense sampling interval of every 4th frame, use of depth-5 trees (up from depth-2 for the original approach) improves performance by an additional 5% to 32.6% MR. Note that consistent with the generalization bounds of boosting [31], use of deeper trees requires more data.

Figure 5: A comparison of our LDCF detector with state-of-the-art pedestrian detectors (log-average miss rate vs. false positives per image). (a) INRIA Pedestrian Dataset: LDCF scores 14% MR, behind only SketchTokens at 13%. (b) Caltech Pedestrian Dataset: LDCF scores 25% MR, the best reported result.

INRIA Results: In Figure 5(a) we compare LDCF with state-of-the-art detectors on INRIA [6] using benchmark code maintained by [10]. Since the INRIA dataset is oft-used as a validation set, including in this work, we include these results for completeness only. LDCF is essentially tied for second place with Roerei [2] and Franken [21] and outperformed by ∼1% MR by SketchTokens [19]. These approaches all belong to the family of channel features detectors, and as the improvements proposed in this work are orthogonal, the methods could potentially be combined.

Caltech Results: We present our main result on the Caltech Pedestrian Dataset [10], see Fig. 5(b), generated using the official evaluation code available online³. The Caltech dataset has become the standard for evaluating pedestrian detectors and the latest methods based on deep learning (JointDeep) [25], multi-resolution models (MT-DPM) [36] and motion features (ACF+SDt) [27] achieve under 40% log-average MR.
For a complete comparison, we first present results for an augmented capacity ACF model which uses more (4096) and deeper (depth-5) trees trained with RealBoost using dense sampling of the training data (every 4th image). See the preceding note on model capacity for details and motivation. This augmented model (ACF-Caltech+) achieves 29.8% MR, a considerable gain of nearly 10% MR over previous methods, including the baseline version of ACF (ACF-Caltech). With identical parameters, locally decorrelated channel features (LDCF) further reduce error to 24.9% MR with substantial gains at higher recall. Overall, this is a massive improvement and represents a nearly 10x reduction in false positives over the previous state-of-the-art.

7 Conclusion

In this work we have presented a simple, principled approach for improving boosted object detectors. Our core observation was that effective but expensive oblique splits in decision trees can be replaced by orthogonal splits over locally decorrelated data. Moreover, due to the stationary statistics of image features, the local decorrelation can be performed efficiently via convolution with a fixed filter bank precomputed from natural images. Our approach is general, simple and fast.

Our method showed dramatic improvement over the previous state-of-the-art. While some of the gain was from increasing model capacity, use of local decorrelation gave a clear and significant boost. Overall, we reduced false positives tenfold on Caltech. Such large gains are fairly rare.

In the present work we did not decorrelate features across channels (decorrelation was applied independently per channel). This is a clear future direction. Testing local decorrelation in the context of other classifiers (e.g. convolutional nets or linear classifiers as in [15]) would also be interesting.
While the proposed locally decorrelated channel features (LDCF) require only modest modification to existing code, we will release all source code used in this work to ease reproducibility.

³ http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/

References
[1] R. Benenson, M. Mathias, R. Timofte, and L. Van Gool. Pedestrian detection at 100 frames per second. In CVPR, 2012.
[2] R. Benenson, M. Mathias, T. Tuytelaars, and L. Van Gool. Seeking the strongest rigid detector. In CVPR, 2013.
[3] L. Bourdev and J. Brandt. Robust object detection via soft cascade. In CVPR, 2005.
[4] G. Box, G. Jenkins, and G. Reinsel. Time Series Analysis: Forecasting and Control. Prentice Hall, 1994.
[5] L. Breiman. Random forests. Machine Learning, 45(1):5–32, Oct. 2001.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[7] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. PAMI, 2014.
[8] P. Dollár, R. Appel, and W. Kienzle. Crosstalk cascades for frame-rate pedestrian detection. In ECCV, 2012.
[9] P. Dollár, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In BMVC, 2009.
[10] P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: An evaluation of the state of the art. PAMI, 34, 2012.
[11] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 32(9):1627–1645, 2010.
[12] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The Annals of Statistics, 38(2):337–374, 2000.
[13] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. PAMI, PAMI-6(6):721–741, 1984.
[14] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[15] B. Hariharan, J. Malik, and D. Ramanan.
Discriminative decorrelation for clustering and classification. In ECCV, 2012.
[16] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2009.
[17] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[18] D. Levi, S. Silberstein, and A. Bar-Hillel. Fast multiple-part based object detection using kd-ferns. In CVPR, 2013.
[19] J. Lim, C. L. Zitnick, and P. Dollár. Sketch tokens: A learned mid-level representation for contour and object detection. In CVPR, 2013.
[20] J. Marín, D. Vázquez, A. López, J. Amores, and B. Leibe. Random forests of local experts for pedestrian detection. In ICCV, 2013.
[21] M. Mathias, R. Benenson, R. Timofte, and L. Van Gool. Handling occlusions with franken-classifiers. In ICCV, 2013.
[22] M. Mathias, R. Timofte, R. Benenson, and L. Van Gool. Traffic sign recognition - how far are we from the solution? In IJCNN, 2013.
[23] B. H. Menze, B. M. Kelm, D. N. Splitthoff, U. Koethe, and F. A. Hamprecht. On oblique random forests. In Machine Learning and Knowledge Discovery in Databases, 2011.
[24] S. K. Murthy, S. Kasif, and S. Salzberg. A system for induction of oblique decision trees. Journal of Artificial Intelligence Research, 1994.
[25] W. Ouyang and X. Wang. Joint deep learning for pedestrian detection. In ICCV, 2013.
[26] D. Park, D. Ramanan, and C. Fowlkes. Multiresolution models for object detection. In ECCV, 2010.
[27] D. Park, C. L. Zitnick, D. Ramanan, and P. Dollár. Exploring weak stabilization for motion feature extraction. In CVPR, 2013.
[28] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[29] W. Ray and R. Driver. Further decomposition of the Karhunen-Loève series representation of a stationary random process. IEEE Transactions on Information Theory, 16(6):663–668, Nov 1970.
[30] J. J. Rodriguez, L. I. Kuncheva, and C. J.
Alonso. Rotation forest: A new classifier ensemble method. PAMI, 28(10), 2006.
[31] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 1998.
[32] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv:1312.6229, 2013.
[33] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun. Pedestrian detection with unsupervised multi-stage feature learning. In CVPR, 2013.
[34] C. Shen, P. Wang, S. Paisitkriangkrai, and A. van den Hengel. Training effective node classifiers for cascade classification. IJCV, 103(3):326–347, July 2013.
[35] P. A. Viola and M. J. Jones. Robust real-time face detection. IJCV, 57(2):137–154, 2004.
[36] J. Yan, X. Zhang, Z. Lei, S. Liao, and S. Z. Li. Robust multi-resolution pedestrian detection in traffic scenes. In CVPR, 2013.
[37] X. Zeng, W. Ouyang, and X. Wang. Multi-stage contextual deep learning for pedestrian detection. In ICCV, 2013.
Graph Clustering With Missing Data: Convex Algorithms and Analysis
Ramya Korlakai Vinayak, Samet Oymak, Babak Hassibi
Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125
{ramya, soymak}@caltech.edu, hassibi@systems.caltech.edu
Abstract We consider the problem of finding clusters in an unweighted graph, when the graph is partially observed. We analyze two programs based on the convex optimization approach for low-rank matrix recovery using nuclear norm minimization: one which works for dense graphs, and one which works for both sparse and dense graphs but requires some a priori knowledge of the total cluster size. For the commonly used Stochastic Block Model, we obtain explicit bounds on the parameters of the problem (size and sparsity of clusters, the amount of observed data) and the regularization parameter that characterize the success and failure of the programs. We corroborate our theoretical findings through extensive simulations. We also run our algorithms on a real data set obtained from crowdsourcing an image classification task on the Amazon Mechanical Turk, and observe significant performance improvement over traditional methods such as k-means.
1 Introduction Clustering [1] broadly refers to the problem of identifying data points that are similar to each other. It has applications in various problems in machine learning, data mining [2, 3], social networks [4–6], bioinformatics [7, 8], etc. In this paper we focus on graph clustering [9] problems where the data is in the form of an unweighted graph. Clearly, observing the entire graph on n nodes requires measuring all n(n − 1)/2 node pairs. In most practical scenarios this is infeasible and we can only expect to have partial observations. That is, for some node pairs we know whether there exists an edge between them or not, whereas for the rest of the node pairs we do not have this knowledge. This leads us to the problem of clustering graphs with missing data.
Given the adjacency matrix of an unweighted graph, a cluster is defined as a set of nodes that are densely connected to each other compared to the rest of the nodes. We consider the problem of identifying such clusters when the input is a partially observed adjacency matrix. We use the popular Stochastic Block Model (SBM) [10], or Planted Partition Model [11], to analyze the performance of the proposed algorithms. The SBM is a random graph model in which the edge probability depends on whether the pair of nodes being considered belongs to the same cluster or not. More specifically, the edge probability is higher when both nodes belong to the same cluster. Further, we assume that each entry of the adjacency matrix of the graph is observed independently with probability r. We define the model in detail in Section 2.1.
1.1 Clustering by Low-Rank Matrix Recovery and Completion
The idea of using convex optimization for clustering has been proposed in [12–21]. While each of these works differs in certain ways (we comment on their relation to the current paper in Section 1.3), the common approach they use for clustering is inspired by recent work on low-rank matrix recovery and completion via regularized nuclear norm (trace norm) minimization [22–26]. In the case of unweighted graphs, an ideal clustered graph is a union of disjoint cliques. Given the adjacency matrix of an unweighted graph with clusters (denser connectivity inside the clusters compared to outside), we can interpret it as an ideal clustered graph with missing edges inside the clusters and erroneous edges between clusters. Recovering the low-rank matrix corresponding to the disjoint cliques is equivalent to finding the clusters. We will look at the following well-known convex program, which aims to recover and complete the low-rank matrix L from the partially observed adjacency matrix Aobs:
Simple Convex Program:
minimize_{L,S}  ∥L∥⋆ + λ∥S∥1   (1.1)
subject to  1 ≥ Li,j ≥ 0 for all i, j ∈ {1, 2, . . .
n}   (1.2)
Lobs + Sobs = Aobs   (1.3)
where λ ≥ 0 is the regularization parameter, ∥·∥⋆ is the nuclear norm (the sum of the singular values of the matrix), and ∥·∥1 is the l1-norm (the sum of the absolute values of the entries of the matrix). S is the sparse error matrix that accounts for the missing edges inside the clusters and the erroneous edges outside the clusters on the observed entries. Lobs and Sobs denote the entries of L and S that correspond to the observed part of the adjacency matrix. Program 1.1 is very simple and intuitive. Further, it does not require any information other than the observed part of the adjacency matrix. In [13], the authors analyze Program 1.1 without the constraint (1.2). While dropping (1.2) makes the convex program less effective, it does allow [13] to make use of low-rank matrix completion results for its analysis. In [16] and [21], the authors analyze Program 1.1 when the entire adjacency matrix is observed. In [17], the authors study a slightly more general program, where the regularization parameter differs between the extra edges and the missing edges; however, the adjacency matrix is again completely observed. It is not difficult to see that, when the edge probability inside the cluster is p < 1/2, Program 1.1 will (as n → ∞) return L0 = 0 as the optimal solution (if the cluster is not dense enough, it is more costly to complete the missing edges). As a result, our analysis of Program 1.1, and the main result of Theorem 1, assumes p > 1/2. Clearly, there are many instances of graphs we would like to cluster where p < 1/2. If the total size of the cluster region (i.e., the total number of edges in the clusters, denoted by |R|) is known, then the following convex program can be used, and can be shown to work for p < 1/2 (see Theorem 2).
Improved Convex Program:
minimize_{L,S}  ∥L∥⋆ + λ∥S∥1   (1.4)
subject to  1 ≥ Li,j ≥ Si,j ≥ 0 for all i, j ∈ {1, 2, . . .
n}   (1.5)
Li,j = Si,j whenever (Aobs)i,j = 0   (1.6)
sum(L) ≥ |R|   (1.7)
As before, L is the low-rank matrix corresponding to the ideal cluster structure and λ ≥ 0 is the regularization parameter. However, S is now the sparse error matrix that accounts only for the missing edges inside the clusters on the observed part of the adjacency matrix. [16] and [19] study programs similar to Program 1.4 for the case of a completely observed adjacency matrix. In [19], the constraint (1.7) is a strict equality. In [15], the authors analyze a program close to Program 1.4 but without the l1 penalty. If |R| is not known, it is possible to solve Program 1.4 for several values of |R| until the desired performance is obtained. Our empirical results, reported in Section 3, suggest that the solution is not very sensitive to the choice of |R|.
1.2 Our Contributions
• We analyze the Simple Convex Program 1.1 for the SBM with partial observations. We provide explicit bounds on the regularization parameter, as a function of the parameters of the SBM, that characterize the success and failure conditions of Program 1.1 (see the results in Section 2.2). We show that clusters that are either too small or too sparse constitute the bottleneck. Our analysis is helpful in understanding the phase transition from failure to success for the simple approach.
• We also analyze the Improved Convex Program 1.4. We explicitly characterize the conditions on the parameters of the SBM and the regularization parameter for successfully recovering clusters using this approach (see the results in Section 2.3).
• Apart from providing theoretical guarantees and corroborating them with simulation results (Section 3), we also apply Programs 1.1 and 1.4 to a real data set (Section 3.3) obtained by crowdsourcing an image labeling task on Amazon Mechanical Turk.
1.3 Related Work
In [13], the authors consider the problem of identifying clusters from partially observed unweighted graphs.
For the SBM with partial observations, they analyze Program 1.1 without constraint (1.2), and show that under certain conditions the minimum cluster size must be at least O(√(n (log n)^4 / r)) for successful recovery of the clusters. Unlike our analysis, the exact requirement on the cluster size is not known (since the constant of proportionality is not known). Also, they do not provide conditions under which the approach fails to identify the clusters. Finding explicit bounds on the constant of proportionality is critical to understanding the phase transition from failure to successfully identifying clusters. The works [14–19] analyze convex programs similar to Programs 1.1 and 1.4 for the SBM and show that the minimum cluster size should be at least O(√n) for successfully recovering the clusters. However, the exact requirement on the cluster size is not known. Also, they do not provide explicit conditions for failure, and except for [16] they do not address the case when data is missing. In contrast, we consider the problem of clustering with missing data. We explicitly characterize the constants by providing bounds on the model parameters that decide whether Programs 1.1 and 1.4 can successfully identify clusters. Furthermore, for Program 1.1, we also explicitly characterize the conditions under which the program fails. In [16], the authors extend their results to partial observations by scaling the edge probabilities by r (the observation probability), which does not work for r < 1/2 or 1/2 < p < 1/(2r) in Program 1.1. [21] analyzes Program 1.1 for the SBM and provides conditions for success and failure of the program when the entire adjacency matrix is observed. The dependence on the number of observed entries emerges non-trivially in our analysis. Further, [21] does not address the drawback of Program 1.1, namely the requirement p > 1/2, whereas in our work we analyze Program 1.4, which overcomes this drawback.
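The p > 1/2 drawback of Program 1.1 can be seen numerically without solving the program. The sketch below (ours, purely illustrative) evaluates the objective ∥L∥⋆ + λ∥S∥1 at two feasible points of Program 1.1 on a random SBM draw: the planted clustering L = L0 and the trivial L = 0. For dense clusters the planted clustering wins; for p < 1/2 the trivial solution is cheaper. The choice λ = 1/√n matches the regularization used in the experiments of Section 3.3; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 60, 2                              # two clusters of 30 nodes each
labels = np.repeat([0, 1], n // k)
same = labels[:, None] == labels[None, :]

def objective(L, S, lam):
    # Objective of Program 1.1: nuclear norm of L plus lam * l1 norm of S.
    return np.linalg.svd(L, compute_uv=False).sum() + lam * np.abs(S).sum()

def feasible_objectives(p, q, r, lam):
    """Objective values at two feasible points of Program 1.1 on one SBM
    draw with observation probability r: the planted clustering L = L0
    versus the trivial solution L = 0."""
    A = np.where(same, rng.random((n, n)) < p, rng.random((n, n)) < q).astype(float)
    A = np.triu(A, 1); A = A + A.T
    obs = np.triu(rng.random((n, n)) < r, 1); obs = obs | obs.T
    L0 = same.astype(float)
    # Constraint (1.3) fixes S on observed entries: S_obs = A_obs - L_obs.
    S_true = np.where(obs, A - L0, 0.0)
    S_zero = np.where(obs, A, 0.0)
    return objective(L0, S_true, lam), objective(np.zeros((n, n)), S_zero, lam)

lam = 1.0 / np.sqrt(n)
dense = feasible_objectives(p=0.8, q=0.1, r=0.6, lam=lam)    # p > 1/2
sparse = feasible_objectives(p=0.3, q=0.1, r=0.6, lam=lam)   # p < 1/2
print(dense, sparse)
```

With p = 0.8 the planted clustering attains the lower objective of the two; with p = 0.3 the trivial L = 0 does, mirroring the observation that Program 1.1 cannot succeed for sparse clusters.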
2 Partially Observed Unweighted Graph
2.1 Model
Definition 2.1 (Stochastic Block Model). Let A = A^T be the adjacency matrix of a graph on n nodes with K disjoint clusters of size ni each, i = 1, 2, · · · , K. Let 1 ≥ pi ≥ 0 for i = 1, · · · , K and 1 ≥ q ≥ 0. For l > m,
Al,m = 1 w.p. pi, if both nodes l, m are in the same cluster i; Al,m = 1 w.p. q, if nodes l, m are not in the same cluster.   (2.1)
If pi > q for each i, then we expect the density of edges to be higher inside the clusters compared to outside. We say the random variable Y has a Φ(r, δ) distribution, for 0 ≤ δ, r ≤ 1, written Y ∼ Φ(r, δ), if
Y = 1 w.p. rδ;  Y = 0 w.p. r(1 − δ);  Y = ∗ w.p. (1 − r),
where ∗ denotes unknown.
Definition 2.2 (Partial Observation Model). Let A be the adjacency matrix of a random graph generated according to the Stochastic Block Model of Definition 2.1. Let 0 < r ≤ 1. Each entry of the adjacency matrix A is observed independently with probability r. Let Aobs denote the observed adjacency matrix. Then for l > m: (Aobs)l,m ∼ Φ(r, pi) if both nodes l and m belong to the same cluster i; otherwise, (Aobs)l,m ∼ Φ(r, q).
2.2 Results: Simple Convex Program
Let [n] = {1, 2, · · · , n}. Let R be the union of the regions induced by the clusters and Rc = [n] × [n] − R its complement. Note that |R| = Σ_{i=1}^{K} ni^2 and |Rc| = n^2 − Σ_{i=1}^{K} ni^2. Let nmin := min_{1≤i≤K} ni, pmin := min_{1≤i≤K} pi and nmax := max_{1≤i≤K} ni. The following definitions are important for describing our results.
• Define Di := ni r (2pi − 1) as the effective density of cluster i, and Dmin := min_{1≤i≤K} Di.
• γsucc := max_{1≤i≤K} 2r√ni √( 2(1/r − 1) + 4(q(1 − q) + pi(1 − pi)) ) and γfail := (Σ_{i=1}^{K} ni^2)/n.
• Λsucc^{−1} := 2r√n √( 1/r − 1 + 4q(1 − q) ) + γsucc and Λfail^{−1} := √( rq(n − γfail) ).
We note that the thresholds Λsucc and Λfail depend only on the parameters of the model. Some simple algebra shows that Λsucc < Λfail.
Theorem 1 (Simple Program).
Consider a random graph generated according to the Partial Observation Model of Definition 2.2 with K disjoint clusters of sizes {ni}_{i=1}^{K} and probabilities {pi}_{i=1}^{K} and q, such that pmin > 1/2 > q > 0. Given ϵ > 0, there exist positive constants c′1, c′2 such that:
1. If λ ≥ (1 + ϵ) Λfail, then Program 1.1 fails to correctly recover the clusters with probability 1 − c′1 exp(−c′2 |Rc|).
2. If 0 < λ ≤ (1 − ϵ) Λsucc:
• If Dmin ≥ (1 + ϵ)/λ, then Program 1.1 succeeds in correctly recovering the clusters with probability 1 − c′1 n^2 exp(−c′2 nmin).
• If Dmin ≤ (1 − ϵ)/λ, then Program 1.1 fails to correctly recover the clusters with probability 1 − c′1 exp(−c′2 nmin).
Discussion:
1. Theorem 1 characterizes the success and failure of Program 1.1 as a function of the regularization parameter λ. In particular, if λ > Λfail, Program 1.1 fails with high probability. If λ < Λsucc, Program 1.1 succeeds with high probability if and only if Dmin > 1/λ. However, Theorem 1 has nothing to say about Λsucc < λ < Λfail.
2. Small Cluster Regime: When nmax = o(n), we have Λsucc^{−1} = 2r√n √( 1/r − 1 + 4q(1 − q) ). For simplicity let pi = p for all i, which yields Dmin = nmin r (2p − 1). Then Dmin > Λsucc^{−1} implies
nmin > (2√n / (2p − 1)) √( 1/r − 1 + 4q(1 − q) ),   (2.2)
giving a lower bound on the minimum cluster size that is sufficient for success.
2.3 Results: Improved Convex Program
The following definitions are critical for describing our results.
• Define D̃i := ni r (pi − q) as the effective density of cluster i, and D̃min := min_{1≤i≤K} D̃i.
• γ̃succ := 2 max_{1≤i≤K} r√ni √( (1 − pi)(1/r − 1 + pi) + (1 − q)(1/r − 1 + q) ).
Figure 1: Region of success (white region) and failure (black region) of Program 1.1 with λ = 1.01 Dmin^{−1}: (a) edge probability inside the cluster (p) vs. observation probability (r); (b) minimum cluster size vs. observation probability (r).
The solid red curve is the threshold for success (λ < Λsucc) and the dashed green curve is the threshold for failure (λ > Λfail) as predicted by Theorem 1.
• Λ̃succ^{−1} := 2r√n √( (1/r − 1 + q)(1 − q) ) + γ̃succ.
We note that the threshold Λ̃succ depends only on the parameters of the model.
Theorem 2 (Improved Program). Consider a random graph generated according to the Partial Observation Model of Definition 2.2, with K disjoint clusters of sizes {ni}_{i=1}^{K} and probabilities {pi}_{i=1}^{K} and q, such that pmin > q > 0. Given ϵ > 0, there exist positive constants c′1, c′2 such that: if 0 < λ ≤ (1 − ϵ) Λ̃succ and D̃min ≥ (1 + ϵ)/λ, then Program 1.4 succeeds in recovering the clusters with probability 1 − c′1 n^2 exp(−c′2 nmin).
Discussion:¹
1. Theorem 2 gives a sufficient condition for the success of Program 1.4 as a function of λ. In particular, for any λ > 0, we succeed if D̃min^{−1} < λ < Λ̃succ.
2. Small Cluster Regime: When nmax = o(n), we have Λ̃succ^{−1} = 2r√n √( (1/r − 1 + q)(1 − q) ). For simplicity let pi = p for all i, which yields D̃min = nmin r (p − q). Then D̃min > Λ̃succ^{−1} implies
nmin > (2√n / (p − q)) √( (1/r − 1 + q)(1 − q) ),   (2.3)
which gives a lower bound on the minimum cluster size that is sufficient for success.
3. (p, q) as a function of n: We now briefly discuss the regime in which cluster sizes are large (i.e., O(n)) and we are interested in the parameters (p, q), as a function of n, that allow the proposed approaches to be successful. Critical to Program 1.4 is the constraint (1.6): Li,j = Si,j when (Aobs)i,j = 0, which is the only constraint involving the adjacency matrix Aobs. With missing data, (Aobs)i,j = 0 with probability r(1 − p) inside the clusters and r(1 − q) outside the clusters. Defining p̂ = rp + 1 − r and q̂ = rq + 1 − r, the number of constraints in (1.6) becomes statistically equivalent to that of a fully observed graph where p and q are replaced by p̂ and q̂. Consequently, for a fixed r > 0, from (2.3), we require p ≥ p − q ≳ O(1/√n) for success.
However, setting the unobserved entries to 0 yields Ai,j = 0 with probability 1 − rp inside the clusters and 1 − rq outside the clusters. This is equivalent to a fully observed graph where p and q are replaced by rp and rq. In this case, we can allow p ≈ O(1/n) for success, which is order-wise better and matches the results of McSherry [27]. Intuitively, clustering a fully observed graph with parameters p̂ = rp + 1 − r and q̂ = rq + 1 − r is much more difficult than one with rp and rq, since the links are noisier in the former case. Hence, while it is beneficial to leave the unobserved entries blank in Program 1.1, for Program 1.4 it is in fact beneficial to set the unobserved entries to 0.
¹ The proofs of Theorems 1 and 2 are provided in the supplementary material.
Figure 2: Simulation results for the Improved Program. (a) Region of success (white region) and failure (black region) of Program 1.4 with λ = 0.49 Λ̃succ; the solid red curve is the threshold for success (D̃min > λ^{−1}) as predicted by Theorem 2. (b) Comparison of the range of the edge probability p for the Simple Program 1.1 and the Improved Program 1.4.
3 Experimental Results
We implement Programs 1.1 and 1.4 using the inexact augmented Lagrange method of multipliers [28]. Note that this method solves Programs 1.1 and 1.4 approximately. Further, numerical imprecision will prevent the entries of the output of the algorithms from being strictly equal to 0 or 1. We use the mean of all the entries of the output as a hard threshold to round each entry: if an entry is less than the threshold, it is rounded to 0, and to 1 otherwise. We compare the output of the algorithm after rounding to the optimal solution (L0), and declare success if the number of wrong entries is less than 0.1%.
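The mean-threshold rounding and the 0.1% success criterion just described are easy to sketch (the noisy matrix below is a toy stand-in for the solver output; function and variable names are ours):

```python
import numpy as np

def round_and_check(L_out, L0, tol=0.001):
    """Mean-threshold rounding used in the experiments: entries below the
    mean of L_out go to 0, the rest to 1; success is declared when fewer
    than 0.1% of entries disagree with the planted clustering L0."""
    L_round = (L_out >= L_out.mean()).astype(float)
    err = np.mean(L_round != L0)
    return L_round, err, err < tol

# Toy check: a numerically imprecise "solver output" around the planted L0.
rng = np.random.default_rng(0)
L0 = np.kron(np.eye(2), np.ones((5, 5)))          # two planted 5-node cliques
L_out = np.clip(L0 + 0.05 * rng.standard_normal(L0.shape), 0, 1)
L_round, err, ok = round_and_check(L_out, L0)
print(err, ok)
```

With small perturbations the mean of the output sits between the two entry clusters (near 0 and near 1), so the rounding recovers L0 exactly.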
Set Up: We consider an unweighted graph on n = 600 nodes with 3 disjoint clusters. For simplicity, the clusters are of equal size, n1 = n2 = n3, and the edge probabilities inside the clusters are the same, p1 = p2 = p3 = p. The edge probability outside the clusters is fixed at q = 0.1. We generate the adjacency matrix randomly according to the Stochastic Block Model of Definition 2.1 and the Partial Observation Model of Definition 2.2. All results are averages over 20 experiments.
3.1 Simulations for Simple Convex Program
Dependence between r and p: In the first set of experiments we keep n1 = n2 = n3 = 200, and vary p from 0.55 to 1 and r from 0.05 to 1 in steps of 0.05.
Dependence between nmin and r: In the second set of experiments we keep the edge probability inside the clusters fixed at p = 0.85. The cluster size is varied from nmin = 20 to nmin = 200 in steps of 20, and r is varied from 0.05 to 1 in steps of 0.05.
In both experiments, we set the regularization parameter λ = 1.01 Dmin^{−1}, ensuring that Dmin > 1/λ and letting us focus on observing the transition around Λsucc and Λfail. The outcomes of the experiments are shown in Figures 1a and 1b. The experimental region of success is shown in white and the region of failure in black. The theoretical region of success lies above the solid red curve (λ < Λsucc) and the region of failure below the dashed green curve (λ > Λfail). As we can see, the transition indeed occurs between the two thresholds Λsucc and Λfail.
3.2 Simulations for Improved Convex Program
We keep the cluster sizes n1 = n2 = n3 = 200 and vary p from 0.15 to 1 and r from 0.05 to 1 in steps of 0.05. We set the regularization parameter λ = 0.49 Λ̃succ, ensuring that λ < Λ̃succ and letting us focus on observing the success condition around D̃min. The outcome of this experiment is shown in Figure 2a. The experimental region of success is shown in white and the region of failure in black. The theoretical region of success lies above the solid red curve.
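The simulation setup above is easy to reproduce. A minimal sketch, assuming the definitions of Section 2 (the sampler and names below are our illustration; unobserved entries are marked NaN):

```python
import numpy as np

def partial_sbm(sizes, p, q, r, seed=0):
    """Draw A_obs per Definitions 2.1-2.2: edge w.p. p inside clusters,
    q across clusters, then each entry observed independently w.p. r
    (unobserved entries marked NaN)."""
    rng = np.random.default_rng(seed)
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same = labels[:, None] == labels[None, :]
    A = np.where(same, rng.random((n, n)) < p, rng.random((n, n)) < q).astype(float)
    A = np.triu(A, 1); A = A + A.T
    obs = np.triu(rng.random((n, n)) < r, 1); obs = obs | obs.T
    return np.where(obs, A, np.nan)

# Setup from Section 3: n = 600, three equal clusters, q = 0.1.
sizes, p, q, r = [200, 200, 200], 0.85, 0.1, 0.5
A_obs = partial_sbm(sizes, p, q, r)
# Effective density D_min = n_min * r * (2p - 1) and the regularization
# choice lambda = 1.01 / D_min used for the simple program.
D_min = min(sizes) * r * (2 * p - 1)
lam = 1.01 / D_min
frac_obs = np.mean(~np.isnan(A_obs[np.triu_indices(600, 1)]))
print(round(D_min, 2), round(lam, 4), round(frac_obs, 2))
```

Here r = 0.5 gives D_min = 200 · 0.5 · 0.7 = 70, so λ ≈ 0.0144, and roughly half of the node pairs are observed, as expected.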
Comparison with the Simple Convex Program: In this experiment, we are interested in observing the range of p for which Programs 1.1 and 1.4 work. Keeping the cluster sizes n1 = n2 = n3 = 200 and r = 1, we vary the edge probability inside the clusters from p = 0.15 to p = 1 in steps of 0.05. For each instance of the adjacency matrix, we run both Programs 1.1 and 1.4. We plot the probability of success of both algorithms in Figure 2b. As we can observe, Program 1.1 starts succeeding only after p > 1/2, whereas Program 1.4 starts at p ≈ 0.35.

Figure 3: Result of using (a) Program 1.1 (Simple) and (b) Program 1.4 (Improved) on the real data set; (c) comparison of the clustering output after running Programs 1.1 and 1.4 with the output of applying k-means clustering directly on A (with unknown entries set to 0).

3.3 Labeling Images: Amazon MTurk Experiment
Creating a training dataset by labeling images is a tedious task. It would be useful to crowdsource this task instead. Consider a specific example of a set of images of dogs of different breeds. We want to cluster them such that the images of dogs of the same breed are in the same cluster. One could show a set of images to each worker, and ask him/her to identify the breed of dog in each of those images. But such a task would require the workers to be experts in identifying dog breeds.
A relatively reasonable task is to ask the workers to compare pairs of images and, for each pair, answer whether they think the dogs in the images are of the same breed or not. If we have n images, then there are n(n − 1)/2 distinct pairs of images, and it quickly becomes unreasonable to compare all possible pairs. This is an example where we can obtain a subset of the data and try to cluster the images based on partial observations.
Image Data Set: We used images of 3 different breeds of dogs: Norfolk Terrier (172 images), Toy Poodle (151 images) and Bouvier des Flandres (150 images) from the Stanford Dogs Dataset [29]. We uploaded all 473 images of dogs to an image hosting server (we used imgur.com).
MTurk Task: We used Amazon Mechanical Turk [30] as the platform for crowdsourcing. For each worker, we showed 30 pairs of images chosen randomly from the n(n − 1)/2 possible pairs. The task assigned to the worker was to compare each pair of images and answer whether they think the dogs belong to the same breed or not. If the worker's response is a "yes", we fill the entry of the adjacency matrix corresponding to the pair with 1, and with 0 if the answer is a "no".
Collected Data: We recorded around 608 responses. We were able to fill 16,750 out of 111,628 entries in A; that is, we observed 15% of the total number of entries. Compared with the true answers (which we know a priori), the answers given by the workers had around 23.53% errors (3,941 out of 16,750). The empirical parameters for the partially observed graph thus obtained are shown in Table 1. We ran Programs 1.1 and 1.4 with regularization parameter λ = 1/√n. Further, for Program 1.4, we set the size of the cluster region |R| to 0.125 times n(n − 1)/2. Figure 3a shows the recovered matrices. Entries with value 1 are depicted by white and entries with value 0 by black.
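Assembling the partially observed adjacency matrix from pairwise worker answers is straightforward. A small sketch (the pair list and votes here are hypothetical stand-ins, not the actual MTurk responses):

```python
import numpy as np

def adjacency_from_answers(n, answers):
    """Fill A_obs from worker comparisons: `answers` maps an image pair
    (i, j) to 1 ("same breed") or 0 ("different breed"); unasked pairs
    stay NaN (unobserved)."""
    A_obs = np.full((n, n), np.nan)
    np.fill_diagonal(A_obs, 1.0)
    for (i, j), vote in answers.items():
        A_obs[i, j] = A_obs[j, i] = float(vote)
    return A_obs

# Hypothetical votes over 4 of the 10 possible pairs of 5 images.
answers = {(0, 1): 1, (0, 2): 0, (1, 3): 0, (2, 3): 1}
A_obs = adjacency_from_answers(5, answers)
iu = np.triu_indices(5, 1)
observed = int(np.count_nonzero(~np.isnan(A_obs[iu])))
print(observed)   # 4 observed entries out of 10 possible pairs
```

The matrix produced this way is exactly the partially observed A that Programs 1.1 and 1.4 take as input (with the NaN entries treated as unobserved, or set to 0 for Program 1.4 as discussed in Section 2.3).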
In Figure 3c we compare the clusters output by running the k-means algorithm directly on the adjacency matrix A (with unknown entries set to 0) to those obtained by running the k-means algorithm on the matrices recovered by Program 1.1 (Simple Program) and Program 1.4 (Improved Program), respectively. The overall error with k-means was 40.8%, whereas the error reduced significantly to 15.86% and 7.19% when we used the matrices recovered from Programs 1.1 and 1.4, respectively (see Table 2). Further, note that running the k-means algorithm requires knowing the exact number of clusters. A common heuristic is to identify the top K eigenvalues that are much larger than the rest. In Figure 4 we plot the sorted eigenvalues for the adjacency matrix A and the recovered matrices. We can see that the top 3 eigenvalues are very easily distinguished from the rest for the matrix recovered by Program 1.4. A sample of the data is shown in Figure 5. We observe that factors such as color, grooming, posture, face visibility, etc. can result in confusion while comparing image pairs. Also, note that the ability of the workers to distinguish the dog breeds is neither guaranteed nor uniform. Thus, the edge probabilities inside and outside the clusters are not uniform. Nonetheless, Programs 1.1 and 1.4, especially Program 1.4, are quite successful in clustering the data with only 15% observations.

Table 1: Empirical parameters from the real data.
n = 473, K = 3, n1 = 172, n2 = 151, n3 = 150; r = 0.1500, q = 0.1929, p1 = 0.7587, p2 = 0.6444, p3 = 0.7687.

Table 2: Number of misclassified images.
Clusters:    1    2    3   Total
K-means:    39  150    4   193
Simple:      9   57    8    74
Improved:    1   29    4    34
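The eigenvalue heuristic for choosing K can be illustrated on synthetic matrices built with the Table 1 cluster sizes. This is our sketch: the parameters are rounded and the "recovered" matrix is idealized as a union of cliques, so it only mimics the qualitative contrast seen in Figure 4.

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [172, 151, 150]                 # cluster sizes from Table 1
labels = np.repeat([0, 1, 2], sizes)
same = labels[:, None] == labels[None, :]
n = sum(sizes)

def spectral_gap(M, k=3):
    """Gap between the k-th and (k+1)-th largest eigenvalues; a large
    gap at k = K makes the number of clusters easy to read off."""
    ev = np.sort(np.linalg.eigvalsh(M))[::-1]
    return ev[k - 1] - ev[k]

L_rec = same.astype(float)              # idealized recovered matrix
# Raw partially observed adjacency with unknowns set to 0: edges appear
# w.p. r*p inside clusters and r*q outside (roughly the Table 1 values).
r, p, q = 0.15, 0.72, 0.19
A_raw = np.where(same, rng.random((n, n)) < r * p, rng.random((n, n)) < r * q)
A_raw = np.triu(A_raw, 1).astype(float); A_raw = A_raw + A_raw.T

print(spectral_gap(L_rec), spectral_gap(A_raw))
```

For the idealized recovered matrix the gap equals the smallest cluster size (150), while for the raw sparse adjacency the third eigenvalue sits inside the noise bulk and the gap nearly vanishes, which is why reading K off the raw matrix is unreliable.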
Figure 4: Plot of the sorted eigenvalues for (1) the adjacency matrix with unknown entries filled by 0, (2) the adjacency matrix recovered by Program 1.1, and (3) the adjacency matrix recovered by Program 1.4.

Figure 5: Sample images of the three breeds of dogs used in the MTurk experiment: Norfolk Terrier, Toy Poodle, and Bouvier des Flandres.

The authors thank the anonymous reviewers for their insightful comments. This work was supported in part by the National Science Foundation under grants CCF-0729203, CNS-0932428 and CIF-1018927, by the Office of Naval Research under the MURI grant N00014-08-1-0747, and by a grant from Qualcomm Inc. The first author is also supported by the Schlumberger Foundation Faculty for the Future Program Grant.

References
[1] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: A review. ACM Comput. Surv., 31(3):264–323, September 1999.
[2] M. Ester, H.-P. Kriegel, and X. Xu. A database interface for clustering in large spatial databases. In Proceedings of the 1st International Conference on Knowledge Discovery and Data Mining (KDD'95), pages 94–99. AAAI Press, August 1995.
[3] Xiaowei Xu, Jochen Jäger, and Hans-Peter Kriegel. A fast parallel clustering algorithm for large spatial databases. Data Min. Knowl. Discov., 3(3):263–290, September 1999.
[4] Nina Mishra, Robert Schreiber, Isabelle Stanton, and Robert Tarjan. Clustering social networks. In Anthony Bonato and Fan R. K. Chung, editors, Algorithms and Models for the Web-Graph, volume 4863 of Lecture Notes in Computer Science, chapter 5, pages 56–67. Springer Berlin Heidelberg, Berlin, Heidelberg, 2007.
[5] Pedro Domingos and Matt Richardson. Mining the network value of customers. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '01, pages 57–66, New York, NY, USA, 2001. ACM.
[6] Santo Fortunato.
Community detection in graphs. Physics Reports, 486(3-5):75 – 174, 2010. [7] Ying Xu, Victor Olman, and Dong Xu. Clustering gene expression data using a graph-theoretic approach: an application of minimum spanning trees. Bioinformatics, 18(4):536–545, 2002. [8] Qiaofeng Yang and Stefano Lonardi. A parallel algorithm for clustering protein-protein interaction networks. In CSB Workshops, pages 174–177. IEEE Computer Society, 2005. [9] Satu Elisa Schaeffer. Graph clustering. Computer Science Review, 1(1):27 – 64, 2007. [10] Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109 – 137, 1983. [11] Anne Condon and Richard M. Karp. Algorithms for graph partitioning on the planted partition model. Random Struct. Algorithms, 18(2):116–140, 2001. [12] Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust pca via outlier pursuit. In John D. Lafferty, Christopher K. I. Williams, John Shawe-Taylor, Richard S. Zemel, and Aron Culotta, editors, NIPS, pages 2496–2504. Curran Associates, Inc., 2010. [13] Ali Jalali, Yudong Chen, Sujay Sanghavi, and Huan Xu. Clustering partially observed graphs via convex optimization. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), ICML ’11, pages 1001–1008, New York, NY, USA, June 2011. ACM. [14] Brendan P. W. Ames and Stephen A. Vavasis. Convex optimization for the planted k-disjoint-clique problem. Math. Program., 143(1-2):299–337, 2014. [15] Brendan P. W. Ames and Stephen A. Vavasis. Nuclear norm minimization for the planted clique and biclique problems. Math. Program., 129(1):69–89, September 2011. [16] S. Oymak and B. Hassibi. Finding Dense Clusters via ”Low Rank + Sparse” Decomposition. arXiv:1104.5186, April 2011. [17] Yudong Chen, Sujay Sanghavi, and Huan Xu. Clustering sparse graphs. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Lon Bottou, and Kilian Q. 
Weinberger, editors, NIPS, pages 2213–2221, 2012. [18] Yudong Chen, Ali Jalali, Sujay Sanghavi, and Constantine Caramanis. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324–4337, 2013. [19] Brendan P. W. Ames. Robust convex relaxation for the planted clique and densest k-subgraph problems. 2013. [20] Nir Ailon, Yudong Chen, and Huan Xu. Breaking the small cluster barrier of graph clustering. CoRR, abs/1302.4549, 2013. [21] Ramya Korlakai Vinayak, Samet Oymak, and Babak Hassibi. Sharp performance bounds for graph clustering via convex optimizations. In Proceedings of the 39th International Conference on Acoustics, Speech and Signal Processing, ICASSP ’14, 2014. [22] Emmanuel J. Candes and Justin Romberg. Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math., 6(2):227–254, April 2006. [23] Emmanuel J. Candes and Benjamin Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, December 2009. [24] Emmanuel J. Cand`es, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? J. ACM, 58(3):11:1–11:37, June 2011. [25] Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, and Alan S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011. [26] Venkat Chandrasekaran, Pablo A. Parrilo, and Alan S. Willsky. Rejoinder: Latent variable graphical model selection via convex optimization. CoRR, abs/1211.0835, 2012. [27] Frank McSherry. Spectral partitioning of random graphs. In FOCS, pages 529–537. IEEE Computer Society, 2001. [28] Zhouchen Lin, Minming Chen, and Yi Ma. The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices. Mathematical Programming, 2010. [29] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for finegrained image categorization. 
In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011. [30] Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1):3–5, January 2011. 9
Spatio-temporal Representations of Uncertainty in Spiking Neural Networks

Cristina Savin, IST Austria, Klosterneuburg, A-3400, Austria, csavin@ist.ac.at
Sophie Deneve, Group for Neural Theory, ENS Paris, Rue d'Ulm, 29, Paris, France, sophie.deneve@ens.fr

Abstract

It has long been argued that, because of inherent ambiguity and noise, the brain needs to represent uncertainty in the form of probability distributions. The neural encoding of such distributions remains, however, highly controversial. Here we present a novel circuit model for representing multidimensional real-valued distributions using a spike-based spatio-temporal code. Our model combines the computational advantages of the currently competing models for probabilistic codes and exhibits realistic neural responses along a variety of classic measures. Furthermore, the model highlights the challenges associated with interpreting neural activity in relation to behavioral uncertainty and points to alternative population-level approaches for the experimental validation of distributed representations.

Core brain computations, such as sensory perception, have been successfully characterized as probabilistic inference, whereby sensory stimuli are interpreted in terms of the objects or features that gave rise to them [1, 2]. The tenet of this Bayesian framework is the idea that the brain represents uncertainty about the world in the form of probability distributions. While this notion seems supported by behavioural evidence, the neural underpinnings of probabilistic computation remain highly debated [1, 2]. Different proposals offer different trade-offs between flexibility, i.e. the class of distributions they can represent, and speed, i.e. how fast the uncertainty can be read out from the neural activity. Given these two dimensions, we can divide existing models into two main classes.
The first set, which we will refer to as spatial codes, distributes information about the distribution across neurons; the activity of different neurons reflects different values of an underlying random variable (alternatively, it can be viewed as encoding parameters of the underlying distribution [1, 2]). Linear probabilistic population codes (PPCs) are a popular instance of this class, whereby the log-probability of a random variable can be linearly decoded from the responses of neurons tuned to different values of that variable [3]. This encoding scheme has the advantage of speed, as uncertainty can be decoded in a neurally plausible way from the quasi-instantaneous neural activity, and it reproduces aspects of the experimental data. However, these benefits come at the price of flexibility: the class of distributions that the network can represent needs to be highly restricted, otherwise the network size scales exponentially with the number of variables [1]. This limitation has led to a second class of models, which we will refer to as temporal codes. These use stochastic network dynamics to sample from the target distribution [4, 1]. Existing models from this class assume that the activity of each neuron encodes a different random variable; the network explores the state space such that the time spent in any particular state is proportional to its probability under the distribution [4]. This representation is exact in the limit of infinite samples. It has several important computational advantages (e.g. easy marginalization, parameter learning, linear scaling of network size with the number of dimensions) and further accounts for trial-to-trial variability in neural responses [1]. These benefits come at the cost of sampling time: a fair representation of the underlying distribution requires pooling over several samples, i.e. integrating neural activity over time. Some have argued that this feature makes sampling unfeasibly slow [2].
Here we show that it is possible to construct spatio-temporal codes that combine the best of both worlds. The core idea is that the network activity evolves through recurrent dynamics such that samples from the posterior distribution can be linearly decoded from the (quasi-)instantaneous neural responses. This distributed representation allows several independent samples to be encoded simultaneously, thus enabling a fast representation of uncertainty that improves over time. Computationally, our model inherits all the benefits of a sampling-based representation, while overcoming potential shortcomings of classic temporal codes. We explored the general implications of the new coding scheme for a simple inference problem and found that the network reproduces many properties of biological neurons, such as tuning, variability, co-variability and their modulation by uncertainty. Nonetheless, these single or pairwise measures provided limited information about the underlying distribution represented by the circuit. In the context of our model, these results argue for using decoding as a tool for validating distributed probabilistic codes, an approach which we illustrate with a simple example.

1 A distributed spatio-temporal representation of uncertainty

The main idea of the representation is simple: we want to approximate a real-valued D-dimensional distribution P(x) by samples generated by K independent chains implementing Markov Chain Monte Carlo (MCMC) sampling [5], y(t) = {y_k(t)}_{k=1...K}, with y_k ∼ P(x) (Fig. 1). To this aim, we encode the stochastic trajectory of the chains in a population of N spiking neurons (N > KD), such that y(t) is linearly decodable from the neural responses. In particular, we adapt a recently proposed coding scheme for representing time-varying signals [6] and construct stochastic neural dynamics such that samples from the target distribution can be obtained by a linear mapping of the spikes convolved with an epsp-like exponential kernel (Fig.
1a):

ŷ(t) = Γ · r(t)   (1)

where ŷ(t) denotes the decoded state of the K MCMC chains at time t (of size D × K), Γ is the decoding matrix¹ and r is the low-pass version of the spikes o, τ_v ṙ_i = −r_i + o_i. To facilitate the presentation of the model, we start by constructing recurrent dynamics for sampling a single MCMC chain, which we then generalise to the multi-chain scenario. Based on these network dynamics, we implement probabilistic inference in a linear Gaussian mixture, which we use in Section 2 to investigate the neural implications of the code.

Distributed MCMC sampling

As a starting point, consider the computational task of representing an arbitrary temporal trajectory (the gray line in Fig. 1b) as the linear combination of the responses of a set of neurons (one can think of this as an analog-to-digital conversion of sorts). If the decoding weights of each neuron point in a different direction (colour coded), then the trajectory can be efficiently reconstructed by adding the proper weight vectors (the local derivative of the trajectory) at just the right moment. Indeed, recent work has shown how to construct network dynamics enabling the network to track a trajectory as closely as possible [6]. To achieve this, neurons use a greedy strategy: each neuron monitors the current prediction error (the difference between the trajectory and its linear decoding from the spikes) and spikes only when its weight vector points in the right direction. When the decoding weights of several neurons point the same way (as in Fig. 1a), they compete to represent the signal via recurrent inhibition:² from the perspective of the decoder, it does not matter which of these neurons spikes next, so the actual population responses depend on the previous spike history, initial conditions and intrinsic neural noise.³ As a result, spikes are highly irregular and look 'random' (with Poisson-like statistics), even when representing a constant signal.
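The decoder of Eq. 1 can be sketched numerically; the function and all parameter values below are our illustrative assumptions, not from the paper.

```python
import numpy as np

def decode(spikes, Gamma, tau_v, dt):
    """Decode y_hat(t) = Gamma @ r(t), where r is the exponentially
    filtered spike train obeying tau_v * dr/dt = -r + o (Eq. 1)."""
    N, T = spikes.shape
    r = np.zeros(N)
    y_hat = np.zeros((Gamma.shape[0], T))
    for t in range(T):
        r += -dt / tau_v * r + spikes[:, t]   # a spike increments r by 1
        y_hat[:, t] = Gamma @ r
    return y_hat

# One neuron firing every time step: r settles at tau_v/dt = 20,
# so the decoded value settles at 0.1 * 20 = 2.
Gamma = np.array([[0.1]])
spikes = np.ones((1, 1000))
y_hat = decode(spikes, Gamma, tau_v=0.02, dt=0.001)
```

The same linear read-out applies unchanged in the multi-chain case; only the shape of Γ grows.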
While competition is an important driving force for the network, neurons can also act cooperatively – when the change in the signal is larger than the contribution of a single decoding vector, several neurons need to spike together to represent the signal (e.g. the response to the step in Fig. 1a).

Footnotes:
1. The decoding matrix can be arbitrary.
2. This competition makes spike correlations extremely weak in general [7].
3. When N ≫ D there is a strong degeneracy in the map between neural responses and the signal, such that several different spike sequences yield the same decoded signal. In the absence of internal noise, the encoding is nonetheless deterministic despite apparent variability.

Figure 1: Overview of the model. a. We assume a linear decoder, where the estimated signal ŷ is obtained as a weighted sum of neural responses (exponential kernel, blue). b. When the signal is multidimensional, different neurons are responsible for encoding different directions along the target trajectory (gray). c. Alternative network architectures: in the externally-driven version the target trajectory is given as an external input, whereas in the self-generated case it is computed via slow recurrent connections (green arrow); the input s is used during inference, when sampling from P(x|s). d. Encoding an example MCMC trajectory in the externally-driven mode. Light colours show ground truth; dark colours the decoded signal. e. Single-chain samples from a multivariate distribution (shown as colormap) decoded from a spiking network; trajectory subsampled by a factor of 10 for visibility. f. Decoded samples using 5 chains (colors) and a fifth of the time in e.

Formally, the network dynamics minimise the squared reconstruction error, (y − ŷ)², under certain constraints on the mean firing rate which ensure the representation is distributed (see Suppl. Info.).
The resulting network consists of spiking neurons with simple leaky integrate-and-fire dynamics, V̇ = −V/τ_v − Wo + I, where V̇ denotes the temporal derivative of V and the binary vector o denotes the spikes, o_i(t) = δ iff V_i(t) > Θ_i; τ_v is the membrane time constant (same as that of the decoder), the neural threshold is Θ_i = Σ_j Γ_ij² + λ, and the recurrent connections, W = ΓᵀΓ + λ·I, can be learned by STDP [8], where λ is a free parameter controlling neural sparseness. The membrane potential of each neuron tracks the component of the reconstruction error along the direction of its decoding weights. As a consequence, the network is balanced (because the dynamics aim to bring the reconstruction error to zero) and membrane potentials are correlated, particularly in pairs of neurons with similar decoding weights [7] (see Fig. 2c). In the traditional form, which we refer to as the 'externally-driven' network (Fig. 1c), information about the target trajectory is provided as an external input to the neurons: I = Γᵀ · (y/τ_v + ẏ). In our particular case, this input implements a particular kind of MCMC sampling (Langevin). Briefly, the sampler involves stochastic dynamics driven by the gradient of log P(y), with additive Gaussian noise [5] (see Suppl. Info. for implementation details). Hence, the external input is stochastic, I = Γᵀ · (y/τ_v + F(y) + ε), where F(y) = ∇log P(y) and ε is D-dimensional independent white Gaussian noise. Using our network dynamics, we can encode the MCMC trajectory with high precision (Fig. 1d). Importantly, because of the distributed representation, the integration window of the decoder does not restrict the frequency content of the signal. The network can represent signals that change faster than the membrane time constant (Fig. 1a, d). To construct a viable biological implementation of this network, we need to embed the sampling dynamics within the circuit ('self-generated' architecture in Fig. 1c).
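A minimal discrete-time sketch of the externally-driven network for a constant signal (so ẏ = 0), with at most one greedy spike per time step; all scales, the random decoding matrix and the simulation length are our assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 2, 20
tau_v, lam, dt, T = 0.02, 1e-4, 1e-4, 20_000
Gamma = 0.05 * rng.standard_normal((D, N))     # decoding weights (assumed scale)
W = Gamma.T @ Gamma + lam * np.eye(N)          # fast recurrent connections
Theta = np.sum(Gamma**2, axis=0) + lam         # thresholds, as in the text
y = np.array([1.0, -0.5])                      # constant target signal

V = np.zeros(N)
r = np.zeros(N)
for _ in range(T):
    I = Gamma.T @ (y / tau_v)                  # externally-driven input (y_dot = 0)
    V += dt * (-V / tau_v + I)
    r += -dt / tau_v * r                       # filtered spikes decay
    i = int(np.argmax(V - Theta))              # greedy: at most one spike per step
    if V[i] > Theta[i]:
        V -= W[:, i]                           # recurrent inhibition / self-reset
        r[i] += 1.0
y_hat = Gamma @ r                              # linear decode (Eq. 1)
```

With these settings the decoded signal hovers near the target, while individual spike trains remain irregular: which neuron fires next depends on the accumulated history, exactly the degeneracy discussed above.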
We achieved this by approximating the current I using the decoded signal ŷ instead of y. This results in a second recurrent input to the neurons, Î = Γᵀ · (ŷ/τ_v + F(ŷ) + ε). While this is an approximation, we found it does not affect sampling quality in the parameter regime where the encoding scheme itself works well (see example dynamics in Fig. 1e). Such dynamics can be derived for any distribution from the broad class of product-of-(exponential-family) experts [9], with no restrictions on D; for simplicity and to ease visualisation, here we focus on the multivariate Gaussian case and restrict the simulations to bivariate distributions (D = 2). For a Gaussian distribution with mean µ and covariance Σ, the resulting membrane potential dynamics are linear:⁴

∂V/∂t = −V/τ_v − W_fast o + W_slow r + D + Γᵀε   (2)

where o denotes the spikes and r is a low-passed version of the spikes. The connections W_fast correspond to the recurrent dynamics derived above, while the slow⁵ connections, W_slow = (1/τ_slow) · Γᵀ(I − Σ⁻¹)Γ (e.g. NMDA currents), and the drift term D = (1/τ_slow) ΓᵀΣ⁻¹µ correspond to the deterministic component of the MCMC dynamics,⁶ and ε is independent white Gaussian noise (implemented for instance by a small chaotic subnetwork appropriately connected to the principal neurons). In summary, relatively simple leaky integrate-and-fire neurons with appropriate recurrent connectivity are sufficient for implementing Langevin sampling from a Gaussian distribution in a distributed code. More complex distributions will likely involve nonlinearities in the slow connections (possibly computed in the dendrites) [10].

Multi-chain encoding: instantaneous representation of uncertainty

The earliest proposal for sampling-based neural representations of uncertainty suggested distributing samples either across neurons or across time [4]. Nonetheless, all realisations of neural sampling use the second solution.
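The deterministic-plus-noise structure behind Eq. 2 is Langevin sampling run in signal space. A sketch of that underlying sampler, applied directly to y for a bivariate Gaussian (the parameter values are ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

dt, T = 0.01, 200_000
y = mu.copy()
samples = np.empty((T, 2))
for t in range(T):
    drift = Sigma_inv @ (mu - y)                 # F(y) = grad log P(y)
    y = y + dt * drift + np.sqrt(2 * dt) * rng.standard_normal(2)
    samples[t] = y

est_mean = samples.mean(axis=0)                  # close to mu
est_cov = np.cov(samples.T)                      # close to Sigma
```

The neural implementation replaces y by its decoded estimate ŷ = Γr, folding the drift into the slow recurrent connections and the noise into Γᵀε.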
The reason is simple: when equating the activity of individual neurons (either voltage or firing rate) to individual random variables, it is relatively straightforward to construct neural dynamics implementing MCMC sampling. It is less clear what kind of neural dynamics would generate samples in several neurons at a time. One naive solution would be to construct several networks that each sample from the same distribution in parallel. This, however, seems to unavoidably entail a 'copy-pasting' of all recurrent connections across different circuits, which is biologically unrealistic. Our distributed representation, in which neurons jointly encode the sampling trajectory, provides a potential solution to this problem. In particular, it allows several chains to be embedded in a single network. To extend the dynamics to a multi-chain scenario, we imagine an auxiliary probability distribution over K random variables. We want each to correspond to one chain, so we take them to be independent and identically distributed according to P(x). Since the sampling dynamics derived above do not restrict the dimensionality of the underlying distribution, we can use them to sample from this D × K-dimensional distribution instead. For the example of a multivariate normal, for instance, we would now sample from another Gaussian, P(x^{*K}), with mean µ^{*K} (K repetitions of µ) and covariance Σ^{*K}, a block-diagonal matrix obtained by K repetitions of Σ. In general, the multi-chain trajectory can be viewed as just another instance of MCMC sampling, where the encoding scheme guarantees that the signals across different chains remain independent. What may change, however, is the interpretability of neural responses in relation to the underlying encoded variable. We show that under mild assumptions on the decoding matrix Γ, the main features of single and pairwise responses are preserved (see below and Suppl. Info. Sec. 4). Fig.
1f shows an example run for multi-chain sampling from a bivariate Gaussian. In a fifth of the time used in the single-chain scenario (Fig. 1e), the network dynamics achieve a similar spread across the state space, allowing for a quick estimation of uncertainty (see also Suppl. Info. 2). For a certain precision of encoding (determined by the size of the decoding weights Γ) and neural sparseness level, N scales linearly with the dimensionality of the state space D and the number of simultaneously encoded chains K. Thus, our representation provides a convenient trade-off between the network size and the speed of the underlying computation. When N is fixed, faster sampling requires either a penalty on precision or increased firing rates (N ≫ D). Overall, the coding scheme allows for a linear trade-off between speed and resources (either neurons or spikes).

Footnotes:
4. Since F(x) = −Σ⁻¹(x − µ), this results in a stochastic generalisation of the dynamics in [7].
5. 'Slow' marks the fact that the term depends on the low-passed neural output r, rather than o.
6. Learning the connections goes beyond the scope of this paper; it seems parameter learning can be achieved using the plasticity rules derived for the temporal code, if these are local (not shown).

2 Neural implications

To investigate the experimental implications of our coding scheme, we assumed the posterior distribution is centred around a stimulus-specific mean (a set of S = 12 values, equidistantly distributed on a circle of radius 1 around the origin; see black dots in Fig. 3a), with a stimulus-independent covariance parametrizing the uncertainty about x. This kind of posterior arises e.g. as a result of inference in a linear Gaussian mixture (since the focus here is not on a specific probabilistic model of the circuit function, we keep the computation very basic; see Suppl. Info. for details).
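The auxiliary multi-chain target from Section 1 (K i.i.d. copies of N(µ, Σ), with block-diagonal covariance) can be constructed as follows; the helper name and example values are ours, for illustration.

```python
import numpy as np

def multichain_target(mu, Sigma, K):
    """Mean and covariance of K i.i.d. copies of N(mu, Sigma):
    mu repeated K times, Sigma placed K times on the block diagonal."""
    mu_K = np.tile(mu, K)
    Sigma_K = np.kron(np.eye(K), Sigma)   # zero off-diagonal blocks keep chains independent
    return mu_K, Sigma_K

mu_K, Sigma_K = multichain_target(
    np.array([1.0, -1.0]), np.array([[1.0, 0.5], [0.5, 1.0]]), K=3)
# mu_K has shape (6,); Sigma_K is 6x6 with zero cross-chain blocks
```

Because the joint distribution is just another Gaussian, the same sampling dynamics apply unchanged; only the effective dimensionality grows from D to D × K.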
It allows us to quantify the general properties of distributed sampling in terms of classic measures (tuning curves, Fano factors (FF), cross-correlograms (CCG), and spike count correlations (rsc)) and how these change with uncertainty. Since we found that, under mild assumptions for the decoding matrix Γ, the results are qualitatively similar in a single- vs. a multi-chain scenario (see Suppl. Info.), and to facilitate the explanation, the results reported in the main text used K = 1.

Figure 2: Our model recapitulates several known features of cortical responses. a. Mean firing rates as a function of stimulus, for all neurons (N = 37); color reflects the phase of Γ_i (right). b. The network is in an asynchronous state. Left: example spike raster. Right: Fano factor distribution. c. Within-trial correlations in membrane potential for pairs of neurons as a function of the similarity of their decoding weights. d. Spike count correlations (averaged across stimuli) as a function of the neurons' tuning similarity. Right: distribution of rsc, with mean in magenta. e. We use cross-correlograms (CCG) to assess spike synchrony. Left: CCG for an example neuron. Middle: Area under the peak ±10 ms (between the dashed vertical bars) for all neuron pairs for 3 example stimuli; neurons ordered by Γ_i phase. Right: the area under the CCG peak as a function of tuning similarity.

a. The neural dynamics are consistent with a wide range of experimental observations

First, we measured the mean firing rate of the neurons for each stimulus (averaged across 50 trials, each 1 s long). We found that individual neurons show selectivity to stimulus orientations, with bell-shaped tuning curves, reminiscent of e.g. the orientation tuning of V1 neurons (Fig. 2a).
The inhomogeneity in the scale of the responses across the population is a reflection of the inhomogeneities in the decoding matrix Γ.⁷

Footnote:
7. The phase of the decoding weights was sampled uniformly around the circle, with an amplitude drawn uniformly from the interval [0.005; 0.025].

Neural responses were asynchronous, with irregular firing (Fig. 2b), consistent with experimental observations [11, 12]. To quantify neural variability, we estimated the Fano factors, measured as the ratio between the variance and the mean of the spike counts in different trials, FF_i = σ²_{f_i} / µ_{f_i}. We found that the Fano factor distribution was centered around 1, a signature of Poisson variability. This observation suggests that the sampling dynamics preserve the main features of the distributed code described in Ref. [6]. Unlike the basic model, however, here neural variability arises both because of indeterminacies, due to distributed coding, and because of 'true' stochasticity, owed to sampling. The contribution of the latter, which is characteristic of our version, will depend on the underlying distribution represented: when the distribution is highly peaked, the deterministic component of the MCMC dynamics dominates, while the noise plays an increasingly important role the broader the distribution. At the level of the membrane potential, both sources of variability introduce correlations between neurons with similar tuning (Fig. 2c), as seen experimentally [13]: the first because the reconstruction error acts as a shared latent cause, the second because the stochastic component – which was independent in the y space – is mapped through Γᵀ in a distributed representation (see Eq. 2).
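The Fano factor computation used above (spike-count variance over mean, across repeated trials) can be sketched as follows; the Poisson surrogate data are our illustrative assumption.

```python
import numpy as np

def fano_factors(counts):
    """counts: (n_trials, n_neurons) spike counts for one stimulus.
    FF_i = var(count_i) / mean(count_i), taken across trials."""
    return counts.var(axis=0, ddof=1) / counts.mean(axis=0)

# Poisson spike counts give FF close to 1, the signature discussed in the text.
rng = np.random.default_rng(2)
counts = rng.poisson(lam=5.0, size=(500, 40))   # 500 trials, 40 neurons
ff = fano_factors(counts)
```

In the model, FF values near 1 arise without any explicit Poisson assumption, from the combination of coding degeneracy and sampling noise.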
While the membrane correlations introduced by the first disappear at the level of the spikes [7], the addition of the stochastic component turns out to have important consequences for the spike correlations, both on the fast time scale, measured by the CCG, and for the across-trial spike count covariability, measured by the noise correlations, rsc. Fig. 2e shows the CCG of an example pair of neurons with similar tuning; their activity synchronizes on the time scale of a few milliseconds. In more detail, our CCG measure was normalised by first computing the raw cross-correlogram (averaged across trials) and then subtracting a baseline obtained as the CCG of shuffled data, where the responses of each neuron come from a different trial. The raw cross-correlogram for a time delay τ, CCG(τ), was computed as the Pearson correlation of the neural responses, shifted in time by τ.⁸ At the level of the population, the amount of synchrony (measured as the area under the CCG peak ±10 ms) was strongly modulated by the input (Fig. 2e, middle), with synchrony most prominent in pairs of neurons that aligned with the stimulus (not shown). This is consistent with the idea that synchrony is stimulus-specific [14, 15]. We also measured spike count correlations (the Pearson correlation coefficient of spike counts recorded in different trials for the same stimulus) and found they depend on the selectivity of the neurons, with positive correlations for pairs of neurons with similar tuning (Fig. 2d), as seen in experiments [16]. The overall distribution was broad, with a small positive mean (Fig. 2d), as in recent reports [11, 12]. Taken together, these results suggest that our model qualitatively recapitulates the basic features of cortical neural responses.

b. Uncertainty modulates neural variability and covariability

We have seen that sampling introduces spike correlations not seen when encoding a deterministic dynamical system [7].
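The shuffle-corrected CCG described above can be sketched as follows; the surrogate spike trains sharing a common input are ours, for illustration only.

```python
import numpy as np

def ccg(x, y, max_lag):
    """Raw cross-correlogram: Pearson correlation of neuron y's binned
    spikes shifted by tau relative to neuron x's; x, y: (n_trials, n_bins)."""
    T = x.shape[1]
    out = []
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            a, b = x[:, :T - tau], y[:, tau:]
        else:
            a, b = x[:, -tau:], y[:, :tau]
        out.append(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    return np.array(out)

def corrected_ccg(x, y, max_lag, rng):
    """Subtract the shuffle baseline: y's responses taken from other trials."""
    return ccg(x, y, max_lag) - ccg(x, y[rng.permutation(len(y))], max_lag)

# Two surrogate neurons driven by a shared input show a zero-lag peak.
rng = np.random.default_rng(3)
common = rng.random((200, 500)) < 0.05
x = (common | (rng.random((200, 500)) < 0.05)).astype(float)
y = (common | (rng.random((200, 500)) < 0.05)).astype(float)
c = corrected_ccg(x, y, max_lag=10, rng=rng)    # index 10 is zero lag
```

With 2 ms bins, as in the text, the ±10 ms peak area corresponds to summing the central ten or so lags of this curve.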
Since stochasticity seems to be key for these effects, this suggests uncertainty should significantly modulate pairwise correlations. To confirm this prediction, we varied the covariance structure of the underlying distribution for the same circuit (Fig. 3a; the low-variance condition corresponds to the baseline measures reported above) and repeated all previous measurements. We found that changes in uncertainty leave neuronal tuning invariant (Fig. 3b; not surprisingly, since the mean firing rates reflect the posterior mean). Nonetheless, increasing uncertainty had significant effects on neural variability and co-variability. Fano factors increased for broader distributions (Fig. 3b), congruent with the common observation that stimuli quench response variability in experiments [17]. Second, we found a slower component in the CCG, which increased with uncertainty (Fig. 3e), as in the data [15]. Lastly, the dependence of different spike correlation measures on neural co-tuning increased with uncertainty (Fig. 3c, d). In particular, neurons with similar stimulus preferences increased their synchrony and spike-count correlations with increasing uncertainty, consistent with the stimulus quenching of response co-variability in neural data and with increases in correlations at low contrast [17, 16]. Although we see a significant modulation of (co-)variability with changes in uncertainty, these measures provide limited information about the underlying distribution represented in the network. They can be used to detect changes in the overall spread of the distribution, i.e. the high- vs. low-variance conditions look different at the level of pairwise neural responses. However, they cannot discriminate between distributions with similar spread but very different dependency structure, e.g. between the correlated and anti-correlated conditions (Fig. 3d, f; also true for FF and the slow component of the CCG, not shown). For this, we need to look at the population level.

Footnote:
8. While this is not the most common expression for the CCG, we found it reliably detects synchronous firing across neurons; spikes were discretised in 2 ms bins.

Figure 3: The effects of uncertainty on neural responses. a. Overview of different experimental conditions: posterior mean centred on different stimuli (black dots), with stimulus-independent covariance shown for 4 conditions. b. Left: Tuning curves for an example neuron, for different conditions. Right: firing rate in the low-variance vs. all other conditions, summary across all neurons; dots correspond to different neuron-stimulus pairs. c. Fano factor distribution for the high-variance condition (compare Fig. 2b). d. Area under the CCG peak ±10 ms as a function of the tuning similarity of the neurons, for different uncertainty conditions (colours as in b). e. Complete CCG, averaged across 10 neurons with similar tuning, while sampling from independent bivariate Gaussians with different s.d. (0.1 for 'high variance'). f. Spike count correlations (averaged across stimuli) as a function of the tuning similarity of the neurons, for different uncertainty conditions.

Figure 4: A decoding approach to study the encoding of uncertainty. a. In a low-variability condition we record neural responses for several repetitions of different stimuli (black dots); we estimated the decoding matrix by linear regression and used it to project the activity of the population in individual trials. b. The decoder captures well the underlying dynamics in a trial; ground truth in black. c. The same decoder Γ̂ can be used to visualise the structure of the underlying distribution in other conditions. Note the method is robust to a misalignment in initial conditions (red trace).

c.
Decoding can be used to assess neural representations of uncertainty

Since in a distributed representation single-neuron or pairwise measures tell us little about the dependency structure of the represented random variables, alternative methods need to be devised for investigating the underlying computation performed by the circuit. The representational framework proposed here suggests that linear decoding may be used for this purpose. In particular, we can record neural responses for a variety of stimuli and reverse-engineer the map between spikes and the relevant latent variables (or, if the assumed generative model is linear, as here, the stimuli themselves). We can use the low-variance condition to get a reasonable estimate of the decoding matrix, Γ̂ (since the underlying sampling dynamics are close to the posterior mean), and then use the decoder for visualising the trajectory of the network while varying uncertainty. As an illustration, we use simple linear regression of the stimuli s as a function of the neuron firing rates, scaled by τ_v. Although the recovered decoding weights are imperfect and the initial conditions unknown, the projections of the neural responses in single trials along Γ̂ capture the main features of the underlying sampler, both in the low-variance and in other conditions (Fig. 4b, c).

3 Discussion

How populations of neurons encode probability distributions is a central question for Bayesian approaches to understanding neural computation. While previous work has shown that spiking neural networks could represent a probability over single real-valued variables [18], or the joint probability of many binary random variables [19], the representation of complex multi-dimensional real-valued distributions remains less clear [1, 2]. Here we have proposed a new spatio-temporal code for representing such distributions quickly and flexibly.
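The decoder-estimation step from Section 2c can be sketched as a least-squares fit; the synthetic rates and the known ground-truth Γ below are our illustrative assumptions.

```python
import numpy as np

def fit_decoder(R, S):
    """Least-squares estimate of the decoding matrix Gamma_hat from
    scaled firing rates R (n_trials, N) and stimuli S (n_trials, D),
    solving S ~ R @ Gamma_hat.T."""
    Gamma_hat_T, *_ = np.linalg.lstsq(R, S, rcond=None)
    return Gamma_hat_T.T

rng = np.random.default_rng(4)
D, N, n_trials = 2, 30, 400
Gamma = rng.standard_normal((D, N))                 # ground-truth decoder
R = rng.random((n_trials, N))                       # surrogate scaled rates
S = R @ Gamma.T + 0.01 * rng.standard_normal((n_trials, D))
Gamma_hat = fit_decoder(R, S)                       # close to Gamma
```

Once Γ̂ is estimated in the low-variance condition, projecting single-trial population activity through it visualises the sampling trajectory in any other condition, as in Figure 4.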
Our model relies on network dynamics which approximate the target distribution by several MCMC chains, encoded in the spiking neural activity such that the samples can be linearly decoded from the quasi-instantaneous neural responses. Unlike previous sampling-based codes [19], our model does not require a one-to-one correspondence between random variables and neurons. This separation between computation and representation is critical for the increased speed, as it allows multiple chains to be realistically embedded in the same circuit, while preserving all the computational benefits of sampling. Furthermore, it makes the encoding robust to neural damage, which seems important when representing behaviourally-relevant variables, e.g. in higher cortical areas. These benefits come at the cost of a linear increase in the number of neurons with K, providing a convenient trade-off between speed and neural resources. The speedup due to increases in network size is orthogonal to potential improvements in sampling efficiency achieved by more sophisticated MCMC dynamics, e.g. relying on oscillations [21] or non-normal stochastic dynamics [22], suggesting that distributed sampling could be made even faster by combining the two approaches. The distributed coding scheme has important consequences for interpreting neural responses: since knowledge about the underlying distribution is spread across the population, the activity of single cells does not reflect the underlying computation in any obvious way. In particular, although the network did reproduce various properties of single-neuron and neuron-pair responses seen experimentally, we found that their modulation with uncertainty provides relatively limited information about the underlying probabilistic computation.
Changes in the overall spread (entropy) of the posterior are reflected in changes in variability (Fano factors) and covariability (synchrony on the ms timescale and spike-count correlations across trials) of neural responses across the population, as seen in the data. Since these features arise due to the interaction between sampling and distributed coding, the model further predicts that the degree of correlations between a pair of neurons should depend on their functional similarity, and that the degree of this modulation should be affected by uncertainty. Nonetheless, the distributed representation occludes the structure of the underlying distribution (e.g. correlations between random variables), something which would have been immediately apparent in a one-to-one sampling code. Our results reinforce the idea that population, rather than single-cell, responses are key to understanding cortical computation, and point to linear decoding as a potential analysis tool for investigating probabilistic computation in a distributed code. In particular, we have shown that we can train a linear decoder on spiking data and use it to reveal the underlying sampling dynamics in different conditions. While ours is a simple toy example, where we assume that we can record from all the neurons in the population, the fact that the signal is low-dimensional relative to the number of neurons gives hope that it should be possible to adapt more sophisticated machine learning techniques [23] for decoding the underlying trajectory traced by a neural circuit in realistic settings. If this could be done reliably on data, then the analysis of probabilistic neural computation would no longer be restricted to regions for which we have good ideas about the mathematical form of the underlying distribution, but could be applied to any cortical circuit of interest.11 Thus, our coding scheme opens exciting avenues for multiunit data analysis.
9 This requires knowledge of τv and, in a multi-chain scenario, a grouping of neural responses by chain preference. Proxies for which neurons should be decoded together are discussed in Suppl. Info. Sec. 4.
10 Such distributions arise in many models of probabilistic inference in the brain, e.g. [20].
11 The critical requirement is to know (some of) the variables represented in the circuit, up to a linear map.
References
[1] Fiser, J., Berkes, P., Orbán, G. & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences 14, 119–130 (2010).
[2] Pouget, A., Beck, J.M., Ma, W.J. & Latham, P.E. Probabilistic brains: knowns and unknowns. Nature Neuroscience 16, 1170–1178 (2013).
[3] Pouget, A., Zhang, K., Deneve, S. & Latham, P.E. Statistically efficient estimation using population coding. Neural Computation 10, 373–401 (1998).
[4] Hoyer, P.O. & Hyvärinen, A. Interpreting neural response variability as Monte Carlo sampling of the posterior. Advances in Neural Information Processing Systems, 293–300 (2003).
[5] Neal, R. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo 54, 113–162 (2010).
[6] Boerlin, M. & Denève, S. Spike-based population coding and working memory. PLoS Computational Biology 7, e1001080 (2011).
[7] Boerlin, M., Machens, C.K. & Denève, S. Predictive coding of dynamical variables in balanced spiking networks. PLoS Computational Biology (2013).
[8] Bourdoukan, R., Barrett, D., Machens, C. & Denève, S. Learning optimal spike-based representations. Advances in Neural Information Processing Systems, 2294–2302 (2012).
[9] Hinton, G.E. Training products of experts by minimizing contrastive divergence. Neural Computation 14, 1771–1800 (2002).
[10] Savin, C., Dayan, P. & Lengyel, M. Correlations strike back (again): the case of associative memory retrieval. In Advances in Neural Information Processing Systems 26 (eds. Burges, C., Bottou, L., Welling, M., Ghahramani, Z. & Weinberger, K.) 288–296 (2013).
[11] Renart, A. et al. The asynchronous state in cortical circuits. Science 327, 587–590 (2010).
[12] Ecker, A.S. et al. Decorrelated neuronal firing in cortical microcircuits. Science 327, 584–587 (2010).
[13] Yu, J. & Ferster, D. Functional coupling from simple to complex cells in the visually driven cortical circuit. Journal of Neuroscience 33, 18855–18866 (2013).
[14] Ohiorhenuan, I.E. et al. Sparse coding and high-order correlations in fine-scale cortical networks. Nature 466, 617–621 (2010).
[15] Kohn, A. & Smith, M.A. Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience 25, 3661–3673 (2005).
[16] Smith, M.A. & Kohn, A. Spatial and temporal scales of neuronal correlation in primary visual cortex. Journal of Neuroscience 28, 12591–12603 (2008).
[17] Churchland, M.M. et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience 13, 369–378 (2010).
[18] Zemel, R.S., Dayan, P. & Pouget, A. Probabilistic interpretation of population codes. Neural Computation 10, 403–430 (1998).
[19] Buesing, L., Bill, J., Nessler, B. & Maass, W. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology 7, e1002211 (2011).
[20] Karklin, Y. & Lewicki, M. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation 17, 397–423 (2005).
[21] Savin, C., Dayan, P. & Lengyel, M. Optimal recall from bounded metaplastic synapses: predicting functional adaptations in hippocampal area CA3. PLoS Computational Biology 10, e1003489 (2014).
[22] Hennequin, G., Aitchison, L. & Lengyel, M. Fast sampling for Bayesian inference in neural circuits. arXiv preprint arXiv:1404.3521 (2014).
[23] Macke, J.H. et al. Empirical models of spiking in neural populations. Advances in Neural Information Processing Systems 24, 1350–1358 (2011).
Hamming Ball Auxiliary Sampling for Factorial Hidden Markov Models Michalis K. Titsias Department of Informatics Athens University of Economics and Business mtitsias@aueb.gr Christopher Yau Wellcome Trust Centre for Human Genetics University of Oxford cyau@well.ox.ac.uk Abstract We introduce a novel sampling algorithm for Markov chain Monte Carlo-based Bayesian inference for factorial hidden Markov models. This algorithm is based on an auxiliary variable construction that restricts the model space, allowing iterative exploration in polynomial time. The sampling approach overcomes limitations of common conditional Gibbs samplers, which use asymmetric updates and become easily trapped in local modes. Instead, our method uses symmetric moves that allow joint updating of the latent sequences and improve mixing. We illustrate the application of the approach with simulated data and a real data example. 1 Introduction The hidden Markov model (HMM) [1] is one of the most widely and successfully applied statistical models for the description of discrete time series data. Much of its success lies in the availability of efficient computational algorithms that allow the calculation of key quantities necessary for statistical inference [1, 2]. Importantly, the complexity of these algorithms is linear in the length of the sequence and quadratic in the number of states, which allows HMMs to be used in applications that involve long data sequences and reasonably large state spaces with modern computational hardware. In particular, the HMM has seen considerable use in areas such as bioinformatics and computational biology where non-trivially sized datasets are commonplace [3, 4, 5]. The factorial hidden Markov model (FHMM) [6] is an extension of the HMM where multiple independent hidden chains run in parallel and cooperatively generate the observed data. In a typical setting, we have an observed sequence Y = (y1, . . .
, yN) of length N, which is generated through K binary hidden sequences represented by a K × N binary matrix X = (x1, . . . , xN). The interpretation of the latter binary matrix is that each row encodes the presence or absence of a single feature across the observed sequence, while each column xi represents the different features that are active when generating the observation yi. Different rows of X correspond to independent Markov chains following

p(x_{k,i} \mid x_{k,i-1}) = \begin{cases} 1 - \rho_k, & x_{k,i} = x_{k,i-1}, \\ \rho_k, & x_{k,i} \neq x_{k,i-1}, \end{cases} \quad (1)

where the initial state x_{k,1} is drawn from a Bernoulli distribution with parameter ν_k. All hidden chains are parametrized by 2K parameters, denoted by the vectors ρ = {ρ_k}_{k=1}^{K} and ν = {ν_k}_{k=1}^{K}. Furthermore, each data point yi is generated conditional on xi through a likelihood model p(yi|xi) parametrized by φ. The whole set of model parameters consists of the vector θ = (φ, ρ, ν), which determines the joint probability density over (Y, X), although for notational simplicity we omit reference to it in our expressions. The joint probability density over (Y, X) is written in the form

p(Y, X) = p(Y \mid X)\, p(X) = \left( \prod_{i=1}^{N} p(y_i \mid x_i) \right) \prod_{k=1}^{K} \left( p(x_{k,1}) \prod_{i=2}^{N} p(x_{k,i} \mid x_{k,i-1}) \right), \quad (2)

and it is depicted as a directed graphical model in Figure 1. Figure 1: Graphical model for a factorial HMM with three hidden chains and three consecutive data points. While the HMM has enjoyed widespread application, the FHMM has seen relatively less use. One considerable challenge in the adoption of FHMMs concerns the computation of the posterior distribution p(X|Y) (conditional on observed data and model parameters), which comprises a fully dependent distribution over the space of the 2^{KN} possible configurations of the binary matrix X.
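The prior over X defined by Eq. (1) is straightforward to simulate. The sketch below is our own NumPy illustration (function and variable names are ours): it draws K independent binary chains, each starting from Bernoulli(ν_k) and flipping its state with probability ρ_k at every step.

```python
import numpy as np

def simulate_fhmm_chains(K, N, rho, nu, rng):
    """Simulate K independent binary Markov chains of length N (Eq. 1):
    chain k starts with x_{k,1} ~ Bernoulli(nu_k) and then flips its
    state at each step with probability rho_k."""
    X = np.zeros((K, N), dtype=int)
    X[:, 0] = rng.random(K) < nu
    for i in range(1, N):
        flip = rng.random(K) < rho          # one flip decision per chain
        X[:, i] = np.where(flip, 1 - X[:, i - 1], X[:, i - 1])
    return X

rng = np.random.default_rng(0)
X = simulate_fhmm_chains(5, 200, rho=np.full(5, 0.05), nu=np.full(5, 0.5), rng=rng)
```

With small ρ_k the chains are sticky, which is exactly the strong temporal coupling that makes single-site posterior updates mix slowly.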
Exact Monte Carlo inference can be achieved by applying the standard forward-filtering backward-sampling (FF-BS) algorithm to simulate a sample from p(X|Y) in O(2^{2K} N) time (the independence of the Markov chains can be exploited to reduce this complexity to O(2^{K+1} K N) [6]). Joint updating of X is highly desirable in time series analysis, since alternative strategies involving conditional single-site, single-row or block updates can be notoriously slow due to strong coupling between successive time steps. However, although the use of FF-BS is quite feasible for even very large HMMs, it is only practical for small values of K and N in FHMMs. As a consequence, inference in FHMMs has become somewhat synonymous with approximate methods such as variational inference [6, 7]. The main burden of the FF-BS algorithm is the requirement to sum over all possible configurations of the binary matrix X during the forward filtering phase. The central idea in this work is to avoid this computationally expensive step by applying a restricted sampling procedure with polynomial time complexity that, when applied iteratively, gives exact samples from the true posterior distribution. Whilst regular conditional sampling procedures use locally asymmetric moves that only allow one part of X to be altered at a time, our sampling method employs locally symmetric moves that allow localized joint updating of all the constituent chains, making it less prone to becoming trapped in local modes. The sampling strategy adopts an auxiliary variable construction, similar to slice sampling [8] and the Swendsen-Wang algorithm [9], that allows the automatic selection of the sequence of restricted configuration spaces. The size of these restricted configuration spaces is user-defined, allowing control over the balance between sampling efficiency and computational complexity. Our sampler generalizes the standard FF-BS algorithm, which is a special case.
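For reference, the FF-BS recursion the text builds on can be sketched generically for an HMM with S states; for the FHMM joint chain S = 2^K, which is exactly the exponential cost discussed above. This is our own illustrative NumPy sketch, not the authors' code, and it assumes a time-homogeneous transition matrix.

```python
import numpy as np

def ffbs(log_lik, A, pi, rng):
    """Forward-filtering backward-sampling for an HMM with S states.

    log_lik: (N, S) log-observation densities log p(y_i | state);
    A: (S, S) transition matrix A[s, s']; pi: (S,) initial distribution.
    Returns one exact joint sample of the hidden path."""
    N, S = log_lik.shape
    alpha = np.zeros((N, S))            # filtered distributions p(x_i | y_1..i)
    a = pi * np.exp(log_lik[0] - log_lik[0].max())
    alpha[0] = a / a.sum()
    for i in range(1, N):
        a = (alpha[i - 1] @ A) * np.exp(log_lik[i] - log_lik[i].max())
        alpha[i] = a / a.sum()
    x = np.zeros(N, dtype=int)          # backward sampling pass
    x[-1] = rng.choice(S, p=alpha[-1])
    for i in range(N - 2, -1, -1):
        w = alpha[i] * A[:, x[i + 1]]   # p(x_i | x_{i+1}, y_1..i)
        x[i] = rng.choice(S, p=w / w.sum())
    return x

# Tiny 2-state example: a sticky chain where the first half of the
# observations favour state 0 and the second half favour state 1.
rng = np.random.default_rng(1)
A = np.array([[0.95, 0.05], [0.05, 0.95]])
log_lik = np.log(np.array([[0.9, 0.1]] * 10 + [[0.1, 0.9]] * 10))
path = ffbs(log_lik, A, np.array([0.5, 0.5]), rng)
```

Both the O(S^2) cost per step of the forward pass and the O(S) backward draws make S = 2^K prohibitive for large K, which motivates the restricted (Hamming ball) state spaces introduced next.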
2 Standard Monte Carlo inference for the FHMM Before discussing the details of our new sampler, we first describe the limitations of standard conditional sampling procedures for the FHMM. The most sophisticated conditional sampling schemes are based on alternating between sampling one chain (or a small block of chains) at a time using the FF-BS recursion. However, as discussed in the following and illustrated experimentally in Section 4, these algorithms can easily become trapped in local modes, leading to inefficient exploration of the posterior distribution. One standard Gibbs sampling algorithm for the FHMM is based on simulating from the posterior conditional distribution over a single row of X given the remaining rows. Each such step can be carried out in O(4N) time using the FF-BS recursion, while a full sweep over all K rows requires O(4KN) time. A straightforward generalization of the above is to apply block Gibbs sampling where at each step a small subset of chains is jointly sampled. For instance, when we consider pairs of chains, the time complexity for sampling a pair is O(16N), while a full sweep over all possible pairs requires O(16 · K(K-1)/2 · N) time.
(a) X^{(t)} = [ ..10.. ; ..00.. ; ..10.. ] ⇏ X^{(t+1)} = [ ..01.. ; ..01.. ; ..00.. ]
(b) X^{(t)} = [ ..10.. ; ..00.. ; ..10.. ] ⇒ U = [ ..10.. ; ..01.. ; ..00.. ] ⇒ X^{(t+1)} = [ ..01.. ; ..01.. ; ..00.. ]
Figure 2: Panel (a) shows an example where, from a current state X^{(t)}, it is impossible to jump to a new state X^{(t+1)} in a single step using block Gibbs sampling on pairs of rows. In contrast, Hamming ball sampling applied with the smallest valid radius, i.e. m = 1, can accomplish such a move through the intermediate simulation of U, as illustrated in (b). Specifically, simulating U from the uniform p(U|X) results in a state having one bit flipped per column compared to X^{(t)}.
Then sampling X^{(t+1)} given U flips two further bits, so in total X^{(t+1)} differs from X^{(t)} in four bits that lie in three different rows and two columns. While these schemes can propose large changes to X and be efficiently implemented using forward-backward recursions, they can still easily get trapped in local modes of the posterior distribution. For instance, suppose we sample pairs of rows and we encounter a situation where, in order to escape from a local mode, four bits in two different columns (two bits from each column) must be jointly flipped. Given that these four bits belong to more than two rows, the above Gibbs sampler will fail to move out of the local mode no matter which row-pair, out of the K(K-1)/2 possible ones, is jointly simulated. An illustrative example of this phenomenon is given in Figure 2(a). We could describe the conditional sampling updates of block Gibbs samplers as being locally asymmetric, in the sense that, in each step, one part of X is restricted to remain unchanged while the other part is free to change. As the above example indicates, these locally asymmetric updates can cause the chain to become trapped in local modes, which can result in slow mixing. This can be particularly problematic in FHMMs, where the observations are jointly dependent on the underlying hidden states, which induces a coupling between the rows of X. Of course, locality in any possible MCMC scheme for FHMMs seems unavoidable; however, such locality does not need to be asymmetric. In the next section, we develop a symmetrically local sampling approach in which each step gives any element of X a chance to be flipped in any single update. 3 Hamming ball auxiliary sampling Here we develop the theory of the Hamming ball sampler. Section 3.1 presents the main idea, while Section 3.2 discusses several extensions. 3.1 The basic Hamming ball algorithm Recall the K-dimensional binary vector xi (the i-th column of X) that defines the hidden state at the i-th location.
We consider the set of all K-dimensional binary vectors ui that lie within a certain Hamming distance from xi, so that each ui satisfies

h(u_i, x_i) \le m, \quad (3)

where m ≤ K. Here, h(u_i, x_i) = \sum_{k=1}^{K} I(u_{k,i} \neq x_{k,i}) is the Hamming distance between two binary vectors and I(·) denotes the indicator function. Notice that the Hamming distance is simply the number of elements in which the two binary vectors disagree. We refer to the set of all ui satisfying (3) as the i-th location Hamming ball of radius m. For instance, when m = 1 the above set includes all ui vectors restricted to be the same as xi but with at most one bit flipped, when m = 2 these vectors can have at most two bits flipped, and so on. For a given m, the cardinality of the i-th location Hamming ball is

M = \sum_{j=0}^{m} \binom{K}{j}. \quad (4)

For m = 1 this number is equal to K + 1, for m = 2 it is equal to K(K-1)/2 + K + 1, and so on. Clearly, when m = K there is no restriction on the values of ui and the above number takes its maximum value, i.e. M = 2^K. Subsequently, given a certain X we define the full-path Hamming ball, or simply Hamming ball, as the set

B_m(X) = \{U : h(u_i, x_i) \le m,\ i = 1, \dots, N\}, \quad (5)

where U is a K × N binary matrix such that U = (u1, . . . , uN). This Hamming ball, centered at X, is simply the intersection of all i-th location Hamming balls of radius m. Clearly, the Hamming ball set is such that U ∈ B_m(X) iff X ∈ B_m(U), or more concisely I(U ∈ B_m(X)) = I(X ∈ B_m(U)). Furthermore, the indicator function I(U ∈ B_m(X)) factorizes as follows:

I(U \in B_m(X)) = \prod_{i=1}^{N} I(h(u_i, x_i) \le m). \quad (6)

We now wish to consider U as an auxiliary variable generated given X uniformly inside B_m(X), i.e. we define the conditional distribution

p(U \mid X) = \frac{1}{Z}\, I(U \in B_m(X)), \quad (7)

where crucially the normalizing constant Z simply reflects the volume of the ball and is independent of X. We can augment the initial joint model density from Eq. (2) with the auxiliary variables U and express the augmented model

p(Y, X, U) = p(Y \mid X)\, p(X)\, p(U \mid X).
(8)
Based on this, we can apply Gibbs sampling in the augmented space and iteratively sample U from its posterior conditional, which is just p(U|X), and then sample X given the remaining variables. Sampling from p(U|X) is trivial, as it only requires independently drawing each ui, with i = 1, . . . , N, from the uniform distribution proportional to I(h(u_i, x_i) ≤ m), i.e. randomly selecting a ui within Hamming distance at most m from xi. Then, sampling X is carried out by simulating from the following posterior conditional distribution:

p(X \mid Y, U) \propto p(Y \mid X)\, p(X)\, p(U \mid X) \propto \left( \prod_{i=1}^{N} p(y_i \mid x_i)\, I(h(x_i, u_i) \le m) \right) p(X), \quad (9)

where we used Eq. (6). Exact sampling from this distribution can be done using the FF-BS algorithm in O(M^2 N) time, where M is the size of each location-specific Hamming ball given in (4). The intuition behind the above algorithm is the following. Sampling p(U|X) given the current state X can be thought of as an exploration step where X is randomly perturbed to produce an auxiliary matrix U. We can imagine this as moving the Hamming ball that is initially centered at X to a new location centered at U. Subsequently, we take a slice of the model by considering only the binary matrices that exist inside this new Hamming ball, centered at U, and draw a new state for X by performing exact sampling in this sliced part of the model. Exact sampling is possible using the FF-BS recursion and has a user-controllable time complexity that depends on the volume of the Hamming ball. An illustrative example of how the algorithm operates is given in Figure 2(b). For the above sampling scheme to be ergodic (under standard conditions), the auxiliary variable U must be allowed to move away from the current X^{(t)} (the value of X at the t-th iteration), which implies that the radius m must be strictly larger than zero. Furthermore, the maximum distance a new X^{(t+1)} can travel away from the current X^{(t)} in a single iteration is 2mN bits (assuming m ≤ K/2).
This is because resampling U given the current X^{(t)} can select a U that differs in at most mN bits from X^{(t)}, while subsequently sampling X^{(t+1)} given U adds at most another mN bits. 3.2 Extensions So far we have defined Hamming ball sampling assuming binary factor chains in the FHMM. It is possible to generalize the whole approach to deal with factor chains that take values in general finite discrete state spaces. Suppose that each hidden variable takes P values, so that the matrix X ∈ {1, . . . , P}^{K×N}. Exactly as in the binary case, the Hamming distance between the auxiliary vector ui ∈ {1, . . . , P}^K and the corresponding i-th column xi of X is the number of elements in which these two vectors disagree. Based on this we can define the i-th location Hamming ball of radius m as the set of all ui satisfying Eq. (3), which has cardinality

M = \sum_{j=0}^{m} (P-1)^j \binom{K}{j}. \quad (10)

This, for m = 1, is equal to (P-1)K + 1; for m = 2 it is equal to (P-1)^2 K(K-1)/2 + (P-1)K + 1, and so forth. Notice that for the binary case, where P = 2, all these expressions reduce to the ones from Section 3.1. Then, the sampling scheme from the previous section can be applied unchanged: in one step we sample U given the current X, and in the second step we sample X given U using the FF-BS recursion. Another direction for extending the method is to vary the structure of the uniform distribution p(U|X), which essentially determines the exploration area around the current value of X. We can even add randomness to the structure of this distribution by further expanding the joint density in Eq. (8) with random variables that determine this structure. For instance, we can consider a distribution p(m) over the radius m that covers a range of possible values and then iteratively sample (U, m) from p(U|X, m)p(m) and X from p(X|Y, U, m) ∝ p(Y|X)p(X)p(U|X, m). This scheme remains valid since it is essentially Gibbs sampling in an augmented probability model where we have added the auxiliary variables (U, m).
In a practical implementation, such a scheme would place high prior probability on small values of m, where sampling iterations are fast to compute and enable efficient exploration of local structure, but, with non-zero probability on larger values of m, the sampler could still periodically consider larger portions of the model space, allowing more significant changes to the configuration of X. More generally, we can determine the structure of p(U|X) through a set of radius constraints m = (m_1, . . . , m_Q) and base our sampling on the augmented density

p(Y, X, U, m) = p(Y \mid X)\, p(X)\, p(U \mid X, m)\, p(m). \quad (11)

For instance, we can choose m = (m_1, . . . , m_N) and consider m_i as determining the radius of the i-th location Hamming ball (for the column xi), so that the corresponding uniform distribution over ui becomes p(u_i | x_i, m_i) ∝ I(h(u_i, x_i) ≤ m_i). This allows for asymmetric local moves, where in some parts of the hidden sequence (where the m_i are large) we allow for greater exploration than in others, where the exploration is more constrained. This could lead to more efficient variations of the Hamming ball sampler, where the vector m is automatically tuned during sampling to focus computational effort on regions of the sequence with the most uncertainty in the underlying latent structure of X. In a different direction, we could introduce constraints m = (m_1, . . . , m_K) associated with the rows of X instead of the columns. This recovers regular Gibbs sampling as a special case. In particular, if p(m) is chosen so that in a random draw we pick a single k such that m_k = N and the rest m_{k'} = 0, then we essentially freeze all rows of X apart from the k-th row,1 and the subsequent step of sampling X reduces to exactly sampling the k-th row of X using the FF-BS recursion. Under this perspective, block Gibbs sampling for FHMMs can be seen as a special case of Hamming ball sampling.
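The two basic ingredients above, the ball cardinality of Eqs. (4)/(10) and the uniform draw of U from p(U|X) in Eq. (7), can be sketched as follows. This is our own NumPy illustration (names are ours): a uniform draw within a ball is obtained by first choosing the number of flipped bits j with probability proportional to the number of vectors at distance j, then choosing the flip positions uniformly.

```python
import numpy as np
from math import comb

def hamming_ball_size(K, m, P=2):
    """Cardinality of the i-th location Hamming ball, Eq. (10);
    P = 2 recovers the binary case of Eq. (4)."""
    return sum((P - 1) ** j * comb(K, j) for j in range(m + 1))

def sample_column_in_ball(x, m, rng):
    """Draw u uniformly from {u : h(u, x) <= m} for one binary column x."""
    K = x.shape[0]
    w = np.array([comb(K, j) for j in range(m + 1)], dtype=float)
    j = int(rng.choice(m + 1, p=w / w.sum()))   # number of bits to flip
    u = x.copy()
    if j > 0:
        flips = rng.choice(K, size=j, replace=False)
        u[flips] = 1 - u[flips]
    return u

def sample_auxiliary_U(X, m, rng):
    """First Gibbs step of the Hamming ball sampler: U ~ p(U|X), uniform
    over B_m(X), sampled independently column by column (Eq. 7)."""
    return np.column_stack([sample_column_in_ball(X[:, i], m, rng)
                            for i in range(X.shape[1])])

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5, 50))
U = sample_auxiliary_U(X, m=2, rng=rng)    # every column within distance 2 of X
```

The second Gibbs step, sampling X given U, would then run FF-BS restricted to the M = `hamming_ball_size(K, m)` states allowed at each location.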
Finally, there may be utility in developing other proposals for sampling U based on distributions other than the uniform approach used here. For example, a local exponentially weighted proposal of the form p(U|X) ∝ \prod_{i=1}^{N} \exp(-\lambda h(u_i, x_i))\, I(h(u_i, x_i) \le m) would keep the centre of the proposed Hamming ball closer to its current location, enabling more efficient exploration of local configurations. However, in developing alternative proposals, it is crucial that the normalizing constant of p(U|X) can be computed efficiently so that the overall time complexity remains O(M^2 N). 4 Experiments To demonstrate Hamming ball (HB) sampling we consider an additive FHMM like the one used in [6] and popularized recently for energy disaggregation applications [7, 10, 11]. In this model, the k-th factor chain interacts with the data through an associated mean vector w_k ∈ R^D, so that each observed output yi is taken to be a noisy version of the sum of all factor vectors activated at time i:

y_i = w_0 + \sum_{k=1}^{K} w_k x_{k,i} + \eta_i, \quad (12)

where w_0 is an extra bias term while η_i is white noise that typically follows a Gaussian: η_i ∼ N(0, σ²I). Using this model, we demonstrate the proposed method on an artificial dataset in Section 4.1 and on a real energy disaggregation dataset [11] in Section 4.2. In all examples, we compare HB with block Gibbs (BG) sampling. 4.1 Simulated dataset Here, we wish to investigate the ability of the HB and BG sampling schemes to efficiently escape from local modes of the posterior distribution. We consider an artificial data sequence of length N = 200 generated as follows. We simulated K = 5 factor chains (with ν_k = 0.5, ρ_k = 0.05, k = 1, . . . , 5) which subsequently generated observations in a 25-dimensional space according to the additive FHMM from Eq. (12), assuming Gaussian noise with variance σ² = 0.05.
1 In particular, for the rows k' ≠ k the corresponding uniform distributions over the u_{k',i} collapse to point delta masses centred at the previous states x_{k',i}.
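The generative step of Eq. (12), at the stated simulation sizes (K = 5, N = 200, D = 25, σ² = 0.05), can be sketched as follows. This is our own illustration: the factor vectors W and the hidden chains X here are random placeholders, not the masked factor vectors used in the paper.

```python
import numpy as np

def additive_fhmm_generate(X, W, w0, sigma, rng):
    """Draw observations from the additive FHMM of Eq. (12):
    y_i = w0 + sum_k w_k * x_{k,i} + eta_i,  with eta_i ~ N(0, sigma^2 I)."""
    N = X.shape[1]
    D = w0.shape[0]
    return w0 + X.T @ W + sigma * rng.normal(size=(N, D))

# Sizes matching the simulated example: K = 5 chains, N = 200 steps, D = 25.
rng = np.random.default_rng(0)
K, N, D = 5, 200, 25
W = rng.uniform(0.8, 1.0, size=(K, D))   # placeholder factor vectors
X = rng.integers(0, 2, size=(K, N))      # placeholder hidden chains
Y = additive_fhmm_generate(X, W, np.zeros(D), sigma=np.sqrt(0.05), rng=rng)
```

Each row of `Y` is one D-dimensional observation; the active factors at time i are exactly the rows k with `X[k, i] == 1`.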
The associated factor vectors were selected to be w_k = w̄_k · Mask_k, where w̄_k = 0.8 + 0.05(k-1), k = 1, . . . , 5, and Mask_k denotes a 25-dimensional binary vector, or mask. All binary masks are displayed as 5 × 5 binary images in Figure 1(a) of the supplementary file, together with a few examples of generated data points. Finally, the bias term w_0 was set to zero. We assume that the ground-truth model parameters θ = ({ν_k, ρ_k, w_k}_{k=1}^{K}, w_0, σ²) that generated the data are known, and our objective is to do posterior inference over the latent factors X ∈ {0, 1}^{5×200}, i.e. to draw samples from the conditional posterior distribution p(X|Y, θ). Since the data have been produced with a small noise variance, this exact posterior is highly peaked, with almost all the probability mass concentrated on the single configuration X_true that generated the data. So the question is whether the BG and HB schemes will be able to discover the "unknown" X_true from a random initialization. We tested three block Gibbs sampling schemes, BG1, BG2 and BG3, that jointly sample blocks of rows of size one, two or three respectively. For each algorithm a full iteration is chosen to be a complete pass over all possible combinations of rows, so that the time complexity per iteration is O(20N) for BG1, O(160N) for BG2 and O(640N) for BG3. Regarding HB sampling, we considered three schemes, HB1, HB2 and HB3, with radius m = 1, 2 and 3 respectively. The time complexities for these HB algorithms were O(36N), O(256N) and O(676N). Notice that an exact sample from the posterior distribution can be drawn in O(1024N) time. We ran all algorithms from the same random initialization X^{(0)}, with each bit chosen from the uniform distribution. Figure 3(a) shows the evolution of the number of misclassified bits in X, i.e. the number of bits in which the state X^{(t)} disagrees with the ground truth X_true. Clearly, HB2 and HB3 quickly discover the optimal solution, with HB3 being slightly faster.
HB1 is unable to discover the ground truth, but it outperforms BG1 and BG2. All the block Gibbs sampling schemes, including the most expensive one, BG3, failed to reach X_true.
[Figure 3: three panels, (a)-(c).] Figure 3: Panel (a) shows the sampling evolution of the Hamming distance between X_true and X^{(t)} for the three block Gibbs samplers (dashed lines) and the HB schemes (solid lines). Panel (b) shows the evolution of the MSE during the MCMC training phase for the REDD dataset; the two Gibbs samplers are shown with dashed lines and the two HB algorithms with solid lines. Similarly to (b), panel (c) displays the evolution of the MSEs for the prediction phase in the REDD example, where we only simulate the factors X.
4.2 Energy disaggregation Here, we consider a real-world example from the field of energy disaggregation, where the objective is to determine the component devices from an aggregated electricity signal. This technology is useful because a decomposition of the total electricity usage in a household or building into per-device components can be very informative to consumers and increase awareness of energy consumption, which can subsequently lead to energy savings. For full details regarding the energy disaggregation application see [7, 10, 11]. Next, we consider a publicly available data set,2 called the Reference Energy Disaggregation Data Set (REDD) [11], to test the HB and BG sampling algorithms. The REDD data set contains several types of home electricity data for many different houses, recorded over several weeks.
Next, we will consider the mains signal power of house_1 for seven days, which is a temporal signal of length 604,800 since power was recorded every second. We further downsampled this signal to every 9 seconds to obtain a sequence of length 67,200, to which we applied the FHMM described below. Energy disaggregation can be naturally tackled by an additive FHMM framework, as realized in [10, 11], where the observed total electricity power y_i at time instant i is the sum of the individual powers of all devices that are "on" at that time. Therefore, the observation model from Eq. (12) can be used to model this situation, with the constraint that each device contribution w_k (which is a scalar) is restricted to be non-negative. We assume an FHMM with K = 10 factors and follow a Bayesian framework where each w_k is parametrized by the exponential transformation, i.e. w_k = e^{\tilde{w}_k}, and a vague zero-mean Gaussian prior is assigned to \tilde{w}_k. To learn these factors we apply unsupervised learning using the first day of recorded data as training data. This involves applying a Metropolis-within-Gibbs type of MCMC algorithm that iterates between the following three steps: i) sampling X; ii) sampling each \tilde{w}_k individually using its own Gaussian proposal distribution and accepting or rejecting based on the M-H step; and iii) sampling the noise variance σ² based on its conjugate Gamma posterior distribution. Notice that step ii) involves adapting the variance of the Gaussian proposal to achieve an acceptance rate between 20 and 40 percent, following standard ideas from adaptive MCMC. For the first step we consider one of the following four algorithms: BG1, BG2, HB1 and HB2, defined in the previous section.
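Step iii) above is a standard conjugate update: with a Gamma prior on the precision 1/σ² and Gaussian residuals, the conditional posterior of the precision is again Gamma. The sketch below is illustrative; the paper does not state the prior hyperparameters or parameterisation, so `a0`, `b0` and the shape/rate convention are our assumptions.

```python
import numpy as np

def sample_noise_variance(residuals, a0, b0, rng):
    """Conjugate draw for step iii): with a Gamma(a0, b0) (shape/rate)
    prior on the precision 1/sigma^2, the conditional posterior of the
    precision is Gamma(a0 + n/2, b0 + 0.5 * sum r_i^2)."""
    r = np.ravel(residuals)
    shape = a0 + 0.5 * r.size
    rate = b0 + 0.5 * float(r @ r)
    precision = rng.gamma(shape, 1.0 / rate)   # numpy uses shape/scale
    return 1.0 / precision

# With many residuals of true s.d. 0.2, the draw concentrates near 0.04.
rng = np.random.default_rng(0)
res = 0.2 * rng.normal(size=10_000)
sigma2 = sample_noise_variance(res, a0=1.0, b0=1.0, rng=rng)
```

In the full sampler the residuals would be y_i minus the model mean of Eq. (12) under the current X and W.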
Once the FHMM has been trained, we would like to make predictions and infer the posterior distribution over the hidden factors for a test sequence, consisting of the remaining six days, according to

p(X∗|Y∗, Y) = ∫ p(X∗|Y∗, W, σ²) p(W, σ²|Y) dW dσ² ≈ (1/T) Σ_{t=1}^{T} p(X∗|Y∗, W^(t), (σ²)^(t)),   (13)

where Y∗ denotes the test observations and X∗ the corresponding hidden sequence we wish to infer3. This computation requires being able to simulate from p(X∗|Y∗, W, σ²) for a given fixed setting of the parameters (W, σ²). This prediction step will tell us which factors are “on” at each time. Such factors could directly correspond to devices in the household, such as Electronics, Lighting, Refrigerator, etc. However, since our learning approach is purely unsupervised, we will not attempt to establish correspondences between the inferred factors and the household appliances; instead, we will focus on comparing the ability of the sampling algorithms to escape from local modes of the posterior distribution. To quantify this ability we consider the mean squared error (MSE) between the model mean predictions and the actual data. Clearly, the MSE for the test data measures how well the model predicts the unseen electricity powers, while the MSE in the training phase indicates how well the chain mixes and reaches areas with high probability mass (where the training data are reconstructed with small error). Figure 3(b) shows the evolution of the MSE through the sampling iterations for the four MCMC algorithms used for training. Figure 3(c) shows the corresponding curves for the prediction phase, i.e. when sampling from p(X∗|Y∗, W, σ²) given a representative sample from the posterior p(W, σ²|Y). All four MSE curves in Figure 3(c) are produced using the same setting of (W, σ²), so that any difference observed between the algorithms depends solely on their ability to sample from p(X∗|Y∗, W, σ²).
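The Monte Carlo approximation in Eq. (13) can be turned into a test-set MSE along the following lines. This is a sketch under stated assumptions: the function name and the `sample_Xstar` kernel (standing in for a block Gibbs or Hamming ball sweep over X∗ with parameters held fixed) are hypothetical, not from the paper.

```python
import numpy as np

def predictive_mse(Y_test, posterior_samples, sample_Xstar, n_inner=50):
    """Approximate Eq. (13): for each retained parameter sample (W_t, sigma2_t),
    draw hidden sequences X* from p(X*|Y*, W_t, sigma2_t) and average the
    implied mean predictions; then score them against the held-out signal."""
    preds = []
    for W_t, s2_t in posterior_samples:
        for _ in range(n_inner):
            Xs = sample_Xstar(Y_test, W_t, s2_t)   # draw X* given fixed parameters
            preds.append(W_t @ Xs)                 # model mean: sum of active devices
    mean_pred = np.mean(preds, axis=0)
    return np.mean((Y_test - mean_pred) ** 2)      # MSE against held-out data
```

Fixing a single (W, σ²) sample and swapping only `sample_Xstar` is exactly the controlled comparison behind Figure 3(c).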
Finally, Figure 4 shows illustrative plots of how we fit the data for all seven days (first row) and how we predict the test data on the second day (second row), together with the corresponding inferred factors for the six most dominant hidden states (those having the largest inferred wk values). The plots in Figure 4 were produced from the HB2 output.

Some conclusions we can draw are the following. Firstly, Figure 3(c) clearly indicates that in the prediction phase, where the factor weights wk are fixed and given, both HB algorithms are much better than the block Gibbs samplers at escaping from local modes and discovering hidden state configurations that explain the data more efficiently. Moreover, HB2 is clearly better than HB1, as expected, since it considers larger global moves. When we jointly sample the weights wk and their interacting latent binary states (as done in the training MCMC phase), then, as Figure 3(b) shows, the block Gibbs samplers can move faster towards fitting the data and exploring local modes, while the HB schemes are slower in that respect. Nevertheless, the HB2 algorithm eventually reaches an area with a smaller MSE than the block Gibbs samplers.

Figure 4: The first row shows the data for all seven days together with the model predictions (the blue solid line corresponds to the training part and the red line to the test part). The second row zooms in on the predictions for the second day, while the third row shows the corresponding activations of the six most dominant factors (displayed with different colors). All these results are based on the HB2 output.

2 Available from http://redd.csail.mit.edu/.
3 Notice that we have also assumed that the training and test sequences are conditionally independent given the model parameters (W, σ²).

5 Discussion

Exact sampling using FF-BS over the entire model space of the FHMM is intractable.
Alternative solutions based on conditional updating approaches that use locally asymmetric moves lead to poor mixing, due to the sampler becoming trapped in local modes. We have shown that the Hamming ball sampler gives a relative improvement over conditional approaches through the use of locally symmetric moves that permit joint updating of the hidden chains and improve mixing. Whilst we have presented the Hamming ball sampler applied to the factorial hidden Markov model, it is applicable to any statistical model where the observed data vector yi depends only on the i-th column of a binary latent variable matrix X, so that the joint density can be factored as p(X, Y) ∝ p(X) ∏_{i=1}^{N} p(yi|xi). Examples include the spike-and-slab variable selection models in Bayesian linear regression [12] and multiple membership models, including Bayesian nonparametric models that utilize the Indian buffet process [13, 14]. While, in standard versions of these models, the columns of X are independent and posterior inference is trivially parallelizable, the utility of the Hamming ball sampler arises when K is large and sampling individual columns of X is itself computationally very demanding. Other suitable models might involve more complex dependence structures, such as coupling between Markov chains and undirected dependencies.

Acknowledgments

We thank the reviewers for insightful comments. MKT gratefully acknowledges support from “Research Funding at AUEB for Excellence and Extroversion, Action 1: 2012-2014”. CY acknowledges the support of a UK Medical Research Council New Investigator Research Grant (Ref No. MR/L001411/1). CY is also affiliated with the Department of Statistics, University of Oxford.

References

[1] Lawrence Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[2] Steven L. Scott. Bayesian methods for hidden Markov models.
Journal of the American Statistical Association, 97(457), 2002.
[3] Na Li and Matthew Stephens. Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics, 165(4):2213–2233, 2003.
[4] Jonathan Marchini and Bryan Howie. Genotype imputation for genome-wide association studies. Nature Reviews Genetics, 11(7):499–511, 2010.
[5] Christopher Yau. OncoSNP-SEQ: a statistical approach for the identification of somatic copy number alterations from next-generation sequencing of cancer genomes. Bioinformatics, 29(19):2482–2484, 2013.
[6] Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. Machine Learning, 29(2-3):245–273, November 1997.
[7] J. Zico Kolter and Tommi Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In International Conference on Artificial Intelligence and Statistics, pages 1472–1482, 2012.
[8] Radford M. Neal. Slice sampling. Annals of Statistics, pages 705–741, 2003.
[9] Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 58(2):86–88, 1987.
[10] Hyungsul Kim, Manish Marwah, Martin F. Arlitt, Geoff Lyon, and Jiawei Han. Unsupervised disaggregation of low frequency power measurements. In SDM, pages 747–758. SIAM / Omnipress, 2011.
[11] J. Zico Kolter and Matthew J. Johnson. REDD: a public data set for energy disaggregation research. In SustKDD Workshop on Data Mining Applications in Sustainability, 2011.
[12] Toby J. Mitchell and John J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[13] Thomas L. Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, volume 18, pages 475–482, 2005.
[14] J. Van Gael, Y. W. Teh, and Z. Ghahramani. The infinite factorial hidden Markov model.
In Advances in Neural Information Processing Systems, volume 21, 2009.